OFC 2025 Shows Off Solutions to Meet AI Challenges

Major network AI advancements for increasing data center scale and reducing energy consumption were on display at OFC 2025 as cutting-edge test solutions proved ready to validate network infrastructures and power high-speed performance.
This year, the Optical Fiber Communications Conference (OFC) celebrated 50 years of innovation in optical networking and communications as more than 17,000 attendees from 83 countries descended on San Francisco to check out the latest offerings from 685 exhibitors.
This year’s event marked an evolution from learning about new technology options for meeting the challenges of AI infrastructure deployments to a focus on real AI solutions and the tools that validate them. Participants expanded their focus beyond transceiver and optical innovations to take a broader, system-level view, sharing insights from early AI use cases, test cases, and open Ethernet standards.
Three important developments stood out:
- Arrival of the first wave of 1.6 terabit interconnect solutions to scale AI data center capacity
- Realization that reducing power requirements is a bigger issue than just technology
- Standards and specifications to expand AI data center back-end networks
Here Comes 1.6 Terabit Ethernet
Market demand from AI/ML workloads, video streaming, remote work, and IoT appliances shows no sign of slowing as increased loads are placed on the network. This is putting pressure on hyperscalers to expand quickly from 800G to 1.6T.
Across our customer conversations, we are hearing that hyperscalers need 1.6T to satisfy the traffic demand on their networks. As traffic growth continues unabated, service providers and large enterprises are expected to follow soon.
In a major leap in network capacity, the first wave of 1.6T interconnect solutions was unveiled to support exponential growth in both traditional and AI-driven traffic environments. While companies spent a lot of time talking up 1.6T at OFC 2024, this year’s show was all about action, with a dozen 1.6T optical solutions and demos already on display.
Power Efficiency Takes Priority
As surging traffic drives rapid expansion of compute, network, and storage infrastructure, power consumption has become a top concern for hyperscale data centers—shifting the focus from cost per bit to power per bit as the new design imperative.
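The shift from cost per bit to power per bit is simple arithmetic: divide a module’s power draw by its line rate. A minimal sketch, using hypothetical round-number wattages (not measured figures for any real product), shows why a higher-rate module can still win on efficiency:

```python
# Illustrative power-per-bit arithmetic. The 16 W and 25 W figures
# below are hypothetical round numbers, not vendor specifications.

def power_per_bit_pj(module_watts: float, rate_gbps: float) -> float:
    """Energy per bit in picojoules: watts / (bits per second) * 1e12."""
    return module_watts / (rate_gbps * 1e9) * 1e12

# A hypothetical 800G module at 16 W vs. a hypothetical 1.6T module at 25 W:
print(power_per_bit_pj(16, 800))   # 20.0 pJ/bit
print(power_per_bit_pj(25, 1600))  # 15.625 pJ/bit
```

Even though the 1.6T module draws more total power in this sketch, it moves each bit more cheaply, which is the metric hyperscale designs now optimize.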
In another sign of positive industry progress since last year, Linear-drive Pluggable Optics (LPOs) are moving quickly from concept towards widespread adoption, with several LPO products shown at OFC.
LPOs are an efficient way to lower optical module power consumption while enabling more high-speed links. An LPO module omits the power-hungry DSP retimer, taking electrical signals from the host directly and using them to modulate the lasers. Because the signal path stays analog, LPO is a relatively low-power transceiver approach that preserves signal integrity and adds little latency.
On the cooling side, innovations ranged from liquid cooling to cold plate technology, both pushing the limits of hardware power efficiency.
Expanding AI Data Center Back-End Networks
Hyperscalers are demanding open, interoperable solutions to scale up and scale out AI/ML back-end networks. The sense from OFC was that both scale-up and scale-out architectures are expected to draw large investments.
The need to scale up and speed GPU-to-GPU communication has been boosted by Ultra Accelerator Link (UALink), an open, memory-centric industry standard protocol. UALink expands the number of accelerators supported in a pod to roughly 1,000 and optimizes performance of compute-intensive workloads while leveraging Ethernet infrastructure.
Meanwhile, the Ultra Ethernet Consortium’s (UEC) open Ethernet standards will scale out data center GPUs by expanding connections to additional pods. The UEC is close to announcing a new transport standard that aims to reduce vendor lock-in and accelerate innovation.
AI Testing Solutions
The ecosystem needs to seamlessly transition to 800G and 1.6T Ethernet with trusted, high-performance solutions.
Success hinges on those high-speed technology solutions being quickly validated for reliability, scalability, and interoperability. As AI-driven workloads continue to fuel unprecedented demand for high-bandwidth, low-latency networking, these workloads need to be tested via emulated real-world traffic patterns to avoid delays and costs associated with purchasing GPUs in the lab.
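One way to emulate AI traffic without GPUs in the lab is to synthesize the flow pattern of a collective operation such as a ring all-reduce and replay it from a tester. The sketch below is a hypothetical illustration of that idea (the node count, message size, and flow format are assumptions, not any specific product’s API):

```python
# Hypothetical sketch: generate the flow schedule of a ring all-reduce
# so a traffic generator can emulate AI collective traffic without GPUs.

def ring_allreduce_flows(n_nodes: int, msg_bytes: int):
    """Return (step, src, dst, bytes) tuples for a ring all-reduce.

    A ring all-reduce runs 2*(n-1) steps (reduce-scatter then
    all-gather); in each step every node sends one 1/n-sized chunk
    to its ring neighbor.
    """
    chunk = msg_bytes // n_nodes
    flows = []
    for step in range(2 * (n_nodes - 1)):
        for src in range(n_nodes):
            dst = (src + 1) % n_nodes  # next node around the ring
            flows.append((step, src, dst, chunk))
    return flows

# 8 emulated accelerators exchanging a 1 GiB gradient:
flows = ring_allreduce_flows(8, 1 << 30)
print(len(flows))  # 112 flows: 14 steps x 8 nodes
```

Replaying schedules like this exercises the bursty, synchronized many-to-many pattern that distinguishes AI back-end traffic from ordinary data center loads.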
We are proud to support and validate these next-generation innovations with purpose-built test solutions for next-gen data centers, cloud providers, and AI/ML infrastructures. At OFC 2025, we highlighted cutting-edge 1.6T, 800G, and 400G test platforms that included:
- 1.6T test solutions that help validate cutting-edge high-speed Ethernet infrastructure to enable network equipment manufacturers and service providers to ensure the highest levels of performance, lossless transport, and ultra-low latency. This included a demonstration of full line-rate 1.6T traffic across interconnects from multiple vendors, proving the strength of a multivendor, interoperable ecosystem.
- The award-winning B3 800G Appliance, the industry’s first high-density 800G OSFP and QSFP-DD test platform supporting IEEE 802.3df specifications to accelerate AI-driven Ethernet adoption with the intelligence to emulate AI workloads. B3 enables AI workload testing without the need to purchase costly GPUs. The B3 800G Appliance demonstrated high-speed Ethernet testing and early Ultra Ethernet Consortium transport support to help customers test scale-out networks conforming to the new UEC 1.0 specification.
- Support for LPO optics with tools to ensure network equipment meets real-world performance and scale demands.
- 400G test solutions designed to provide high-performance, cost-effective, and interoperable cloud-scale networking.
- The award-winning M1 Compact Appliance, a space-efficient platform for functional, performance, and benchmark testing in IP networking and automotive Ethernet applications.
We are here to ensure your investments in network infrastructure are ready for what’s next—from AI scalability to 1.6T performance.