400G Testing: 3 Challenges on the Horizon
In this Olympic year, the motto Citius, Altius, Fortius (Latin for Faster, Higher, Stronger) seems very relevant to where we are with Ethernet networking. We need faster links, higher levels of service, and stronger performance to push around all those videos of cats, as well as the more serious (and less cute) business traffic. Approximately every eight years, a new top speed for Ethernet is defined. 100G, first standardized in 2010, is now entering prime time, and today the IEEE is working on a 400G standard (802.3bs) that should be ready around 2018. These advances in speed enable new generations of optical transport equipment that will meet the market's ever-increasing demand for bandwidth.
The standards for 100G opened a path to market for early adopters using mature 10 x 10G technology. At the same time, they provided a clear blueprint for building cost-effective interfaces based on technology that would mature about five years after standardization. By that point, 25G-based signaling had become the basis for 100G, making it the cost-effective and impactful solution it is today.
The emergence of 400G promises a similar path. Early adopters can go to market with 25G-based technology (16 x 25G NRZ), but should have a migration plan to PAM-4 electrical interfaces, all while maintaining the appropriate photonic interoperability (FR8/LR8). Even with this approach, 400G technologies remain extremely challenging to bring to market:
- Photonic interfaces will use PAM-4 signaling, a significant departure from conventional NRZ
- PAM-4 will require significant improvements in SNR and linearity as well as bandwidth
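The SNR demand in the second point comes from simple geometry: packing four amplitude levels into the same swing shrinks each eye to one third of the NRZ eye. A minimal sketch of that back-of-the-envelope penalty (assuming equal peak-to-peak swing and additive noise, ignoring implementation effects):

```python
import math

def pam_snr_penalty_db(levels: int) -> float:
    """Approximate SNR penalty of M-level PAM relative to NRZ (PAM-2),
    assuming equal peak-to-peak swing and additive noise: the eye
    opening shrinks by a factor of (M - 1)."""
    return 20 * math.log10(levels - 1)

print(f"PAM-4 vs NRZ penalty: {pam_snr_penalty_db(4):.2f} dB")
```

The roughly 9.5 dB result is why PAM-4 links lean so heavily on linearity and, as discussed next, on FEC.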
To mitigate the challenges of bringing ultra-high-performance PAM-4 components to market, and to allow realistic reach in photonic links, 400G also uses FEC technology. While FEC has been widely used in OTN technology for many years, its use in general-purpose client interfaces is relatively novel. In fact, the expectation is now that PAM-4-based links will run with an error floor (potentially as high as 10^-4), with the FEC layer correcting this to deliver an effectively error-free link at the packet level.
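To see why a 10^-4 error floor can still yield an effectively error-free link, consider the Reed-Solomon RS(544,514) "KP4" code adopted for 400GbE, which corrects up to 15 ten-bit symbols per codeword. A minimal sketch of the uncorrectable-codeword probability, assuming independent random bit errors (real channels have burst correlations that FEC interleaving must handle, so treat this as an idealized bound):

```python
import math

def uncorrectable_prob(n: int = 544, t: int = 15,
                       symbol_bits: int = 10, ber: float = 1e-4) -> float:
    """Probability that an RS codeword of n symbols suffers more than t
    symbol errors, assuming independent random bit errors (no bursts)."""
    ps = 1 - (1 - ber) ** symbol_bits  # per-symbol error probability
    return sum(math.comb(n, k) * ps ** k * (1 - ps) ** (n - k)
               for k in range(t + 1, n + 1))

print(f"Uncorrectable codeword probability: {uncorrectable_prob():.3e}")
```

Under this idealized model the post-FEC failure rate is vanishingly small, which is precisely why margin against the pre-FEC error floor, rather than raw error-free operation, becomes the quantity worth measuring.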
For now, let’s look at three critical considerations when planning your road to higher speeds while maintaining a rigorous, effective testing process:
- Higher speeds fundamentally change the test scenario. We no longer test for 'zero' errors at a defined confidence level. We accept that raw (PRBS/PRBSQ) testing will produce errors, so validating an element as "good" also requires an in-depth understanding of error statistics and FEC behavior. Of course, we can test with framed traffic and validate error-free packets (and perhaps monitor the pre-FEC error rate), but this does not reveal how much margin remains. In other words, tools that validate the margin of individual elements are critical at 400G.
- A second consideration when adopting new speeds is establishing visibility in multiple domains. Physical-layer testing (often used today for photonic components such as optical modules) must now account for the FEC and for the fact that the link is not expected to deliver raw, error-free performance. The impact of skew, lane rate, and pattern sensitivity will require validation across multiple domains, from raw PRBS/PRBSQ testing (with statistical error analysis) to full FEC-based Ethernet traffic with pre- and post-FEC monitoring.
- The third challenge is the increasing integration of elements. The best example can be seen with pluggable optics such as the CFP8, the first-generation 400G pluggable. This marvel of integration packs eight lasers with highly linear modulators, linear laser drivers, PAM-4-to-NRZ bridges, high-performance photodiodes, highly linear TIAs, photonic lambda couplers, and microcontrollers into a small pluggable form factor with significant cooling challenges.
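The "zero errors at a defined confidence level" approach mentioned in the first point above illustrates why the classic method no longer fits: the standard zero-error binomial test tells you how many error-free bits you must observe to claim a target BER. A minimal sketch of that calculation (the 1e-12 target and 95% confidence figures are illustrative assumptions, not values from any standard):

```python
import math

def bits_for_zero_error_confidence(target_ber: float,
                                   confidence: float) -> float:
    """Error-free bits needed to claim BER < target_ber at the given
    confidence level, from the zero-error case of the binomial test:
    CL = 1 - (1 - BER)^N  ~  1 - exp(-N * BER)."""
    return -math.log(1 - confidence) / target_ber

n = bits_for_zero_error_confidence(1e-12, 0.95)
print(f"{n:.3e} bits, about {n / 400e9:.1f} s at 400 Gb/s")
```

At a pre-FEC floor near 10^-4, errors arrive constantly, so this zero-error framework gives way to characterizing the error statistics themselves and the margin they leave for the FEC.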
For all of these new testing requirements to be met, instrumentation must be able to test and validate all core elements in parallel: electrical and photonic signal integrity, pattern sensitivity, and microcontroller interaction over management interfaces such as MDIO and I²C. The subtle interactions within such a tightly integrated key component present an extremely challenging test scenario.
400G will bring considerable new challenges, central to which are the moves to PAM-4 (electrical and optical) and FEC coding. Newer, more insightful testing will be required to validate margin and diagnose issues through the coding and PAM-4 modulation. Testing can no longer be confined to a single layer; it must cover the link from the physical layer through to Ethernet. Test results must pinpoint where issues lie and fully validate the margin implications of the FEC channel. Having been through this transition with 100G, I recommend that companies start considering the implications of 400G, for both its opportunities and its challenges. Make no mistake, the path to 400G is fraught with challenges, but the rewards are great. I have little doubt that our industry will find a way through, just as we did with 100G. Our success depends on it.
Dr. Paul Brooks, product line manager for high-speed transport at Viavi, is a voting member of IEEE 802.3 and holds a Ph.D. in optical signal processing from the University of Southampton. Meet Dr. Brooks at ECOC 2016.
Learn more about Viavi's 400G Optical Network Tester, the industry's first 400G test solution to include forward error correction (FEC) and PAM-4 modulation support.