So far, the transition to 400G Ethernet has been largely confined to hyperscalers and carrier networks, but user and data center expectations are already pointing toward at least 800Gbps, and ultimately 1.6Tbps.
To those charting Ethernet's roadmap, 800Gbps looks achievable, but significant challenges remain in optics, power, architecture, and more.
There are many reasons data centers and cloud services need higher speeds: the continued growth of hyperscale networks such as Google, Amazon, and Facebook, and increasingly distributed cloud, AI, video, and mobile workloads, all of which current and future networks must support.
Increased traffic is another driving factor. According to the IEEE 802.3 Industry Connections Ethernet Bandwidth Assessment report published in April 2020, monthly global IP traffic will grow from 177 exabytes (EB) in 2017 to 396EB in 2022. The report notes that the underlying drivers, such as more users, faster access speeds and methods, and more services, all point to ever-increasing demand for bandwidth.
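For context, those figures imply a compound annual growth rate of roughly 17.5%. Here is a minimal sketch of that arithmetic in Python (the 177EB and 396EB endpoints come from the report; the CAGR formula itself is standard):

```python
# Compound annual growth rate implied by the IEEE 802.3 bandwidth assessment figures.
traffic_2017_eb = 177   # monthly global IP traffic in exabytes, 2017
traffic_2022_eb = 396   # projected monthly global IP traffic in exabytes, 2022
years = 2022 - 2017

cagr = (traffic_2022_eb / traffic_2017_eb) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.1%} per year")   # ~17.5%
```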
The industry is also pushing toward faster Ethernet. In late 2020, the IEEE and the IEEE Standards Association formed an Ethernet study group called IEEE 802.3 Beyond 400 Gbps. "There is a path toward Ethernet beyond 400G, but there are many options and physical challenges to consider in order to take Ethernet speed to the next level," John D'Ambrosia, Distinguished Engineer at Futurewei Technologies, said in a statement about the group's formation.
Late last year, the Optical Internetworking Forum (OIF) also launched new projects aimed at higher-speed Ethernet, including an 800G Coherent project. According to Tad Hofmeister, OIF vice president and technical lead for optical networking technologies at Google, the project aims to define an interoperable 800G coherent line specification for campus and data center interconnect applications; in essence, it defines how high-speed switch equipment communicates over long distances.
At the Ethernet Alliance's recent Technology Exploration Forum (TEF), experts from leading industry companies including Cisco, Juniper, Google, Facebook, and Microsoft, among them D'Ambrosia and Hofmeister, gathered to discuss the requirements and issues surrounding next-generation Ethernet speeds.
Power consumption holding back Ethernet speed
One of the major challenges in moving beyond 400Gbps is the power required to run the systems.
"Power is growing at an unsustainable rate," said Rakesh Chopra, a Cisco research fellow. What can be built, deployed, and maintained in a facility is limited by power, so you have to solve the power problem in the end. Power per bit continued to improve. The bandwidth can be increased by 80 times, but the power required for this is increased by 22 times. The more power you consume on the network, the fewer servers you can deploy. "It's not a matter of how small the equipment can be broken down, it's a matter of how much efficiency can be improved."
Power is the biggest limiting factor at speeds beyond 400G, said Sameh Boujelbene, senior director at Dell'Oro Group. "Power is already affecting the rollout of higher speeds at hyperscalers, because they have to wait for the various technology pieces to work efficiently within their existing power budgets. The problem only gets bigger as speeds increase."
"It's about whether you hit the wall first, bandwidth or power," said Brad Booth, senior engineer at Microsoft's Azure Hardware Systems Group. More and more power is required, but power is limited. "You have to rely on what's available through the facilities and supporting infrastructure being built now."
Looking to CPO to combine optics and switch silicon
Booth said that DARPA and a number of companies and research institutions are working out how to deliver higher bandwidth density within a better power envelope, which will require creative answers. "Future data center networks may require a combination of photonic innovation and optimized network architecture," Boujelbene said.
One of these potential innovations is co-packaged optics (CPO), which is being developed by Broadcom, Cisco, and Intel. CPO integrates the currently separate optics and switch silicon into a single package to sharply reduce power consumption.
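To see why co-packaging matters for the power budget, consider a rough model of a high-radix switch. The port count below matches a 25.6Tbps switch built from 800G ports, but the per-port wattages are illustrative assumptions chosen only to show the shape of the savings, not vendor figures:

```python
# Rough, illustrative power comparison: pluggable optics vs. co-packaged optics
# (CPO) on a hypothetical 25.6 Tbps switch. Wattages are assumptions, not
# measured values; CPO saves power mainly by shortening the electrical path
# between the switch ASIC and the optics.
ports = 32                    # 32 x 800G = 25.6 Tbps
pluggable_w = 16.0            # assumed power of one 800G pluggable module
cpo_w = 9.0                   # assumed power of the same port co-packaged

pluggable_total = ports * pluggable_w
cpo_total = ports * cpo_w
print(f"Pluggable optics: {pluggable_total:.0f} W")
print(f"Co-packaged:      {cpo_total:.0f} W")
print(f"Savings:          {1 - cpo_total / pluggable_total:.0%}")  # ~44% under these assumptions
```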
"The CPO is a big step forward in terms of power reduction," said Rob Stone, Facebook's technology sourcing manager. "What's needed is a standard-supported CPO ecosystem for broad adoption," Stone, who is also chair of the technology working group of the Ethernet Technology Consortium, which announced the completion of the 800GbE specification.
In a statement on the CPO collaboration website, Facebook and Microsoft said they are jointly developing a CPO specification to cope with growing data center traffic by reducing the power consumed by the switch's optical-electrical interface: "Publicly available common system specifications are required for optics and switch suppliers to rapidly develop co-packaged solutions and to support the establishment of a diverse vendor ecosystem."
OIF is also working on a Co-Packaging Framework, a specification covering the application spaces and related technical considerations for co-packaging communication interfaces with one or more ASICs. Hofmeister explained that its primary goal is to identify opportunities for interoperability standards to be pursued in future projects by OIF or other standards bodies.
A major transition, still in its early stages
But experts point out that CPO has a long way to go. In a recent blog post on CPO, Cisco's Chopra wrote, "Architecting, designing, deploying, and operating a system that uses CPO is a very difficult task, so it's important to get started before it's too late. In today's service-provider and web-scale networks, the links outside the rack are mostly optical, but the cabling inside the rack is copper. As speeds increase, the long copper links must be converted to optics. Ultimately, every link coming out of the silicon package will be optical, not electrical."
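The copper-to-optics shift Chopra describes is driven by reach: the distance a passive copper cable can cover shrinks as per-lane signaling rates rise. The reach values below are rough rules of thumb assumed for illustration, not figures from a specific standard:

```python
# Approximate passive copper (DAC) reach versus per-lane signaling rate.
# Reach values are rough industry rules of thumb, assumed for illustration.
dac_reach_m = {
    "25G NRZ":   5.0,   # comfortably spans a rack
    "50G PAM4":  3.0,
    "100G PAM4": 1.5,   # struggles to cover in-rack cabling
}
cable_run_m = 2.0       # assumed longest cable run needed inside one rack

for lane, reach in dac_reach_m.items():
    verdict = "copper OK" if reach >= cable_run_m else "needs optics"
    print(f"{lane:>10}: ~{reach} m -> {verdict}")
```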
"It's getting harder and harder to get up to speed," said David Opelt, Juniper Networks engineer. It is not yet known whether a system that supports the next generation of speeds and higher densities can be built using conventional methods. Even if possible, it is not clear whether the results will be acceptable from an end-user perspective.”
Ofelt added that it will take considerable time for the technologies behind higher Ethernet speeds to ship in volume, with proper packaging and system support.
Another issue with a large-scale transition to higher speeds is that the gap in adoption timing can be quite wide.
For example, the majority of enterprise customers will move from 10G to 25G within the next two to five years, and for many of them the next step will be 50G-100G. But Vlad Kozlov, founder and CEO of market research firm LightCounting, said that expectations for wireless at the network edge could change this outlook. "Enterprises that depend heavily on digital services will move from 100G to 400G in the next two to five years, and the next speed will be 800G or 1.6T. This picture could change with bandwidth-hungry AI services, because most companies will need faster connections to stream video monitoring their operations."
"There's a lot to do, and we're just getting started," Dambrosia said. "What we really need is a flexible infrastructure to support future bandwidth demands beyond 400Gbps," said Dambrosia.