400ZR: Enable 400G for Next-Generation DCI

To cope with large-scale cloud services and other growing data center storage and processing needs, data center systems have become increasingly decentralized and difficult to manage. Applications such as artificial intelligence (AI) also demand low-latency, high-bandwidth network architectures to carry the heavy machine-to-machine input/output (I/O) traffic generated between servers. To preserve acceptable performance for these applications, the maximum fiber distance between distributed data centers must be limited to about 100 km, so these data centers are connected in distributed clusters. To deliver high-bandwidth, high-density data center interconnection at the same time, 400G ZR came into being. In this post, we will explain what 400ZR is, how it works, and the impact it will have.

What Is 400ZR?

400ZR, or 400G ZR, is a standard that enables the transmission of multiple 400GE payloads over Data Center Interconnect (DCI) links up to 80 km using dense wavelength division multiplexing (DWDM) and higher-order modulation. It aims to ensure an affordable, long-term implementation based on single-carrier 400G using dual-polarization 16QAM (16-state quadrature amplitude modulation) at approximately 60 gigabaud (Gbaud). Developed by the Optical Internetworking Forum (OIF), the 400ZR project is essential for reducing the cost and complexity of high-bandwidth data center interconnects and for promoting interoperability among optical module manufacturers.
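As a sanity check on those numbers, the raw line rate implied by dual-polarization 16QAM at roughly 60 Gbaud can be worked out in a few lines. This is a rough sketch using the approximate figures quoted above; real 400ZR adds FEC and framing overhead on top of the 400GE payload:

```python
import math

def raw_line_rate_gbps(baud_g, qam_order, polarizations=2):
    """Raw bit rate = symbol rate x bits per symbol x polarizations."""
    bits_per_symbol = math.log2(qam_order)
    return baud_g * bits_per_symbol * polarizations

# 400ZR: dual-polarization 16QAM at approximately 60 Gbaud
print(raw_line_rate_gbps(60, 16))  # 480.0 Gb/s raw, leaving headroom above 400G
```

The headroom above 400 Gb/s is what carries the forward error correction and framing overhead.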

400G ZR

Figure 1: 400G ZR Transceiver in DCI Switch or Router

How Does 400ZR Work?

400G ZR proposes a technology-driven solution for high-capacity data transmission that can be matched to a 400GE switch port. It packs advanced coherent optical technology into small, pluggable form factor modules. Although the IA (implementation agreement) does not specify a product form factor, the companies and groups contributing to 400ZR have defined form factors to fit the solution. These form factors, defined separately by Multi-Source Agreement (MSA) bodies, specify compact transceivers such as QSFP-DD and OSFP, which are connectorized and pluggable into a compatible socket in a system platform. Because the OIF and the form factor MSAs are industry-wide organizations, compliant 400ZR solutions that come to market will also be interoperable, and this interoperability offers the dual benefit of simplified supply chain management and deployment.

400ZR+ for Longer-reach Optical Transmission

Like other 400G transceivers, the pluggable coherent 400ZR solution supports 400G Ethernet interconnection and multi-vendor interoperability. However, it is not suitable for next-generation metro-regional networks that need transmission beyond 80 km at a line capacity of 400 Gb/s. For such cases, 400ZR+, or 400G ZR+, has been proposed. 400ZR+ is expected to further enhance modularity by supporting multiple channel capacities based on reach requirements and compatibility with installed metro optical infrastructure. With 400ZR+, both transmission distance and line capacity can be assured.

What Influences Will 400ZR Bring About?

Although 400ZR technology is still in its infancy, once it is rolled out it will have a significant impact on several areas, notably hyperscale data centers, distributed campuses and metropolitan areas, and telecommunications providers.

400ZR Helps Cloud and Hyperscale Data Centers Adapt to the Growing Demand for Higher Bandwidth

The development of DCI and 400ZR can help cloud and hyperscale data centers adapt to the growing demand for higher bandwidth, coping with the exponential growth of applications such as cloud services, IoT devices, and streaming video. Over time, 400G ZR will contribute even more as applications and users continue to multiply across the network.

400ZR Will Support Interconnects in Distributed Data Centers

As is mentioned above, 400ZR technology will support the necessary high-bandwidth interconnects to connect distributed data centers. With this connection, distributed data centers can communicate with each other, share data, balance workloads, provide backup, and expand data center capacity when needed.

400ZR Allows Telecommunications Companies to Backhaul Residential Traffic

The 400G ZR standard will allow telecommunications companies to backhaul residential traffic. When running at 200 Gb/s using 64 Gbaud signaling and QPSK modulation, 400ZR can extend reach across high-loss spans. For 5G networks, 400G ZR provides mobile backhaul by aggregating multiple 25 Gb/s streams, helping to promote emerging 5G applications and markets.

400ZR+/400ZR- Will Provide Greater Convenience Based on 400ZR

In addition to the interoperable 400G mode, the 400ZR transceiver is also expected to support other modes that broaden its range of addressable applications. These modes are called 400ZR+ and 400ZR-. The “+” indicates that the module's power consumption exceeds the 15 W required by the IA and some pluggable form factors, allowing it to use more powerful signal processing to transmit over distances of hundreds of kilometers. The “-” indicates that the module supports lower-speed modes, such as 300G, 200G, and 100G, which give network operators more flexibility.

Will 400ZR Stay Popular In the Next Few Years?

According to the data from LightCounting shown below, 400ZR will lead the growth of optical module sales in 2021-2024. The figure shows shipment data for high-speed (100G and above) and low-speed (10G and below) DWDM modules sold on the market. Modules used in cloud or DCI applications show a clear upward trend over 2021-2024, which means 400ZR will lead annual growth from 2021.


In addition, with the first 100 Gbps SerDes implementations in switching chips expected in 2021, the required data rate for the optics interface will move to 800 Gbps within the next one to two years. The OSFP form factor has already been defined to allow an 8x 100GE interface without changing the definition of the transceiver. In parallel, coherent optics on the line side will transition to 128 Gbaud 16QAM within a similar time frame, making it easy to migrate from the current 400ZR to next-generation 800ZR. Therefore, 400ZR is crucial both to current and future network development.

Article Source

https://community.fs.com/blog/400zr-enable-400g-for-next-generation-dci.html

Related Articles

https://community.fs.com/blog/400g-qsfp-dd-transceiver-types-overview.html

https://community.fs.com/blog/400g-osfp-transceiver-types-overview.html

400G Data Center Deployment Challenges and Solutions

As technology advances, industry applications such as video streaming, AI, and data analytics are pushing for ever-higher data speeds and massive bandwidth. 400G technology, with its next-generation optical transceivers, brings a new user experience with innovative services that allow faster processing of more data at a time.

Large data centers and enterprises struggling with data traffic issues embrace 400G solutions to improve operational workflows and ensure better economics. Below is a quick overview of the rise of 400G, the challenges of deploying this technology, and the possible solutions.

The Rise of 400G Data Centers

The rapid transition to 400G in several data centers is changing how networks are designed and built. Some of the key drivers of this next-gen technology are cloud computing, video streaming, AI, and 5G, which have driven the demand for high-speed, high-bandwidth, and highly scalable solutions. The large amount of data generated by smart devices, the Internet of Things, social media, and other As-a-Service models are also accelerating this 400G transformation.

The major benefits of upgrading to a 400G data center are the increased data capacity and network capabilities required for high-end deployments. This technology also delivers more power, efficiency, speed, and cost savings. A single 400G port is considerably cheaper than four individual 100G ports. Similarly, the increased data speeds allow for convenient scale-up and scale-out by providing high-density, reliable, and low-cost-per-bit deployments.
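The port-count economics can be illustrated with a toy comparison. The per-port cost and power figures below are purely hypothetical placeholders, not vendor data; the point is only the structural saving of one 400G port versus four 100G ports:

```python
# Hypothetical per-port figures for illustration only (not vendor data)
port_100g = {"cost_usd": 1000, "power_w": 4.5}
port_400g = {"cost_usd": 3000, "power_w": 12.0}

# Serving 400G of capacity with four discrete 100G ports
four_by_100g = {k: 4 * v for k, v in port_100g.items()}

print(four_by_100g)  # {'cost_usd': 4000, 'power_w': 18.0}
print(port_400g["cost_usd"] < four_by_100g["cost_usd"])  # True, under these assumptions
```

Whether the single port actually wins depends on real pricing, but the aggregation argument scales the same way.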

How 400G Works

Before we look at the deployment challenges and solutions, let’s first understand how 400G works. The actual line rate of a 400G Ethernet link is 425 Gbps; the extra 25 Gbps carries the forward error correction (FEC) overhead, which detects and corrects transmission errors.

400G adopts 4-level pulse amplitude modulation (PAM4), which doubles the bits carried per symbol compared with Non-Return to Zero (NRZ) signaling; combined with higher baud rates, this yields up to four times the per-lane data rate. With PAM4, operators can implement four lanes of 100G or eight lanes of 50G for different form factors (i.e., OSFP and QSFP-DD). This optical transceiver architecture supports transmission of up to 400 Gbit/s over either parallel fibers or multiple wavelengths.
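The lane arithmetic can be checked directly. NRZ carries 1 bit per symbol and PAM4 carries 2, so at the standard lane symbol rates either eight 50G lanes or four 100G lanes add up to the 425 Gbps line rate mentioned earlier:

```python
def lane_rate_gbps(baud_g, bits_per_symbol):
    """Per-lane bit rate from symbol rate and modulation depth."""
    return baud_g * bits_per_symbol

# NRZ = 1 bit/symbol; PAM4 = 2 bits/symbol (four amplitude levels)
pam4_50g = lane_rate_gbps(26.5625, 2)   # ~50G PAM4 lane
pam4_100g = lane_rate_gbps(53.125, 2)   # ~100G PAM4 lane

print(8 * pam4_50g)   # 425.0 -> 8 x 50G lanes
print(4 * pam4_100g)  # 425.0 -> 4 x 100G lanes
```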

PAM4

Deployment Challenges & Solutions

Interoperability Between Devices

The PAM4 signaling introduced with 400G deployments creates interoperability issues between the 400G ports and legacy networking gear. That is, the existing NRZ switch ports and transceivers aren’t interoperable with PAM4. This challenge is widely experienced when deploying network breakout connections between servers, storage, and other appliances in the network.

400G transceivers transmit and receive over 4 lanes of 100G or 8 lanes of 50G with PAM4 signaling on both the electrical and optical interfaces. Legacy 100G transceivers, however, are designed around 4 lanes of 25G NRZ signaling on both the electrical and optical sides. The two are simply not interoperable, which calls for a transceiver-based solution.

One such solution is the 100G transceivers that support 100G PAM4 on the optical side and 4X25G NRZ on the electrical side. This transceiver performs the re-timing between the NRZ and PAM4 modulation within the transceiver gearbox. Examples of these transceivers are the QSFP28 DR and FR, which are fully interoperable with legacy 100G network gear, and QSFP-DD DR4 & DR4+ breakout transceivers. The latter are parallel series modules that accept an MPO-12 connector with breakouts to LC connectors to interface FR or DR transceivers.
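The compatibility rule this section describes, matching modulation format and per-lane rate on the optical side, can be sketched as a small lookup. The module names and lane tuples here are illustrative simplifications, not a complete catalog:

```python
# Illustrative sketch: (modulation, lane count, per-lane Gbps) on the optical side
LANE_FORMATS = {
    "QSFP28 legacy 100G": ("NRZ", 4, 25),    # 4 x 25G NRZ
    "QSFP28 DR/FR":       ("PAM4", 1, 100),  # 1 x 100G PAM4
    "QSFP-DD DR4 lane":   ("PAM4", 1, 100),  # one breakout lane of a 400G DR4
}

def optically_compatible(a, b):
    """Two optical interfaces interoperate only if modulation and lane rate match."""
    mod_a, _, rate_a = LANE_FORMATS[a]
    mod_b, _, rate_b = LANE_FORMATS[b]
    return mod_a == mod_b and rate_a == rate_b

print(optically_compatible("QSFP-DD DR4 lane", "QSFP28 DR/FR"))        # True
print(optically_compatible("QSFP-DD DR4 lane", "QSFP28 legacy 100G"))  # False
```

This is why the DR4 breakout can face 100G DR/FR ports but not legacy NRZ optics without a gearbox in between.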

NRZ & PAM4
Interoperability Between Devices

Excessive Link Flaps

Link flaps are faults that occur during data transmission due to a series of errors or failures on the optical connection. When this occurs, both transceivers must perform auto-negotiation and link training (AN-LT) before data can flow again. If link flaps frequently occur, i.e., several times per minute, it can negatively affect throughput.

And while link flaps are rare with mature optical technologies, they still occur, often caused by configuration errors, bad cables, or defective transceivers. With 400GbE, link flaps may also stem from heat and design issues in transceiver modules or switches. Careful selection of transceivers, switches, and cables can help solve the link flap problem.
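Operationally, "several times per minute" is something a monitoring script can flag. Below is a minimal sketch of a sliding-window flap detector; the threshold and window values are arbitrary examples, not figures from any standard:

```python
from collections import deque

class FlapDetector:
    """Flag a link as unstable if it flaps more than `threshold` times
    within a sliding window of `window_s` seconds (illustrative values)."""

    def __init__(self, threshold=3, window_s=60.0):
        self.threshold = threshold
        self.window_s = window_s
        self.events = deque()

    def record_flap(self, t):
        """Record a flap at time t (seconds); return True if the link is unstable."""
        self.events.append(t)
        # Drop events that have aged out of the window
        while self.events and t - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) > self.threshold

det = FlapDetector()
print([det.record_flap(t) for t in (0, 10, 20, 30)])  # [False, False, False, True]
```

A real monitor would feed this from switch telemetry and trigger dampening or an alert when it returns True.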

Transceiver Reliability

Some optical transceiver manufacturers face challenges staying within the devices’ power budget. This results in heat issues, which cause fiber alignment problems, packet loss, and optical distortions. Transceiver reliability problems often occur when old QSFP transceiver form factors designed for 40GbE are used at 400GbE.

Similar challenges are also witnessed with newer modules used in 400GbE systems, such as the QSFP-DD and CFP8 form factors. A solution is to stress test transceivers before deploying them in highly demanding environments. It’s also advisable to prioritize transceiver design during the selection process.

Deploying 400G in Your Data Center

Keeping pace with the ever-increasing number of devices, users, and applications in a network calls for a faster, high-capacity, and more scalable data infrastructure. 400G meets these demands and is the optimal solution for data centers and large enterprises facing network capacity and efficiency issues. The successful deployment of 400G technology in your data center or organization depends on how well you have articulated your data and networking needs.

Upgrading your network infrastructure can help relieve bottlenecks from speed and bandwidth challenges to cost constraints. However, making the most of your network upgrades depends on the deployment procedures and processes. This could mean solving the common challenges and seeking help whenever necessary.

A rule of thumb is to enlist the professional help of an IT expert who will guide you through the 400G upgrade process. The IT expert will help you choose the best transceivers, cables, routers, and switches to use and even conduct a thorough risk analysis on your entire network. That way, you’ll upgrade appropriately based on your network needs and client demands.
Article Source: 400G Data Center Deployment Challenges and Solutions
Related Articles:

NRZ vs. PAM4 Modulation Techniques
400G Multimode Fiber: 400G SR4.2 vs 400G SR8
Importance of FEC for 400G

Silicon Photonics: Next Revolution for 400G Data Center


With the explosion of 5G applications and cloud services, traditional technologies are facing fundamental limits of power consumption and transmission capacity, which drives the continual development of optical and silicon technology. Silicon photonics is an evolutionary technology enabling the major improvements in density, performance, and economics required by 400G data center applications, and it drives next-generation optical communication networks. What is silicon photonics? How does it promote the revolution of 400G applications in data centers? Keep reading to find out.

What Is Silicon Photonics Technology?

Silicon photonics (SiPh) is a material platform from which photonic integrated circuits (PICs) can be made. It uses silicon as the main fabrication element. PICs consume less power and generate less heat than conventional electronic circuits, offering the promise of energy-efficient bandwidth scaling.

It drives the miniaturization and integration of complex optical subsystems into silicon photonics chips, dramatically improving performance, footprint, and power efficiency.

Conventional Optics vs Silicon Photonics Optics

Here is a technology comparison between conventional optics and silicon photonics optics, taking the QSFP-DD DR4 400G module and the QDD DR4 400G Si module as examples:

The difference between a standard 400GBASE-DR4 QSFP-DD PAM4 optical transceiver and its silicon photonic counterpart lies in the 400G silicon photonic chip, which breaks the bottleneck of mega-scale data exchange and shows clear advantages in low power consumption, small footprint, relatively low cost, and ease of large-volume integration.

Silicon photonic integrated circuits provide an ideal solution to realize the monolithic integration of photonic chips and electronic chips. Adopting silicon photonic design, a QDD-DR4-400G-Si module combines high-density & low-consumption, which largely reduces the cost of optical modules, thereby saving data center construction and operating expenses.

Why Adopt Silicon Photonics in Data Centers?

To Solve I/O Bottlenecks

The world’s growing data demand has exhausted bandwidth and computing power resources in data centers. Chips keep getting faster to meet the growing demand for data consumption, but the optical signal coming from the fiber must still be converted to an electronic signal to communicate with the chip sitting on a board deep in the data center. Because that electrical signal must travel some distance from the optical transceiver, where it was converted from light, to the processing and routing electronics, we have reached a point where the chip can process information faster than the electrical signal can get in and out of it.

To Reduce Power Consumption

Heat and power dissipation are enormous challenges for the computing industry. Power consumption translates directly into heat, and the main driver of power consumption is data transmission. It is estimated that data centers consume 200 TWh each year, more than the national energy consumption of some countries. This is why some of the world’s largest data centers, including those of Amazon, Google, and Microsoft, are located in Alaska and similarly cold climates.

To Save Operation Budget

At present, a typical hyperscale data center has more than 100,000 servers and over 50,000 switches. Connecting them requires more than 1 million optical modules, costing roughly US$150-250 million, which accounts for 60% of the cost of the data center network and exceeds the combined cost of equipment such as switches, NICs, and cables. This high cost pushes the industry to reduce the unit price of optical modules through technological upgrades, and fiber optic modules adopting silicon photonics technology are expected to solve this problem.

Silicon Photonics Applications in Communication

Silicon photonics has proven to be a compelling platform for enabling next-generation coherent optical communications and intra-data center interconnects. This technology can support a wide range of applications, from short-reach interconnects to long-haul communications, making a great contribution to next-generation networks.

  • 100G/400G Datacom: data centers and campus applications (to 10km)
  • Telecom: metro and long-haul applications (to 100 and 400 km)
  • Ultra short-reach optical interconnects and switches within routers, computers, HPC
  • Functional passive optical elements including AWGs, optical filters, couplers, and splitters
  • 400G transceiver products including embedded 400G optical modules, 400G DAC breakout cables, transmitters/receivers, active optical cables (AOCs), as well as 400G DACs.

Now & Future of Silicon Photonics

Yole predicted that the silicon optical module market would grow from approximately US$455 million in 2018 to around US$4 billion in 2024, a CAGR of 44.5%. According to LightCounting, the overall data communication high-speed optical module market will reach US$6.5 billion by 2024, with silicon optical modules accounting for 60% (up from 3.3% in 2020).

Intel, one of the leading silicon photonics companies, holds a 60% market share in silicon photonic transceivers for datacom. Intel has already shipped more than 3 million units of its 100G pluggable transceivers in just a few years and continues to expand its silicon photonics product offerings. Cisco acquired Acacia for US$2.6 billion and Luxtera for US$660 million. Other companies, such as Inphi and NeoPhotonics, are also proposing silicon photonic transceivers backed by strong technologies.

Original Source: Silicon Photonics: Next Revolution for 400G Data Center

400G OTN Technologies: Single-Carrier, Dual-Carrier and Quad-Carrier


In order to achieve 400G long-haul (LH) transmission, three 400G Optical Transport Network (OTN) technologies come into being to meet the needs: single-carrier 400G, dual-carrier 400G, and quad-carrier 400G. They differ from each other mainly in the number of wavelengths used for transmission. This post will reveal what they are and their respective pros and cons.

Single-Carrier for 400G OTN

Single-carrier 400G, or single-wavelength 400G, means there is 400G capacity on a single wavelength. The single-carrier 400G adopts high-order modulation formats such as PM-16QAM, PM-32QAM and PM-64QAM. Normally, a single-carrier for 400G optical transport network is used only in network access, metro, or DCI (Data Center Interconnection) transmission.

Single-Carrier for 400G OTN

Figure 1: Single-Carrier for 400G OTN

Take PM-16QAM (Polarization-Multiplexed 16 Quadrature Amplitude Modulation) as an example. PM means the 400G (448 Gbit/s) optical signal is split across two polarization directions, X and Y, which halves the rate each polarization must carry (224 Gbit/s). 16QAM then encodes 4 bits in each symbol (16 = 2^4), reducing the symbol rate on each polarization to 1/4 of that 224 Gbit/s. With PM-16QAM, the resulting symbol rate is 56 Gbaud (the rate the electronics must process).
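That arithmetic reduces to one formula: symbol rate = line rate / (bits per symbol x polarizations). A quick check, using the 448 Gbit/s figure above and ignoring any further overhead:

```python
import math

def symbol_rate_gbaud(line_rate_gbps, qam_order, polarizations=2):
    """Gbaud needed to carry a line rate with a given QAM order and polarization count."""
    bits_per_symbol = math.log2(qam_order) * polarizations
    return line_rate_gbps / bits_per_symbol

# 448 Gbit/s over PM-16QAM: 4 bits/symbol x 2 polarizations = 8 bits per symbol
print(symbol_rate_gbaud(448, 16))  # 56.0 Gbaud
```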

Note: In current circuit technology, 100 Gbit/s is approaching the electronic bottleneck. If the baud rate keeps increasing, problems such as signal loss, power dissipation, and electromagnetic interference will remain a hassle and, even if solvable, would require tremendous cost.

PM-16QAM

Figure 2: PM-16QAM

Pros of Single-Carrier for 400G Optical Transport Network

  • Compared with multi-carrier schemes, single-carrier 400G is an easier wavelength allocation solution, with a simpler structure and smaller size that allow easy network management and low power consumption.
  • With higher-order QAM, a single-carrier 400G OTN can increase signal rates and spectrum efficiency, significantly expanding network capacity and the number of users it can support.
  • With high system integration, it can also combine separate subsystems into a coordinated whole to achieve the best overall performance.

Cons of Single-Carrier for 400G Optical Transport Network

Since single-carrier 400G OTN adopts more advanced QAM, it requires a higher OSNR (Optical Signal-to-Noise Ratio) and has a greatly reduced transmission distance (less than 200 km). Single-carrier is also more susceptible to laser phase noise and fiber nonlinear effects. It is therefore the best solution only for applications that need large bandwidth capacity but not ultra long-haul transmission distances.

Dual-Carrier for 400G OTN

Dual-carrier 400G, also called dual-wavelength 400G, offers 400G capacity via two 200G wavelengths. Based on a 2x 200G super-channel scheme, it adopts lower-order modulation formats such as PM-QPSK (Quadrature Phase Shift Keying, in which each symbol represents two bits, halving the symbol rate), PM-8QAM, or PM-16QAM. Dual-carrier 400G OTN is applied in more complex metro networks to achieve 400G long-haul transmission.

Dual-Carrier for 400G OTN

Figure 3: Dual-Carrier for 400G OTN

Pros of Dual-Carrier for 400G Optical Transport Network

  • The spectrum efficiency of dual-carrier 400G has increased by more than 165%, with relatively high system integration, small size, and low power consumption. Dual-carrier 400G is regarded as the most commonly used technology for 400G OTN.
  • The span of dual-carrier 400G is longer than that of single-carrier 400G, reaching up to 500 km in commercial use. When deployed with low-attenuation fiber optic cable and EDFAs (Erbium-Doped Fiber Amplifiers), dual-carrier 400G OTN can cover more than 1000 km, which basically satisfies 400G long-haul transmission applications.

Cons of Dual-Carrier for 400G Optical Transport Network

Even with low-attenuation fiber optic cable and EDFAs, dual-carrier 400G still cannot reach as far as quad-carrier 400G, making it unsuitable for ultra long-haul (ULH) transmission over 2000 km.

Quad-Carrier for 400G OTN

Quad-carrier 400G refers to a solution that offers 400G capacity through four 100G wavelengths. It is achieved by constructing a 400G super-channel based on 100G PM-QPSK with four carriers, suitable for ultra long-haul (ULH) transmission over 2000km.

Quad-Carrier for 400G OTN

Figure 4: Quad-Carrier for 400G OTN

Pros of Quad-Carrier for 400G Optical Transport Network

  • Quad-carrier 400G OTN adopts mature 100G transmission technology that has been widely used commercially.
  • It can achieve ultra long-haul transmission of more than 2000km at relatively low cost.

Cons of Quad-Carrier for 400G Optical Transport Network

A quad-carrier 400G system makes sense only when spectrum compression technology is introduced to improve spectrum efficiency and the 100G chip is upgraded to solve integration and power consumption problems. Otherwise, a 400G system built on current 100G chips is essentially a 100G system.
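The symbol-rate formula makes the trade-off between the three schemes explicit: fewer carriers force higher-order modulation or higher baud rates per carrier. A rough comparison on the nominal 400 Gbit/s payload, with FEC and framing overhead ignored:

```python
import math

def per_carrier_gbaud(total_gbps, carriers, qam_order, polarizations=2):
    """Symbol rate each carrier must run at to share the total payload."""
    bits_per_symbol = math.log2(qam_order) * polarizations
    return (total_gbps / carriers) / bits_per_symbol

print(per_carrier_gbaud(400, 1, 16))  # single-carrier PM-16QAM: 50.0 Gbaud
print(per_carrier_gbaud(400, 2, 4))   # dual-carrier PM-QPSK:    50.0 Gbaud
print(per_carrier_gbaud(400, 4, 4))   # quad-carrier PM-QPSK:    25.0 Gbaud
```

The quad-carrier scheme's 25 Gbaud per carrier is exactly why it can reuse mature 100G PM-QPSK components, at the cost of occupying four wavelengths.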

Conclusion

In all, 400G long-haul transmission is mainly realized by single-carrier, dual-carrier, and quad-carrier schemes. Single-carrier 400G OTN can only cover distances under 200 km; dual-carrier 400G OTN is the ideal solution for MAN transmission (with PM-16QAM) and medium long-haul transmission (with PM-QPSK); quad-carrier 400G OTN has the same transmission distance as 100G and is appropriate for ULH transmission. As global data traffic keeps climbing, there is no end to bandwidth demands. While it may take time to transition to 400G, you can read What’s the Current and Future Trend of 400G Ethernet? to make preparations first.

Original Source: 400G OTN Technologies: Single-Carrier, Dual-Carrier and Quad-Carrier

400G Optics in Hyperscale Data Centers

Since their advent, data centers have been striving to keep up with rising bandwidth requirements. The stats show that 3.04 exabytes of data are generated daily. For a hyperscale data center, the bandwidth requirements are massive, and the scalable nature of its applications demands a preemptive approach. The introduction of 400G data centers has taken data transfer speed to a whole new level and brought significant convenience in addressing various areas of concern. In this article, we will dig a little deeper and try to answer the following questions:

  • What are the driving factors of 400G development?
  • What are the reasons behind the use of 400G optics in hyperscale data centers?
  • What are the trends in 400G devices in large-scale data centers?

What Are the Driving Factors For 400G Development?

The driving factors for 400G development fall mainly into two categories: video streaming services and video conferencing services. Both require very high data transfer speeds to function smoothly across the globe.

Video Streaming Services

Video streaming services were already straining bandwidth when the COVID-19 pandemic forced a large population to stay and work from home, sharply increasing the use of streaming platforms. A medium-quality stream on Netflix consumes 0.8 GB per hour; multiply that by over 209 million subscribers. As traveling costs fell, the savings went toward higher-quality streams such as HD and 4K, and what stood at 0.8 GB per hour rose to 3 and 7 GB per hour. This drove the need for 400G development.

Video Conferencing Services

As COVID-19 made working from home the new norm, video conferencing services also saw a major boost. As of 2021, 20.56 million people were reported to be working from home in the US alone. As video conferencing took center stage, Zoom, which consumes about 500 MB per hour, saw a huge increase in its user base, putting further pressure on data transfer needs.

What Makes 400G Optics the Ideal Choice For Hyperscale Data Centers?

Significant Decrease in Energy and Carbon Footprint

To put it simply, 400G quadruples the data transfer speed. Facilitating 400GbE through a single 400G port instead of a 4 x 100G breakout reduces port cost, and a single node at the output minimizes the risk of failure while lowering energy requirements. This reduces the ESG footprint, which has become a KPI for organizations going forward.

Reduced Operational Cost

As mentioned earlier, a 400G solution requires a single 400G port, whereas addressing the same requirement with a 100G solution requires four 100G ports. On a router, four ports cost far more than a single port that can handle the same rapid data transfer, and the same applies to power. Together, these two factors bring operational cost down considerably.

400G Optics

Trends of 400G Optics in Large-Scale Data Centers—Quick Adoption

The introduction of 400G solutions in large-scale data centers has reshaped the entire sector, owing to a huge increase in data transfer speeds. According to research, 400G is expected to replace 100G and 200G deployments far faster than its predecessors did. Since its introduction, more and more vendors have been upgrading to network devices that support 400G. The following image depicts the technology adoption rate.

Trends of 400G Optics

Challenges Ahead

Lack of Advancement in the 400G Optical Transceiver Sector

Although the shift toward such network devices is rapid, there are a number of implementation challenges, because it is not only the devices that need upgrading but also the infrastructure. Vendors are trying to stay ahead of the curve, but the development and maturity of optical transceivers are not yet at the expected benchmark, and the same is true of their cost and reliability. As optical transceivers are a critical element, this is a major challenge in the deployment of 400G solutions.

Latency Measurement

In addition, the introduction of this solution has also made network testing and monitoring more important than ever. Latency measurement has always been a key indicator when evaluating performance. Data throughput combined with jitter and frame loss also comes as a major concern in this regard.

Investment in Network Layers

Lastly, creating a plug-and-play environment for this solution needs to become more realistic, which will require greater investment in the physical, higher-level, and network-IP component layers.

Conclusion

Rapid technological advancements have led to concepts like the Internet of Things. These implementations require greater data transfer speeds. That, combined with the world going to remote work, has exponentially increased the traffic. Hyperscale data centers were already feeling the pressure and the introduction of 400G data centers is a step in the right direction. It is a preemptive approach to address the growing global population and the increasing number of internet users.

Article Source: 400G Optics in Hyperscale Data Centers

Related Articles:

How Many 400G Transceiver Types Are in the Market?

Global Optical Transceiver Market: Striding to High-Speed 400G Transceivers