400G Data Center Deployment Challenges and Solutions

As technology advances, industry applications such as video streaming, AI, and data analytics are driving demand for higher data rates and massive bandwidth. 400G technology, with its next-generation optical transceivers, enables a new user experience and innovative services by moving and processing far more data at a time.

Large data centers and enterprises struggling with data traffic issues embrace 400G solutions to improve operational workflows and ensure better economics. Below is a quick overview of the rise of 400G, the challenges of deploying this technology, and the possible solutions.

The Rise of 400G Data Centers

The rapid transition to 400G in several data centers is changing how networks are designed and built. Key drivers of this next-gen technology are cloud computing, video streaming, AI, and 5G, which demand high-speed, high-bandwidth, and highly scalable solutions. The large amounts of data generated by smart devices, the Internet of Things, social media, and As-a-Service models are also accelerating this 400G transformation.

The major benefits of upgrading to a 400G data center are the increased data capacity and network capabilities required for high-end deployments. This technology also delivers more power, efficiency, speed, and cost savings. A single 400G port is considerably cheaper than four individual 100G ports. Similarly, the increased data speeds allow for convenient scale-up and scale-out by providing high-density, reliable, and low-cost-per-bit deployments.

How 400G Works

Before we look at the deployment challenges and solutions, let’s first understand how 400G works. The actual line rate, or data transmission speed, of a 400G Ethernet link is 425 Gbps. The extra 25 Gbps carries the forward error correction (FEC) overhead, which detects and corrects transmission errors.

400G adopts 4-level pulse amplitude modulation (PAM4), which encodes two bits per symbol and is combined with higher baud rates; together these increase the per-lane data rate four-fold over legacy 25G Non-Return-to-Zero (NRZ) signaling. With PAM4, operators can implement four lanes of 100G or eight lanes of 50G across different form factors (e.g., OSFP and QSFP-DD). This optical transceiver architecture supports transmission of up to 400 Gbit/s over either parallel fibers or multiple wavelengths.

Figure: PAM4 signaling
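To make the lane math concrete, here is a small back-of-the-envelope sketch in Python. It is illustrative only: the 425 Gbps line rate and the lane counts come from the description above, and the baud figures are nominal payload rates that ignore FEC overhead.

```python
# Back-of-the-envelope sketch: relate the 400G payload rate, the ~425 Gbps line
# rate with FEC overhead, and the per-lane rates for the two common lane layouts.

PAYLOAD_GBPS = 400
LINE_RATE_GBPS = 425          # payload + forward error correction (FEC) overhead
PAM4_BITS_PER_SYMBOL = 2      # four amplitude levels encode two bits per symbol

for lanes in (4, 8):
    lane_rate = PAYLOAD_GBPS / lanes              # 100G or 50G per lane
    baud = lane_rate / PAM4_BITS_PER_SYMBOL       # nominal symbol rate, ignoring FEC
    print(f"{lanes} lanes x {lane_rate:.0f}G PAM4  ->  ~{baud:.0f} GBd per lane")

print(f"FEC overhead: {LINE_RATE_GBPS - PAYLOAD_GBPS} Gbps "
      f"({(LINE_RATE_GBPS / PAYLOAD_GBPS - 1) * 100:.2f}% of payload)")
```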

Deployment Challenges & Solutions

Interoperability Between Devices

The PAM4 signaling introduced with 400G deployments creates interoperability issues between the 400G ports and legacy networking gear. That is, the existing NRZ switch ports and transceivers aren’t interoperable with PAM4. This challenge is widely experienced when deploying network breakout connections between servers, storage, and other appliances in the network.

A 400G transceiver transmits and receives over 4 lanes of 100G or 8 lanes of 50G with PAM4 signaling on both the electrical and optical interfaces. Legacy 100G transceivers, however, are built on 4 lanes of 25G NRZ signaling on both the electrical and optical sides. The two are simply not interoperable, which calls for a transceiver-based solution.

One such solution is a 100G transceiver that supports 100G PAM4 on the optical side and 4x 25G NRZ on the electrical side, performing the re-timing between NRZ and PAM4 modulation within the transceiver gearbox. Examples are the QSFP28 DR and FR, which are fully interoperable with legacy 100G network gear, and the QSFP-DD DR4 & DR4+ breakout transceivers. The latter are parallel-fiber modules that accept an MPO-12 connector with breakouts to LC connectors to interface with FR or DR transceivers.

Figure: NRZ & PAM4 interoperability between devices
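As a rough illustration of why these interfaces do or don’t match, the sketch below models each end of a link by its optical lane configuration (lane count, per-lane rate, modulation) and treats the link as directly interoperable only when both ends agree. The module names and the simple equality check are hypothetical simplifications, not a vendor API.

```python
# Illustrative sketch: compare optical lane configurations on both ends of a link.

from dataclasses import dataclass

@dataclass(frozen=True)
class OpticalInterface:
    lanes: int          # number of optical lanes
    gbps_per_lane: int  # data rate per lane
    modulation: str     # "NRZ" or "PAM4"

legacy_100g = OpticalInterface(lanes=4, gbps_per_lane=25, modulation="NRZ")     # 4x 25G NRZ optics
qsfp28_fr = OpticalInterface(lanes=1, gbps_per_lane=100, modulation="PAM4")     # 100G PAM4 optics
qsfp_dd_dr4_leg = OpticalInterface(lanes=1, gbps_per_lane=100, modulation="PAM4")  # one DR4 breakout leg

def interoperable(a: OpticalInterface, b: OpticalInterface) -> bool:
    """Both ends must agree on lane count, per-lane rate, and modulation."""
    return a == b

print(interoperable(qsfp_dd_dr4_leg, qsfp28_fr))   # True: 100G PAM4 on both sides
print(interoperable(qsfp_dd_dr4_leg, legacy_100g)) # False: PAM4 optics vs 25G NRZ optics
```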

Excessive Link Flaps

Link flaps are faults that occur during data transmission due to a series of errors or failures on the optical connection. When this happens, both transceivers must perform auto-negotiation and link training (AN-LT) before data can flow again. If link flaps occur frequently, e.g., several times per minute, throughput suffers.

While link flaps are rare with mature optical technologies, they still occur and are often caused by configuration errors, a bad cable, or defective transceivers. With 400GbE, link flaps may also arise from heat and design issues in transceiver modules or switches. Careful selection of transceivers, switches, and cables helps prevent them.
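If you want to quantify “frequent,” a simple sliding-window counter like the hypothetical sketch below can flag ports that flap more than a few times per minute. How the flap events are collected (switch counters, syslog, streaming telemetry) is platform-specific and not shown here.

```python
# Hedged sketch: flag a port whose link-flap rate exceeds a threshold.

from collections import deque
from typing import Optional
import time

class FlapWatcher:
    def __init__(self, max_flaps: int = 5, window_s: float = 60.0):
        self.max_flaps = max_flaps
        self.window_s = window_s
        self.events = deque()

    def record_flap(self, ts: Optional[float] = None) -> bool:
        """Record one link down/up event; return True if flapping is excessive."""
        ts = time.time() if ts is None else ts
        self.events.append(ts)
        # Drop events that fall outside the sliding window.
        while self.events and ts - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) > self.max_flaps

watcher = FlapWatcher(max_flaps=5, window_s=60.0)
for t in range(0, 70, 10):          # simulated flaps every 10 seconds
    if watcher.record_flap(ts=float(t)):
        print(f"t={t}s: excessive flapping, inspect cabling and transceivers")
```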

Transceiver Reliability

Some optical transceiver manufacturers struggle to stay within the devices’ power budget. This results in heat issues, which cause fiber alignment problems, packet loss, and optical distortion. Transceiver reliability problems often occur when old QSFP transceiver form factors designed for 40GbE are used at 400GbE.

Similar challenges are also witnessed with newer modules used in 400GbE systems, such as the QSFP-DD and CFP8 form factors. A solution is to stress test transceivers before deploying them in highly demanding environments. It’s also advisable to prioritize transceiver design during the selection process.

Deploying 400G in Your Data Center

Keeping pace with the ever-increasing number of devices, users, and applications in a network calls for a faster, high-capacity, and more scalable data infrastructure. 400G meets these demands and is the optimal solution for data centers and large enterprises facing network capacity and efficiency issues. The successful deployment of 400G technology in your data center or organization depends on how well you have articulated your data and networking needs.

Upgrading your network infrastructure can help relieve bottlenecks from speed and bandwidth challenges to cost constraints. However, making the most of your network upgrades depends on the deployment procedures and processes. This could mean solving the common challenges and seeking help whenever necessary.

A rule of thumb is to enlist the professional help of an IT expert who will guide you through the 400G upgrade process. The IT expert will help you choose the best transceivers, cables, routers, and switches to use and even conduct a thorough risk analysis on your entire network. That way, you’ll upgrade appropriately based on your network needs and client demands.
Article Source: 400G Data Center Deployment Challenges and Solutions
Related Articles:

NRZ vs. PAM4 Modulation Techniques
400G Multimode Fiber: 400G SR4.2 vs 400G SR8
Importance of FEC for 400G

FAQs on 400G Transceivers and Cables

400G transceivers and cables play a vital role in constructing a 400G network system. So what is a 400G transceiver? What are the applications of QSFP-DD cables? Find the answers here.

FAQs on 400G Transceivers and Cables Definition and Types

Q1: What is a 400G transceiver?

A1: 400G transceivers are optical modules that are mainly used for photoelectric conversion with a transmission rate of 400Gbps. 400G transceivers can be classified into two categories according to the applications: client-side transceivers for interconnections between the metro networks and the optical backbone, and line-side transceivers for transmission distances of 80km or even longer.

Q2: What are QSFP-DD cables?

A2: QSFP-DD cables come in two forms: one is a high-speed cable with QSFP-DD connectors on both ends, transmitting and receiving 400Gbps data over a thin twinax cable or a fiber optic cable; the other is a breakout cable that splits one 400G signal into 2x 200G, 4x 100G, or 8x 50G, enabling interconnection within a rack or between adjacent racks.

Q3: What are the 400G transceivers packaging forms?

A3: 400G optical modules mainly come in the following six packaging forms:

  • QSFP-DD: 400G QSFP-DD (Quad Small Form Factor Pluggable-Double Density) is an expansion of QSFP, adding a second row of contacts to the original 4-channel interface for a total of 8 channels running at 50Gb/s each, i.e., 400Gb/s of total bandwidth.
  • OSFP: OSFP (Octal Small Form Factor Pluggable; octal means 8) is a new interface standard and is not compatible with existing photoelectric interfaces. 400G OSFP modules are slightly larger than 400G QSFP-DD modules.
  • CFP8: CFP8 is an expansion of CFP4, with 8 channels and a correspondingly larger size.
  • COBO: COBO (Consortium for On-Board Optics) places all optical components on the PCB. COBO offers good heat dissipation and a small size; however, since it is not hot-swappable, a failed module is troublesome to repair.
  • CWDM8: CWDM8 is an extension of CWDM4 with four new center wavelengths (1351/1371/1391/1411 nm). The wavelength range is wider and the number of lasers is doubled.
  • CDFP: CDFP appeared earlier, and the specification has three editions. “CD” is the Roman numeral for 400. With 16 channels, CDFP is relatively large.

Q4: What 400G transceivers and QSFP-DD cables are available on the market?

A4: The two tables below show the main types of 400G transceivers and cables on the market:

| 400G Transceivers | Standards | Max Cable Distance | Connector | Media | Temperature Range |
|---|---|---|---|---|---|
| 400G QSFP-DD SR8 | QSFP-DD MSA Compliant | 70m OM3/100m OM4 | MTP/MPO-16 | MMF | 0 to 70°C |
| 400G QSFP-DD DR4 | QSFP-DD MSA, IEEE 802.3bs | 500m | MTP/MPO-12 | SMF | 0 to 70°C |
| 400G QSFP-DD XDR4/DR4+ | QSFP-DD MSA | 2km | MTP/MPO-12 | SMF | 0 to 70°C |
| 400G QSFP-DD FR4 | QSFP-DD MSA | 2km | LC Duplex | SMF | 0 to 70°C |
| 400G QSFP-DD 2FR4 | QSFP-DD MSA, IEEE 802.3bs | 2km | CS | SMF | 0 to 70°C |
| 400G QSFP-DD LR4 | QSFP-DD MSA Compliant | 10km | LC Duplex | SMF | 0 to 70°C |
| 400G QSFP-DD LR8 | QSFP-DD MSA Compliant | 10km | LC Duplex | SMF | 0 to 70°C |
| 400G QSFP-DD ER8 | QSFP-DD MSA Compliant | 40km | LC Duplex | SMF | 0 to 70°C |
| 400G OSFP SR8 | IEEE P802.3cm; IEEE 802.3cd | 100m | MTP/MPO-16 | MMF | 0 to 70°C |
| 400G OSFP DR4 | IEEE 802.3bs | 500m | MTP/MPO-12 | SMF | 0 to 70°C |
| 400G OSFP XDR4/DR4+ | / | 2km | MTP/MPO-12 | SMF | 0 to 70°C |
| 400G OSFP FR4 | 100G lambda MSA | 2km | LC Duplex | SMF | 0 to 70°C |
| 400G OSFP 2FR4 | IEEE 802.3bs | 2km | CS | SMF | 0 to 70°C |
| 400G OSFP LR4 | 100G lambda MSA | 10km | LC Duplex | SMF | 0 to 70°C |



| QSFP-DD Cables | Category | Product Description | Reach | Temperature Range | Power Consumption |
|---|---|---|---|---|---|
| 400G QSFP-DD DAC | QSFP-DD to QSFP-DD DAC | with each 400G QSFP-DD using 8x 50G PAM4 electrical lanes | no more than 3m | 0 to 70°C | <1.5W |
| 400G QSFP-DD Breakout DAC | QSFP-DD to 2x 200G QSFP56 DAC | with each 200G QSFP56 using 4x 50G PAM4 electrical lanes | no more than 3m | 0 to 70°C | <0.1W |
| | QSFP-DD to 4x 100G QSFPs DAC | with each 100G QSFPs using 2x 50G PAM4 electrical lanes | no more than 3m | 0 to 70°C | <0.1W |
| | QSFP-DD to 8x 50G SFP56 DAC | with each 50G SFP56 using 1x 50G PAM4 electrical lane | no more than 3m | 0 to 80°C | <0.1W |
| 400G QSFP-DD AOC | QSFP-DD to QSFP-DD AOC | with each 400G QSFP-DD using 8x 50G PAM4 electrical lanes | 70m (OM3) or 100m (OM4) | 0 to 70°C | <10W |
| 400G QSFP-DD Breakout AOC | QSFP-DD to 2x 200G QSFP56 AOC | with each 200G QSFP56 using 4x 50G PAM4 electrical lanes | 70m (OM3) or 100m (OM4) | 0 to 70°C | / |
| | QSFP-DD to 8x 50G SFP56 AOC | with each 50G SFP56 using 1x 50G PAM4 electrical lane | 70m (OM3) or 100m (OM4) | 0 to 70°C | / |
| 400G OSFP DAC | OSFP to OSFP DAC | with each 400G OSFP using 8x 50G PAM4 electrical lanes | no more than 3m | 0 to 70°C | <0.5W |
| 400G OSFP Breakout DAC | OSFP to 2x 200G QSFP56 DAC | with each 200G QSFP56 using 4x 50G PAM4 electrical lanes | no more than 3m | 0 to 70°C | / |
| | OSFP to 4x 100G QSFPs DAC | with each 100G QSFPs using 2x 50G PAM4 electrical lanes | no more than 3m | 0 to 70°C | / |
| | OSFP to 8x 50G SFP56 DAC | with each 50G SFP56 using 1x 50G PAM4 electrical lane | no more than 3m | / | / |
| 400G OSFP AOC | OSFP to OSFP AOC | with each 400G OSFP using 8x 50G PAM4 electrical lanes | 70m (OM3) or 100m (OM4) | 0 to 70°C | <9.5W |



Q5: What do the suffixes “SR8, DR4 / XDR4, FR4 / LR4 and 2FR4” mean in 400G transceivers?

A5: The letters refer to reach, and the number refers to the number of optical channels (summarized in a small sketch after this list):

  • SR8: SR refers to 100m over MMF. Each of the 8 optical channels from an SR8 module is carried on a separate fiber, resulting in a total of 16 fibers (8 Tx and 8 Rx).
  • DR4 / XDR4: DR / XDR refer to 500m / 2km over SMF. Each of the 4 optical channels is carried on a separate fiber, resulting in a total of 8 fibers (4 Tx and 4 Rx).
  • FR4 / LR4: FR4 / LR4 refer to 2km / 10km over SMF. All 4 optical channels from an FR4 / LR4 are multiplexed onto one fiber pair, resulting in a total of 2 fibers (1 Tx and 1 Rx).
  • 2FR4: 2FR4 refers to 2 x 200G-FR4 links with 2km over SMF. Each of the 200G FR4 links has 4 optical channels, multiplexed onto one fiber pair (1 Tx and 1 Rx per 200G link). A 2FR4 has 2 of these links, resulting in a total of 4 fibers, and a total of 8 optical channels.
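The naming scheme above can be captured in a small lookup table. The sketch below is illustrative only and covers just the suffixes listed in this answer, with the reach and fiber counts taken from the descriptions above.

```python
# Quick lookup sketch summarizing the 400G suffix naming scheme.

REACH_AND_FIBERS = {
    # suffix: (nominal reach, fiber type, optical channels, fibers used)
    "SR8":  ("100 m", "MMF", 8, 16),  # 8 channels on separate fibers (8 Tx + 8 Rx)
    "DR4":  ("500 m", "SMF", 4, 8),   # 4 channels on separate fibers (4 Tx + 4 Rx)
    "XDR4": ("2 km",  "SMF", 4, 8),
    "FR4":  ("2 km",  "SMF", 4, 2),   # 4 channels multiplexed onto one fiber pair
    "LR4":  ("10 km", "SMF", 4, 2),
    "2FR4": ("2 km",  "SMF", 8, 4),   # 2 x 200G-FR4 links, one fiber pair each
}

for suffix, (reach, fiber, channels, fibers) in REACH_AND_FIBERS.items():
    print(f"400G {suffix:<5} {reach:>6} over {fiber}: "
          f"{channels} optical channels, {fibers} fibers")
```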

FAQs on 400G Transceivers and Cables Applications

Q1: What are the benefits of moving to 400G technology?

A1: 400G technology increases data throughput and maximizes the bandwidth and port density of data centers. Compared with 100G platforms delivering the same aggregate bandwidth, it needs only a quarter of the optical fiber links, connectors, and patch panels, which also reduces operating expenses. With these benefits, 400G transceivers and QSFP-DD cables provide ideal solutions for data centers and high-performance computing environments.
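The “quarter of the links” figure is simple arithmetic. The sketch below works it out for an example aggregate bandwidth; the 12.8 Tbps value is just an illustration, not a figure from the article.

```python
# Illustrative arithmetic: duplex links needed at 100G vs 400G for the same bandwidth.

aggregate_gbps = 12_800          # e.g. a leaf switch with 32 x 400G ports fully used

links_100g = aggregate_gbps // 100
links_400g = aggregate_gbps // 400
print(f"100G duplex links needed: {links_100g}")   # 128
print(f"400G duplex links needed: {links_400g}")   # 32
print(f"Reduction: {links_100g // links_400g}x fewer links, connectors, and panels")
```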

Q2: What are the applications of QSFP-DD cables?

A2: QSFP-DD cables are mainly used for short-distance 400G Ethernet connectivity in data centers, and for 400G to 2x 200G / 4x 100G / 8x 50G Ethernet breakout applications.

Q3: 400G QSFP-DD vs 400G OSFP/CFP8: What are the differences?

A3: The table below includes detailed comparisons for the three main form factors of 400G transceivers.

| 400G Transceiver | 400G QSFP-DD | 400G OSFP | CFP8 |
|---|---|---|---|
| Application Scenario | Data center | Data center & telecom | Telecom |
| Size | 18.35mm × 89.4mm × 8.5mm | 22.58mm × 107.8mm × 13mm | 40mm × 102mm × 9.5mm |
| Max Power Consumption | 12W | 15W | 24W |
| Backward Compatibility with QSFP28 | Yes | Through adapter | No |
| Electrical Signaling (Gbps) | 8× 50G | 8× 50G | 8× 50G |
| Switch Port Density (1RU) | 36 | 36 | 16 |
| Media Type | MMF & SMF | MMF & SMF | MMF & SMF |
| Hot Pluggable | Yes | Yes | Yes |
| Thermal Management | Indirect | Direct | Indirect |
| Support 800G | No | Yes | No |



For more details about the differences, please refer to the blog: Differences Between QSFP-DD and QSFP+/QSFP28/QSFP56/OSFP/CFP8/COBO

Q4: What does it mean when an electrical or optical channel is PAM4 or NRZ in 400G transceivers?

A4: NRZ is a modulation technique that uses two voltage levels to represent logic 0 and logic 1. PAM4 uses four voltage levels to represent the four combinations of two bits: 11, 10, 01, and 00. A PAM4 signal can therefore carry twice as much data as a traditional NRZ signal at the same baud rate.

When a signal is referred to as “25G NRZ”, it means the signal is carrying data at 25 Gbps with NRZ modulation. When a signal is referred to as “50G PAM4”, or “100G PAM4”, it means the signal is carrying data at 50 Gbps, or 100 Gbps, respectively, using PAM4 modulation. The electrical connector interface of 400G transceivers is always 8x 50Gb/s PAM4 (for a total of 400Gb/s).
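The relationship behind this terminology is simply lane rate = bits per symbol × symbol rate, so PAM4 carries twice the data of NRZ at the same baud. Below is a minimal sketch using nominal figures that ignore FEC overhead.

```python
# Minimal sketch: per-lane data rate from modulation order and symbol rate.

import math

def bits_per_symbol(levels: int) -> int:
    return int(math.log2(levels))

def lane_rate_gbps(levels: int, gbaud: float) -> float:
    return bits_per_symbol(levels) * gbaud

print(lane_rate_gbps(levels=2, gbaud=25.0))   # 25G NRZ lane   -> 25.0
print(lane_rate_gbps(levels=4, gbaud=25.0))   # 50G PAM4 lane  -> 50.0
print(lane_rate_gbps(levels=4, gbaud=50.0))   # 100G PAM4 lane -> 100.0

# The 400G electrical interface is 8x 50G PAM4:
print(8 * lane_rate_gbps(levels=4, gbaud=25.0))  # 400.0
```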

FAQs on Using 400G Transceivers and Cables in Data Centers

Q1: Can I plug an OSFP module into a 400G QSFP-DD port, or a QSFP-DD module into an OSFP port?

A1: No. OSFP and QSFP-DD are two physically distinct form factors. If you have an OSFP system, then 400G OSFP optics must be used. If you have a QSFP-DD system, then 400G QSFP-DD optics must be used.

Q2: Can a QSFP module be plugged into a 400G QSFP-DD port?

A2: Yes. A QSFP (40G or 100G) module can be inserted into a QSFP-DD port as QSFP-DD is backward compatible with QSFP modules. When using a QSFP module in a 400G QSFP-DD port, the QSFP-DD port must be configured for a data rate of 100G (or 40G).

Q3: Is it possible to use a 400G OSFP on one end of a 400G link and a 400G QSFP-DD on the other end?

A3: Yes. OSFP and QSFP-DD describe the physical form factors of the modules. As long as the Ethernet media types are the same (i.e. both ends of the link are 400G-DR4, or 400G-FR4 etc.), 400G OSFP and 400G QSFP-DD modules will interoperate with each other.

Q4: How can I break out a 400G port and connect to 100G QSFP ports on existing platforms?

A4: There are several ways to break out a 400G port to 100G QSFP ports:

  • QSFP-DD-DR4 to 4x 100G-QSFP-DR over 500m SMF
  • QSFP-DD-XDR4 to 4x 100G-QSFP-FR over 2km SMF
  • QSFP-DD-LR4 to 4x 100G-QSFP-LR over 10km SMF
  • OSFP-400G-2FR4 to 2x QSFP-100G-CWDM4 over 2km SMF

Apart from the 400G transceivers mentioned above, 400G to 4x 100G breakout cables can also be used.

Article Source: FAQs on 400G Transceivers and Cables

Related Articles:

400G Transceiver, DAC, or AOC: How to Choose?

400G OSFP Transceiver Types Overview

100G NIC: An Irresistible Trend in Next-Generation 400G Data Center

NIC, short for network interface card (also called a network interface controller, network adapter, or LAN adapter), allows a networking device to communicate with other networking devices. Without a NIC, networking can hardly be done. NICs come in different types and speeds, such as wireless and wired, from 10G to 100G. Among them, the 100G NIC, a product that has appeared only in recent years, hasn’t taken a large market share yet. This post describes the 100G NIC and the trends in NICs.

What Is 100G NIC?

A NIC is installed in a computer and used for communicating over a network with another computer, server, or other network device. It comes in many forms, but there are two main types: wired NICs and wireless NICs. Wireless NICs use wireless technologies to access the network, while wired NICs use a DAC cable, or a transceiver with a fiber patch cable. The most popular wired LAN technology is Ethernet. In terms of application, NICs can be divided into computer NICs and server NICs. For client computers, one NIC is enough in most cases; for servers, it makes sense to use more than one NIC to handle more network traffic. Generally, a NIC has one network interface, but some server NICs have two or more interfaces built into a single card.

Figure 1: FS 100G NIC

As data centers expand from 10G to 100G, the 25G server NIC has gained a firm foothold in the NIC market. Meanwhile, growing bandwidth demand is driving data centers toward 200G/400G, and 100G transceivers have become widespread, which paves the way for 100G servers.

How to Select 100G NIC?

How do you choose the best 100G NIC among all the vendors? If you are stuck on this question, the following section lists recommendations and considerations.

Connector

Connector types like RJ45, LC, FC, and SC are commonly used on NICs, so check which connector type a NIC supports. Today many networks use only RJ45, so choosing a NIC with the right connector type is not as hard as it used to be. Even so, some networks may use a different interface, such as coax; check whether the card you plan to buy supports that connection before purchasing.

Bus Type

PCI is a hardware bus used for adding internal components to a computer. Three main PCI bus types are used by servers and workstations today: PCI, PCI-X, and PCI-E. PCI is the most conventional: it has a fixed width of 32 bits and can handle only 5 devices at a time. PCI-X is an upgraded version that provides more bandwidth, but with the emergence of PCI-E, PCI-X cards have gradually been replaced. PCI-E is a serial connection, so devices no longer share bandwidth as they do on a parallel bus. PCI-E cards also come in different physical sizes: x16, x8, x4, and x1. Before purchasing a 100G NIC, make sure which PCI-E version and slot width are compatible with your current equipment and network environment.
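As a rough feasibility check, the sketch below estimates usable slot bandwidth for common PCI-E generations and widths and compares it against a 100G NIC. The per-lane figures are nominal line rates after 128b/130b encoding and ignore other protocol overhead, so treat the results as ballpark guidance rather than a guarantee.

```python
# Rough feasibility check: can a given PCI-E slot keep up with a 100G NIC?

# Usable Gbps per lane after 128b/130b line encoding (nominal):
PCIE_LANE_GBPS = {
    "PCIe 3.0": 8.0 * 128 / 130,    # 8 GT/s  -> ~7.88 Gbps per lane
    "PCIe 4.0": 16.0 * 128 / 130,   # 16 GT/s -> ~15.75 Gbps per lane
}

NIC_GBPS = 100

for gen, per_lane in PCIE_LANE_GBPS.items():
    for width in (4, 8, 16):
        slot_gbps = per_lane * width
        verdict = "OK" if slot_gbps >= NIC_GBPS else "too slow"
        print(f"{gen} x{width}: ~{slot_gbps:.0f} Gbps -> {verdict} for a {NIC_GBPS}G NIC")
```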

Hot swappable

There are some NICs that can be installed and removed without shutting down the system, which helps minimize downtime by allowing faulty devices to be replaced immediately. While you are choosing your 100G NIC, be sure to check if it supports hot swapping.

Trends in NIC

NICs were commonly used in desktop computers in the 1990s and early 2000s, and they are now widely used in servers and workstations in different types and at different rates. With the popularization of wireless networking and WiFi, wireless NICs have gradually grown in popularity. However, wired cards remain popular for relatively immobile network devices owing to their reliable connections. NICs have been upgraded for years: as data centers expand at an unprecedented pace and drive the need for higher bandwidth between servers and switches, networking is moving from 10G to 25G and even 100G. Companies like Intel and Mellanox have launched their 100G NICs in succession.

During the upgrade from 10G to 100G in data centers, 25G server connectivity became popular because 100G can be realized with 4 strands of 25G, and the 25G NIC is still the mainstream. However, considering that overall data center bandwidth grows quickly and hardware upgrade cycles occur every two years, Ethernet speeds may rise faster than we expect. The 400G data center is just on the horizon, and there is a good chance that the 100G NIC will play an integral role in next-generation 400G networking.

Meanwhile, the need for 100G NICs will drive demand for other network devices as well. For instance, the 100G transceiver, the device between the NIC and the network, is bound to become pervasive. 100G transceivers are now provided by many brands in different types, such as CXP, CFP, and QSFP28 transceivers. FS supplies a full series of compatible 100G QSFP28 and CFP transceivers that can be matched with major brands of 100G Ethernet NICs, such as Mellanox and Intel.

Conclusion

Nowadays, with the rise of 5G, the next-generation cellular technology, higher bandwidth is needed for data flows, which paves the way for the 100G NIC. Accordingly, 100G transceivers and 400G network switches will be in great demand. We believe the new era of 5G networks will see the popularization of the 100G NIC and usher in a new era of network performance.

Article Source: 100G NIC: An Irresistible Trend in Next-Generation 400G Data Center

Related Articles:

400G QSFP Transceiver Types and Fiber Connections

How Many 400G Transceiver Types Are in the Market?

Data Center Layout

Data center layout design is a challenging task requiring expertise, time, and effort. However, if done properly, the data center can accommodate in-house servers and other IT equipment for years. When designing such a facility for your company or for cloud-service providers, doing everything correctly is crucial.

As such, data center designers should develop a thorough data center layout. A data center layout comes in handy during construction as it outlines the best possible placement of physical hardware and other resources in the center.

What Is Included in a Data Center Floor Plan?

The floor plan is an important part of the data center layout. A well-designed floor plan boosts the data center’s cooling performance, simplifies installation, and reduces energy needs. Unfortunately, many data center floor plans grow through incremental deployment rather than following a central plan. A data center floor plan influences the following:

  • The power density of the data center
  • The complexity of power and cooling distribution networks
  • Achievable power density
  • Electrical power usage of the data center

Below are a few tips to consider when designing a data center floor plan:

Balance Density with Capacity

“The more, the better” doesn’t apply when designing a data center. Remember the tradeoff between space and power and weigh your options carefully. If you are considering dense servers, ensure that you have enough budget: dense servers require more power and more advanced cooling infrastructure. Designing a good floor plan allows you to figure this out beforehand.

Consider Unique Layouts

There is no rule that says you must reuse old floor layouts. Your floor design should be based on specific organizational needs. If your company is growing exponentially, your data center needs will keep changing too, so old layouts may not apply. Browse through multiple layouts and find one that perfectly suits your facility.

Think About the Future

A data center design should be based on specific organizational needs. Therefore, while you may not need to install or replace some equipment yet, you might have to do so after a few years due to changing facility needs. Simply put, your data center should accommodate company needs several years in the future. This will ease expansion.

Floor Planning Sequence

A floor or system planning sequence outlines the flow of activity that transforms the initial idea into an installation plan. The floor planning sequence involves the following five tasks:

Determining IT Parameters

The floor plan begins with a general idea that prompts the company to change or increase its IT capabilities. From this idea, the data center’s capacity, growth plan, and criticality are determined. Note that these three factors are characteristics of the data center’s IT function, not of the physical infrastructure supporting it. Since the infrastructure is the ultimate outcome of the planning sequence, these parameters guide the development and dictate the data center’s physical infrastructure requirements.

Developing System Concept

This step uses the IT parameters as a foundation to formulate the general concept of data center physical infrastructure. The main goal is to develop a reference design that embodies the desired capacity, criticality, and scalability that supports future growth plans. However, with the diverse nature of these parameters, more than a thousand physical infrastructure systems can be drawn. Designers should pick a few “good” designs from this library.

Determining User Requirements

User requirements should include organizational needs that are specific to the project. This phase should collect and evaluate organizational needs to determine if they are valid or need some adjustments to avoid problems and reduce costs. User requirements can include key features, prevailing IT constraints, logistical constraints, target capacity, etc.

Generating Specifications

This step takes user requirements and translates them into detailed data center design. Specifications provide a baseline for rules that should be followed in the last step, creating a detailed design. Specifications can be:

  • Standard specifications – these don’t vary from one project to another. They include regulatory compliance, workmanship, best practices, safety, etc.
  • User specifications – define user-specific details of the project.

Generating a Detailed Design

This is the last step of the floor planning sequence that highlights:

  • A detailed list of the components
  • Exact floor plan with racks, including power and cooling systems
  • Clear installation instructions
  • Project schedule

If the complete specifications are clear enough and robust, a detailed design can be automatically drawn. However, this requires input from professional engineers.

Principles of Equipment Layout

Data center infrastructure is the core of the entire IT architecture. Unfortunately, despite this importance, more than 70% of network downtime stems from physical layer problems, particularly cabling. Planning an effective data center infrastructure is crucial to the data center’s performance, scalability, and resiliency.

Nonetheless, keep the following principles in mind when designing equipment layout.

Control Airflow Using Hot-aisle/Cold-aisle Rack Layout

The principle of controlling airflow using a hot-aisle/cold-aisle rack layout is well defined in various documents, including ASHRAE TC9.9, Mission Critical Facilities. This principle maximizes the separation of IT equipment exhaust air and fresh intake air by placing cold aisles where intakes face and hot aisles where exhaust air is released. This reduces the amount of hot air drawn through the equipment’s air intake, allowing data centers to run equipment at up to 100% of its design power density.

Provide Safe and Convenient Access Ways

Besides being a legal requirement, providing safe and convenient access ways around data center equipment is common sense. The effectiveness of a data center layout depends on how well rows double up as aisles and access ways, so designers should factor in the impact of column locations. A column that falls within a row of racks can take up three or more rack locations, obstruct the aisle, or even force the elimination of the entire row.

Align Equipment With Floor and Ceiling Tile Systems

Floor and ceiling tiling systems also play a role in air distribution systems. The floor grille should align with racks, especially in data centers with raised floor plans. Misaligning floor grids and racks can compromise airflow significantly.

You should also align the ceiling tile grid to the floor grid. As such, you shouldn’t design or install the floor until the equipment layout has been established.


Plan the Layout in Advance

The first stages of deploying data center equipment heavily determine subsequent stages and final equipment installation. Therefore, it is better to plan the entire data center floor layout beforehand.

How to Plan a Server Rack Installation

Server racks should be designed to allow easy and secure access to IT servers and networking devices. Whether you are installing new server racks or thinking of expanding, consider the following:

Rack Location

When choosing a rack for your data center, consider its location in the room. Leave enough space at the sides, front, rear, and top for easy access and airflow; as a rule of thumb, a server rack and its clearance should occupy at least six standard floor tiles. Don’t install server racks and cabinets below or close to air conditioners, to protect them from water damage in case of leakage.

Rack Layout

Rack density should be considered when determining the rack layout. More free space within server racks allows for more airflow, so leave enough vertical space between servers and IT devices to improve cooling. Since hot air rises, place heat-sensitive devices, such as UPS batteries, at the bottom of server racks; heavy devices should also be placed at the bottom.

Cable Layout

A well-planned rack layout is only part of the picture. An excellent cable layout should leverage cable labeling and management techniques to ease the identification of power and network cables. Cables should be marked at both ends for easy identification; avoid marking them in the middle. Your cable management system should also allow for future additions or removals.

Conclusion

Designing a data center layout is challenging for both small and established IT facilities. Building or upgrading data centers is often perceived as intimidating and difficult, but developing a detailed data center layout can ease everything. Remember that small changes to the plan during installation can lead to costly consequences downstream.

Article Source: Data Center Layout

Related Articles:

How to Build a Data Center?

The Most Common Data Center Design Missteps

How to Utilize Data Center Space More Effectively?

What is data center space?

Data center space refers to the area of leased space available for servers to be stored in a data facility, including racks, cabinets, private suites, etc. Such facilities typically monitor all electrical and mechanical systems 24 hours a day, seven days a week. Nowadays, more and more companies choose data centers with larger space to meet their growing storage requirements.

However, many enterprises today face the challenge of limited data center space. One reason is that advancing technology increases their demand for more data center space, while building a new data center is costly. Another factor is the underutilization of existing space. According to research from an energy consortium called The Green Grid, 43 percent of respondents said they had no strategies in place to boost energy efficiency.

Therefore, it is necessary to learn some strategies to optimize the available space of data centers. Here are ten ways to make the best use of data center space.

How to utilize data center space?

  • Combine white space and gray space: Data center white space refers to the space where IT equipment and infrastructure are located, while data center gray space means the space where back-end equipment is located. By consolidating these two types of data center space, enterprises can use some technologies like cloud computing, which can save a large amount of space in data centers.
  • Refresh technologies: To improve data center space efficiency, technologies must be upgraded to minimize power consumption. For instance, flywheel energy storage can reduce the number of batteries required for backup power. Besides, replacing old, inefficient servers with new, energy-efficient ones improves operational efficiency and reduces power consumption.
  • Use smaller-diameter cables: Choosing the right cables is also essential. Tangled cables can cause congestion and impede airflow. To prevent this, use cables with smaller diameters, such as FS high-density fiber cables, which save space, free up rack space for more equipment, and reduce the need for additional cable management systems.
  • Try virtualization solutions: According to the U.S. Environmental Protection Agency, most high-capacity servers are utilized at 15% or less, wasting space and power. Using virtualization technologies can reduce the number of new servers required to replace inefficient servers by sharing workloads among multiple servers, which can maximize data center space utilization.
  • Improve architecture efficiency: Data center architecture and the way that hardware is deployed have a vital impact on data center space. Terrible deployment may impede energy efficiency and lead to heating problems. Therefore, when planning a new data center, it is important to consider carefully the current design, future servers, and equipment, and how these devices will integrate with each other.
  • Optimize vertical space: Compared with spreading out horizontally, making use of vertical space can increase the capacity and density of the data center without occupying more floor space. Traditional racks and cabinets support 42U to 45U of rack space, while taller racks offer up to 58U (see the sketch after this list). It’s also efficient to use the space above the racks for patching between racks and cabinets.
  • Increase cabinet power density: Server racks and cabinets take up a lot of space, so it’s essential to make the best use of them. By increasing cabinet power density, the requirements for cabinets will be reduced, thus lessening the occupied floor space. Besides, this can also reduce management equipment and increase companies’ return on investment.
  • Use cooling technologies: Cooling accounts for about half of a data center’s entire energy consumption. Since computer room air conditioning (CRAC) and air handling units cannot handle the higher power densities, some companies may use liquid cooling systems, which take up a lot of valuable floor space. Using technologies like hot/cold aisle containment can save data center space to some extent while also maintaining suitable temperatures.
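Referenced from the “Optimize vertical space” item above, here is a quick sketch of the rack-unit arithmetic; the rack count is a made-up example, and the 42U/58U figures come from the list above.

```python
# Sketch: rack-unit capacity on the same floor footprint for 42U vs 58U racks.

racks = 20                      # hypothetical row of racks on the same floor area
for rack_u in (42, 58):
    total_u = racks * rack_u
    print(f"{racks} racks x {rack_u}U = {total_u}U of equipment space")

gain = (58 - 42) / 42 * 100
print(f"Taller racks add ~{gain:.0f}% more rack units without extra floor space")
```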

All the methods mentioned above work very well on boosting data center space utilization. The key is to choose a plan that best meets your goals and needs.

Article Source: How to Utilize Data Center Space More Effectively? | FS Community

Related Articles:

Data Center White Space and Gray Space | FS Community

What Is Data Center Storage? | FS Community