MTP HD Cassette and TAP Cassette Over Standard LGX Cassette

Pre-terminated fiber cabling has become a favorable choice for today’s high-speed data center networks, as it enables high bandwidth, high port density, easy management, migration to future data rates and security monitoring. A modular system also allows rapid deployment of high-density data center infrastructure and simplifies troubleshooting and reconfiguration during moves, adds and changes. The MTP cassette is one such modular building block. It usually comes in 12-fiber or 24-fiber configurations and contains MTP adapters, LC/SC adapters and MTP-LC or MTP-SC harness cables, of which the MTP-LC cassette is the most widely used. As an assembly of high-density MTP/MPO pre-terminated fiber devices, it dominates in high-density data centers for its reliable interface, optimized performance and minimized rack space. Three types of MTP cassettes are commonly available on the market: the MTP LGX cassette, the MTP HD cassette and the MTP TAP cassette. This article introduces the advantages of the MTP HD cassette and the MTP TAP cassette over the standard MTP LGX cassette.

MTP cassette

MTP HD Cassette Over MTP LGX Cassette

Though MTP cassettes are preferred for their high density, the rack space used still differs between the LGX and HD cassettes. For the standard LGX cassette, usually three cassettes fit inside a 1U 19’’ rack, or twelve inside a 4U 19’’ rack, as shown in the figure below. The HD cassette, being more compact, is better optimized for high-density applications: five HD cassettes fit inside a 1U 19’’ rack. So if saving space is urgent in a data center, the MTP HD cassette is the best choice for fitting the most fibers into the least rack space.

standard LGX in 1U and 4U
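The density figures above can be put into rough numbers. This is a minimal sketch assuming 12-fiber cassettes; the cassettes-per-1U counts come from the text, and the function name is our own.

```python
# Rack-density comparison of LGX vs HD MTP cassettes.
# Assumes 12-fiber cassettes; cassettes-per-1U figures are from the text.
CASSETTES_PER_1U = {"LGX": 3, "HD": 5}
FIBERS_PER_CASSETTE = 12  # a 12-fiber MTP-LC cassette yields 6 duplex LC ports

def fibers_per_ru(form_factor: str) -> int:
    """Fibers terminated in one 1U 19-inch panel of the given cassette type."""
    return CASSETTES_PER_1U[form_factor] * FIBERS_PER_CASSETTE

print(fibers_per_ru("LGX"))  # 36 fibers per 1U
print(fibers_per_ru("HD"))   # 60 fibers per 1U
```

Under these assumptions, the HD package terminates roughly two-thirds more fibers in the same rack unit.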

MTP TAP Cassette Over MTP LGX Cassette

TAP (traffic access point) devices are usually added to a network for monitoring. The MTP TAP cassette is an effective device for real-time monitoring in high-performance networks and high-density cabling, integrating TAP functionality into the cable patching system. A TAP uses a passive fiber optic splitter to create an exact copy of the light signal passing through it. The fiber carrying the signal from a device’s transmit port is connected to the splitter input; the splitter’s live output is connected to the receive port of the downstream device, while a second output carries the copy of the live signal for out-of-band access. A TAP uses two of these splitters, installed on the two fibers supporting both channels of a duplex fiber link, to create a complete copy of all traffic between the two devices. Network transmission is not affected, since separate ports are dedicated to monitoring and to transmission. The MTP TAP cassette is available in both the HD and LGX packages, and it can be easily deployed by connecting the monitoring device and the user device with MTP trunk/breakout cables or LC/SC patch cables. As a trade-off, the TAP cassette is more expensive than the other two cassette types.
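To see why a passive TAP leaves the live link usable, consider the optical power budget of the splitter. This is a sketch assuming a hypothetical 70/30 split ratio (a common choice for TAPs, though the article does not specify one); excess and connector losses are ignored.

```python
import math

def split_loss_db(power_fraction: float) -> float:
    """Insertion loss in dB of a splitter leg that passes the given
    fraction of the input optical power."""
    return -10 * math.log10(power_fraction)

# Hypothetical 70/30 TAP: the 70% leg continues to the downstream receiver,
# the 30% leg feeds the out-of-band monitoring port.
live_loss = split_loss_db(0.70)     # ~1.5 dB added on the live channel
monitor_loss = split_loss_db(0.30)  # ~5.2 dB on the monitor copy
```

So long as the link's loss budget absorbs the extra ~1.5 dB, the live traffic passes through unaffected while the monitor port sees a complete copy.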


This article has compared two MTP cassettes with the standard MTP LGX cassette and stated their advantages. If space is the primary consideration in high-density cabling, the MTP HD cassette is the better design choice; if a secure, high-performance network is the priority, the MTP TAP cassette is recommended. For applications where high density and monitoring are both required, an MTP TAP cassette in the compact HD package is the best choice!

Data Center Architecture Designs Comparison: ToR Vs. EoR

The interconnection of switches and the assurance of data communication are the basic considerations when designing a data center architecture. Today’s data centers have largely shifted to 1RU and 2RU appliances, so placing 1RU and 2RU switches into the same-sized racks can greatly save space and reduce cabling demands. Top of Rack (ToR) and End of Row (EoR) are now the common infrastructure designs for data centers. In this article, we will mainly discuss the differences between these two approaches.


Overview of ToR & EoR
What Is ToR?

The ToR approach refers to the physical placement of the network access switch at the top of a server rack, with servers linked directly to it. Each server rack usually has one or two access switches, and all the access switches are connected to the aggregation switch located in the aggregation rack. Only a small number of cables need to run from each server rack to the aggregation rack.


What Is EoR?

In the EoR architecture, each server in the individual racks is directly linked to an aggregation switch at the end of the row, eliminating the use of individual switches in each rack. This reduces the number of network devices and improves port utilization across the network. However, a large number of cables is needed for the horizontal cabling. Alongside the EoR approach, there is also a variant model known as MoR (Middle of Row); the major differences are that the switches are placed in the middle of the row, which reduces cable lengths.


Comparison Between ToR & EoR

With ToR, cable costs are reduced, since all server connections terminate within their own rack and fewer cables run between the server and network racks. Cable management is also easier with fewer cables involved, and technicians can add or remove cables more simply.

With EoR, the device count is decreased because not every rack needs its own switches, so less rack space is required. With fewer devices in the data center, the cooling requirements drop, which also reduces power consumption.


On the other hand, each architecture also has its limitations. For ToR, although cabling is reduced, the number of switches is increased, so switch management can be a little tricky. In addition, the ToR approach takes up rack space in every server rack for the installation of switches.

As for EoR, its Layer 2 traffic efficiency is lower than ToR’s: when two servers in the same rack and VLAN (virtual local area network) need to talk to each other, the traffic must travel to the aggregation switch first before coming back. And since fewer switches are used in an EoR design, more cables are deployed between racks, raising the likelihood of cable clutter, so skilled technicians are required to carry out the cable management.
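The cabling trade-off described above can be sketched with a simple count of the cables that must leave each server rack. This is a simplified model; the rack, server, switch and uplink counts below are hypothetical, chosen only to illustrate the contrast.

```python
def inter_rack_cables_tor(racks: int, switches_per_rack: int,
                          uplinks_per_switch: int) -> int:
    """ToR: only the access switches' uplinks leave each server rack."""
    return racks * switches_per_rack * uplinks_per_switch

def inter_rack_cables_eor(racks: int, servers_per_rack: int) -> int:
    """EoR: every server link runs horizontally to the end-of-row rack."""
    return racks * servers_per_rack

# Hypothetical row: 10 racks, 40 servers each, 2 ToR switches with 4 uplinks
print(inter_rack_cables_tor(10, 2, 4))  # 80 inter-rack cables
print(inter_rack_cables_eor(10, 40))    # 400 inter-rack cables
```

Even in this toy model, EoR's horizontal cabling dwarfs the ToR uplink count, while ToR pays for it with many more switches to manage.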

Physical Deployments of ToR & EoR
ToR Deployment

Two ToR deployments are common. One is the redundant access switch deployment, which usually employs two high-speed, independent ToR switches that connect to the core network; servers are interconnected to the access switches deployed within the server racks. The other is server link aggregation with ToR, in which two high-speed ToR switches form part of the same virtual chassis and servers connect to both switches at the top of the rack using link aggregation.

EoR Deployment

EoR access switch deployment commonly extends all the connections from the servers to the switching rack at the end of the row. If the deployment must support existing wiring, you can also deploy a virtual chassis.


ToR and EoR are the common designs for data center architecture, and choosing the right one for your network can improve data center efficiency. This article should have given you a general understanding of these two methods, and we hope it helps you build your data center into the architecture you want.

Key Components to Form a Structured Cabling System

Building a structured cabling system is instrumental to the high performance of different cable deployments. Typically, a structured cabling system contains the cabling and connectivity products that integrate the voice, data, video, and various management systems (e.g. security alarms, security access, energy systems, etc.) of a building. The structured cabling system is based on two standards: ANSI/TIA-568-C.0, generic telecommunications cabling for customer premises, and ANSI/TIA-568-C.1, commercial building telecommunications cabling for business infrastructures. These standards define how to design, build, and manage a structured cabling system, which comprises six key components.

Six Subsystems of a Structured Cabling System

Generally speaking, there are six subsystems in a structured cabling system. This section introduces each of them for better understanding.


Horizontal Cabling

The horizontal cabling is all the cabling between the telecommunications outlet in a work area and the horizontal cross-connect in the telecommunications closet, including the horizontal cable, mechanical terminations, jumpers and patch cords located in the telecommunications room or enclosure, multiuser telecommunications outlet assemblies and consolidation points. This wiring runs horizontally above ceilings or below floors in a building. Regardless of cable type, the maximum distance allowed between devices is 90 meters. An extra 6 meters is allowed for patch cables at the telecommunications closet and in the work area, but the combined length of these patch cables cannot exceed 10 meters.
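The distance rules above reduce to a simple check. This sketch follows the figures stated in this section (90 m for the horizontal run, patch cords totalling no more than 10 m); the function name is our own.

```python
def horizontal_channel_ok(horizontal_m: float, patch_total_m: float) -> bool:
    """Validate a horizontal channel against the limits stated above:
    90 m maximum for the horizontal run, 10 m combined for patch cords."""
    return horizontal_m <= 90 and patch_total_m <= 10

print(horizontal_channel_ok(90, 10))  # True: exactly at the limits
print(horizontal_channel_ok(85, 12))  # False: patch cords too long
```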

Backbone Cabling

Backbone cabling, also known as vertical cabling, offers connectivity between telecommunications rooms, equipment rooms, access provider spaces and entrance facilities. The cable may run on the same floor, from floor to floor, or even between buildings. Cable distance depends on the cable type and the connected facilities, but twisted-pair cable is limited to 90 meters.

Work Area

Work area refers to space where cable components are used between communication outlets and end-user telecommunications equipment. The cable components often include station equipment (telephones, computers, etc.), patch cables and communication outlets.

Telecommunications Closet (Room & Enclosure)

The telecommunications closet is an enclosed area, such as a room or a cabinet, that houses telecommunications equipment, distribution frames, cable terminations and cross-connects. Each building should have at least one wiring closet, and the size of the closet depends on the size of the area it serves.

Equipment Room

The equipment room is the centralized place that houses the equipment of the building’s telecommunications systems (servers, switches, etc.) and the mechanical terminations of the telecommunications wiring system. Unlike the telecommunications closet, the equipment room houses more complex components.

Entrance Facility

The entrance facility encompasses the cables, network demarcation point, connecting hardware, protection devices and other equipment that connect to the access provider or private network cabling. It forms the connection between outside-plant and inside-building cabling.

Benefits of Structured Cabling System

Why do you need a structured cabling system? There are many benefits. Structured cabling standardizes your cabling with consistency, so future updates and troubleshooting are easier to handle. You can avoid reworking the cabling when upgrading to another vendor or model, which prolongs the lifespan of your equipment. All equipment moves, adds and changes are simplified with the help of structured cabling, making it a great support for future applications.



From this article, we know that a structured cabling system consists of six important components: horizontal cabling, backbone cabling, the work area, the telecommunications closet, the equipment room and the entrance facility. Once you split the whole system into these smaller categories, the cabling goal becomes easier to reach. As long as you manage these subsystems well, your cabling system can justly be called structured wiring.

Importance of Using Fiber Color Codes in Data Center

Using color codes in the data center effectively helps technicians manage cables better and reduce human error. Without a redundant checking process, people can read off a device’s information at a glance, and making good use of a color code system can save much time during work. This article presents the widely accepted color code system and its important functions.


Introduction to Color Code Systems

Fibers, tubes and ribbons in fiber optic cables are usually marked with different color codes to facilitate identification. There are many color code systems for national or international use. All these systems are characterized by using 12 different colors to identify fibers that are grouped together in a common bundle such as a tube, ribbon, yarn wrapped bundle or other types of bundle.

Different color code standards may be used in different regions. For example, the S12 standard is used for micro and nano cables in Sweden and other countries, the Type E standard defined by Televerket and Ericsson is used in Sweden, and the FIN2012 standard is used in Finland. However, one color code system is widely recognized worldwide: the TIA/EIA-598 standard.

Specifications of TIA/EIA-598 Color Codes

The following picture gives the fiber color coding of the TIA/EIA-598 standard. If more than 12 fibers or tubes need to be separated, the color sequence is normally repeated with ring marks or lines on the colored fibers and tubes. As for the fiber cable jacket, orange, yellow, aqua and black color codes are used to distinguish cable types.
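The 12-color sequence and its repeat rule can be captured in a small lookup. The order below is the standard TIA/EIA-598 sequence; the helper name is our own.

```python
# TIA/EIA-598 fiber color sequence, in order (positions 1-12)
TIA598_COLORS = [
    "blue", "orange", "green", "brown", "slate", "white",
    "red", "black", "yellow", "violet", "rose", "aqua",
]

def fiber_color(position: int) -> str:
    """Color for a 1-based fiber position; past 12 the sequence repeats,
    with ring marks or lines distinguishing the repeats."""
    return TIA598_COLORS[(position - 1) % 12]

print(fiber_color(1))   # blue
print(fiber_color(13))  # blue again, carried with a ring mark in practice
```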


Functions of Fiber Color Codes in Data Center
Distinguishing Fiber Grades

As mentioned above, the outer jacket color codes identify the fiber grades: OM1/OM2 cables often adopt an orange jacket, OM3/OM4 cables an aqua jacket, single-mode cables a yellow jacket and hybrid cables (indoor/outdoor and outside plant cables) a black jacket. One thing to note is that OM1 and OM2 share one jacket color, as do OM3 and OM4, which can be troublesome; make sure not to mingle cables of different grades that carry the same color code.
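The jacket-color conventions above map directly to a lookup table. A minimal sketch using only the grades named in this section; the function name is our own.

```python
# Outer-jacket color -> fiber grade, per the conventions described above
JACKET_TO_GRADE = {
    "orange": "OM1/OM2 multimode",
    "aqua": "OM3/OM4 multimode",
    "yellow": "single-mode",
    "black": "indoor/outdoor or outside plant (hybrid)",
}

def grade_for_jacket(color: str) -> str:
    """Look up the fiber grade a jacket color conventionally indicates."""
    return JACKET_TO_GRADE.get(color, "unknown - check the cable legend")

print(grade_for_jacket("yellow"))  # single-mode
```

Note the ambiguity the text warns about: an orange or aqua jacket alone cannot tell OM1 from OM2, or OM3 from OM4.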

Identifying Fiber Patch Cords

Using color codes to label fiber patch cords can reduce the potential for human error. For instance, you may highlight mission-critical patch cords in red, and then teach all technicians that a red patch cord should only be moved with proper authorization or under supervision. Likewise, keeping the fiber connector color consistent with fiber grade color standards will make it simple for technicians to use the right connectors with the cables.

Separating Different Ports

The color-coded port icons can be helpful in identifying different network routings in accordance with internal needs. By tagging each patch panel port, you can simplify and streamline network management.

Differentiating Connector Boots

You can use color codes on connector boots to make routine maintenance and moves, adds and changes easier by helping technicians preserve correct parallel groupings for switch ports. If you change your connector color, you need to ensure that your fiber cable color represents the fiber grade to avoid confusion. You can also change the color of a connector boot to differentiate between different aspects of the network, making it easy for technicians to view the contrast within a panel.


Visual management makes it more intuitive for specialists to supervise the data center, and the color code system provides an ideal, easy way to solve cabling problems. Inside the cables, the fiber buffers are also color-coded with standard colors to make connections and splices easier. So if you are still bothered by these fiber patch cable issues, the color code system is a good way to go.

10GbE or More – The Choice of Virtualization and Cloud Computing

At one time, data centers were discrete entities consisting of independent silos of computing power and storage bridged together by multiple networks. Servers were consolidated from time to time, but they still used discrete and independent networks such as Token Ring, Ethernet, InfiniBand, and Fibre Channel.

Then along came virtualization and cloud computing. They brought with them a variety of storage technologies and, more recently, Software Defined Networking (SDN). Collectively, these technologies are providing dramatic gains in productivity and cost savings. They are also fundamentally changing enterprise computing and driving a complete rethinking of enterprises’ networking architecture strategies.

Cloud Computing

As virtualization continues to take hold in data centers, the silos of computing, network, and storage that were once fixtures are increasingly being replaced by resource pools, which enable on-demand scalable performance. Hence, as application needs grow, the infrastructure can respond.

Unfortunately, this is a two-edged sword. Virtualization has increased server utilization, reducing the once-prevalent server sprawl by enabling enterprises to do more with less and by simplifying and maximizing server resources. But it has also driven the demand for networking bandwidth through the roof and created a major bottleneck in the system.

For some enterprises, server virtualization alone isn’t enough and they have deployed a private cloud-based infrastructure within their data center. With this comes the need to not only scale resources for a specific application but also for the data center itself to be able to scale to meet dynamic business needs. This step requires storage and networking resources to move away from the restriction of dedicated hardware and be virtualized as well.

This next step in virtualization is the creation of a virtual network that can be controlled and managed independent of the physical underlying compute, network, and storage infrastructure. Network virtualization and SDN will play a key role, and network performance will ultimately determine success.

Today, discrete data center networks for specific components, such as storage and servers, are no longer appropriate. As companies move to private cloud computing, in which systems are consolidated onto fewer devices, networks must be simpler to deploy, provision, and manage.

Hence, 1GbE connectivity is no longer enough. Think about it: Servers are getting faster, and enterprises are running a growing number of Virtual Machines (VMs) that compete for existing I/O on each server. An increased amount of data must be processed, analyzed, shared, stored, and backed up due to increased VM density and enterprises’ rising storage demands. If this growth is left unmanaged, the network will become even more of a bottleneck, even as server speeds continue to increase.

To truly reap these gains, the network needs to keep up. More bandwidth means faster access to storage and backup, and faster network connections mean lower latency and a minimal bottleneck.

Moving to 10GbE

Despite its recent maturity, 10GbE has had a long journey. Initially ratified by the IEEE in June 2002 as 802.3ae, the standard is a supplement to the 802.3 standard that defines Ethernet.

Officially known as 10 Gigabit Ethernet, 10GbE (also referred to as 10G) operates only in full-duplex mode and supports data transfer rates of 10 gigabits per second for distances up to 300 meters on multimode fiber optic cables and up to 10 kilometers on single-mode fiber optic cables.
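The reach figures above translate into a simple planning check. This sketch uses the article's round numbers; actual reach varies with the fiber grade and optic type (e.g. OM3 versus OM1 multimode), and the names below are our own.

```python
# Maximum 10GbE reach in meters, per the figures quoted above
MAX_REACH_M = {"multimode": 300, "single-mode": 10_000}

def link_within_reach(fiber_type: str, distance_m: float) -> bool:
    """True if a 10GbE run of distance_m fits within the quoted reach."""
    return distance_m <= MAX_REACH_M[fiber_type]

print(link_within_reach("multimode", 250))      # True: within 300 m
print(link_within_reach("single-mode", 12000))  # False: beyond 10 km
```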

Although the technology has been around for many years, its adoption has been slow. After spending nearly a decade building out their 1GbE networks, enterprises have been reluctant to overhaul the resources invested in the network, including adapters, controllers and other devices, and, perhaps most of all, cabling. But as virtualization and cloud operations become core technology components, they are bringing changing network requirements with them, chief among which is that the minimum for an advanced dynamic architecture is now 10GbE.

Crehan Research reported that while 17 percent of server Ethernet ports conformed to the 10GbE standard in 2012, the majority—83 percent—still followed the 1GbE standard. In 2013, those numbers remained steady, with 81 percent and 19 percent following the 1GbE and 10GbE standards, respectively. 2014, however, was expected to be a year of change. Crehan Research forecasted that 10GbE usage would increase, with 28 percent of server Ethernet ports adhering to the standard. Inversely, 1GbE installs would decline to 72 percent. This trend was expected to continue through 2018, at which time 79 percent of server Ethernet ports would be using 10GbE and a mere 4 percent 1GbE. The remaining ports would have migrated to 40GbE.

For some time now, hardware vendors have been designing their products—from processors to interconnects and switches—with 10GbE in mind. Software vendors are now well-versed in these needs as well and are designing applications that take advantage of 10GbE. Enterprises are now in effect paying for an optimization for which they may not be reaping the benefits. Fortunately, while this ecosystem around 10GbE has been growing and the speed is now expected if an enterprise is to achieve its desired performance, price points of 10GbE supported products have been dropping.

The benefits 10GbE brings to data centers can be classified into three categories: Performance, Simplification, and Unification.

Performance, or increased speed, is likely the first enhancement that comes to mind, but performance improvements are not the only advantage 10GbE brings to data centers. It also unifies and simplifies the data center, thus reducing the cost and complexity of maintaining the network. From the very beginning, simplicity and low cost were goals for Ethernet. And indeed, by unifying the data center storage and network fabrics through support for both Fibre Channel over Ethernet and iSCSI, and thus a single, low-cost, high-bandwidth interconnect, 10GbE is able to reduce network costs. In addition, 10GbE simplifies the network infrastructure by reducing power, improving bandwidth, and lessening cabling complexity.

As enticing as these advantages are, few technologies are compelling without a comprehensive ecosystem behind them. Ecosystems for new technologies can sometimes be a chicken and egg cycle, however. Vendors are reluctant to invest resources in building an ecosystem if enterprises aren’t buying product, but enterprises are reluctant to buy a new type of product if it lacks an ecosystem. A comprehensive ecosystem is an indicator of a technology’s maturity, and 10GbE does not disappoint.

On the market today are a variety of products that support 10GbE, including processors, servers, adapters, switches, and cables. There is also support for multiple media types within 10GbE as well as improved cable technologies.

Getting the Most Out of Popular Technologies

For a technology improvement to be considered worth pursuing, it must facilitate the enterprise in more easily achieving its business goal. In the case of 10GbE, virtualization was the first technology to truly feel its benefits. Virtualization enables enterprises to satisfy a host of business goals from resource maximization to agility. The benefits virtualization brings to enterprises are enhanced and fully realized when the network is migrated to 10GbE.

Virtualization was, in effect, the watershed use case for 10GbE. It offered a way to address the growing complexity and bottlenecks associated with virtualization’s need for more network bandwidth. Now, IT could consolidate the ever-growing numbers of 1GbE ports and replace them with 10GbE. The move to 10GbE also enabled IT to unify data center storage and network fabrics, and in some cases I/O virtualization, by adding support for Fibre Channel or iSCSI technologies over Ethernet.

For many enterprises, the next stage after deploying a virtual infrastructure is to add a cloud component. This transition enables enterprises to not only scale resources for a specific application, but also, and more importantly, for the data center to scale to meet dynamic business needs.

For this to be successful, both storage and networking resources must move away from the restrictions endemic to dedicated hardware and be virtualized. This next step creates a virtual infrastructure that can be controlled and managed independent of the physical underlying compute, network, and storage infrastructure in the form of network virtualization and an SDN. For these infrastructures to function, the low latency of 10GbE, and in some cases 40GbE, is required.

Cloud and virtualization are not the only technologies driving 10GbE adoption. Rapidly increasing volumes of data, both structured and unstructured, must be stored and backed up. Being able to scale workloads quickly and efficiently by creating a single storage and data network, and enabling unified resource pools of compute and storage resources, is critical for the network to function at the speeds and capacities enterprises expect.

10GbE makes this possible.

10GbE Worldwide


10GbE enables enterprises to boost network performance while simplifying network infrastructure, reducing power, improving bandwidth, and reducing cabling complexity. Unifying different types of traffic onto a single, low-cost, high-bandwidth interconnect further simplifies the network.

The performance improvements and benefits of simplification and unification will be most acutely felt by enterprises deploying a virtual infrastructure. With a virtual infrastructure fast becoming the norm for many enterprises, the importance of a network that can meet performance, maintenance and other usability challenges is critical. In addition to virtualization, 10GbE offers numerous benefits to cloud-based infrastructures and enterprises with heavy storage requirements.