Network Virtualization and Challenges in SDN/NFV Implementation

Software-defined networking (SDN) and network functions virtualization (NFV) are two closely related technologies that both work toward network virtualization and automation. Their emergence is mainly driven by the need for robust data management and for bandwidth between servers located at different sites and connected over long distances through public and private clouds. SDN and NFV share some similarities but differ in many respects. In addition, although SDN and NFV have been heavily promoted as next-generation technologies in recent years, many challenges remain in deploying them successfully. This post gives some basic knowledge about SDN and NFV, and discusses the challenges faced in implementing them.

Understanding SDN and NFV

Although SDN and NFV are both network virtualization technologies, they are not dependent on each other, and it is not always necessary to deploy them in the same network. The following sections explain the infrastructures of SDN and NFV and then highlight the major differences between them.

What Is SDN?

The function of SDN is somewhat hinted at by its name. With SDN, users can manage and control the entire network through software that makes the network centrally programmable. It achieves this by separating the system that decides where traffic is sent (the control plane) from the underlying system that pushes packets of data to specific destinations (the data plane). As known to network administrators and value-added resellers (VARs), SDN is built on switches that can be programmed through an SDN controller using an industry-standard protocol such as OpenFlow.
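The control/data-plane split can be made concrete with a small sketch. The following is an illustrative Python model only, not real OpenFlow code; all class and method names are invented for the example. A toy controller pushes match/action rules into a switch's flow table, and the switch then forwards packets purely by consulting its installed rules:

```python
class FlowTable:
    """Data plane: forwards packets purely by consulting installed rules."""
    def __init__(self):
        self.rules = []  # list of (match_dict, action) in install order

    def install(self, match, action):
        self.rules.append((match, action))

    def forward(self, packet):
        for match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send-to-controller"  # table miss: punt to the control plane


class Controller:
    """Control plane: decides where traffic goes and programs the switches."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, match, action):
        for sw in self.switches:
            sw.install(match, action)


sw = FlowTable()
ctl = Controller([sw])
ctl.push_policy({"dst": "10.0.0.2"}, "output:port2")

print(sw.forward({"dst": "10.0.0.2"}))  # output:port2
print(sw.forward({"dst": "10.0.0.9"}))  # send-to-controller
```

In a real deployment the controller would speak OpenFlow (or a similar protocol) to hardware or virtual switches; the table-miss behavior shown here, punting unknown traffic to the controller, mirrors how OpenFlow switches typically handle unmatched packets.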

What Is NFV?

Network functions virtualization is similar to traditional server virtualization but focuses squarely on networking services. Within NFV, network functions are virtualized: NFV separates network functions from routers, firewalls, load balancers and other dedicated hardware devices, and allows network services to be hosted on virtual machines. The virtual machines run under a hypervisor, which allows multiple operating systems to share a single hardware processor.

Differences Between SDN and NFV

Both SDN and NFV rely on software running on commodity servers and switches, but the two technologies operate at different levels of the network. They are not interdependent: you could perfectly well run an NFV platform for just a piece of your environment without fully developed SDN, or deploy SDN alone. The following figure shows a use case of SDN and NFV.

SDN fabric with NFV

The differences between SDN and NFV can be summarized from five aspects. They are presented in the table below.

Basics: SDN separates the control plane from the data plane and centralizes control and programmability of the network. NFV transfers network functions from dedicated appliances to generic servers.
Areas of operation: SDN operates in campus, data center and/or cloud environments. NFV targets the service provider network.
Initial application target: SDN software targets cloud orchestration and networking. NFV software targets routers, firewalls, gateways, WAN (wide area network), CDN (content delivery network), accelerators and SLA (service level agreement) assurance.
Protocols: SDN uses OpenFlow. NFV has no protocols yet.
Supporting organization: SDN is backed by the Open Networking Foundation (ONF); NFV by the ETSI NFV working group.


Challenges in SDN/NFV Implementation

Though SDN and NFV are promising technologies, many roadblocks to their deployment remain. Complete standards and proven examples are still needed for wider implementation of SDN/NFV.

Security is one of the biggest concerns in implementing SDN. While centralized control and virtualization of the network topology are powerful capabilities that SDN enables, they also create new security vulnerabilities that must be addressed. The positive side of implementing SDN is that the user can apply uniform security policies across the whole system. The negative side, naturally, is that if the SDN controller is successfully hacked, the attacker gains complete control of the system.

Another major challenge is the scalability of SDN systems, given the virtualization that accompanies them (via NFV). The continuous growth of network data consumption makes scalability a challenge for any network system. If integrated properly, SDN can improve scalability in a given data center or network, but the SDN architecture itself raises scalability concerns. Because it is a single entity, the centralized SDN controller does not necessarily scale to larger networks. It also presents a single point of failure in the network, which would be dangerous if the controller or an uplink device failed. Potential solutions to this problem exist, but they are still in development.

As for NFV implementation, there are challenges for NFV independent software vendors (ISVs). The first challenge is to develop an innovative, virtualized product that meets the reliability and scalability requirements of the telecom industry. In addition to technical challenges, ISVs also have to develop a concise value proposition to convince the large telcos why they should adopt a new, unproven product into their highly complex network operations.


To sum up, there is no doubt that SDN and NFV can bring many benefits to network administrators by enabling virtualization and automation of their systems. It is equally clear that many improvements are still needed before SDN and NFV can be widely deployed. Knowing their pros and cons helps in approaching these technologies correctly, avoiding both blind adoption of and outright refusal toward new products. FS.COM has announced new 10/40/100GbE open networking switches for data centers and enterprise networks that support SDN/NFV. High-performance 40G and 100G DACs and optical transceivers are also provided at competitive prices. For more details about SDN switches, please visit FS.COM.

Interconnect Solutions for Arista QSFP-40G-PLRL4 and SFP-10G-LR

Usually, single-mode fiber optic transceivers use an LC duplex interface, which makes structured cabling straightforward with single-mode LC duplex infrastructure. In the 40G QSFP+ form factor, however, some single-mode transceivers do not follow this common rule. For example, 40GBASE-PLRL4 is a single-mode module supporting a transmission distance of up to 1 km, but it must be connected with an MTP/MPO-12 UPC connector. When migrating from a 10G to a 40G network using 40GBASE-PLRL4 modules, both single-mode LC duplex cable and single-mode MTP/MPO cable are therefore used. This article takes the Arista QSFP-40G-PLRL4 and SFP-10G-LR optical modules as examples to explain several interconnect solutions for them.

Specifications of Arista QSFP-40G-PLRL4 and SFP-10G-LR

The Arista 40GBASE-PLRL4 QSFP+ module is designed with a single-mode parallel MTP/MPO port. It supports a maximum link distance of 1 km over single-mode fiber operating at a 1310 nm wavelength. The Arista 10GBASE-LR SFP+ module also has a single-mode port, but its interface is LC duplex. This SFP-10G-LR transceiver supports a long transmission distance of up to 10 km over single-mode fiber, also operating at 1310 nm. Both modules support digital optical monitoring.

Interconnect Solutions for Arista QSFP-40G-PLRL4 and SFP-10G-LR

In the first solution, a breakout cassette is used to split one 40G signal into four individual 10G signals. A 40G MTP/MPO cable is used on the QSFP-40G-PLRL4 side, and four LC uniboot cables connect to the four SFP+ modules. The MTP/MPO equipment used in this solution and the solutions below is all aligned to polarity Method B.
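The polarity-B note above can be made concrete. Under TIA-568 Method B, key-up to key-up MTP/MPO adapters mate the trunk so that the fiber at position 1 on one end arrives at position 12 on the other end (and vice versa). The helper below is a hypothetical name used only for illustration, not part of any standard or library:

```python
def method_b_position(p, fiber_count=12):
    """Far-end fiber position for near-end position p under TIA-568 Method B."""
    if not 1 <= p <= fiber_count:
        raise ValueError("position out of range")
    return fiber_count + 1 - p  # Method B flips the position order end to end

# Full mapping for a 12-fiber MTP/MPO trunk
mapping = {p: method_b_position(p) for p in range(1, 13)}
print(mapping[1], mapping[12])  # 12 1
```

Keeping every MTP/MPO component in the link consistent with one polarity method is what guarantees each transmit fiber lands on the matching receive port.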

interconnect for single-mode QSFP+ and SFP+ with MPO-12 to LC cassette

Figure 1: interconnect for single-mode QSFP+ and SFP+ with MPO-12 to LC cassette.

The second connection is a very cost-effective solution for linking three QSFP-40G-PLRL4 modules to twelve SFP-10G-LR modules. Here, the three breakout cables on the left are 8-fiber female MTP to 4x LC duplex harness cables. Using two 6-port LC duplex adapter panels, the three 40G signals are divided into two groups, each serving six 10G network devices. In this link, no fiber or port is wasted. It also allows flexible placement of the QSFP+ modules, for example in different chassis. By using customized bend-insensitive single-mode LC duplex patch cables, high-performance transmission at longer lengths can be achieved.
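The claim that no fiber or port is wasted in this layout can be checked with simple arithmetic. This is an illustrative sketch only; the counts come from the description above (each QSFP-40G-PLRL4 uses 8 fibers, 4 Tx and 4 Rx lanes, and each SFP-10G-LR uses a 2-fiber LC duplex link):

```python
qsfp_ports, fibers_per_qsfp = 3, 8   # three 40G parallel modules
sfp_ports, fibers_per_sfp = 12, 2    # twelve 10G duplex modules

# Both sides of the link account for exactly the same 24 fibers
assert qsfp_ports * fibers_per_qsfp == sfp_ports * fibers_per_sfp == 24

lc_duplex_links = qsfp_ports * fibers_per_qsfp // fibers_per_sfp
print(lc_duplex_links)  # 12
```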

 interconnect for single-mode QSFP+ and SFP+ with MPO-8 to LC harness cable

Figure 2: interconnect for single-mode QSFP+ and SFP+ with MPO-8 to LC harness cable.

The next solution, illustrated in figure 3, is similar to the previous example in figure 2: it also connects three 40G parallel transceivers to twelve 10G duplex single-mode transceivers, but it uses an MTP conversion harness cable and a breakout patch panel. Here, a 3x 8-fiber MTP (female) to 2x 12-fiber MTP (female) single-mode conversion harness cable connects the three QSFP+ transceivers to a 96-fiber 12x MTP/MPO-8 (male) to LC single-mode 40G breakout patch panel. Twelve LC uniboot patch cables then connect to the SFP-10G-LR transceivers.

interconnect for single-mode QSFP+ and SFP+ with 2x3 24-fiber MTP conversion harness cable

Figure 3: interconnect for single-mode QSFP+ and SFP+ with 2×3 24-fiber MTP conversion harness cable.

The last interconnect solution connects two single-mode QSFP+ modules to eight SFP+ modules. Here, another type of MTP conversion cable is used: a 2x 12-fiber MTP (female) to 1x 24-fiber MTP (female) single-mode conversion harness cable. A 24-fiber male MTP-24 to LC UPC duplex single-mode cassette connects the MTP-24 connector to the eight LC duplex connectors. Low-loss LC uniboot cables are again used for this high-density cabling.

interconnect for single-mode QSFP+ and SFP+ with 1x2 24-fiber MTP conversion harness cable

Figure 4: interconnect for single-mode QSFP+ and SFP+ with 1×2 24-fiber MTP conversion harness cable.


This post introduced four interconnect solutions for the single-mode parallel QSFP-40G-PLRL4 transceiver and the single-mode duplex SFP-10G-LR transceiver. To meet different requirements, different equipment is deployed in each example. We hope these connections can serve as a guide for your single-mode network and work well in your specific applications.

The Role of OM5 and MTP Fiber in 40GbE and Beyond

To meet the overwhelming trend of growing bandwidth, different standards for single-mode and multimode fibers have been published, and the parallel fiber connector (MTP/MPO) was designed to solve the problem of increasing fiber counts. Though fiber types keep changing, the parallel connector shows no sign of becoming outdated, not only for present 40G and 100G applications but also for future 200G and 400G. This post discusses a new fiber type and the role of parallel fiber in 40GbE and beyond networks.

Overview of Multimode and Single-mode Fibers

Since the introduction of multimode fiber in the early 1980s, there have been OM1 and OM2, and then laser-optimized OM3 and OM4 fibers for 10GbE, 40GbE and 100GbE. OM5, officially designated wideband multimode fiber (WBMMF), is a new fiber medium specified in ANSI/TIA-492AAAE. The channel capacity of multimode fiber has been multiplied by using parallel transmission over several fiber strands. In terms of single-mode fiber, there are only OS1 and OS2, and it has served optical communications without much change for a long time. Compared with the constant updates of multimode fiber, and considering other factors, some enterprise customers have preferred single-mode fiber over the past years and will for the foreseeable future. With the arrival of the new OM5 fiber, however, it seems that multimode fiber might last longer into future 200G and 400G applications.

The Debate over the New Fiber Type

The new fiber medium OM5 is presented as the first laser-optimized MMF specified over a wider range of wavelengths, from 840 to 953 nm, to support wavelength division multiplexing (WDM) technology with at least four wavelengths. It is also specified to support legacy applications and emerging shortwave wavelength division multiplexing (SWDM) applications. Although OM5 has been anticipated to be "performance compliant and superior to OM4" based on the following parameters, there are still arguments over the claim that OM5 is a better solution for data centers.

OM4 & OM5 comparison

Figure 1: OM4 and OM5 comparison.

OM5 supporters point to the long-term problems of present multimode fibers. Their view holds that the future 400GBASE-SR16, which will reuse the 100GBASE-SR4 technology specified in the IEEE 802.3bs draft standard, calls for a new 32-fiber two-row MTP/MPO connector instead of a 12-fiber MTP/MPO connector. It will be hard for current structured cabling that uses MTP-12 to move to MTP-16 requirements.

12f and 32f MTP-MPO connectors

Figure 2: 12f MTP connector (left) and 32f MTP connector (right).

The OM5 fiber solution, by contrast, can support four WDM wavelengths, enabling a four-fold reduction in fiber count when running 40G, 100G and 200G over duplex LC connections. Combined with parallel technology, 400G can also be transmitted effectively over OM5 fibers using only 4 or 8 fibers.

40G, 100G, 200G, and 400G WDM transmission over OM5 fiber

Figure 3: 40G, 100G, 200G, and 400G WDM transmission over OM5 fiber.

On the other side, some people don't support the idea that OM5 is a good solution for future 400G networks. They argue that OM5 is not that much better optimized than current MMF types. The first reason is that for all current and planned multimode IEEE applications, including 40GBASE-SR4, 100GBASE-SR4, 200GBASE-SR4, and 400GBASE-SR16, the maximum allowable reach is the same over OM5 as over OM4 cabling.


Figure 4: Multimode fiber standard specifications.



The second reason is that even with SWDM technology, the difference in reach between OM4 and OM5 at 40G and 100G is minimal. For 40G-SWDM4, OM4 supports a 400-meter reach and OM5 a 500-meter reach. For 100G-SWDM4, OM4 supports 100 meters, and OM5 only 50 meters more than OM4.

And thirdly, PAM4 technology can increase the bandwidth of each fiber from 25G to 50-56G, which means current 12-fiber and 24-fiber MTP/MPO connectors can remain cost-effective solutions for 40G, 100G and beyond.
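The fiber-count arguments above reduce to simple arithmetic. The sketch below is illustrative only; `fibers_needed` is a hypothetical helper, and the figures are nominal line rates. It shows how lane rate and wavelengths per fiber together determine how many fibers a link needs:

```python
def fibers_needed(total_gbps, lane_gbps, wavelengths_per_fiber=1):
    """Fibers for a bidirectional link, given total rate, per-lane rate,
    and how many WDM/SWDM wavelengths each fiber can carry."""
    lanes = -(-total_gbps // lane_gbps)               # ceiling division
    fibers_per_direction = -(-lanes // wavelengths_per_fiber)
    return 2 * fibers_per_direction                   # Tx + Rx

print(fibers_needed(100, 25))      # 8  -> 100GBASE-SR4-style parallel link
print(fibers_needed(100, 25, 4))   # 2  -> 100G-SWDM4 duplex over OM5
print(fibers_needed(400, 25))      # 32 -> 400GBASE-SR16-style parallel link
print(fibers_needed(400, 50, 4))   # 4  -> PAM4 lanes plus four wavelengths
```

The last line illustrates the OM5 argument: combining 50G PAM4 lanes with four wavelengths per fiber brings a 400G link down to 4 fibers, while 25G lanes with no WDM require the 32-fiber MTP connector discussed above.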


The options for future higher-speed transmission are still under discussion, but there is no doubt that whether we choose the new OM5 fiber or continue with single-mode and OM3/OM4 fiber, "parallel fibers remain essential to support break-out functionality," as stated in the WBMMF standardization. Parallel fiber solutions enable higher-density ports via breakout cabling and reduce the cost per single-lane channel.


Multifiber MTP/MPO cable is a preferred choice for high-density telecom and datacom cabling. Many terms describe the outer jacket of MTP/MPO cable, such as CM, LSZH, CMP, CMR and PVC, and FS.COM carries several of these options. Do you know the differences between them? What are the characteristics of each type? Most importantly, which one do you need for the task? This post introduces the major jacket types for MTP/MPO cable and the other acronyms for communication cable ratings.

MTP cabling

Figure 1: MTP/MPO cabling.


CMP

CMP (plenum-rated) MTP/MPO cable complies with the IEC (International Electrotechnical Commission) 60332-1 flammability standard. CMP MTP/MPO cable is designed for use in plenum spaces, which facilitate air circulation for heating and air conditioning systems by providing pathways for heated/conditioned or return airflows. Typical plenum spaces are between the structural ceiling and a drop ceiling, or under a raised floor. CMP-rated communication cable is suitable for telephone and computer networks for exactly this reason: it is designed to restrict flame propagation to no more than five feet and to limit the amount of smoke emitted during a fire. Additionally, CMP MTP/MPO cable is more fire-retardant than LSZH, so sites are better protected. As an excellent performer, it is usually more costly than other cable types.

It should be noted that some CMP cable made of fluorinated ethylene propylene (FEP) still carries a risk of potential toxicity, so improved CMP cable with a non-halogen plenum compound has been produced. For safety reasons, no high-voltage equipment is allowed in plenum spaces, because the presence of fresh air can greatly increase the danger of rapid flame spreading if the equipment catches fire.


LSZH

LSZH (low smoke zero halogen, also written LSOH, LS0H, LSFH or OHLS) has no exact IEC code equivalent; LSZH cable is based on compliance with IEC 60754 and IEC 61034. LSZH MTP/MPO cable is safer to people during a fire than other cables: it contains no halogens and thus does not produce a dangerous gas/acid combination when exposed to flame, reducing the amount of toxic and corrosive gas emitted during combustion. LSZH MTP/MPO cables are suitable for poorly ventilated places such as aircraft, rail cars or ships, where they provide better protection to people and equipment. LSZH is the most widely applied jacket type, both for its safety properties and for its lower cost compared with CMP.

Other Types

The cable jackets discussed in the following part are not as frequently used for MTP/MPO cable as CMP and LSZH.

CMR (riser-rated) cable complies with IEC 60332-3 standards. CMR cable is constructed to prevent fires from spreading floor to floor in vertical installations, and it can be used when cables need to run between floors through risers or vertical shafts. PVC is most often associated with riser-rated cable, but not all PVC cable is necessarily riser-rated; FEP is most often associated with CMP. Since the fire requirements for CMR cable are less strict, CMP cable can always replace CMR cable, but not the reverse.

CM (in-wall rated) cable is a general-purpose type, used where the fire code places no restrictions on cable type, for example in home or office environments for CPU-to-monitor connections.
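The substitution rule described above (a stricter-rated cable may replace a lower-rated one, but not the reverse) can be expressed as a simple ordering. This is a minimal sketch, assuming the CMP > CMR > CM hierarchy discussed in this post; the function name is hypothetical:

```python
# Higher rank = stricter fire rating
RATING_RANK = {"CMP": 3, "CMR": 2, "CM": 1}

def can_substitute(installed, required):
    """True if a cable rated `installed` may be used where `required`
    is the minimum rating the fire code demands."""
    return RATING_RANK[installed] >= RATING_RANK[required]

print(can_substitute("CMP", "CMR"))  # True: plenum cable is fine in a riser
print(can_substitute("CMR", "CMP"))  # False: riser cable can't go in a plenum
```

Always confirm the actual rating required for a given space against the applicable fire code; the ordering here only captures the general rule stated above.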

The figure below generally illustrates the applicable environments for CMP, CMR and CM rated cables.

CMP, CMR, CM cable application

Figure 2: CMP, CMR, CM cable application.


Knowing the relevant details of MTP/MPO cable ratings will certainly help you select the best one for your application, which is as important as other factors. FS.COM provides high-quality plenum and LSZH MTP/MPO trunk cables and MTP breakout cables at affordable prices.

Connectivity Solutions for Parallel to Duplex Optics

Since we discussed connectivity solutions for two duplex optics or two parallel optics in the last post (see previous post: Connectivity Solutions for Duplex and Parallel Optics), this article discusses connectivity solutions for parallel to duplex optics, including 8-fiber to 2-fiber and 20-fiber to 2-fiber.

Parallel to Duplex Direct Connectivity

When directly connecting one 8-fiber transceiver to four duplex transceivers, an 8-fiber MTP to duplex LC harness cable is needed. The harness has four LC duplex connectors, and the fibers are paired in a specific way to ensure proper polarity is maintained. This solution is suggested only for short distances within a given row or in the same rack/cabinet.

8-fiber to 2-fiber direct connectivity

Figure 1: 8-fiber to 2-fiber direct connectivity

Parallel to Duplex Interconnect

This is an 8-fiber to 2-fiber interconnect. The solution in figure 2 allows for patching on both ends of the fiber optic link. The devices used in this link are listed in the table below figure 2.

8-fiber to 2-fiber interconnect

Figure 2: 8-fiber to 2-fiber interconnect

Item Description
1 8-fiber MTP trunk cable (not pinned to pinned)
2 96-fiber MTP adapter panel (8 ports)
3 8-fiber MTP trunk cable (not pinned)
4 MTP-8 to duplex LC breakout module (pinned)
5 LC to LC duplex patch cable (SMF/MMF)

Figure 3 also shows an interconnect for 8-fiber parallel QSFP+ to 2-fiber SFP+. This solution offers an easy migration path from 2-fiber to 8-fiber, but it has the disadvantage that the SFP+ end lacks flexibility, because the SFP+ ports must be located on the same chassis.

8-fiber to 2-fiber interconnect

Figure 3: 8-fiber to 2-fiber interconnect

Item Description
1 8-fiber MTP trunk cable (not pinned to pinned)
2 96-fiber MTP adapter panel (8 ports)
3 8-fiber MTP trunk cable (not pinned)
4 8-fiber MTP (pinned) to 4x LC duplex harness cable

Figure 4 shows how to take a 20-fiber CFP and break it out to ten 2-fiber SFP+ transceivers. The breakout modules divide the twenty fibers into three groups, and ten LC duplex cables complete the connectivity to the SFP+ modules.

20-fiber to 2-fiber interconnect

Figure 4: 20-fiber to 2-fiber interconnect

Item Description
1 1×3 MTP breakout harness cable (24-fiber MTP to three 8-fiber MTP) (not pinned)
2 MTP-8 to duplex LC breakout module (pinned)
3 LC to LC duplex cable (SMF/MMF)

Parallel to Duplex Cross-Connect

There are two cross-connect solutions for 8-fiber parallel to 2-fiber duplex optics. The main difference between figures 5 and 6 is on the QSFP+ side. The second cross-connect is better for greater distances between distribution areas, where the trunk cables need to be protected from damage in a tray.

8-fiber to 2-fiber cross-connect (1)

Figure 5: 8-fiber to 2-fiber cross-connect (1)

Item Description
1 8-fiber MTP trunk cable (not pinned)
2 MTP-8 to duplex LC breakout module (pinned)
3 LC to LC duplex cable (SMF/MMF)

8-fiber to 2-fiber cross-connect (2)

Figure 6: 8-fiber to 2-fiber cross-connect (2)

Item Description
1 8-fiber MTP trunk cable (not pinned to pinned)
2 96-fiber MTP adapter panel (8 ports)
3 8-fiber MTP trunk cable (not pinned)
4 MTP-8 to duplex LC breakout module (pinned)
5 LC to LC duplex cable (SMF/MMF)

These solutions are simple illustrations of duplex and parallel optical links. The differences between the solutions may not look significant in a plain drawing, but the component requirements are essential to an efficient fiber optic network infrastructure in different situations. Whether the site is a narrow-space data center or a long-haul distribution network will mostly determine the cabling structure and the products used.