Network Virtualization and Challenges in SDN/NFV Implementation

Software-defined networking (SDN) and network functions virtualization (NFV) are two closely related technologies that both aim at network virtualization and automation. Their emergence is driven mainly by the need for robust data management and for bandwidth between servers located at different sites and connected over long distances through public and private clouds. SDN and NFV have some similarities, but they differ in many respects. In addition, although SDN and NFV have been heavily promoted as next-generation technologies in recent years, there are still many challenges in deploying them successfully. This post gives some basic knowledge about SDN and NFV and the challenges faced in implementing them.

Understanding SDN and NFV

Although SDN and NFV are both network virtualization technologies, they do not depend on each other, and it is not always necessary to deploy them in the same network. The following sections explain the infrastructures of SDN and NFV and show the major differences between them.

What Is SDN?

The function of SDN is somewhat hinted at by its name. With SDN, users can manage and control the entire network through software that makes the network centrally programmable. It achieves this by separating the system that decides where traffic is sent (the control plane) from the underlying system that pushes packets of data to specific destinations (the data plane). As network administrators and value-added resellers (VARs) know, SDN is built on switches that can be programmed through an SDN controller using an industry-standard protocol such as OpenFlow.
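
As a rough illustration of this separation (a minimal sketch, not tied to OpenFlow or any real controller API; the Switch and Controller classes here are invented for the example), the control plane can be pictured as one program that installs match-action rules into the flow tables of many switches:

```python
# Hypothetical sketch of SDN control/data plane separation (not a real controller API).

class Switch:
    """Data plane: forwards packets strictly according to installed flow rules."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []  # list of (match_fn, out_port) rules

    def install_rule(self, match_fn, out_port):
        self.flow_table.append((match_fn, out_port))

    def forward(self, packet):
        for match_fn, out_port in self.flow_table:
            if match_fn(packet):
                return out_port
        return None  # no rule: in OpenFlow, the packet would be sent to the controller


class Controller:
    """Control plane: holds the network-wide policy and programs every switch."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self):
        # Example policy: send all web traffic out port 1, everything else out port 2.
        for sw in self.switches:
            sw.install_rule(lambda pkt: pkt.get("dst_port") == 80, out_port=1)
            sw.install_rule(lambda pkt: True, out_port=2)


if __name__ == "__main__":
    switches = [Switch("leaf-1"), Switch("leaf-2")]
    Controller(switches).push_policy()            # one central decision point
    print(switches[0].forward({"dst_port": 80}))  # -> 1
    print(switches[1].forward({"dst_port": 22}))  # -> 2
```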

What Is NFV?

Network functions virtualization is similar to traditional server virtualization but focuses specifically on networking services. Within NFV, network functions are virtualized: NFV separates functions such as routing, firewalling and load balancing from dedicated hardware devices and allows network services to be hosted on virtual machines. The virtual machines run under a hypervisor, which allows multiple operating systems to share a single hardware processor.
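
To make the idea concrete, here is a deliberately simplified sketch, assuming nothing more than plain Python, in which the "network functions" are ordinary software components chained together the way VNFs are chained on commodity servers; the firewall and load_balancer functions are invented for illustration, not part of any NFV framework:

```python
# Illustrative sketch: network functions implemented as ordinary software,
# chained together the way VNFs are on commodity servers. Not a real NFV stack.

def firewall(packet):
    """Drop traffic to a blocked port; otherwise pass the packet on."""
    if packet["dst_port"] in {23}:          # e.g. block Telnet
        return None
    return packet

def load_balancer(packet, backends=("10.0.0.11", "10.0.0.12")):
    """Pick a backend by hashing the source address."""
    packet["dst_ip"] = backends[hash(packet["src_ip"]) % len(backends)]
    return packet

def run_service_chain(packet, chain):
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:                  # dropped by an earlier function
            return None
    return packet

if __name__ == "__main__":
    chain = [firewall, load_balancer]       # the "service chain"
    pkt = {"src_ip": "192.0.2.1", "dst_port": 443}
    print(run_service_chain(pkt, chain))
```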

Differences Between SDN and NFV

Both SDN and NFV rely on software running on commodity servers and switches, but they operate at different levels of the network. They are not dependent on each other: you could run an NFV platform over part of your environment without deploying full SDN, or deploy SDN alone. The following figure shows a use case of SDN and NFV.

Figure: SDN fabric with NFV

The differences between SDN and NFV can be summarized in five aspects, as listed below.

Basics: SDN separates the control and data planes and centralizes control and programmability of the network. NFV moves network functions from dedicated appliances to generic servers.
Areas of operation: SDN operates in campus, data center and/or cloud environments. NFV targets the service provider network.
Initial application target: SDN software targets cloud orchestration and networking. NFV software targets routers, firewalls, gateways, WAN (wide area network), CDN (content delivery network) appliances, accelerators and SLA (service level agreement) assurance.
Protocols: SDN uses OpenFlow. NFV defines no protocol of its own yet.
Supporting organization: SDN is backed by the Open Networking Foundation (ONF). NFV is backed by the ETSI NFV working group.

Challenges in SDN/NFV Implementation

Though SDN and NFV are promising technologies, there are still many roadblocks to their deployment. Complete standards and proven examples are still needed for wider implementation of SDN/NFV.

Security is one of the biggest concerns in implementing SDN. While centralized control and virtualization of the network topology are powerful assets that SDN provides, they also create new security vulnerabilities that must be addressed. The positive side of implementing SDN is that the user can apply uniform security policies across the whole system; the negative side is that, if the SDN controller is successfully hacked, the attacker gains complete control of the system.

Another major challenge is the scalability of SDN systems, given the virtualization that comes with them (via NFV). The continuous growth of network data consumption makes scalability a challenge for any network system. If integrated properly, SDN can improve scalability in a given data center or network, but the SDN architecture itself raises scalability concerns. Because it is a single logical entity, the centralized SDN controller is not necessarily scalable for larger networks. It also presents a single point of failure, which is dangerous if the controller or an uplink device fails. There are potential solutions to this problem, but they are still in development.
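
One commonly discussed direction is controller redundancy. The toy sketch below is only an illustration of that idea, not any vendor's high-availability mechanism: a standby controller promotes itself when heartbeats from the active controller stop arriving (the timeout value is an arbitrary assumption).

```python
# Toy illustration of controller redundancy: a standby controller takes over
# when it stops receiving heartbeats from the active one. Not a real HA protocol.
import time

HEARTBEAT_TIMEOUT = 3.0  # seconds without a heartbeat before failover (arbitrary)

class StandbyController:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.active = False

    def on_heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def check(self):
        if not self.active and time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.active = True   # promote itself and start programming the switches
            print("Primary controller lost; standby taking over")

if __name__ == "__main__":
    standby = StandbyController()
    standby.on_heartbeat()   # heartbeats keep arriving while the primary is healthy
    standby.check()          # still passive here; it promotes itself only after
                             # HEARTBEAT_TIMEOUT seconds pass with no heartbeat
```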

As for NFV implementation, there are challenges for NFV independent software vendors (ISVs). The first is to develop an innovative, virtualized product that meets the reliability and scalability requirements of the telecom industry. Beyond the technical challenges, ISVs also have to develop a concise value proposition that convinces large telcos to adopt a new, unproven product into their highly complex network operations.

Conclusion

To sum up, there is no doubt that SDN and NFV can bring many benefits to network administrators by virtualizing and automating their systems. It is equally clear that many improvements are still needed before SDN and NFV can be widely deployed. Knowing their pros and cons helps in approaching these technologies correctly, avoiding both blind adoption and outright rejection of new products. FS.COM has announced new 10/40/100GbE open networking switches for data centers and enterprise networks that support SDN/NFV, and high-performance 40G and 100G DACs and optical transceivers are also available at competitive prices. For more details about SDN switches, please visit www.fs.com or e-mail sales@fs.com.

Things We Should Know Before Migrating to a Base-8 System

Since its introduction in the mid-1990s, the 12-fiber MTP/MPO connector and Base-12 connectivity have served the data center for about twenty years, helping greatly in achieving high-density, manageable cabling. Recently, many documents and posts have been discussing a newer approach called Base-8, whose appearance is presented as an evident need of future networks. Even though most of this discussion promotes the overwhelming advantages of the Base-8 system, we should still weigh the merits and drawbacks of the two systems against the facts before taking the next step ourselves. This post is a discussion of that topic.

Facts of Base-12 and Base-8

This part introduces the design features of Base-12 and Base-8 systems and discusses their main advantages.

Design Features

Base-12 connectivity uses links based on groups of 12 fibers, with 12-fiber connectors such as the MTP. In Base-12 connectivity, for example, trunk cables have fiber counts divisible by 12: 24-fiber trunk cable, 48-fiber trunk cable and so on, all the way up to 144 fibers. A Base-8 system, by contrast, has no 12-fiber trunk cable; instead it uses 8-fiber, 16-fiber, 32-fiber trunk cables and so on, all based on increments of 8 fibers.

Base-12 and Base-8 trunk cables also differ visibly in connector design. A Base-12 trunk cable generally has unpinned (female) connectors on both ends and requires pinned breakout modules. In the newer Base-8 system, a trunk cable is built with pinned (male) connectors and must therefore be connected to unpinned components.

Figure: Unpinned Connector and Pinned Connector
Comparison

Compared with Base-8, Base-12 obviously has the benefit of higher connector fiber density, so a larger number of fibers can be installed more quickly with Base-12 connectivity, and it drops easily into an already existing Base-12 architecture. As networks migrate to 40G and 100G data rates, however, Base-8 connectivity has some undeniable advantages. For 40G and 100G applications built around eight-fiber transceivers, including SR4 (40G and 100G over parallel MMF) and PSM4 (100G over parallel SMF), as well as SAN switch Base-8/Base-16 port arrangements, Base-8 connectivity is the more cost-effective choice: it enables full fiber utilization for eight-fiber transceiver systems. Base-8 connectivity is not optimal for every situation, though, such as duplex protocols like 25G and 100G over duplex SMF.
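
The utilization argument is simple arithmetic. Assuming each SR4/PSM4-style port consumes eight fibers, a quick calculation (purely illustrative) shows how much of each connector is actually lit:

```python
# Fraction of fibers in one MTP connector lit by a single eight-fiber transceiver port.
def utilization(connector_fibers, fibers_used=8):
    return fibers_used / connector_fibers

print(f"Base-12 MTP feeding one SR4/PSM4 port: {utilization(12):.0%}")  # 67% (4 strands idle)
print(f"Base-8 MTP feeding one SR4/PSM4 port:  {utilization(8):.0%}")   # 100%
```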

Correct Co-existence of Base-8 and Base-12

If we are going to deploy Base-8 devices in an existing network, Base-12 and Base-8 connectivity can coexist as long as we do not mix them in the same link. It is not wise to use conversion modules between Base-12 and Base-8 devices, because the added cost and increased insertion loss outweigh the benefits they bring. As mentioned before, the two systems are not interchangeable, since they usually have different connector configurations and different pinning requirements. Special care should therefore be taken when managing the data center physical-layer infrastructure to ensure that Base-12 and Base-8 components are used separately.

Conclusion

When a new technology appears as an option, we need to decide whether or not to change. In the discussion of Base-12 and Base-8 systems, after listening to voices from different sides, the deciding factors are still our own specific needs. If we decide to move to the new technology, the next question is how to carry out the best migration, and a comprehensive understanding of the solutions and products vendors supply will never be a bad starting point.

MTP-8: The Simplest Way to Get a 40G Connection

As data center networks shift from 10G to 40G and beyond, it is necessary to find good ways to connect 40G high-speed switches populated with higher-rate QSFP+ transceivers, and to connect a 40G switch to existing 10G equipment populated with SFP+ modules. There are different approaches to connecting 40G switches to each other, or a 40G switch to a 10G switch, but the MTP-8 solution has proved to be the simplest way to achieve direct 40G connectivity in real applications. This article introduces the deployment of MTP trunk cable for 40G-to-40G connections and MTP harness cable for 10G-to-40G connections.

Basics of MTP Trunk and Harness Cables

An MTP trunk cable has MTP connectors terminated on both ends of the fiber optic cable. It is often used to connect MTP port modules for high-density backbone cabling in data centers and other high-density environments. Currently, most MTP trunk cables for high data rates like 40G and 100G are still 12-fiber or 24-fiber. An MTP harness cable, also called an MTP breakout or fan-out cable, has an MTP connector on one end and discrete connectors (duplex LC, SC, etc.) on the other. The most common configurations of MTP-LC harness cables are 8-fiber MTP to 4 LC duplex, 12-fiber MTP to 6 LC duplex and 24-fiber MTP to 12 LC duplex. A single MTP connector can terminate 4-, 8-, 12-, 24- or 48-fiber ribbon cables. This multi-fiber design provides a quickly deployed, scalable solution that improves reliability and reduces installation and reconfiguration time and cost.

10G to 40G Connection via MTP Harness Cable

To complete the migration from a 10G network to a 40G network easily and quickly, you can use an 8-fiber MTP to 4 LC duplex harness cable together with 40GBASE-SR4 QSFP+ and 10GBASE-SR SFP+ modules. The configuration of such a link is illustrated in Figure 1: on the left, the 8-fiber MTP connector is plugged into the MTP port of the 40GBASE-SR4 QSFP+ transceiver installed in the 40G switch; on the right, the four duplex LC connectors are plugged into the ports of four 10GBASE-SR SFP+ transceivers installed in the 10G switch. In a 10G-to-40G migration, the 8-fiber MTP to LC harness cable ensures that every strand of fiber is used and none is wasted.


Figure 1: 10G to 40G Migration via MTP-LC Harness Cable

40G to 40G Connection via MTP Trunk Cable

To meet your 40G networking needs, you can simply use a 12-fiber MTP trunk cable and 40GBASE-SR4 QSFP+ transceivers to make a quick connection between two 40G switches. The following figure shows a concrete example that uses one 12-fiber MTP trunk cable and two 40GBASE-SR4 QSFP+ transceivers to connect two 40G switches. Although the MTP trunk cable in this case is Base-12, only eight of the fibers are actually in use, leaving four strands idle: 40G is delivered over four lanes of multimode fiber at 10 Gb/s per lane, so only eight fibers (4 transmit, 4 receive) are required for this 4x10G solution. The same principle applies to the 4x25G solution for 100G.
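
The fiber budget of this link is easy to verify with a trivial calculation (assuming one transmit and one receive fiber per lane, as SR4 does):

```python
# Quick check of the fiber budget for a 40GBASE-SR4 link over a 12-fiber MTP trunk.
lanes = 4
rate_per_lane_gbps = 10
fibers_per_lane = 2                            # one transmit + one receive fiber per lane

total_rate = lanes * rate_per_lane_gbps        # 40 Gb/s
fibers_used = lanes * fibers_per_lane          # 8 fibers
fibers_unused = 12 - fibers_used               # 4 strands idle in a Base-12 trunk

print(total_rate, fibers_used, fibers_unused)  # 40 8 4
```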


Figure 2: 40G to 40G Connection via MTP Trunk Cable

Both examples above are applications of the MTP-8 solution in 40G connectivity. Only a few components are needed for the whole installation, and the resulting link is easy to deploy, flexible and cost-effective.

Conclusion

Current 40G connectivity can be achieved with the MTP-8 solution. Although the market is still dominated by 12-fiber and 24-fiber MTP, the 8-fiber MTP solutions now starting to appear are considered the most efficient option: with modules that break out 8-fiber MTPs to duplex LCs, they support current and future duplex fiber applications (such as 200G and 400G) as well as current and future 8-fiber applications, without the need for conversion cords or modules.

Data Center Architecture Design Comparison: ToR vs. EoR

The interconnection of switches and the assurance of data communication are basic considerations when designing a data center architecture. Today's data center equipment has largely shifted to 1RU and 2RU appliances, so placing 1RU and 2RU switches into the same racks can greatly save space and reduce cabling demands. Top of Rack (ToR) and End of Row (EoR) are now the common infrastructure designs for data centers, and this article mainly discusses the differences between these two approaches.


Overview of ToR & EoR
What Is ToR?

The ToR approach places the network access switch at the top of a server rack, and servers are linked directly to this access switch. Each server rack usually has one or two access switches, and all the access switches are then connected to an aggregation switch located in the aggregation rack. Only a small number of cables need to run from the server racks to the aggregation rack.

Figure: Top-of-Rack design

What Is EoR?

In the EoR architecture, each server in the individual racks is linked directly to an aggregation switch, eliminating the need for individual switches in each rack. This reduces the number of network devices and improves the port utilization of the network, but a large number of cables is needed for the horizontal cabling. Alongside the EoR approach there is also a variant known as MoR (Middle of Row); the major differences are that the switches are placed in the middle of the row and the cable lengths are reduced.

Figure: End-of-Row design

Comparison Between ToR & EoR
Benefits

With ToR, cabling costs are reduced because all server connections terminate within their own rack and fewer cables are installed between the server and network racks. Cable management is also easier with fewer cables involved, and technicians can add or remove cables more simply.

With EoR, the device count is decreased because not every rack needs its own switches, so less rack space is required. With fewer devices in the data center, the cooling requirements are lower, which also reduces power consumption.
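
The cabling trade-off can be made concrete with a back-of-the-envelope estimate. The sketch below uses made-up rack, server and uplink counts, so the absolute numbers are assumptions rather than figures from this article, but it shows why far fewer cables leave the server racks in a ToR design:

```python
# Back-of-the-envelope comparison of inter-rack cabling for ToR vs EoR.
# All numbers here are illustrative assumptions, not measured values.

racks = 10                 # server racks in the row
servers_per_rack = 40      # server ports per rack
uplinks_per_tor = 2        # uplinks from each ToR switch to the aggregation layer

# ToR: servers patch to the switch in their own rack; only uplinks leave the rack.
tor_inter_rack_cables = racks * uplinks_per_tor

# EoR: every server cable runs horizontally to the switch rack at the end of the row.
eor_inter_rack_cables = racks * servers_per_rack

print(f"ToR: {tor_inter_rack_cables} cables leave the server racks")   # 20
print(f"EoR: {eor_inter_rack_cables} cables leave the server racks")   # 400
```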

Limitations

Conversely, each architecture also has limitations. For ToR, although the cabling is reduced, the number of switches increases, so switch management can be a little tricky. In addition, the ToR approach takes up more rack space for the installation of switches.

As for EoR, its Layer 2 traffic efficiency is lower than that of ToR, because when two servers in the same rack and VLAN (virtual local area network) need to talk to each other, the traffic must travel to the aggregation switch and back. And since fewer switches are used in the EoR design, more cables must be deployed between racks, raising the likelihood of cabling mess; skilled technicians are required to carry out the cable management.

Physical Deployments of ToR & EoR
ToR Deployment

One option is the redundant access switch deployment, which usually calls for two high-speed, independent ToR switches that connect to the core network; servers are connected to the access switches deployed within the server racks. Another is server link aggregation with ToR, in which two high-speed ToR switches are part of the same virtual chassis and servers connect to both top-of-rack switches using link aggregation.

EoR Deployment

A common EoR access switch deployment extends all the connections from the servers to the switching rack at the end of the row. If the deployment needs to support existing wiring, you can also deploy a virtual chassis.

Conclusion

ToR and EoR are the common designs for data center architecture, and choosing the proper one for your network can improve data center efficiency. This article should give you a general understanding of the two methods and help you build your data center into the architecture you want.

Key Components to Form a Structured Cabling System

Building a structured cabling system is instrumental to the high performance of different cable deployments. Typically, a structured cabling system contains the cabling and connectivity products that integrate the voice, data, video and various management systems (e.g. security alarms, security access, energy systems) of a building. Structured cabling is based on two standards: ANSI/TIA-568-C.0, generic telecommunications cabling for customer premises, and ANSI/TIA-568-C.1, commercial building telecommunications cabling. These standards define how to design, build and manage a structured cabling system, which comprises six key components.

Six Subsystems of a Structured Cabling System

Generally speaking, a structured cabling system has six subsystems, which are introduced below.

Figure: Subsystems of a structured cabling system

Horizontal Cabling

Horizontal cabling is all the cabling between the telecommunications outlet in a work area and the horizontal cross-connect in the telecommunications closet, including the horizontal cable, mechanical terminations, jumpers and patch cords located in the telecommunications room or enclosure, multiuser telecommunications outlet assemblies and consolidation points. This type of wiring runs horizontally above ceilings or below floors in a building. Regardless of cable type, the maximum distance allowed between devices is 90 meters. An extra 6 meters is allowed for patch cables at the telecommunications closet and in the work area, but the combined length of these patch cables cannot exceed 10 meters.
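
As a simple sanity check of those limits, a short snippet like the following (the function and its inputs are purely illustrative) could validate a proposed horizontal channel against the 90-meter permanent link and the 10-meter combined patch cord allowance described above:

```python
# Illustrative check of a horizontal cabling channel against the limits above:
# 90 m maximum for the permanent link, 10 m combined maximum for patch cords.

def horizontal_channel_ok(permanent_link_m, closet_patch_m, work_area_patch_m):
    patch_total = closet_patch_m + work_area_patch_m
    return permanent_link_m <= 90 and patch_total <= 10

print(horizontal_channel_ok(85, 3, 5))   # True  (85 m link, 8 m of patch cords)
print(horizontal_channel_ok(92, 2, 2))   # False (permanent link too long)
print(horizontal_channel_ok(88, 6, 6))   # False (patch cords exceed 10 m combined)
```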

Backbone Cabling

Backbone cabling, also known as vertical cabling, provides connectivity between telecommunications rooms, equipment rooms, access provider spaces and entrance facilities. It may run on the same floor, from floor to floor, or even between buildings. The allowable distance depends on the cable type and the connected facilities, but twisted-pair cable is limited to 90 meters.

Work Area

The work area is the space where cable components are used between communication outlets and end-user telecommunications equipment. These components typically include station equipment (telephones, computers, etc.), patch cables and communication outlets.

Telecommunications Closet (Room & Enclosure)

A telecommunications closet is an enclosed area, such as a room or a cabinet, that houses telecommunications equipment, distribution frames, cable terminations and cross-connects. Each building should have at least one wiring closet, and the size of the closet depends on the size of the area it serves.

Equipment Room

The equipment room is the centralized place that houses the equipment of the building telecommunications systems (servers, switches, etc.) and the mechanical terminations of the telecommunications wiring system. Unlike the telecommunications closet, the equipment room houses more complex components.

Entrance Facility

The entrance facility encompasses the cables, network demarcation point, connecting hardware, protection devices and other equipment that connect to the access provider or private network cabling. This is where the outside plant connects to the inside building cabling.

Benefits of Structured Cabling System

Why do you need a structured cabling system? There are many benefits. Structured cabling standardizes your cabling with consistency, so future cabling updates and troubleshooting are easier to handle. You can avoid reworking the cabling when upgrading to another vendor or model, which prolongs the lifespan of your equipment, and all equipment moves, adds and changes are simplified. It is a strong foundation for future applications.


Conclusion

From this article, we can see that a structured cabling system consists of six important components: horizontal cabling, backbone cabling, the work area, the telecommunications closet, the equipment room and the entrance facility. Once you split the whole system into these smaller categories, the cabling target becomes easier to reach, and as long as you keep these subsystems well managed, your cabling system can truly be called structured wiring.