How to Choose an Optical Distribution Frame

With the growth of high-speed transmission, demand for high-density patching has increased in recent years, yet managing installed cables remains a difficult task. To simplify cable organization, many installations use the cost-effective optical distribution frame (ODF) to arrange optical cable connections. The ODF plays an important part in building a safe and flexible operating environment for the optical network. Many kinds of ODFs are available on the market, so you need to choose the right one for your actual situation.

Functions of Optical Distribution Frame

An ODF is mainly used for fiber optic terminal splicing, fiber optic connector installation, optical path adjustment, excess pigtail storage, and fiber optic cable protection. Where a cable enters the rack, the ODF should mechanically anchor it and provide ground wire protection. Incoming fibers are also divided into groups for easier management. Excess fiber from spliced ends is coiled and stored, and the fusion splices are well protected inside the ODF. Adapters and connectors are pluggable, so the optical path can be freely adjusted or tested. Moreover, the ODF provides enough space to keep connections clearly organized.

Things to Consider for Choosing ODF

Selecting the right ODF is vital to future applications. Here are some aspects to consider before purchasing:

1) ODF Types

Generally, ODFs come in three structural types: wall mount, floor mount, and rack mount. A wall mount ODF is shaped like a small box installed on the wall; because its space is constrained, it accepts only small fiber counts. A floor mount ODF offers a fixed, large fiber capacity in a closed structure. A rack mount ODF is more flexible: it is installed in a rack and can be configured for different cable counts and specifications. This type is frequently used in optical distribution systems in the 19-inch form factor, matching the size of a standard transmission rack.

[Figure: rack mount optical distribution frame]
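
As a rough decision aid, the comparison above can be sketched as a small lookup. The fiber-count threshold and the preference for a rack mount ODF whenever rack space exists are illustrative rules of thumb, not fixed industry limits:

```python
def suggest_odf_type(fiber_count: int, rack_available: bool) -> str:
    """Suggest an ODF structure type; the thresholds here are illustrative, not normative."""
    if rack_available:
        # A 19-inch rack mount ODF fits the standard transmission rack and scales flexibly.
        return "rack mount ODF"
    if fiber_count <= 48:  # assumed cut-off for a small wall-mounted box
        return "wall mount ODF"
    return "floor mount ODF"

print(suggest_odf_type(24, rack_available=False))   # -> wall mount ODF
print(suggest_odf_type(288, rack_available=True))   # -> rack mount ODF
```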

2) Fiber Counts

High-density fiber counts have become the trend for future data centers. Today, a single ODF unit usually provides 12, 24, 36, 48, 72, 96, or even 144 ports. ODFs customized to your needs are also available on the market.
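
When sizing a purchase, a minimal Python sketch can estimate how many ODF units of a given size a job needs; the 20% spare-capacity margin and the example fiber count are assumptions for illustration:

```python
import math

def odf_units_needed(fiber_count: int, ports_per_unit: int, spare_ratio: float = 0.2) -> int:
    """Estimate how many ODF units are required, reserving spare capacity for growth."""
    required_ports = math.ceil(fiber_count * (1 + spare_ratio))
    return math.ceil(required_ports / ports_per_unit)

# Example: terminating 288 fibers on 96-port units with 20% spare capacity.
print(odf_units_needed(288, 96))  # -> 4
```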

3) Easy Management

A high-density device inevitably makes cable management more difficult. An ODF should allow easy access to the connectors on the front and rear ports for quick insertion and removal, which means it must provide adequate working space. In addition, the ODF's adapters should be color-coded to match the optical connectors so that wrong connections are avoided.

4) Good Protection

Protection is one of the ODF's basic functions. A standard ODF should include protective components that shield fiber optic connections from dust and mechanical stress. Splice points, for instance, are very sensitive to the outside environment yet essential to normal network operation, so the quality of the ODF's protection hardware is of great importance.

Conclusion

In short, the optical distribution frame is now indispensable equipment for deploying an optical network, and high-density ODFs are especially popular in the industry. Careful selection is important if you want a suitable ODF at a reasonable price. This article covers only some of the basic factors that affect how an ODF performs in practice. For more information, please visit FS.COM.

10GbE or More – The Choice for Virtualization and Cloud Computing

At one time, data centers were discrete entities consisting of independent silos of computing power and storage bridged together by multiple networks. Servers were consolidated from time to time, but they still used discrete and independent networks such as Token Ring, Ethernet, InfiniBand, and Fibre Channel.

Then along came virtualization and cloud computing. They brought with them a variety of storage technologies and, more recently, Software Defined Networking (SDN). Collectively, these technologies are providing dramatic gains in productivity and cost savings. They are also fundamentally changing enterprise computing and driving a complete rethinking of enterprises’ networking architecture strategies.

Cloud Computing

As virtualization continues to take hold in data centers, the silos of computing, network, and storage that were once fixtures are increasingly being replaced by resource pools, which enable on-demand scalable performance. Hence, as application needs grow, the infrastructure can respond.

Unfortunately, this is a double-edged sword. Virtualization has increased server utilization, reducing the once-prevalent server sprawl by enabling enterprises to do more with less and by simplifying and maximizing server resources. But it has also driven demand for networking bandwidth through the roof, adding complexity and creating a major bottleneck in the system.

For some enterprises, server virtualization alone isn't enough, and they have deployed a private cloud infrastructure within their data center. With this comes the need not only to scale resources for a specific application but also for the data center itself to scale to meet dynamic business needs. This step requires storage and networking resources to move away from the restrictions of dedicated hardware and be virtualized as well.

This next step in virtualization is the creation of a virtual network that can be controlled and managed independently of the underlying physical compute, network, and storage infrastructure. Network virtualization and SDN will play a key role, and network performance will ultimately determine success.

Today, discrete data center networks for specific components, such as storage and servers, are no longer appropriate. As companies move to private cloud computing, in which systems are consolidated onto fewer devices, networks must be simpler to deploy, provision, and manage.

Hence, 1GbE connectivity is no longer enough. Think about it: Servers are getting faster, and enterprises are running a growing number of Virtual Machines (VMs) that compete for existing I/O on each server. An increased amount of data must be processed, analyzed, shared, stored, and backed up due to increased VM density and enterprises’ rising storage demands. If this growth is left unmanaged, the network will become even more of a bottleneck, even as server speeds continue to increase.
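
A back-of-envelope calculation makes the point. The VM density and per-VM throughput below are assumed figures chosen for illustration, not measurements:

```python
import math

vms_per_server = 20        # assumed number of VMs on one host
avg_mbps_per_vm = 150      # assumed sustained throughput per VM, in Mb/s

aggregate_mbps = vms_per_server * avg_mbps_per_vm
print(f"Aggregate demand : {aggregate_mbps / 1000:.1f} Gb/s")          # 3.0 Gb/s
print(f"1GbE ports needed: {math.ceil(aggregate_mbps / 1000)}")        # 3
print(f"10GbE utilization: {aggregate_mbps / 10000:.0%} of one port")  # 30% of one port
```

Even with these modest assumptions, a single server saturates several 1GbE links while barely loading one 10GbE port.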

To truly reap these gains, the network needs to keep up. More bandwidth means faster access to storage and backup, and faster network connections mean lower latency and a minimal bottleneck.

Moving to 10GbE

Despite its recent maturity, 10GbE has had a long journey. Initially ratified by the IEEE in June 2002 as 802.3ae, the standard is a supplement to the 802.3 standard that defines Ethernet.

Officially known as 10 Gigabit Ethernet, 10GbE (also referred to as 10G) operates only in full-duplex mode and supports a data transfer rate of 10 gigabits per second over distances of up to 300 meters on multimode fiber optic cables and up to 10 kilometers on single-mode fiber optic cables.
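
To put the raw rate in perspective, here is a minimal sketch comparing transfer times at 1 Gb/s and 10 Gb/s; the backup size and the 90% effective-efficiency factor are assumptions for illustration:

```python
def transfer_minutes(size_gb: float, link_gbps: float, efficiency: float = 0.9) -> float:
    """Minutes to move size_gb gigabytes over a link, allowing for protocol overhead."""
    return (size_gb * 8) / (link_gbps * efficiency) / 60

# Example: a 500 GB backup (assumed size) at ~90% effective link efficiency.
print(f"1GbE : {transfer_minutes(500, 1):.0f} min")    # ~74 min
print(f"10GbE: {transfer_minutes(500, 10):.1f} min")   # ~7.4 min
```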

Although the technology has been around for many years, its adoption has been slow. After spending nearly a decade building out their 1GbE networks, enterprises have been reluctant to overhaul the resources invested in them, including adapters, controllers, other devices and, perhaps most of all, cabling. But as virtualization and cloud operations become core technology components, they bring changing network requirements with them, chief among which is that the minimum for an advanced, dynamic architecture is now 10GbE.

Crehan Research reported that while 17 percent of server Ethernet ports conformed to the 10GbE standard in 2012, the majority (83 percent) still followed the 1GbE standard. In 2013, those numbers held roughly steady, with 81 percent on 1GbE and 19 percent on 10GbE. 2014, however, was expected to be a year of change: Crehan Research forecast that 10GbE usage would rise to 28 percent of server Ethernet ports, while 1GbE installs would decline to 72 percent. This trend was expected to continue through 2018, by which time 79 percent of server Ethernet ports would be using 10GbE and a mere 4 percent 1GbE, with the remaining ports having migrated to 40GbE.

For some time now, hardware vendors have been designing their products, from processors to interconnects and switches, with 10GbE in mind. Software vendors are now well versed in these needs as well and are designing applications that take advantage of 10GbE. Enterprises still on 1GbE are, in effect, paying for an optimization whose benefits they are not reaping. Fortunately, while the ecosystem around 10GbE has been growing and that speed is now expected if an enterprise is to achieve its desired performance, price points for 10GbE-capable products have been dropping.

The benefits 10GbE brings to data centers can be classified into three categories: Performance, Simplification, and Unification.

Performance, or increased speed, is likely the first enhancement that comes to mind, but it is not the only advantage 10GbE brings to data centers. 10GbE also unifies and simplifies the data center, reducing the cost and complexity of maintaining the network. From the very beginning, simplicity and low cost were goals for Ethernet, and indeed, by unifying data center storage and network fabrics, carrying Fibre Channel over Ethernet (FCoE) or iSCSI traffic on a single low-cost, high-bandwidth interconnect, 10GbE reduces network costs. In addition, 10GbE simplifies the network infrastructure by reducing power consumption, improving bandwidth, and lessening cabling complexity.
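
The cabling effect is easy to see in rough numbers. Suppose each server previously carried separate 1GbE NICs for the LAN and Fibre Channel HBAs for the SAN, and now carries two converged 10GbE ports; all of the counts below are illustrative assumptions:

```python
# Illustrative per-server port counts before and after convergence; all values are assumptions.
ports_before = {"1GbE NIC": 6, "Fibre Channel HBA": 2}   # separate LAN and SAN connections
ports_after  = {"10GbE converged": 2}                    # LAN and storage share Ethernet

servers = 40  # assumed number of servers in the pod
cables_before = sum(ports_before.values()) * servers
cables_after  = sum(ports_after.values()) * servers
reduction = 1 - cables_after / cables_before
print(f"{cables_before} cables -> {cables_after} cables ({reduction:.0%} fewer)")
# -> 320 cables -> 80 cables (75% fewer)
```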

As enticing as these advantages are, few technologies are compelling without a comprehensive ecosystem behind them. Ecosystems for new technologies can be a chicken-and-egg problem, however: vendors are reluctant to invest resources in building an ecosystem if enterprises aren't buying the product, but enterprises are reluctant to buy a new type of product if it lacks an ecosystem. A comprehensive ecosystem is an indicator of a technology's maturity, and 10GbE does not disappoint.

On the market today are a variety of products that support 10GbE, including processors, servers, adapters, switches, and cables. There is also support for multiple media types within 10GbE as well as improved cable technologies.

Getting the Most Out of Popular Technologies

For a technology improvement to be worth pursuing, it must help the enterprise achieve its business goals more easily. In the case of 10GbE, virtualization was the first technology to truly feel its benefits. Virtualization enables enterprises to satisfy a host of business goals, from resource maximization to agility, and those benefits are enhanced and fully realized when the network is migrated to 10GbE.

Virtualization was, in effect, the watershed use case for 10GbE. It offered a way to address the growing complexity and bottlenecks associated with virtualization’s need for more network bandwidth. Now, IT could consolidate the ever-growing numbers of 1GbE ports and replace them with 10GbE. The move to 10GbE also enabled IT to unify data center storage and network fabrics, and in some cases I/O virtualization, by adding support for Fibre Channel or iSCSI technologies over Ethernet.

For many enterprises, the next stage after deploying a virtual infrastructure is to add a cloud component. This transition enables enterprises to not only scale resources for a specific application, but also, and more importantly, for the data center to scale to meet dynamic business needs.

For this to be successful, both storage and networking resources must move away from the restrictions endemic to dedicated hardware and be virtualized. This next step creates a virtual infrastructure, in the form of network virtualization and SDN, that can be controlled and managed independently of the underlying physical compute, network, and storage infrastructure. For these infrastructures to function, the low latency of 10GbE, and in some cases 40GbE, is required.

Cloud and virtualization are not the only technologies driving 10GbE adoption. Rapidly increasing volumes of data, both structured and unstructured, must be stored and backed up. Scaling these workloads quickly and efficiently means creating a single storage and data network and unifying compute and storage into shared resource pools, and that in turn requires a network that operates at the speeds and capacities enterprises expect.

10GbE makes this possible.

[Figure: 10GbE Worldwide]

Conclusion

10GbE enables enterprises to boost network performance while simplifying network infrastructure, reducing power, improving bandwidth, and reducing cabling complexity. Unifying different types of traffic onto a single, low-cost, high-bandwidth interconnect further simplifies the network.

The performance improvements and the benefits of simplification and unification will be felt most acutely by enterprises deploying a virtual infrastructure. With virtual infrastructure fast becoming the norm for many enterprises, a network that can meet performance, maintenance, and other usability challenges is critical. In addition to virtualization, 10GbE offers numerous benefits to cloud-based infrastructures and to enterprises with heavy storage requirements.