A data center cabling system consists of multiple components, such as fiber optic transceivers, fiber patch cables, fiber patch panels, and cable managers. As the nerve center of the whole cabling system, the gigabit switch has long been a topic of discussion, and various types of Ethernet switches exist to satisfy networking deployments of different sizes. This article introduces one of them: a 48 port gigabit SFP switch.
Overview of 48 Port Gigabit SFP Switch
The FS.COM S5800-48F4S is a 48 port gigabit switch with 10Gb uplinks. It provides 48×1GbE SFP ports and 4×10GbE SFP+ ports in a compact 1RU form factor. The switching capacity of this 48 port switch is 176 Gbps, its non-blocking bandwidth is 88 Gbps, and its forwarding rate is 130.95 Mpps. The S5800-48F4S also has two hot-swappable power supplies (1+1 redundancy) and four hot-swappable fans (N+1 redundancy). It is a low-latency L2/L3 Ethernet switch with 2.3 μs latency. The price of this 48 port gigabit switch with 10Gb uplinks is US$ 1,699.00. The figure below shows the front and back panels of the FS.COM S5800-48F4S 48 port gigabit SFP switch.
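The headline figures are easy to sanity-check from the port counts alone. The short Python sketch below (an illustrative back-of-the-envelope calculation, not vendor software) derives the 176 Gbps switching capacity and the 130.95 Mpps forwarding rate from the 48×1GbE and 4×10GbE ports, assuming full-duplex operation and minimum-size 64-byte frames with 20 bytes of preamble and inter-frame gap:

```python
# Back-of-the-envelope check of the S5800-48F4S datasheet figures.
GBE_PORTS, GBE_SPEED = 48, 1          # 48 x 1GbE SFP ports
TEN_GBE_PORTS, TEN_GBE_SPEED = 4, 10  # 4 x 10GbE SFP+ uplinks

# Sum of all port speeds = non-blocking bandwidth (one direction).
line_rate_gbps = GBE_PORTS * GBE_SPEED + TEN_GBE_PORTS * TEN_GBE_SPEED  # 88 Gbps

# Switching capacity counts both directions (full duplex).
switching_capacity_gbps = line_rate_gbps * 2  # 176 Gbps

# Forwarding rate at minimum frame size: a 64 B frame occupies
# 64 B + 20 B (preamble + inter-frame gap) = 84 B = 672 bits on the wire.
forwarding_rate_mpps = line_rate_gbps * 1e9 / (84 * 8) / 1e6

print(switching_capacity_gbps)         # 176
print(round(forwarding_rate_mpps, 2))  # 130.95
```

Both results match the datasheet values quoted above, which confirms the switch is rated for line-rate forwarding on every port simultaneously.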
Highlights of 48 Port Gigabit SFP Switch
The S5800-48F4S 48 port gigabit SFP switch with 10GE SFP+ uplinks comes with complete system software, offering comprehensive protocols and applications to facilitate rapid service deployment and management of traditional L2/L3/MPLS networks. With support for advanced features including MLAG, sFlow, and SNMP, this switch is ideal for traditional or fully virtualized data centers. The S5800-48F4S hardware also provides high-availability features, including pluggable redundant fans, and uses high-quality electronic components to ensure low power consumption.
Applications of 48 Port Gigabit SFP Switch
Designed with 48×1GbE SFP ports and 4×10GbE SFP+ ports, the FS.COM S5800-48F4S 48 port gigabit SFP switch can accomplish N×1G to N×1G (N≤48) connections or N×10G to N×10G (N≤4) connections. For example, in a 5G-to-5G connection, five 1G SFP transceivers are plugged into the SFP ports of one S5800-48F4S switch, and another five 1G SFP transceiver modules are plugged into the SFP ports of the peer switch. These five pairs of SFP optical transceivers are then connected by five fiber optic cables. Note that the transceivers and fiber patch cables used in a link must be of the same type.
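As a small illustration of the N≤48 and N≤4 limits above, here is a hypothetical Python helper (not vendor software) that checks whether a planned interconnect fits the S5800-48F4S port counts:

```python
# Port counts of the S5800-48F4S from the text above.
SFP_PORTS, SFP_PLUS_PORTS = 48, 4

def link_plan_fits(n_1g: int, m_10g: int = 0) -> bool:
    """Return True if an N x 1G plus M x 10G link plan fits one switch."""
    return 0 <= n_1g <= SFP_PORTS and 0 <= m_10g <= SFP_PLUS_PORTS

print(link_plan_fits(5))   # True  -> the 5G-to-5G example above
print(link_plan_fits(49))  # False -> exceeds the 48 SFP ports
```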
Supported Accessories for 48 Port Gigabit SFP Switch
In the section above, we mentioned that the S5800-48F4S 48 port gigabit SFP switch can be used with SFP transceivers, SFP+ modules, and fiber optic cables. This section introduces some supported accessories for this 48 port gigabit SFP switch.
| ID | Type | Wavelength | Transmission Distance | Interface | DOM Support |
| --- | --- | --- | --- | --- | --- |
| 29838 | 1000BASE-SX SFP | 850 nm | 550 m over OM2 MMF | LC duplex, MMF | Yes |
| 20057 | 1000BASE-T SFP | – | 100 m over Cat5 | RJ45 | No |
| 29849 | 1000BASE-LX/LH SFP | 1310 nm | 10 km | LC duplex, MMF/SMF | Yes |
| 11591 | 10GBASE-LR SFP+ | 1310 nm | 10 km | LC duplex, SMF | Yes |
| 11589 | 10GBASE-SR SFP+ | 850 nm | 300 m over OM3 MMF | LC duplex, MMF | Yes |
| ID | Cable Length | Connector | Type | Fiber Count | Polish Type | Jacket Material |
| --- | --- | --- | --- | --- | --- | --- |
| 21278 | 2 m | SFP+ to SFP+ | Passive Copper Cable (DAC) | – | – | PVC (OFNR) |
| 35194 | 3 m | SFP+ to SFP+ | Passive Copper Cable (DAC) | – | – | PVC (OFNR) |
| 40191 | 1 m | LC to LC | OS2 | Duplex | UPC to UPC | PVC |
| 40192 | 2 m | LC to LC | OS2 | Duplex | UPC to UPC | PVC |
| 41730 | 1 m | LC to LC | OM3 | Duplex | UPC to UPC | PVC |
| 40180 | 1 m | LC to LC | OM4 | Duplex | UPC to UPC | PVC |
| ID | Wavelength | Channel Spacing | Channel Bandwidth | Line Type | Client Port | Special Port |
| --- | --- | --- | --- | --- | --- | --- |
| 33489 | 18 channels, 1270-1610 nm | 20 nm | ±6.5 nm | Dual fiber | Duplex LC/UPC | Monitor Port |
| 43099 | 8 channels, 1470-1610 nm | 20 nm | ±6.5 nm | Dual fiber | Duplex LC/UPC | Expansion Port |
As data centers grow larger, cable density increases too. To simplify cabling, many data center managers prefer network switches with high-density ports. The 48 port gigabit SFP switch with 10GE SFP+ uplinks described above is a suitable choice for high-density cabling.
Today, large data centers are upgrading from 10G to 40G or 100G, while some small home labs migrate from 1G to 10G. For a small business data center, a 24 port switch is often enough. HP switches are popular with many data center designers for their high quality and low price, and the HP ProCurve 2910al-24G Switch (J9145A) is highly recommended in many forums. This article guides you through a closer look at the HP ProCurve 2910al-24G Switch (J9145A).
The HP ProCurve 2910al-24G switch (J9145A) is a 24 port switch that can be used to build a high-performance switched network, offering low latency for high-speed networking. For port configuration, it has twenty 10/100/1000 ports, four dual-personality ports, one RJ45 serial console port, and four 10G ports. The HP ProCurve 2910al-24G switch provides some of the most flexible and easy-to-deploy uplinks in its class. It can be deployed at the enterprise edge, in remote branch offices, in converged networks, and at the data center top of rack.
This part will give a detailed introduction to the network ports on HP ProCurve 2910al-24G switch and cabling solutions for the switch.
- Twenty 10/100/1000 ports: All these ports support auto-MDIX, which means you can use either straight-through or crossover twisted-pair cables to connect any network device to the switch.
- Four dual-personality ports: Each port can be used as either an RJ45 10/100/1000 port or as a mini-GBIC slot for use with mini-GBIC transceivers. By default, the RJ45 connectors are enabled. If a mini-GBIC is installed in a slot, it is enabled and the associated RJ45 connector is disabled and cannot be used. If the mini-GBIC is removed, the associated RJ45 port is automatically re-enabled.
- Four 10G ports: These ports provide 10G connectivity through either copper or fiber optic media.
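The dual-personality port rule described above can be modeled in a few lines. This is an illustrative sketch of the documented behavior, not HP's firmware logic:

```python
# Dual-personality port: the mini-GBIC slot takes precedence over its
# paired RJ45 connector; removing the module re-enables the RJ45 port.
def active_media(mini_gbic_installed: bool) -> str:
    """Return which connector of a dual-personality port carries traffic."""
    return "mini-GBIC" if mini_gbic_installed else "RJ45"

print(active_media(False))  # RJ45 (the default)
print(active_media(True))   # mini-GBIC (the RJ45 connector is disabled)
```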
Since the HP ProCurve 2910al-24G switch has 10/100/1000 ports, it can be used for 1G-to-1G connections. As shown in the following figure, two 1000BASE-SX SFP transceiver modules are plugged into 1G ports on two HP ProCurve 2910al-24G switches, and the two modules are then connected via an LC multimode fiber optic cable.
Designed with 10G ports, the HP ProCurve 2910al-24G switch can realize a 10G-to-10G connection just like the 1G connection above: simply replace the 1000BASE-SX SFP transceiver modules with 10GBASE-SR SFP+ transceivers. Alternatively, you can accomplish a 10G-to-10G connection using a 10G SFP+ to SFP+ DAC twinax cable. The following figure shows this cabling solution.
The HP ProCurve 2910al-24G switch (J9145A) is a high-performance Gigabit Ethernet switch and a good choice for small business network deployments. If you need compatible fiber optic transceivers and fiber optic cables for the HP ProCurve 2910al-24G switch, have a look at FS.COM. The following table shows some compatible optical components; for more details, visit our site.
| ID NO. | Model Number | Description |
| --- | --- | --- |
| 13261 | HPE 1000BASE-SX SFP | HPE J4858A Compatible 1000BASE-SX SFP 850nm 550m DOM Transceiver, US$ 10.00 |
| 30531 | HPE 1000BASE-LX SFP | HPE JD119B Compatible 1000BASE-LX SFP 1310nm 10km DOM Transceiver, US$ 12.00 |
| 32156 | HPE 1000BASE-LH SFP | HPE JD061A Compatible 1000BASE-LH SFP 1310nm 40km DOM Transceiver, US$ 14.00 |
| 36784 | HPE 10G SFP+ DAC | 1m (3ft) HPE J9281B Compatible 10G SFP+ Passive Direct Attach Copper Twinax Cable, US$ 22.00 |
| 11559 | HPE 10GBASE-SR SFP+ | HPE J9150A Compatible 10GBASE-SR SFP+ 850nm 300m DOM Transceiver, US$ 20.00 |
| 31597 | HPE 10GBASE-LRM SFP+ | HPE JD093A Compatible 10GBASE-LRM SFP+ 1310nm 220m DOM Transceiver, US$ 34.00 |
| 15427 | HPE 10GBASE-LR SFP+ | HPE JD094B Compatible 10GBASE-LR SFP+ 1310nm 10km DOM Transceiver, US$ 34.00 |
With the growing prevalence of Dell PowerEdge servers, the Dell PowerEdge R710 has gradually caught Ethernet users' attention for its competitive price, superb quality, and low power consumption. As part of the Dell PowerEdge Select Network family, Intel Ethernet Network Adapters are high-performance adapters for 1/10/25/40GbE Ethernet network connections. This article gives a brief introduction to the Dell R710, the Intel NDC for the Dell R710, and optics for the Intel NDC.
The Dell PowerEdge R710 is a 2U rack server that supports up to two quad- or six-core Intel Xeon 5500 and 5600 series processors and up to eight SATA or SAS hard drives, giving you up to 18TB of internal storage. Its 18 memory slots allow for a maximum of 288GB of memory, letting the R710 handle almost any memory-intensive task you throw at it. Its low power consumption and high performance capacity help you save both money and time. We provide detailed information about the Dell PowerEdge R710 in the next two sections.
Figure 1: Dell R710 2U Rack Server (Source: www.DELL.com)
The Dell PowerEdge R710 adopts Intel's Xeon 5500 and 5600 series processors, which offer up to 12MB of cache and up to six cores. These processors are well suited for increased security, I/O innovation, network capabilities, and strong overall performance.
The R710 supports a high level of internal storage, maxing out at 18TB. That includes up to six 3.5″ hard drives or eight 2.5″ hard drives, with support for 6Gb/s SAS, SATA, and SSD drives.
The R710 sports Dell’s new iDRAC6 management controller, which has a dedicated network port at the rear of the server. It provides a web browser interface for remote monitoring and viewing the status of critical server components, and the Enterprise upgrade key brings in virtual boot media and KVM-over-IP remote access.
Based on Symantec’s Altiris Notification Server, the Management Console takes over from Dell’s elderly IT Assistant and provides the tools to manage all your IT equipment, instead of just Dell servers. Installation is a lengthy process, but it kicks off with an automated search process that populates its database with discovered systems and SNMP-enabled devices.
According to the Dell PowerEdge R710 datasheet, the Dell PowerEdge R710 servers support the dual-port 10Gb enhanced Intel Ethernet server adapter X520-DA2. This Dell Intel Ethernet Network Daughter Card X520/I350 provides low-cost, low-power flexibility and scalability for the entire data center. The Intel Ethernet Network Daughter Card X520-DA2/I350-T2 provides two 1000BASE-T ports and two 10G SFP+ ports, as shown in Figure 2, to support 1GbE/10GbE Ethernet network connectivity.
Figure 2: Intel Ethernet Network Daughter Card X520-DA2
As mentioned before, the Intel Ethernet Network Daughter Card X520-DA2/I350-T2 provides two 1000BASE-T ports and two 10G SFP+ ports. Since the 1000BASE-T ports are RJ45 ports, ordinary twisted-pair copper cables give you 1G network connections. For 10G, we can plug Intel-compatible SFP+ transceivers into the 10G ports and connect them with fiber optic cable. In addition, we can also use a Direct Attach Cable (DAC), such as an Intel 10G SFP+ DAC cable, to achieve 10G network connectivity. DAC cables are suitable for data transmission over very short links, while optical modules are more appropriate for longer transmission distances.
Figure 3: Intel NDC X520-DA2 with 10G optical modules
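The DAC-versus-optics guidance above can be expressed as a simple rule of thumb. The 7 m cutoff in this Python sketch is an illustrative assumption (a typical reach for passive SFP+ DAC), not a figure from the Intel or Dell documentation:

```python
# Rule-of-thumb 10G media picker based on the guidance above.
# ASSUMPTION: 7 m is used as a typical passive SFP+ DAC reach limit.
def pick_10g_media(link_length_m: float) -> str:
    """Suggest DAC for very short links, optics plus fiber otherwise."""
    if link_length_m <= 7:
        return "SFP+ passive DAC"
    return "SFP+ optical transceiver + fiber"

print(pick_10g_media(2))   # SFP+ passive DAC
print(pick_10g_media(50))  # SFP+ optical transceiver + fiber
```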
The Dell PowerEdge R710 server, paired with the high-performance Intel Ethernet Network Daughter Card X520-DA2, offers a 2U rack platform that efficiently addresses a wide range of key business applications. With this Intel NDC card, it also provides a solid solution for 1G and 10G Ethernet network connectivity. You can rest assured you are running a low-consumption, high-capacity server for your business.
Recently I ran into a problem: I want to interconnect a WS-C2960-24TC-L switch (with 2 SFP ports) with a WS-C3750G-12S switch (with 12 SFP ports) using 1000BASE-LX/LH optics, but I do not know whether to choose the GLC-LH-SMD or the GLC-LH-SM module. I am going to use 1310 nm 9/125 single-mode fiber optic cable, and I am not sure whether I need single-mode duplex fiber (2 fiber strands) or a single fiber. I also want to confirm whether the 12 SFP ports on the WS-C3750G-12S use LC connectors. The solution to these SFP transceiver module connection questions is provided in this article.
In fact, I received several answers. One person said the prices of the two modules are usually the same, but the GLC-LH-SMD SFP transceiver module additionally supports DOM (digital optical monitoring), so he uses the GLC-LH-SMD. As for the fiber cable, he suggested a single fiber, but since these modules transmit and receive on separate fibers, I believe I should use single-mode duplex fiber (2 fiber strands) and connect it to the WS-C3750G-12S.
Then someone from Fiberstore answered that both the GLC-LH-SM and GLC-LH-SMD SFP transceiver modules support the IEEE 802.3 1000BASE-LX/LH standard and are compatible with each other; the difference is that the GLC-LH-SMD transceiver additionally supports digital optical monitoring. As for the second question: 1000BASE-FX, 1000BASE-SX, 1000BASE-LX/LH, 1000BASE-ZX, 1000BASE-BX10, DWDM, and CWDM SFP transceiver modules use LC fiber connectors (single-mode or multimode fiber), and 10GBASE-SR, LR, LRM, and CX1 (v02 or higher) SFP+ transceivers also use LC fiber connectors (single-mode or multimode fiber).
If you run into similar problems, I hope this helps. One small tip: if you are wondering where to get SFP transceivers, Fiberstore is a good place. I found it to be a professional site with reasonable prices, and it is worth mentioning that Fiberstore is currently running a big sale.
With the continual expansion of business data centers, the most difficult problems mainly concern divided network environments: put simply, a separate data network, a separate storage network, and a separate compute network, each with different protocols and standards and their own disadvantages. Different networks need different adapter cards, space, power, and cooling infrastructure, and the resulting tangle of interlaced network cabling can make many administrators dizzy. For each network environment, the company needs a professional technical team for support, management, and maintenance. All these problems block the forward development of the data center, let alone meet the future of cloud computing. The figure shows a modern data center.
You may ask: what qualities do we expect of next-generation data center networks? The key characteristics are simplicity, virtualization support, the ability to accommodate the growing size of the data center, higher bandwidth, low latency, non-blocking operation, and so on. Mainstream network equipment vendors have aimed their products and solutions at these goals, so is there an architecture able to support all of these features? The answer is yes: the unified fabric. Whether it is Cisco UCS, Brocade VCS, Juniper's 3-2-1 plan, or H3C's unified fabric, the unified fabric concept is present in each. This page discusses two key points.
Key Technologies Under the Unified Fabric Architecture
As Gigabit links decline at server connections, the share of 10 Gigabit Ethernet is rising, mainly due to the growth of enterprise data center network traffic. In fact, with the development of the 40G/100G standards, it is certain that 10 Gigabit Ethernet will replace Gigabit networks. The birth of the low-power 10G SFP+ transceiver standard, together with SFP+ direct attach cables, makes low-cost 10 Gigabit Ethernet achievable. But being mainstream alone is not enough; the technology introduced next carries this further.
FCoE Protocol (Fibre Channel over Ethernet)
FCoE is currently one of the most prominent technologies in data center networking, and every vendor talks about it when describing the development of their products. FCoE, Fibre Channel over Ethernet, encapsulates Fibre Channel frames inside Ethernet frames, so that Fibre Channel storage (SAN) requests and data can be transmitted over an Ethernet connection, without the need for a dedicated Fibre Channel fabric. Its main benefits: first, storage traffic and network traffic share the same Ethernet cable and a converged network adapter, simplifying management and reducing energy consumption; second, it provides performance comparable to native Fibre Channel optics; third, it can integrate effectively with existing SANs.
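Conceptually, an FCoE frame is just an Ethernet frame whose EtherType is 0x8906 and whose payload carries the Fibre Channel frame. The Python sketch below illustrates that idea; it is deliberately simplified (the real FCoE encapsulation defined in the T11 FC-BB-5 standard also carries version/reserved fields and SOF/EOF markers):

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in an Ethernet frame.

    Simplified illustration: the real FCoE header also includes
    version/reserved fields plus SOF/EOF delimiters around the FC frame.
    """
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    return eth_header + fc_frame

frame = encapsulate_fc_frame(b"\x01" * 6, b"\x02" * 6, b"FC-PAYLOAD")
print(frame[12:14].hex())  # 8906 -> the FCoE EtherType
print(len(frame))          # 24   -> 14-byte Ethernet header + 10-byte payload
```

This is exactly why FCoE lets storage and data traffic share one cable: to the Ethernet switch, the SAN traffic is simply another frame type to forward.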
Finally, I now work for Fiberstore, and on our website we provide advanced technology and effective support to help you solve fiber optics problems. At the same time, we provide a full range of fiber optic products, such as 10/100/1000BASE-T Ethernet SFPs, 7m QSFP 4x10G AOC cables, and GLC-FE-100LX-RGD modules. If you are interested, take a look.