Fiber Optic Network Solutions From Gigalight

Gigalight provides Optical Transceivers, Passive Optical Components, Active Optical Cables, MTP/MPO Data Center Cabling Products and other fiber optic network accessories.

A Guide to the Interfaces of Optical Transceiver Modules

In today's optical communications market, there is a wide variety of transceiver modules with different types of interfaces. Because different cables, connectors, and adapters are required for different interfaces, we need to pay close attention when selecting the related assemblies. This article gives a detailed introduction to the mainstream transceiver module interfaces on the market, so that you can gain a clearer understanding of transceiver modules.


First of all, the following table lists the interfaces of the mainstream transceiver modules.


| Form Factor | Transmission Mode | Interface | Example |
|-------------|-------------------|-----------|---------|
| QSFP-DD | Parallel | MPO | 200G QSFP-DD SR8/PSM8 |
| QSFP-DD | Multiplexing | Dual CS | 200G QSFP-DD CWDM8 |
| QSFP28 | Parallel | MPO | 100G QSFP28 SR4/PSM4 |
| QSFP28 | Multiplexing | Duplex LC | 100G QSFP28 LR4/CLR4/CWDM4/ER4 |
| QSFP+ | Parallel | MPO | 40G QSFP+ SR4/PSM4 |
| QSFP+ | Multiplexing | Duplex LC | 40G QSFP+ LR4 |
| SFP28 | Dual Fiber | Duplex LC | 25G SFP28 SR/LR |
| SFP28 | Single Fiber Bidirectional | Simplex LC | 25G SFP28 BiDi |
| SFP+ | Dual Fiber | Duplex LC | SFP+ 10GBASE-SR/LR |
| SFP+ | Single Fiber Bidirectional | Simplex LC/SC | SFP+ BiDi |
| SFP+ | 2-channel Bidirectional | Dual LC | SFP+ 2-channel BiDi |
| SFP+ | Electrical (Copper Cable) | RJ-45 | SFP+ 10GBASE-T |
| SFP | Dual Fiber | Duplex LC | SFP 1000BASE-SX/LX |
| SFP | Single Fiber Bidirectional | Simplex LC/SC | SFP BiDi |
| SFP | 2-channel Bidirectional | Dual LC | SFP 2-channel BiDi (CSFP) |
| SFP | Electrical (Copper Cable) | RJ-45 | SFP 1000BASE-T |
| CXP | Parallel | MPO | 120G CXP SR10 |
| CFP | Parallel | MPO | 100G CFP SR10 |
| CFP | Multiplexing | Duplex LC | 100G CFP LR4/ER4 |
| CFP2 | Parallel | MPO | 100G CFP2 SR10 |
| CFP2 | Multiplexing | Duplex LC | 100G CFP2 LR4/ER4 |
| CFP4 | Parallel | MPO | 100G CFP4 SR4 |
| CFP4 | Multiplexing | Duplex LC | 100G CFP4 LR4/ER4 |

As the table shows, although there are more than a dozen types of transceiver modules, there are only a few types of interfaces. The optical interfaces are LC, SC, MPO, and CS, and there are also electrical copper transceiver modules using the RJ-45 interface. Among these, the LC interface can be divided into duplex and simplex types, and there is also a dual-simplex LC interface (as used by the CSFP). For BiDi optical transceivers, there is a simplex SC interface in addition to the simplex LC. We will introduce each of these interfaces one by one, according to the transmission modes of the transceiver modules.


LC/SC


As we know, a transceiver module consists of a transmitter and a receiver, which means that transmission occurs in two directions. For the common single-channel optical transceivers, such as SFP28, SFP+, and SFP, the transmitting end is connected to one optical fiber and the receiving end is connected to another. That is why these common optical transceivers are generally called dual-fiber transceivers. A dual-fiber transceiver has a duplex LC interface connected to a duplex LC patch cable. (The XENPAK, X2, and GBIC dual-fiber transceivers, not listed in the table, have a duplex SC interface connected to a duplex SC patch cable.)


Standard Transmission Mode of Transceiver Modules


Standard Dual-Fiber Optical Transceiver Modules


The single-fiber bidirectional transmission mode is called BiDi for short. In BiDi transmission, the signals in both directions are combined in a single fiber, typically on different wavelengths, so that the light traveling in each direction does not interfere with the other. BiDi optical transceivers, such as the BiDi SFP+ and BiDi SFP, have a simplex LC or SC interface connected to a simplex LC or SC patch cable. For high-density BiDi transmission networks, there are also 2-channel BiDi SFP+/SFP (CSFP+/CSFP) optical transceivers using a dual simplex LC interface.


Single-Fiber Bidirectional Transmission Mode of Transceiver Modules


Single-Fiber BiDi Optical Transceiver Modules


MPO


For multi-channel optical transceivers, such as the 4-channel QSFP+, 4-channel QSFP28, and 8-channel QSFP-DD, there are several Tx and several Rx channels. Some of them (such as the 100G QSFP28 SR4 and 100G QSFP28 PSM4) have MPO (Multi-fiber Push On/Pull Off) interfaces, using multiple optical fibers for parallel transmission as shown in the figure below.


Multi-Fiber Parallel Transmission Mode of Transceiver Modules


Multi-Fiber MPO Optical Transceiver Modules


There are also dual-fiber 4-channel optical transceivers using the multiplexing transmission mode, in which the multiple Tx signals are multiplexed onto one fiber and demultiplexed at the Rx end. These optical transceivers, such as the 40G QSFP+ LR4 and 100G QSFP28 CWDM4, use two optical fibers for long-distance transmission, saving optical fiber resources compared with multi-fiber parallel transmission. Like the common single-channel optical transceivers, the dual-fiber 4-channel optical transceivers also have a duplex LC interface connected to a duplex LC patch cable.


CS


The QSFP-DD MSA specification defines an 8-channel module, cage, and connector system that provides backward compatibility with 4-channel QSFP28 modules. Doubling the number of duplex optical links within the QSFP-DD specification requires a new, smaller optical interconnect that fits in the same physical cage form factor. For the 8-channel QSFP-DD optical transceivers using the multiplexing transmission mode, a new type of optical interface called dual CS is used in place of the duplex LC. The dual CS interface mates with the CS connector, a miniature single-position plug characterized by two cylindrical, spring-loaded butting ferrules of 1.25 mm typical diameter and a push-pull coupling mechanism. The CS connector brings the characteristics and simplicity of the duplex LC connector into a smaller footprint, allowing two pairs of CS connectors to fit within the physical constraints of the QSFP-DD form factor.


CS connector


RJ-45


The RJ-45 interface is used in copper transceiver modules, such as 10G copper SFP+, 1G copper SFP and 100M copper SFP. The copper SFP+ transceivers transmit electrical signals over Category 6a or Category 7 copper cables with RJ-45 connectors, while the copper SFP transceivers transmit electrical signals over Category 5 or Category 5e copper cables with RJ-45 connectors.


RJ-45 copper SFP

Article Source: http://www.gigalight.com/news_detail/newsId=430.html

The Trend of DSP's Application in Data Center

100G has begun to be deployed at scale in the data center, and the next-generation 400G is expected to enter commercial use by 2020. For 400G applications, the biggest difference is the introduction of a new modulation format, PAM-4, which doubles the transmission rate at the same baud rate (device bandwidth). For example, the single-lane data rate of DR4, used for transmissions up to 500m, needs to reach 100Gb/s. To realize applications at such rates, data center optical transceiver modules have begun to introduce Digital Signal Processor (DSP) chips based on digital signal processing, replacing the clock recovery chips of the past, to solve the sensitivity problem caused by the insufficient bandwidth of the optical devices. Can DSP become a broad solution for future data center applications, as the industry expects? To answer this question, it is necessary to understand what problems DSP can solve, what its architecture is, and how its cost and power consumption will trend in the future.
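As a small illustration of why PAM-4 doubles the bit rate at a given baud rate, here is a toy Python sketch; the Gray-coded bit-pair mapping below is a common convention chosen for the demo, not a quote from any particular standard.

```python
# Toy illustration: mapping bit pairs to PAM-4 amplitude levels.
# Each PAM-4 symbol carries 2 bits, so at the same symbol (baud) rate the
# bit rate is double that of NRZ (1 bit per symbol).

# Gray-coded bit-pair -> level mapping (a common convention, assumed here)
GRAY_MAP = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def pam4_modulate(bits):
    """Group bits into pairs and map each pair to one of 4 amplitude levels."""
    assert len(bits) % 2 == 0, "PAM-4 consumes bits two at a time"
    return [GRAY_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

bits = [0, 0, 0, 1, 1, 1, 1, 0]
symbols = pam4_modulate(bits)
print(symbols)  # [-3, -1, 1, 3]
# 8 bits -> 4 symbols: the same number of symbols an NRZ link would need
# for only 4 bits, i.e. twice the throughput at the same device bandwidth.
```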


The Problems that DSP Can Solve

In the field of physical-layer transmission, DSP was first applied in wireless communications, for three reasons. First, the wireless spectrum is a scarce resource while the demand for transmission rate keeps increasing, so improving spectral efficiency is a fundamental requirement of wireless communications, and DSP is required to support a variety of complex, efficient modulation methods. Second, the transmission equation of the wireless channel is very complicated: the multipath effect, and the Doppler effect under high-speed motion, cannot be compensated adequately by traditional analog techniques, whereas DSP can use various mathematical models to compensate the channel's transmission equation well. Third, the Signal-to-Noise Ratio (SNR) of the wireless channel is generally low, so Forward Error Correction (FEC) is needed to improve the sensitivity of the receiver.


In the field of optical communications, DSP was first commercially used in long-distance coherent transmission systems at 100G and above. The reasons are similar to those in wireless communications. First, in long-distance transmission, since the cost of laying optical fiber is very high, improving spectral efficiency to achieve higher transmission rates on a single optical fiber is an inevitable requirement for operators; therefore, after the adoption of WDM technology, the use of DSP-based coherent technology became an inevitable choice. Second, in long-distance coherent transmission systems, a DSP chip can easily compensate the dispersion effects, the nonlinear effects caused by the transmitter (Tx) and receiver (Rx) devices and the optical fiber itself, and the phase noise introduced by the Tx and Rx devices, without the Dispersion Compensation Fiber (DCF) that was placed in the optical link in the past. Finally, in long-distance transmission, due to the attenuation of optical fibers, an optical amplifier (EDFA) is generally used to amplify the signal every 80km to reach transmission distances of up to 1000km. Each amplification introduces noise and reduces the SNR of the signal; therefore, FEC must be introduced to improve the receiver's sensitivity in long-distance transmission.


To sum up, DSP can solve three problems. First, it supports high-order modulation formats and can improve spectral efficiency. Second, it can compensate the impairments caused by the components and the transmission channel. Third, it can solve the SNR problem.


Whether similar requirements exist in the data center is therefore an important basis for judging whether DSP should be introduced.


First of all, let's take a look at spectral efficiency. Does the data center need to improve spectral efficiency? The answer is yes. But unlike the transmission network, where wireless spectrum and optical fiber resources are scarce, the reasons for improving spectral efficiency in the data center are the insufficient bandwidth of the electrical/optical devices and the limited number of wavelength-division/parallel lanes (constrained by the size of optical transceiver modules). Therefore, to meet the needs of future 400G applications, we must rely on increasing the single-lane baud rate.


The second point is that for single-lane 100G and above applications, current Tx electrical driver chips and optical devices cannot reach bandwidths above 50GHz, which is equivalent to introducing a low-pass filter at the transmitter. In the time domain, this manifests as inter-symbol interference. Taking the 100G PAM-4 application as an example, the bandwidth-limited modulation device makes the width of the signal's optical eye diagram very small, so the traditional clock recovery based on an analog PLL cannot find the best sampling point and the receiver cannot recover the signal (this is also why TDECQ introduces an adaptive filter for equalization in the standards). After introducing DSP, the signal can be spectrally compressed directly at the Tx end. For example, the extreme approach is to deliberately introduce inter-symbol interference between two adjacent symbols to reduce the signal bandwidth at the Tx end; the PAM-4 eye diagram on the oscilloscope then takes on a PAM-7-like form, and the Rx end recovers the signal through an adaptive FIR filter. In this way, the uncontrollable analog bandwidth effect of the modulating/receiving device becomes a known digital spectral compression, reducing the bandwidth requirement on the optical device. Fujitsu's DMT (Discrete Multi-Tone) modulation technology, which has been promoted in conjunction with DSP, can even use a 10G optical device to transmit 100G signals.
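The "extreme approach" described above resembles classic partial-response (duobinary-style) signaling. The sketch below is our own illustration of that idea, not Fujitsu's or any vendor's actual implementation: applying a 1+D response deliberately mixes each PAM-4 symbol with its predecessor, producing a 7-level signal with a compressed spectrum that an adaptive filter at the receiver can undo.

```python
# Illustrative sketch (an assumption for demonstration, not real DSP code):
# a 1+D partial-response filter adds controlled inter-symbol interference
# between adjacent PAM-4 symbols, turning the 4-level signal into a
# 7-level ("PAM-7-like") signal with roughly half the bandwidth.

def partial_response_1_plus_D(symbols):
    """y[n] = x[n] + x[n-1]: deliberately mixes neighboring symbols."""
    prev = 0
    out = []
    for s in symbols:
        out.append(s + prev)
        prev = s
    return out

pam4 = [-3, -1, 1, 3, 3, -3, 1, -1]
pam7_like = partial_response_1_plus_D(pam4)
print(pam7_like)  # [-3, -4, 0, 4, 6, 0, -2, 0]

# All possible pairwise sums of PAM-4 levels give exactly 7 output levels:
levels = sorted({a + b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)})
print(levels)     # [-6, -4, -2, 0, 2, 4, 6]
```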


Third, does FEC technology really need to be introduced at the module end? Inside the data center, the maximum transmission distance is no more than 10km, and the link budget is about 4dB including connector losses, so the SNR penalty caused by the link itself is basically negligible. FEC in the data center is therefore not intended to solve the link SNR, but to compensate for the performance shortfall of the optical devices. At the same time, the electrical interface signal at the optical module end is upgraded from 25G NRZ to 50G PAM-4 (net rate) in the 400G era, so electrical FEC often must be enabled to meet the requirements of transmission between the optical transceivers and the switches. In this case, opening another FEC on the module side is unnecessary and has no effect, because for FEC what matters is the error correction threshold. For example, a 7%-overhead FEC has an error correction threshold around a 1E-3 Bit Error Rate (BER): FEC can correct essentially all errors below this BER, and above it FEC is essentially useless (setting aside burst errors, which are usually handled with an interleaver). Therefore, there is no difference between cascading multiple FECs and using only the best one. Considering the power consumption and latency that FEC adds on the module side, it may be better to run FEC on the switch side in the future.
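The threshold behavior can be illustrated with a toy code. The real FECs discussed here are Reed-Solomon based; the 3x repetition code below is only a stand-in we chose for brevity, but it shows the same qualitative cliff: below a certain raw BER the decoder corrects essentially everything, while above it the coding gain collapses.

```python
import random

# Toy model (a 3x repetition code, NOT the real RS-FEC used on 400G links)
# illustrating the FEC threshold behavior: each bit is sent 3 times over a
# binary symmetric channel and decoded by majority vote, so the post-FEC
# error rate is about 3*p^2 for small raw BER p.

def repetition3_link(n_bits, raw_ber, rng):
    """Return the post-decoding BER over n_bits at the given raw BER."""
    errors = 0
    for _ in range(n_bits):
        flips = sum(rng.random() < raw_ber for _ in range(3))
        if flips >= 2:      # majority vote fails only with 2+ flips
            errors += 1
    return errors / n_bits

rng = random.Random(42)
for raw in (1e-3, 0.2):
    print(raw, repetition3_link(100_000, raw, rng))
# at raw BER 1e-3 the output is essentially error-free (~3e-6 expected);
# at raw BER 0.2 the code barely helps (~0.1 expected)
```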


The Architecture of DSP

In the optical communication field, a DSP generally consists of several parts: the front-end analog/digital mixed-signal section, including the ADC (Analog-to-Digital Converter, required), DAC (Digital-to-Analog Converter, optional) and SerDes; the digital signal processing section (including FEC); and the PHY section. The PHY section is similar to a CDR chip with the PHY function, and will not be described here.



The main function of the ADC and DAC is to convert between analog and digital signals; they are the bridge between the modulation device and the digital signal processing section. The ADC/DAC has four key indicators: sampling rate, effective bit width, analog bandwidth, and power consumption. For the 100G PAM-4 application, the sampling rate of the ADC at the Rx end needs to reach 100Gs/s; otherwise, aliasing will occur during sampling and distort the signal. The effective bit width is also very important: for PAM-4 applications, 2 effective bits are not enough to satisfy the requirements of digital signal processing; at least 4 are needed. Analog bandwidth is currently the main technical challenge for the ADC/DAC, limited by both effective bit width and power consumption. Generally, there are two ways to implement a high-bandwidth ADC/DAC: SiGe and CMOS. The former has a high cutoff frequency and can easily achieve high bandwidth, but its power consumption is very high, so it is generally used in instrumentation. The cutoff frequency of CMOS is much lower, so to achieve high bandwidth, multiple sub-ADCs/DACs must be time-interleaved; the advantage is low power consumption. For example, in a coherent 100G communication system, a 65Gs/s ADC with 6 effective bits is composed of 256 sub-ADCs, each with a sampling rate of 254Ms/s. Note that although this ADC has a sampling rate of 65Gs/s, its analog bandwidth is only 18GHz, and with a clock jitter of 100fs, the theoretical maximum analog bandwidth at 4 effective bits is only about 30GHz. An important conclusion follows: when DSP is used, the bandwidth bottleneck of the system is generally no longer the optical device, but the ADC and DAC.
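The interleaved-ADC figures quoted above can be checked with quick arithmetic:

```python
# Arithmetic check of the time-interleaved ADC example: 256 sub-ADCs,
# each sampling at 254 Ms/s, are combined into one fast converter.

n_sub = 256
sub_rate = 254e6                 # Hz per sub-ADC
aggregate = n_sub * sub_rate
print(aggregate / 1e9)           # ≈ 65 Gs/s aggregate sampling rate

# Nyquist: a 65 Gs/s converter could in principle capture up to ~32.5 GHz,
# but the quoted analog front-end bandwidth (18 GHz) is the real limit.
nyquist_ghz = aggregate / 2 / 1e9
print(nyquist_ghz)               # ≈ 32.5 GHz
```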



In data center applications, the digital signal processing unit is still relatively simple. For example, for 100G PAM-4 applications, it performs spectral compression of the transmitted signal, nonlinear compensation, and FEC encoding (optional) at the Tx end; at the Rx end, after the ADC, an adaptive filter compensates the signal and a digital-domain CDR recovers the clock (separate external crystal support is required). In the digital signal processing unit, an FIR filter is generally used to compensate the signal; the tap count and the decision function design of the FIR filter directly determine the compensation performance and power consumption of the DSP. It should be pointed out that DSP applications in optical communications face a massive parallel computing problem. The main reason is the huge gap between the ADC sampling rate (tens of Gs/s, or even 100Gs/s) and the operating frequency of digital circuits (up to several hundred MHz): to support an ADC with a 100Gs/s sampling rate, the digital circuits must convert the serial 100Gs/s signal into hundreds of parallel digital signals for processing. One can imagine that when the FIR filter adds just one tap, in reality hundreds of taps need to be added. Therefore, how to balance performance and power consumption in the digital signal processing unit is the key factor determining the quality of a DSP design. In addition, inside the data center, optical transceiver modules must meet the prerequisite of interoperability. In practical applications, the transmission performance of a link depends on the overall performance of the DSP and the analog optical devices at both the Tx and Rx ends, so designing a reasonable standard to correctly evaluate the performance of the Tx and Rx ends is also a difficulty. When the DSP's FEC function is opened at the physical layer, synchronizing the FEC function between transmitting and receiving optical transceivers also increases the difficulty of data center testing. By contrast, coherent transmission systems so far interoperate only within a single manufacturer's equipment and do not require interoperability among different manufacturers. (The TDECQ performance evaluation method is proposed for PAM-4 in 802.3.)
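The adaptive FIR compensation described above can be sketched with a minimal LMS (least-mean-squares) equalizer. The tap count, step size, and toy ISI channel below are illustrative assumptions, not a real DSP datapath, which would run hundreds of such taps in parallel hardware.

```python
import random

# Minimal LMS adaptive FIR equalizer sketch (illustrative only): the
# receiver adapts FIR tap weights against known training symbols to undo
# inter-symbol interference introduced by a bandwidth-limited channel.

def lms_equalize(rx, reference, n_taps=5, mu=0.01):
    """Adapt an n_taps FIR filter so its output tracks the reference."""
    taps = [0.0] * n_taps
    buf = [0.0] * n_taps          # shift register of recent input samples
    out = []
    for x, d in zip(rx, reference):
        buf = [x] + buf[:-1]
        y = sum(t * b for t, b in zip(taps, buf))   # FIR output
        e = d - y                                   # error vs. training symbol
        taps = [t + mu * e * b for t, b in zip(taps, buf)]  # LMS update
        out.append(y)
    return taps, out

# Toy ISI channel: rx[n] = tx[n] + 0.3 * tx[n-1]
rng = random.Random(0)
tx = [rng.choice((-3, -1, 1, 3)) for _ in range(5000)]   # PAM-4 symbols
rx = [(tx[i] + 0.3 * tx[i - 1]) if i else tx[0] for i in range(len(tx))]

taps, out = lms_equalize(rx, tx)
# After training, the equalized output tracks the transmitted symbols:
err = sum((d - y) ** 2 for d, y in zip(tx[-500:], out[-500:])) / 500
print(round(err, 5))   # residual mean-squared error, near zero
```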


Power Consumption and Cost

Because DSP introduces the DAC/ADC and algorithms, its power consumption is necessarily higher than that of a traditional CDR chip based on analog technology, and the ways for DSP to lower power consumption are relatively limited, depending mainly on advances in the fabrication process node; for instance, upgrading from the current 16nm to the 7nm process can achieve roughly a 65% reduction in power consumption. The current design power consumption of a 400G OSFP/QSFP-DD based on a 16nm DSP solution is around 12W, which is a huge challenge for the thermal design of the module itself and of the future switch front panel. Therefore, the 7nm process may be required to solve the 400G DSP power problem.


Price is always a topic of concern for data centers. Unlike traditional optical devices, DSP chips are based on mature semiconductor technology, so chip costs can be expected to fall substantially with the support of massive application volumes. Another advantage of DSP's future application in data centers is flexibility: the same optical device configuration can meet the requirements of different data rates and scenarios simply by adjusting the DSP configuration.


Article Source: http://www.gigalight.com/news_detail/newsId=422.html


Related Gigalight 100G QSFP28 Optical Transceivers:

Gigalight Launches Industrial-Grade 100G QSFP28 Optical Transceivers

Shenzhen, China, April 30, 2018 − Recently, Gigalight successfully developed two industrial-grade 100G optical transceivers: the 100G QSFP28 LR4 (up to 10km) and the 100G QSFP28 4WDM-40 (up to 40km). These new products have already entered small-batch trial production.


The Gigalight industrial-grade 100G QSFP28 LR4 transceiver is designed with a reliable Japanese industrial-grade TOSA at the transmitting end and adopts our high-quality, self-developed thermally optimized ROSA at the receiving end. Across the full temperature range of -40 to 85 degrees, it delivers excellent performance, with an eye diagram margin of more than 31% (as shown in the figure below). Moreover, it has a power consumption of less than 3.5W and can meet the demands of error-free 10km fiber transmission at 100GE data rates. It is fully compliant with the IEEE 802.3ba 100GBASE-LR4 standard and is an ideal choice for relatively harsh environments, meeting customers' demands for particular applications as well as the optical transport network transmission between AAU and DU for future 5G mobile fronthaul applications.


Gigalight Industrial-grade 100G QSFP28 LR4




Eye Diagram of Gigalight Industrial-grade 100G QSFP28 LR4 at -40℃ (left) and 85℃ (right)




The Gigalight industrial-grade 100G QSFP28 4WDM-40 transceiver adopts reliable Japanese industrial-grade TOSA and ROSA at the transmitting and receiving ends. It has a low power consumption of less than 3.8W across the full temperature range of -40 to 85 degrees, ideal for building green data centers and reducing energy costs. Its ROSA adopts a highly sensitive APD photodetector with a sensitivity better than -16.5dBm. When the FEC function is enabled on the system side, it can transmit up to 40km and meet the requirements of the 100GE 4WDM-40 MSA specification. This optical transceiver provides a cost-effective long-distance Data Center Interconnection (DCI) solution for distributed data centers in harsh environments, and also meets the demands of the optical transport network transmission between AAU and DU for future 5G mobile fronthaul applications.


Gigalight Industrial-grade 100G QSFP28 4WDM-40




5G Fronthaul Network




There is no doubt that the future 5G network will rely more on the support of the optical network. Meanwhile, 5G fronthaul needs a strong optical network due to its flat network architecture, and the density and growing number of base stations will lead to tremendous demand for optical fiber resources and bandwidth. The successful launch of these industrial-grade 100G QSFP28 optical transceivers has not only enriched the 100G QSFP28 product line but also created a product line with unique features, which can better serve 5G optical transmission demands and fill the gap in the market.

About Gigalight:

Gigalight is a global optical interconnection design innovator. Its optical interconnect products include optical transceivers, passive optical components, active optical cables, GIGAC MTP/MPO cabling, and cloud programmers & checkers. It mainly covers three application areas: Data Center & Cloud Computing, MAN & Broadcast Video, and Mobile Network & 5G Optical Transmission. Gigalight takes advantage of its exclusive designs to provide clients with one-stop optical network devices and cost-effective products.

The Need for a Comprehensive Standard

Twisted-pair cabling in the late 1980s and early 1990s was often installed to support digital or analog telephone systems. Early twisted-pair cabling (Level 1 or Level 2) often proved marginal or insufficient for supporting the higher frequencies and data rates required for network applications such as Ethernet and Token Ring. Even when the cabling did marginally support higher speeds of data transfer (10Mbps), the connecting hardware and installation methods were often still stuck in the "voice" age, which meant that connectors, wall plates, and patch panels were designed to support voice applications only.

Twisted-pair Structured Cabling

The original Anixter Cables Performance Levels document only described performance standards for cables. A more comprehensive standard had to be developed to outline not only the types of cables that should be used but also the standards for deployment, connectors, patch panels, and more.

A consortium of telecommunications vendors and consultants worked in conjunction with the American National Standards Institute (ANSI), Electronic Industries Alliance (EIA), and the Telecommunications Industry Association (TIA) to create a standard originally known as the Commercial Building Telecommunications Cabling Standard, or ANSI/TIA/EIA-568-1991. This standard has been revised and updated several times. In 1995, it was published as ANSI/TIA/EIA-568-A, or just TIA/EIA-568-A. In subsequent years, TIA/EIA-568-A was updated with a series of addendums. For example, TIA/EIA-568-A-5 covered requirements for enhanced Category 5 (Category 5e), which had evolved in the marketplace before a full revision of the standard could be published. A completely updated version of this standard was released as ANSI/TIA/EIA-568-B in May 2001. After that, a new standard called ANSI/TIA-568-C was released.

The IEEE maintains the industry standards for Ethernet protocols (or applications). This is part of the 802.3 series of standards and includes applications such as 1000Base-T, 1000Base-SX, 10GBase-T, and 10GBase-SR.

The structured cabling market is estimated to be worth approximately $5 billion worldwide (according to the Building Services Research and Information Association [BSRIA]), due in part to the effective implementation of nationally recognized standards.


Article source: China Cable Suppliers

The Legacy of Proprietary Cabling Systems

Communications Cabling

Early cabling systems were unstructured, proprietary, and often worked only with a specific vendor's equipment. They were designed and installed for mainframes and were a combination of thicknet cable, twinax cable, and terminal cable (RS-232). Because no cabling standards existed, an MIS director simply had to ask the vendor which cable type should be run for a specific type of host or terminal. Frequently, though, vendor-specific cabling caused problems due to lack of flexibility. Unfortunately, the legacy of early cabling still lingers in many places.

PC LANs came on the scene in the mid-1980s; these systems usually consisted of thicknet cable, thinnet cable, or some combination of the two. These cabling systems were also limited to only certain types of hosts and network nodes.

As PC LANs became popular, some companies demonstrated the very extremes of data cabling. Looking back, it's surprising to think that the ceilings, walls, and floor trenches could hold all the cable necessary to provide connectivity to each system. As one company prepared to install a 1,000-node PC LAN, they were shocked to find all the different types of cabling systems needed. Each system was wired to a different wiring closet or computer room and included the following:

  • Wang dual coaxial cable for Wang word processing terminals
  • IBM twinax cable for IBM 5250 terminals
  • Twisted-pair cable containing one or two pairs, used by the digital phone system
  • Thick Ethernet from the DEC VAX to terminal servers
  • RS-232 cable to wiring closets connecting to DEC VAX terminal servers
  • RS-232 cable from certain secretarial workstations to a proprietary NBI word processing system
  • Coaxial cables connecting a handful of PCs to a single Novell NetWare server

Some users had two or three different types of terminals sitting on their desks and, consequently, two or three different types of wall plates in their offices or cubicles. Due to the cost of cabling each location, the locations that needed certain terminal types were the only ones that had cables that supported those terminals. If users moved (and they frequently did), new cables often had to be pulled.

The new LAN was based on a twisted-pair Ethernet system that used unshielded twisted-pair cabling called SynOptics LattisNet, which was a precursor to the 10Base-T standards. Due to budget considerations, when the LAN cabling was installed, this company often used spare pairs in the existing phone cables. When extra pairs were not available, additional cable was installed. Networking standards such as 10Base-T were but a twinkle in the IEEE's (Institute of Electrical and Electronics Engineers) eye, and guidelines such as the ANSI/TIA/EIA-568 series of cabling standards were not yet formulated (see the next section for more information on ANSI/TIA-568-C). Companies deploying twisted-pair LANs had little guidance, to say the least.

Much of the cable that was used at this company was sub–Category 3, meaning that it did not meet minimum Category 3 performance requirements. Unfortunately, because the cabling was not even Category 3, once the 10Base-T specification was approved, many of the installed cables would not support 10Base-T cards on most of the network. So three years into this company's network deployments, it had to rewire much of its building.

KEY TERM: Application

Often you will see the term application used when referring to cabling. If you are like us, you think of an application as a software program that runs on your computer. However, when discussing cabling infrastructures, an application is the technology that will take advantage of the cabling system. Applications include telephone systems (analog voice and digital voice), Ethernet, Token Ring, ATM, ISDN, and RS-232.

Proprietary Cabling Is a Thing of the Past

The company discussed in the previous section had at least seven different types of cables running through the walls, floors, and ceilings. Each cable met only the standards dictated by the vendor that required that particular cable type.

As early as 1988, the computer and telecommunications industry yearned for a versatile standard that would define cabling systems and make the practices used to build these cable systems consistent. Many vendors defined their own standards for various components of a cabling system.


Article source: China Cable Suppliers

The Cost of Poor Cabling and Whether It Is to Blame

poor cabling

The costs that result from poorly planned and poorly implemented cabling systems can be staggering. One company that moved into a new datacenter space used the existing cabling, which was supposed to be Category 5e cable. Almost immediately, 10 Gigabit Ethernet network users reported intermittent problems.

These problems included exceptionally slow access times when reading email, saving documents, and using the sales database. Other users reported that applications running under Windows XP and Windows Vista were locking up, which often caused them to have to reboot their PC.

After many months of network annoyances, the company finally had the cable runs tested. Many cables did not even meet the minimum requirements of a Category 5e installation, and other cabling runs were installed and terminated poorly.

WARNING:
Often, network managers mistakenly assume that data cabling either works or it does not, with no in-between. Cabling can cause intermittent problems.

Can faulty cabling cause the type of intermittent problems that the aforementioned company experienced? Contrary to popular opinion, it certainly can. In addition to being vulnerable to outside interference from electric motors, fluorescent lighting, elevators, cell phones, copiers, and microwave ovens, faulty cabling can lead to intermittent problems for other reasons.

These reasons usually pertain to substandard components (patch panels, connectors, and cable) and poor installation techniques, and they can subtly cause dropped or incomplete packets. These lost packets cause the network adapters to have to time out and retransmit the data.

Robert Metcalfe (inventor of Ethernet, founder of 3Com, columnist for InfoWorld, and industry pundit) helped coin the term drop-rate magnification. Drop-rate magnification describes the high degree of network problems caused by dropping a few packets. Metcalfe estimates that a 1 percent drop in Ethernet packets can correlate to an 80 percent drop in throughput. Modern network protocols that send multiple packets and expect only a single acknowledgement are especially susceptible to drop-rate magnification, as a single dropped packet may cause an entire stream of packets to be retransmitted.
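Metcalfe's point can be made concrete with a rough model of our own devising (an illustrative assumption, not his published calculation): if a protocol streams a window of W packets and any single loss forces the whole window to be resent, the useful fraction of throughput is roughly (1-p)^W, so a small per-packet drop rate p balloons into a large throughput loss.

```python
# Rough illustrative model of drop-rate magnification (an assumption for
# demonstration, not Metcalfe's actual math): any lost packet in a window
# forces the entire window to be retransmitted.

def effective_throughput(p, window):
    """Useful fraction of throughput when one loss resends the whole window."""
    ok = (1 - p) ** window        # probability the whole window survives
    expected_attempts = 1 / ok    # geometric retries of the full window
    return 1 / expected_attempts  # useful fraction of everything sent

for window in (1, 16, 64, 128):
    print(window, round(effective_throughput(0.01, window), 3))
# at a 1% drop rate, a 128-packet window keeps only ~28% of raw throughput,
# so a tiny loss rate is magnified into a dramatic throughput collapse
```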

Dropped packets (as opposed to packet collisions) are more difficult to detect because they are "lost" on the wire. When data is lost on the wire, the data is transmitted properly but, due to problems with the cabling, the data never arrives at the destination or it arrives in an incomplete format.


Article source: China Cable Suppliers

The Importance of Reliable Cabling

Data Center Cabling
We cannot stress enough the importance of reliable cabling. Two recent studies vindicated our evangelical approach to data cabling. The studies showed:

  • Data cabling typically accounts for less than 10 percent of the total cost of the network infrastructure.
  • The life span of the typical cabling system is upward of 16 years. Cabling is likely the second most long-lived asset you have (the first being the shell of the building).
  • Nearly 70 percent of all network-related problems are due to poor cabling techniques and cable-component problems.

Tips: If you have installed the proper category or grade of cable, the majority of cabling problems will usually be related to patch cables, connectors, and termination techniques. The permanent portion of the cable (the part in the wall) will not likely be a problem unless it was damaged during installation.

Of course, these were facts that we already knew from our own experiences. We have spent countless hours troubleshooting cabling systems that were nonstandard, badly designed, poorly documented, and shoddily installed. We have seen many dollars wasted on the installation of additional cabling and cabling infrastructure support that should have been part of the original installation.

Regardless of how you look at it, cabling is the foundation of your network. It must be reliable!


Article source: China Cable Suppliers