It’s no secret that data centers are experiencing explosive traffic growth, driven by high-performance applications performing highly complex transactions. Because these applications are widely distributed, the bulk of this traffic travels east-west between servers, and the majority of it never leaves the data center.
To support this growth, data center network architectures have evolved into scale-out designs, providing the plumbing to carry large amounts of latency-sensitive application traffic between racks. To improve bandwidth utilization, data centers are also moving from 10GbE to 40GbE to 100GbE, providing bigger pipes for transmitting application traffic at line rates. At the same time, more and more servers are being packed into data centers, effectively creating huge “mega data centers” spread over great distances, with some racks separated by more than 2km.
So what is the right choice for my data center?
Selecting the right cabling plant for these environments is critically important. The wrong decision could leave a data center incapable of supporting future growth, requiring an extremely costly cable plant upgrade to move to higher speeds. IT managers must strike a delicate balance: how do they make the right choice for their data center without depleting their budget? Should they choose single-mode fiber (SMF) or multimode fiber (MMF)? The correct answer, of course, depends on anticipated traffic growth and the size of the data center.
While MMF cabling has been widely and successfully deployed for generations, data center operators have discovered that older MMF cables such as OM1 and OM2 do not support higher speeds such as 40GbE and 100GbE. As a result, some MMF users have been forced to upgrade their cabling plant to meet newer specifications or, at the very least, add later-generation OM3 and OM4 fiber spools to their existing cable plant to support standards-based 40GbE and 100GbE interfaces.
Even with newer cable grades such as OM3 and OM4 and the installation of extra cables, 100GbE interfaces support only shorter distances. For instance, the SR4 optic, which supports distances of up to 100m at 40GbE over OM3, supports just 70m at 100GbE. The question, still unanswered at this point, is whether we can have an extended-reach SR4 solution for 100GbE interfaces, or a 100GbE MMF solution that can run over a 10GbE cable plant.
While Juniper is exploring both of these options and driving their adoption through industry bodies, in the absence of any decision the current options are fairly straightforward: if a data center is large and requires more than 100m of reach, SMF cabling is the only choice.
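The decision rule described above can be sketched as a small helper. This is an illustrative sketch, not a Juniper tool; the reach figures are the nominal IEEE values for 100GbE SR4 optics cited in the text (70m over OM3, 100m over OM4), and the function name is a hypothetical chosen for this example.

```python
# Nominal maximum reach in meters for a 100GbE SR4 optic over each MMF grade,
# per the IEEE figures discussed in the text.
MMF_100G_REACH_M = {"OM3": 70, "OM4": 100}

def recommend_fiber(link_length_m, mmf_grade="OM4"):
    """Return 'MMF' when a 100GbE SR4 optic covers the run, else 'SMF'."""
    if link_length_m <= MMF_100G_REACH_M[mmf_grade]:
        return "MMF"
    # Runs beyond SR4 reach (large or "mega" data centers) need single-mode.
    return "SMF"
```

For example, `recommend_fiber(70, "OM3")` returns `"MMF"`, while a 2km inter-rack run in a mega data center returns `"SMF"`.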
MMF reach as defined by IEEE standards
Is SMF a Viable Alternative?
Previously, organizations were reluctant to implement SMF inside the data center due to the cost of the pluggable optics required, especially compared to MMF. However, newer silicon technologies and manufacturing innovations are driving down the cost of SMF pluggable optics, making them a more viable option for high-speed deployments in large data centers requiring more than 100m reach.
The benefits of SMF infrastructures include:
Flexible reach from 500m to 10km within a single data center
Investment protection by supporting speeds such as 40GbE, 100GbE and 400GbE on the same fiber plant
Lower cable cost, since SMF cable is less expensive than MMF
Easier field termination with LC connectors compared to MTP connectors
Juniper Networks QFX Series switches, based on a QSFP28 design, are leading the effort to raise 100GbE densities while driving down costs. Both MMF and SMF optics are supported in that form factor; the options are summarized in the table below.
(Table: supported QSFP28 optic options, including the number of fibers each requires.)
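As a rough guide to the kinds of options that table covers, here is a hedged sketch of commonly cited QSFP28 100GbE optic types with their fiber type, nominal reach, and fiber count. The values below are the nominal IEEE/MSA figures, not taken from the original table, and the exact set of optics Juniper supports may differ.

```python
# Hedged sketch: common QSFP28 100GbE optics with nominal IEEE/MSA reach
# (meters) and the number of fiber strands each interface uses.
QSFP28_OPTICS = {
    "100GBASE-SR4":   {"fiber": "MMF", "reach_m": 100,   "fibers": 8},  # over OM4
    "100GBASE-PSM4":  {"fiber": "SMF", "reach_m": 500,   "fibers": 8},
    "100GBASE-CWDM4": {"fiber": "SMF", "reach_m": 2000,  "fibers": 2},
    "100GBASE-LR4":   {"fiber": "SMF", "reach_m": 10000, "fibers": 2},
}

def optics_for_link(link_length_m):
    """List optic names whose nominal reach covers the given run length."""
    return [name for name, o in QSFP28_OPTICS.items()
            if o["reach_m"] >= link_length_m]
```

For a 2km run between racks, only the SMF options remain: `optics_for_link(2000)` returns `["100GBASE-CWDM4", "100GBASE-LR4"]`.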
The availability of lower-cost SMF optics is opening up a host of new options for building large data centers today that can scale to support higher speeds in the future without requiring additional cabling investments. Juniper, a regular contributor to multiple standards bodies and MSA alliances such as 100GbE CWDM4, CLR4, PSM4 and COBO, is fully dedicated to finding solutions with open ecosystems to help customers build the highest performance data centers possible, all at the right cost points.
Learn more about the QFX10000 series by visiting the J-Net Forums: