SMF: A New Era for Data Centers?
Aug 24, 2015

It’s no secret that data centers are experiencing explosive traffic growth, the result of high-performance applications performing highly complex transactions. Because these applications are widely distributed, the bulk of this traffic travels east-west between servers, and the majority of it stays within the data center.


To support this growth, data center network architectures have evolved into scale-out designs, providing the plumbing to carry large amounts of latency-sensitive application traffic between racks. To improve bandwidth utilization, data centers are also moving from 10GbE to 40GbE to 100GbE, providing bigger pipes for transmitting application traffic at line rates. At the same time, more and more servers are being packed into data centers, effectively creating huge “mega data centers” spread over great distances, with some racks separated by more than 2km.


So what is the right choice for my data center?


Selecting the right cabling plant for these environments is critically important. The wrong decision could leave a data center incapable of supporting future growth, requiring an extremely costly cable plant upgrade to move to higher speeds. IT managers must strike a delicate balance: what is the right choice for their data center without depleting their budget? Should they choose single-mode fiber (SMF) or multimode fiber (MMF)? The correct answer, of course, depends on anticipated traffic growth and the size of the data center.


While MMF cabling has been widely and successfully deployed for generations, data center operators have discovered that older MMF cables such as OM1 and OM2 do not support higher speeds such as 40GbE and 100GbE. As a result, some MMF users have been forced to upgrade their cabling plant to meet newer specifications or, at the very least, add later-generation OM3 and OM4 fiber spools to their existing cable plant to support standards-based 40GbE and 100GbE interfaces.


Even with new cable options such as OM3 and OM4 and the installation of extra cables, 100GbE interfaces will only support shorter distances. For instance, the SR4 optic, which supports distances of up to 100m with 40GbE, supports just 70m with 100GbE. The still-unanswered question is whether we can have an extended-reach SR4 solution for 100GbE interfaces, or a 100GbE MMF solution that can run on 10GbE cabling.


While Juniper is exploring both of these options and driving their adoption by industry bodies, until a decision is reached the current options are fairly straightforward: if a data center is large and requires more than 100m of reach, SMF cabling is the only choice.


MMF reach defined by IEEE standards:

Interface         Reach w/OM3    Reach w/OM4
40GBASE-SR4       100m           150m
100GBASE-SR4      70m            100m

Is SMF a Viable Alternative?


Previously, organizations were reluctant to implement SMF inside the data center due to the cost of the pluggable optics required, especially compared to MMF. However, newer silicon technologies and manufacturing innovations are driving down the cost of SMF pluggable optics, making them a more viable option for high-speed deployments in large data centers requiring more than 100m reach.


The benefits of SMF infrastructures include:


  • Flexible reach from 500m to 10km within a single data center
  • Investment protection by supporting speeds such as 40GbE, 100GbE and 400GbE on the same fiber plant
  • Lower cable cost: SMF cable is less expensive than MMF
  • Easier to terminate in the field with LC connectors than with MTP connectors


Customer Choice


Juniper Networks QFX Series switches, based on a QSFP28 design, are leading the effort to raise 100GbE densities while driving down costs. Both MMF and SMF optics are supported in that form factor; the options are summarized in the table below.


QSFP28 Type       Reach OM3/OM4    Reach OS2    No of fibers
100G-SR4          70m/100m         -            8 (MMF)
100G-PSM4         -                500m         8 (SMF)
100G-CWDM4        -                2km          2 (SMF)
100G-LR4          -                10km         2 (SMF)
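As a rough illustration of how these options map onto cabling requirements, here is a small Python sketch. The `pick_optic` helper is hypothetical (not a Juniper tool), and the reach figures are the nominal IEEE/MSA values for these optics:

```python
# Hypothetical helper: pick a 100GbE QSFP28 optic for a link, using
# nominal IEEE/MSA reach figures. List is ordered by increasing reach.
OPTICS = [
    # (name, fiber type, max reach in metres)
    ("100G-SR4",   "MMF", 100),    # on OM4; only 70 m on OM3
    ("100G-PSM4",  "SMF", 500),
    ("100G-CWDM4", "SMF", 2000),
    ("100G-LR4",   "SMF", 10000),
]

def pick_optic(link_m, fiber):
    """Return the shortest-reach optic that still covers the link."""
    for name, ftype, reach in OPTICS:
        if ftype == fiber and reach >= link_m:
            return name
    return None  # no standard optic covers this link on this fiber type

print(pick_optic(80, "MMF"))    # short intra-row link
print(pick_optic(300, "MMF"))   # beyond MMF reach: None, SMF required
print(pick_optic(300, "SMF"))
print(pick_optic(2500, "SMF"))  # racks separated by more than 2km
```

The `None` result for a 300m MMF link is the crux of the article: past roughly 100m, SMF is the only standards-based option.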

The availability of lower-cost SMF optics is opening up a host of new options for building large data centers today that can scale to support higher speeds in the future without requiring additional cabling investments. Juniper, a regular contributor to multiple standards bodies and MSA alliances such as 100GbE CWDM4, CLR4, PSM4 and COBO, is fully dedicated to finding solutions with open ecosystems to help customers build the highest performance data centers possible, all at the right cost points.


Learn more about the QFX10000 series by visiting the J-Net forums:


100GbE is New 40GbE in the Data Center

Juniper QFX10002 Technical Overview

QFX10000 - a no compromise switching system


Aug 26, 2015



In the data centres we operate and maintain, we have already seen a big shift from traditional cabling to SMF.


Beyond the move from MMF OM3/OM4 to SMF, there are some other shifts:

- copper connections are finally out of the window for intra-room connections; ToR/EoR is more and more implemented

- traditional proprietary connections on coax are moving to SMF as well

- a move to more flexible ODFs is needed as fiber densities increase (and so are the cross-connects)

- a move from PC to APC8 polishing of connectors (8-degree polished connectors reduce reflections; reflection will over time become the new problem)


To cope with the above changes we chose a hybrid connector type that is not yet widely used, LC/APC8, to increase densities, reduce reflections and standardize on one form factor and presentation.


As a result we have improved the fiber infrastructure as a whole, and now need to be more aware of optical receiver sensitivity: trying not to blind the rx side.
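That receiver-headroom concern comes down to a simple link-budget check: launch power minus fiber and connector losses must land between the receiver's overload point and its sensitivity. A minimal sketch (all power levels and loss figures below are illustrative assumptions, not values from any specific optic's datasheet):

```python
# Minimal optical link-budget sketch. All figures are illustrative
# assumptions, not datasheet values for any particular optic.

def rx_power_dbm(tx_dbm, fiber_km, loss_db_per_km=0.4,
                 connector_db=0.5, n_connectors=2):
    """Received power = launch power minus fiber and connector losses."""
    return tx_dbm - fiber_km * loss_db_per_km - n_connectors * connector_db

def link_ok(rx_dbm, sensitivity_dbm=-10.0, overload_dbm=2.0):
    """True only if the receiver is neither starved nor blinded."""
    return sensitivity_dbm <= rx_dbm <= overload_dbm

# A very short SMF jumper with a strong launch can overload ("blind")
# the receiver: almost no fiber loss, so rx power sits above overload.
p_short = rx_power_dbm(tx_dbm=4.0, fiber_km=0.002)
print(p_short, link_ok(p_short))

# The same launch over 10 km lands comfortably inside the window.
p_long = rx_power_dbm(tx_dbm=4.0, fiber_km=10.0)
print(p_long, link_ok(p_long))
```

This is why short patches on long-reach optics sometimes need inline attenuators: the fiber itself is no longer providing the loss the receiver was designed around.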


And remember to clean your fiber ends: even the fresh cables from the ziplocks or the newly created ODFs (both sides, before plugging in a cable). With more data passing through the cables, more will be affected by the speck of dust covering the 9µm core. A reel cleaner and an inspection scope are not a huge investment any more and will save you from some future weirdness.