Data Center Technologists

SMF: A New Era for Data Centers?

by Juniper Employee, 08-24-2015 09:00 AM (edited 08-24-2015 03:30 PM)

It’s no secret that data centers are experiencing explosive traffic growth, driven by high-performance applications performing highly complex transactions. Because these applications are widely distributed, the bulk of this traffic travels east-west between servers and never leaves the data center.

 

To support this growth, data center network architectures have evolved into scale-out designs, providing the plumbing to carry large amounts of latency-sensitive application traffic between racks. To improve bandwidth utilization, data centers are also moving from 10GbE to 40GbE to 100GbE, providing bigger pipes for transmitting application traffic at line rates. At the same time, more and more servers are being packed into data centers, effectively creating huge “mega data centers” spread over great distances, with some racks separated by more than 2km.

 

So what is the right choice for my data center?

 

Selecting the right cabling plant for these environments is critically important. The wrong decision could leave a data center incapable of supporting future growth, requiring an extremely costly cable plant upgrade to move to higher speeds. IT managers must strike a delicate balance: how do they make the right choice for their data center without depleting their budget? Should they choose single-mode fiber (SMF) or multimode fiber (MMF)? The correct answer, of course, depends on anticipated traffic growth and the size of the data center.

 

While MMF cabling has been widely and successfully deployed for generations, data center operators have discovered that older MMF cables such as OM1 and OM2 do not support higher speeds such as 40GbE and 100GbE. As a result, some MMF users have been forced to upgrade their cabling plant to meet newer specifications or, at the very least, add later-generation OM3 and OM4 fiber spools to their existing cable plant to support standards-based 40GbE and 100GbE interfaces.

 

Even with newer cable options such as OM3 and OM4 and the installation of extra cables, 100GbE interfaces only support shorter distances. For instance, the SR4 optic, which supports distances of up to 100m at 40GbE, supports just 70m at 100GbE. The question, still unanswered at this point, is whether we can have an extended-reach SR4 solution for 100GbE interfaces, or a 100GbE MMF solution that can run over the cabling originally installed for 10GbE.

 

Juniper is exploring both of these options and driving their adoption through industry bodies, but until a decision is reached, the current options are fairly straightforward: if a data center is large and requires more than 100m of reach, SMF cabling is the only choice.

 

MMF reach as defined by IEEE standards:

 

               10GbE    40GbE    100GbE    400GbE
Reach w/OM3    300m     100m     70m       70m
Reach w/OM4    400m     150m     100m      100m
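
To make the planning rule concrete, here is a minimal Python sketch that encodes the reach limits above and checks whether an existing OM3/OM4 plant can carry a planned link. The dictionary and helper names are illustrative assumptions, not part of any Juniper tool.

# Minimal sketch: encode the IEEE MMF reach limits from the table above and
# check whether an existing OM3/OM4 plant can carry a planned link.
# The structure and names are illustrative only; all values are in metres.

MMF_REACH_M = {
    "OM3": {"10GbE": 300, "40GbE": 100, "100GbE": 70, "400GbE": 70},
    "OM4": {"10GbE": 400, "40GbE": 150, "100GbE": 100, "400GbE": 100},
}

def mmf_supports(fiber: str, speed: str, link_length_m: float) -> bool:
    """True if the standard reach for this fiber grade and speed covers the link."""
    return link_length_m <= MMF_REACH_M[fiber][speed]

# Example: a 120m row-to-row run that must eventually carry 100GbE.
for fiber in ("OM3", "OM4"):
    verdict = "OK" if mmf_supports(fiber, "100GbE", 120) else "needs SMF"
    print(f"{fiber} at 100GbE over 120m: {verdict}")

Running the example shows that neither OM3 nor OM4 covers a 120m run at 100GbE, which is exactly the case where SMF becomes the only choice.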

 

Is SMF a Viable Alternative?

 

Previously, organizations were reluctant to implement SMF inside the data center due to the cost of the pluggable optics required, especially compared to MMF. However, newer silicon technologies and manufacturing innovations are driving down the cost of SMF pluggable optics, making them a more viable option for high-speed deployments in large data centers requiring more than 100m reach.

 

The benefits of SMF infrastructures include:

 

  • Flexible reach from 500m to 10km within a single data center
  • Investment protection by supporting speeds such as 40GbE, 100GbE and 400GbE on the same fiber plant
  • Cable cost is not prohibitive; SMF cable is less expensive than MMF cable
  • Easier field termination with LC connectors compared to MTP connectors

 

Customer Choice

 

Juniper Networks QFX Series switches, based on a QSFP28 design, are leading the effort to raise 100GbE densities while driving down costs. Both MMF and SMF optics are supported in that form factor; the options are summarized in the table below.

 

QSFP28 Type    Reach OM3/OM4    Reach OS2    No. of fibers
SR4            70m/100m         -            8 (MMF)
PSM4           -                2km          8 (SMF)
CWDM4/CLR4     -                2km          2 (SMF)
LR4            -                10km         2 (SMF)
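
As a rough illustration of how this table drives optic selection, the sketch below picks the first listed QSFP28 optic whose reach and fiber count fit a given link. The list entries mirror the table; the function and its parameters are illustrative assumptions, not a Juniper API.

# Minimal sketch: choose a QSFP28 optic from the table above, given the
# required reach and the fiber strands available per link.
# Reach is in metres; entries mirror the table and the helper is illustrative.

QSFP28_OPTICS = [
    # (name, fiber type, max reach in metres, fibers per link)
    ("SR4",        "MMF", 100,   8),   # 70m on OM3, 100m on OM4
    ("PSM4",       "SMF", 2000,  8),
    ("CWDM4/CLR4", "SMF", 2000,  2),
    ("LR4",        "SMF", 10000, 2),
]

def pick_optic(required_reach_m: float, fibers_available: int = 8) -> str:
    """Return the first optic whose reach and fiber count fit the link."""
    for name, fiber, reach_m, fibers in QSFP28_OPTICS:
        if required_reach_m <= reach_m and fibers <= fibers_available:
            return f"{name} ({fiber}, up to {reach_m}m over {fibers} fibers)"
    return "no listed QSFP28 optic covers this reach"

print(pick_optic(80))         # short intra-row link -> SR4
print(pick_optic(1500, 2))    # long run on duplex SMF -> CWDM4/CLR4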

 

Summary

 

The availability of lower-cost SMF optics is opening up a host of new options for building large data centers today that can scale to support higher speeds in the future without requiring additional cabling investments. Juniper, a regular contributor to multiple standards bodies and MSAs such as 100G CWDM4, CLR4, PSM4 and COBO, is fully dedicated to finding open-ecosystem solutions that help customers build the highest-performance data centers possible, all at the right cost points.

 

Learn more about the QFX10000 series by visiting the J-Net forums:

 

100GbE is the New 40GbE in the Data Center

Juniper QFX10002 Technical Overview

QFX10000 - a no compromise switching system

 

Comments
by speedxs_git
on 08-26-2015 02:04 PM

Lakshmi,

 

In the datacentres we operate and maintain, we have already seen a big shift from traditional cabling to SMF.

 

Not only the move to SMF instead of MMF OM3/OM4, but also some other shifts:

- copper connections are finally out the window for intra-room connections; ToR/EoR designs are more and more widely implemented

- traditional proprietary connections over coax are also moving to SMF

- a move to more flexible ODFs is needed as fiber densities increase (and so do the cross-connects)

- a move from PC to APC8 polishing of connectors (connectors polished at 8 degrees reduce reflections; reflection will become the new problem over time)

 

To cope with the above changes, we chose a hybrid connector type that is not yet widely used, LC/APC8, to increase densities, reduce reflections and standardize on one form factor and presentation.

 

As a result, we improved the fiber infrastructure as a whole and now need to be more aware of optical receiver sensitivity... trying not to blind the rx side.
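
As a back-of-the-envelope illustration of that receiver budget, the sketch below walks through the basic arithmetic: received power should land between receiver overload and sensitivity. Every dBm/dB figure here is a placeholder assumption, not a value from any optic's datasheet.

# Back-of-the-envelope link budget behind "don't blind the rx side":
# received power must sit between receiver overload and sensitivity.
# All dBm/dB figures are illustrative placeholders, not datasheet values.

def rx_power_dbm(tx_dbm, length_km, fiber_loss_db_per_km=0.4,
                 connectors=2, loss_per_connector_db=0.5):
    """Transmit power minus fiber attenuation and connector losses."""
    return tx_dbm - length_km * fiber_loss_db_per_km - connectors * loss_per_connector_db

def link_verdict(rx_dbm, sensitivity_dbm=-10.0, overload_dbm=2.0):
    if rx_dbm > overload_dbm:
        return "too hot: add attenuation or risk overloading the receiver"
    if rx_dbm < sensitivity_dbm:
        return "too low: check connectors, splices and cleanliness"
    return "within budget"

rx = rx_power_dbm(tx_dbm=1.0, length_km=0.3)   # a short intra-DC SMF run
print(f"rx ~ {rx:.2f} dBm -> {link_verdict(rx)}")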

 

And remember to:

 

CLEAN!!!

 

even the fresh cables from the ziplocks or the newly created ODFs (both sides, before plugging in a cable). With more data passing through the cables, more things will be affected by a speck of dust covering the 9um core. A reel cleaner and inspection scope are not a huge investment any more and will save you from some future weirdness.

 

Hilmar
