Data Center Technologists

Building Multi-tenant Data Centers with MPLS L3VPNs

by Juniper Employee ‎08-27-2015 05:48 PM - edited ‎08-31-2015 09:37 AM

Hosting and cloud providers often look to scalable multi-tenant solutions to provide data center/compute/software services to their customers. Delivering these services requires scaling the network, providing isolation and security between customers, and being able to scale the services across data centers and geographies. While there are many options for creating such networks, let’s explore how MPLS can be used to build multi-tenant data centers.


When customers think of MPLS, they typically think of it in terms of transport or WAN networks. MPLS is a robust technology that allows providers to carry traffic for tens of thousands of customers over their network transport infrastructure. MPLS uses label stacking to separate tenant traffic; the outermost label identifies the transport label switched path (LSP) while the innermost label identifies the routing instance associated with the tenant.
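As a rough illustration of the label stacking described above (not Juniper code), the 32-bit MPLS label stack entry defined in RFC 3032 packs the 20-bit label together with traffic class, bottom-of-stack, and TTL fields. A minimal sketch:

```python
def encode_label_entry(label: int, tc: int = 0, bottom: bool = False, ttl: int = 64) -> bytes:
    """Pack one 32-bit MPLS label stack entry (RFC 3032):
    label (20 bits) | TC (3 bits) | S (1 bit) | TTL (8 bits)."""
    assert 0 <= label < 2 ** 20, "MPLS label is a 20-bit field"
    entry = (label << 12) | (tc << 9) | (int(bottom) << 8) | ttl
    return entry.to_bytes(4, "big")

# A two-label stack as used for L3VPN: the outer (transport/LSP) label is
# followed by the inner (VPN/routing-instance) label, which has S=1.
stack = encode_label_entry(299776) + encode_label_entry(16, bottom=True)
```

The label values here are arbitrary examples; the point is that the outer entry identifies the LSP while the inner, bottom-of-stack entry identifies the tenant's routing instance.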


One significant difference between providers using a VLAN-based model and those using an MPLS model is that MPLS users are not constrained by the 4,096-VLAN limit. MPLS eclipses VLAN scale by using a 20-bit label, increasing the number of possible tenants from 4,096 to more than 1 million.
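The scale difference follows directly from the field widths:

```python
# 802.1Q VLAN ID: a 12-bit field -> 4,096 possible values
vlan_ids = 2 ** 12
# MPLS label: a 20-bit field -> 1,048,576 possible values
mpls_labels = 2 ** 20
```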


Additionally, the MPLS control plane has been refined over the years to meet the strictest of SLAs with features such as Fast Reroute and RSVP-TE. MPLS is a technology that both enterprises and service providers use at the edge of data centers to connect multiple discrete compute/storage fabrics.


Figure 1: Using MPLS in the WAN to interconnect customers across data centers and campus/branches.

The technology of choice inside the data center is often VRF-lite, a hop-by-hop segmentation approach that uses the IP data plane to provide separation between tenants. At the data center edge, customers have to stitch VRF-lite segments to MPLS, creating operational complexity and preventing them from realizing the full benefit of the MPLS data plane and control plane all the way to the compute edge, which is often a top-of-rack data center switch.


While the benefits of MPLS—including fast reroute, convergence and scale—are available in the WAN, customers have to make sacrifices to extend those segmentation benefits to the data center.


With Juniper Networks, no such sacrifices are required. That’s because Juniper provides a full suite of MPLS control plane and data plane capabilities not only in its fastest and most powerful WAN routers, such as the MX Series and PTX Series, but also in all of its data center switches, including the QFX5100 and QFX10000.


Let’s take a look at how such a network would be built inside a data center leveraging MPLS. From a physical topology perspective, it is very common to build leaf-spine networks in the data center to ensure predictable performance for east-west or machine-to-machine traffic. Figure 2 depicts that network.




Figure 2: Typical 3-stage Clos network with spine and leaf switches. Compute and services connect to leaf switches, while spine switches provide interconnectivity between leaf switches.

Once the physical network is laid out, one can run routing protocols such as BGP to provide connectivity between all leaf switches. To run MPLS on this infrastructure, customers can further enable BGP labeled unicast (BGP-LU), which assigns labels to the loopback addresses and prefixes in the underlay network so that traffic is label-switched across the data center. BGP-LU is often used by customers who want to build IGP-free networks or who run overlays, since BGP is the control plane for many overlay technologies (EVPN and L3VPN, for example).
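As a rough Junos-style sketch of this step (interface names, addresses, and policy names are hypothetical, not from the article), enabling BGP-LU on a leaf switch might look like:

```junos
set interfaces lo0 unit 0 family inet address 10.0.0.1/32
set protocols mpls interface all
set protocols bgp group underlay type external
set protocols bgp group underlay family inet labeled-unicast resolve-vpn
set protocols bgp group underlay export advertise-loopback
set policy-options policy-statement advertise-loopback term lo0 from route-filter 10.0.0.1/32 exact
set policy-options policy-statement advertise-loopback term lo0 then accept
```

The export policy advertises the loopback into BGP-LU so that remote leaf switches can resolve labeled paths to it.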

If customers are using an IGP such as OSPF or IS-IS for the underlay network, they can use LDP for label distribution instead of BGP-LU. Juniper recommends the QFX5100 in the leaf role as the MPLS provider edge (PE) router and the QFX10000 as the MPLS provider (P) router. We recommend the QFX10000 in the spine for two reasons: one, it offers a high density of 10/40/100GbE ports; and two, it provides very good multipathing in the LSR role, giving granular load balancing for traffic between the edge routers inside the data center.
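For the IGP-plus-LDP alternative, a minimal Junos-style sketch (interface names are hypothetical) might look like:

```junos
set protocols isis interface et-0/0/0.0 point-to-point
set protocols isis interface lo0.0 passive
set protocols mpls interface et-0/0/0.0
set protocols ldp interface et-0/0/0.0
```

Here IS-IS provides loopback reachability and LDP distributes a label for each loopback, so no BGP-LU is needed in the underlay.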

Once the basic MPLS underlay setup is complete, customers can enable Multiprotocol BGP (MP-BGP) on the leaf devices (ToR switches and data center edge routers) to seamlessly interconnect tenant segments over L3VPNs, whether within the data center or outside it. Since data centers can be fairly large, we recommend using a route reflector (either on the spine/edge routers or in a virtual form factor such as a vRR, or virtual route reflector) rather than creating peerings between every pair of devices. This keeps the number of BGP sessions manageable and makes it easy to add leaf devices as the fabric grows.
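Putting the pieces together, a hypothetical Junos-style sketch of a tenant VRF on a leaf, with an MP-BGP session to a route reflector (tenant name, addresses, and AS/community values are illustrative assumptions):

```junos
set routing-instances tenant-a instance-type vrf
set routing-instances tenant-a interface xe-0/0/10.100
set routing-instances tenant-a route-distinguisher 10.0.0.1:100
set routing-instances tenant-a vrf-target target:65000:100
set routing-instances tenant-a vrf-table-label
set protocols bgp group overlay type internal
set protocols bgp group overlay local-address 10.0.0.1
set protocols bgp group overlay family inet-vpn unicast
set protocols bgp group overlay neighbor 10.0.0.254
```

The `family inet-vpn unicast` session carries the L3VPN routes, and the `vrf-target` community determines which leaf switches import a given tenant's routes.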

Using this approach, customers can greatly simplify their networks by:
• Removing hop-by-hop VRF configuration: VRFs/VPNs are configured only at the edge of the network, while the MPLS control plane and data plane carry tenant traffic through the core.
• Gaining consistency when stitching MPLS in the data center to MPLS in the WAN: no features (QoS, OAM) are lost, since a single control plane and data plane are used end to end instead of stitching multiple control planes and data planes together.
• Scaling to more than a million tenants.

For more information on MPLS features on the QFX5100 and QFX10000 switches, visit the following links.
• MPLS on QFX5100:
• MPLS on QFX10000:
• MPLS training:



