Data Center Technologists
Building Multi-tenant Data Centers with MPLS L3VPNs

Hosting and cloud providers often look to scalable multi-tenant solutions to provide data center/compute/software services to their customers. Delivering these services requires scaling the network, providing isolation and security between customers, and being able to scale the services across data centers and geographies. While there are many options for creating such networks, let’s explore how MPLS can be used to build multi-tenant data centers.


When customers think of MPLS, they typically think of it in terms of transport or WAN networks. MPLS is a robust technology that allows providers to carry traffic for tens of thousands of customers over their network transport infrastructure. MPLS uses label stacking to separate tenant traffic; the outermost label identifies the transport label switched path (LSP) while the innermost label identifies the routing instance associated with the tenant.
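The two-level label stack described above can be sketched directly from the label stack entry format defined in RFC 3032 (20-bit label, 3-bit traffic class, 1-bit bottom-of-stack flag, 8-bit TTL). The label values below are illustrative, not taken from any real deployment:

```python
# MPLS label stack entry (RFC 3032): 20-bit label, 3-bit traffic class (TC),
# 1-bit bottom-of-stack (S) flag, 8-bit TTL, packed into 32 bits.
def encode_label(label, tc=0, bos=0, ttl=64):
    return (label << 12) | (tc << 9) | (bos << 8) | ttl

# A two-label stack: the outer label identifies the transport LSP, while the
# inner label (bottom of stack, S=1) identifies the tenant routing instance.
stack = [
    encode_label(299776, bos=0),  # transport label (illustrative value)
    encode_label(16, bos=1),      # VPN/tenant label (illustrative value)
]
```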


One significant difference between providers using a VLAN-based model and those using an MPLS model is that MPLS users are not constrained by the 4,096-VLAN limit. MPLS eclipses VLAN scale by using a 20-bit label, increasing the number of possible tenants from 4,096 to more than 1 million.
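The scale difference follows directly from the field widths: a VLAN ID is a 12-bit field, while an MPLS label is a 20-bit field:

```python
# VLAN IDs are carried in a 12-bit field; MPLS labels in a 20-bit field.
vlan_ids = 2 ** 12     # 4,096 possible VLANs
mpls_labels = 2 ** 20  # 1,048,576 possible labels

print(vlan_ids)     # 4096
print(mpls_labels)  # 1048576
```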


Additionally, the MPLS control plane has been designed and hardened over the years to meet the strictest of SLAs with features such as Fast Reroute and RSVP-TE. MPLS is a technology that both enterprises and service providers use at the edge of data centers to connect multiple discrete compute/storage fabrics.


Figure 1: Using MPLS in the WAN to interconnect customers across data centers and campus/branches.

The technology of choice inside the data center is often VRF-lite, a hop-by-hop segmentation approach that uses the IP data plane to provide separation between tenants. At the data center edge, customers have to stitch VRF-lite segments to MPLS, creating operational complexity and preventing them from realizing the full benefit of the MPLS data plane and control plane all the way to the compute edge, which is often a top-of-rack data center switch.


While the benefits of MPLS—including fast reroute, convergence and scale—are available in the WAN, customers have to make sacrifices to extend those segmentation benefits to the data center.


With Juniper Networks, no such sacrifices are required. That’s because Juniper provides a full suite of MPLS control plane and data plane capabilities not only in its fastest and most powerful WAN routers, such as the MX Series and PTX Series, but also in all of its data center switches, including the QFX5100 and QFX10000.


Let’s take a look at how such a network would be built inside a data center leveraging MPLS. From a physical topology perspective, it is very common to build leaf-spine networks in the data center to ensure predictable performance for east-west or machine-to-machine traffic. Figure 2 depicts that network.




Figure 2: Typical 3-stage Clos network with spine and leaf switches. Compute and services connect to leaf switches, while spine switches provide interconnectivity between leaves.

Once the physical network is laid out, one can run routing protocols such as BGP to provide connectivity between all leaf switches. To run MPLS on this infrastructure, customers can further enable BGP labeled unicast (BGP-LU), which assigns labels to the loopback addresses and prefixes in the underlay network so that traffic can be label-switched across the data center. BGP-LU is often used by customers who want to build IGP-free networks or who run overlays, since BGP is already the control plane for many overlays (EVPN and L3VPN, for example).
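A BGP-LU underlay on a leaf switch might look like the following hypothetical Junos sketch. All names, addresses, and AS numbers here (underlay, advertise-loopback, 10.0.0.1, 192.168.0.0, 65001/65100) are illustrative assumptions, not values from the original article:

```
# eBGP underlay with labeled unicast (BGP-LU) on a leaf switch -- a sketch.
set interfaces lo0 unit 0 family inet address 10.0.0.1/32
set protocols mpls interface all
set routing-options autonomous-system 65001
set protocols bgp group underlay type external
set protocols bgp group underlay family inet labeled-unicast
set protocols bgp group underlay export advertise-loopback
set protocols bgp group underlay peer-as 65100
set protocols bgp group underlay neighbor 192.168.0.0
# Advertise the local loopback into BGP-LU so remote leaves can build LSPs to it.
set policy-options policy-statement advertise-loopback term lo0 from interface lo0.0
set policy-options policy-statement advertise-loopback term lo0 then accept
```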

If customers are using an IGP such as OSPF or IS-IS for the underlay network, they can use LDP for label distribution instead of BGP-LU. Juniper recommends the QFX5100 in the leaf role as the MPLS provider edge (PE) router and the QFX10000 as the MPLS provider (P) router. We recommend the QFX10000 in the spine for two reasons: one, it offers a high density of 10/40/100GbE ports; and two, it provides very good multipathing in the LSR role, giving granular load balancing for traffic between the edge routers.
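For the IGP-based alternative, a minimal Junos-style sketch might pair IS-IS with LDP as follows. The ISO NET address and interface scope are illustrative assumptions:

```
# IS-IS underlay with LDP label distribution -- a sketch.
set interfaces lo0 unit 0 family iso address 49.0001.0100.0000.0001.00
set protocols isis interface all
set protocols isis interface lo0.0 passive
set protocols mpls interface all
# LDP distributes a label for each IGP-learned loopback/prefix.
set protocols ldp interface all
```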

Once the basic MPLS underlay setup is complete, customers can enable Multiprotocol BGP (MP-BGP) on the leaf devices (ToR switches and data center edge routers) to seamlessly interconnect tenant segments over L3VPNs, whether within the data center or outside it. Since data centers can be fairly large, we recommend using a route reflector (either on the spine/edge routers or in a virtual form factor such as a vRR, or virtual route reflector) rather than creating a full mesh of peerings between all devices. This keeps the BGP session count manageable as the fabric scales.
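Tying the pieces together, a tenant VRF on a leaf (PE) plus an iBGP session to the route reflector could be sketched as below. The instance name, RD/RT values, interface, and addresses (tenant-a, 10.0.0.1:100, target:65000:100, 10.0.0.100) are all illustrative assumptions:

```
# L3VPN routing instance on a leaf (PE) -- a sketch.
set routing-instances tenant-a instance-type vrf
set routing-instances tenant-a interface xe-0/0/10.100
set routing-instances tenant-a route-distinguisher 10.0.0.1:100
set routing-instances tenant-a vrf-target target:65000:100
set routing-instances tenant-a vrf-table-label
# iBGP overlay session to the route reflector, carrying VPN routes.
set protocols bgp group overlay type internal
set protocols bgp group overlay local-address 10.0.0.1
set protocols bgp group overlay family inet-vpn unicast
set protocols bgp group overlay neighbor 10.0.0.100
```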

Using this approach, customers can greatly simplify their networks by:
• Removing hop-by-hop VRF configuration: VRFs/VPNs are configured only at the edge of the network, taking advantage of the MPLS control plane and data plane throughout.
• Gaining consistency when stitching MPLS in the data center to MPLS in the WAN: no features (QoS, OAM) are lost, since the same control plane and data plane are used end to end instead of stitching multiple control planes and data planes together.
• Scaling to more than a million tenant segments.

For more information on MPLS features on the QFX5100 and QFX10000 switches, visit the following links.
• MPLS on QFX5100:
• MPLS on QFX10000:
• MPLS training:



