Data Center Technologists

MC-LAG is dead, Long live EVPN Multi-homing

by Juniper Employee ‎10-20-2016 05:34 PM - edited ‎01-12-2017 02:44 AM

Practically every day, it seems, someone asks me: “How can I configure MC-LAG with EVPN to provide multi-homing?”


The answer, I tell them, is simple:  you don’t need to.  EVPN is a superset of MC-LAG, and it natively integrates multi-homing. It’s like the better, standard version of MC-LAG that we’ve been waiting for.


EVPN, either with VXLAN or MPLS encapsulation, natively provides N-way multi-homing by creating the same Ethernet Segment Identifier (ESI) on multiple devices. An ESI is configured on a per-interface basis; all interfaces configured with the same ESI, on any devices within the same EVPN domain, appear as part of the same L2 segment or LAG. On top of an ESI, it’s also possible to configure LACP to provide better fault detection.
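To make the mechanism concrete, here is a minimal sketch of what the per-interface ESI configuration described above might look like in Junos. The interface name, ESI value, and LACP system-id below are illustrative assumptions, not values from the article:

```
# On every device sharing the Ethernet segment (ae0 is an assumed LAG name)
set interfaces ae0 esi 00:11:22:33:44:55:66:77:88:99   # same 10-byte ESI on each peer
set interfaces ae0 esi all-active                       # N-way all-active multi-homing
# Optional LACP for better fault detection; the system-id must match across peers
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:00:00:01
```

Because each peer advertises the same ESI into EVPN, the attached device simply sees one ordinary LAG.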

Nothing special is required for an ESI, just a LAG or MC-LAG, with or without LACP. Anything can be connected to an ESI: servers, switches, Virtual Chassis configurations, firewalls, load balancers, routers—there are no restrictions. If needed, you can even inter-connect two EVPN instances with an EVPN/ESI on both sides.  See Figure 1 for examples.

Figure 1: Examples of Ethernet Segment Identifiers (ESIs)


On an EVPN/VXLAN fabric deployed inside a data center, EVPN multi-homing can be used to:

  • Connect a server to multiple TORs with an all-active LAG
  • Connect any devices to all spines with an all-active LAG


Figure 2: Mixed leaf types in an EVPN/VXLAN deployment leveraging EVPN multi-homing


On any existing MC-LAG architecture, two pairs of devices running MC-LAG can be replaced with two or more devices running EVPN. Unlike MC-LAG, however, EVPN/ESI is not limited to two devices; in fact, it’s possible to have four, six, eight, or more physical devices participating in the same ESI.


Figure 3: Replace MC-LAG with EVPN/VXLAN at the Spine layer



Is MC-LAG Going to Die?

All of this raises the question: is MC-LAG going away? Of course not, at least not for a very long time. Juniper is committed to supporting MC-LAG, and there are no plans to change that. Many customers have successfully deployed MC-LAG in their production networks; there is no need for them to change, nor should they worry about it. MC-LAG has proven to be a robust solution for deploying an L2 fabric, and it will continue to be.

Having said that, MC-LAG is one of the last proprietary protocols in the data center, and its two-member limit has restricted our ability to build scale-out L2 fabrics.

It’s great to know that, with EVPN multi-homing, we have a successor to MC-LAG that improves on the technology’s two main weaknesses listed above.


The Mystery of EVPN

When I explain multi-homing, the next question I hear is, why are so many people unaware of such an important feature? Initially, this question really surprised me; for me, it was the most interesting part of EVPN, and I assumed people already knew about it.


Later I came to realize that, outside of Juniper, most vendors haven’t implemented this part of EVPN and still rely on an MC-LAG-like technology at the edge of the fabric to provide multi-homing!  In fact, at the last EANTC interoperability tests in Berlin, only Nokia and Juniper were able to demonstrate a successful implementation of EVPN multi-homing. 

EVPN is not a single, monolithic standard; it is actually composed of multiple RFCs and other features, and as of today, nobody has finished implementing all of them. Each vendor has decided to focus on different parts, and as such, we are seeing some significant differences like multi-homing.  This also explains why multivendor interoperability is still not widely available today.


Hopefully, as everyone continues to implement EVPN, this situation will eventually change.  Soon, we’ll be able to enjoy a true multivendor EVPN implementation.


I’m glad Juniper decided to prioritize multi-homing. In my opinion, it’s one of the greatest features of EVPN, and for some time now, EVPN/ESI has been supported across Juniper MX Series, EX Series, and QFX Series platforms.


If you want to investigate how to use EVPN/ESI to provide multi-homing, please read the Juniper white paper titled “Juniper Networks EVPN Implementation for Next-Generation Data Center Architectures.”  The multi-homing section begins on page 7.

on ‎10-21-2016 03:24 AM

Why would MC-LAG be dead? How would you implement EVPN at scale in multicast and VXLAN environments?

by Juniper Employee
on ‎10-21-2016 04:08 PM

Stefan, thanks for your comment

Indeed, MC-LAG is not dead, and EVPN is not yet able to replace MC-LAG in all environments today. EVPN is a younger technology than MC-LAG, but it's a very promising one. For the first time, we have a serious, standards-based alternative to MC-LAG, and the solution will become more robust over time.

On the multicast side, you're correct: EVPN is still missing some optimizations compared to MC-LAG. For a multicast-heavy environment I would recommend MC-LAG. This is something we are actively working on.



The title is inspired by the expression "The king is dead, long live the king," which means that the next generation is here. In our case, the transition is not as brutal as it was for the kings.


‎11-07-2016 10:02 AM - edited ‎11-07-2016 11:38 AM

Great article. Two more questions:

1) In what cases, other than multicast, should I prefer MC-LAG over EVPN?

2) As I understand, Juniper has some limitations in the ARP proxy function. Can you clarify which ones?


by sja
on ‎11-07-2016 10:47 AM

I still don't get Juniper's data center strategy.

If something is dead, it's TRILL.

SPB gives all of that plus multicast, and it's easy!

I'd love to see that standard in Juniper switches.

by slick
on ‎11-08-2016 12:01 PM



Thanks for a nice article.

I very much agree with everything you wrote, including the part about EVPN being a young technology.

From a software maturity point of view, do you think the code that supports EVPN/VXLAN on the QFX5100 is ready for production?

I'm not talking just about the EVPN code, but the entire Junos image required for this feature: is it recommended for production?






by Juniper Employee
on ‎11-13-2016 10:17 PM

Hi Damien,


Thank you for the food for thought.


From a deployment/performance perspective, what is the load on the system, one vs. the other, for a similar two-member topology?






by jefftant
on ‎11-14-2016 12:25 AM

Aren't you missing the point?

The complexity of MC-LAG is in the state synchronization, not in basic connectivity and loop prevention. That's why there are so many proprietary implementations; ICCP (RFC 7275) tried to provide a common framework, and even though there are implementations, it wasn't a big success.


There are many things that need to be in sync before we can pronounce victory :)

As you can see, there are some early drafts that try to do exactly that; draft-sajassi-bess-evpn-igmp-mld-proxy is one of them. However, there's no free lunch: as the number of route-types and associated attributes increases, complexity goes up with it.

by Juniper Employee
on ‎11-14-2016 10:23 AM



Thanks for your questions and sorry for the delay


1) In what cases, other than multicast, should I prefer MC-LAG over EVPN?

Heavy multicast is the main one I can think of right now. Of course, MC-LAG will be the only option if you have devices that don't support EVPN.


2) As I understand, Juniper has some limitations in the ARP proxy function. Can you clarify which ones?

Currently, ARP proxy is not supported; all devices with L3 interfaces will respond to an ARP request. ARP proxy is currently in development and is coming in 2017.




by Juniper Employee
on ‎11-14-2016 10:32 AM



TRILL is dead; totally agreed on that.

Regarding SPB, I'm not an expert, but I believe this protocol mainly focuses on L2 and does not have strong support for L3.

EVPN provides a very nice integration of L2 and L3 together with the anycast gateway.
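As a hedged illustration of the anycast gateway idea mentioned above (the IRB unit number and addresses are assumptions, not from the article), a Junos sketch might look like:

```
# Each leaf keeps its own IRB address (10.1.1.1 here, unique per leaf) but
# shares the virtual gateway address, which hosts use as their default gateway
set interfaces irb unit 100 family inet address 10.1.1.1/24
set interfaces irb unit 100 virtual-gateway-address 10.1.1.254
```

Since every leaf answers for 10.1.1.254, a host's default gateway is always one hop away, regardless of which leaf it attaches to.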

Also, by using standard transport formats (MPLS and VXLAN) and BGP, I believe EVPN provides easier integration.

Again, my knowledge of SPB is limited; sorry if I missed some important points.


At the end of the day, what matters most is the level of adoption rather than the protocol itself. Today I see many vendors behind EVPN and only a few behind SPB.





by Juniper Employee
on ‎11-14-2016 10:38 AM

Hi slick


Thanks for your feedback,

We already have some customers in production with EVPN code (14.1X53) on the QFX5100. Having said that, Engineering has been working hard to harden the 14.1X53 branch and make it even more mature.

The 14.1X53-D40 release will be available very soon and is expected to quickly become the recommended release for EVPN.




by Juniper Employee
on ‎11-14-2016 10:44 AM

Hi Sai


Thanks for your question


While I don't have official numbers to back it up, I think the load will be similar in both scenarios:

  • MC-LAG will require an ICCP session with BFD enabled
  • EVPN will require 2 BGP sessions with BFD

One session vs. two sessions will not make a real difference.

In both cases, data-plane forwarding is done in hardware.
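For reference, a hedged sketch of the two control-plane sessions being compared (group names, peer addresses, and BFD timers are illustrative assumptions):

```
# EVPN side: one overlay BGP session per peer, protected by BFD
set protocols bgp group overlay type internal
set protocols bgp group overlay family evpn signaling
set protocols bgp group overlay neighbor 10.0.0.1
set protocols bgp group overlay bfd-liveness-detection minimum-interval 350

# MC-LAG equivalent: a single ICCP session, also protected by BFD
set protocols iccp peer 10.0.0.2 liveness-detection minimum-interval 350
```

Either way, the sessions only carry control state; forwarding stays in hardware.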





by Juniper Employee
on ‎11-14-2016 10:55 AM

Hi Jeff


There is still a long way to go with EVPN, I agree, but even at this early stage EVPN has passed some significant milestones compared to MC-LAG.

As explained in the blog, I think the main limitations of MC-LAG are:

  • Proprietary (or non-interoperable)
  • Limited to two members

With EVPN, we already have a solution that solves both limitations.


As you mentioned, the issue with EVPN is that we have many route-types and many ways to achieve the same results. Hopefully, over time, everyone will support all route-types and we'll be able to enjoy a true multivendor solution.




by Ankit1
on ‎04-03-2017 11:38 PM

Hi Damien, 

Currently I have deployed MC-LAG in my production environment, which consists of MX480 routers and EX4200 switches. Junos on the MX is 13.3R9. Basically, it's a small setup with two MX480s and four EX4200 switches.


Multicast is not in use, whereas some basic L3 features over IRB interfaces have been deployed to get the node-level redundancy that MC-LAG provides today.


Is it advisable to go for EVPN in the coming days instead of MC-LAG? How much effort is required to migrate from MC-LAG to EVPN? Is any migration document available?


Thanks in advance.



Ankit Jain
