Switching


Ask questions and share experiences about EX and QFX portfolios and all switching solutions across your data center, campus, and branch locations.
  • 1.  QFX-5200 Mac learning issue

    Posted 02-19-2019 20:36

We had the server ports set up as MC-LAG at first, but the server team changed the servers to SET teaming, which does not do LACP.

     

QFX-A and QFX-B are connected with AE256 for ICCP and ICL (ports 30 and 31 on both switches make up the 200G link).
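For context, the ICL/ICCP setup is along these lines (interface names and IP addresses below are anonymized placeholders, not our exact config):

```
# AE256 as the ICL trunk over ports 30 and 31 (2x100G)
set interfaces et-0/0/30 ether-options 802.3ad ae256
set interfaces et-0/0/31 ether-options 802.3ad ae256
set interfaces ae256 aggregated-ether-options lacp active
set interfaces ae256 unit 0 family ethernet-switching interface-mode trunk
set interfaces ae256 unit 0 family ethernet-switching vlan members all
# ICCP peering between the two QFXs (placeholder IPs)
set protocols iccp local-ip-addr 10.0.0.1
set protocols iccp peer 10.0.0.2 liveness-detection minimum-interval 1000
set multi-chassis multi-chassis-protection 10.0.0.2 interface ae256
```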

     

The uplink to the current core is from QFX-A.

MACs learned on QFX-B are not being learned on QFX-A, which we think is causing a unicast storm; if you ping a VM on the server from a Nexus or a Junos box, you get (DUP!) alarms.

 

If we shut the server ports on QFX-A, those alarms go away.

     

Why are we not learning the MACs on the trunk from QFX-A to QFX-B?

Is this not a supported topology? (The MC-LAG docs say a standalone device should MAC-learn via ICCP.)

     

There are two MC-LAG ports configured, and they ARE learning MACs.

     

Do we have a config wrong? An unsupported topology?

     

Juniper TAC said the l2-learning process needed to be restarted. We tried that, plus chassis control, interface control, and the ICCP service, and rebooted both boxes, but the issue remains.
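For reference, the restarts above map to Junos operational commands roughly like this (process names can vary by platform and release; `restart ?` lists what your box actually supports):

```
user@QFX-A> restart l2-learning       # MAC learning daemon (l2ald)
user@QFX-A> restart chassis-control   # chassisd
user@QFX-A> restart interface-control # dcd
user@QFX-A> restart iccpd             # ICCP daemon (name may differ; check restart ?)
user@QFX-A> request system reboot     # last resort
```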

     

I admit it could be an issue on the server side, but my job is the network and I have no access to the servers; I'm just trying to verify my side of things.

     

Any thoughts, facts, and experience you have would be helpful.

The servers plug into both QFXs at 100G, with SET teaming configured.

     

We did try MC-LAG from both QFXs to our core with Nexus vPC, but it kept failing randomly, so we did a single L2 link to the core from QFX-A.



  • 2.  RE: QFX-5200 Mac learning issue

     
    Posted 02-20-2019 04:52

    Hi Andrewmiller,

     

If your MC-LAG (ICL/ICCP) is properly up, MACs should be replicated from one member to the other. Does it share MACs sometimes and not at others, or not share at all? Try configuring "set interfaces irb arp-l2-validate" on the QFXs and see if that resolves it.
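A quick way to check whether ICCP is up and whether MACs are actually being replicated is to compare these on both peers (standard Junos show commands):

```
user@QFX-A> show iccp                      # ICCP session state to the peer
user@QFX-A> show interfaces mc-ae          # MC-LAG interface status
user@QFX-A> show ethernet-switching table  # compare MAC tables on both QFXs
```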

     

    Else please check if this helps: 

    https://www.juniper.net/documentation/en_US/junos/topics/task/troubleshooting/troubleshooting-mc-lag-qfx-series-cli.html

     

    Hope this helps.

     

    Regards,
    -r.

    --------------------------------------------------

    If this solves your problem, please mark this post as "Accepted Solution."
    Kudos are always appreciated :).

     

     



  • 3.  RE: QFX-5200 Mac learning issue

    Posted 02-20-2019 06:44
These VLANs are Layer 2 only; the Nexus core does the ARP. Like I said, the two ports that are MC-LAG do share the MACs.
The Layer 2 VLANs with single-homed devices do not, which is causing unicast flooding out all ports.

The command above is on our L3 interface, which we do have.


  • 4.  RE: QFX-5200 Mac learning issue
    Best Answer

    Posted 02-20-2019 13:55

Found the reason, finally.

     

    https://kb.juniper.net/InfoCenter/index?page=content&id=KB32151&cat=SWITCH_PRODUCTS&actp=LIST 

     

Non-MC-LAG VLANs need their own trunk link, or you have to add them to an MC-LAG port.
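As a rough sketch of the second option (interface and VLAN names below are placeholders): adding the single-homed VLAN to an existing MC-LAG AE's trunk lets ICCP replicate its MACs:

```
# Hypothetical names: ae10 is an existing MC-LAG port, v200 the single-homed VLAN
set interfaces ae10 unit 0 family ethernet-switching vlan members v200
```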



  • 5.  RE: QFX-5200 Mac learning issue

     
    Posted 02-20-2019 17:38

    Hi Andrewmiller,

     

Nice, thanks for sharing. Note that MAC replication in MC-LAG is handled by ICCP, and even for single-homed clients it doesn't use data-path learning (the usual source-MAC learning). 

     

Short of adding a trunk link, I think you can also try adding static MACs on the ICL for such single-homed devices and see if that alleviates the flooding :):

    https://www.juniper.net/documentation/en_US/junos/topics/task/configuration/bridging-static-mac-cli-els.html
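A minimal sketch of that static-MAC workaround under ELS (the VLAN name, interface, and MAC address here are placeholders):

```
# Hypothetical: v200 is the single-homed VLAN, ae256.0 the ICL,
# and the MAC is the single-homed server's address
set vlans v200 switch-options interface ae256.0 static-mac 00:11:22:33:44:55
```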

     

    Hope this helps.

     

    Regards,
    -r.

    --------------------------------------------------

    If this solves your problem, please mark this post as "Accepted Solution."
    Kudos are always appreciated :).



  • 6.  RE: QFX-5200 Mac learning issue

    Posted 02-20-2019 21:52

Yeah, that would be a lot of static MACs for us to add; no thanks, LOL.