Switching


Ask questions and share experiences about EX and QFX portfolios and all switching solutions across your data center, campus, and branch locations.
  • 1.  Round Robin Issues

    Posted 09-25-2013 15:18

    If I am using round-robin bonding on a server (2x 10Gig links), do I need to add those ports into a Link Aggregate or anything on the switch? For some reason the servers can send traffic outbound at 20Gbps (multiple threads), but inbound is around 100-200Mbps when using multiple threads.

     

    Would this be a switch issue, or a server issue with Round Robin? Do I need to add the two ports on the switch into a "Link Aggregate" or something, with LACP set to "none"? I really want to use Round Robin since the SAN software (dm-multipath) is designed around it. Round Robin helps improve sequential reads/writes when it works.
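    For reference, this is roughly how the bond is built on the server side, using iproute2. A sketch only: the NIC names (eth0/eth1) and the address are placeholders for our actual setup.

```shell
# load the bonding driver and create a bond in balance-rr (mode 0)
modprobe bonding
ip link add bond0 type bond mode balance-rr

# enslave both 10Gig NICs (links must be down before joining the bond)
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0

# bring the bond up with the internal address
ip link set bond0 up
ip addr add 10.0.0.10/16 dev bond0
```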



  • 2.  RE: Round Robin Issues
    Best Answer

    Posted 09-25-2013 23:25

    Hi Speedy,

     

    I would suggest creating a link aggregate, but don't use LACP.  The traffic from the switch to the server will simply hash across the links based on MAC/IP/port and so on, so you may not see a massive jump in performance depending on the flows.
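    On an EX that would look something like the sketch below. Interface numbers and the VLAN name are placeholders for your setup; note there is no lacp stanza under ae0, which is what makes the bundle static (no LACP):

```
set chassis aggregated-devices ethernet device-count 1
set interfaces xe-0/0/0 ether-options 802.3ad ae0
set interfaces xe-0/0/1 ether-options 802.3ad ae0
set interfaces ae0 unit 0 family ethernet-switching port-mode access
set interfaces ae0 unit 0 family ethernet-switching vlan members VLAN1
```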



  • 3.  RE: Round Robin Issues

    Posted 09-26-2013 11:34

    OK, this seems to have helped round-robin work better for inbound traffic to the server.

     

    I'm new to this, so sorry for the dumb question. Would you add all the servers to the same aggregate group? Or would each server need its own LAG for its ports?



  • 4.  RE: Round Robin Issues

    Posted 09-26-2013 14:17

    Would this work with round-robin bonding on the servers?

     

    We would use 2 VLANs on the same EX4550. The internal IP block wouldn't be assigned to the VLAN(s), since there is no gateway IP in use and no out-of-network traffic. Then connect each server's 2x 10Gig ports to the 2 different VLANs. Wouldn't it then load balance the traffic to each VLAN, since the server will have the same IP address on bond0?

     

    Seems like this is what some people were suggesting with Round Robin, just wasn't sure if it's correct.



  • 5.  RE: Round Robin Issues

    Posted 09-26-2013 15:26

    You would only configure an aggregated Ethernet link per server, not one split across servers.

     

    If I understand you correctly: if your servers need to be in different VLANs, you would need to assign each ae interface to the appropriate VLAN.  If your servers are running a hypervisor and you have guest machines that need different VLANs, then you'll most likely need to configure both ae interfaces the same and tag the VLANs toward the server.
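    As a sketch (interface numbers and VLAN names are placeholders): each server gets its own bundle, and in the hypervisor case the bundle becomes a trunk carrying the guest VLANs:

```
set chassis aggregated-devices ethernet device-count 2

# server 1 -> ae0
set interfaces xe-0/0/0 ether-options 802.3ad ae0
set interfaces xe-0/0/1 ether-options 802.3ad ae0

# server 2 -> ae1
set interfaces xe-0/0/2 ether-options 802.3ad ae1
set interfaces xe-0/0/3 ether-options 802.3ad ae1

# hypervisor case: trunk the guest VLANs toward the server
set interfaces ae0 unit 0 family ethernet-switching port-mode trunk
set interfaces ae0 unit 0 family ethernet-switching vlan members [ VLAN10 VLAN20 ]
```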

     



  • 6.  RE: Round Robin Issues

    Posted 10-02-2013 11:08

    Have another question that I'm struggling with. After removing the internal IP block from the VLAN, the servers occasionally see packet loss on internal traffic. Since this is all internal traffic, we removed the block (e.g. 10.0.0.1/16) from the VLAN, as it doesn't need any routing outside of the VLAN. Is there something else I need to do so that the switch handles the internal traffic properly? Packet loss goes through the roof when I remove the IP block from the VLAN.

     

    Is there a setting I need to enable/disable for the VLAN so the packet loss is eliminated?



  • 7.  RE: Round Robin Issues

    Posted 10-03-2013 20:06

    You'll need to provide more detail around your topology and configuration.  

     

    Removing the IP address from the VLAN will not cause packet loss.

     

    It may be that your link aggregate is not working correctly and traffic being hashed over a particular link is being dropped.



  • 8.  RE: Round Robin Issues

    Posted 10-03-2013 21:39

    I have it set up like this, or at least want to.

     

    Servers have 2x 10Gig Uplinks. 10Gbps to VLAN1, 10Gbps to VLAN2. Server has Round Robin (mode 0) bonding to the switch.

     

    SANs have 4x 10Gig uplinks: 2x 10Gbps in ae0 to VLAN1, 2x 10Gbps in ae1 to VLAN2. The SAN ports are Round Robin (mode 0) as well. Each SAN has 2x 10Gbps for each ae#, so it can be connected to both VLAN1 and VLAN2 via round robin.

     

     

    Right now I have the same setup but with only VLAN1 (can't separate them yet), and if I remove the IP block 10.0.0.1/16 from VLAN1, the servers start dropping packets for some reason. My hope is to better separate the traffic across 2 VLANs so it is handled more cleanly. People running the same setup as ours had better success with 2 VLANs in round robin than with 1 VLAN.
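    To make the intended end state concrete, a sketch of the switch side (port numbers and VLAN names are placeholders, not our actual config):

```
# server: one 10Gig access port in each VLAN, bonded round-robin on the host
set interfaces xe-0/0/10 unit 0 family ethernet-switching vlan members VLAN1
set interfaces xe-0/0/11 unit 0 family ethernet-switching vlan members VLAN2

# SAN: one static 2x 10Gig bundle per VLAN
set interfaces ae0 unit 0 family ethernet-switching vlan members VLAN1
set interfaces ae1 unit 0 family ethernet-switching vlan members VLAN2
```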