Switching

Ask questions and share experiences about EX and QFX portfolios and all switching solutions across your data center, campus, and branch locations.
  • 1.  VMware setup at hosting provider

    Posted 07-01-2010 12:12

    Hey All-

    We are going to be colocating a VMware environment with a hosting provider.  They recommended we purchase Juniper ex4200 switches, so we did.  The goal is to have all servers/storage redundantly connected to each switch for both performance and availability.  We're completely new to Juniper, so when you answer me, use small words 🙂  We are a smallish company and do not have the luxury of separate departments for networking/sysadmining/application support/etc.

    My question pertains to the configuration of the pair of ex4200s connected via 2 1Gb connections to the hosting provider network.

    The hosting provider says they're providing 'Layer 3' to my (1Gb connection each) two switches, so I don't need to do anything fancy on my end to accept it other than tag the necessary VLANs.  

    The thing I'm having trouble understanding is how to set these switches up - i.e., Virtual Chassis them together, or keep them separate as two 'stacks'?  

    I think I understand how to configure the VC setup, but how would I configure them if I kept them separate?  It seems Juniper's general recommendation is to plug into separate VCs - 'Top of Rack'.  

    What do you guys think I should do?  If I do VC, I'm worried about downtime when I have to upgrade the firmware.

    Gear

    2x Juniper ex4200

    1x NetApp NAS - Dual Controller  <--VMware will be using NFS for storage on this guy

    3x VMware ESXi Hosts <-- Dell r710s

    1x VMware Virtual Center <-- Dell r610

    Please let me know if this isn't clear.

  • 2.  RE: VMware setup at hosting provider

    Posted 07-02-2010 15:35

    This is a pretty standard setup, but also reasonably complex at the same time.

    What seems to be missing from your list is a firewall.  Will that be upstream of your dual feeds from your colocation hosting provider?

    For the overview, what you have is setting up redundant networking for the servers, the storage, and the network itself.  The idea is that if one piece of equipment fails you still have access through the other.  In addition, there are three kinds of network traffic in this system: production, storage, and management.  But many systems will drop the management traffic onto the production network and share the bandwidth.

    Your firewall and path to the internet (as opposed to your private internal network) are undefined here.  So I'll assume this is provided upstream and you are connecting to it via a private IP space.

    Switching

    For redundancy you have two switches.  Your options for the interconnect are:

    • Virtual chassis
    • Connect a trunk port between the two

    The Virtual Chassis connection has the advantage of not using up ports for future growth, but in your case you probably won't need that many for a long time.  The downside is that the configuration is a little more complicated than a trunk port.  The Virtual Chassis connects the two switches together and treats them as a single managed unit.  Traffic between ports on the two switches goes through this interconnection.  You will only have one VC in your setup, and the switches can be racked together anywhere you want in the rack.
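    If you do go the Virtual Chassis route, the two-switch case is small enough to preprovision.  A minimal sketch (the serial numbers are placeholders you would replace with your own):

    ```
    set virtual-chassis preprovisioned
    set virtual-chassis member 0 role routing-engine serial-number <switch-0-serial>
    set virtual-chassis member 1 role routing-engine serial-number <switch-1-serial>
    ```

    With the dedicated VC ports on the rear of the EX4200s cabled together, the two switches then come up as one chassis with a single configuration and a single management address.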

    Switch Ports

    Switch ports come in two flavors, access and trunk.  Access ports have one vlan and connect directly to an ethernet interface on a single device.  Trunk ports connect two switches and are used to transport multiple vlans between them, which saves on port counts to get your interconnections working.  So each access port that connects to servers or the SAN is assigned to a single vlan, while the trunk ports carry multiple vlans between the switches, and between your switches and the provider firewall.
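    In Junos terms that looks something like this (the interface numbers and vlan names here are examples, not anything from your actual setup):

    ```
    # server-facing access port, carries a single vlan
    set interfaces ge-0/0/10 unit 0 family ethernet-switching port-mode access
    set interfaces ge-0/0/10 unit 0 family ethernet-switching vlan members production

    # uplink trunk port, carries several vlans at once
    set interfaces ge-0/0/0 unit 0 family ethernet-switching port-mode trunk
    set interfaces ge-0/0/0 unit 0 family ethernet-switching vlan members [ production management ]
    ```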

    VLANs

    You will need at least two vlans: one for the private space assigned to your production servers from the upstream firewall, and a separate vlan for your SAN.  VMware recommends that you manage your servers on a separate network too.  If you do, then this should also be used for the management interfaces on your SAN.

    You will not need to route the SAN vlan anywhere else but between your servers and your SAN so that stays entirely on your switches.

    Your production server vlan and management vlan will go up to the firewall to allow you access from your provider.
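    Defining the vlans themselves is just a few lines; the names and IDs below are made-up examples:

    ```
    set vlans production vlan-id 100
    set vlans storage vlan-id 200
    set vlans management vlan-id 300
    ```

    The storage vlan gets no routed (Layer 3) interface at all, so that traffic can never leave your switches; only production and management ride the trunk toward the provider's firewall.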

    Devices

    Your SAN should also have two SAN access interfaces and two controller interfaces.  One of each will go to each of your two switches.  The SAN access interfaces connect to your SAN vlan.  The controller interfaces go to either your management or production vlan.

    If these servers are similar to the Dells we have for VMware hosts, you have four ethernet interfaces to work with, so you will connect two to each switch.  For this setup you would use two ports on each server set up as production vlan access ports.  This is a NIC team inside VMware, which doubles your capacity and provides redundancy at the same time.  If you run both a SAN and a management vlan, then your other two ports are set up as trunk ports inside VMware for both.  If you only have the SAN vlan then they can be access ports the same way.

    Your dedicated vcenter server connects to both switches on both the production and management vlan.  This will not need access to the san at all.

    This gives you then an active path for every device even if one of the switches were to fail.

    VMware Networking

    Inside your VMware host you will create either two or three virtual switches, one for each vlan: production, SAN, and management.  You assign your physical NICs to the correct virtual switch to line all these up inside the host.  This essentially extends the switch fabric so multiple virtual hosts can use it.

    You need to create service console ports (VMkernel ports on ESXi) on both the SAN and the management virtual switches.  These are how management connections and storage access occur in VMware.  You won't need one on the production switch if you have a management vlan set up.
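    A rough sketch of that step using the esxcfg commands from the ESXi console (the vSwitch name, vmnic numbers, portgroup label, and addresses are all placeholders for whatever your environment uses):

    ```
    # second vSwitch for the storage vlan, with two uplinks for redundancy
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1
    esxcfg-vswitch -A "Storage" vSwitch1

    # VMkernel port the host will use to reach the NFS storage
    esxcfg-vmknic -a -i 10.0.200.11 -n 255.255.255.0 "Storage"
    ```

    You can do the same thing through the vSphere client GUI; the commands just make the moving parts explicit.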

    When you create the virtual machines, you then just need a NIC connected to the production vlan.  If you run a separate management vlan for the virtual servers you can simply add a second NIC and connect it to that virtual switch for access.

    Storage network

    You create the NFS volumes on the SAN and set up any security for their presentation to the servers.  Each VMware host then attaches to them on the SAN vlan via the console port address set up in that host.
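    Attaching an NFS datastore on each host is then one command (the filer address, export path, and datastore label below are made up):

    ```
    esxcfg-nas -a -o 10.0.200.5 -s /vol/vmware_datastore vmware_nfs01
    ```

    Run the same command on all three hosts so vCenter sees it as a shared datastore, which is what lets you move guests between hosts.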

  • 3.  RE: VMware setup at hosting provider

    Posted 07-07-2010 14:53

    Wow, thanks for your thorough reply!  Most helpful.

    You surmise correctly about everything, including the firewall being upstream from my two feeds from the hosting provider.  Several VLANs will be tagged on those lines for various networks - web, app, DB, and management.  

    So your recommendation for this scenario is to create a VC out of the switches?  Can I do software updates on the VC without incurring a downtime to hosts connected to each switch?  

    Again, your reply was very helpful and reassuring to me.  

  • 4.  RE: VMware setup at hosting provider
    Best Answer

    Posted 07-07-2010 15:20

    I would use the VC in this setup.  Yes, you can upgrade individual switches or the whole chassis at once.  See KB10840 for the general instructions for both methods.
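    For reference, the per-member method looks roughly like this from the master switch's CLI (the package filename is a placeholder for whatever image matches your Junos version):

    ```
    request system software add /var/tmp/<jinstall-ex-4200-package>.tgz member 1 reboot
    ```

    You upgrade and reboot one member while the other keeps forwarding, then repeat for the other member; KB10840 walks through the caveats of each method.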

    Your scenario is essentially two switches in the same closet.  This guide from the documentation site shows how to set up a basic two-switch VC.

    If you haven't seen it yet, sign up for the free Juniper fast-track program.  The online introductory class on JUNOS as a switching language will help a lot.  They have other free training material, but pretty much all of what you will need is covered in that 90 minute overview with examples and exercises.

  • 5.  RE: VMware setup at hosting provider

    Posted 07-08-2010 12:33

    Many thanks for all your help, including the KB article and JUNOS course.  I'll jump all over that!

    -jeff

  • 6.  RE: VMware setup at hosting provider

    Posted 07-09-2010 02:53

    If the R710's are yours, I would recommend getting an additional 4-port gig card for the two servers.   This will give you NIC redundancy in the servers.   If you lose the onboard NICs in a server, you won't have a data channel to migrate the VMs to the other hardware.   

     

    If you do this, be aware that the port numbering on the on-board NICs is the opposite of the PCI card's: 1234 vs. 8765 (or the other way around).