This is a pretty standard setup, but also reasonably complex at the same time.
What seems to be missing from your list is a firewall. Will that be upstream of your dual feeds from your colocation hosting provider?
For the overview: what you have is redundant networking for the servers, the storage and the network itself. The idea is that if one piece of equipment fails you still have access through the other. In addition, there are three kinds of network traffic in this system: production, storage and management. Many systems will drop the management traffic onto the production network and share the bandwidth.
Your firewall and path to the internet, as opposed to your private internal network, is undefined here. So I'll assume this is provided upstream and that you are connecting to it via private IP space.
switching
For redundancy you have two switches. Your options for the interconnect are:
- Virtual chassis
- Connect a trunk port between the two
The virtual chassis connection has the advantage of not using up ports for future growth, though in your case you probably won't need that many for a long time. The downside is that the configuration is a little more complicated than a trunk port. The virtual chassis connects the two switches together and treats them as a single managed unit. Traffic between ports on the two switches goes through this interconnection. You will only have one VC in your setup and the switches can be racked together anywhere you want in the rack.
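For reference, on Juniper EX switches (where the virtual chassis feature comes from) a preprovisioned two-member VC is only a few lines of config plus cabling the dedicated VC ports; the serial numbers below are placeholders, not yours:

```
# preprovisioned two-member virtual chassis; serial numbers are placeholders
set virtual-chassis preprovisioned
set virtual-chassis member 0 serial-number AB0123456789 role routing-engine
set virtual-chassis member 1 serial-number CD0123456789 role routing-engine
```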
Switch Ports
Switch ports come in two flavors, access and trunk. Access ports have one vlan and connect directly to an ethernet interface on a single device. Trunk ports connect two switches and are used to transport multiple vlans between them, which saves on port counts to get your interconnections working. So each access port that connects to servers or the SAN is assigned to a single vlan, while trunk ports carry multiple vlans between the two switches, and between the provider firewall and your switch.
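To make the two flavors concrete, here is a sketch in Junos-style syntax; the interface names and vlan names are assumptions for illustration only:

```
# access port: a single vlan, straight to a server or SAN interface
set interfaces ge-0/0/1 unit 0 family ethernet-switching port-mode access
set interfaces ge-0/0/1 unit 0 family ethernet-switching vlan members production
# trunk port: carries several vlans toward the other switch or the firewall
set interfaces ge-0/0/47 unit 0 family ethernet-switching port-mode trunk
set interfaces ge-0/0/47 unit 0 family ethernet-switching vlan members [ production san management ]
```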
vlans
You will need at least two vlans: one for the private space assigned to your production servers from the upstream firewall, and a separate vlan for your SAN. VMware recommends that you manage your servers on a separate network too. If you do, then this should also be used for the management interfaces on your SAN.
You will not need to route the SAN vlan anywhere else but between your servers and your SAN so that stays entirely on your switches.
Your production server vlan and management vlan will go up to the firewall to allow you access from your provider.
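Defining the vlans themselves is a one-liner each in Junos-style syntax; the IDs here are arbitrary examples, use whatever your provider hands you:

```
# vlan IDs are examples only
set vlans production vlan-id 10
set vlans san vlan-id 20
set vlans management vlan-id 30
```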
Devices
Your SAN should also have two SAN access interfaces and two controller interfaces. One of each will go to each of your two switches. The SAN access interfaces connect to your SAN vlan. The controller interfaces go to either your management or production vlan.
If these servers are similar to the Dells we have for vmware hosts, you have four ethernet interfaces to work with, so you will connect two to each switch. For this setup you would use two ports on each server set up as production vlan access ports. This is a nic team inside vmware, which doubles your capacity and provides redundancy at the same time. If you run both a san and a management vlan, then your other two ports are set up as trunk ports inside vmware for both. If you only have the san, then they can be access ports in the same way.
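As a sketch of the classic ESX service-console commands for the production nic team (the vmnic numbers and port group name are assumptions), the two production uplinks land on one virtual switch like this:

```
# classic ESX service console; vmnic names are placeholders
esxcfg-vswitch -a vSwitch0                # production virtual switch
esxcfg-vswitch -L vmnic0 vSwitch0         # uplink to switch 1
esxcfg-vswitch -L vmnic1 vSwitch0         # uplink to switch 2
esxcfg-vswitch -A "Production" vSwitch0   # port group for the guests
```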
Your dedicated vcenter server connects to both switches on both the production and management vlan. This will not need access to the san at all.
This gives you an active path for every device even if one of the switches were to fail.
VMware Networking
Inside your VMware host you will create either two or three virtual switches, one for each vlan: production, san and management. You assign your physical nics to the correct virtual switch to line all these up inside the host. This essentially extends the switch fabric for multiple virtual hosts to use.
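If the second pair of nics carries the san and management vlans as trunks, the vmware side is one virtual switch with a vlan-tagged port group for each; a sketch in classic ESX commands, where the vmnic names and vlan IDs are placeholders:

```
# trunked virtual switch carrying both san and management vlans
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1             # uplink to switch 1
esxcfg-vswitch -L vmnic3 vSwitch1             # uplink to switch 2
esxcfg-vswitch -A "SAN" vSwitch1
esxcfg-vswitch -v 20 -p "SAN" vSwitch1        # tag san vlan on the port group
esxcfg-vswitch -A "Management" vSwitch1
esxcfg-vswitch -v 30 -p "Management" vSwitch1 # tag management vlan
```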
You need to create a vmkernel port on the san virtual switch and a service console port on the management virtual switch. These are how storage access and management connections occur in vmware. You won't need either on the production switch if you have a management vlan setup.
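On classic ESX that works out to one command each; the addresses and port group names below are placeholders for whatever your vlans use:

```
# management service console interface on the Management port group
esxcfg-vswif -a vswif1 -p "Management" -i 10.0.30.11 -n 255.255.255.0
# vmkernel port on the SAN port group, used for nfs storage traffic
esxcfg-vmknic -a -i 10.0.20.11 -n 255.255.255.0 "SAN"
```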
When you create the virtual servers, you then just need a nic connected to the production vlan. If you run a separate management vlan for them, you can simply add a second nic and connect it to that virtual switch for access.
Storage network
You create the nfs volumes on the SAN and set up any security for their presentation to the servers. Each vmware host then attaches to them over the san vlan via the console/vmkernel port address set up on that host.
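Mounting the nfs export as a datastore is then one command per host on classic ESX; the SAN address, export path and datastore label here are placeholders:

```
# attach the nfs export as a datastore; address and path are placeholders
esxcfg-nas -a -o 10.0.20.5 -s /vol/vmstore datastore1
esxcfg-nas -l    # list configured nas datastores to confirm the mount
```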