I'm not 100% sure I'm doing this correctly, and wanted to get someone else's eyes on it.
I have multiple L3 subnets that live in one VLAN.
Unfortunately, to route between these subnets, traffic has traditionally needed to leave our switches (10Gbps uplink) and hit the upstream router interface (only to reflect back into our network). I would like to keep this traffic within our layer 2 environment so that we can take advantage of our 40GbE core, and reduce egress traffic to the upstream router.
I've decided to do some testing of a virtual-router configuration on our distribution switches (a pair of QFX5100-24Q switches in virtual-chassis mode).
My thought process is to add an IP on each of the subnets to act as the default gateway for our host systems (they currently point to our upstream as the default gateway).
To accomplish this, I added the IPs I'd like to use for my router-instance to the irb unit 0 interface as follows:
irb {
    unit 0 {
        description "Virtual Router Interface for L3 Core";
        family inet {
            mtu 9000;
            no-redirects;
            address xxx.xxx.xxx.xxx/24;
            address xxx.xxx.xxx.xxx/21;
            address xxx.xxx.xxx.xxx/23;
            address xxx.xxx.xxx.xxx/25;
            address xxx.xxx.xxx.xxx/25;
        }
    }
}
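For reference, the same thing in set form, if I've translated it correctly (placeholder addresses kept from above):

set interfaces irb unit 0 description "Virtual Router Interface for L3 Core"
set interfaces irb unit 0 family inet mtu 9000
set interfaces irb unit 0 family inet no-redirects
set interfaces irb unit 0 family inet address xxx.xxx.xxx.xxx/24
set interfaces irb unit 0 family inet address xxx.xxx.xxx.xxx/21
set interfaces irb unit 0 family inet address xxx.xxx.xxx.xxx/23
set interfaces irb unit 0 family inet address xxx.xxx.xxx.xxx/25
set interfaces irb unit 0 family inet address xxx.xxx.xxx.xxx/25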
Then, I added the irb.0 interface to the VLAN:
root@qfx# show vlans BuildNetwork
vlan-id 1135;
l3-interface irb.0;
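In set form (assuming I'm reading the show output back correctly), that binding is just:

set vlans BuildNetwork vlan-id 1135
set vlans BuildNetwork l3-interface irb.0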
I then created a virtual-router routing instance containing this irb.0 interface, with a static default route pointing at the upstream router's gateway (yyy.yyy.yyy.yyy):
instance-type virtual-router;
interface irb.0;
routing-options {
    static {
        route 0.0.0.0/0 next-hop yyy.yyy.yyy.yyy;
    }
    localize {
        unicast-only;
    }
    router-id xxx.xxx.xxx.xxx;
}
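For anyone wanting to check routing in the instance, verification could look something like this ("VR-CORE" is just a placeholder; substitute whatever the routing instance is actually named):

show route table VR-CORE.inet.0
show interfaces irb.0 terse
ping xxx.xxx.xxx.xxx routing-instance VR-CORE count 5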
Traffic is routing properly with this configuration, but I'm seeing duplicate ICMP replies when pinging from one subnet to another.
Any immediate gotchas that anyone sees? (Sorry for the obfuscated IPs.)
Thank you,
-- Andrew