SRX Services Gateway

VPLS and trunking VLANs

02-04-2014 02:33 PM

Hi everyone. This has got me totally stumped. I've been trying to debug this on and off for about a week and have resorted to Wireshark to make sure things are working as expected, and so far I can't see a reason why I am getting this problem.

 

I'm trying to configure VPLS between three SRX240s. Picture them in this topology:

 

SRX1 - SRX2 - SRX3

 

Over this VPLS I have tried two configurations:

 

1. I added an entire interface to the VPLS and trunked some VLANs over it, and everything worked just fine.

 

2. I am trying to assign individual units from an interface configured for VLAN trunking, each with its own VLAN ID, to the same VPLS instance to trunk ad-hoc VLANs.

 

I suppose I use the term "trunking" rather loosely.

 

But anyway, this is what I am observing:

 

Upon adding the first unit at each site, everything works fine. Data flow works both ways.

 

As soon as I add a second unit at each site, things start to break down and data flow seems to stop. Using Wireshark I can see that there is at least one-way data flow from SRX1 and SRX3 towards SRX2, and I can also see this data being transmitted out towards a switch that I am using to mirror a port for the Wireshark captures. Across the MPLS section of the network I can see that the VLAN IDs are maintained for frames from the different VLANs. But the traffic that should be flowing back in the other direction, from SRX2 towards SRX1 and SRX3, seems to disappear.
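To be concrete, "adding a unit at each site" just means committing the unit itself plus the routing-instance interface statement on each box, along these lines (unit and VLAN numbers as per the config further down):

set interfaces ge-0/0/5 unit 50 encapsulation vlan-vpls
set interfaces ge-0/0/5 unit 50 vlan-id 50
set routing-instances VPLS interface ge-0/0/5.50

With only unit 50 in the instance everything is fine; as soon as I repeat the same lines for unit 60 / VLAN 60 on each box, the return traffic from SRX2 disappears.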

 

I've been doing all sorts of reading around and I simply can't find a solution to this.

 

This is my routing-instance configuration:

 

routing-instances {
    VPLS {
        instance-type vpls;
        route-distinguisher 12345:1;
        vrf-target {
            import target:12345:1;
            export target:12345:1;
        }
        protocols {
            vpls {
                site-range 10;
                no-tunnel-services;
                site SRX2 {
                    site-identifier 1;
                }
                vpls-id 1;
            }
        }
    }
}

Simply duplicate it on the other SRXs and make the appropriate changes.
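To be clear about what "appropriate changes" means: only the site stanza differs between the boxes, e.g. on SRX1 (the identifier values here are just an illustration):

set routing-instances VPLS protocols vpls site SRX1 site-identifier 2

and similarly site SRX3 with its own identifier on the third box; everything else in the instance is identical.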

 

This is the interface configuration I tried under scenario 1 above:

 

interfaces {
    ge-0/0/12 {
        description "VPLS test interface";
        encapsulation ethernet-vpls;
        unit 0 {
            family vpls;
        }
    }
}
routing-instances {
    VPLS {
        interface ge-0/0/12.0;
    }
}

 

And this is the configuration I am trying now (scenario 2):

 

interfaces {
    ge-0/0/5 {
        description "Aggregation interface";
        vlan-tagging;
        mtu 1564;
        encapsulation flexible-ethernet-services;
        unit 50 {
            encapsulation vlan-vpls;
            vlan-id 50;
        }
        unit 60 {
            encapsulation vlan-vpls;
            vlan-id 60;
        }
    }
}
routing-instances {
    VPLS {
        interface ge-0/0/5.50;
        interface ge-0/0/5.60;
    }
}

I've tried all kinds of combinations of things: setting a vlan-id within the routing-instance, setting it to all/none, pushing and popping VLAN IDs on the interface units, setting and removing the "family vpls" config on the interface units...

 

Basically everything I could possibly find online.
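To give one concrete example of the sort of combination I mean (reconstructed from memory, so the exact statements may be slightly off), one attempt was to normalise the VLAN in the instance and pop/push the tag on the units:

set routing-instances VPLS vlan-id none
set interfaces ge-0/0/5 unit 50 input-vlan-map pop
set interfaces ge-0/0/5 unit 50 output-vlan-map push
set interfaces ge-0/0/5 unit 50 family vpls

(plus the equivalent for unit 60), along with variations using vlan-id all, a specific vlan-id, and with the vlan maps and family vpls removed again.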

 

I'm just not sure if I'm hitting some kind of limitation of the SRX platform. Can it even do this?

 

Is it a Junos bug? I have tried a different version and it behaves no differently.

 

Am I missing some kind of really obvious setting needed to get multiple logical units from the same physical interface into the same VPLS without merging them? I want to maintain the VLAN IDs, and that does appear to be happening from what I can see in Wireshark, so I don't think I need to do anything special there.

 

I just can't, for the life of me, get multiple VLANs going at the same time.
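If it would help, I'm happy to post output from the operational side as well, e.g.:

show vpls connections extensive
show vpls mac-table instance VPLS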

 

Hoping someone has done this or knows the answer, because I'm banging my head against the wall at the moment!

 

Thanks

Tom