Andrew,
There are many ways to accomplish this. The simplest is to perform ingress policing on the L3VPNs and use differential WRED. I would caution against ever using a policer to split packets from a single flow across more than one queue, as that guarantees out-of-order delivery for anything exceeding the policer. A cleaner approach would be to classify all of a customer's traffic into its own forwarding class and give each customer a dedicated egress queue, but with only eight egress queues per port that limits the scalability of the solution.
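For completeness, the queue-per-customer variant would look roughly like this (the class, filter, and scheduler names below are made up for illustration): each customer unit gets an input filter that sets a dedicated forwarding class, and each class maps to its own egress queue and scheduler:
firewall {
    family inet {
        filter cust-a-class {
            term all {
                then {
                    forwarding-class cust-a;
                    accept;
                }
            }
        }
    }
}
class-of-service {
    forwarding-classes {
        class cust-a queue-num 0;
        class cust-b queue-num 1;
    }
    interfaces {
        ge-0/0/0 {
            scheduler-map per-customer;
        }
    }
    scheduler-maps {
        per-customer {
            forwarding-class cust-a scheduler sch-cust-a;
            forwarding-class cust-b scheduler sch-cust-b;
        }
    }
    schedulers {
        sch-cust-a {
            transmit-rate 300m;
        }
        sch-cust-b {
            transmit-rate 200m;
        }
    }
}
With eight queues per port you would exhaust this after a handful of customers, which is why I lean toward the policer/WRED approach.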
Here's how I would use policing/wred to accomplish your goal:
On the ingress (customer-facing) interface, either attach a firewall filter that references a policer, or apply a logical-interface policer under the [ edit interfaces ge-x/y/z unit n family inet ] hierarchy with "policer { input <policer-name>; }". The policer's exceed action is set to loss-priority high. This is an internal marking carried in the packet's metadata within the router, which can later be used to trigger differential WRED on the out-of-spec packets.
The policers are defined under [ edit firewall ]:
policer 300m {
    if-exceeding {
        bandwidth-limit 300m;
        burst-size-limit 300k;
    }
    then loss-priority high;
}
policer 200m {
    if-exceeding {
        bandwidth-limit 200m;
        burst-size-limit 200k;
    }
    then loss-priority high;
}
The policers are attached to the customer-facing ports. In this case, ge-0/0/1 is VLAN-tagged, with each tag corresponding to a customer:
interfaces {
    ge-0/0/1 {
        description Customer-Facing;
        vlan-tagging;
        link-mode full-duplex;
        unit 100 {
            vlan-id 100;
            family inet {
                policer {
                    input 300m;
                }
                address 10.1.1.1/24;
            }
        }
        unit 200 {
            vlan-id 200;
            family inet {
                policer {
                    input 300m;
                }
                address 10.1.2.1/24;
            }
        }
        unit 300 {
            vlan-id 300;
            family inet {
                policer {
                    input 200m;
                }
                address 10.1.3.1/24;
            }
        }
    }
}
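As an aside, the firewall-filter form of the attachment I mentioned earlier is functionally equivalent (the filter name here is arbitrary) and is handy if you also want counters or multiple match terms:
firewall {
    family inet {
        filter police-cust100 {
            term police {
                then {
                    policer 300m;
                    accept;
                }
            }
        }
    }
}
interfaces {
    ge-0/0/1 {
        unit 100 {
            family inet {
                filter {
                    input police-cust100;
                }
            }
        }
    }
}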
The logic here is that ingress traffic is run through the policer (in this example 300 Mb/s with a 300 KB burst size; burst-size-limit is specified in bytes). Packets conforming to the policer pass unmarked. Packets exceeding the policer are marked loss-priority high (but not dropped). You would attach a policer to each customer-facing logical interface.
Then, as packets are enqueued on the egress (core-facing) interface, you configure CoS so that as queue depth increases the marked packets are dropped before other traffic. This way, traffic up to the policed rate passes through unmolested, while excess traffic is subject to drop in the event of congestion.
Here's an example of a simple CoS config that will apply separate WRED profiles to marked vs unmarked traffic (in this case applied to the MPLS core interface ge-0/0/0):
class-of-service {
    drop-profiles {
        aggressive {
            interpolate {
                fill-level [ 50 75 90 100 ];
                drop-probability [ 0 50 75 100 ];
            }
        }
        hockeystick {
            interpolate {
                fill-level [ 75 90 100 ];
                drop-probability [ 0 75 100 ];
            }
        }
    }
    interfaces {
        ge-0/0/0 {
            scheduler-map wan;
        }
    }
    scheduler-maps {
        wan {
            forwarding-class best-effort scheduler sch-normal;
            forwarding-class network-control scheduler sch-nc;
        }
    }
    schedulers {
        sch-normal {
            transmit-rate percent 90;
            buffer-size percent 90;
            priority low;
            drop-profile-map loss-priority high protocol any drop-profile aggressive;
            drop-profile-map loss-priority low protocol any drop-profile hockeystick;
        }
        sch-nc {
            transmit-rate percent 5;
            buffer-size percent 5;
            priority high;
        }
    }
}
Packets that exceed the policers are marked loss-priority high. As these are enqueued on the egress interface, any PLP-high packets are subject to the "aggressive" WRED profile (which starts dropping above 50% queue depth). Unmarked packets are left alone until the queue becomes heavily congested, at which point the "hockeystick" profile kicks in to provide fair drops and protect TCP throughput.
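Once configured, you can verify the behavior from the CLI. "show policer" lists each policer instance with its out-of-spec packet count (interface policers show up with auto-generated names along the lines of 300m-ge-0/0/1.100-inet-i), and "show interfaces queue ge-0/0/0" breaks out per-queue transmitted vs. RED-dropped counters. Exact output varies by platform:
show policer
show interfaces queue ge-0/0/0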