Hi!
We have a very similar requirement - only Citrix (ICA; TCP 1494) needs a guaranteed bandwidth of 1 Mbps on a 2 Mbps link. The LAN interface is vlan.0, the WAN interface is fe-0/0/0.
Currently the Citrix connections suffer miserably whenever print jobs (outside ICA) are started.
I tried to adapt your example on our SRX100, which resulted in the following config snippet:
set firewall family inet filter classify term 10 from protocol tcp
set firewall family inet filter classify term 10 from destination-port 1494
set firewall family inet filter classify term 10 then forwarding-class expedited-forwarding
set firewall family inet filter classify term 30 then forwarding-class best-effort
set interfaces vlan unit 0 family inet filter input classify
set class-of-service interfaces fe-0/0/0 scheduler-map my-scheduler-map
set class-of-service scheduler-maps my-scheduler-map forwarding-class expedited-forwarding scheduler ica
set class-of-service scheduler-maps my-scheduler-map forwarding-class best-effort scheduler other
set class-of-service schedulers ica transmit-rate 1m
set class-of-service schedulers ica priority high
set class-of-service schedulers other transmit-rate 1m
set class-of-service schedulers other priority low
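To double-check the bindings, the CoS state can be inspected with the standard Junos operational commands (interface and scheduler-map names as above):

```
run show class-of-service interface fe-0/0/0
run show class-of-service scheduler-map my-scheduler-map
run show interfaces queue fe-0/0/0
```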
Unfortunately, the config has no effect in our situation: UDP printing traffic is squeezing out the TCP traffic.
I simulated this with two iperf connections going through the SRX: the first on TCP 1494, the second on UDP 631.
At t=0s I start the TCP connection and get around 2 Mbps.
At t=10s I start the UDP connection while TCP is still running. UDP hogs almost 2 Mbps while TCP drops to about 200 Kbps.
At t=20s the UDP connection is stopped and TCP goes back up to 2 Mbps.
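For reference, the test traffic was generated roughly like this (hostnames and the UDP target bandwidth are my assumptions, not exact reproductions of the commands used):

```
iperf -s -p 1494                    # TCP server behind the SRX
iperf -c <server> -p 1494 -t 30     # TCP flow standing in for ICA
iperf -s -u -p 631                  # UDP server
iperf -c <server> -u -p 631 -b 2M   # UDP flow standing in for printing
```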
Checking the queues on the outgoing interface as you suggested, we can actually see that expedited-forwarding only gets about one tenth of best-effort:
root@conhIT-srx# run show interfaces queue fe-0/0/0
:
Queue: 0, Forwarding classes: best-effort
Queued:
Packets : 17216 170 pps
Bytes : 18767113 2060304 bps
Transmitted:
Packets : 17216 170 pps
Bytes : 18767113 2060304 bps
:
Queue: 1, Forwarding classes: expedited-forwarding
Queued:
Packets : 4904 23 pps
Bytes : 6957752 272232 bps
Transmitted:
Packets : 4904 23 pps
Bytes : 6957752 272232 bps
:
What am I doing wrong?
Of course I could hard-limit 'other' to 1 Mbps, but that would waste precious bandwidth if ICA doesn't use up its guaranteed 1 Mbps, right?
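For completeness: Junos schedulers are work-conserving, so transmit-rate is a guarantee under congestion, not a cap. The hard limit I want to avoid would need the exact keyword, which burns the unused bandwidth:

```
set class-of-service schedulers other transmit-rate 1m exact
```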
Cheers,
Kai
N.B.: I think I got it.
I applied:
set interfaces fe-0/0/0 per-unit-scheduler
and modified the interface stanza under class-of-service to apply the scheduler map per unit and shape the logical unit to the actual WAN rate - so congestion (and therefore scheduling) now happens on the SRX instead of upstream:
set class-of-service interfaces fe-0/0/0 scheduler-map my-scheduler-map
set class-of-service interfaces fe-0/0/0 unit 0 scheduler-map my-scheduler-map
set class-of-service interfaces fe-0/0/0 unit 0 shaping-rate 2m
Running the iperf now, TCP is reduced to 1 Mbps once UDP starts, which is the expected behaviour.
Running the iperf tests separately, each uses the full 2 Mbps - QED
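For anyone hitting the same wall, here are the relevant pieces combined (all taken from the snippets above, nothing new):

```
set firewall family inet filter classify term 10 from protocol tcp
set firewall family inet filter classify term 10 from destination-port 1494
set firewall family inet filter classify term 10 then forwarding-class expedited-forwarding
set firewall family inet filter classify term 30 then forwarding-class best-effort
set interfaces vlan unit 0 family inet filter input classify
set interfaces fe-0/0/0 per-unit-scheduler
set class-of-service interfaces fe-0/0/0 unit 0 scheduler-map my-scheduler-map
set class-of-service interfaces fe-0/0/0 unit 0 shaping-rate 2m
set class-of-service scheduler-maps my-scheduler-map forwarding-class expedited-forwarding scheduler ica
set class-of-service scheduler-maps my-scheduler-map forwarding-class best-effort scheduler other
set class-of-service schedulers ica transmit-rate 1m
set class-of-service schedulers ica priority high
set class-of-service schedulers other transmit-rate 1m
set class-of-service schedulers other priority low
```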