03-10-2011 08:02 AM
I'm testing a site-to-site VPN with an SRX240H at each endpoint. Unfortunately, I can only get 50 Mbit/s of throughput, which is far lower than what I expected. Using des/md5 instead of aes/sha doesn't change anything, and CPU usage is always low.
I need any tips on performance troubleshooting and tuning, please.
P.S. I've got 10.0R3.10 on both devices.
03-10-2011 01:34 PM
You've already set 'security flow tcp-mss ipsec-vpn mss 1350', correct? (Adjust for your path, of course; this assumes at least a 1500-byte MTU between endpoints.) Fragmentation will slash throughput on these units. For example, rsync over IPsec between two 240s averages a paltry 2 MB/s without the MSS adjustment, compared to 20 MB/s with it.
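For reference, a minimal sketch of the stanza being discussed, in set-command form (the 1350 value assumes a 1500-byte path MTU; adjust for your own path):

```
set security flow tcp-mss ipsec-vpn mss 1350
```

This clamps the MSS on TCP sessions entering the tunnel so that packets plus the ESP overhead still fit in the path MTU, avoiding fragmentation.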
03-10-2011 09:01 PM
Of course I already have 'security flow tcp-mss ipsec-vpn mss 1350' in my config! What else can I tune?
The SRX240 datasheet says IPsec performance should be up to 250 Mbit/s, but I'm only getting 50 Mbit/s.
03-11-2011 06:15 AM
50 is a little low. You should get about 110 Mbit/s with IMIX traffic, and about 30 Mbit/s in the worst case of 64-byte packets. Do you know the packet size / packet mix you are sending through the tunnel?
I'd re-test with 10.2r3. Are these devices in a lab or in production? Verifying "the usual suspects" like duplex and issues with the circuit may be worthwhile.
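A few operational commands are worth running while re-testing (names as used on Junos of this era; verify the exact output fields on your release):

```
show security ipsec statistics    # look for decrypt/auth errors and fragmentation counters
show interfaces extensive         # check for interface errors and duplex mismatches
show security flow session        # confirm the test traffic is actually matching the tunnel
```

Rising error or fragmentation counters during a transfer usually point to the circuit or MTU rather than the crypto engine.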
03-11-2011 07:27 AM
The devices are in a pre-production state, so consider them in a lab for now. I'm transferring files over HTTP or FTP, so the packets are large. No other traffic goes through the tunnel during my tests. As for duplex and other issues, everything is OK: I've tested throughput in a 'routed' configuration (without the VPN), and transfer speed then goes to the maximum.
08-08-2011 10:26 AM
Was this ever resolved?
I am seeing the same issue: individual sessions only achieve 800-1200 Kbit/s (on bulk TCP file transfers), and total performance seems to max out at about 3 Mbit/s across all sessions.
I have the tcp-mss set and am using the standard proposal set for the ipsec policy.
As noted above, routed performance maxes out at (more or less) line speed. The same is true if I apply a simple NAT rule.
08-08-2011 10:49 AM - edited 08-08-2011 10:52 AM
For large file transfers there are retransmissions, which add latency.
Try the following command on both sides; it ensures the SRX does not drop out-of-sequence packets.
#set security flow tcp-session no-sequence-check
08-08-2011 03:20 PM
Disabling sequence checking is inadvisable on a firewall. There was even recent news recommending strict sequence checking to protect against certain types of attacks. If you must proceed, I'd recommend looking at the 11.2 code, since you can selectively enable/disable it per policy.
08-08-2011 03:30 PM
I agree with your view that disabling the sequence check is inadvisable for a firewall, but there are instances when packets sent by the server arrive out of sequence and are dropped by the firewall.
In those instances it is a trade-off with security. Selective enabling/disabling of the sequence check per policy is supported from 10.4R2 onwards. Ref: http://kb.juniper.net/InfoCenter/index?page=conten
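If you go the selective route on 10.4R2 or later code, the configuration shape is roughly as follows (zone and policy names here are placeholders, and the exact per-policy syntax should be verified against the release documentation):

```
# Disable sequence checking globally...
set security flow tcp-session no-sequence-check
# ...then re-require it only on policies where you want strict checking
set security policies from-zone trust to-zone untrust policy allow-out then permit tcp-options sequence-check-required
```

This way the relaxed behavior applies only to the traffic that actually suffers from out-of-sequence drops, rather than to the whole box.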
02-24-2014 09:04 PM
I am guessing that in over two years no solution was found, or none was posted here? I am experiencing the same thing: poor performance on a line that does have high latency, about 150 ms (London to the US West Coast).
No matter what settings I try, I'm also seeing fairly poor performance on a 100 Mbit link on our end and a 20 Mbit link on the UK side: about 2-3 Mbit/s, very poor.
02-25-2014 02:13 AM
Max throughput of a single TCP stream at 150 ms RTT is only about 3.5 Mbit/s at the standard 64 KB window size. Unless you're pushing multiple streams or using some sort of WAN optimization, you won't get much more than 2-3 Mbit/s.
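The 3.5 Mbit/s figure falls out of the bandwidth-delay product. A quick sketch of the arithmetic in Python (assuming a single stream with no TCP window scaling, i.e. a 64 KB maximum window, and the 150 ms RTT quoted above):

```python
def max_tcp_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Theoretical single-stream TCP ceiling: one window per round trip."""
    return window_bytes * 8 / rtt_seconds / 1_000_000

# 64 KB window (65535 bytes, no window scaling) at 150 ms RTT
ceiling = max_tcp_throughput_mbps(65535, 0.150)
print(f"{ceiling:.1f} Mbit/s")  # roughly 3.5 Mbit/s
```

Larger windows (via window scaling, RFC 1323) or multiple parallel streams are the usual ways around this ceiling on high-latency paths.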