vSRX

10G interfacing on the vSRX

02-07-2018 01:20 PM

I'm lab testing to see the real performance across a vSRX, and I'm not seeing full performance in a simple test, so I'm trying to figure out what I may be missing. This is supposed to be a really simple demonstration of performance: 1 box, with 1 vSRX, with 2 interfaces to 2 Acterna test sets.

 

The vSRX is 17.3R1.10, running under KVM on Ubuntu 16.04.3 LTS.

I've allocated 16G of RAM and 8 vCPUs, not shared with anything else.

I have an Intel 4-port 10G SFP NIC installed (Intel X710 chipset).

I've updated the kernel module, adapter driver (3.45), and firmware (6.01).

There are Intel-supported 10G SFPs installed.

 

I've mapped an SR-IOV interface on Port 1 of the NIC and another SR-IOV interface on Port 4 of the NIC. There are no other VMs on this server, and there are no other demands on the 10Gb ports. I have an Acterna BERT test between the two running traffic in one direction only. I'm seeing lost frames at 4.250 Gbps, and I'm not sure what I can do to increase performance across the VM. If I start traffic in the other direction, I can cause frame loss on demand. The vSRX datasheet says it can support 100G, but there weren't details on how to achieve that. I'm not sure what limitation I'm hitting.
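For reference, here is roughly how I did the SR-IOV mapping on the KVM host; the interface names and PCI address below are placeholders rather than my exact values:

    # create one VF on each physical port (ens2f0/ens2f3 are placeholder names)
    echo 1 > /sys/class/net/ens2f0/device/sriov_numvfs
    echo 1 > /sys/class/net/ens2f3/device/sriov_numvfs

Each VF is then attached to the guest as a hostdev-type interface in the libvirt domain XML:

    <interface type='hostdev' managed='yes'>
      <source>
        <!-- placeholder PCI address of the VF -->
        <address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x0'/>
      </source>
    </interface>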

- show chassis routing-engine: indicates <10% processor utilization

- show chassis routing-engine: indicates 62% of memory is used
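Those figures are from the routing-engine view; the flowd (dataplane) CPU is reported separately, so a check along these lines may be more telling, assuming the command is supported on this vSRX release (the prompt is illustrative):

    user@vsrx> show security monitoring fpc 0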

 

A coworker indicated that there may be a different way to map the interface on the vSRX side. After giving the vSRX the PCI interface, it instantiated its own ge-0/0/x interface. I don't see a way to instead map an xe-0/0/x interface. Did I miss a step?

 

Thank you for any insights, suggestions, or recommendations!

5 REPLIES

Re: 10G interfacing on the vSRX

02-08-2018 09:56 PM

Hi,

 

I think the vSRX is running as the medium flavour; please check https://www.juniper.net/documentation/en_US/vsrx/topics/concept/security-vsrx-kvm-understanding.html... (vSRX Scale Up Performance).

 

As per https://www.juniper.net/us/en/local/pdf/datasheets/1000489-en.pdf, 5.4 Gbps is the IMIX throughput for the vSRX medium flavour. To support PCI passthrough, the minimum requirement is the following (a rough libvirt sketch follows the list):

 

 

- 9 vCPUs
- 16 GB RAM
- PCI passthrough (Intel XL710)
- Junos OS Release 15.1X49-D90 and Junos OS Release 17.3R1
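As a rough sketch only (the domain name, PCI address, and omitted sections are placeholders), the matching pieces of a libvirt domain XML for that profile would look something like this:

    <domain type='kvm'>
      <name>vsrx</name>                     <!-- placeholder name -->
      <memory unit='GiB'>16</memory>        <!-- 16 GB RAM -->
      <vcpu placement='static'>9</vcpu>     <!-- 9 vCPUs -->
      <cpu mode='host-passthrough'/>
      <devices>
        <!-- whole XL710 port passed through; PCI address is a placeholder -->
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/>
          </source>
        </hostdev>
        <!-- disk, console, management NIC and other required sections omitted -->
      </devices>
    </domain>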

 

 

Thanks,

Vikas


Re: 10G interfacing on the vSRX

08-14-2019 02:30 PM

This seems to be one of the few posts that keep coming up about 10Gb networking on the vSRX, so I wanted to add my questions here. I can create a new thread if needed.

 

Are you required to map the NICs via SR-IOV in VMware in order to use more than 1Gbps and get the NICs to show as 10Gbps? The documentation states you are required to use SR-IOV when using 9 vCPUs to scale the throughput, but that doesn't necessarily mean it is required to get more than 1 Gbps.

 

Will the interfaces show as "xe" interfaces or stay as "ge"?

 

We don't need the full 10 Gbps of throughput, so we are still using a medium deployment with standard NIC mapping via VMXNET3 drivers, and the link speed is only showing 1 Gbps. The box contains an Intel X550 dual-port 10Gb adapter card.
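For context, that 1 Gbps figure is what the interface itself reports; a quick way to see it is something like this (the interface name is just an example):

    user@vsrx> show interfaces ge-0/0/0 | match Speed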

 

 


Re: 10G interfacing on the vSRX

08-15-2019 01:37 AM

Hi,

 

Are you required to map the NICs via SR-IOV in VMware in order to use more than 1Gbps and get the NICs to show as 10Gbps?

NO

 

Will the interfaces show as "xe" interfaces or stay as "ge"?

NO, the interface naming convention does not change to xe if the host NIC is 10G.

 

 

Regards,
Rahul

Re: 10G interfacing on the vSRX

08-15-2019 06:52 AM

Thanks, ScreenJun, for the reply; it is appreciated. Just to clarify: the link speed for the interface will still only show 1000 Mbps?

 

(attached image: vSRX-interface-10Gb.png)


Re: 10G interfacing on the vSRX

08-16-2019 05:31 AM

Hi,

 

Yes, the speed will still show as 1000 Mbps, because the interface naming is "ge".

Note: I agree that if it could read/learn the host NIC speed and switch the naming convention, it would be easier for the reader.

 

Further, to check the actual speed and throughput on the interface, I recommend relying on "monitor interface traffic".
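For example, "monitor interface traffic" gives real-time input/output rates for all interfaces, and a single interface can be watched on its own (the interface name below is illustrative):

    user@vsrx> monitor interface traffic
    user@vsrx> monitor interface ge-0/0/0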

 

 

Regards,
Rahul