I'm lab-testing to measure real-world performance across a vSRX, and I'm not seeing full throughput on a simple test, so I'm trying to figure out what I may be missing. This is meant to be a very simple performance demonstration: one server running one vSRX, with two interfaces connected to two Acterna test sets.
The vSRX is 17.3R1.10, running under KVM on Ubuntu 16.04.3 LTS.
I've allocated 16 GB of RAM and 8 vCPUs, not shared with anything else.
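In case it helps others reproduce this setup: vCPU placement can matter a lot at these rates, since crossing NUMA nodes between the vCPUs and the NIC costs throughput. A minimal sketch of pinning each vCPU to a dedicated host core with virsh — the domain name `vsrx`, the interface name `enp4s0f0`, and the core offset are assumptions for illustration, not my actual values:

```shell
# Check which NUMA node the X710 sits on, so the vCPUs can be pinned
# to cores on the same node (interface name is an assumption)
cat /sys/class/net/enp4s0f0/device/numa_node

# Pin each of the 8 vCPUs to its own dedicated host core;
# "vsrx" and the +2 core offset are placeholders for illustration
for vcpu in $(seq 0 7); do
  virsh vcpupin vsrx "$vcpu" $((vcpu + 2))
done
```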
I have an Intel 4-port 10G SFP NIC installed (Intel X710 chipset).
I've updated the kernel module, the adapter driver (3.45), and the firmware (6.01).
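For reference, the driver and firmware versions can be confirmed on the host with ethtool (the interface name `enp4s0f0` is an assumption; substitute your X710 port name):

```shell
# Report the loaded driver, driver version, and NVM/firmware version
# for one port of the X710 (interface name is an assumption)
ethtool -i enp4s0f0
# Fields of interest in the output: driver (i40e), version, firmware-version
```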
There are Intel-supported 10G SFPs installed.
I've mapped one SR-IOV interface on Port 1 of the NIC and another on Port 4. There are no other VMs on this server and no other demands on the 10Gb ports.

I'm running an Acterna BERT test between the two ports, with traffic in one direction only, and I'm seeing lost frames at 4.250 Gbps; I'm not sure what I can do to increase performance across the VM. If I start traffic in the other direction as well, I can cause frame loss on demand. The vSRX datasheet says it can support 100G, but there were no details on how to achieve that, so I'm not sure what limitation I'm hitting.
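For completeness, this is roughly how the VFs were created on the host — a sketch assuming the i40e PF names `enp4s0f0` and `enp4s0f3` for Ports 1 and 4 (your names will differ):

```shell
# Create one VF on each of the two ports via sysfs
echo 1 > /sys/class/net/enp4s0f0/device/sriov_numvfs
echo 1 > /sys/class/net/enp4s0f3/device/sriov_numvfs

# Turn off MAC spoof checking and enable trust on each VF so the guest
# can set its own MAC addresses (commonly needed for firewall/router VMs)
ip link set enp4s0f0 vf 0 spoofchk off trust on
ip link set enp4s0f3 vf 0 spoofchk off trust on
```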
show chassis routing-engine indicates 62% of memory is used.
A coworker indicated that there may be a different way to map the interface on the vSRX side. After I gave the vSRX the PCI interface, it instantiated its own ge-0/0/x interface; I don't see a way to map an xe-0/0/x interface instead. Did I miss a step?
Thank you for any insights, suggestions, or recommendations!
This seems to be one of the few posts that keeps coming up about 10Gb networking on the vSRX so I wanted to add my questions here. I can create a new thread if needed.
Are you required to map the NICs via SR-IOV in VMware in order to get more than 1 Gbps and have the NICs show as 10 Gbps? The documentation states that SR-IOV is required when using 9 vCPUs to scale throughput, but that doesn't necessarily mean it's required to exceed 1 Gbps.
Will the interfaces show as "xe" interfaces or stay as "ge"?
We don't need the full 10 Gbps of throughput, so we're still using a medium deployment with standard NIC mapping via VMXNET3 drivers, and the link speed shows only 1 Gbps. The box contains an Intel X550 dual-port 10Gb adapter card.