
Juniper vMX high CPU usage in lite mode on KVM compared to VMware?

‎02-08-2018 01:36 PM

Hi all,

 

I built my own lab on a VMware server running 10 vMX instances (version 17.2). All vFPs run in lite mode, and looking at the CPU resources they need, I noticed the following:

1 VCP = a very low number of MHz

1 VFP = less than 1000 MHz

 

Now I've tried to set up another server, this one running KVM. I've tested GNS3, EVE-NG, and a plain Ubuntu host with the vMX images and the original Juniper libvirt scripts for KVM, all running in lite mode, and all show the same issue.

The QEMU instance of the vFP always uses about 150% of a CPU, which is approximately 3 GHz. So lite mode uses more resources on KVM than on VMware; is this expected?

 

I already asked this in the EVE-NG forum (old post attached at the bottom) and thought the problem was nested virtualization, but now I've done the same on a bare-metal server and the result is identical.

 

I've read a lot about DPDK, and it looks like the vFP is not running in DPDK interrupt mode, but I haven't found a way to verify or change that. show chassis hardware confirms that I'm running in lite mode.
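
For anyone who wants to check the same thing, here is a minimal sketch of what I run on the KVM host (hypothetical helper; 'vfp-vmx1' is the libvirt domain name from my config posted further down in this thread, adjust to yours). If a single qemu vCPU thread sits near 100% while the others idle, the vFP is polling rather than interrupt-driven.

# Hypothetical host-side check; 'vfp-vmx1' is my libvirt domain name.
pid=$(pgrep -f 'qemu.*vfp-vmx1' | head -n1)
# List the qemu threads with per-thread CPU usage.
ps -L -p "$pid" -o tid,pcpu,comm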

 

Does anyone else have similar problems? What does your CPU usage look like?

 

Any help is appreciated. Kind regards,

viets

 

###OLD POST###

I'm certain this is not an EVE issue, but I need some data to compare, and I hope someone here knows enough about QEMU and the Juniper images.

I'm running VMware 6.5 on an Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz with 6 cores plus HT (12 logical cores) and 128 GB RAM.

Before I used EVE, I installed all virtual machines (CSR, Juniper, etc.) directly in VMware, which worked pretty well.

But now I'd like to migrate all VMs to EVE, just to gain a little more flexibility.

So I installed the EVE OVA and assigned all CPUs and half of the RAM to the VM. Then I tried to run just a single Juniper VCP and VFP instance. They run fine, but the CPU load of the VFP VM, once it connects to the VCP, is much higher than on VMware. Both have just a basic config, with no traffic load. I tried different configs, but nothing changed.

I know this is nested virtualization and can't be as fast as a single layer of virtualization, but I think the overhead is too high.

Before getting too technical: for those running Juniper VMs, how much CPU usage do you see?
With this high CPU usage I can't run all 10 of my Juniper routers, which worked fine on plain VMware.

I also tried GNS3 and see the same issue, so I believe it's not an EVE problem. I'm not certain whether it's something in the Juniper image or in QEMU on my setup. That's why I'm posting here, hoping someone can share their usage numbers.

For people with a technical background: please look at my test results below, and if you know anything that could help, please let me know.

Test results:

On plain VMware:
VCP = 23 MHz
VFP = 767 MHz

On EVE nested in VMware, running only 1 Juniper device (1 VCP / 1 VFP):
EVE uses 3.96 GHz

In top inside EVE:

PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                            
12896 root      20   0 5342356 2.246g  23044 S 147.3  7.1 975:14.47 qemu-system-x86 (vfp)                                                    
 6881 root      20   0 2949528 2.059g  22072 S   2.3  6.6  15:12.74 qemu-system-x86 (vcp)

After some investigation I found that on the vFP, software interrupts eat my CPU compared to VMware.
top from the vFP:

top - 17:28:38 up 10:10,  1 user,  load average: 1.09, 1.16, 1.14
Tasks: 116 total,   2 running, 113 sleeping,   0 stopped,   1 zombie
Cpu(s):  2.8%us,  7.8%sy,  0.0%ni, 88.8%id,  0.0%wa,  0.0%hi,  0.1%si,  0.4%st
Mem:   4046340k total,  2605984k used,  1440356k free,    18124k buffers
Swap:        0k total,        0k used,        0k free,   656556k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
 1151 root      20   0  709m 299m 161m R   30  7.6 211:57.70 J-UKERN            
 1138 root      20   0 33.8g  74m 4504 S   16  1.9 196:14.42 riot               
   18 root      -2   0     0    0    0 S   14  0.0 137:51.30 ksoftirqd/1        
   24 root      -2   0     0    0    0 S   11  0.0 113:41.79 ksoftirqd/2        
    3 root      -2   0     0    0    0 S    7  0.0  41:19.24 ksoftirqd/0        
   65 root     -51   0     0    0    0 S    7  0.0  21:43.27 irq/44-virtio1-    
  122 root     -51   0     0    0    0 S    3  0.0   0:00.75 irq/4-serial       
  889 root      20   0  9508 1224  920 S    0  0.0   0:03.88 fpc.core.push.s    
23772 root      20   0 19408 1148  860 R    0  0.0   0:00.10 top  



On VMware, the J-UKERN and riot processes show the same usage, but the ksoftirqd threads are not visible there.

cat /proc/interrupts 
           CPU0       CPU1       CPU2       
  0:        138          0          0   IO-APIC-edge      timer
  1:         10          0          0   IO-APIC-edge      i8042
  4:      15095       3341          0   IO-APIC-edge      serial
  6:          1          2          0   IO-APIC-edge      floppy
  7:          0          0          0   IO-APIC-edge      parport0
  8:         34          0          0   IO-APIC-edge      rtc0
  9:          0          0          0   IO-APIC-fasteoi   acpi
 12:        130          0          0   IO-APIC-edge      i8042
 14:       2170          1          0   IO-APIC-edge      ata_piix
 15:         74          8          0   IO-APIC-edge      ata_piix
 18:      37478          0          0   IO-APIC-fasteoi   ext
 19:    2826992     148627          0   IO-APIC-fasteoi   int
 40:          0          0          0   PCI-MSI-edge      igb_uio
 41:          0          0          0   PCI-MSI-edge      igb_uio
NMI:          0          0          0   Non-maskable interrupts
LOC:   76477645  482005833  266118529   Local timer interrupts
SPU:          0          0          0   Spurious interrupts
PMI:          0          0          0   Performance monitoring interrupts
IWI:          0          0          0   IRQ work interrupts
RTR:          0          0          0   APIC ICR read retries
RES:   82013955    2052760   96450382   Rescheduling interrupts
CAL:       1061        144         25   Function call interrupts
TLB:      72509      27116          3   TLB shootdowns
TRM:          0          0          0   Thermal event interrupts
THR:          0          0          0   Threshold APIC interrupts
MCE:          0          0          0   Machine check exceptions
MCP:        236        236        236   Machine check polls
ERR:          0
MIS:          0
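
(A hypothetical follow-up I could have run: sampling /proc/softirqs inside the vFP twice, a few seconds apart, shows which softirq class the ksoftirqd threads are burning time on, e.g. NET_RX or TIMER.)

# Sample the per-CPU softirq counters twice, 5 seconds apart, and diff them.
cat /proc/softirqs > /tmp/si1; sleep 5; cat /proc/softirqs > /tmp/si2
diff /tmp/si1 /tmp/si2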

Just for reference, the ps -fax output from EVE:

 /opt/unetlab/wrappers/qemu_wrapper -T 0 -D 1 -t vMX-VCP -F /opt/qemu-2.6.2/bin/qemu-system-x86_64 -d 0 -- -nographic -device e1000,netdev=net0,mac=50:00:00:01:00:00 -netdev tap,id=net0,ifname=vunl0_1_0,script=no -device e1000,netdev=net1,mac=50:00:00:01:00:01 -netdev tap,id=net1,ifname=vunl0_1_1,script=no -smp 1 -m 2048 -name vMX-VCP -uuid 468dd6d8-80b6-4d72-a48a-8b82e4683775 -hda hda.qcow2 -hdb hdb.qcow2 -hdc hdc.qcow2 -machine type=pc-1.0,accel=kvm -serial mon:stdio -nographic
 6879 ?        S      0:00  \_ /opt/unetlab/wrappers/qemu_wrapper -T 0 -D 1 -t vMX-VCP -F /opt/qemu-2.6.2/bin/qemu-system-x86_64 -d 0 -- -nographic -device e1000,netdev=net0,mac=50:00:00:01:00:00 -netdev tap,id=net0,ifname=vunl0_1_0,script=no -device e1000,netdev=net1,mac=50:00:00:01:00:01 -netdev tap,id=net1,ifname=vunl0_1_1,script=no -smp 1 -m 2048 -name vMX-VCP -uuid 468dd6d8-80b6-4d72-a48a-8b82e4683775 -hda hda.qcow2 -hdb hdb.qcow2 -hdc hdc.qcow2 -machine type=pc-1.0,accel=kvm -serial mon:stdio -nographic
 6880 ?        S      0:00      \_ sh -c /opt/qemu-2.6.2/bin/qemu-system-x86_64 -nographic -device e1000,netdev=net0,mac=50:00:00:01:00:00 -netdev tap,id=net0,ifname=vunl0_1_0,script=no -device e1000,netdev=net1,mac=50:00:00:01:00:01 -netdev tap,id=net1,ifname=vunl0_1_1,script=no -smp 1 -m 2048 -name vMX-VCP -uuid 468dd6d8-80b6-4d72-a48a-8b82e4683775 -hda hda.qcow2 -hdb hdb.qcow2 -hdc hdc.qcow2 -machine type=pc-1.0,accel=kvm -serial mon:stdio -nographic
 6881 ?        Sl    15:16          \_ /opt/qemu-2.6.2/bin/qemu-system-x86_64 -nographic -device e1000,netdev=net0,mac=50:00:00:01:00:00 -netdev tap,id=net0,ifname=vunl0_1_0,script=no -device e1000,netdev=net1,mac=50:00:00:01:00:01 -netdev tap,id=net1,ifname=vunl0_1_1,script=no -smp 1 -m 2048 -name vMX-VCP -uuid 468dd6d8-80b6-4d72-a48a-8b82e4683775 -hda hda.qcow2 -hdb hdb.qcow2 -hdc hdc.qcow2 -machine type=pc-1.0,accel=kvm -serial mon:stdio -nographic
12893 ?        S      0:00 /opt/unetlab/wrappers/qemu_wrapper -T 0 -D 2 -t vMX-VFP -F /opt/qemu-2.9.0/bin/qemu-system-x86_64 -d 0 -- -nographic -device virtio-net-pci,netdev=net0,mac=50:00:00:02:00:00 -netdev tap,id=net0,ifname=vunl0_2_0,script=no -device virtio-net-pci,netdev=net1,mac=50:00:00:02:00:01 -netdev tap,id=net1,ifname=vunl0_2_1,script=no -device virtio-net-pci,netdev=net2,mac=50:00:00:02:00:02 -netdev tap,id=net2,ifname=vunl0_2_2,script=no -device virtio-net-pci,netdev=net3,mac=50:00:00:02:00:03 -netdev tap,id=net3,ifname=vunl0_2_3,script=no -device virtio-net-pci,netdev=net4,mac=50:00:00:02:00:04 -netdev tap,id=net4,ifname=vunl0_2_4,script=no -device virtio-net-pci,netdev=net5,mac=50:00:00:02:00:05 -netdev tap,id=net5,ifname=vunl0_2_5,script=no -device virtio-net-pci,netdev=net6,mac=50:00:00:02:00:06 -netdev tap,id=net6,ifname=vunl0_2_6,script=no -device virtio-net-pci,netdev=net7,mac=50:00:00:02:00:07 -netdev tap,id=net7,ifname=vunl0_2_7,script=no -device virtio-net-pci,netdev=net8,mac=50:00:00:02:00:08 -netdev tap,id=net8,ifname=vunl0_2_8,script=no -device virtio-net-pci,netdev=net9,mac=50:00:00:02:00:09 -netdev tap,id=net9,ifname=vunl0_2_9,script=no -device virtio-net-pci,netdev=net10,mac=50:00:00:02:00:0a -netdev tap,id=net10,ifname=vunl0_2_10,script=no -device virtio-net-pci,netdev=net11,mac=50:00:00:02:00:0b -netdev tap,id=net11,ifname=vunl0_2_11,script=no -smp 3 -m 4096 -name vMX-VFP -uuid 95375231-3628-47d3-87dc-87e2fd3b5720 -hda hda.qcow2 -machine type=pc-1.0,accel=kvm -serial mon:stdio -nographic
12894 ?        S      0:00  \_ /opt/unetlab/wrappers/qemu_wrapper -T 0 -D 2 -t vMX-VFP -F /opt/qemu-2.9.0/bin/qemu-system-x86_64 -d 0 -- -nographic -device virtio-net-pci,netdev=net0,mac=50:00:00:02:00:00 -netdev tap,id=net0,ifname=vunl0_2_0,script=no -device virtio-net-pci,netdev=net1,mac=50:00:00:02:00:01 -netdev tap,id=net1,ifname=vunl0_2_1,script=no -device virtio-net-pci,netdev=net2,mac=50:00:00:02:00:02 -netdev tap,id=net2,ifname=vunl0_2_2,script=no -device virtio-net-pci,netdev=net3,mac=50:00:00:02:00:03 -netdev tap,id=net3,ifname=vunl0_2_3,script=no -device virtio-net-pci,netdev=net4,mac=50:00:00:02:00:04 -netdev tap,id=net4,ifname=vunl0_2_4,script=no -device virtio-net-pci,netdev=net5,mac=50:00:00:02:00:05 -netdev tap,id=net5,ifname=vunl0_2_5,script=no -device virtio-net-pci,netdev=net6,mac=50:00:00:02:00:06 -netdev tap,id=net6,ifname=vunl0_2_6,script=no -device virtio-net-pci,netdev=net7,mac=50:00:00:02:00:07 -netdev tap,id=net7,ifname=vunl0_2_7,script=no -device virtio-net-pci,netdev=net8,mac=50:00:00:02:00:08 -netdev tap,id=net8,ifname=vunl0_2_8,script=no -device virtio-net-pci,netdev=net9,mac=50:00:00:02:00:09 -netdev tap,id=net9,ifname=vunl0_2_9,script=no -device virtio-net-pci,netdev=net10,mac=50:00:00:02:00:0a -netdev tap,id=net10,ifname=vunl0_2_10,script=no -device virtio-net-pci,netdev=net11,mac=50:00:00:02:00:0b -netdev tap,id=net11,ifname=vunl0_2_11,script=no -smp 3 -m 4096 -name vMX-VFP -uuid 95375231-3628-47d3-87dc-87e2fd3b5720 -hda hda.qcow2 -machine type=pc-1.0,accel=kvm -serial mon:stdio -nographic
12895 ?        S      0:01      \_ sh -c /opt/qemu-2.9.0/bin/qemu-system-x86_64 -nographic -device virtio-net-pci,netdev=net0,mac=50:00:00:02:00:00 -netdev tap,id=net0,ifname=vunl0_2_0,script=no -device virtio-net-pci,netdev=net1,mac=50:00:00:02:00:01 -netdev tap,id=net1,ifname=vunl0_2_1,script=no -device virtio-net-pci,netdev=net2,mac=50:00:00:02:00:02 -netdev tap,id=net2,ifname=vunl0_2_2,script=no -device virtio-net-pci,netdev=net3,mac=50:00:00:02:00:03 -netdev tap,id=net3,ifname=vunl0_2_3,script=no -device virtio-net-pci,netdev=net4,mac=50:00:00:02:00:04 -netdev tap,id=net4,ifname=vunl0_2_4,script=no -device virtio-net-pci,netdev=net5,mac=50:00:00:02:00:05 -netdev tap,id=net5,ifname=vunl0_2_5,script=no -device virtio-net-pci,netdev=net6,mac=50:00:00:02:00:06 -netdev tap,id=net6,ifname=vunl0_2_6,script=no -device virtio-net-pci,netdev=net7,mac=50:00:00:02:00:07 -netdev tap,id=net7,ifname=vunl0_2_7,script=no -device virtio-net-pci,netdev=net8,mac=50:00:00:02:00:08 -netdev tap,id=net8,ifname=vunl0_2_8,script=no -device virtio-net-pci,netdev=net9,mac=50:00:00:02:00:09 -netdev tap,id=net9,ifname=vunl0_2_9,script=no -device virtio-net-pci,netdev=net10,mac=50:00:00:02:00:0a -netdev tap,id=net10,ifname=vunl0_2_10,script=no -device virtio-net-pci,netdev=net11,mac=50:00:00:02:00:0b -netdev tap,id=net11,ifname=vunl0_2_11,script=no -smp 3 -m 4096 -name vMX-VFP -uuid 95375231-3628-47d3-87dc-87e2fd3b5720 -hda hda.qcow2 -machine type=pc-1.0,accel=kvm -serial mon:stdio -nographic
12896 ?        Sl   979:44          \_ /opt/qemu-2.9.0/bin/qemu-system-x86_64 -nographic -device virtio-net-pci,netdev=net0,mac=50:00:00:02:00:00 -netdev tap,id=net0,ifname=vunl0_2_0,script=no -device virtio-net-pci,netdev=net1,mac=50:00:00:02:00:01 -netdev tap,id=net1,ifname=vunl0_2_1,script=no -device virtio-net-pci,netdev=net2,mac=50:00:00:02:00:02 -netdev tap,id=net2,ifname=vunl0_2_2,script=no -device virtio-net-pci,netdev=net3,mac=50:00:00:02:00:03 -netdev tap,id=net3,ifname=vunl0_2_3,script=no -device virtio-net-pci,netdev=net4,mac=50:00:00:02:00:04 -netdev tap,id=net4,ifname=vunl0_2_4,script=no -device virtio-net-pci,netdev=net5,mac=50:00:00:02:00:05 -netdev tap,id=net5,ifname=vunl0_2_5,script=no -device virtio-net-pci,netdev=net6,mac=50:00:00:02:00:06 -netdev tap,id=net6,ifname=vunl0_2_6,script=no -device virtio-net-pci,netdev=net7,mac=50:00:00:02:00:07 -netdev tap,id=net7,ifname=vunl0_2_7,script=no -device virtio-net-pci,netdev=net8,mac=50:00:00:02:00:08 -netdev tap,id=net8,ifname=vunl0_2_8,script=no -device virtio-net-pci,netdev=net9,mac=50:00:00:02:00:09 -netdev tap,id=net9,ifname=vunl0_2_9,script=no -device virtio-net-pci,netdev=net10,mac=50:00:00:02:00:0a -netdev tap,id=net10,ifname=vunl0_2_10,script=no -device virtio-net-pci,netdev=net11,mac=50:00:00:02:00:0b -netdev tap,id=net11,ifname=vunl0_2_11,script=no -smp 3 -m 4096 -name vMX-VFP -uuid 95375231-3628-47d3-87dc-87e2fd3b5720 -hda hda.qcow2 -machine type=pc-1.0,accel=kvm -serial mon:stdio -nographic
20845 ?        S<     0:00 cpulimit -q -p 12896 -l 150 -b

If you need any information, please let me know.

###\OLD POST###

7 REPLIES

Re: Juniper vMX high CPU usage in lite mode on KVM compared to VMware?

‎02-08-2018 07:30 PM

Hi,

 

How many vCPU(s) are allocated to your vFP?

For lab simulation use cases (lite mode), the minimum number of vCPUs is 4:

 

  • 1 for VCP
  • 3 for VFP

https://www.juniper.net/documentation/en_US/vmx17.2/topics/reference/general/vmx-hw-sw-minimums.html
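
As a sketch, that split maps onto the libvirt domain XML like this (standard libvirt elements; the counts are the documented minimums, not taken from your config):

<!-- VCP domain: 1 vCPU minimum -->
<vcpu placement='static'>1</vcpu>

<!-- VFP domain: 3 vCPUs minimum -->
<vcpu placement='static'>3</vcpu>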

 

Also, could you share the FPC configuration?

https://www.juniper.net/documentation/en_US/vmx17.2/topics/task/configuration/vmx-chassis-flow-cachi...

/Karan Dhanak

Re: Juniper vMX high CPU usage in lite mode on KVM compared to VMware?

‎02-09-2018 12:22 AM

Hi,

 

Yes, I have 3 vCPUs on the vFP. The problem is not that it doesn't run; it's that one vCPU sits in a constant polling loop and consumes a full core. It should use interrupt mode instead; at least it does on VMware, and I believe it should on KVM too.

 

See my config:

 

<domain type='kvm'>
  <name>vfp-vmx1</name>
  <uuid>3a91fe59-144a-4b83-80bb-a429d1d32377</uuid>
  <memory unit='KiB'>3000000</memory>
  <currentMemory unit='KiB'>3000000</currentMemory>
  <memoryBacking>
    <hugepages/>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>3</vcpu>
  <numatune>
    <memory mode='preferred' nodeset='0'/>
  </numatune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-1.7'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact' check='partial'>
    <model fallback='allow'>Haswell-noTSX</model>
    <topology sockets='3' cores='1' threads='1'/>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='directsync'/>
      <source file='/home/torben/vmx/build/vmx1/images/vFPC-20171213.img'/>
      <target dev='hda' bus='ide'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='usb' index='0' model='piix3-uhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='0a:00:dd:c0:de:10'/>
      <source bridge='br-ext'/>
      <target dev='vfp-ext-vmx1'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='52:54:00:2b:68:d8'/>
      <source bridge='br-int-vmx1'/>
      <target dev='vfp-int-vmx1'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
    <interface type='network'>
      <mac address='02:06:0a:0e:ff:f0'/>
      <source network='default'/>
      <target dev='ge-0.0.0-vmx1'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
    <serial type='tcp'>
      <source mode='bind' host='127.0.0.1' service='8602'/>
      <protocol type='telnet'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='tcp'>
      <source mode='bind' host='127.0.0.1' service='8602'/>
      <protocol type='telnet'/>
      <target type='serial' port='0'/>
    </console>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
    <sound model='ac97'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </sound>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </memballoon>
  </devices>
</domain>

The top output inside the VM:

 1237 root      20   0  610m 193m 154m S   20  6.7   0:46.13 J-UKERN
 1186 root      20   0 33.5g  73m 4036 S   15  2.6   0:40.42 riot

And top on the host:

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 6925 root      20   0 3977392  58312  18868 S 156.1  0.0   5:43.44 qemu-system-x86
 6046 root      20   0 3339000 0.997g  18972 S   2.3  0.8   1:43.60 qemu-system-x86

The first process is the vFP. It only burns CPU like this when there are data-path interfaces: without them it doesn't consume much, but as soon as the data ports are connected it uses 150% CPU.
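
(As a stopgap I can cap the process the same way EVE does with the cpulimit line visible in the ps output of my first post; a workaround sketch only, since the vCPU thread still polls inside its budget:)

# Cap the vFP qemu process at 150% CPU, running cpulimit in the background.
cpulimit -p "$(pgrep -f 'qemu.*vfp-vmx1' | head -n1)" -l 150 -b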

 

Kind regards,

viets

Re: Juniper vMX high CPU usage in lite mode on KVM compared to VMware?

‎02-09-2018 01:42 PM

 

Just to clarify: when you say that as soon as the data ports get connected it uses 150% CPU, is there any data traffic flowing on the ports?

Note: performance mode is the default. Have you manually set the FPC to lite-mode?

 

#set chassis fpc x lite-mode

#commit full
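
One way to verify the mode took effect, abridged from the show chassis hardware output later in this thread: in lite mode the Virtual FPC CPU is reported as RIOT-LITE (performance mode should report RIOT instead).

>show chassis hardware
...
FPC 0                                                    Virtual FPC
  CPU            Rev. 1.0 RIOT-LITE    BUILTIN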

 

Could you post

>show interface terse

>show chassis fpc

>show configuration chassis fpc x

>show version

/Karan Dhanak

Re: Juniper vMX high CPU usage in lite mode on KVM compared to VMware?

‎02-09-2018 01:50 PM

Hi,

 

It's now 17.4, but that doesn't change a thing. As I said, I know it needs to run in lite mode; in performance mode all cores go up to 100% (= 300% total). In lite mode it's "only" 150%. I also tested this on different hardware, always with the same result. Are you using KVM? What does the usage look like on your side?

 

There is no traffic running; right now the bridge isn't even connected to anything, but I have also tried it connected, and that didn't change a thing either.

 

root> show configuration 
## Last commit: 2018-02-03 23:39:00 UTC by root
version 17.4R1.16;
system {
    root-authentication {
        encrypted-password "$6$CmFOVL8L$Db//pgVNCQ57VVJNGU93VPQOc6Anc5SyCKBJsfD6TMMXItmXmD7hsIeuAU6a9YGOOlzNGR7GHJkmv0EkwjHfH1"; ## SECRET-DATA
    }
    syslog {
        user * {
            any emergency;
        }
        file messages {
            any notice;
            authorization info;
        }
        file interactive-commands {
            interactive-commands any;
        }
    }
}
chassis {
    fpc 0 {
        pic 0 {
            number-of-ports 1;          
        }
        lite-mode;
    }
}

root> show interfaces terse
Interface               Admin Link Proto    Local                 Remote
ge-0/0/0                up    up
lc-0/0/0                up    up
lc-0/0/0.32769          up    up   vpls    
pfe-0/0/0               up    up
pfe-0/0/0.16383         up    up   inet    
                                   inet6   
pfh-0/0/0               up    up
pfh-0/0/0.16383         up    up   inet    
pfh-0/0/0.16384         up    up   inet    
cbp0                    up    up
demux0                  up    up
dsc                     up    up
em1                     up    up
em1.0                   up    up   inet     10.0.0.4/8      
                                            128.0.0.1/2     
                                            128.0.0.4/2     
                                   inet6    fe80::5254:ff:fee8:7052/64
                                            fec0::a:0:0:4/64
                                   tnp      0x4             
esi                     up    up
fxp0                    up    up
gre                     up    up
ipip                    up    up        
irb                     up    up
jsrv                    up    up
jsrv.1                  up    up   inet     128.0.0.127/2   
lo0                     up    up
lo0.16384               up    up   inet     127.0.0.1           --> 0/0
lo0.16385               up    up   inet    
lsi                     up    up
mtun                    up    up
pimd                    up    up
pime                    up    up
pip0                    up    up
pp0                     up    up
rbeb                    up    up
tap                     up    up
vtep                    up    up
show chassis fpc 
                     Temp  CPU Utilization (%)   CPU Utilization (%)  Memory    Utilization (%)
Slot State            (C)  Total  Interrupt      1min   5min   15min  DRAM (MB) Heap     Buffer
  0  Online           Testing   5         0        2      0      0    511        29          0
  1  Empty           
  2  Empty           
  3  Empty           
  4  Empty           
  5  Empty           
  6  Empty           
  7  Empty           
  8  Empty           
  9  Empty           
 10  Empty           
 11  Empty   
 show version 
Model: vmx
Junos: 17.4R1.16
JUNOS OS Kernel 64-bit  [20171206.f4cad52_builder_stable_11]
JUNOS OS libs [20171206.f4cad52_builder_stable_11]
JUNOS OS runtime [20171206.f4cad52_builder_stable_11]
JUNOS OS time zone information [20171206.f4cad52_builder_stable_11]
JUNOS network stack and utilities [20171219.172921_builder_junos_174_r1]
JUNOS libs [20171219.172921_builder_junos_174_r1]
JUNOS OS libs compat32 [20171206.f4cad52_builder_stable_11]
JUNOS OS 32-bit compatibility [20171206.f4cad52_builder_stable_11]
JUNOS libs compat32 [20171219.172921_builder_junos_174_r1]
JUNOS runtime [20171219.172921_builder_junos_174_r1]
JUNOS Packet Forwarding Engine Simulation Package [20171219.172921_builder_junos_174_r1]
JUNOS py extensions [20171219.172921_builder_junos_174_r1]
JUNOS py base [20171219.172921_builder_junos_174_r1]
JUNOS OS vmguest [20171206.f4cad52_builder_stable_11]
JUNOS OS crypto [20171206.f4cad52_builder_stable_11]
JUNOS mx libs compat32 [20171219.172921_builder_junos_174_r1]
JUNOS mx runtime [20171219.172921_builder_junos_174_r1]
JUNOS common platform support [20171219.172921_builder_junos_174_r1]
JUNOS mtx network modules [20171219.172921_builder_junos_174_r1]
JUNOS modules [20171219.172921_builder_junos_174_r1]
JUNOS mx modules [20171219.172921_builder_junos_174_r1]
JUNOS mx libs [20171219.172921_builder_junos_174_r1]
JUNOS mtx Data Plane Crypto Support [20171219.172921_builder_junos_174_r1]
JUNOS daemons [20171219.172921_builder_junos_174_r1]
JUNOS mx daemons [20171219.172921_builder_junos_174_r1]
JUNOS Services URL Filter package [20171219.172921_builder_junos_174_r1]
JUNOS Services TLB Service PIC package [20171219.172921_builder_junos_174_r1]
JUNOS Services SSL [20171219.172921_builder_junos_174_r1]
JUNOS Services SOFTWIRE [20171219.172921_builder_junos_174_r1]
JUNOS Services Stateful Firewall [20171219.172921_builder_junos_174_r1]
JUNOS Services RPM [20171219.172921_builder_junos_174_r1]
JUNOS Services PTSP Container package [20171219.172921_builder_junos_174_r1]
JUNOS Services PCEF package [20171219.172921_builder_junos_174_r1]
JUNOS Services NAT [20171219.172921_builder_junos_174_r1]
JUNOS Services Mobile Subscriber Service Container package [20171219.172921_builder_junos_174_r1]
JUNOS Services MobileNext Software package [20171219.172921_builder_junos_174_r1]
JUNOS Services Logging Report Framework package [20171219.172921_builder_junos_174_r1]
JUNOS Services LL-PDF Container package [20171219.172921_builder_junos_174_r1]
JUNOS Services Jflow Container package [20171219.172921_builder_junos_174_r1]
JUNOS Services Deep Packet Inspection package [20171219.172921_builder_junos_174_r1]
JUNOS Services IPSec [20171219.172921_builder_junos_174_r1]
JUNOS Services IDS [20171219.172921_builder_junos_174_r1]
JUNOS IDP Services [20171219.172921_builder_junos_174_r1]
JUNOS Services HTTP Content Management package [20171219.172921_builder_junos_174_r1]
JUNOS Services Crypto [20171219.172921_builder_junos_174_r1]
JUNOS Services Captive Portal and Content Delivery Container package [20171219.172921_builder_junos_174_r1]
JUNOS Services COS [20171219.172921_builder_junos_174_r1]
JUNOS AppId Services [20171219.172921_builder_junos_174_r1]
JUNOS Services Application Level Gateways [20171219.172921_builder_junos_174_r1]
JUNOS Services AACL Container package [20171219.172921_builder_junos_174_r1]
JUNOS Extension Toolkit [20171219.172921_builder_junos_174_r1]
JUNOS jfirmware [20171219.172921_builder_junos_174_r1]
JUNOS Online Documentation [20171219.172921_builder_junos_174_r1]
JUNOS jail runtime [20171206.f4cad52_builder_stable_11]
root> show chassis hardware 
Hardware inventory:
Item             Version  Part number  Serial number     Description
Chassis                                VM5A764729C8      VMX
Midplane        
Routing Engine 0                                         RE-VMX
CB 0                                                     VMX SCB
FPC 0                                                    Virtual FPC
  CPU            Rev. 1.0 RIOT-LITE    BUILTIN          
  MIC 0                                                  Virtual
    PIC 0                 BUILTIN      BUILTIN           Virtual

 

Kind regards,

viets


Re: Juniper vMX high CPU usage in lite mode on KVM compared to VMware?

‎07-25-2018 09:19 AM

For what it's worth, I'm seeing the same high CPU issue on a nested installation.

 

Dell R610 (2 x X5690 CPUs) 32GB RAM

ESXi 6.0

EVE-NG 2.0.3-92

vMX 16.1R4.7 (demo license)

- set chassis fpc 0 lite-mode

- everything seems to work, but when the vMX-VFP comes up it hogs 200-300% CPU with no packet forwarding (interfaces deactivated):

 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
27748 root      20   0 5257336 1.231g  21496 S 259.3  7.9  43:34.48 qemu-system-x86
 3758 root      20   0 2936456 1.603g  21020 S   4.0 10.2   7:02.56 qemu-system-x86

Re: Juniper vMX high CPU usage in lite mode on KVM compared to VMware?

‎07-25-2018 09:34 AM
Hi,

Yes, I gave up.

It looks like normal behaviour on KVM. I think there is no interrupt mode implemented in the virtio driver in the vFP.
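
(For completeness, a hypothetical way to check whether the DPDK driver in the vFP ever registered interrupts: look at the igb_uio lines in the guest's /proc/interrupts, as in the dump in my first post; counters stuck at 0 there would fit pure polling.)

# Inside the vFP guest: show the DPDK (igb_uio) interrupt counters.
grep igb_uio /proc/interrupts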

I now use VMware for my topologies; they work fine.

Kind regards,
Viets

Re: Juniper vMX high CPU usage in lite mode on KVM compared to VMware?

‎08-08-2018 06:52 AM

Maybe we should ask Engineering to have a look at this?

I mean, Juniper is pushing Tungsten and Contrail, so everything runs on KVM.

--
Best Regards

Christian Scholz
JNCIE-SEC :: Juniper Networks Ambassador :: Telonic (Germany)
https://www.jncie.eu