vMX

vFPC fails to launch - console still in Linux hypervisor

[ Edited ]
‎06-23-2017 04:47 PM

After figuring out the subject of the last post, vMX (version 17.2R1-13) is launching, but the vFPC doesn't appear to be able to boot its image. My login prompt is for the Wind River Linux hypervisor, and the console messages below seem to suggest that the FPC init script is failing to start. The vFPC image file is "vFPC-20170523.img" - apparently a very new release.

 

Console messages are below:

vfp-vmx1 login: sleep 1 sec
Mount partition /etc on /dev/sda5
kjournald starting.  Commit interval 5 seconds
EXT3-fs (sda5): using internal journal
EXT3-fs (sda5): recovery complete
EXT3-fs (sda5): mounted filesystem with ordered data mode
Mount partition /var on /dev/sda6
umount: /var: not mounted
kjournald starting.  Commit interval 5 seconds
EXT3-fs (sda6): using internal journal
EXT3-fs (sda6): recovery complete
EXT3-fs (sda6): mounted filesystem with ordered data mode
Setting up a firewall....
ip_tables: (C) 2000-2006 Netfilter Core Team
FAT-fs (sda1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
/home/pfe/riot/phase2_launch.sh: line 209: /var/jnx/card/local/vm_type: No such file or directory
/home/pfe/riot/phase2_launch.sh: line 212: /var/jnx/card/local/type: No such file or directory
0:2345:respawn:/sbin/mingetty tty0
tty0
kernel.core_pattern = /var/crash/core.%e.%t.%p.gz
/var/jnx/card/local/type file not found
i2cid is: 2986 /var/jnx/card/local/slot file not found
slot is: 0 parent: 1045, child: 1046
slot is: 0 Spawning mpcsd with PID 1046
grep: /etc/riot/runtime.conf: No such file or directory
fpc.core.push.sh: no process found
mpc : 
tnp_hello_tx: no process found
cat: can't open '/var/jnx/card/local/type': No such file or directory
tx_hello_tx: Failed to get card type defaulting to 0
cat: can't open '/var/jnx/card/local/slot': No such file or directory
tx_hello_tx: Failed to get card slot defaulting to 0
tnp_hello_tx: Board type 0
tnp_hello_tx: Board slot 0
tnp_hello_tx: found interface int
Linux vfp-vmx1 3.10.55-ltsi-rt55-WR6.0.0.13_preempt-rt #1 SMP PREEMPT RT Mon May 22 18:17:09 PDT 2017 x86_64 GNU/Linux
igb_uio: Use MSIX interrupt by default
cat: can't open '/var/jnx/card/local/type': No such file or directory
igb_uio 0000:00:05.0: uio device registered with irq 2e
igb_uio 0000:00:06.0: uio device registered with irq 2f
igb_uio 0000:00:07.0: uio device registered with irq 30
igb_uio 0000:00:08.0: uio device registered with irq 31
OK
0x0BAA
cat: can't open '/var/jnx/card/local/type': No such file or directory
logger: invalid option -- 'c'

Usage:
 logger [options] [message]

Options:
 -T, --tcp             use TCP only
 -d, --udp             use UDP only
 -i, --id              log the process ID too
 -f, --file <file>     log the contents of this file
 -h, --help            display this help text and exit
 -n, --server <name>   write to this remote syslog server
 -P, --port <number>   use this UDP port
 -p, --priority <prio> mark given message with this priority
 -s, --stderr          output message to standard error as well
 -t, --tag <tag>       mark every line with this tag
 -u, --socket <socket> write to this Unix socket
 -V, --version         output version information and exit

EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
config file /etc/riot/init.conf
config file /etc/riot/runtime.conf
Disabled QOS 
Unable to open config file /etc/riot/shadow
rte_eth_dev_config_restore: port 0: MAC address array not supported
rte_eth_dev_config_restore: port 1: MAC address array not supported
rte_eth_dev_config_restore: port 2: MAC address array not supported
rte_eth_dev_config_restore: port 3: MAC address array not supported
rte_eth_dev_set_mc_addr_list: Function not supported
rte_eth_dev_set_mc_addr_list: Function not supported
rte_eth_dev_set_mc_addr_list: Function not supported
rte_eth_dev_set_mc_addr_list: Function not supported

Any insight on how to start troubleshooting this? I've verified sufficient RAM and CPUs (4096 MB is the minimum, I'm told, and I've assigned four CPU cores). Any other pointers?

9 REPLIES
vMX

Re: vFPC fails to launch - console still in Linux hypervisor

‎06-23-2017 04:53 PM

Can you post your vmx.conf file?

vMX

Re: vFPC fails to launch - console still in Linux hypervisor

[ Edited ]
‎06-23-2017 07:52 PM

Here it is:

#Configuration on the host side - management interface, VM images etc.
HOST:
    identifier                : vmx1   # Maximum 6 characters
    host-management-interface : br-ext
    routing-engine-image      : "/opt/kvm/media/vmx-lab1/images/junos-vmx-x86-64-17.2R1.13.qcow2"
    routing-engine-hdd        : "/opt/kvm/media/vmx-lab1/images/vmxhdd.img"
    forwarding-engine-image   : "/opt/kvm/media/vmx-lab1/images/vFPC-20170523.img"

---
#External bridge configuration
BRIDGES:
    - type  : external
      name  : br-ext                  # Max 10 characters

--- 
#vRE VM parameters
CONTROL_PLANE:
    vcpus       : 1
    memory-mb   : 1024 
    console_port: 8601

    interfaces  :
      - type      : static
        ipaddr    : 192.168.122.10
        macaddr   : "0A:00:DD:C0:DE:0E"

--- 
#vPFE VM parameters
FORWARDING_PLANE:
    memory-mb   : 4096
    vcpus       : 4
    console_port: 8602
    device-type : virtio 

    interfaces  :
      - type      : static
        ipaddr    : 192.168.122.11
        macaddr   : "0A:00:DD:C0:DE:10"

--- 
#Interfaces
JUNOS_DEVICES:
   - interface            : ge-0/0/0
     mac-address          : "02:06:0A:0E:FF:F0"
     description          : "ge-0/0/0 interface"
   
   - interface            : ge-0/0/1
     mac-address          : "02:06:0A:0E:FF:F1"
     description          : "ge-0/0/1 interface"
   
   - interface            : ge-0/0/2
     mac-address          : "02:06:0A:0E:FF:F2"
     description          : "ge-0/0/2 interface"
   
   - interface            : ge-0/0/3
     mac-address          : "02:06:0A:0E:FF:F3"
     description          : "ge-0/0/3 interface"
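
As a quick sanity check against the minimums mentioned in this thread (4096 MB and 4 vCPUs for the vFP - values taken from the thread, not from official documentation), a small illustrative Python snippet can grep the FORWARDING_PLANE stanza of a vmx.conf before launching:

```python
import re

# Excerpt of the FORWARDING_PLANE stanza from the vmx.conf above.
VMX_CONF = """
FORWARDING_PLANE:
    memory-mb   : 4096
    vcpus       : 4
    console_port: 8602
    device-type : virtio
"""

def check_vfp_resources(conf, min_mem_mb=4096, min_vcpus=4):
    """Return True if the vFP stanza meets the assumed minimums."""
    mem = int(re.search(r"memory-mb\s*:\s*(\d+)", conf).group(1))
    vcpus = int(re.search(r"vcpus\s*:\s*(\d+)", conf).group(1))
    return mem >= min_mem_mb and vcpus >= min_vcpus

print(check_vfp_resources(VMX_CONF))  # True
```

This is only a pre-flight sketch; the real minimums depend on the vMX release and on whether lite mode or performance mode is in use.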

I can confirm that even in this state, I can ping the vFPC image's IP address from the vCP:

root@vfp-vmx1:~# ifconfig int
int       Link encap:Ethernet  HWaddr 52:54:00:e0:1f:d7  
          inet addr:128.0.0.16  Bcast:128.0.255.255  Mask:255.255.0.0
          inet6 addr: fe80::5054:ff:fee0:1fd7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:299738 errors:0 dropped:16476 overruns:0 frame:0
          TX packets:261612 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:86166494 (82.1 MiB)  TX bytes:26463845 (25.2 MiB)

root@vfp-vmx1:~# 
----
root@vmx1> show arp interface em1.0 
MAC Address       Address         Name                      Interface               Flags
52:54:00:e0:1f:d7 128.0.0.16      fpc0                      em1.0                   none

root@vmx1> ping 128.0.0.16 routing-instance __juniper_private1__ count 5 
PING 128.0.0.16 (128.0.0.16): 56 data bytes
64 bytes from 128.0.0.16: icmp_seq=0 ttl=64 time=1.249 ms
64 bytes from 128.0.0.16: icmp_seq=1 ttl=64 time=0.253 ms
64 bytes from 128.0.0.16: icmp_seq=2 ttl=64 time=0.297 ms
64 bytes from 128.0.0.16: icmp_seq=3 ttl=64 time=0.244 ms
64 bytes from 128.0.0.16: icmp_seq=4 ttl=64 time=0.281 ms

--- 128.0.0.16 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.244/0.465/1.249/0.393 ms

root@vmx1> 

vMX
Solution
Accepted by topic author cawoodfield
‎06-24-2017 10:34 AM

Re: vFPC fails to launch - console still in Linux hypervisor

‎06-24-2017 12:08 AM
vMX

Re: vFPC fails to launch - console still in Linux hypervisor

[ Edited ]
‎06-24-2017 09:46 AM

Just added that command and restarted the vFP image; still getting the same error. I had previously attempted to launch it with an 8192 MB allocation (which would not have required the lite-mode config switch) with the same results.
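
For reference, the lite-mode switch mentioned above is a chassis knob set on the vCP (Junos) side; the exact syntax can vary by release, but it is typically along these lines, followed by a restart of the vFP VM:

```
set chassis fpc 0 lite-mode
commit
```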

 

The key error messages I'm noticing suggest possible issues with the build. Do these show up in a successful boot?

 

/home/pfe/riot/phase2_launch.sh: line 209: /var/jnx/card/local/vm_type: No such file or directory
/home/pfe/riot/phase2_launch.sh: line 212: /var/jnx/card/local/type: No such file or directory
...
INIT: cannot execute "/usr/share/pfe/mpcsd.py"
INIT: cannot execute "/usr/share/pfe/mpcsd.py"
INIT: cannot execute "/usr/share/pfe/mpcsd.py"
INIT: cannot execute "/usr/share/pfe/mpcsd.py"
INIT: cannot execute "/usr/share/pfe/mpcsd.py"
INIT: cannot execute "/usr/share/pfe/mpcsd.py"
INIT: cannot execute "/usr/share/pfe/mpcsd.py"
INIT: cannot execute "/usr/share/pfe/mpcsd.py"
INIT: cannot execute "/usr/share/pfe/mpcsd.py"
INIT: cannot execute "/usr/share/pfe/mpcsd.py"

Looking through the hypervisor filesystem, it appears that the /var/jnx directory is empty. Is this supposed to be a mount point?

root@vfp-vmx1:/var/jnx# ls -al
drwxr-xr-x    2 root     root          4096 Jun 23 23:39 .
drwxr-xr-x   12 root     root          4096 Jun 23 23:39 ..
root@vfp-vmx1:/var/jnx# 
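
To answer the mount-point question directly on the box, util-linux's `mountpoint` utility can tell you whether a directory such as /var/jnx is actually a mounted filesystem (demonstrated here on a throwaway directory, since /var/jnx only exists inside the vFP):

```shell
# Check whether a directory is a mount point using util-linux `mountpoint`.
# Substitute /var/jnx when running this inside the vFP image.
d=$(mktemp -d)
if mountpoint -q "$d"; then
    echo "mounted"
else
    echo "not a mount point"
fi
rmdir "$d"
```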
vMX

Re: vFPC fails to launch - console still in Linux hypervisor

‎06-24-2017 10:34 AM

Posted too soon - the vFPC is now working and I can route between two vMX instances. Appreciate the help!

vMX

Re: vFPC fails to launch - console still in Linux hypervisor

‎06-24-2017 11:08 AM

Replying to myself, and making this a "teachable moment" - my initial diagnosis of the issue came from my misunderstanding of how the vFP operates. I assumed that, like the vCP, the image would run a hypervisor with a BSD-based image running as a VM. That's incorrect: on the vFP, the switch application (apparently called J-UKERN) is a native Linux binary. I'll note this makes a *lot* of sense in retrospect, particularly where forwarding performance is at a premium.

 

Side question - the dual-image architecture seems to lend itself to running the two images on separate physical devices. Has anyone scaled up a vMX in this fashion? Is running multiple vFPs on separate hardware units possible?

vMX

Re: vFPC fails to launch - console still in Linux hypervisor

‎06-26-2017 10:23 PM

Hi cawoodfield,

Side question - the dual-image architecture seems to lend itself to running the two images on separate physical devices. Has anyone scaled up a vMX in this fashion? Is running multiple vFPs on separate hardware units possible?

 

[python] - I would say this will add more dependencies and weaken stability, depending on your needs. Technically, though, I feel it should be possible. Suppose you have installed vMX in a vCenter: there is a real possibility of both VMs being hosted on different ESXi servers.

 

 

-Python JNCIE 3X [SP|DC|ENT] JNCIP-SEC JNCDS 3X [ WAN | DC|SEC] JNCIS-Cloud JNCIS-DevOps CCIP ITIL
vMX

Re: vFPC fails to launch - console still in Linux hypervisor

‎11-11-2017 09:51 AM

Hi,

Wondering how you got around that "INIT: cannot execute /usr/share/pfe/mpcsd.py" issue.

Please help, I am stuck too.

Thanks,

dev

skd.dks@gmail.com

vMX

Re: vFPC fails to launch - console still in Linux hypervisor

‎08-11-2018 10:51 AM

Hi, I am having the same problem on Ubuntu 18.04.

What is notable is that I have two other vMX instances running on the same hypervisor with no issues.

 

I found the following in /var/log/messages

Aug 11 16:45:11 localhost daemon.crit riot[1299]: PANIC in app_init_nics():
Aug 11 16:45:11 localhost daemon.crit riot[1299]: Cannot start port 0 (-5)
Aug 11 16:45:11 localhost daemon.err riot[1299]: 6: [/home/pfe/riot/build/app/riot() [0x408355]]
Aug 11 16:45:11 localhost daemon.err riot[1299]: 5: [/lib64/libc.so.6(__libc_start_main+0xf5) [0x3364c21bf5]]
Aug 11 16:45:11 localhost daemon.err riot[1299]: 4: [/home/pfe/riot/build/app/riot() [0x408294]]
Aug 11 16:45:11 localhost daemon.err riot[1299]: 3: [/home/pfe/riot/build/app/riot() [0x181f1bd]]
Aug 11 16:45:11 localhost daemon.err riot[1299]: 2: [/home/pfe/riot/build/app/riot() [0x405873]]
Aug 11 16:45:11 localhost daemon.err riot[1299]: 1: [/home/pfe/riot/build/app/riot() [0x4312a8]]

I tried setting the FPC in lite mode, with no results (the other two vMX instances are running without lite-mode).

I searched for information on that riot panic but found nothing.

vFPCs are configured with 8 GB RAM and 4 vCPUs each.