
Deploy Juniper vMX via Docker Compose

By mwiget posted 07-27-2018 07:25

  

Being able to download and run Juniper vMX on KVM and ESXi has really helped me learn more about networking, telemetry and building automation solutions. But the software dependencies, combined with manually editing and launching shell scripts per vMX instance, felt a bit outdated to me.


Why can't I just get a Docker container with vMX, deployed via Docker, Docker Compose or Kubernetes? This would allow me to launch a fully virtualized topology with routers and endpoints with a single docker-compose command, try something out, redeploy with different Junos versions and even share the recipe with other users via public Git repos.

 

 

Well, this is now possible via the pre-built Docker container juniper/openjnpr-container-vmx, capable of launching any vMX qcow2 image from release 17.1 onwards. You do need to "bring your own" licensed Juniper vMX KVM bundle from the official Juniper download site https://www.juniper.net/support/downloads/?p=vmx.

 

Readers familiar with the vMX architecture of two virtual machines, one for the control plane (VCP) and another for the forwarding plane (VFP), might have spotted an apparent error in the previous paragraph: how can one deploy a fully functional vMX by providing only the control plane image? Well, this is actually possible, because the forwarding engine is downloaded at runtime from the control plane into the VFP, or in the container age, straight into the container!

Use cases

Before I dive into the nitty-gritty details of how the container actually works and how to use it, I'd like to point to a few use cases I built and published recently:

 

They all share the use of the pre-built and published Docker container juniper/openjnpr-container-vmx, combined with configuration files and containerized helper applications, orchestrated via a docker-compose.yml file (and the externally sourced vMX control plane qcow2 images).

 

The only host software package dependencies left for you to install are git, make, docker.io and docker-compose. Which Linux distribution, you might ask? Well, I've used Ubuntu up to bionic (18.04) and Debian 9, but any other recent distribution should work equally well.

Requirements

  • Any recent Linux bare-metal server with at least 4GB of RAM and a CPU built in the last 4 years (Ivy Bridge or newer) with KVM acceleration support. Running this within a virtual machine, however, isn't an option, not least because of the dismal overall performance of nested VMs.
  • Provisioned Memory Hugepages (1GB per vMX instance)
  • junos-vmx-x86-64-17.3R1.10.qcow2 image, extracted from the vmx-bundle-*tgz file available at https://www.juniper.net/support/downloads/?p=vmx or as an eval download from https://www.juniper.net/us/en/dm/free-vmx-trial/ (registration required)
  • Docker engine, Docker compose, git and make installed

That's it. No need to install qemu-kvm, virsh or anything else, but don't worry if you happen to have these installed; they won't interfere with the containers. Everything the vMX needs to run is provided by the Docker container. Neat, isn't it?
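
If you want to verify these prerequisites up front, a few quick checks can save time (a sketch; the exact package names follow in the install step below):

$ egrep -c '(vmx|svm)' /proc/cpuinfo      # > 0 means hardware virtualization is available
$ ls -l /dev/kvm                          # the KVM device should exist
$ grep HugePages_Total /proc/meminfo      # hugepages get provisioned in the next section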

Installation

Install or update your Linux distribution of choice on the server, then install the required software (example shown for Ubuntu bionic, adjust accordingly):

 

$ sudo apt-get update
$ sudo apt-get install make git curl docker.io docker-compose
$ sudo usermod -aG docker $USER

The last command adds your user to the docker group, allowing you to run docker commands without sudo.
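
Note that the new group membership only takes effect for new login sessions. Either log out and back in, or start a subshell with the group applied, then verify:

$ newgrp docker        # or simply log out and back in
$ docker ps            # should now work without sudo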

Enable hugepages. Count 1GB per vMX instance, but make sure you leave at least 50% of memory for all other applications. If your host has 16GB, you can e.g. dedicate 8GB to hugepages. This can be done with page sizes of 2MB or 1GB and is best configured via kernel options in /etc/default/grub (the example shows 8 x 1GB):

 

$ sudo grep hugepa /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="nomodeset consoleblank=0 default_hugepagesz=1G hugepagesz=1G hugepages=8"
$ sudo update-grub
$ sudo reboot

Once the system is back, check the allocated hugepages:

 

$ cat /proc/meminfo |grep Huge
AnonHugePages:   8372224 kB
ShmemHugePages:        0 kB
HugePages_Total:       8
HugePages_Free:        8
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
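
If you'd rather experiment without a reboot, default-size (typically 2MB) hugepages can usually be allocated at runtime as well; a sketch, keeping in mind that this doesn't persist across reboots and may not fully succeed on a fragmented system:

$ sudo sysctl vm.nr_hugepages=4096      # 4096 x 2MB = 8GB of hugepages
$ grep Huge /proc/meminfo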

Download the latest vmx-bundle-x.y.tgz (18.2R1 at the time I published this post) from https://www.juniper.net/support/downloads/?p=vmx and extract the qcow2 image. You can pick any version from 17.1 onwards, including service releases:

 

$ tar zxf vmx-bundle-18.2R1.9.tgz
$ mv vmx/images/junos-vmx-x86-64-18.2R1.9.qcow2 .
$ rm -rf vmx
$ ls -l junos-vmx-x86-64-18.2R1.9.qcow2
-rw-r--r-- 1 mwiget mwiget 1334771712 Jul 20 21:51 junos-vmx-x86-64-18.2R1.9.qcow2

Keep the qcow2 image and get rid of the rest of the extracted tar file. The qcow2 file will need to be copied into your project working directory, from where the container instances will be launched.

Build your own lab topology

Ready to get going? Create an empty directory and copy the junos-vmx qcow2 image into it:

 

$ mkdir my-simple-vmx-lab
$ cd my-simple-vmx-lab
$ cp ../junos-vmx-x86-64-18.2R1.9.qcow2 .

Download the evaluation license key that activates your 60-day, unlimited bandwidth vMX trial. The same key can be used multiple times and the activation period is per running instance:

 

$ curl -o license-eval.txt https://www.juniper.net/us/en/dm/free-vmx-trial/E421992502.txt

Copy your SSH public key to the current directory:

 

$ cp ~/.ssh/id_rsa.pub .

If you don't have an SSH public/private keypair, create one:

 

$ ssh-keygen -t rsa -N "" 

Create your docker-compose.yml for your topology, e.g. using the following example:

 

$ cat docker-compose.yml
version: "3"

services:

  vmx1:
    image: juniper/openjnpr-container-vmx
    privileged: true
    tty: true
    stdin_open: true
    ports:
      - "22"
      - "830"
    environment:
      - ID=vmx1
      - LICENSE=license-eval.txt
      - IMAGE=junos-vmx-x86-64-18.2R1.9.qcow2
      - PUBLICKEY=id_rsa.pub
      - CONFIG=vmx1.conf
    volumes:
      - $PWD:/u:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      mgmt:
      net-a:
      net-b:
      net-c:

  vmx2:
    image: juniper/openjnpr-container-vmx
    privileged: true
    tty: true
    stdin_open: true
    ports:
      - "22"
      - "830"
    environment:
      - ID=vmx2
      - LICENSE=license-eval.txt
      - IMAGE=junos-vmx-x86-64-18.2R1.9.qcow2
      - PUBLICKEY=id_rsa.pub
      - CONFIG=vmx2.conf
    volumes:
      - $PWD:/u:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      mgmt:
      net-a:
      net-b:
      net-c:

networks:
  mgmt:
  net-a:
  net-b:
  net-c:

It defines two vMX instances, connected to three virtual Docker networks that are also created by the same docker-compose file.

If you are using a Junos release older than 18.2, then add the tag ‘trusty’ to the container image in the docker-compose.yml file, e.g.

 

 image: juniper/openjnpr-container-vmx:trusty

There is one quirk with docker-compose: the order in which networks are attached to each instance is unpredictable (see https://github.com/docker/compose/issues/4645 for details). The vMX container works around this by sorting the virtual networks at runtime into alphabetical order. This requires access to docker.sock via a volume mount and also allows the Junos configuration to be augmented with the Docker virtual network names.
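
Once an instance is up (see the launch step below), you can see for yourself which networks got attached and with which MAC addresses; a hedged example using docker inspect (python3 is assumed here only for pretty-printing):

$ docker inspect -f '{{json .NetworkSettings.Networks}}' mysimplevmxlab_vmx1_1 | python3 -m json.tool

As the launch log later shows (fix_network_order.sh), the container matches these MAC addresses against its own interfaces and reorders them accordingly.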

 

The '$PWD:/u:ro' volume attached to each container exposes the current host working directory read-only to the container, giving it access to the vMX qcow2 image, the (optional) Junos configuration file, the license key and your SSH public key.

Adjust the environment variable IMAGE in the docker-compose file to match your qcow2 image. You should now have the following files in your working directory:

 

$ ls
docker-compose.yml  id_rsa.pub  junos-vmx-x86-64-18.2R1.9.qcow2  license-eval.txt

Not much, right ;-)? But it's sufficient to get going. You might wonder when the actual openjnpr-container-vmx container gets downloaded: docker-compose pulls it automatically at launch. See the next step.
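
If you prefer to fetch the container image ahead of time, e.g. on a slow link, you can pull it explicitly; this step is optional:

$ docker-compose pull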

Launch your topology

$ docker-compose up -d
Creating network "mysimplevmxlab_net-c" with the default driver
Creating network "mysimplevmxlab_net-b" with the default driver
Creating network "mysimplevmxlab_net-a" with the default driver
Creating network "mysimplevmxlab_mgmt" with the default driver
Creating mysimplevmxlab_vmx2_1 ... done
Creating mysimplevmxlab_vmx1_1 ... done

The ‘-d’ option launches the instances in the background.
Verify the successful launch of the containers:

 

$ docker-compose ps
        Name             Command     State                       Ports
------------------------------------------------------------------------------------------
mysimplevmxlab_vmx1_1   /launch.sh   Up      0.0.0.0:32903->22/tcp, 0.0.0.0:32902->830/tcp
mysimplevmxlab_vmx2_1   /launch.sh   Up      0.0.0.0:32901->22/tcp, 0.0.0.0:32900->830/tcp

That doesn't mean they are fully up and running with active forwarding just yet, but they are booting. You can now either wait a few minutes, or observe the launch of Junos and the forwarding engine via the containers' console logs:

 

$ docker logs -f mysimplevmxlab_vmx1_1
Juniper Networks vMX Docker Light Container

Linux 8efcff791153 4.15.0-29-generic #31-Ubuntu SMP Tue Jul 17 15:39:52 UTC 2018 x86_64

CPU Model ................................ Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz
CPU affinity of this container ........... 0-7
KVM hardware virtualization extension .... yes
Total System Memory ...................... 62 GB
Free Hugepages ........................... yes (8 x 1024 MB = 8192 MB)
Check for container privileged mode ...... yes
Check for sudo/root privileges ........... yes
Loop mount filesystem capability ......... yes
docker access ............................ CONTAINER ID        IMAGE                                   COMMAND                  CREATED             STATUS                  PORTS                                           NAMES
8efcff791153        juniper/openjnpr-container-vmx          "/launch.sh"             3 seconds ago       Up Less than a second   0.0.0.0:32903->22/tcp, 0.0.0.0:32902->830/tcp   mysimplevmxlab_vmx1_1
55f5338c8b71        juniper/openjnpr-container-vmx          "/launch.sh"             3 seconds ago       Up 1 second             0.0.0.0:32901->22/tcp, 0.0.0.0:32900->830/tcp   mysimplevmxlab_vmx2_1
c2ef1bdf83a9        juniper/openjnpr-container-vmx:bionic   "/launch.sh"             2 hours ago         Up 2 hours              0.0.0.0:32857->22/tcp, 0.0.0.0:32856->830/tcp   leaf1
d6b99f4325fa        juniper/openjnpr-container-vmx:bionic   "/launch.sh"             2 hours ago         Up 2 hours              0.0.0.0:32851->22/tcp, 0.0.0.0:32850->830/tcp   spine2
722d7e27ae90        juniper/openjnpr-container-vmx:bionic   "/launch.sh"             2 hours ago         Up 2 hours              0.0.0.0:32865->22/tcp, 0.0.0.0:32864->830/tcp   spine1
7fd41ed26279        marcelwiget/vmx-docker-light:latest     "/launch.sh"             2 hours ago         Up 2 hours              0.0.0.0:32843->22/tcp, 0.0.0.0:32842->830/tcp   vmxdockerlight_vmx1_1
304ec56a0400        marcelwiget/vmx-docker-light:latest     "/launch.sh"             2 hours ago         Up 2 hours              0.0.0.0:32845->22/tcp, 0.0.0.0:32844->830/tcp   vmxdockerlight_vmx2_1
cdcaa6014fc2        marcelwiget/dhcptester:latest           "/usr/bin/dhcptester…"   34 hours ago        Up 34 hours                                                             metroevpnbbardemo_dhcptester_1
de3ceea835a0        metroevpnbbardemo_dhcpclient            "/launch.sh"             34 hours ago        Up 34 hours                                                             metroevpnbbardemo_dhcpclient_1
0111d21510ef        metroevpnbbardemo_keadhcp6              "/sbin/tini /launch.…"   34 hours ago        Up 34 hours                                                             metroevpnbbardemo_keadhcp6_1
742118e1b5ca        metroevpnbbardemo_dhcp4server           "/sbin/tini /launch.…"   34 hours ago        Up 34 hours                                                             metroevpnbbardemo_dhcp4server_1
b331054554a8        juniper/openjnpr-container-vmx:trusty   "/launch.sh"             34 hours ago        Up 34 hours             0.0.0.0:32839->22/tcp, 0.0.0.0:32838->830/tcp   metroevpnbbardemo_bbar2_1
68154ba01b10        juniper/openjnpr-container-vmx:trusty   "/launch.sh"             34 hours ago        Up 34 hours             0.0.0.0:32837->22/tcp, 0.0.0.0:32836->830/tcp   metroevpnbbardemo_bbar1_1
24c4846bd4d5        juniper/openjnpr-container-vmx:trusty   "/launch.sh"             34 hours ago        Up 34 hours             0.0.0.0:32835->22/tcp, 0.0.0.0:32834->830/tcp   metroevpnbbardemo_core1_1
yes

lcpu affinity ............................  0-7

NUMA node(s):        1
NUMA node0 CPU(s):   0-7

system dependencies ok
/u contains the following files:
docker-compose.yml  junos-vmx-x86-64-18.2R1.9.qcow2
id_rsa.pub      license-eval.txt
/fix_network_order.sh: trying to fix network interface order via docker inspect myself ...
MACS=02:42:c0:a8:50:03 02:42:c0:a8:40:03 02:42:c0:a8:30:03 02:42:c0:a8:20:03
02:42:c0:a8:50:03 eth0 == eth0
02:42:c0:a8:40:03 eth1 == eth1
02:42:c0:a8:30:03 eth3 -> eth2
FROM eth3 () TO eth2 ()
Actual changes:
tx-checksumming: off
    tx-checksum-ip-generic: off
    tx-checksum-sctp: off
tcp-segmentation-offload: off
    tx-tcp-segmentation: off [requested on]
    tx-tcp-ecn-segmentation: off [requested on]
    tx-tcp-mangleid-segmentation: off [requested on]
    tx-tcp6-segmentation: off [requested on]
Actual changes:
tx-checksumming: off
    tx-checksum-ip-generic: off
    tx-checksum-sctp: off
tcp-segmentation-offload: off
    tx-tcp-segmentation: off [requested on]
    tx-tcp-ecn-segmentation: off [requested on]
    tx-tcp-mangleid-segmentation: off [requested on]
    tx-tcp6-segmentation: off [requested on]
02:42:c0:a8:20:03 eth3 == eth3
using qcow2 image junos-vmx-x86-64-18.2R1.9.qcow2
LICENSE=license-eval.txt
269: eth0@if270: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:c0:a8:50:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
Interface  IPv6 address
Bridging  (/02:42:c0:a8:50:03) with fxp0
Current MAC:   02:42:c0:a8:50:03 (unknown)
Permanent MAC: 00:00:00:00:00:00 (XEROX CORPORATION)
New MAC:       00:1d:20:7d:12:45 (COMTREND CO.)
-----------------------------------------------------------------------
vMX mysimplevmxlab_vmx1_1 (192.168.80.3) 18.2R1.9 root password vohdaiph4veekah1as5raeSi
-----------------------------------------------------------------------

bridge name bridge id       STP enabled interfaces
br-ext      8000.001d207d1245   no      eth0
                            fxp0
br-int      8000.16d69246dcb7   no      em1
Creating config drive /tmp/configdrive.qcow2
METADISK=/tmp/configdrive.qcow2 CONFIG=/tmp/vmx1.conf LICENSE=/u/license-eval.txt
Creating config drive (configdrive.img) ...
extracting licenses from /u/license-eval.txt
  writing license file config_drive/config/license/E435890758.lic ...
adding config file /tmp/vmx1.conf
-rw-r--r-- 1 root root 458752 Jul 26 13:45 /tmp/configdrive.qcow2
Creating empty /tmp/vmxhdd.img for VCP ...
Starting PFE ...
Booting VCP ...
Waiting for VCP to boot... Consoles: serial port
BIOS drive A: is disk0
BIOS drive C: is disk1
BIOS drive D: is disk2
BIOS drive E: is disk3
BIOS 639kB/1047424kB available memory

FreeBSD/x86 bootstrap loader, Revision 1.1
(builder@feyrith.juniper.net, Thu Jun 14 14:21:45 PDT 2018)
-

Booting from Junos volume ...
|
/packages/sets/pending/boot/os-kernel/kernel text=0x443df8 data=0x82258+0x290990 syms=[0x8+0x94aa0+0x8+0x814cd]
/packages/sets/pending/boot/junos-net-platform/mtx_re.ko size 0x2239a0 
. . .

You can stop following the logs with Ctrl-C at any time; this won't interrupt the instance.

Take note of the password line in the log file. It not only contains the auto-generated plain-text root password, but also the vMX version and its management IP address:

 

$ docker-compose logs|grep password
vmx2_1  | vMX mysimplevmxlab_vmx2_1 (192.168.80.2) 18.2R1.9 root password otheem4ocahTh3aej6ah2oos
vmx1_1  | vMX mysimplevmxlab_vmx1_1 (192.168.80.3) 18.2R1.9 root password vohdaiph4veekah1as5raeSi
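
Junos typically needs a few minutes before SSH answers. If you'd rather script the wait than retry by hand, a minimal sketch using the management IP from the log above:

$ until ssh -o ConnectTimeout=5 192.168.80.3 "show version" >/dev/null 2>&1; do echo "waiting for vmx1 ..."; sleep 15; done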

Log into your vMXs using SSH to the shown IP addresses. You might rightfully ask yourself: who assigned this IP address, how did the vMX learn about it, and why am I not asked for a password? Magic? Well no ;-), just good automation done by the launch container: it copies the user's SSH public key (id_rsa.pub) and the assigned IP of the container's eth0 interface into the Junos configuration, for your userid and fxp0, within the apply group openjnpr-container-vmx. You can see this in the output below:

 

$ ssh 192.168.80.3
The authenticity of host '192.168.80.3 (192.168.80.3)' can't be established.
ECDSA key fingerprint is SHA256:nZn+TFQgh5xQshQIeoiCb79kCWBgYPVt2VNgXfsw6Zc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.80.3' (ECDSA) to the list of known hosts.
--- JUNOS 18.2R1.9 Kernel 64-bit  JNPR-11.0-20180614.6c3f819_buil
mwiget@mysimplevmxlab_vmx1_1> show configuration groups openjnpr-container-vmx
system {
    configuration-database {
        ephemeral {
            instance openjnpr-container-vmx-vfp0;
        }
    }
    login {
        user mwiget {
            uid 2000;
            class super-user;
            authentication {
                encrypted-password "$1$Quohz5fu$XAlF3qxxESywZDUY52PuI/"; ## SECRET-DATA
                ssh-rsa "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLxUbwJ8sJD1euXqRvnU8tblaNGYWVdcGVksYu2GKwmfGtadEhtN5nG4trGBR3wMBse2HEe/Fhg4IVIFqAmvxQ0hj5KvZRnYg3eQYouLF8UprRM5a9IzYIlBjdwYMQaNIwDOh/TfV+W1famLSkPdXAiX/1Tq9YXzsBtSkfLWlKanx/np6ZhamC+Wfsh7jAIJsqB0gLWId2yl/hVV8lDCnL7WvuPby8IMKI1oWNdQkl87lb34ot8WsnYxtgPwNNTwhNLjc7byTuj+B7olZczWSWexDscd+xmXA7F6OR8riIZvY/z/OaLn2r+pUNSHwXXAqoNM5KDbIpXKP8fagbSS5B mwiget@sb"; ## SECRET-DATA
            }
        }
    }
    root-authentication {
        encrypted-password "$1$Quohz5fu$XAlF3qxxESywZDUY52PuI/"; ## SECRET-DATA
        ssh-rsa "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLxUbwJ8sJD1euXqRvnU8tblaNGYWVdcGVksYu2GKwmfGtadEhtN5nG4trGBR3wMBse2HEe/Fhg4IVIFqAmvxQ0hj5KvZRnYg3eQYouLF8UprRM5a9IzYIlBjdwYMQaNIwDOh/TfV+W1famLSkPdXAiX/1Tq9YXzsBtSkfLWlKanx/np6ZhamC+Wfsh7jAIJsqB0gLWId2yl/hVV8lDCnL7WvuPby8IMKI1oWNdQkl87lb34ot8WsnYxtgPwNNTwhNLjc7byTuj+B7olZczWSWexDscd+xmXA7F6OR8riIZvY/z/OaLn2r+pUNSHwXXAqoNM5KDbIpXKP8fagbSS5B mwiget@sb"; ## SECRET-DATA
    }
    host-name mysimplevmxlab_vmx1_1;
    services {
        ssh {
            client-alive-interval 30;
        }
        netconf {
            ssh;
        }
    }
    syslog {
        file messages {
            any notice;
        }
    }
}
interfaces {
    fxp0 {
        unit 0 {
            family inet {
                address 192.168.80.3/20;
            }
        }
    }
}
mwiget@mysimplevmxlab_vmx1_1> show configuration apply-groups
## Last commit: 2018-07-26 13:48:25 UTC by root
apply-groups openjnpr-container-vmx;

The same SSH public key is also given to the root account, and SSH and NETCONF are activated. Remember, we haven't even created the Junos configuration files referenced by the docker-compose.yml file, vmx1.conf and vmx2.conf. You can provide those, e.g. by saving the configurations from your running instances. Make sure they contain the 'apply-groups' statement, otherwise a relaunched instance won't learn its new management IP address:

 

$ ssh 192.168.80.3 show conf > vmx1.conf.new
$ ls -l vmx1.conf*
-rw-rw-r-- 1 mwiget mwiget 2212 Jul 26 15:59 vmx1.conf.new
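
To actually use the saved configuration on the next launch, rename it to the filename referenced in docker-compose.yml (vmx1.conf in this example) and recreate the instance; a sketch:

$ mv vmx1.conf.new vmx1.conf
$ docker-compose up -d --force-recreate vmx1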

Let's check the forwarding engine and its interfaces:

 

$ ssh 192.168.80.3
Last login: Thu Jul 26 13:55:09 2018 from 192.168.80.1
--- JUNOS 18.2R1.9 Kernel 64-bit  JNPR-11.0-20180614.6c3f819_buil
mwiget@mysimplevmxlab_vmx1_1> show interfaces descriptions
Interface       Admin Link Description
ge-0/0/0        up    up   mysimplevmxlab_net-a
ge-0/0/1        up    up   mysimplevmxlab_net-b
ge-0/0/2        up    up   mysimplevmxlab_net-c
fxp0            up    up   mysimplevmxlab_mgmt

mwiget@mysimplevmxlab_vmx1_1>

Cool. Everything looks good. But where do the interface descriptions come from? They weren't added to the apply group; instead they were injected at runtime into an ephemeral configuration database instance called openjnpr-container-vmx-vfp0:

 

mwiget@mysimplevmxlab_vmx1_1> show ephemeral-configuration instance openjnpr-container-vmx-vfp0
## Last changed: 2018-07-26 13:48:36 UTC
interfaces {
    ge-0/0/0 {
        description mysimplevmxlab_net-a;
    }
    ge-0/0/1 {
        description mysimplevmxlab_net-b;
    }
    ge-0/0/2 {
        description mysimplevmxlab_net-c;
    }
    fxp0 {
        description mysimplevmxlab_mgmt;
    }
}

and this ephemeral db instance is activated automatically via the following configuration statement as part of the apply group:

 

mwiget@mysimplevmxlab_vmx1_1> show configuration groups |display set |match ephemeral
set groups openjnpr-container-vmx system configuration-database ephemeral instance openjnpr-container-vmx-vfp0

Not sure about you, but IMHO Junos really rocks, offering such comprehensive automation techniques.

Next steps: configure your vMX instances to your needs and test them. The virtual network interfaces are built via Linux bridges, so don't expect much performance. Bridges also block L2 protocols such as LLDP and LACP; if you need those, switch to other Docker network drivers like macvlan. VLANs, however, are supported over the default Docker networks. Check out the example repos shown at the beginning of this blog post.
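
As an illustration of the macvlan alternative, such a network can be created outside of docker-compose and then referenced from it as an external network; the network name lab-l2-net, the parent interface enp3s0 and the subnet here are hypothetical, adjust them to your host:

$ docker network create -d macvlan -o parent=enp3s0 --subnet=10.99.0.0/24 lab-l2-net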

 

While building this example, I had a few other vMXs running on the same Linux server, 10 in total. An interesting way to see all the vMX forwarding daemons (riot) running natively on the Linux host (though isolated within their container namespaces) is this:

 

$ ps ax|grep riot|grep lcores| wc -l
10
$ ps ax|grep riot|grep lcores
  562 pts/0    Sl    19:09 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev net_pcap0,iface=eth1,mac=02:42:c0:a8:10:02 --vdev net_pcap1,iface=eth2,mac=02:42:c0:a8:00:02 --vdev net_pcap2,iface=eth3,mac=02:42:ac:1f:00:02 --vdev net_pcap3,iface=eth4,mac=02:42:ac:1e:00:02 --vdev net_pcap4,iface=eth5,mac=02:42:ac:1d:00:02 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2),(2,0,2,2),(3,0,3,2),(4,0,4,2), --tx (0,2),(1,2),(2,2),(3,2),(4,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
  781 pts/0    Sl    19:07 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev net_pcap0,iface=eth1,mac=02:42:ac:18:00:03 --vdev net_pcap1,iface=eth2,mac=02:42:ac:19:00:02 --vdev net_pcap2,iface=eth3,mac=02:42:ac:1a:00:02 --vdev net_pcap3,iface=eth4,mac=02:42:ac:1b:00:02 --vdev net_pcap4,iface=eth5,mac=02:42:ac:1c:00:02 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2),(2,0,2,2),(3,0,3,2),(4,0,4,2), --tx (0,2),(1,2),(2,2),(3,2),(4,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
  897 pts/0    Sl    20:12 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev net_pcap0,iface=eth1,mac=02:42:ac:18:00:02 --vdev net_pcap1,iface=eth2,mac=02:42:ac:19:00:03 --vdev net_pcap2,iface=eth3,mac=02:42:ac:1a:00:03 --vdev net_pcap3,iface=eth4,mac=02:42:ac:1b:00:03 --vdev net_pcap4,iface=eth5,mac=02:42:ac:1c:00:03 --vdev net_pcap5,iface=eth6,mac=02:42:c0:a8:10:03 --vdev net_pcap6,iface=eth7,mac=02:42:c0:a8:00:03 --vdev net_pcap7,iface=eth8,mac=02:42:ac:1f:00:03 --vdev net_pcap8,iface=eth9,mac=02:42:ac:1e:00:03 --vdev net_pcap9,iface=eth10,mac=02:42:ac:1d:00:03 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2),(2,0,2,2),(3,0,3,2),(4,0,4,2),(5,0,5,2),(6,0,6,2),(7,0,7,2),(8,0,8,2),(9,0,9,2), --tx (0,2),(1,2),(2,2),(3,2),(4,2),(5,2),(6,2),(7,2),(8,2),(9,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
 2247 pts/0    Sl   212:32 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev eth_pcap0,iface=eth1,mac=02:42:0a:0a:00:02 --vdev eth_pcap1,iface=eth2,mac=02:42:0a:63:00:02 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2), --tx (0,2),(1,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
 2550 pts/0    Sl   231:18 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev eth_pcap0,iface=eth1,mac=02:42:ac:12:00:03 --vdev eth_pcap1,iface=eth2,mac=02:42:0a:02:00:03 --vdev eth_pcap2,iface=eth3,mac=02:42:0a:0a:00:04 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2),(2,0,2,2), --tx (0,2),(1,2),(2,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
 2712 pts/0    Sl   236:51 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev eth_pcap0,iface=eth1,mac=02:42:ac:12:00:02 --vdev eth_pcap1,iface=eth2,mac=02:42:0a:02:00:02 --vdev eth_pcap2,iface=eth3,mac=02:42:0a:0a:00:03 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2),(2,0,2,2), --tx (0,2),(1,2),(2,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
21007 pts/0    Sl     2:42 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev net_pcap0,iface=eth1,mac=02:42:c0:a8:40:02 --vdev net_pcap1,iface=eth2,mac=02:42:c0:a8:30:02 --vdev net_pcap2,iface=eth3,mac=02:42:c0:a8:20:02 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2),(2,0,2,2), --tx (0,2),(1,2),(2,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
21379 pts/0    Sl     2:41 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev net_pcap0,iface=eth1,mac=02:42:c0:a8:40:03 --vdev net_pcap1,iface=eth2,mac=02:42:c0:a8:30:03 --vdev net_pcap2,iface=eth3,mac=02:42:c0:a8:20:03 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2),(2,0,2,2), --tx (0,2),(1,2),(2,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
31702 pts/0    Sl    24:01 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev net_pcap0,iface=eth1,mac=02:42:ac:15:00:03 --vdev net_pcap1,iface=eth2,mac=02:42:ac:14:00:02 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2), --tx (0,2),(1,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
31887 pts/0    Sl    24:02 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev net_pcap0,iface=eth1,mac=02:42:ac:15:00:02 --vdev net_pcap1,iface=eth2,mac=02:42:ac:14:00:03 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2), --tx (0,2),(1,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)

Terminate the instances

$ docker-compose down

Final words

That's it. Hope you enjoyed reading this blog post. Let me know in the comment section. If you are interested in how the container is built, you can check its source here:

https://github.com/Juniper/OpenJNPR-Container-vMX

 

Please also check out https://www.tesuto.com/, which offers cloud-based network emulation at scale. They use some of the techniques of this vMX container to bring up Juniper vMX.


#How-To
#vmx