Enterprise Cloud and Transformation
How to Get Junos “Speaking Whale” to Containers
Sep 28, 2016


With containers gaining popularity, many people are moving virtual machine-based workloads to containers, and developing and packaging entirely new cloud-native applications as container-based microservices to scale and automate, in line with typical cloud-native values.

 

Docker is by far the most popular container tool for packaging, distributing and running containers. While "Docker", conversationally, usually refers to the Docker command-line tool for these tasks, Docker is really an open-source set of technologies, with Docker Inc. providing Docker Engine and other productized forms of these tools with commercial support. Juniper Networks has a technology alliance partnership with Docker Inc.

 

While containers themselves have been widely adopted as the foundational building blocks of applications, we need more than building blocks to construct next-generation cloud applications. Running a container-based application raises an array of considerations: isolation, composition, scheduling, lifecycle, discovery, constituency, scaling, AAA, monitoring and health. These factors are generally addressed by new-generation orchestration stacks that may go by the name of microservices stack, serverless architecture stack, or containers as a service (CaaS). Popular orchestration examples include Kubernetes, Mesos, Docker Swarm and Nomad, along with accompanying technologies and optional Platform as a Service (PaaS) layers. While all of these orchestration systems have various networking biases, Juniper generally addresses them the same way we automate networking for infrastructure orchestration stacks: with Contrail Networking as a software-defined networking (SDN) overlay solution that can be built on a Juniper or any other underlay physical network. Addressing such stacks and SDN with Contrail Networking is outside this guide's scope; we have tackled that before in many blogs on opencontrail.org.

 

 

As with all IT transformations, the move to containers comes in different forms. Sometimes the shift to containers will not be wholly revolutionary with such orchestration stacks; instead, containers may simply be managed, manually or automatically, on the servers running Docker in conjunction with the physical network. Operators may want this simple kind of model, which allows containers to be attached to virtual networks in a way analogous to how bare-metal nodes and virtual machines can be attached today. In such use cases, most Juniper Networks physical networking systems provide a way of connecting containers to virtual local area networks (VLANs).

 

In Docker version 1.12, some of the networking drivers in Docker's networking framework, libnetwork, have matured to interoperate with VLAN technology on network devices that support VLAN tagging and trunking per the IEEE 802.1Q standard, most commonly switches. This guide provides the Junos OS configuration needed to set up, test and operate compatible Juniper devices with the libnetwork MACVLAN driver, which is now fully supported by Docker Inc. in commercial Docker deployments.
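Before starting, it is worth confirming the Docker engine version on each host, since the MACVLAN driver only left experimental status in release 1.12. One quick way (output omitted here):

user@bms1-m2:~$ sudo docker version --format '{{.Server.Version}}'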

 

Network and Server Setup

For simplicity, we will assume that our two servers running Docker are connected to a QFX series switch, but this could equally be an EX or MX series device, or a network of switches and routers using standards-based technologies or Juniper technologies like Virtual Chassis, Virtual Chassis Fabric or Junos Fusion Data Center. In fact, the switch we used in this test is a member of a virtual chassis fabric.

 

[Figure: Docker lab physical view]

To test both inter-container and inter-server connectivity, we will connect two servers and create two VLANs, numbered 1000 and 2000. We will connect server1 on switch interface ge-6/0/13 and server2 on interface ge-6/0/15.

 

Server1 will run two containers, containerA and containerB, both in the same isolation domain. That is to say, they will be on the same VLAN, 1000, and Docker should configure Linux networking properly to allow them to reach each other locally, without traffic going to and from the switch.

 

Server2 will also run two containers, containerC and containerD, in different isolation domains. They will be on different VLANs. ContainerC will be on VLAN 1000 and containerD will be on VLAN 2000.

 

We will associate the 192.168.10.0/24 subnet with VLAN 1000 and 192.168.20.0/24 subnet with VLAN 2000.
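To summarize the layout we are building:

Server    Container    VLAN   Subnet            Container IP    Gateway (IRB)
server1   containerA   1000   192.168.10.0/24   192.168.10.10   192.168.10.1
server1   containerB   1000   192.168.10.0/24   192.168.10.11   192.168.10.1
server2   containerC   1000   192.168.10.0/24   192.168.10.12   192.168.10.1
server2   containerD   2000   192.168.20.0/24   192.168.20.10   192.168.20.1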

It is expected that containerC can reach containerA and containerB with simple bridging through the switch. In order for containerD to reach (or be reachable from) VLAN 1000, where the other containers are isolated, we need to route traffic between the two VLANs. We set this up with Junos OS integrated routing and bridging (IRB).

[Figure: Docker lab logical view]

Junos OS Initial Configuration

This initial configuration will support containers in either VLAN on either server attached to ge-6/0/13 and ge-6/0/15. This configuration was tested on a QFX5100 virtual chassis fabric (VCF).

 

root> configure

 

root# set vlans foo vlan-id 1000

root# set vlans bar vlan-id 2000

 

root# set interfaces ge-6/0/13 unit 0 family ethernet-switching interface-mode trunk

root# set interfaces ge-6/0/13 unit 0 family ethernet-switching vlan members foo

root# set interfaces ge-6/0/13 unit 0 family ethernet-switching vlan members bar

root# set interfaces ge-6/0/15 unit 0 family ethernet-switching interface-mode trunk

root# set interfaces ge-6/0/15 unit 0 family ethernet-switching vlan members foo

root# set interfaces ge-6/0/15 unit 0 family ethernet-switching vlan members bar

root# set interfaces irb unit 1000 family inet address 192.168.10.1/24

root# set interfaces irb unit 2000 family inet address 192.168.20.1/24

 

root# set vlans foo l3-interface irb.1000

root# set vlans bar l3-interface irb.2000

 

root# commit

 

Note: use port-mode instead of interface-mode on switches that do NOT support the Enhanced Layer 2 Software (ELS) CLI, as the QFX series does; likewise, the "irb" interface above is named "vlan" in older, non-ELS Junos configurations.

root# set interfaces ge-0/0/0 unit 0 family ethernet-switching port-mode trunk
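For example, on a non-ELS EX series switch, the layer-3 side would look roughly like this (a sketch; the routed VLAN interface is named vlan instead of irb):

root# set interfaces vlan unit 1000 family inet address 192.168.10.1/24

root# set vlans foo l3-interface vlan.1000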

 

For more information on configuring VLANs in Junos OS, see

https://www.juniper.net/documentation/en_US/junos16.1/topics/example/RVIs-qfx-series-example1.html
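Before moving on to Docker, you can verify the VLAN and IRB state on the switch (a quick check; command availability varies slightly by platform and Junos release):

root> show vlans

root> show interfaces terse irb

root> show ethernet-switching interface ge-6/0/13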

 

Docker Configuration

Create networks and containers with Docker. Here we use the busybox image for simplicity: it is a stock image on Docker Hub that includes the networking utilities, such as ip and ping, that we need to verify the setup is working as expected.

Reference the Docker macvlan documentation for more details on these commands.
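Note that Docker's macvlan driver creates the 802.1Q subinterface ens9.1000 from the parent NIC ens9 automatically if it does not already exist. To create or inspect it manually with iproute2 (a sketch, assuming ens9 is the NIC cabled to the switch trunk port):

user@bms1-m2:~$ sudo ip link add link ens9 name ens9.1000 type vlan id 1000

user@bms1-m2:~$ sudo ip link set ens9.1000 up

user@bms1-m2:~$ ip -d link show ens9.1000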

 

Create the network on server1:

 

user@bms1-m2:~$ sudo docker network create -d macvlan --subnet=192.168.10.0/24 --gateway=192.168.10.1 -o parent=ens9.1000 macvlan1000

4b289befa66a840555a3bdcac296309e40e2bb2d26e271cd4dbbfa90636a82b6

 

user@bms1-m2:~$ sudo docker network ls

NETWORK ID          NAME                DRIVER              SCOPE

c6022998cb28        bridge              bridge              local

90d227d3c7ce        host                host                local

4b289befa66a        macvlan1000         macvlan             local

fb7a27248942        none                null                local

 

user@bms1-m2:~$ ip -d addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: ens9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

    link/ether 38:c9:86:15:27:4f brd ff:ff:ff:ff:ff:ff promiscuity 0

    inet6 fe80::3ac9:86ff:fe15:274f/64 scope link

       valid_lft forever preferred_lft forever

3: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

    link/ether a8:20:66:29:e2:82 brd ff:ff:ff:ff:ff:ff promiscuity 0

    inet 10.105.5.161/24 brd 10.105.5.255 scope global enp1s0f0

       valid_lft forever preferred_lft forever

    inet6 fe80::aa20:66ff:fe29:e282/64 scope link

       valid_lft forever preferred_lft forever

4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default

    link/ether 02:42:61:0e:53:8f brd ff:ff:ff:ff:ff:ff promiscuity 0

    bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q

    inet 172.17.0.1/16 scope global docker0

       valid_lft forever preferred_lft forever

    inet6 fe80::42:61ff:fe0e:538f/64 scope link

       valid_lft forever preferred_lft forever

7: ens9.1000@ens9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default

    link/ether 38:c9:86:15:27:4f brd ff:ff:ff:ff:ff:ff promiscuity 0

    vlan protocol 802.1Q id 1000 <REORDER_HDR>

    inet6 fe80::3ac9:86ff:fe15:274f/64 scope link

       valid_lft forever preferred_lft forever

 

user@bms1-m2:~$ sudo docker network inspect macvlan1000

[

    {

        "Name": "macvlan1000",

        "Id": "4b289befa66a840555a3bdcac296309e40e2bb2d26e271cd4dbbfa90636a82b6",

        "Scope": "local",

        "Driver": "macvlan",

        "EnableIPv6": false,

        "IPAM": {

            "Driver": "default",

            "Options": {},

            "Config": [

                {

                    "Subnet": "192.168.10.0/24",

                    "Gateway": "192.168.10.1"

                }

            ]

        },

        "Internal": false,

        "Containers": {},

        "Options": {

            "parent": "ens9.1000"

        },

        "Labels": {}

    }

]

 

Create the networks on server2:

 

user@bms2-m7:~$ sudo docker network create -d macvlan --subnet=192.168.10.0/24 --gateway=192.168.10.1 -o parent=ens9.1000 macvlan1000

d2053e0746a7ba40e42412ee8b7df39a1a47a53369fb2c940e2f68cea63cc995

 

user@bms2-m7:~$ sudo docker network create -d macvlan --subnet=192.168.20.0/24 --gateway=192.168.20.1 -o parent=ens9.2000 macvlan2000

58cf1be35e7a0bb37fbd6c3c393ace3547333b3338b0213a129989aa527d8fe7

 

user@bms2-m7:~$ ip -d addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: ens9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

    link/ether 38:c9:86:17:4c:a4 brd ff:ff:ff:ff:ff:ff promiscuity 0

    inet6 fe80::3ac9:86ff:fe17:4ca4/64 scope link

       valid_lft forever preferred_lft forever

3: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

    link/ether a8:20:66:29:e6:c4 brd ff:ff:ff:ff:ff:ff promiscuity 0

    inet 10.105.5.134/24 brd 10.105.5.255 scope global enp1s0f0

       valid_lft forever preferred_lft forever

    inet6 fe80::aa20:66ff:fe29:e6c4/64 scope link

       valid_lft forever preferred_lft forever

4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default

    link/ether 02:42:fb:a5:36:84 brd ff:ff:ff:ff:ff:ff promiscuity 0

    bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q

    inet 172.17.0.1/16 scope global docker0

       valid_lft forever preferred_lft forever

    inet6 fe80::42:fbff:fea5:3684/64 scope link

       valid_lft forever preferred_lft forever

7: ens9.1000@ens9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default

    link/ether 38:c9:86:17:4c:a4 brd ff:ff:ff:ff:ff:ff promiscuity 0

    vlan protocol 802.1Q id 1000 <REORDER_HDR>

    inet6 fe80::3ac9:86ff:fe17:4ca4/64 scope link

       valid_lft forever preferred_lft forever

8: ens9.2000@ens9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default

    link/ether 38:c9:86:17:4c:a4 brd ff:ff:ff:ff:ff:ff promiscuity 0

    vlan protocol 802.1Q id 2000 <REORDER_HDR>

    inet6 fe80::3ac9:86ff:fe17:4ca4/64 scope link

       valid_lft forever preferred_lft forever

 

user@bms2-m7:~$ sudo docker network ls

NETWORK ID          NAME                DRIVER              SCOPE

05bc33a42a9a        bridge              bridge              local

fc37b716c4e7        host                host                local

d2053e0746a7        macvlan1000         macvlan             local

58cf1be35e7a        macvlan2000         macvlan             local

c09c1db624a6        none                null                local

 

user@bms2-m7:~$ sudo docker network inspect macvlan1000

[

    {

        "Name": "macvlan1000",

        "Id": "d2053e0746a7ba40e42412ee8b7df39a1a47a53369fb2c940e2f68cea63cc995",

        "Scope": "local",

        "Driver": "macvlan",

        "EnableIPv6": false,

        "IPAM": {

            "Driver": "default",

            "Options": {},

            "Config": [

                {

                    "Subnet": "192.168.10.0/24",

                    "Gateway": "192.168.10.1"

                }

            ]

        },

        "Internal": false,

        "Containers": {},

        "Options": {

            "parent": "ens9.1000"

        },

        "Labels": {}

    }

]

 

user@bms2-m7:~$ sudo docker network inspect macvlan2000

[

    {

        "Name": "macvlan2000",

        "Id": "58cf1be35e7a0bb37fbd6c3c393ace3547333b3338b0213a129989aa527d8fe7",

        "Scope": "local",

        "Driver": "macvlan",

        "EnableIPv6": false,

        "IPAM": {

            "Driver": "default",

            "Options": {},

            "Config": [

                {

                    "Subnet": "192.168.20.0/24",

                    "Gateway": "192.168.20.1"

                }

            ]

        },

        "Internal": false,

        "Containers": {},

        "Options": {

            "parent": "ens9.2000"

        },

        "Labels": {}

    }

]

 

Run containerA and containerB on server1:

 

user@bms1-m2:~$ sudo docker run -itd --name='containerA' --hostname='containerA' --net=macvlan1000 --ip=192.168.10.10 busybox

ae3b7ea130cea06ccf0e53807b50cd3cbe250fd30a5985693cbe2fab40fbf66b

 

user@bms1-m2:~$ sudo docker run -itd --name='containerB' --hostname='containerB' --net=macvlan1000 --ip=192.168.10.11 busybox

9afd5191e40089522b676f9e578a89091af7da412a452c74748ca9d6d1c74e77

 

user@bms1-m2:~$ sudo docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

9afd5191e400        busybox             "sh"                9 seconds ago       Up 8 seconds                            containerB

ae3b7ea130ce        busybox             "sh"                29 seconds ago      Up 28 seconds                           containerA

 

user@bms1-m2:~$ sudo docker exec -ti containerA ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

24: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue

    link/ether 02:42:c0:a8:0a:0a brd ff:ff:ff:ff:ff:ff

    inet 192.168.10.10/24 scope global eth0

       valid_lft forever preferred_lft forever

    inet6 fe80::42:c0ff:fea8:a0a/64 scope link

       valid_lft forever preferred_lft forever

user@bms1-m2:~$ sudo docker exec -ti containerB ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

25: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue

    link/ether 02:42:c0:a8:0a:0b brd ff:ff:ff:ff:ff:ff

    inet 192.168.10.11/24 scope global eth0

       valid_lft forever preferred_lft forever

    inet6 fe80::42:c0ff:fea8:a0b/64 scope link

       valid_lft forever preferred_lft forever

 

Run containerC and containerD on server2:

 

user@bms2-m7:~$ sudo docker run -itd --name='containerC' --hostname='containerC' --net=macvlan1000 --ip=192.168.10.12 busybox

Unable to find image 'busybox:latest' locally

latest: Pulling from library/busybox

8ddc19f16526: Pull complete

Digest: sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6

Status: Downloaded newer image for busybox:latest

2eff9352517fe15ef28860e9f7e14a15ffc9c53e65d1a447dba7b6389ab8612f

 

user@bms2-m7:~$ sudo docker run -itd --name='containerD' --hostname='containerD' --net=macvlan2000 --ip=192.168.20.10 busybox

480423ac2e31725275946e7dd1c09b750ba0e281edb66b1d113ab574d196c965

 

user@bms2-m7:~$ sudo docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

480423ac2e31        busybox             "sh"                8 seconds ago       Up 7 seconds                            containerD

2eff9352517f        busybox             "sh"                36 seconds ago      Up 35 seconds                           containerC

 

user@bms2-m7:~$ sudo docker exec -ti containerC ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

19: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue

    link/ether 02:42:c0:a8:0a:0c brd ff:ff:ff:ff:ff:ff

    inet 192.168.10.12/24 scope global eth0

       valid_lft forever preferred_lft forever

    inet6 fe80::42:c0ff:fea8:a0c/64 scope link

       valid_lft forever preferred_lft forever

user@bms2-m7:~$ sudo docker exec -ti containerD ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

20: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue

    link/ether 02:42:c0:a8:14:0a brd ff:ff:ff:ff:ff:ff

    inet 192.168.20.10/24 scope global eth0

       valid_lft forever preferred_lft forever

    inet6 fe80::42:c0ff:fea8:140a/64 scope link

       valid_lft forever preferred_lft forever

 

Verification and Operation

Now we will test that the containers can reach each other. containerA and containerB on server1 can reach each other directly within the host's networking stack. containerA or containerB on server1 can reach containerC on server2 through layer-2 bridging through the switch, because they are on the same VLAN. containerD, on a separate VLAN, can also reach the other containers, but its traffic always goes through the switch and is routed at layer 3 through the Junos OS IRB interfaces. We will see that this decrements the ICMP ping TTL and increments the verification counters we set up on the switch.
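Before inspecting counters, a quick sanity check on the switch is to confirm that the container MAC addresses have been learned on the trunk ports (command shown for ELS Junos; output omitted):

root@B5-VCF> show ethernet-switching table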

 

 

 

 

Set up test counters on the two IRB interfaces and the bridged interface facing server2:

 

We show the curly-brace form of the configuration here, which can be pasted in all at once with the load merge terminal command.

 

{master:0}[edit]

root@B5-VCF# show firewall

family ethernet-switching {

    filter test3 {

        term t {

            from {

                ip-protocol icmp;

            }

            then {

                accept;

                count test3-counter;

            }

        }

        term tt {

            then accept;

        }

    }

}

filter test1 {

    term t {

        then {

            count test1-counter;

            accept;

        }

    }

}

filter test2 {

    term t {

        then {

            count test2-counter;

            accept;

        }

    }

}

{master:0}[edit]

root@B5-VCF# show interfaces ge-6/0/15

unit 0 {

    family ethernet-switching {

        interface-mode trunk;

        vlan {

            members [ foo bar ];

        }

        filter {

            input test3;

        }

    }

}

 

{master:0}[edit]

root@B5-VCF# show interfaces irb unit 1000

family inet {

    filter {

        input test1;

    }

    address 192.168.10.1/24;

}

 

{master:0}[edit]

root@B5-VCF# show interfaces irb unit 2000

family inet {

    filter {

        input test2;

    }

    address 192.168.20.1/24;

}
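For reference, the same filters and their applications expressed in set form (equivalent to the curly-brace output above):

root# set firewall family ethernet-switching filter test3 term t from ip-protocol icmp

root# set firewall family ethernet-switching filter test3 term t then accept

root# set firewall family ethernet-switching filter test3 term t then count test3-counter

root# set firewall family ethernet-switching filter test3 term tt then accept

root# set firewall filter test1 term t then count test1-counter

root# set firewall filter test1 term t then accept

root# set firewall filter test2 term t then count test2-counter

root# set firewall filter test2 term t then accept

root# set interfaces ge-6/0/15 unit 0 family ethernet-switching filter input test3

root# set interfaces irb unit 1000 family inet filter input test1

root# set interfaces irb unit 2000 family inet filter input test2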

 

test1-counter counts the packets through the IRB interface for VLAN 1000.

test2-counter counts the packets through the IRB interface for VLAN 2000.

test3-counter counts the packets through the ge-6/0/15 interface to which server2 is attached.

 

Testing with ping

Pings all work as expected. containerA to containerB works through the host networking stack (not shown here) and will not increment any counters.
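For completeness, that local test looks like this (a sketch; output omitted because the traffic never leaves server1):

user@bms1-m2:~$ sudo docker exec -ti containerA ping -c 4 192.168.10.11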

 

Testing a ping from containerC on server2 to containerA on server1

 

user@bms2-m7:~$ sudo docker exec -ti containerC ping -c 4 192.168.10.10

PING 192.168.10.10 (192.168.10.10): 56 data bytes

64 bytes from 192.168.10.10: seq=0 ttl=64 time=0.402 ms

64 bytes from 192.168.10.10: seq=1 ttl=64 time=0.459 ms

64 bytes from 192.168.10.10: seq=2 ttl=64 time=0.439 ms

64 bytes from 192.168.10.10: seq=3 ttl=64 time=0.473 ms

 

--- 192.168.10.10 ping statistics ---

4 packets transmitted, 4 packets received, 0% packet loss

round-trip min/avg/max = 0.402/0.443/0.473 ms

 

We observe 4 packets in test3-counter, which are the 4 ping echo requests (the responses are not counted because the counting filter is applied in the input direction). test1-counter and test2-counter remain at zero because this traffic stays on VLAN 1000 and is bridged at layer 2, never touching the IRB interfaces:

 

{master:0}[edit]

root@B5-VCF# run show firewall counter filter test1 test1-counter

 

Filter: test1

Counters:

Name                                                Bytes              Packets

test1-counter                                           0                    0

 

{master:0}[edit]

root@B5-VCF# run show firewall counter filter test2 test2-counter

 

Filter: test2

Counters:

Name                                                Bytes              Packets

test2-counter                                           0                    0

 

{master:0}[edit]

root@B5-VCF# run show firewall counter filter test3 test3-counter

 

Filter: test3

Counters:

Name                                                Bytes              Packets

test3-counter                                         424                    4
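We intentionally leave the counters in place, so test3-counter will continue to accumulate in the next test. To start a test from zero instead, the counters can be cleared (from configuration mode, prefix with run):

{master:0}[edit]

root@B5-VCF# run clear firewall all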

 

Testing a ping from containerC to containerD, both on server2, but in separate VLANs:

 

user@bms2-m7:~$ sudo docker exec -ti containerC ping -c 4 192.168.20.10

PING 192.168.20.10 (192.168.20.10): 56 data bytes

64 bytes from 192.168.20.10: seq=0 ttl=63 time=0.244 ms

64 bytes from 192.168.20.10: seq=1 ttl=63 time=0.354 ms

64 bytes from 192.168.20.10: seq=2 ttl=63 time=0.357 ms

64 bytes from 192.168.20.10: seq=3 ttl=63 time=0.337 ms

 

--- 192.168.20.10 ping statistics ---

4 packets transmitted, 4 packets received, 0% packet loss

round-trip min/avg/max = 0.244/0.323/0.357 ms

 

We observe 4 packets in test1-counter for the ping responses and 4 packets in test2-counter for the ping echo requests. test3-counter has incremented by 8, to 12 in total including the 4 from the previous test, because both the requests and the responses come in this single interface facing server2. Note also the ttl=63 in the ping output above: the layer-3 hop through the IRB decremented the TTL from 64, just as expected:

 

{master:0}[edit]

root@B5-VCF# run show firewall counter filter test1 test1-counter

 

Filter: test1

Counters:

Name                                                Bytes              Packets

test1-counter                                         424                    4

 

{master:0}[edit]

root@B5-VCF# run show firewall counter filter test2 test2-counter

 

Filter: test2

Counters:

Name                                                Bytes              Packets

test2-counter                                         424                    4

 

{master:0}[edit]

root@B5-VCF# run show firewall counter filter test3 test3-counter

 

Filter: test3

Counters:

Name                                                Bytes              Packets

test3-counter                                        1272                   12

 

That’s it. We’ve now verified connectivity, and you can easily duplicate this setup yourself wherever you have VLANs connecting to bare-metal or VM workloads, and even VLANs bridged to VXLAN, EVPN or IP VPN networks that may be controlled by SDN systems.

 

If you've read my other recent personal blog on "saving the whale" from a Docker fork, you might wonder about networking container alternatives to Docker, or about combining them with Docker. Indeed, macvlan networking exists in Linux in general and is available to them too: it works similarly for CoreOS rkt container networking with macvlan mode, and Junos OS would be configured the same way.
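As an illustrative sketch for rkt, which uses CNI, a network definition along these lines (a hypothetical file such as /etc/rkt/net.d/10-macvlan1000.conf, reusing the same parent subinterface) would attach pods to VLAN 1000:

{
    "name": "macvlan1000",
    "type": "macvlan",
    "master": "ens9.1000",
    "ipam": {
        "type": "host-local",
        "subnet": "192.168.10.0/24",
        "gateway": "192.168.10.1"
    }
}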

 

Now, when the apps team is introducing Docker container workloads into your Juniper network environment, you can keep calm and speak whale ;) in other words: “Jjjjjjjjuuuuuuuu-nnnnnnooooooosssssssss”

 

 

 
