If your business is like most, embracing virtualization with the expectation of tremendous cost savings and agility gains, it is not alone. In fact, by 2017, the overwhelming majority of businesses will have deployed a virtual firewall somewhere in their network. Organizations worldwide are adopting virtual infrastructure to reduce capex, improve operational elasticity, and make users and IT staff more agile and productive.
If your business is considering the SDN or NFV route, then a virtual firewall is probably the first service you are looking to deploy, whether in the hosted/cloud environment or at the perimeter to replace a physical on-premises firewall. Regardless of which deployment option you choose, the bottom line is you need to secure the virtualized portion of your network, which in enterprises begins in the data center core. And while virtual firewalls abound, most are still not equal to their physical counterparts and fail to deliver the performance, agility and cost savings they promise.
However, if you believe virtual firewall performance and agility are obstacles to expanding your virtualization footprint, we have some great news for you.
Throughout 2015, Juniper dedicated considerable resources to bolstering the underlying architecture of our vSRX virtual firewall, leveraging industry advancements with DPDK and SR-IOV to radically improve performance, scale, and efficiency. As a result, the vSRX is now the industry’s highest performing, most efficient and most flexible virtual firewall. Even more impressive, the vSRX offers these characteristics while delivering the lowest server TCO in the industry, achieving 17Gbps of large packet performance with only two virtual CPUs. Check out this vSRX infographic to see industry comparisons, analyst data, and how the vSRX stacks up against the competition.
The vSRX, widely deployed by both our enterprise and service provider customers, comes with a full set of advanced, next-generation security services and routing capabilities. Using the Junos Space Virtual Director management application, you can quickly and efficiently provision and scale the vSRX to dynamically meet the demands of virtualized and cloud environments in a matter of minutes or even seconds—not hours or days, as in the past. With the vSRX and Virtual Director, you can quickly and painlessly scale up, out, or in, as needs change—which they inevitably will.
When deployed as the firewall VNF, the vSRX also serves a variety of NFV use cases, supporting Juniper’s Contrail solution as well as third-party SDN solutions. It also fully integrates with OpenStack.
As the pace of change in the data center accelerates, you need to effectively manage the risks associated with delivering a wide array of cloud services requiring both physical and virtualized high-density infrastructures. These mixed environments demand a new breed of virtual security solutions that can scale along with the virtual and cloud-based resources themselves and provide a robust defense against a variety of sophisticated threats—while achieving the desired level of performance, agility, flexibility and cost savings. Juniper’s vSRX delivers all of this, and much more.
Virtual Machines and Containers
Anytime we discuss virtualized products, one of the questions invariably asked is whether we considered a container-based approach instead of virtualization. The intent of this article is to explain the technological differences between these two technologies so that readers can choose the one that best fits their needs.
The major benefits Docker containers and virtual machines provide are:
- Virtual platforms – Allow us to build virtual platforms that are functionally equivalent to the real platforms. Example: vSRX is a virtual platform providing SRX functionality.
- Application portability – Allows applications developed on different platforms and for different operating systems to run on a single physical server. Now that cloud hosting has become a reality, these applications must run in environments very different from the ones they were developed and tested on, and it is not cost effective to maintain multiple platforms running different operating systems.
- Scale-out model – Allows multiple instances of an application to run, without modifying the application, while avoiding resource conflicts.
- Multi-tenancy – Multi-user systems can host multiple users, but they are not suitable for hosting multiple tenants, because most objects, including file systems, processes, and the network stack, are globally visible, raising privacy issues.
In any typical physical platform, the hardware (HW) is controlled by the operating system (OS). Applications are built to run on top of the operating system. Most operating systems provide libraries, which include commonly used functions and system calls to facilitate application development.
A virtual machine is a virtual HW platform. The hypervisor, or virtual machine monitor (VMM), also called the host OS, manages the physical HW and provides multiple virtual HW instances that can run different operating systems on a single physical server. It does this by emulating the HW, so the guest OS is unaware whether it is running on a physical or a virtual platform. Virtual HW that can run an unmodified guest operating system is called full virtualization. Because HW emulation is expensive, for efficiency the guest OS drivers can instead be modified to share the physical HW effectively; this form of virtualization is called para-virtualization. As you can see in the above figure, the majority of the code runs unmodified in an isolated virtual HW environment, with the help of HW assists provided by the physical HW.
A container-based system, in contrast, runs a single OS, and the OS provides isolation using namespaces. All applications run as processes within a container. This provides another level of security: processes running in one container cannot see or access other containers. To run applications built for other operating systems, a container needs a simulated environment with a set of libraries and resources such as file systems and networking support. Docker uses container technology and provides the environment needed to support this.
Virtual machines emulate the hardware and provide virtual platforms that can run unmodified software, including the kernel. In environments where the kernel is heavily modified to fit the needs of the applications, for efficiency or other reasons, additional work is required to run those applications in a container-based environment. Otherwise, container-based technology offers better resource utilization.
Virtual machines use hardware-provided isolation in most cases, while containers use namespaces to provide isolation.
While participating in the AWS Summit in San Francisco a few weeks back, I was amazed at the number of new AWS services available for developers to build their cloud applications. I also noticed a trend toward “serverless” computing discussed at many of the AWS sessions, including talks by AWS CTO Werner Vogels and AWS CEO Andrew Jassy.
Clearly, AWS anticipates widespread adoption of serverless computing and wants to shift the paradigm of how enterprises develop their applications. The onus is now on enterprises to make tradeoffs: either they avoid the increasing lock-in imposed by cloud platforms (since this new paradigm demands entrusting more responsibility to non-standard cloud platforms), or they embrace it as an accelerator for feature velocity, allowing them to strategically focus on their differentiating application code.
After the Summit, I reflected on the journey from physical to virtual to, now, serverless computing. The implications of this shift in cloud architectures are astounding.
What is Serverless Computing?
Virtualization dramatically improves economics and dynamism by bringing infrastructure-as-a-service (IaaS) to enterprises: cloud providers pool their infrastructure resources and offer compute, storage, and networking as a utility, bringing enterprise applications to the public cloud. Hyperscale cloud providers are innovating at light speed, moving from IaaS to platform-as-a-service (PaaS), offering platform-level middleware services that remove the complexity of managing and maintaining the underlying infrastructure.
The main goal of PaaS is to make application development more agile for enterprises by absorbing common middleware services, creating greater value and stickiness for their cloud platforms. Now they are taking yet another step by moving to serverless computing. This phase is designed to improve application development agility by separating the enterprise application developer from the underlying infrastructure while allowing cloud providers to create more stickiness for their cloud platforms.
The prime enabler of serverless computing is the function-as-a-service (FaaS) capability, which scales and securely executes code in run-time containers in response to real-time events, without the need to manage the underlying infrastructure. Amazon has AWS Lambda (introduced in 2014); Microsoft has Azure Functions (2016); Google has Google Cloud Functions (2016); and IBM has Bluemix OpenWhisk (2016). In the serverless computing paradigm, developers build their applications out of trigger functions that service events. This relieves enterprise application developers of undifferentiated and tedious infrastructure complexities, allowing them to focus on developing the strategic asset of differentiated application code. As developers focus on differentiated trigger functions, cloud providers take care of delivering just the right amount of compute, storage, networking, security, high availability, and auto scaling, along with maintenance such as software and security patches. AWS also provides the AWS Step Functions service, introduced in December 2016, to create complex pipelines of AWS Lambda functions for building complex applications. In essence, cloud providers want to hide the infrastructure from application developers and move them toward a more event-driven, serverless computing paradigm. Apart from the potentially disruptive technology paradigm shift, this model has positive economic implications for enterprises.
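To make the trigger-function model concrete, here is a minimal sketch of what such a function might look like. The handler signature and event shape below are generic assumptions in the style of AWS Lambda, not any provider's exact API.

```python
# A minimal sketch of an event-driven "trigger function". The handler name
# and the event dictionary's fields are illustrative assumptions; each
# provider defines its own signature and event format.

def handler(event, context=None):
    """Fired on demand for a single event; no server to provision or manage.

    The platform, not the developer, decides when and where this runs,
    and tears the execution environment down after the event is serviced.
    """
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

print(handler({"name": "serverless"}))  # {'statusCode': 200, 'body': 'Hello, serverless!'}
```

The developer ships only this function; capacity, scaling, patching, and high availability are the provider's problem.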
In this serverless paradigm, enterprises don’t have to deploy VMs/containers upfront to serve end-user events or requests. Trigger functions are fired up on demand to serve an event, then disappear once the event is serviced. Since AWS hosting costs are typically considered COGS for many SaaS applications, this model actually reduces costs, improving margins and/or driving down end-user expenses. For many cloud-born enterprises building cloud-native applications, this paradigm makes a lot of sense, especially since achieving application feature velocity to capture market share is paramount.
What are the Implications on Cloud Networking?
As this paradigm shift occurs at the higher layers, it is imperative to understand the implications for the underlying infrastructure layer. For cloud providers, revenue is generated at the higher layers; the infrastructure layer is merely a means to an end. However, these providers are well aware that the infrastructure and architectural choices they make are strategic differentiators for delivering the higher layers. This is evident from the high infrastructure spending (CAGR of 9.6%, according to Heavy Reading’s SDN and NFV Market Tracker, September 2016), as well as the secrecy in preserving details of their infrastructure architectures.
By hiding the infrastructure from the higher-layer applications, cloud providers are taking on the burden of building a dynamic infrastructure that can be driven in a software-defined manner. As they hide the security, scalability, and high availability of on-demand containers, and their connectivity across multiple high availability zones, the networking infrastructure interconnecting compute and storage nodes within and between data centers needs to build on what I call “application-driven cloud networking” architectures. In order to bring dynamism to the infrastructure, these architectures must support the following characteristics:
- Intent-based automation using programmable platforms to create an agile and responsive network layer that supports on-demand trigger functions that must be communicated inside and across data centers.
- Platforms that can feed real-time analytics to logically centralized SDN controllers, enabling them to make real-time decisions that meet the needs of the trigger functions.
- Application-driven routing technologies like segment routing that enable applications to request application-specific, SLA-constrained paths that are approved by centralized SDN controllers, regardless of how trigger function pipelines are built across data centers.
The bottom line is that leading cloud providers are strategically pursuing the move to serverless computing, creating a paradigm shift in how enterprises develop cloud-native applications. It also raises the question: can the networking layer deliver the needed capabilities fast enough to become strategic enough for cloud providers to differentiate themselves by supporting higher-layer innovations?
Come hear more about this topic at the Telecom Council on June 1st, 2017
SD-WAN is gaining incredible mindshare – as well it should.
- Enterprises can reduce their time to achieve business outcomes while lowering cost structures
- Service providers get a step up for their managed services portfolios, delivering service assurance with a comprehensive managed offering that adds secure SD-WAN, fully integrated with other services (routing, security, wireless LAN, WAN optimization, and more)
- Innovation in this technology domain is fast and furious; hear what industry specialists have to say
Want to learn more?
Come to a half day focused discussion on this topic with views from industry experts.
More discussions and perspectives can be found on Juniper’s SD-WAN page.
This is the second post on the topic of challenges associated with debugging an overlay network. The initial post can be found here.
To recap a bit from the previous post - Network Virtualization typically requires some form of tunnel encapsulation (e.g. VXLAN, MPLSoGRE etc.) to provide a virtual network abstraction. But this abstraction also makes troubleshooting a problematic experience in a virtual network.
In this post, we will look into what makes the troubleshooting a "problematic experience" in an overlay network and the proposed solution to make this a better experience.
Deterioration of application performance or a traffic black hole in an overlay network can occur due to:
- Problems in the virtual network, e.g., the local links where VMs/physical servers are connected, and/or
- The underlay network itself, which is used to transport the tunnels.
Debugging local links in the virtual network is easier. It is the underlay that makes end-to-end troubleshooting harder, mainly because it has to be debugged in the context of the overlay, i.e., in the context of tenants and their services. The existence of multiple ECMP paths between the ingress and egress tunnel endpoints further exacerbates troubleshooting.
Existing ping/traceroute mechanisms don't work in the virtualized environment. For example, ping may report that IP reachability between the ingress and egress tunnel endpoints is fine, but end-system (VM, physical server, etc.) connectivity for a tenant could still be broken. This is because ping only verifies basic connectivity between two endpoints in the underlay, NOT in the context of overlay segments. Hence, we need debugging tools that work in the overlay environment.
An important requirement for the fault detection (ping) and isolation (traceroute) mechanisms to work in virtualized networks is to make sure that OAM packets follow exactly the same path in the underlay as the data packets of the overlay segment being debugged. This is the only way to verify the correct operation of the data plane and to make sure that it is in sync with the control plane for a given overlay segment.
Juniper has co-authored a draft http://www.ietf.org/internet-drafts/draft-jain-nvo3-overlay-oam-01.txt that addresses the troubleshooting issues in the overlay network and provides generic mechanisms to debug overlay tunnels for the most common tunnel types used today in virtualized environments, e.g., VXLAN, NVGRE, MPLSoUDP, and MPLSoGRE. This draft has the basic framework in place for overlay OAM capabilities, and it will be further enhanced in future revisions to include tracing of multiple ECMP paths. That will also allow operators to proactively monitor all the available paths between the ingress and egress tunnel endpoints.
The basic idea behind the mechanisms described in the draft is very simple: to make sure that overlay OAM packets follow the exact same path as the data packets for the overlay segment, they must be encapsulated with the same tunnel headers as the data packets, so that any packet processing on transit nodes (e.g., load-balancing hash computation, tunnel-header-based policies) comes up with the same result. This enables transit nodes to forward the overlay OAM packets exactly as any other data packet for that overlay segment.
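The idea can be illustrated with a toy model of a transit node's ECMP decision. The hash function and header fields below are illustrative assumptions, not any router's actual algorithm: the point is only that identical outer headers always hash to the same path.

```python
import hashlib

def ecmp_path(outer_headers, num_paths):
    """Model of a transit node's load-balancing decision: hash the outer
    (tunnel) headers and pick one of the equal-cost paths. Real routers use
    vendor-specific hardware hashes; sha256 here is only for illustration."""
    key = "|".join(str(outer_headers[f]) for f in
                   ("src_ip", "dst_ip", "proto", "src_port", "dst_port"))
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

# Data packet for an overlay segment, as seen by the underlay after
# VXLAN encapsulation (UDP destination port 4789):
data_outer = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
              "proto": 17, "src_port": 49152, "dst_port": 4789}

# An OAM probe that copies the same outer headers hashes to the same path:
oam_probe = dict(data_outer)
assert ecmp_path(oam_probe, 8) == ecmp_path(data_outer, 8)

# A plain underlay ping (ICMP, no tunnel header) hashes independently and
# may land on a different one of the 8 equal-cost paths:
plain_ping = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
              "proto": 1, "src_port": 0, "dst_port": 0}
```

This is why a plain ping succeeding proves nothing about the path a given tenant's traffic actually takes.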
The draft also borrows concepts from the LSP Ping RFC (RFC 4379) and provides a TLV-based encoding format that allows OAM request/reply packets to carry additional information. Thus overlay ping procedures not only verify service connectivity in a tenant’s context, but can also validate an end system’s MAC and/or IP addresses at the egress tunnel endpoint. Similarly, overlay traceroute with multiple ECMP path discovery (a future revision) will allow transit nodes to add information such as their downstream nexthops, interfaces, MTUs, and timestamps to the reply message. The ability to collect such detailed information for a specific overlay segment makes overlay OAM a powerful tool for network administrators as they migrate their networks to SDN and network overlay technologies.
We don’t have a trophy cabinet here at Juniper Networks, but if we did it would be full. Recently, we collected a number of awards from the Channel Company for a variety of categories. All reflect the success we are having with our partner programs and networking technologies.
I’m proud of them all. A few really stand out and say something about our people, our partners and partner programs. Here they are.
We earned a 5-Star rating in CRN's 2016 Partner Program Guide. This is the definitive listing of technology vendors that service solution providers or provide products through the channel. The 5-Star Partner Program Guide rating recognizes an elite subset of companies that offer solution providers the best partnering elements in their channel programs.
Five of our channel pros earned the distinction of Channel Chiefs for 2016. This is a big deal and you can read about them here.
We also were named the easiest networking infrastructure provider to do business with in the CRN Annual Report Card (ARC). The ARC survey asks our partners to measure us in a number of areas and we’ve been doing very well year over year. I’m especially proud of this one, so, thanks to all of you who participated.
The CRN Security 100 recognizes “the coolest security vendors” in five categories. Juniper is included in the “Network Security” category. According to CRN, these 100 companies have demonstrated creativity and innovation in product development, as well as a strong commitment to delivering those offerings through a vibrant channel of solution providers. That’s us.
Fifth, but not least, is making the Virtualization Top 50 list. CRN points out that we’ve become a leader in SDN based on our Contrail platform and network function virtualization (NFV).
Five for five. I’ll take that any time.
noun: tier; plural noun: tiers
a row or level of a structure, typically one of a series of rows placed one above the other and successively receding or diminishing in size.
synonyms: row, line, layer, level, balcony
The terms “single-tier” and “two-tiered” fabric have been used to describe network fabric or cluster-based products. I’m doubtful whether a two-tiered fabric can truly function as a single-tier network, but certainly not all two-tiered networks are made equal.
What do we mean by those terminologies?
The most prevalent use of the word tier in networking is in describing network topologies in campus and data center environments. Most people understand that the phrases “two-tiered” and “three-tiered” refer to the physical topologies in the following figure.
The fundamental reason for constructing a two-tiered or three-tiered network is to achieve the required port fan-out with the desired over-subscription ratio of bandwidth. Adding or removing a tier in the network topology is a big deal, because it directly affects the cost of the network. If the technology allowed unlimited fan-out on a single box, a two-tiered topology would be a huge waste of ports, and a three-tiered topology would “double” the waste.
What is a “tier” in network?
IMHO, the most essential meaning of “number of tiers” is the number of hops most packets take across the physical network. In a typical two-tiered network, north-south traffic takes two hops and east-west traffic three; in a typical three-tiered network, north-south traffic takes three hops and east-west traffic five. In a two-tiered network, a network architect needs to design one layer of over-subscription (OS) ratio; in a three-tiered network, s/he has to design two OS ratios, one at the aggregation layer and another at the core layer.
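The OS ratio for one tier is simply the total edge-facing bandwidth divided by the total uplink bandwidth. A quick sketch, using hypothetical port counts for a leaf switch:

```python
def oversubscription(downlink_ports, downlink_gbps, uplink_ports, uplink_gbps):
    """Over-subscription ratio of one network tier: total edge-facing
    bandwidth divided by total uplink bandwidth toward the next tier."""
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

# Hypothetical leaf switch: 48 x 10GbE server-facing ports, 4 x 40GbE uplinks.
ratio = oversubscription(48, 10, 4, 40)
print(f"{ratio}:1")  # 3.0:1 — 480 Gbps of edge bandwidth over 160 Gbps uplink
```

In a three-tiered design, this calculation has to be repeated at the aggregation layer, and the effective end-to-end over-subscription compounds across both tiers.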
Tiers of a fabric
With the advent of multi-path technology, the inter-connectivity among network nodes has changed to exploit the benefits afforded by multi-pathing. The word “fabric” is often used to describe a cluster of nodes that are somewhat more “closely” interconnected.
In a QFabric, the QF-Interconnects are chassis specifically designed for interconnecting QF-Nodes; network ports are not allowed on Interconnects. In a Virtual Chassis Fabric (VCF), network ports are allowed on every node. In QFabric, QF-Nodes must be connected to QF-Interconnects in a star topology. In VCF, every node can be both an “interconnect” and an “edge” node, allowing “arbitrary” topologies. Such flexibility lets customers tailor their network topology design to meet their specific traffic needs. If we compare QFabric (in a star topology) and a VCF (in a spine-leaf topology), we have the following table.
(Table: QFabric vs. VCF comparison — e.g., multipath is N-way, where N is the number of spines.)
From the table above, it is clear that, from a physical topology perspective, both QFabric and VCF in Spine/Leaf topology resemble a two-tiered network. From a logical perspective, both QFabric and VCF appear as a single managed device, hence there may be a tendency to call both QFabric and VCF (in spine-leaf) a “single-tier” fabric. However, the “single-tier” terminology conflicts with its two-tiered nature in physical topology. Hence, the “single tier fabric” terminology may be confusing to some people.
Virtual Chassis Fabric Topologies and Tiers
As mentioned above, VCF can support “arbitrary” topologies. However, only a few typical physical topologies are used by network designers to achieve performance requirements within the cost limit. The figure below shows a few popular topologies.
Since VCF performs shortest-path forwarding, for all of the above physical topologies VCF will optimize for distance/latency. For a fully meshed VCF, unicast traffic always travels the directly connected links, and all traffic traverses the fabric in 2 hops; hence it functions like a 1½-tier network. In a spine-leaf VCF, all shortest paths for east-west traffic consist of 3 hops, while all north-south paths consist of 2 hops. Hence a spine-leaf VCF functions like a two-tiered network, although a VCF will allow maximum bisectional traffic without blocking any links. A spine-leaf VCF also allows the spine switches to perform multicast replication, so that receivers on the leaf switches observe the same multicast latency.

Due to its topology flexibility, a VCF can also be constructed as mesh-connected spine-leaf sub-topologies, as shown above. Such a VCF allows two small pods to be interconnected and managed as one device. It not only allows the servers and storage attached to the same pod to enjoy the same low latency as a spine-leaf VCF, but also provides inter-pod resiliency. As a matter of fact, VCF technology will discover and exploit the best attributes of the underlying physical topology, and will strive to achieve optimal performance regardless of whether it is a one-tiered or two-tiered network.
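The hop counts above can be checked on toy topologies with a simple shortest-path search. The node names and adjacency below are illustrative, counting hops as the number of switches a packet traverses:

```python
from collections import deque

def hops(adjacency, src, dst):
    """Number of switches on the shortest path, counting both endpoints
    (BFS over an undirected adjacency map)."""
    seen, queue = {src}, deque([(src, 1)])
    while queue:
        node, count = queue.popleft()
        if node == dst:
            return count
        for nbr in adjacency[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, count + 1))
    return None

# Toy spine-leaf VCF: two spines (s1, s2), three leaves (l1..l3).
spine_leaf = {
    "s1": ["l1", "l2", "l3"], "s2": ["l1", "l2", "l3"],
    "l1": ["s1", "s2"], "l2": ["s1", "s2"], "l3": ["s1", "s2"],
}
print(hops(spine_leaf, "l1", "l3"))  # east-west: 3 hops (leaf-spine-leaf)
print(hops(spine_leaf, "s1", "l2"))  # north-south: 2 hops

# Fully meshed leaves: every pair directly connected, so always 2 hops.
full_mesh = {"l1": ["l2", "l3"], "l2": ["l1", "l3"], "l3": ["l1", "l2"]}
print(hops(full_mesh, "l1", "l3"))  # 2 hops
```

Shortest-path forwarding means the fabric always uses these minimal hop counts, whatever the underlying topology.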
BTW, VCF is supported on QFX5100, QFX3500/3600, and EX4300. Customers who have previously purchased QFX3500/3600 will be able to use them as part of a VCF with a software upgrade.
As communication service providers look to cut costs and gain competitive advantage by offering a more fine-grained user experience, they are turning to consolidated, virtualized solutions that not only reduce physical footprints but grow with business needs. Combining consolidation and virtualization creates a powerful technical challenge: service providers must deploy a virtualized solution at massive scale.
One such scenario is a Gi-LAN deployment: an MX edge router acting as a Service Control Gateway (SCG), or TDF gateway as defined in the 3GPP specification, connects to service chains. Each service chain represents a specific service or service combination deployed by the service provider. The mapping of user traffic to a service chain is done by the SCG based on the 5-tuple and the policy defined for that user by the PCRF, the 3GPP policy function. These service chains can be created either with purpose-built service appliances or on the ETSI NFV architecture, which uses COTS machines and virtual network functions (VNFs). This discussion covers the latter choice.
As shown in Figure 1 above, the MX SCG gateway sends three different traffic classes to three different service chains, represented by red, green, and blue, based on PCRF policy. It looks very straightforward: each service chain is composed of two VNFs running on virtual machines. But this can quickly become a problem if each service chain needs to support large volumes of traffic while also meeting SLA requirements such as downtime. One VM usually cannot scale to handle large amounts of traffic, and it also becomes a single point of failure. One of the principles of virtualization is horizontal scaling to address both scale and resilience requirements, instead of relying on one monolithic, purpose-built physical appliance.
We are going to modify the solution above to handle scale and resilience. This will change our above illustration to Figure 2 below.
As shown in the Figure 2 above, each service chain is built with multiple instances of the same virtual network functions (VNF), which would allow solutions to scale. In order to build such a solution in the forwarding plane, the following solution components must be available.
- The edge router must be able to combine PCRF policy with user traffic identification to select a configured service set. The MX edge router supports SCG gateway functionality to accomplish this at high scale.
- The edge router must also be able to map a service set to a service chain dynamically orchestrated through an SDN controller. The MX SCG gateway and Juniper’s Contrail platform can accomplish this.
- The last piece is to make sure that service chains are built to scale with multiple instances, and that these instances are visible to the edge router so that it can load-balance the traffic streams in a given class across multiple copies of a service chain. The MX’s TLB feature can be used to accomplish this.
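The load-balancing step can be sketched as follows. This is a generic flow-hashing model, not Juniper's TLB implementation; the instance names, flow tuple, and hash choice are all illustrative assumptions. The key property is that every packet of one flow maps to the same VNF instance, so stateful services keep working.

```python
import hashlib

def pick_instance(flow, instances):
    """Flow-aware load balancing across identical VNF instances: hash the
    flow's 5-tuple so all packets of one flow stick to one instance.
    (Illustrative only; real gateways use hardware hashing and health-aware
    instance lists.)"""
    key = "|".join(str(v) for v in flow).encode()
    idx = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(instances)
    return instances[idx]

# Three scaled-out instances of the same VNF in the "red" service chain:
chain_red = ["vnf-a1", "vnf-a2", "vnf-a3"]

flow = ("198.51.100.7", "203.0.113.9", 6, 55000, 443)  # src, dst, proto, ports

# Every packet of this flow maps to the same instance:
assert pick_instance(flow, chain_red) == pick_instance(flow, chain_red)
print(pick_instance(flow, chain_red))
```

Different flows spread across the instances, which is what lets the chain scale horizontally while any one instance failing affects only the flows hashed to it.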
Together, Juniper’s MX SCG gateway and OpenContrail form a powerful combination for building a scalable, virtualized Gi-LAN service complex.
If you have attended a VMworld conference, you are most likely not only a believer in the value of virtualization, but have also seen first-hand the benefits it can deliver. This year’s show at the Moscone Center in San Francisco is no exception.
Virtualization has revolutionized the data center, and it is poised to do the same for the network. At Juniper, we are co-creating networks with our customers that enhance the benefits of virtualization. We continue to work closely with VMware to stretch across multiple areas of collaboration and bring the best solutions to our customers.
To better understand the impact virtualization has had on individuals, we sent video reporter David Spark to the show floor at VMworld 2014 to ask attendees, “How has virtualization changed your life?”
Many respondents spoke of the ability to simplify and consolidate the data center, and a few went so far as to comment on the future — where virtualization will play an ever increasing role in our world.
Watch the video and ask yourself, “How has virtualization changed your life up to now, and how will it change your life in the future?” Share your experience with us below.
Unikernel for Virtual Network Functions (VNF)
Do Unikernels have a place in VNF?
Unikernels satisfy the key characteristics of a VNF, namely:
- Small in size, allowing thousands of tenants on a commodity server
- Can be launched and destroyed in milliseconds, improving availability
- Unikernels are mini-VMs, allowing integration with existing cloud orchestration mechanisms; they can be moved to different servers and make use of hardware-provided isolation
- Secure, as the attack surface is small
Today a VNF is a VM, and there is a lot of effort to move toward containers, as containers are less resource-intensive and boot faster. It is anybody’s guess at this point whether unikernels will have a place in the VNF space. Certainly they have benefits, and there are a number of ongoing efforts in this area.
What are Unikernels?
A traditional VM includes a kernel and applications. Applications run in user space on top of the kernel in the VM. The kernel, with its packages, allows running multiple applications using kernel services. Imagine a scenario where you want to use a VM as a web server and there is no need to run any other applications. Is there a need to incur the overhead of a standard kernel, which comes with its own baggage of packages and vulnerabilities?
A unikernel is a way to build an application VM (AVM) that runs only one application. The kernel services are built as a library, called a libOS, and the application is linked with it to become the AVM. This keeps the AVM small: it includes only the needed kernel services, avoids the unwanted overhead associated with standard kernels, and provides more deterministic behavior.
MirageOS uses the OCaml language, with libraries that provide networking, storage and concurrency support that work under Unix during development, but become operating system drivers when being compiled for production deployment. The framework is fully event-driven, with no support for preemptive threading.
ClickOS provides tiny, agile virtual machines for network processing. These virtual machines are small (6 MB), boot quickly (in about 30 milliseconds), and add little delay (45 microseconds) on commodity hardware.
An interesting article, “7 Unikernel Projects to Take On Docker in 2015,” provides more information, and you can find a lot more on the Internet. All these AVMs are domU VMs on the Xen hypervisor.
A number of such efforts have started recently to address VNF requirements in the NFV context, including Clear Containers, an effort by Intel’s open source group that uses the hardware virtualization offered by x86 along with a tiny Linux that can boot faster.
Use cases for Unikernel
The characteristics of the AVM make it suitable for latency-sensitive virtual network devices such as load balancers and middle-box applications. The small footprint makes it attractive for massive multi-tenant scale with hardware-supported isolation, as well as for a massive scale-out model to meet performance demands.
The ClickOS paper claims the authors built a 5 MB AVM using Click elements to implement a firewall, CG-NAT, load balancer, and virtual switch; it boots in 30 milliseconds, adds 45 microseconds of latency, and achieves 10 Gbps throughput on a commodity PC.
The MirageOS group has started an effort called Jitsu (Just-In-Time Summoning of Unikernels), also called “dust cloud,” which is a toolkit to start unikernel-based AVMs on demand. More details on Jitsu are available in the Usenix paper.
AVMs fit well with the Xen model, where the hypervisor and the privileged domain Dom0 provide the environment to run the AVM in the unprivileged DomU.