Enterprise Cloud and Transformation
Trevor_Pott
Containers and the Future - Part 1
Jul 22, 2019

Containers are a resource-efficient way to isolate workloads from one another while still providing them with the execution environment they require to operate. Unfortunately, containers are designed for non-persistent workloads and, as a result, often lack the bells and whistles that come with virtual machines.

 

In order to better understand the challenge behind managing containers, it is worth reviewing the difficulties in implementing one of the most celebrated virtual machine features in a containerized world: live workload migration. Live workload migration is the ability to move a workload from one physical host to another without interrupting the operation of the workload.

 

With a virtual machine, this is reasonably straightforward. Containers are more difficult to work with as they don't contain a complete operating system. A container is basically a means to lie to an application and tell it that it is the only application installed on a given operating system. This allows the application in the container to behave as it wishes, installing files wherever it pleases. However, the container software carefully isolates that application from all other applications, preventing it from interfering with its neighbors.
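To make the idea of "lying to an application" a little more concrete, here is a minimal sketch of the underlying Linux mechanism: starting a process in its own namespaces so that it sees a private process tree and hostname. This is a hypothetical Go illustration of the general technique, not the implementation of any particular container runtime; a real runtime would also set up a private root filesystem, cgroups and networking, and this sketch assumes a Linux host, a /bin/sh binary and root privileges.

// namespaces.go - illustrative sketch only: launch a shell in new
// PID, mount and UTS namespaces so it appears to be alone on the system.
package main

import (
    "os"
    "os/exec"
    "syscall"
)

func main() {
    cmd := exec.Command("/bin/sh")
    cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
    // New PID, mount and UTS namespaces: inside them, the shell is PID 1
    // in its own process tree, with a private view of mounts and hostname.
    cmd.SysProcAttr = &syscall.SysProcAttr{
        Cloneflags: syscall.CLONE_NEWPID | syscall.CLONE_NEWNS | syscall.CLONE_NEWUTS,
    }
    if err := cmd.Run(); err != nil {
        panic(err)
    }
}

Inside that shell, echo $$ reports PID 1 and hostname changes stay private to the namespace, which is exactly the illusion described above; a full runtime would also remount /proc and pivot to a private root filesystem.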

 

A container can contain much of the execution environment required for an application. Application frameworks and libraries are commonly included alongside an application within a container. It is important to note, though, that this isn't required; applications inside containers absolutely can rely on resources provided by the host operating system, and this is part of what makes live migrating containers so difficult.

 

Sharing

An application installed inside a container uses the kernel – and possibly other resources – of the host operating system, just as any application would if it were installed on that operating system without a container. A single operating system can run hundreds, or even thousands, of containers, but all of them will share the same basic execution environment [1].
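A quick way to see this sharing in practice: a program that asks the kernel for its release string will report the host's kernel whether it runs directly on the host or inside a container on that host, because the container brings no kernel of its own. The Go sketch below is a hypothetical, Linux-only illustration (unrelated to any product mentioned here) that simply prints that release string.

// kernelrelease.go - illustrative sketch only: print the release string
// of whatever kernel this process is actually running on.
package main

import (
    "fmt"
    "syscall"
)

func main() {
    var uts syscall.Utsname
    if err := syscall.Uname(&uts); err != nil {
        panic(err)
    }
    // Utsname fields are fixed-size, NUL-terminated C arrays; convert to a Go string.
    release := make([]byte, 0, len(uts.Release))
    for _, c := range uts.Release {
        if c == 0 {
            break
        }
        release = append(release, byte(c))
    }
    fmt.Println("kernel release:", string(release))
}

Run it on a host and then inside a container on that same host and the two outputs match; run it inside a virtual machine and it reports the guest's kernel instead.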

 

This shared execution environment is key. In order for an application to be live migrated to another host, an identical execution environment would have to exist on the destination host. The destination host would need not only an identical kernel but also the same application frameworks, shared libraries and so forth that the application inside the container to be live migrated was relying on.

 

This is difficult. Kernels differ from system to system, not only because of the version of the kernel installed but also because of the kernel modules that are loaded. Many hardware drivers are kernel modules, and some applications – such as firewalls – can load kernel modules as well.

 

If the environment is identical on both sides, live migration of a containerized workload is possible. Controlling the environment this precisely is typically accomplished by running the host environment inside a virtual machine. Running multiple containers in a single virtual machine is more efficient than running one virtual machine per container, so creating virtual container hosts out of virtual machines is not an unreasonable approach, especially if it allows for advanced functionality.

 

In the real world, however, live migration of container workloads is more of a gimmick than a frequently requested feature. Containers are most often used for modern applications that are designed to be stateless. Moving a container from host to host should be done by tearing down the container on the source host and instantiating it afresh on the destination host. This allows the application to be instantiated with the latest code, patches and configurations. The application itself isn't the important part: the data it operates on is.

 

A multicloud container platform

This is where Juke comes in. Juke is a multicloud container platform that combines a container management system with a distributed object storage system. The storage component makes use of the storage assigned to each of your container hosts – whether physical or virtual – to create a single, shared storage pool.

 

Juke allows administrators to tear down a container on one host and reinstantiate it on another without having to worry about migrating storage. There is no need to fret about ensuring that the environment on both sides of a migration is identical. Similarly, there is no need to worry about the details of the execution environment for your application. What matters is the data.

 

Juke provides a performant, resilient, reliable distributed storage solution for containerized data that spans all container hosts. Whether operating on a single cluster in a single data center or on dozens of clusters sprinkled all around the world, Juke solves container storage problems so that developers can spend more time on applications and less time worrying about the hows and whys of where those applications run.

 

Watch our latest demo, available on-demand, and see how easy container storage can be.

 

 

[1] Technically, it is possible to run a lightweight kernel inside a container. These should be viewed as a separate category of workload isolation, in part because they rely on hardware-assisted virtualization technologies to operate. Not quite containers and not quite virtual machines, they are a new breed of technology that allows multiple kernels to run simultaneously on a single host without emulation. Nano-VMs with stripped-down kernels and minimal execution environments running on top of a hypervisor are also often incorrectly referred to as containers.

 
