Containers and Kubernetes vs VMs vs Config Management

In today’s DevOps-centered world, it’s often easy to be taken in by the infrastructure solution of the hour—VMs for everyone! Or wait, now containers are where it’s at! Who needs config management anyway?

Those of us who are primarily developers probably haven’t had to think about infrastructure decisions as much as our friends on the operations side. Since we haven’t had to make those decisions in the past, it can be hard to figure out what’s out there and why you’d want to use things like VMs or containers. We need to consider what actual problems these solutions are trying to solve. So let’s start simply.

Config Management, VMs, and Containers: What Are They?

Let’s take a high-level look at what they are and how they differ from one another.

Config Management

Configuration management is the centralized management of your decentralized systems and servers. Many consider configuration management to be little more than package management: installing, upgrading, patching, and removing packages on an operating system. But modern config management tools like Puppet and Ansible do a lot more for you. They turn your infrastructure setup and decisions into code, allowing you to version and automate your infrastructure builds.
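To make that concrete, here’s a minimal sketch of infrastructure-as-code in the form of an Ansible playbook. The webservers host group and the nginx example are hypothetical placeholders, not taken from any particular setup:

```yaml
# A small Ansible playbook: a version-controlled description of server state.
- name: Configure web servers
  hosts: webservers        # hypothetical inventory group
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present
    - name: Ensure nginx is running and starts on boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Checked into version control, a file like this becomes a reviewable, repeatable record of how a server should be configured.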

Virtual Machines

A VM takes an existing physical machine and runs additional, compartmentalized machines on top of it. The initial machine is the host machine; all VMs on the host are called guests. The host manages its guests using a virtual machine monitor (or hypervisor), which keeps the guest VMs isolated from each other. Each guest VM also requires its own operating system. So the more VMs you have on a particular machine, the more operating systems are running.

Containers/Kubernetes

Containers, often orchestrated by a tool like Kubernetes, are a type of virtualization that lets you run an application and its dependencies in an isolated process. A container doesn’t have its own OS. It shares the kernel with not only the host but also the neighboring containers.

What Problem Do They Solve?

So there must be a reason why we have these different solutions. Let’s consider what they’re trying to fix.

Problem Solved By Config Management

Without config management, it would be difficult to track system-level changes across your infrastructure. This problem grows with the number of servers you have. Modern configuration management is designed to deploy, configure, and manage servers in a repeatable and automated fashion.

At the container level, there’s also the application’s configuration to manage. Where do we store it? In a container solution, Kubernetes handles this with ConfigMaps and Secrets.
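As a rough sketch, here’s what a minimal ConfigMap might look like. The name and values are purely illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # hypothetical name
data:
  DATABASE_HOST: db.internal  # illustrative values, not real endpoints
  LOG_LEVEL: info
```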

Problem Solved By Virtual Machines

VMs allow you to run different applications on the same hardware but in an isolated fashion. As a developer, you don’t care what else is running on other VMs. You don’t wonder about what’s already using your ports. And you’re not worried about what particular OS versions or patches the other applications require. So if you have different applications with different operating system needs, VMs can handle that.

I’ve come across several legacy systems in the past that could benefit from VMs. They were inexplicably tied not just to a particular OS, but to one particular version of that OS. They just would not run elsewhere. The secrets of why that was the case were often buried and long forgotten.

Problem Solved By Containers/Kubernetes

Containers let us run a single application in a lightweight fashion and still stay separate from other containers and applications on that server. This removes the requirement of having a separate VM for each application. All the containers use the same kernel as the host. Containers are also simpler for developers like us to spin up, as someone else already made the OS decisions. There’s less for us to have to manage or even know about.

What Should You Watch Out For?

Consider the following factors when looking at config management, containers, and VMs.

Configurability

One of the benefits of config management is being able to patch and modify your infrastructure. You might think that’s a hard requirement, since many companies still stand up their infrastructure by hand. However, your infrastructure is code. It should be versioned and automated through a CI/CD pipeline. If you need to make a change at the system level, you can update your infrastructure scripts and deploy a new image. You want your application config to be immutable, and there’s nothing keeping us from making the infrastructure immutable as well.
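What that pipeline looks like varies by shop, but here’s one sketch: a GitHub Actions workflow that rebuilds a machine image whenever the versioned infrastructure code changes. The repository layout, workflow name, and Packer template path are all assumptions for illustration:

```yaml
name: rebuild-base-image
on:
  push:
    paths:
      - "infrastructure/**"    # hypothetical repo layout
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Rebuild the image from versioned config rather than patching live servers.
      - name: Build machine image
        run: packer build infrastructure/base-image.pkr.hcl
```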

Kubernetes relies on ConfigMaps and Secrets to ensure that containers have the correct configuration. That’s where you set up your container so it knows what environment it’s in and how it should run. It also makes rotating secrets like database IDs and passwords easier; that can all be scripted and scheduled.
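A sketch of both pieces together might look like the following. The names, image, and credential values are hypothetical placeholders; real values would come from your rotation process, not a file in Git:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials       # hypothetical name
type: Opaque
stringData:
  DB_USER: app
  DB_PASSWORD: change-me     # placeholder only
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example/app:1.0        # hypothetical image
      envFrom:
        # Expose the ConfigMap and Secret entries as environment variables.
        - configMapRef:
            name: app-config        # the ConfigMap sketched earlier
        - secretRef:
            name: db-credentials
```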

Another option, for both VMs and containers, is to keep configuration entirely in another system. The container or VM only knows where to fetch its configuration from; the configuration itself lives in a vault or key store.
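One way to sketch that pattern in Kubernetes is an init container that pulls configuration down before the app starts. The config URL and application image here are hypothetical, and the same idea applies to a VM fetching its config at boot:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-external-config
spec:
  initContainers:
    # Fetch configuration from an external store before the app container runs.
    - name: fetch-config
      image: curlimages/curl:8.5.0
      command: ["sh", "-c",
                "curl -sf https://config.internal/app/settings.json -o /config/settings.json"]
      volumeMounts:
        - name: config
          mountPath: /config
  containers:
    - name: app
      image: example/app:1.0        # hypothetical image
      volumeMounts:
        - name: config
          mountPath: /etc/app
          readOnly: true
  volumes:
    - name: config
      emptyDir: {}
```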

This has the advantage of letting you update the configuration for the whole fleet at once, but it assumes that changing configuration won’t introduce a bug. As much as we’d like to believe that, most of us have seen a configuration change cause problems. Perhaps configuration is better off being immutable.

Security

VMs don’t have access to the host OS, as each has its own OS to work with. Sneaking through the hypervisor to reach the host OS is difficult. If Bob’s Burger Barn’s VM is compromised, it’s less likely to affect its neighbor over at Harriet’s House of Hummus.

However, VMs are heavier solutions, each carrying its own OS and required packages. That means more places where you need to worry about security vulnerabilities. Containers, by contrast, are considered to have a smaller attack surface, as they include only the packages required to run the application.

The lifecycle of a container is also typically shorter. So if someone does break in, they can be shut out the next time you deploy the container.

On the other hand, containers share an OS and kernel. If someone gains kernel access in one container, they could have access to other containers as well. If a neighbor in your cluster is compromised, that could affect your application.

Additionally, security solutions are still evolving for containers. That’s why you want to deploy frequently with the latest versions of tools and infrastructure using your config management. Even though containers have that smaller attack surface I mentioned, there aren’t as many tools available yet for monitoring security risks and intrusions. If a container is compromised, it may be deleted and replaced before we even discover that anything occurred.

Performance/Resources

Containers can lag behind VMs in performance. I once worked on a project moving from VMs to containers. Some of the components/applications that ran without issue on VMs, where the app was allocated 2–4 GB of RAM, later required 6–8 GB of RAM in a container. Why is that?

For one, hypervisors boost VM performance so that it’s closer to the performance of running on bare metal. Additionally, containers all run on the same kernel, similar to Linux processes. And if your application uses Java and runs on a JVM, there’s another gotcha. Although Java 9 and 10 brought some improvements, we were using Java 8. In that version, the JVM looked at the host OS to determine what processors were available, how many threads to spawn, and how much memory and CPU it could use. That doesn’t work when all the containers on a host try to use more than the host has available.
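One common mitigation, sketched below with hypothetical names and numbers, is to stop the JVM from guessing: set explicit container resource limits and hand the JVM its budget directly. (Newer JVMs can derive this from the container’s cgroup limits on their own.)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: java-app
spec:
  containers:
    - name: app
      image: example/java-app:1.0   # hypothetical image
      env:
        - name: JAVA_TOOL_OPTIONS
          # Cap the heap explicitly; a Java 8 JVM otherwise sizes it from
          # the host's memory rather than the container's limit.
          value: "-Xmx512m"
      resources:
        requests:
          memory: "768Mi"
          cpu: "1"
        limits:
          memory: "768Mi"
          cpu: "2"
```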

Not all is perfect on the VM side, however. The overall efficiency of the host is worse if you’re running multiple instances of an app on separate VMs than if you were using containers. As mentioned earlier, this is due to the overhead required for each VM and its own OS. That might not seem too bad, especially when you recall that VMs share the host’s hardware. However, each VM also includes the “virtual” hardware, mapped to the host’s physical hardware, that its guest OS needs to run. And it comes with all the packages the OS needs to function fully.

In summary, if you want multiple instances of an application running, containers make that easy and less resource-heavy. However, if you only run one instance of each application but have many applications overall, VMs are a good solution.

Harmony

You may be thinking that none of these address all your problems or concerns. Or maybe one option would be attractive if it weren’t for that single flaw. Well, you’re in luck! You can use all three together to get all the benefits you’re looking for.

For example, let’s say that container security is a concern, but you still want to get your stuff easily deployed and configured. You can use VMs that hold containers to mitigate the risk of less secure neighbors while still having the flexibility and portability of containers. And since those VMs are all managed through config management, setting this up is repeatable and configured correctly.

Config management isn’t dead. It just needs to be adjusted for the current infrastructure. You’ll always need to manage configuration. Even serverless applications, though they may not have dedicated servers, still live and run on a server. And someone has to manage those servers.

Overall, when combining these elements, you’ll still have to monitor the system to make sure you’re getting the most out of your infrastructure. Ensure it’s working as expected with the help of proper monitoring and logging tools.

Conclusion

VMs, containers, and configuration management solve specific problems. As developers, we can’t make infrastructure decisions based on what’s trendy. And we don’t want to only have one tool in our tool belt.

Take all three of these solutions and see how they can address your problems without compromise.

This post was written by Sylvia Fronczak. Sylvia is a software developer who has worked in various industries with various software methodologies. She’s currently focused on design practices that the whole team can own, understand, and evolve over time.