How Docker and Kubernetes make infrastructure almost instant

When we started experimenting with Docker and Kubernetes, we were just looking for a more efficient way to stand up our applications. Traditionally, it’s an infrastructure or operations team’s responsibility to make sure every dependency an application needs to run is set up on the server, that none of them are in conflict, and to keep that true throughout the life of the system. It’s also their responsibility to figure out which server the application needs to go on in the first place.

This was just how infrastructure worked. It was time-consuming, a little bit tedious and prone to error. Every time you moved your environment, you had to build it fresh.

Everyone wanted this part of the job to basically go away so we could focus on doing the parts of our work that are challenging, interesting, and let us use our whole brains. A few of us at Table XI started playing around with containerization by ourselves, using it inside projects to see if we could save ourselves some time.

As we tested out options, two tools seemed the most promising: Docker as an image/container manipulation tool, and Kubernetes as an orchestration platform for running those containers efficiently across infrastructure. Together, the two proved so powerful that we started rolling them out to clients.

Now, our goal is to use the Kubernetes/Docker combination on all of our projects within the year. We’ll explain what Docker and Kubernetes are, why we like the two together so much, and how we plan to adopt them everywhere.

How Docker and Kubernetes work together

To understand the difference between Docker and Kubernetes and how the two work together, it helps to first understand how things work now. Typically, when we set an application up to run on a server, any external dependencies that software has must be loaded onto the same server individually: system libraries, utilities, really anything the application needs to reference to do its job. This isn’t so bad in itself, since whoever’s running the server can just add the dependencies. But it gets tricky quickly when multiple applications live on the same server. One may require a different, conflicting version of a dependency another needs, and you end up with a soup of dependencies floating around the server, any one of which could have an unpatched security bug.

It’s just … messy. Docker lets us take all that mess and pack it up into a single container. All a developer needs to do is run a single command to build an “image” from a “Dockerfile,” and all of the necessary dependencies will be ready and available. What works on your machine will also work in production. The container creates a boundary, blocking out a lot of the reasons why things break down. You can move containers around, update them, swap them out; once an application is in a container, you can handle it however you want without worrying about adverse interactions.
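To make that concrete, here is a minimal sketch of what a Dockerfile might look like. The app is a hypothetical Rails service we’ll call sample-app; the base image, packages, and port are illustrative, not taken from a real project.

```dockerfile
# Illustrative Dockerfile for a hypothetical Rails app ("sample-app").
# Start from an official base image that already includes the Ruby runtime.
FROM ruby:3.2-slim

# Install system-level dependencies inside the image, not on the host server.
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential libpq-dev \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Install application dependencies first so Docker can cache this layer.
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Copy in the application code and declare how to run it.
COPY . .
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
```

Building and running it is one command each: `docker build -t sample-app .` produces the image, and `docker run -p 3000:3000 sample-app` starts a container from it. Every dependency the app needs travels with the image.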

Then there’s getting it onto a server. In the past, we had to treat our servers like pets, not cattle. Each machine was a special snowflake that we’d have to tend to meticulously. Instead of making them work for us, we’d often end up working for them. This is where Kubernetes comes in. Kubernetes is an open source container orchestration platform. We write configuration that tells Kubernetes what kind of environments we want the applications to live in, and it automatically arranges the necessary containers across our servers to make the most efficient use of space and compute power. Basically, Docker is the brick maker and Kubernetes is the mason, elegantly lining the bricks up.
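As a rough sketch of what that configuration looks like, here is a minimal Kubernetes Deployment for the same hypothetical sample-app image. The replica count, registry address, port, and resource requests are all illustrative; they just show the kind of thing we spell out and Kubernetes carries out.

```yaml
# Illustrative Deployment: run three copies of the sample-app container,
# and let Kubernetes decide which servers they land on.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 3                     # how many copies we want running
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: sample-app
          image: registry.example.com/sample-app:1.0   # hypothetical image location
          ports:
            - containerPort: 3000
          resources:
            requests:             # what the scheduler uses to place containers efficiently
              cpu: 250m
              memory: 256Mi
```

Applying the file with `kubectl apply -f deployment.yaml` is all it takes; Kubernetes keeps three copies running and reschedules them if a server goes away.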

The benefits of containerization, and why we picked Docker and Kubernetes

Immediately, Docker and Kubernetes orchestration saves us server space, because we’re using our infrastructure efficiently, and it saves us time, because we don’t have to make decisions about where everything should go.

Then there are the bigger savings. Scaling with Docker and Kubernetes is substantially easier. You can set up a rule in Kubernetes that if CPU usage goes above 50 percent, it needs to spin up another copy of the application on another server. All Kubernetes has to do is grab the Docker image and start another copy. No one needs to be watching to make sure this happens, so there’s far less room for human error (and no cost for human labor). It’s a nearly instant change that keeps your application stable. The same goes for backup capacity: we don’t need to keep it running all the time, because we can automate it to spring into action as needed. We can use Kubernetes and Docker across AWS, Google Cloud, Azure, pretty much any infrastructure provider a client might want. And we can do it exceptionally easily, because all we have to do is hand off industry-standard Docker images and Kubernetes config files.
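That 50 percent rule maps fairly directly onto a Kubernetes HorizontalPodAutoscaler. Here is a minimal sketch targeting the hypothetical Deployment above, using the autoscaling/v2 API available in recent Kubernetes versions; the replica bounds and threshold are illustrative.

```yaml
# Illustrative autoscaling rule: add or remove copies of sample-app to keep
# average CPU utilization around 50 percent, within the given bounds.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sample-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```

Kubernetes watches the metric and adjusts the replica count on its own; no one has to be paged to add a server.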

There are a lot of other options for containerization and orchestration out there, and more popping up all the time. But we consider the war won. Others can keep chasing the new fad if they like; we’re sticking with these two well-tested, well-accepted solutions. We will always keep an eye on this space for new players, but we believe radical ideas will need to shake things up before we actively pursue anything else at this tier. Another point in favor: Kubernetes has uncommonly good diversity among its early adopters and leaders. It’s a nice thing to see, and in our experience, greater diversity usually translates to greater success for the product.

Getting a Docker and Kubernetes introduction with small projects

The first partner we introduced to Docker and Kubernetes was a client of ours called Participate. We were working with the education startup to adopt an Agile workflow, but ran into a wall when it came to the Quality Assurance process. The development team was used to pushing through several features at once, and the whole bundle would go to the QA team for testing. If there was a problem with feature B, features A and C would have to wait, because there was no way to test features in isolation.


Docker was quick and easy to experiment with — each feature would live in its own container, so as soon as one was ready to go, it could be pushed to production without affecting the others. We explained the benefits to Participate, then worked alongside its team to build out the entire QA system in Docker and Kubernetes.

With another client, BenchPrep, we had to learn together. Their infrastructure was built on IBM’s cloud platform, and they were pivoting from OpenStack to Kubernetes and the containerized space. When BenchPrep reached out to us, we said honestly that we weren’t experts yet, but that we’d be happy to bring the expertise we did have, along with our growing understanding of Kubernetes and Docker. We helped BenchPrep adapt to a new process and workflow, building up their DevOps chops in general, while also working closely with IBM to learn how it’s approaching the containerized space differently. That has included getting early access to new tools for Kubernetes and having conversations with some of the leaders within IBM, all of which has been extremely helpful.

How we plan to expand our Docker and Kubernetes architecture

Now, we need to take what we’ve learned so far and run with it. We’ve hired Amanda Snyder, who has a ton of experience and intelligence around Docker and Kubernetes, onto our DevOps team, and all of us are upskilling quickly. The hope is to keep offering an introduction to Docker and Kubernetes for teams like BenchPrep and Participate, while working to migrate all of our own applications to the containerized space.

This last bit is tricky. While the benefits will be immense — for us and our partners — it’s not as simple as zipping an app up in Docker and writing a few Kubernetes config files. Applications have to be written in pieces to take the best advantage of something like Docker, so each piece can run with its dependencies. Sometimes that means going back and tweaking a few things here and there, sometimes it means a much bigger refactor of the code. In a few extreme cases, it might not make sense to move the app at all. Each application will require a lot of conversations with our partners so we can determine the best course of action for everyone.

That said, containerization is happening. Much like the cloud, where a few years ago it was just a cutting edge idea, it’s now an inevitability. The faster we can help our partners evolve, the sooner we can all take advantage of the benefits.

If you’re interested in adopting Docker and Kubernetes, contact us.

Published by Patrick Turley in agile
