
What is Container Orchestration? Key Concepts Explained

Emily Arnott
|
9.1.2021

Wondering about container orchestration? When your organization has more containers than it can manage by hand, container orchestration becomes essential. We'll explain.


What is container orchestration?


Container orchestration is the automation of many operational tasks in your container-based applications. This includes processes such as:

  • Deployment
  • Allocating resources
  • Scaling up or down
  • Load balancing
  • Monitoring
  • Configuring applications


What are containers?

A container bundles a piece of code with everything that code needs to run. Once your code is in a container, you can deploy it to any environment and be confident it will run.


Containers often contain microservices, which are small parts of your application or product that have a standalone function. For example, you might have a microservice that handles users viewing their shopping cart. The container for this service would bundle together:

  • The code that makes the shopping cart pop up and display the added items
  • Any libraries or other dependencies that the code requires
  • The runtime configuration that allows the code to function
  • Information on how the container interacts with the rest of the system - how it is accessed, how resources are allocated to it, etc.
  • The database storing exactly which items are contained in the user’s cart


However, the container would likely not contain:

  • An operating system - unlike virtual machines, containers do not contain the operating system that will actually run the code. Instead, they’re abstracted such that they can run in any environment without bringing the environment with them.
  • The database of all items for sale (distinct from the database keeping the items in the cart), which is likely huge and needs to be accessed by many containers. Instead, it is left as a shared resource that the container's database links to.
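
To make this concrete, here is a minimal sketch of launching such a shopping-cart container with the Docker SDK for Python (pip install docker). The image name, environment variable, and port are hypothetical, and the example assumes a local Docker daemon is running; it simply illustrates that the container carries its code, libraries, and runtime configuration with it, while the large catalog database stays outside and is reached over the network.

```python
# A minimal sketch using the Docker SDK for Python; the image name,
# environment variable, and port below are hypothetical.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# The image already bundles the shopping-cart code, its libraries, and its
# runtime configuration. The catalog database is NOT in the container; the
# service reaches it over the network via CATALOG_DB_URL.
cart = client.containers.run(
    "example/shopping-cart:1.0",   # hypothetical image name
    detach=True,                   # run in the background
    name="shopping-cart",
    environment={"CATALOG_DB_URL": "postgres://catalog-db:5432/items"},
    ports={"8080/tcp": 8080},      # expose the service on port 8080
)

print(cart.name, cart.status)
```

Because everything the code needs travels inside the image, the same command works on a laptop, a test server, or a cloud VM.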


More and more, organizations are building their applications based on microservices rather than a monolithic service that handles everything. Containers are a popular way to organize services using this type of architecture.

Why use containers?

  • Easy to port to other environments - containers run in any environment, so you can deploy your code anywhere. Even if you make a major overhaul of your development environment, you’ll know that your code will still run. For example, you may decide to deploy on AWS’s public cloud services and, further down the road, you may decide that some services need to run in a private cloud environment due to security compliance mandates. Porting over should be easy and flexible.
  • Faster and more reliable deployments - you can make more frequent updates to your production environment by updating only the relevant containers each time. If something breaks with the deploy, you know that it’s isolated to only those containers.
  • More modular and scalable architecture - as the needs of your system change, you can reconfigure it more easily by adding and removing containers instead of refactoring the code itself. You can also allocate more resources to specific containers when necessary.
  • Makes development faster and more consistent - developers won’t need to worry about how to configure code for deployment or how it will interact with other code - all of that will be handled by how the container was set up and, of course, managed over time.
  • More efficient allocation of resources - if part of your code runs best on local servers, and others need to be run on the cloud, containers allow you to freely distribute your application’s code wherever it makes the most sense.


Essentially, containers let you build your system piece by piece, like Lego bricks that can be added, removed, or swapped out. That is very different from the monolithic approach, which is more like carving the whole system out of one very large piece of marble.


What are the challenges of containers?

Containers are an investment; they require you to establish standards for each container and build infrastructure for them to integrate with each other at runtime. You’ll also need to invest in tools to manage your containers, such as Docker or Kubernetes.


Even with these tools to help, managing containers becomes very time consuming as your application and environment scales. Complex applications can use hundreds of containers at a time. If you needed to make a change across all of your containers, the time sink could be massive. This is why you need container orchestration.

What is container orchestration?

Container orchestration is the process of automating the management of your containers. Rather than manually making choices and changes for each container, a container orchestration platform performs these tasks automatically for every relevant container.


Container orchestration can handle:

  • Deploying new containers
  • Configuring runtime settings for containers
  • Scheduling deployment of groups of containers
  • Removing unnecessary containers
  • Scaling resources allocated to each container
  • Changing the infrastructure that allows containers to interact
  • Monitoring containers
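
To see what this looks like in practice, here is a minimal sketch using the official Kubernetes Python client (pip install kubernetes); Kubernetes is one of the orchestration platforms discussed later in this article. The image name, labels, replica count, and resource figures are hypothetical, and the example assumes a cluster is already configured in ~/.kube/config. You describe the desired state, and the platform handles deployment, scaling, resource allocation, and monitoring for you.

```python
# A minimal sketch with the official Kubernetes Python client; the image
# name, labels, replica count, and resource figures are hypothetical.
from kubernetes import client, config

config.load_kube_config()        # use the cluster configured in ~/.kube/config
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="shopping-cart"),
    spec=client.V1DeploymentSpec(
        replicas=3,              # the platform keeps three copies running
        selector=client.V1LabelSelector(match_labels={"app": "shopping-cart"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "shopping-cart"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="shopping-cart",
                    image="example/shopping-cart:1.0",
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "100m", "memory": "128Mi"},
                        limits={"cpu": "500m", "memory": "256Mi"},
                    ),
                )
            ]),
        ),
    ),
)

# Hand the desired state to the cluster; the platform deploys, scales,
# and monitors the containers from here on.
apps.create_namespaced_deployment(namespace="default", body=deployment)
```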

What are the benefits of container orchestration?

  • Allows your containers to scale up without extra work - many benefits of containers increase as you break your service down into smaller chunks. Orchestration allows you to embrace this philosophy by managing hundreds of containers as easily as you manage one.
  • Increases reliability of services - you can automate responses to incidents to help when things go wrong. If your orchestration platform detects that containers aren’t functioning, it can automatically replace them, or reallocate resources to support them. Orchestration can’t prevent incidents, but it can reduce their severity and improve uptime.
  • Gives you more monitoring insights - by having consistent and automatic monitoring practices for all your containers, you can find trends and patterns across different areas of your service. This helps you build sophisticated metrics like SLIs that reflect customer expectations and therefore happiness.
  • Schedules major deployments across containers - when you want to do a major code push, you can use orchestration to coordinate simultaneous deployments across many containers. This helps keep deploys consistent and simple.

How does container orchestration work?

There are two major things you need to do to use container orchestration:

1. Add metadata and hooks into your containers.

  • Your containers should carry metadata that categorizes and classifies them.
  • They should also expose consistent hooks - externally accessible entry points in the code that change their settings.
  • This allows the orchestration program to issue commands to the correct subset of containers.

2. Build an orchestration program to control the containers.

  • It needs to issue custom commands to any given subset of containers, such as changing their settings, deploying new instances, or taking them down.
  • It needs to monitor the health of containers and collect that data in a centralized location.
  • It needs to react automatically to changes in container health by changing settings or reallocating resources.
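
To make the two steps above concrete, here is a toy sketch of an orchestration control loop built on the Docker SDK for Python. The label name, health logic, and polling interval are hypothetical, and real platforms do all of this far more robustly: the loop finds containers by their metadata (labels), checks the health reported through their healthcheck hook, and restarts any that have failed.

```python
# A toy orchestration control loop using the Docker SDK for Python.
# Label name, health logic, and polling interval are hypothetical;
# real platforms handle this far more robustly.
import time
import docker

client = docker.from_env()

def reconcile(service_label="service=shopping-cart"):
    # Step 1 in practice: containers were started with metadata (labels)
    # and a healthcheck hook, so they can be found and queried externally.
    containers = client.containers.list(all=True, filters={"label": service_label})

    for c in containers:
        # Health status is only reported if the container defines a healthcheck.
        health = c.attrs.get("State", {}).get("Health", {}).get("Status", "unknown")

        # Step 2 in practice: monitor each container and react automatically.
        if c.status == "exited" or health == "unhealthy":
            print(f"{c.name} is {c.status}/{health}; restarting")
            c.restart()

while True:           # the orchestration "control loop"
    reconcile()
    time.sleep(30)    # re-check every 30 seconds
```

Even this toy loop hints at the real work involved: rolling deployments, resource reallocation, and centralized metrics all need the same kind of machinery.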


This is not an easy system to build, let alone maintain over time. Most organizations use a container orchestration platform or tool to help them.

Container orchestration platforms

Two of the best-known container orchestration platforms are Kubernetes and Docker Swarm. Both allow you to build, manage, and orchestrate containers.


Kubernetes emphasizes adaptability, allowing you to containerize a wide variety of microservices and functions.


Docker Swarm, on the other hand, emphasizes ease of use: it relies on more templated Docker nodes, making it quick to set up and deploy simpler services.
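
For comparison, here is a minimal sketch of running the same hypothetical service as a replicated Docker Swarm service, again via the Docker SDK for Python; the image name and replica count are made up, and the example assumes Swarm mode has already been initialized on the node.

```python
# A minimal sketch of a replicated Docker Swarm service via the Docker SDK
# for Python; the image name and replica count are hypothetical.
import docker
from docker.types import ServiceMode

client = docker.from_env()
# client.swarm.init()   # only needed once, on the first manager node

client.services.create(
    image="example/shopping-cart:1.0",
    name="shopping-cart",
    mode=ServiceMode("replicated", replicas=3),  # Swarm keeps three tasks running
)
```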


Other popular container orchestration platforms and tools include Amazon ECS, Red Hat OpenShift, HashiCorp Nomad, and Rancher.


Some of these are tools that manage your instances of Kubernetes or Docker. Choose the right tools depending on the complexity and size of your service. Look for tools that support all the container functions that you’ll need while still simplifying the process of creating and orchestrating containers.


Container orchestration can be performed in any type of environment - local servers, private clouds, third-party public clouds, or any combination. No matter where they are, containers are containers. However, there are some considerations to keep in mind when choosing a tool:

  • Some platforms are specialized for certain types of environments
  • Some platforms require more customization and thus more resources to implement
  • Some platforms are better suited to particular scales, from a handful of containers to many thousands

Implementing container orchestration

Challenges of implementing container orchestration

Like containerizing your system, setting up container orchestration is an investment. You need to establish up front what the orchestration will be able to do in order to set up each container correctly; functionality you need later can be difficult to add retroactively.


You also need to make sure that all development teams are on the same page about developing within containers. If your containers are inconsistent, orchestration will be especially difficult. Make sure that the standards are clearly understood by every developer.

Who manages container orchestration?

Deciding who has ownership of container orchestration can be tricky. Development creates the code that goes into containers, but operations manages the deployed containers. DevOps setups can help connect these two sides. The DevOps team can make sure the orchestration setup works for both development and operations’ needs.

SRE and container orchestration

Container orchestration also fits into the SRE development process at several stages:

  • By automatically monitoring all your containers, you can update SLOs based on a collection of services
  • After incidents, your orchestration tooling can automatically pull contextual information to help build the retrospective
  • Tools like Kubernetes Operators can automate the deployment of SRE tools for new instances


Blameless SLIs and SLOs can help you make the most of your container-based system by weaving the many metrics generated by your microservices into a few that reflect user requirements and happiness. To see how, check out a demo!
