When a new technology begins to spread, it can be difficult to understand how it works and why you might benefit from it. We often need to frame these changes in terms of existing technologies in order to connect established concepts to emerging ones.
Containerization isn't a completely new concept. However, the way Docker containers operate is somewhat new, and the technology may be completely unfamiliar to someone who entered the industry after 2000, when Windows, local computing, and VMs came to dominate the computing landscape.
What is a container?
A container is an abstraction that groups processes into a manageable unit. Docker also introduces the container image, which collects all programs and dependencies into a deployable package. This allows the container to be deployed independently of a host's base configuration.
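As a minimal sketch of what an image packages (the application files and names here are hypothetical), a Dockerfile declares the runtime, the dependencies, and the process to run:

    # Start from a known base so the image carries its own runtime,
    # independent of whatever is installed on the host.
    FROM python:3.12-slim

    WORKDIR /app

    # Bundle the application's dependencies into the image itself.
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    # Add the (hypothetical) application code and define its entry process.
    COPY app.py .
    CMD ["python", "app.py"]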
What types of applications might benefit from using container technology?
1) Applications that are hard to develop quickly because of lots of code interdependencies.
If an application has to be integrated from several technologies, such as a web server, a content manager, content and code, middleware, and a backend data store, that integration raises the complexity of every deployment. Containers create the opportunity to decouple the deployment of those components, allowing changes and upgrades to each one to happen independently. Each development pipeline can proceed without interference; the linkage happens between the containers.
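As a rough sketch of that linkage (the image and container names are made up), a user-defined network lets independently deployed containers find each other:

    # Create a network the components will share.
    docker network create app-net

    # Run the data store and the web tier as separate containers.
    docker run -d --name datastore --network app-net postgres:16
    docker run -d --name web --network app-net -p 8080:80 mycorp/web-frontend:2.1

    # Upgrade just the web tier; the data store keeps running untouched.
    docker rm -f web
    docker run -d --name web --network app-net -p 8080:80 mycorp/web-frontend:2.2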
2) Groups of applications that share a common configuration pattern.
Container images are designed to be efficient not just in their transport and deployment, but in how they are extended. A common pattern or configuration can be established as a base image and then extended by the development teams, enabling strong reuse and centralizing common configuration patterns.
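A sketch of that pattern (the registry path, certificate, and package list are assumptions): a platform team publishes a base image, and each team's Dockerfile extends it with a single FROM line.

    # Shared base, published as registry.example.com/base:1.0
    FROM debian:bookworm-slim
    RUN apt-get update && apt-get install -y --no-install-recommends \
          ca-certificates curl \
        && rm -rf /var/lib/apt/lists/*
    COPY company-ca.crt /usr/local/share/ca-certificates/
    RUN update-ca-certificates

    # In a separate Dockerfile, a team extends the base
    # and adds only its own application.
    FROM registry.example.com/base:1.0
    COPY app /opt/app
    CMD ["/opt/app/run.sh"]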
3) Applications that you want to be deployed to groups of people.
Containers are often shown in the context of servers providing a network service, but they can also be used from the command line, allowing teams to set up shared tools, scripts, and resource files. More than once, I've seen database teams put all of their utilities and common queries into a container image, letting everyone pull it to their workstations. It makes the team efficient and improves workflow.
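For instance (the image name and registry are hypothetical), a team member can pull the shared toolbox and drop into it interactively:

    # Pull the team's shared toolbox image to a workstation.
    docker pull registry.example.com/dba/toolbox:latest

    # Open a shell in it, mounting the current directory so results
    # land on the local filesystem; --rm cleans up on exit.
    docker run --rm -it -v "$(pwd):/work" -w /work \
        registry.example.com/dba/toolbox:latest bash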
4) Applications that have a hard time scaling on their own.
There are plenty of toolkits that make coding easy but aren't multi-threaded or don't scale well on their own. A container can serve as the scaling mechanism: running multiple instances at once provides the ability to process in parallel even when the toolkit itself can't.
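A crude sketch, assuming a hypothetical single-threaded worker image that picks up its shard from an environment variable:

    # Run four instances of a single-threaded worker side by side,
    # each processing its own shard of the input.
    for i in 1 2 3 4; do
      docker run -d --name "worker-$i" -e SHARD="$i" mycorp/worker:1.0
    done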
5) Applications that you want to autoscale.
Autoscaling requires either a system image with all of your dependencies worked out ahead of time, or a system image that can self-configure quickly when a new instance starts. Containers are usually designed to operate in exactly this fashion, which makes for an excellent marriage: when an autoscaling request fires, each new node comes online simply by pulling and starting the containers it needs.
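A sketch of the node side (the image name and tag are assumptions), the kind of minimal bootstrap an autoscaler might run on a fresh instance:

    #!/bin/sh
    # Minimal node bootstrap: no image baking or lengthy
    # self-configuration, just pull the container and run it.
    docker pull mycorp/api:3.4
    docker run -d --restart unless-stopped -p 80:8080 mycorp/api:3.4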
6) Applications that might "scale in the middle".
Scaling out has traditionally been seen as a horizontal capacity increase at the application front end. That works unless the bottleneck sits in another part of the application, in particular the middleware. By using containers, you have the ability to scale components even in the center of the application.
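Sketching this with Docker Compose (the service names are hypothetical): only the middle tier is given more replicas, while the front end and data store stay put.

    # compose.yaml defines three services: frontend, middleware, datastore.
    # Scale only the middleware to four replicas:
    docker compose up -d --scale middleware=4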
7) Groups of applications with competing technical dependencies.
As program, library, and configuration files change on a system, we run the risk that some component gets left behind, or that a newly installed component updates something critical and causes failures. Since a container holds all of its own dependencies, we don't need to worry about collisions between our application components, or about any host system configuration requirements.
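For example, two components that need conflicting runtime versions can run side by side, each container carrying its own:

    # Each container brings its own interpreter and libraries,
    # so the two versions never collide on the host.
    docker run --rm python:3.8-slim  python --version   # Python 3.8.x
    docker run --rm python:3.12-slim python --version   # Python 3.12.x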
8) Application services that should be separated because they don't depend on each other, but separated efficiently.
I like to think of this as having your payroll system on the same machine as your email server. That means any time you need to perform maintenance on one, the other might be affected. You can use other mechanisms, such as virtual machines, in order to create a level of separation, but that means running and maintaining whole operating systems in each environment. This increases resource usage and maintenance effort. Containers can allow that separation, keeping dependencies apart, and decoupling the operations of each component.
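A sketch of that separation (the image names are hypothetical): two unrelated services share a host but nothing else, so maintenance on one never touches the other.

    # Two unrelated services, isolated from each other on one host.
    docker run -d --name payroll mycorp/payroll:5.2
    docker run -d --name mail    mycorp/mailserver:1.9

    # Upgrading payroll leaves the mail service running undisturbed.
    docker rm -f payroll
    docker run -d --name payroll mycorp/payroll:5.3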