Containerization 101 – Docker

What are containers?

In very simple terms, containers are a mechanism for deploying applications. A container holds an application’s files, libraries, and the other dependencies it needs to execute. Containers isolate applications so that they cannot interfere with other applications and cannot be interfered with themselves.

Containers can be thought of as lightweight virtual machines. A virtual machine is a complete deployment of a computer: it has a hypervisor or virtualization layer that simulates real hardware, a complete operating system, and then the applications it runs. A container, on the other hand, has only the resources necessary to run the application it is hosting; it uses the host system’s operating system and other resources. By sharing the operating system, containers are much less resource intensive and start more quickly.

Docker is the most widely used container platform. It runs on all the major operating systems and supports both Linux and Windows containers, though Linux containers are more mature and have broader community support. Docker containers are also supported by major cloud services such as Azure and AWS.

In addition to isolating applications running on a host computer, Docker provides software-defined networks so containers running on a host can communicate without being exposed to the wider network. Docker also provides persistent storage, called volumes, for data created by containers. This makes it possible to host applications like database servers in containers.
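As a sketch of those two features, the commands below create a private network and a named volume and attach containers to them. The names `app-net` and `db-data` and the specific images are illustrative, not from the original post:

```shell
# Create a software-defined network and a persistent volume.
docker network create app-net
docker volume create db-data

# A database container on the private network, with its data
# directory backed by the named volume.
docker run -d --name db --network app-net \
  -e POSTGRES_PASSWORD=example \
  -v db-data:/var/lib/postgresql/data postgres

# A web container on the same network; only its port 80 is
# published to the outside world.
docker run -d --name web --network app-net -p 8080:80 nginx
```

The `web` container can reach `db` by name over `app-net`, while the database stays off the wider network, and the data in `db-data` survives the container being removed and recreated.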

What is important about containers?

Containers are important because they are less resource intensive than virtual machines. As a result they start up faster, and more of them can be hosted on a single host. In fact, it is not uncommon for Docker itself to be run inside a virtual machine.

Because containers package an application together with its dependencies, it is easier to have the same execution environment at every stage. The containers run in development are the same containers used for testing and then released to staging and finally production. It is realistic to create an environment where “it works on my machine” means it works everywhere.

Containers also make it possible to migrate an application to the cloud without significant effort. As previously discussed, a developer creates and works with container images in the development environment. At some point the container images are ‘lifted’ to QA, where testing is performed to identify defects. Later the container images are again lifted to staging and eventually production. An organization could initially choose to run the application in its own data center, but at some point decide to lift the application once more, to the cloud. If the application is self-contained in its own group, or swarm, of containers, then it won’t even require configuration changes when it’s lifted to the cloud. If the application uses external resources, the organization’s SAP system for instance, then things like network access will have to be configured.

What is Docker?

Docker is an open-source project that has standardized containers. The idea of containers is not new; it goes back to how mainframes work. Linux introduced containers, but it didn’t offer an easy way to create them. That is where Docker came in.

In Docker, containers are created from images. An image is a representation of a file system (like a zip file) containing only the files specific to the application that will run in the container. This means that if the application needs files in /bin/ and /tmp/wwwroot, the image will contain just those files. A container is a running instance of an image, and a single Docker host can run as many containers as it has the memory and CPU to handle.
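As a minimal illustration of the image/container distinction, the commands below pull one image and start two independent containers from it. The image and container names are just examples:

```shell
docker pull nginx                  # download the image (a packaged filesystem)
docker run -d --name web1 nginx    # start one container from the image
docker run -d --name web2 nginx    # start a second, independent container
docker ps                          # both containers appear, sharing one image
```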

Going beyond individual containers, Docker offers stacks, which are groups of related containers. For instance, you can have one container hosting a website and another hosting a database server. The stack gives Docker the information it needs to deploy the containers so that they work together; in addition to the image information, it describes networking and volumes.
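A stack is usually described in a Compose file. The fragment below is a hypothetical two-service stack along the lines of the example above; the service names, images, and volume are all illustrative:

```shell
# docker-compose.yml (sketch): a website plus a database server
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
EOF

# Deploy all the services together as one named stack
# (requires swarm mode: run `docker swarm init` first).
docker stack deploy -c docker-compose.yml mystack
```

On a single development machine, `docker compose up -d` deploys the same file without swarm mode.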

Docker also provides the tools needed to build container images from scratch or from other container images. As an example, as a Microsoft-stack developer I use ASP.NET Core to build applications. Microsoft provides an image for compiling ASP.NET Core applications inside a container, and I’ve built my own custom version of their image that adds a few additional tools I use as part of my build process.
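Building a custom image on top of another image is done with a Dockerfile. The sketch below assumes Microsoft’s .NET SDK image as the base and installs a couple of extra packages; the base image tag and the tools are placeholders, not the post’s actual build image:

```shell
# Dockerfile (sketch): extend a vendor base image with extra tools
cat > Dockerfile <<'EOF'
FROM mcr.microsoft.com/dotnet/sdk:8.0
RUN apt-get update \
 && apt-get install -y --no-install-recommends git zip \
 && rm -rf /var/lib/apt/lists/*
EOF

docker build -t my-build-tools .   # produce the new, reusable image
```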

It would be a mistake not to mention Docker Hub. As mentioned, I created my own image based on Microsoft’s image, and the Microsoft image is distributed through Docker Hub. You can search the hub and find thousands of images contributed by companies and individual developers, and you can sign up for a free account and contribute images to the community as well.
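Working with Docker Hub comes down to a few CLI commands. In the sketch below, `my-image` is a hypothetical local image and `myuser` a hypothetical Hub account:

```shell
docker search nginx                 # browse Hub images from the CLI
docker login                        # authenticate with your Hub account
docker tag my-image myuser/my-image:1.0
docker push myuser/my-image:1.0     # publish the image to the Hub
docker pull myuser/my-image:1.0     # anyone can now pull it by name
```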

