As I’ve been learning about Docker and containerization in general, I’ve given some thought to how it will impact the way I architect and design my solutions. However, the more I’ve thought about it, the more I think it’s a non-issue.
For developers and solution architects, nothing has changed. You are still going to write code in Visual Studio. You’ll still write code using C# or VB.NET. One difference is that you’ll use the .NET Core framework and ASP.NET Core MVC instead of the full .NET Framework, but I don’t see this as a restriction. From what I can tell, all our favorite tools are available to use. We can still use Entity Framework to access backend databases.
A small change is that if you’re using a Microsoft SQL Server database from a Linux container, you’ll have to use SQL Authentication instead of Windows Authentication. Beyond that, though, you can still write your LINQ statements to query the database and process the results.
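As a sketch of what that change looks like in practice (the server name, database, and credentials below are placeholders, not values from any real project), a SQL-authenticated connection string in an ASP.NET Core appsettings.json simply swaps the trusted-connection flag for an explicit user and password:

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=sqlserver,1433;Database=MyAppDb;User Id=app_user;Password=ChangeMe!;"
  }
}
```

Entity Framework consumes this the same way it would a Windows-authenticated connection string; only the credential mechanism changes.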
So if nothing has really changed, is there anything that should be considered if you’re planning to deploy your solution in containers? Yes, I think there are several things.
The first consideration is the selection of technologies. Today, you can build your solution using either Windows Containers or Linux Containers. Personally, I think Linux Containers are the better choice today. Linux Containers have wider support right now than Windows Containers. Your container can run in environments like OpenShift, AWS EC2 Container Service, and of course Azure Container Service. You can also build on top of a huge library of existing Docker images from Docker Hub. The MySql image comes to mind: instead of spending time building your own image, you can reuse an existing official one. Microsoft also provides a SQL Server image you can use, which brings me to my next consideration.
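To illustrate reusing an official image (the tag and password here are my own placeholder choices, not recommendations), a minimal docker-compose service definition might look like this:

```yaml
# docker-compose.yml — pull the official MySQL image from Docker Hub
# rather than building and maintaining your own
version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # development-only value; never commit real secrets
    ports:
      - "3306:3306"                  # expose MySQL to the host for local tooling
```

One `docker-compose up` and you have a working database without writing a single Dockerfile.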
Not everything has to be in a container
You can host services like Microsoft SQL Server or MySql in a container, but I don’t think it really buys you much. Containers are ephemeral and short-lived, and when they go away their state goes away too. You can configure volumes to store data, but that will have an impact on your scale-out scenario. I think you’re better off running your database either in virtual machines or in the cloud (i.e. Azure SQL, AWS RDS, etc.). This choice will have an impact on your ability to move your containers around. If you are using SQL Server hosted in an on-premises virtual machine, then lifting the container to the cloud will require extra work (Azure ExpressRoute, AWS VPN) or a migration of the database server too. In either case you’ll have some additional configuration and testing to perform. However, this isn’t unique to containers. Any application that is migrated from on-premises to the cloud will have similar challenges.
However, even though I don’t think it is brilliant, there is an argument to be made that putting the database in a container makes sense in a micro-service architecture. In that case the containers would be moved together, avoiding these headaches.
If your solution has external dependencies (databases, web services, etc.), then each environment will likely have different configuration values. In the past I dealt with this by using different .NET configuration files for each build environment. However, .NET Core doesn’t appear to follow this practice and tends to favor environment variables instead. Docker Compose allows you to specify a text file containing name/value pairs for your environment variables, so each environment can have its own file. The one thing missing from this approach is that it doesn’t allow values like passwords to be encrypted, which means you probably shouldn’t store these configuration files in a repository.
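A sketch of the env-file approach (file names and variable names below are assumptions for illustration): Docker Compose’s `env_file` option points each environment at its own name/value file, and the .NET Core configuration system maps a double underscore in a variable name to a nested configuration key:

```yaml
# docker-compose.yml — each environment supplies its own env file
services:
  web:
    image: myapp:latest
    env_file:
      - ./config/staging.env    # swap for dev.env or production.env per environment
```

```
# config/staging.env — plain name=value pairs, unencrypted,
# so keep files like this out of your repository
ASPNETCORE_ENVIRONMENT=Staging
ConnectionStrings__DefaultConnection=Server=db;Database=MyAppDb;User Id=app_user;Password=ChangeMe!
```

The `ConnectionStrings__DefaultConnection` variable surfaces in the app as the nested `ConnectionStrings:DefaultConnection` setting, so the same code reads its configuration in every environment.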
What containers are great at is scaling out. If you keep your container images small then services like Kubernetes can spin up new instances of your service quickly in response to increased demand. The image size is particularly important when a host must download the image from a repository.
To keep the time it takes to spin up a new container short, you should also have only one process per container. If the host has to start a lot of processes in your container, there will be a lag before the new container can start processing requests.
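The one-process idea falls out naturally from the Dockerfile: the image’s entry point is the application itself and nothing else (the base image tag and assembly name below are placeholders):

```dockerfile
FROM microsoft/aspnetcore:1.1        # assumed base image for this era of .NET Core
WORKDIR /app
COPY ./publish .
# A single ENTRYPOINT process: the ASP.NET Core app. No init system,
# no log shippers, no database — each of those belongs in its own container.
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Keeping the image down to the published output plus the runtime also keeps the download small, which helps the scale-out case described above.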
Lifting to the cloud
As previously mentioned, if you anticipate that your solution could eventually go to the cloud then you should select services that make that migration as easy as possible. Either create Micro-Services that allow you to put the database in a container, plan to migrate databases or other services to the cloud, or configure tunnels that will allow secure access to services behind your firewall (Azure ExpressRoute or an AWS VPN).
Containers are ephemeral, and when a container goes away anything it contained is deleted. So if your application is writing log files, you should plan for a way to move those files out of the container. Tools like Splunk can help with this, but then you’re running more than one process per container. Another option is to write log files to blob storage like Azure Blob Storage or AWS S3.
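One simple way to keep logs around without adding a second process (the paths here are my own example choices) is to mount a host directory as a volume, so the files outlive the container:

```yaml
# docker-compose.yml — log files written to /app/logs inside the
# container land in ./logs on the host and survive container removal
services:
  web:
    image: myapp:latest
    volumes:
      - ./logs:/app/logs
```

Writing to stdout and letting Docker’s logging driver forward the output is another option that keeps the container at one process.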
Regardless of how you do it, just having a strategy ahead of time will save you a lot of pain when your application is in production and stops behaving like it should.
This is where I think containers are going to have the biggest impact. Self-hosting a solution is important for productivity and quality assurance. Self-hosting a solution means that a developer can host the entire solution on their own workstation. Trying to share resources like a database rapidly becomes a huge pain. I often write unit tests that clear several tables, load them with known data, and then perform a series of checks against the resulting tables. If I’m sharing the database with other developers, they are going to get annoyed with me, and we’ll end up running our unit tests less frequently as a result. But if everybody is self-hosting the solution, then we don’t have to worry about each other and can go nuts with our testing, which usually increases the quality of the finished product.
Even though I think putting database servers in containers isn’t a great idea, in the development environment I think it is brilliant.
The sample application I built is able to download the MySql image and my DB builder base image, build the DBInit image, spin up MySql and DBInit, and then run the scripts in DBInit in about 15 seconds. Yes, I have a fast internet connection, but once I have the images cached the time it takes is even shorter. So I can go through the code/build/debug cycle very quickly and not spend a lot of time waiting for the build step to finish. Adding unit tests to the build phase will slow it down, but it will also reduce the time I spend in the debug phase, so it’s a fair trade. I’m looking to see if I can get things like Sitecore into a container so I can work with it this way as well.
At this point my view is that containers don’t have a direct impact on how you put your solution together. Yes, I feel that using Linux containers is currently a better choice than Windows containers, but I think over time they will gain parity. But the usual patterns you use writing your application don’t change. If you’re an experienced ASP.NET MVC developer what you did before is what you’ll do now. Your solution will just be deployed in a container. If you adapt your workflow to take advantage of this then you’ll have an easier time delivering higher quality solutions. I think that is a pretty big win.