VM face plant

For the past year or so I’ve changed how I have my workstation configured. I’ve taken pretty good advantage of Hyper-V and the bare metal system itself has very little installed on it. I mostly just do social media and blogging there. I have 2 other VMs that are used for work. One is my ‘Office’ machine where I keep work related stuff like email, documents and so on. Most of it is actually stored in Exchange, Teams or OneDrive so it’s easy enough to get at documents from anywhere. The other VM is my development workstation where I have ALL of my development tools installed. Which brings me to the problem.
The Office VM is 300GB! The Development VM is 400GB! Along with the other smaller VMs and other files I have nearly filled a terabyte SSD! DOH!
I don’t really want to throw anything away, but at the same time this doesn’t seem to be the great leap forward I thought it would be. The bare metal machine is just fine and the VMs run fast enough, but the development workstation just seems wrong. My original idea was to spin up a fresh VM for each project and then throw it away when I finished, installing only what is needed for that project so the VM doesn’t become overgrown. This should work well because all my project files go to either GitHub or VSO.
The Office machine should also be a lot smaller. I think the offline mailboxes are the problem. Do I really need email from 10 years ago? Do I really need to keep files over 10 years old?
Guess it’s time for some clean up and hard decisions.

Never leave home! Adventure with OpenVPN and WOL

I prefer working from my office in my house. It’s a million times more comfortable for me and I feel at least a thousand times more productive there than I do anywhere else.

However, on-site happens and when it does I need to be able to access my stuff. To make that happen I set up a Raspberry Pi to host OpenVPN. So far this has been great. However, today when I tried to jump on one of my VMs to do some work I couldn’t get in. The VPN was working fine, I just couldn’t RDP to my VM. After looking around I found out that my physical workstation had gone to sleep. DOH!

My daughter is home, she is doing online school (like father like daughter I guess), so I could call her. But first I wanted to see if there was a better solution. It turns out “Wake on LAN” is a thing and is turned on by default for Windows 10 (something I need to look into – is this a good thing?).

After a little research I figured out that Raspbian has a package for this. I just needed the MAC address for my physical machine. A quick SSH to my Raspberry Pi and the problem was solved:

sudo apt-get install wakeonlan
wakeonlan this:is:not:my:mac:address

A quick refresh on my network status page and my workstation is back!

I’m still trying to work out DNS issues because I don’t have name resolution for my home network. I have to use my network status page to get IP addresses. Not a big deal, and I’ve configured my router to reserve the addresses so I can just use my hosts file. But a DNS server would be cooler. Maybe tomorrow.
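
For example, a couple of entries in the hosts file (C:\Windows\System32\drivers\etc\hosts on Windows) cover it. The names and addresses here are made up, but it looks something like:

192.168.1.20    workstation
192.168.1.21    devvm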

Architecting for Containers


As I’ve been learning about Docker and containerization in general I’ve given some thought to how it will impact the way I architect and design my solutions.  However, the more I have thought about it the more I think it’s a non-issue.

For developers and solution architects nothing has changed. You are still going to write code in Visual Studio. You’ll still write code using C# or VB.NET. One difference is that you’ll use the .NET Core framework and ASP.NET MVC Core instead of the regular .NET Framework, but I don’t see this as a restriction. From what I can tell all our favorite tools are available to use. We can still use Entity Framework to access backend databases.

A small change is that if you’re using a Microsoft SQL database from a Linux container you’ll have to use SQL Authentication instead of Windows Authentication.  Beyond that though, you can still write your LINQ statements to query the database and process the results.
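
For example, a SQL Authentication connection string just carries a user id and password instead of integrated security. Something along these lines, with placeholder values:

Server=sqlhost,1433;Database=Notes;User Id=appuser;Password=not-a-real-password;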

So if nothing has really changed, is there anything that should be considered if you’re planning to deploy your solution in containers? Yes, I think there are several things.

Technology Selection

The first consideration is the selection of technologies. Today, you can build your solution using either Windows Containers or Linux Containers. Personally, I think using Linux Containers is the better choice today. Linux Containers have wider support right now than Windows Containers. Your container can run in environments like OpenShift, AWS EC2 Container Service, and of course Azure Container Service. You can also build on top of a huge library of existing Docker images from Docker Hub. The MySQL image comes to mind: instead of having to spend time building your own image you can reuse existing official images. Microsoft also provides a SQL Server image you can use, which brings me to my next consideration.

Not everything has to be in a container

You can host services like Microsoft SQL Server or MySQL in a container but I don’t think it really buys you much. Containers are ephemeral, short lived, and when they go away their state goes away too. You can configure volumes to store data, but that will have an impact on your scale-out scenario. I think you’re better off either running your database in virtual machines or in the cloud (e.g. Azure SQL, AWS RDS, etc.). This choice will have an impact on your ability to move your containers around. If you are using SQL Server hosted by an on-premises virtual machine then lifting the container to the cloud will require extra work (Azure ExpressRoute, AWS VPN) or a migration of the database server too. In either case you’ll have some additional configuration and testing to perform. However, this isn’t unique to containers. Any application that is migrated from on-premises to the cloud will have similar challenges.

However, even though I don’t think it is brilliant, there is an argument to be made that in a Micro-Service architecture putting the database in a container makes sense, in which case the containers would be moved together, avoiding these headaches.

Configuration Secrets

If your solution has external dependencies (databases, web services, etc.) then each environment will likely have different configuration values. In the past I dealt with this by using different .NET configuration files for each build environment. However, .NET Core doesn’t appear to follow this practice and tends to favor using environment variables instead. Docker-Compose allows you to specify a text file containing name/value pairs for your environment variables, so each environment can have its own file. The only thing missing is that this approach doesn’t allow things like passwords to be encrypted. This means that you probably shouldn’t store these configuration files in a repository.
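
As a rough sketch (the file and service names here are only illustrative), each environment gets its own plain text file and the compose file points at it with env_file:

# dev.env - one per environment, kept out of source control
DBHOST=mysql
DB_PASSWORD=not-a-real-password

# docker-compose.yml (fragment)
services:
  web:
    env_file:
      - dev.env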

Scale out

What containers are great at is scaling out.  If you keep your container images small then services like Kubernetes can spin up new instances of your service quickly in response to increased demand.  The image size is particularly important when a host must download the image from a repository.

To keep the time it takes to spin up a new container short, you should also have only one process per container. If the host has to start a lot of processes in your container then there will be a lag before the new container can start processing requests.

Lifting to the cloud

As previously mentioned, if you anticipate that your solution could eventually go to the cloud then you should select services that make that migration as easy as possible.  Either create Micro-Services that allow you to put the database in a container, plan to migrate databases or other services to the cloud, or configure tunnels that will allow secure access to services behind your firewall (Azure ExpressRoute or an AWS VPN).


Containers are ephemeral and when a container goes away anything it contained is deleted. So if your application is writing log files you should plan for a way to move those files out of the container. Tools like Splunk can help with this, but then you’re running more than one process per container. Another option is to write log files to a blob storage container like Azure Blob Storage or AWS S3.
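
As a simple sketch of the ‘move the files out’ idea (the image name and paths are hypothetical), a bind-mounted volume gets the log directory onto the host so the files outlive the container:

docker run -d --name noteweb -v /var/log/noteweb:/app/logs noteweb:latest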

Regardless of how you do it, just having a strategy ahead of time will save you a lot of pain when your application is in production and stops behaving like it should.

Development workflow

This is where I think containers are going to have the biggest impact. Self-hosting a solution is important for productivity and quality assurance. Self-hosting a solution means that the developer can host the entire solution on their workstation. Trying to share resources like a database rapidly becomes a huge pain. I often write unit tests that will clear several tables, load several tables with known data and then perform a series of unit tests that check tables for the results. If I’m sharing the database with other developers they are going to get annoyed with me and we’ll end up running our unit tests less frequently as a result. But if everybody is self-hosting the solution then we don’t have to worry about each other and can go nuts with our testing, which usually increases the quality of the finished product.

Even though I think putting database servers in containers isn’t a great idea, in the development environment I think it is brilliant.

The sample application I built is able to download the MySQL image and my DB builder base image, build the DBInit image, spin up MySQL and DBInit and then run the scripts in DBInit in about 15 seconds. Yes, I have a fast internet connection, but once I have the images cached the time it takes is even shorter. So I can go through the code/build/debug cycle very quickly and not spend a lot of time waiting for the build step to finish. Adding unit tests to the build phase will slow it down, but also reduce the time I spend in the debug phase, so it’s a fair trade. I’m looking to see if I can get things like Sitecore into a container so I can work with it this way as well.


At this point my view is that containers don’t have a direct impact on how you put your solution together. Yes, I feel that using Linux containers is currently a better choice than Windows containers, but I think over time they will gain parity. But the usual patterns you use when writing your application don’t change. If you’re an experienced ASP.NET MVC developer, what you did before is what you’ll do now. Your solution will just be deployed in a container. If you adapt your workflow to take advantage of this then you’ll have an easier time delivering higher quality solutions. I think that is a pretty big win.

Docker Stack – my full sample

[Source code is available on GitHub]
A Docker Stack is a collection of services that make up an application. In this case our stack will consist of an ASP.NET MVC Core web page, a MySQL database and a load balancer. Doing this using Docker didn’t really pose any significant challenges, but it did remind me not to overlook good practices just because I’m throwing together sample code.

What did I do?

Over the past few weeks I’ve been very focused on mastering Docker and I viewed this project as a sort of final exam to demonstrate how I’d use Docker for a real application. As already mentioned, the sample application is just a plain old web page that reads data from a database and stores data in a database. This is what most web applications do.
I elected to use MySQL as the database for a couple of reasons. First and most importantly, I wanted to learn it. Second, it is much smaller than Microsoft SQL Server, so putting it in a container makes sense, sort of. Third, it is supported by Entity Framework and would work in the environment of my application.
I also created a YAML file with instructions for Docker-Compose to set up the services I wanted to deploy, and a pair of DockerFiles to create my images.
Most of this is supported by Visual Studio using its Docker support. However, there are aspects of Visual Studio’s Docker support that fall short of what I think developers really need. Primarily, you’re not entirely in Docker when you’re debugging. Also, your application is compiled outside of the container and the output is made available to the container via a volume. Finally, debugging doesn’t attach to the container. You’re running the code on your workstation and pretending you’re in the container.
My approach takes you out of Visual Studio and to the command line to build your application. In a different post I’ll dig in to how to debug an application that is running in a container.

The Full Sample

FullSample is an ASP.NET MVC Core application that uses Entity Framework to read and write data stored in a MySQL database. I’m still using .NET Core 1.1.2. There is no technical reason for this, I just haven’t expended that huge amount of strength it would take to change that drop down to 2.0. I’m resting up in preparation for that adventure.

Setting up the data model

I prefer to keep different components of my applications in separate projects. After creating the initial ASP.NET MVC Core project and solution I added a library project to the solution to separate the data model from the rest of the application. This is my normal practice and supports things like reuse and making the components easier to unit test.
This project has references for Microsoft.EntityFrameworkCore and Pomelo.EntityFrameworkCore.MySql. The rest of the project is the usual Entity Framework set of classes and interfaces. The point here is that there isn’t really anything unusual going on.
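
For reference, wiring up those references from the command line is just a couple of dotnet commands (the NotesData project name is made up for this sketch):

dotnet add NotesData/NotesData.csproj package Microsoft.EntityFrameworkCore
dotnet add NotesData/NotesData.csproj package Pomelo.EntityFrameworkCore.MySql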


Setting up the database


Entity Framework provides a way to create database tables and relationships based on the model being used. However, I prefer to create the database DDL myself using SQL. This is primarily because, in most cases I encounter, databases have already been created or are controlled by separate teams that don’t like developers just throwing tables on their server the way EF does it.
To create the database and its tables I just added a solution folder to hold my SQL files. With SQL Server I’d create a database project to handle this. Because I’m still new to MySql I wasn’t aware of any tools to help me so I’m just using some files to get the work done.

In addition to the SQL files I also wrote a very simple shell script that we’ll use later to run the SQL files against MySql.

Hooking up ASP.NET MVC

The sample application uses the HomeController to read notes from the database and display them in a grid. HomeController also provides a handler for post backs to create new notes in the database.

Lesson learned

Nothing at this point is controversial. It’s just a basic web-application that reads and writes data from a database. For debugging, I switched the startup project from the Docker project to NoteWeb so I could just hit F5 to walk through my code.
One challenge is that I needed to start up and initialize the MySQL container before I started debugging. Most of the time I just left MySQL running. Another approach I could have used would be to set up a volume for the MySQL container so the database would persist between sessions. I’ll examine this further in a moment.
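
As a sketch of the two options (container and volume names are just illustrative): run MySQL as a throwaway container, or add a named volume so the database survives between debugging sessions:

# throwaway instance: everything goes away when the container stops
docker run -d --rm --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=mysecret mysql:8.0.0

# persistent instance: the named volume keeps the data between sessions
docker run -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=mysecret -v notesdata:/var/lib/mysql mysql:8.0.0
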
The next challenge I ran into was being a bonehead. I should have written unit tests for my data model. I wasted a lot of time chasing issues that didn’t matter. This project got spread out over a couple of weeks and I forgot changes I made during different sessions. Having a unit test framework set up would have saved me time. At some point the EF data model was looking for a table called Notes and the SQL script was creating a table called Note. Naturally this caused things to blow up.

Moving in to Docker

Moving the project into Docker was straightforward. Locally, I work with my code like I always do: write unit tests, write code, run in the debugger, test, repeat. Once I’m happy with my code I commit it to GitHub.
The Docker build image I described earlier is already set up for this scenario. The DockerFile for this project just needs the URL to the GitHub repository and we’re good to go:

# FullSample
# Compile the solution inside a docker container and then copy the results in to the final image

# Build stage 1 - compile the solution/application
FROM jakewatkins/ancbuildenv AS builder

WORKDIR /source

# copy the solution in to the image
RUN git clone https://github.com/jakewatkins/DockerFullSample.git /source

# restore the solution's packages
RUN dotnet restore 

#build the solution
RUN dotnet publish --output /app/ --configuration Debug

# Build stage 2 - build the container image
FROM microsoft/aspnetcore:1.1.2
COPY --from=builder /app .

EXPOSE 80/tcp

# Set the image entry point
ENTRYPOINT ["dotnet", "NoteWeb.dll"]

All this DockerFile is doing is grabbing the project from GitHub, building the project and then copying the build output into the final image.
The benefit of this approach is that the project will be built the same way regardless of where the DockerFile is run. This avoids issues around team members having different configurations or patches. The solution is always built the same way.
This is great for continuous integration (CI). The only thing missing is running a battery of unit tests against the results, but we’ll cover that in another post.
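
As a rough sketch of that missing piece (assuming a test project, say NoteWeb.Tests, lives in the repository), a test step could slot into build stage 1 so that a failing test fails the image build:

# run the unit tests; if any fail, docker build stops here
RUN dotnet test /source/NoteWeb.Tests/NoteWeb.Tests.csproj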

Creating the MySql Image

My solution just uses a plain MySQL image. I don’t customize the image in any way. However, I do need a way to get my database and some sample data into the MySQL instance that gets started when the image is run in a container. To do this I use another container to do the actual database build.

Creating a MySQL Build image

In a previous post I demonstrated how I created an image with the MySQL client and the Git client. Using that image as the starting point, we use Git to download the project and then set up the shell script as the entry point for the image. We also add environment variables to the image that provide the information needed for connecting to MySQL.
The shell script waits for MySql to become available and then uses the MySql client to execute the SQL files in our project:

#!/bin/sh

# build the notes database
mysqladmin -u root -p$MYSQL_ROOT_PASSWORD -h$DBHOST ping --wait 

# load the DDL
mysql -u root "-p$MYSQL_ROOT_PASSWORD" -h$DBHOST < /source/DBinit/notesdb.sql

# load some sample data
mysql -u root "-p$MYSQL_ROOT_PASSWORD" -h$DBHOST < /source/DBinit/notes-sample.sql

The DockerFile for the image looks like this:

FROM jakewatkins/mysql-db-build:1.1

RUN git clone https://github.com/jakewatkins/DockerFullSample.git /source \
			&& chmod +x /source/DBinit/build-notesdb.sh

ENTRYPOINT ["/source/DBinit/build-notesdb.sh"]

That’s it. This demonstrates the benefit of creating base images that are set up to work with your workflow. Because I already had the mysql-db-build image ready to go I just had to write 3 lines in a DockerFile.
One consideration in doing these custom base images is size. We want them to be as small as we can get them to speed up our processes. Images have to be downloaded, copied and executed. If they are big each step is going to take longer. I’m a little impatient so if it takes too long to do something I’m likely to skip it.
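
Keeping an eye on this is easy; docker images lists each image along with its size:

docker images jakewatkins/mysql-db-build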


Docker-Compose is a separate component of Docker. Its function is to orchestrate the configuration and execution of different components in a Docker solution. In our application we have 4 containers on two networks to coordinate.

version: "3"

# for production create a volume to hold the notes database
# In development we're not going to bother
# volumes:
#  notesdata:

networks:
  frontend:
  backend:

services:
  mysql:
    image: "mysql:8.0.0"
    networks:
      - backend
    ports:
      - 3306:3306
    environment:
      - MYSQL_ROOT_PASSWORD=mysecret
      - bind-address=

  dbinit:
    build:
      context: .
      dockerfile: dockerfile-builddb
    networks:
      - backend
    environment:
      - MYSQL_ROOT_PASSWORD=mysecret
      - INITDB=true
      - DBHOST=mysql
    depends_on:
      - mysql

# the source for our application has to be pushed to GIT before building
# the image
  mvc:
    build:
      context: .
      dockerfile: Dockerfile-selfcontained
    networks:
      - backend
      - frontend
    ports:
      - "5022:22"
    environment:
      - MYSQL_ROOT_PASSWORD=mysecret
      - DBHOST=mysql
      - DBPORT=3306
    depends_on:
      - mysql

  loadbalancer:
    image: dockercloud/haproxy:1.2.1
    ports:
      - 3000:80
    links:
      - mvc
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - frontend

To build the solution we go to the command line and execute
docker-compose build
The mysql and loadbalancer services will be skipped because we’re just using the raw images. The dbinit and mvc images will be built. Once they have finished we can start bringing up the solution by getting the database started:
docker-compose up dbinit
This will cause dbinit and mysql to be started. The shell script in dbinit will wait until mysql has started before executing the other steps. Once it is finished the dbinit container will exit, but the mysql container will keep running and be ready to serve up the Notes database.
Next we start the rest of the stack by bringing up the load balancer:
docker-compose up -d loadbalancer
In this case I’ve added the ‘-d’ flag to the up command. This tells docker-compose to start loadbalancer detached from the console. In the case of dbinit I didn’t use the ‘-d’ flag because that container exits when it has finished its work.
Because loadbalancer has a link to mvc and mvc depends upon mysql it will cause them to be started as well. We already started mysql so nothing happens there, but mvc will be started.
To test our solution out we can open a web browser and point it to http://localhost:3000 and we should see something like:
[Screenshot: the sample application’s home page]

Lessons learned

This is the point where I ran in to trouble. Mostly because I got busy with other projects and had large gaps of time between sessions on this project. Between getting the web application running and getting the DBInit image working I changed the Notes table to be the Note table. Because I didn’t have unit tests I didn’t have an automated way to catch this goof up.
As a result, my images all built and would start up without any trouble. But when I tried to access the application it would crash. I initially did not have the ASPNETCORE_ENVIRONMENT environment variable set, so I was getting the useless error page. However, it did tell me to add ASPNETCORE_ENVIRONMENT=Development to get the useful error page. Once I added that the issue was revealed, and I wished I had taken the time to write some unit tests.
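
For reference, that variable can be handed to the mvc service the same way the database settings are, either in its environment block or in an env file. Something like:

# docker-compose.yml, mvc service (fragment)
    environment:
      - ASPNETCORE_ENVIRONMENT=Development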


At this point I’ve walked the path from barely understanding Docker to being able to build realistic solutions that can be deployed in Docker. The basic thing we’ve seen with this solution is that our past skills don’t really change just because we are going into a container.


As a consultant and a software developer I find that training is the most important activity I engage in. To the point that it is worth giving up billable hours to regularly spend time training. The reason training is a first priority is that if you allow your skills to grow stale you will soon find yourself without hours to bill. It really is as simple as that.

Before going further, I should be clear about what I mean by ‘training’. Training to me just means either acquiring new knowledge or expanding existing knowledge. Besides attending a live class taught by an instructor, training can also take the form of reading or watching videos. There are other forms of training as well, like attending meetups and other gatherings where I can interact with other people who are either learning or are already experts.

The first part of training is figuring out what needs to be trained on. There is the maintenance of existing knowledge, and determining whether you should keep investing in it. For instance, I don’t spend any time looking at COM+, ASP.NET Forms or VBScript even though I used to be very good with all of them. I do need to keep spending some time going deeper into C# and JavaScript. For new knowledge, I’m learning NodeJS, Docker, OpenShift and ASP.NET MVC Core.

Having identified the areas that I’m going to focus on, I next do some research to get a bit deeper into the topic and figure out where I need to spend my time. For instance, with JavaScript I need to get better at working with asynchronous processes. For Docker, I’m learning how to create service stacks and debug running services from Visual Studio.

With the areas of focus identified I can then look at the resources I have access to and make a learning plan. I’m a Pluralsight subscriber so I’ll usually search for courses there to see if there is something there that will meet my needs. I’ll also search YouTube to see what they have. I then do a general search for the topic looking for blog posts and articles. I’ve lately added EdX as a resource.

This part is what I call curating; I’m just gathering up stuff and not getting into it yet. For blog posts and articles, I’ll skim them just to make sure they aren’t click bait or garbage. In the case of videos, I just add them to playlists or Channels (Pluralsight). Next, I break things up into what is fast and what will take more time. A short blog post or article will get consumed early. I’ve had cases where a two-page post was all I really needed. Next come short Pluralsight classes. Then YouTube videos. There is a big difference in quality between what I find on Pluralsight and YouTube. There are occasionally great YouTube videos, but in most cases they’re from companies (Docker has produced some excellent material). I’m very quick to kill videos, especially long ones.

In addition to directed training I also do a fair amount of ‘entertrainment’ where I just watch or read stuff because it is interesting. I find DefCon videos fit this category very well. I believe this is one aspect of training that people don’t give enough attention to. Three years ago, would you have known that Docker was going to be a big deal? I didn’t. I’d heard of DevOps but customers were not asking about it so I wasn’t paying attention. Had I been doing a better job with my ‘entertrainment’ I might have seen this stuff earlier and not found myself playing catch up. So my question now is: what’s next? That’s what the ‘entertrainment’ is meant to find out.

How much time you allocate to training is up to you and your ambitions. Just make sure you’re doing it and putting the effort in the right places. Also, make sure you have a way to look into the future so you don’t miss out on exciting new developments (blockchain, for instance).


Funny thing about growing in experience. Eventually people start referring to you as an expert at whatever it is you do. I’m always really uncomfortable about this title, it makes me feel like a fraud.

I always think to myself “don’t they realize I’m still learning?”. I’m constantly researching new things. I’m always reading other people’s blogs, watching YouTube and PluralSight videos. It never ends. As soon as I start thinking that I know something pretty well I find something new that I didn’t know.

It makes me wonder about people like Martin Fowler, …. Do they feel the same pressure? Or are they comfortable staying on well-worn paths and not wandering in the wilderness of the new?

I’m a big fan of BOFH and appreciate his definition of Expert: ‘Ex’ as in ‘has been’, ‘once was’, ‘is no more’, and ‘spurt’, a drip under pressure. No thank you.

Building a MySQL database in Docker

I was reading “Essential Docker for ASP.NET Core MVC”. It is a great book, but I wanted a better sample to show off. The specific problem I had with the book’s example was that wait-for-it didn’t work for me. Using mysqladmin’s ping seems like a better and more reliable way to accomplish the same goal. Also, I wanted a more complete application that both reads data and writes data. However, I didn’t want to make it too complicated because my primary focus is on Docker.

The sample application I’ve started building is also built on ASP.NET Core MVC. The major difference in the application is that I’ve moved the data model into its own assembly and I’m posting data back to the server to be stored. I don’t really have time to be too elaborate. Another difference is that I’m not using EF migrations to create the database. I usually start with my database and then create my POCOs from the database schema. I have T4 templates that do most of the work for me.

Because I don’t use EF migrations to create my database I must do additional work in Docker and docker-compose to get my database instance prepared. I’ll need an image that I can use to execute my SQL scripts against the MySQL server I’m using.

The MySQL image

The MySQL image itself already gets the job done for me. I don’t need to customize it. Just pull it from Docker Hub and put it to work. However, one consideration is that we are running our database server in a container. This means that if the container is shut down or stopped for any reason we’ll lose our data. In production, this would be bad. In development, I don’t think this is important and it may in fact be an advantage. Our production environment will just mount a volume so MySQL can store its databases persistently. I’ll get into production considerations in another post.

The MySQL build image

The MySQL build image just needs the MySQL command line client and nothing else. For this I decided to build a custom image starting with the Alpine image on Docker Hub. The reason I chose this image is that it is very small and provides a package manager that has what I need.

# MySQL dbinit image
FROM alpine:3.6
RUN apk add --no-cache mysql-client
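
Building and tagging the base image is a one-liner (the file name and tag here are just placeholders):

docker build -t mysql-build-base -f dockerfile-mysqlbase .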

This image doesn’t have an ENTRYPOINT in it because it will act as a base for our application’s dbinit. For the dbinit I created two SQL files. The first SQL file creates the database and schema. The second SQL file loads some sample data. In a real application, the second SQL file would load the lookup tables and other data that your application needs to get started. Finally, I need a shell script that orchestrates the work.

#!/bin/sh

# build the notes database
mysqladmin -u root -p$MYSQL_ROOT_PASSWORD -h$DBHOST ping --wait
# load the DDL
mysql -u root "-p$MYSQL_ROOT_PASSWORD" -h$DBHOST < /dbbuild/notesdb.sql
# load some sample data
mysql -u root "-p$MYSQL_ROOT_PASSWORD" -h$DBHOST < /dbbuild/notes-sample.sql

The script starts by using the mysqladmin tool to ping our MySQL database server and waiting for it to respond. We do this because we do not have control over when the other container starts up. Maybe MySQL is already running when this container starts, maybe it isn’t. Doing this helps ensure the rest of the script works. Once we know MySQL is running, the mysql client is used to load the SQL files.

The script does have dependencies on environment variables. While I was building this I simply created two PowerShell scripts that took care of setting everything up. This one starts the MySQL container:

docker run -d --rm --name testmysql -h testmysql -e MYSQL_ROOT_PASSWORD=mysecret -e bind-address: mysql

This one starts our build container:

$dbhostip = docker inspect --format '{{ .NetworkSettings.IPAddress }}' testmysql
docker run -it --rm --name testbuild  -e MYSQL_ROOT_PASSWORD=mysecret -e DBHOST=$dbhostip -e DBPORT=3306  testimage

This is where I ran into trouble. First, make sure you are storing your scripts as Unix formatted text files! This means that lines end with just an LF; Windows still uses CRLF for line endings. That drove me nuts, but then brought me to my real problem: Alpine does not have bash, only sh. So the first line of the script needs to be #!/bin/sh. Initially I didn’t bother doing that, figuring that Linux would just sort it out. It didn’t. This really drove me nuts trying to figure out what was going on. However, it also demonstrated why we like tiny containers. I was deleting and rebuilding my test image over and over. Because the base image was small the process only took a few seconds. If you’re working with a big image something like this will be unpleasant.
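
Two things would have saved me that pain (dos2unix may need to be installed, and the script name here is just an example): check and fix the line endings, and tell git to always check shell scripts out with LF endings:

# 'file' reports "with CRLF line terminators" if the script is DOS formatted
file build-notesdb.sh
dos2unix build-notesdb.sh

# .gitattributes entry so shell scripts always get LF line endings
*.sh text eol=lf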

Stacking containers

Once I had the images sorted out I could start my docker compose file that would orchestrate starting a MySQL container and running the database builder image. Here is the docker-compose file:

version: "3"

services:
  mysql:
    image: "mysql"
    env_file:
      - test.env

  dbinit:
    build:
      context: .
      dockerfile: dockerfile-builddbtest
    env_file:
      - test.env
    depends_on:
      - mysql
This is very stripped down to focus on my current goal of getting a MySQL instance up and running. There is a lot of additional stuff that can be added to this file. Right now, the focus is just on building our services. The first one is the MySQL service. All it does is tell docker-compose to use the MySQL image. If the image is not in the Docker cache it will pull it from the repository. The other part is the env_file attribute. I think this is a very cool feature of docker-compose and something that deserves more attention.

What env_file allows me to do is store environment variables in a separate file. The env file (test.env) looks like this:

MYSQL_ROOT_PASSWORD=mysecret
DBHOST=mysql
DBPORT=3306
It’s just a simple name/value pair dictionary. Each line has the name of the environment variable, an equal sign and the value you want. Be aware: there is no interpretation. Whatever you put after the equal sign is exactly the value you will see in the container. If you try a few variations out and then attach to the running containers you’ll see the different values. You can add comments to the file by starting lines with a # sign. Those lines are just ignored.

The reason the env_file is so cool is that the values being supplied to the container are separated from the docker-compose file. That means I can create environment variable files for each environment: one for self-hosted development, one for integration, one for QA, one for Staging and finally one for production. This means that as my project is promoted between environments we can change the values and not worry about accidentally storing passwords in source control.
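
As a sketch of how that plays out (the file names and values are made up), only the env file changes between environments:

# dev.env
MYSQL_ROOT_PASSWORD=mysecret
DBHOST=mysql

# qa.env
MYSQL_ROOT_PASSWORD=something-much-stronger
DBHOST=qa-mysql.internal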

Testing my stack

Once I have my docker-compose file all together I can start testing it out. The first step is to build my images:

docker-compose build

After the images have been built, all I have to do is:

docker-compose up dbinit

After dbinit finishes I can check to see if things worked the way I wanted. First, is the container running MySQL still up and running?
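
A quick docker ps shows the running containers (the exact names depend on the directory the compose file lives in):

docker ps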

It is. Happy days. Because I’m not using a volume, if that container stops for any reason the database I just created will go away and I’ll have to start over. Regardless, it is running. The question now is whether my database, table and data are there. To check I just do a docker exec into bash:

docker exec -it mysqlbuilder_mysql_1 bash

This gives me a command prompt. Now I just use the mysql client to connect and look around:

Everything appears to be good. The command I used was:

mysql -u root -p$MYSQL_ROOT_PASSWORD
which gives you a command prompt inside MySQL so you can execute SQL commands. First, I checked to see if my database was created using:

show databases;

This shows a list of the databases on the server. My notes database is there, so now we can see if my table is there and if it has any data:

Select * from notes.Note;

The results show that not only do I have a table but there is data in it. So, this was successful! Had the table not been created we would have gotten an error.

How is this useful?

What I’ve done is not particularly useful for production. Our database container will lose all of its data if I reboot my host, restart the container or anything else. I’ll cover going to production in another post. This is useful for development and testing, though. The final database builder image and the docker-compose file together provide me with a way to quickly spin up a new MySQL instance, create my database and load some data in. This means that I can quickly make changes and retest without a great deal of work. My code/debug cycle can be fast this way.


At this point I have a base image that I can reuse whenever I need to create a MySQL database hosted in a Docker container. I also have a reusable approach for using docker-compose to orchestrate bringing up my database server, initializing it and then bringing up an actual application. In my next post I’ll grow this another step by adding my ASP.NET Core MVC application to this mix. With all of the pieces tied together properly I’ll be able to cycle (code, unit test, debug, test) very quickly without a lot of manual work on my part.