Building a MySQL database in Docker

I was reading “Essential Docker for ASP.NET Core MVC”. It is a great book, but I wanted a better sample to show off. The specific problem I had with the book’s example was that wait-for-it didn’t work for me. Using mysqladmin’s ping seems like a better and more reliable way to accomplish the same goal. I also wanted a more complete application that both reads and writes data. However, I didn’t want to make it too complicated because my primary focus is on Docker.

The sample application I’ve started building is also built on ASP.NET Core MVC. The major difference in the application is that I’ve moved the data model into its own assembly and I’m posting data back to the server to be stored. I don’t really have time to be too elaborate. Another difference is that I’m not using EF migrations to create the database. I usually start with my database and then create my POCOs from the database schema. I have T4 templates that do most of the work for me.

Because I don’t use EF migrations to create my database, I must do additional work in Docker and docker-compose to get my database instance prepared. I’ll need an image that I can use to execute my SQL scripts against the MySQL server I’m using.

The MySQL image

The MySQL image itself already gets the job done for me. I don’t need to customize it; just pull it from Docker Hub and put it to work. However, one consideration is that we are running our database server in a container. This means that if the container is removed for any reason, we’ll lose our data. In production, this would be bad. In development, I don’t think this is important and it may in fact be an advantage. Our production environment will just mount a volume so MySQL can store its databases persistently. I’ll get into production considerations in another post.
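When the time comes, persisting the data is just a matter of mounting a volume at MySQL’s data directory. A minimal sketch (the volume and container names here are my own placeholders):

```shell
# Create a named volume and mount it at /var/lib/mysql, where MySQL
# stores its databases. "mysql-data" and "prodmysql" are example names.
docker volume create mysql-data
docker run -d --name prodmysql \
  -e MYSQL_ROOT_PASSWORD=mysecret \
  -v mysql-data:/var/lib/mysql \
  mysql
```

With this, stopping and removing the container leaves the databases intact in the `mysql-data` volume.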

The MySQL build image

The MySQL build image just needs the MySQL command-line client and nothing else. For this I decided to build a custom image starting from the Alpine image on Docker Hub. I chose this image because it is very small and provides a package manager that has what I need.

#
# MySQL dbinit image
#
FROM alpine:3.6
RUN apk add --no-cache mysql-client

This image doesn’t have an ENTRYPOINT in it because it will act as a base for our application’s dbinit. For the dbinit I created two SQL files. The first SQL file creates the database and schema. The second SQL file loads some sample data. In a real application, the second SQL file would load the lookup tables and other data that your application needs to get started. Finally, I need a shell script that orchestrates the work.

#!/bin/sh
# build the notes database
mysqladmin -u root "-p$MYSQL_ROOT_PASSWORD" -h$DBHOST ping --wait
# load the DDL
mysql -u root "-p$MYSQL_ROOT_PASSWORD" -h$DBHOST < /dbbuild/notesdb.sql
# load some sample data
mysql -u root "-p$MYSQL_ROOT_PASSWORD" -h$DBHOST < /dbbuild/notes-sample.sql

The script starts by using the mysqladmin tool to ping our MySQL database server and waiting for it to respond. We do this because we do not have control over when the other container starts. Maybe MySQL is already running when this container starts; maybe it isn’t. Waiting helps ensure the rest of the script works. Once we know MySQL is running, the mysql client is used to load the SQL files.
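Putting the pieces together, the Dockerfile that docker-compose builds for dbinit (dockerfile-builddbtest, referenced later) might look something like this sketch; the base image tag and script name are my own placeholders:

```dockerfile
# Sketch of dockerfile-builddbtest: start from the Alpine + mysql-client
# base built earlier ("dbinit-base" is a placeholder tag), copy in the
# SQL files and the orchestration script, and run the script on start.
FROM dbinit-base
COPY notesdb.sql notes-sample.sql dbinit.sh /dbbuild/
RUN chmod +x /dbbuild/dbinit.sh
ENTRYPOINT ["/dbbuild/dbinit.sh"]
```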

The script does have dependencies on environment variables. While I was building this, I simply created two PowerShell scripts that took care of setting everything up. This one starts the MySQL container:

docker run -d --rm --name testmysql -h testmysql -e MYSQL_ROOT_PASSWORD=mysecret -e bind-address=0.0.0.0 mysql

This one starts our build container:

$dbhostip = docker inspect --format '{{ .NetworkSettings.IPAddress }}' testmysql
docker run -it --rm --name testbuild  -e MYSQL_ROOT_PASSWORD=mysecret -e DBHOST=$dbhostip -e DBPORT=3306  testimage

This is where I ran into trouble. First, make sure you are storing your scripts as Unix-formatted text files! That means lines end with just an LF; Windows still uses CRLF for line endings. That drove me nuts, but it then led me to my real problem: Alpine does not have bash, only the Bourne shell, so the first line of the script needs to be #!/bin/sh. Initially I didn’t bother with that, figuring Linux would just sort it out. It didn’t. This really drove me nuts while I was trying to figure out what was going on. However, it also demonstrated why we like tiny containers. I was deleting and rebuilding my test image over and over. Because the base image was small, the process only took a few seconds. If you’re working with a big image, something like this will be unpleasant.
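If you suspect line-ending trouble, you can detect and fix CRLF endings from any shell with standard tools. A quick sketch (assumes GNU sed for the in-place edit; the file name is just an example):

```shell
# Make a sample script with Windows-style CRLF line endings.
printf '#!/bin/sh\r\necho hello\r\n' > sample.sh

# Detect: look for a carriage return anywhere in the file.
cr=$(printf '\r')
if grep -q "$cr" sample.sh; then
  echo "sample.sh has CRLF line endings"
fi

# Fix: strip the carriage return at the end of each line (GNU sed).
sed -i 's/\r$//' sample.sh
grep -q "$cr" sample.sh || echo "sample.sh is now LF-only"
```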

Stacking containers

Once I had the images sorted out, I could start on my docker-compose file, which orchestrates starting a MySQL container and running the database-builder image. Here is the docker-compose file:

version: "3"

services:
  mysql:
    image: "mysql"
    env_file:
      - test.env
  dbinit:
    build:
      context: .
      dockerfile: dockerfile-builddbtest
    env_file:
      - test.env
    depends_on:
      - mysql

This is very stripped down to focus on my current goal of getting a MySQL instance up and running; there is a lot of additional stuff that can be added to this file. Right now, the focus is just on building our services. The first one is the mysql service. All it does is tell docker-compose to use the mysql image. If the image is not in the Docker cache, it will be pulled from the repository. The other part is the env_file attribute. I think this is a very cool feature of docker-compose and something that deserves more attention.

What env_file allows me to do is store environment variables in a separate file. The env file looks like this:

MYSQL_ROOT_PASSWORD=mysecret
DBHOST=mysql
DBPORT=3306
bind-address=0.0.0.0

It’s just a simple name/value pair dictionary. Each line has the name of the environment variable, an equals sign, and the value you want. Be aware: there is no interpretation. Whatever you enter will be the value you see in the container. So, this:

MYSQL_ROOT_PASSWORD="mysecret"

Is not the same as this:

MYSQL_ROOT_PASSWORD=mysecret

If you try those out and then attach to the running containers you’ll see the different values. You can add comments to the file by starting lines with a # sign. Those lines are just ignored.
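You can see the difference without starting a container at all. The sketch below simulates the env_file parsing in plain shell: everything after the first equals sign becomes the value, quote characters included:

```shell
# Two env_file-style lines; only the first '=' separates name from value.
line1='MYSQL_ROOT_PASSWORD="mysecret"'
line2='MYSQL_ROOT_PASSWORD=mysecret'

# Take everything after the first '=' literally, as docker-compose does.
value1=${line1#*=}
value2=${line2#*=}

echo "$value1"   # the quote characters are part of the value: "mysecret"
echo "$value2"   # mysecret
```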

The reason env_file is so cool is that the values supplied to the container are separated from the docker-compose file. That means I can create an environment variable file for each environment: one for self-hosted development, one for integration, one for QA, one for staging, and finally one for production. As my project is promoted between environments, we can change the values and not worry about accidentally storing passwords in source control.

Testing my stack

Once I have my docker-compose file all together I can start testing it out. The first step is to build my images:

docker-compose build

After the images have been built, all I must do is:

docker-compose up dbinit

After dbinit finishes I can check to see if things worked the way I wanted. First, is the container running MySQL still up and running?

It is. Happy days. Because I’m not using a volume, if that container stops for any reason the database I just created will go away and I’ll have to start over. Regardless, it is running. The question now is whether my database, table, and data are there. To check, I just docker exec into bash:

docker exec -it mysqlbuilder_mysql_1 bash

This gives me a command prompt. Now I just use the mysql client to connect and look around:

Everything appears to be good. The commands I used were:

mysql -u root -p$MYSQL_ROOT_PASSWORD

which gives you a command prompt inside MySQL so you can execute SQL commands. First, I checked to see if my database was created using:

show databases;

This shows a list of the databases on the server. My notes database is there, so now we can see if my table is there and whether it has any data:

select * from notes.Note;

The results show that not only do I have a table but there is data in it. So, this was successful! Had the table not been created we would have gotten an error.
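The same checks can be run non-interactively with docker exec. A sketch (the container name depends on your compose project directory, and the root password is passed on the command line only for brevity):

```shell
# Run the verification queries without an interactive session.
# "mysqlbuilder_mysql_1" is whatever name compose gave the MySQL container.
docker exec mysqlbuilder_mysql_1 \
  mysql -u root -pmysecret -e 'show databases; select * from notes.Note;'
```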

How is this useful?

What I’ve done is not particularly useful for production. Our database container will lose all of its data if I reboot my host, restart the container, or anything else. I’ll cover going to production in another post. This is useful for development and testing, though. The final database-builder image and the docker-compose file together provide me with a way to quickly spin up a new MySQL instance, create my database, and load some data in. This means that I can quickly make changes and retest without a great deal of work. My code/debug cycle can be fast this way.

Conclusion

At this point I have a base image that I can reuse whenever I need to create a MySQL database hosted in a Docker container. I also have a reusable approach for using docker-compose to orchestrate bringing up my database server, initializing it and then bringing up an actual application. In my next post I’ll grow this another step by adding my ASP.NET Core MVC application to this mix. With all of the pieces tied together properly I’ll be able to cycle (code, unit test, debug, test) very quickly without a lot of manual work on my part.
