Containerization 101 – OpenShift

What is OpenShift?

OpenShift builds on top of Docker by providing tools for orchestrating, scaling, and managing containers. Web-scale applications become very complex, and even with the efficiencies of containers, additional hardware will be needed for scaling. OpenShift makes it possible to scale containers across multiple hosts. OpenShift also provides a nice CI/CD system whereby each time you commit code to a git repository, OpenShift performs a build and deployment cycle for your application.

What makes OpenShift Important?

Docker provides a great tool for an individual developer to work in isolation. OpenShift provides additional capabilities that make it easier for a team to work together. Additionally, Docker doesn’t provide much in the way of management tooling to make it ready to run in a production environment. OpenShift fills that gap.

Over the next few weeks I have several posts about Docker and OpenShift planned. My goal in writing these is to lock in what I’ve learned. Along the way I’m building a lot of sample applications, POCs, and demos.

Containerization 101 – Docker

What are containers?

In very simple terms, containers are a mechanism for deploying applications. A container holds an application’s files, libraries, and the other dependencies it needs to execute. Containers isolate applications so that they cannot interfere with other applications, and other applications cannot interfere with them.

Containers can be thought of as lightweight virtual machines. A virtual machine is a complete deployment of a computer: it has a hypervisor or virtualization layer that simulates a real computer, a complete operating system, and then the applications it is running. A container, on the other hand, only has the resources necessary to run the application it is hosting. It uses the host system’s operating system and other resources. By sharing the operating system, containers are much less resource intensive and start more quickly.

Docker is the most widely used container platform. It works on all the major operating systems and supports both Linux and Windows containers, though Linux containers are more mature and have broader community support. Docker containers are also supported by major cloud services such as Azure and AWS.

In addition to isolating applications running on a host computer, Docker provides software-defined networks so containers running on a host can communicate without being exposed to the wider network. Docker also provides persistent storage for data created by containers, which makes it possible to host applications like database servers in containers.
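As a small illustration of both features (the network, volume, and container names here are hypothetical, and the exact mount path depends on the database image you use):

```shell
# Create a private network and a named volume, then run a database
# container attached to both. Containers on private-net can reach
# each other by name; the volume keeps the data if the container is
# removed and recreated.
docker network create private-net
docker volume create dbdata
docker run -d --name db --network private-net \
    -v dbdata:/var/lib/postgresql/data postgres
```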

What is important about containers?

Containers are important because they are less resource intensive than virtual machines. As a result they start up faster, and more of them can be hosted on a single machine. It is not uncommon for Docker itself to be run inside a virtual machine.

Because containers use fewer resources it is easier to have the same execution environment at all stages. The containers that are run in development are the same containers that are used for testing and then released to staging and finally production. It is realistic to create an environment where “it works on my machine” means it works everywhere.

Containers also make it possible to migrate an application to the cloud without significant effort. As previously discussed, a developer creates and works with container images in the development environment. At some point the container images are ‘lifted’ to QA, where testing is performed to identify defects. The images are then lifted to staging and eventually production. An organization could initially choose to run the application in its own data center but later decide to lift the application to the cloud. If the application is self-contained in its own group, or swarm, of containers then it won’t even require configuration changes when it’s lifted to the cloud. If the application is using external resources, the organization’s SAP system for instance, then things like network access will have to be configured.

What is Docker?

Docker is an open source project that has standardized containers. The idea of containers is not new; it goes back to how mainframes work. Linux introduced containers, but it didn’t offer an easy way to create them. That is where Docker came in.

In Docker, containers are created from images. An image is a representation of a file system (like a zip file) containing only the files specific to the application that will run in the container. This means that if the application needs files in /bin/ and /tmp/wwwroot, then the image will have just those files. A container is a running instance of an image. A single Docker host can run as many containers as it has memory and CPU to handle.

Going beyond containers, Docker offers stacks, which are groups of related containers. For instance, you can have one container that hosts a website and another container that hosts a database server. The stack gives Docker the information it needs to deploy the containers so that they will work together. In addition to the image information, a stack definition provides networking and volume information.
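As an illustration, a stack like that is described in a compose file. The service and image names below are hypothetical; this is a sketch of the shape, not a file from a real project:

```yaml
# Hypothetical stack: a website and a database server that share a
# private network, with a named volume so the database keeps its data.
version: "3"
services:
  web:
    image: jakewatkins/examplewebsite
    ports:
      - "8080:80"
    networks:
      - backend
  db:
    image: postgres
    volumes:
      - dbdata:/var/lib/postgresql/data
    networks:
      - backend
networks:
  backend:
volumes:
  dbdata:
```

A file like this can be deployed as a unit with “docker stack deploy -c docker-compose.yml mystack”.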

Docker also provides the tools needed to build container images from scratch or from other container images. As an example, as a Microsoft stack developer I use .NET Core to build applications. Microsoft provides an image for compiling ASP.NET Core applications inside a container, and I’ve built my own custom version of that image which adds a few additional tools I use as part of my build process.
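A sketch of what such a customized image can look like. The base image tag and the added tool here are assumptions for illustration, not my actual Dockerfile:

```dockerfile
# Hypothetical example: extend Microsoft's ASP.NET Core build image
# with an extra tool used during the build process.
FROM microsoft/aspnetcore-build:2.0
RUN npm install -g gulp
WORKDIR /app
```

Building it with “docker build” produces a new image that layers the additions on top of Microsoft’s image.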

Failing to mention Docker Hub would be a mistake. As mentioned, I have created my own image based on Microsoft’s image, and the Microsoft image is distributed through Docker Hub. You can search the hub and find thousands of images contributed by companies and individual developers. You can sign up for your own account free of charge and contribute images to the community as well.


Investing in yourself

My focus, from a technology perspective, is on containerization (Docker & OpenShift). I believe the best strategy is combining a technology with something else. It’s like the start-up pitch “Uber for X”, where you get “Uber for pets” or something like that (hopefully more useful). In my case the other thing I’m looking at is blockchain.

I realize the blockchain is a technology, but it really plays into the business side of things. How the blazes are we going to do this stuff, and what direction should I go? If this were 2009 it would be all about Bitcoin mining, but that ship has sailed; unless you want to invest a million dollars in a mining operation, I think you’re wasting your time. I think the applications of the blockchain outside of money are where the action is going to be. What new businesses can we enable because of the blockchain? Can I change my own business as a freelance software developer because of the blockchain?

One thing about this does give me pause. I read Satoshi Nakamoto’s paper “Bitcoin: A Peer-to-Peer Electronic Cash System” and I’m a bit dizzy. How the blazes did that unintelligible bit of writing create all of this? The paper is far from clear and leaves out a lot of critical information.

This is why I call this investing. It is not without risk. I’m investing my time and effort into understanding this with the expectation that I’ll profit. A clearer, more easily understood paper would mean less effort on my part. Instead I’ll have to do more research. That increases risk and eats up more time.

Docker Cheat Sheet

This Docker cheat sheet is just a list of commands that I’ve found useful when working with Docker. It is provided with little in the way of explanation or instruction; I’ll provide the details in the articles that follow.


Build an image

docker build . -t [accountname/imagename] -f [dockerfile name]


docker build . -t jakewatkins/exampleapp0831 -f dockerfile.standalone


Create a container

docker create -p [port mapping] --name [container name] [image name]


docker create -p 3000:80 --name testcontainer jakewatkins/exampleapp0831


Start a container

docker start [container name]


docker start testcontainer


Just run the container

docker run -p [port mapping] --name [container name] [image name]


docker run -p 3000:80 --name testcontainer jakewatkins/exampleapp0831


Get the logs from a running container

docker logs [container name]


docker logs testcontainer


Run an image, give me a shell and then remove the container when I exit:

docker run -it --rm [image name]


docker run -it --rm jakewatkins/example0831


I’m working on a series of articles about Containerization. For the past few months I’ve been having a blast playing with Docker and OpenShift. They’re very cool technologies but the documentation around them is rather mixed. With this series I hope to provide a clear direction for other people adopting this technology and flatten out the learning curve as much as possible.

Everything I write will be from my point of view and hands-on experience. This means that all work is from the point of view of a Microsoft-centric developer. My workstation runs Windows 10 Pro, I use Visual Studio 2017, and when coding I target the .NET Framework (.NET Core in this case).

I think this will be valuable because most of the voices I’m hearing in this space come from the Open Source community. However, today I think the distinction between being a Microsoft developer and an Open Source developer is meaningless. I write Node.js and use a lot of Open Source tooling. I even run Linux (RHEL). So perhaps my earlier Microsoft warning is meaningless.

Regardless – if you have any questions, please let me know and I’ll do my best to get them answered.

Improving my blog

I’m making an effort to post more regularly. However, before I really get going I need to clean up a few things in the posts I’m creating. In particular, I want to stop posting pictures of source code. It drives me nuts. It looks good, but you, my reader, can’t do anything with it. In order to work with the code you have to download it from my GitHub repository.

To fix this I’m playing around to see what I need to do. The first thing is that I can wrap code in [code] … [/code] tags, which does most of the work. However, I use Microsoft Word to compose my posts. If I paste source code into Word and use the [code] tags, you get something like this:

<span style="color:blue;font-family:Consolas;font-size:9pt;">public<span style="color:black;">
				<span style="color:blue;">static<span style="color:black;">
						<span style="color:blue;">void<span style="color:black;"> Main(<span style="color:blue;">string<span style="color:black;">[] args)

<span style="color:black;font-family:Consolas;font-size:9pt;">        {

<span style="color:black;font-family:Consolas;font-size:9pt;">
			<span style="color:blue;">var<span style="color:black;"> host = <span style="color:blue;">new<span style="color:black;">
							<span style="color:#2b91af;">WebHostBuilder<span style="color:black;">()

<span style="color:black;font-family:Consolas;font-size:9pt;">                .UseKestrel()

<span style="color:black;font-family:Consolas;font-size:9pt;">                .UseContentRoot(<span style="color:#2b91af;">Directory<span style="color:black;">.GetCurrentDirectory())

<span style="color:black;font-family:Consolas;font-size:9pt;">                .UseStartup<<span style="color:#2b91af;">Startup<span style="color:black;">>()

<span style="color:black;font-family:Consolas;font-size:9pt;">                .UseApplicationInsights()

<span style="color:black;font-family:Consolas;font-size:9pt;">                .Build();

<span style="color:black;font-family:Consolas;font-size:9pt;">            host.Run();

<span style="color:black;font-family:Consolas;font-size:9pt;">        }</span>

You’re better off if I just post a picture. So this means I’ll have a slightly more complex workflow than I want, but it will produce a better quality product. Eventually I’ll figure out a way to automate it. What I plan to do is just leave annotations in my post. They’ll look like:

[code language="csharp"]



I’ll post the article to the drafts folder on WordPress and then add the source code manually in WordPress’s editor. Done that way the code looks like this:

public static void Main(string[] args)

var host = new WebHostBuilder()


Hopefully this will immediately yield a better article for my readers. And later I’ll figure out how to automate the process so I can just publish in one step without any manual interventions.

Learning Gulp


I’m in learning mode, picking up a lot of the ‘new’ stuff I’ve missed over the past few years.  One area is all the new tooling like bower, yeoman, gulp and so on.  To a degree I’m skeptical that I need this stuff.  I like writing PowerShell scripts or MSBuild scripts for most tasks that need to be automated.  All the same: no reason not to at least look.

You can download all of the files from here

What is Gulp?

Gulp is described as a JavaScript task runner.  What that really means is that they’ve built a tool on top of Node.js with plug-ins available that allow you to work with files and do things to them.  It also includes a file watcher that will run scripts in response to changes in the files it is watching.

The idea behind Gulp is to provide a build process for your website.  HTML, JavaScript and CSS on their own don’t require anything to be built, but if you are using any of the languages built on top of them (LESS, SASS, etc.) then you need something to ‘compile’ the file and output your CSS or JavaScript.  You also might want your files minified, which requires some processing.

In my case I use Visual Studio which handles nearly all this stuff for me.  But not everybody has such an awesome tool at their disposal.

Get it setup so you can run it from PowerShell

Before going further I’ll point out that none of this is strictly necessary.  Most of the time I work from Visual Studio, which will process gulp files without any help.  You could also save yourself some trouble and just fire up the old Windows command prompt and live on happily.  However, I’m all about automation.  What if I want to use gulp as part of an automated workflow I’ve scheduled via Windows Task Scheduler or some other scheduler?  Most of my automation work is done via PowerShell.  If you’ll bear with me a little, this might make a little sense.  Or not.

As already mentioned, I primarily work from PowerShell, so a little additional work is necessary to get things set up.  The first task is to get node and npm on the workstation.  Go to the Node.js website and install NodeJS (just click on the tile that matches your system).  This will also install npm (the node package manager).  To verify that you’re ready to go, do the following:

Start PowerShell and at the command prompt type “node -v” this will show the version of Node.


Now type “npm -v” to see the version of Node Package Manager:


We are now ready to install gulp.   To start with, we’ll create a project.  In my case I keep stuff like this in a folder called trashcode, so we’ll create a gulp folder there to hold all of the experiments with gulp, and then a project inside it called concat.  Like so:


Next we initialize npm by typing “npm init” and answering the questions:


Notice that I changed the entry point from “index.js” to “gulpfile.js”.  I also provided a description to avoid the warning; if you’re not OCD you can leave the description empty.  You’ll now see we have a package.json file in our directory.


You don’t need to worry about this file.  However, you’ll notice the repository settings won’t work very well; I’m not going to bother with them for now.  Next, we’ll create a script to set up our project:
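The setup script appeared as a screenshot in the original post. Based on the description that follows, it would contain something like this (a sketch; the exact file isn’t shown here):

```shell
# Install gulp plus the two plug-in modules used in the first test,
# saving them as dev dependencies in package.json.
npm install gulp --save-dev
npm install gulp-clean --save-dev
npm install gulp-concat --save-dev
```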


The first line installs gulp and the next two lines install the two modules we’ll use in our first test.  When you run the script you’ll see a bunch of status information as the components are installed.  After it’s all done the directory will look like this:


You can explore node_modules to see what has been installed.

For our testing we’ll create additional directories and some test files:


All I want is some stuff to work with and that is what testSetup.ps1 provides.  Now we will create gulpfile.js:


Here is what is going on:

Lines 6, 7, and 8 tell node which packages we require and provide a shortcut for accessing the modules.

Line 10 creates an object called paths that we will store values in.

Lines 14, 15, 16, and 17 store the paths we will work with in the paths object.

Line 19 creates a task called “clean:dest” that uses the gulp-clean module to delete files.

Line 24 creates a task called “min:txt” that uses the gulp-concat module to concatenate all the files together.

Lines 30 and 31 set up tasks that can call multiple tasks.  In this case, if I had also created a min:images task I would make line 30 look like:


This would cause gulp to first call the task “min:txt” and then “min:images”.

The final task, “default”, first calls “clean” and then calls “min”.  Clean deletes all the files in the destination, and min creates the new file and puts it there.
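The gulpfile itself appeared as a screenshot in the original post. A sketch matching the description above might look like this (the path values are assumptions based on the test layout, and the line numbers referenced above won’t align exactly with this reconstruction):

```javascript
// gulpfile.js (sketch): clean the destination folder, then
// concatenate the source text files into one file.
var gulp = require('gulp');
var clean = require('gulp-clean');
var concat = require('gulp-concat');

// Object holding the paths we will work with.
var paths = {};
paths.source = './source/';
paths.dest = './destination/';
paths.sourceText = paths.source + '**/*.txt';
paths.destFiles = paths.dest + '*.*';

// Delete everything in the destination folder.
gulp.task('clean:dest', function () {
    return gulp.src(paths.destFiles, { read: false })
        .pipe(clean());
});

// Concatenate all the source text files into bigText.txt.
gulp.task('min:txt', function () {
    return gulp.src(paths.sourceText)
        .pipe(concat('bigText.txt'))
        .pipe(gulp.dest(paths.dest));
});

// Aggregate tasks that call the tasks above.
gulp.task('clean', ['clean:dest']);
gulp.task('min', ['min:txt']);
gulp.task('default', ['clean', 'min']);
```

This uses the gulp 3.x task syntax, where a task name can be followed by an array of other tasks to run.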

Now that we have our gulp file we can run it, right?  In a Visual Studio project we’d be good to go.   If you look inside node_modules in the gulp folder you’ll see “gulp.bat”, and if you open it in Notepad you’ll see what is going on.  To do the same in PowerShell, just create gulp.ps1 in the project directory.
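The gulp.ps1 file appeared as a screenshot in the original post. As a sketch, it can simply hand any arguments off to the gulp script installed inside node_modules, much like gulp.bat does:

```powershell
# gulp.ps1 (sketch): run the locally installed gulp, passing along
# any arguments given to this script.
node .\node_modules\gulp\bin\gulp.js $args
```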


At the command line if we run .\gulp.ps1 we’ll get the following results.


Looking in our destination folder we should see bigText.txt and it should have the contents of all the other files we put in our source directory and sub-directories.

Where to now?

At this point we really haven’t done that much with Gulp, but we have a good start to play with it and learn how to work with it.  My next step will be to review the modules that are available and learn to work with them.

My goal is always to automate routine tasks and find ways to streamline workflows.  I can see Gulp fitting in with PowerShell to help me accomplish those goals.