Docker Containers Part II
Reviewing the context
My last post on containers was all the way back in February, and the pace of change in this space since then has been dizzying! I want to follow up that original post with some more observations and reflections on Windows Containers and how I’ve been using them to work smarter. Before that, though, there’s one lesson worth sharing all by itself: the idea of changing how we think about our software environments.
Around mid-July, Dave Sampson posted a question to my original post asking about the details of how I Dockerized the InRule service components. In my response I provided some background and a link to a GitHub gist that included a couple of different files. The Dockerfile and scripts in that gist cover the pure mechanics of getting started with running InRule in Windows Containers. As a note, the irCatalog database doesn’t need a full SQL Server instance; I’ve had success running the catalog against a sql-server-express Docker image, and I suspect it would be equally frictionless to use SQL Server on Linux to host the actual catalog DB. What the gist doesn’t show, though, is how to leverage containers to get the best use out of them, and that brings us back to the discussion of how we approach our environments.
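As a rough sketch of the catalog-database idea, here is how one might stand up SQL Server on Linux in a container to host the catalog DB. The container name and password are illustrative placeholders, and the image tag is one of Microsoft’s published SQL Server on Linux tags; adjust both to suit your setup.

```shell
# Sketch: host the irCatalog database in a SQL Server on Linux container.
# "ircatalog-db" and the SA password below are placeholders -- choose your own.
docker run -d --name ircatalog-db \
  -e "ACCEPT_EULA=Y" \
  -e "SA_PASSWORD=YourStr0ng!Passw0rd" \
  -p 1433:1433 \
  mcr.microsoft.com/mssql/server:2017-latest
```

The catalog service would then be pointed at this container’s port 1433 via its connection string, the same as it would be for any other SQL Server instance.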
Treating software environments like a pet…
In a traditional IT operation, environments are special snowflakes, each one unique because it has been lovingly tended and maintained through deployments thick and thin. “Shrapnel” accumulates in these environments from failed deployments, abandoned approaches, and littered code packages. Phrases such as “re-create the environment” are blasphemy, because No One Really Knows How It Works, and recreation is a time-intensive task that can take tens to hundreds of hours. In other words, the environment is treated the same way you’d treat a pet dog – it’s fed, cared for, and otherwise treated tenderly and lavished with attention. If it gets sick, we take it to the vet. We worry and fret about its condition.
This is not an efficient approach.
…Versus treating software environments like an animal herd
A shepherd or a cattle rancher is responsible for dozens or even hundreds of individual animals. If a cow gets hurt or sick and must be removed from the herd, it can be replaced with another animal with no special fanfare or ceremony. In the same fashion, specific instances of a software environment should not be accorded any special consideration or treatment. Being willing – and able – to destroy and recreate an environment at will is a critical factor for achieving success in today’s DevOps-focused technology climate.
Containers help avoid making pets out of an environment
Containers aren’t only useful for hosting services in a stack. Docker containers can be applied in a whole host of ways that yield improvements across the board. Specialized build steps and processes can be the bane of engineering teams striving for a repeatable, dependable build process. For example, I recently created a custom container image that uses bind mount volumes to generate HTML documentation from a set of Markdown documents. I then invoke that container as part of the build process, and I don’t have to worry about setting up the build agent with the customized rendering tools.
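The build-step pattern above can be sketched as a single docker invocation. The image name `docs-renderer` and the container-side paths are hypothetical stand-ins for whatever rendering image and conventions you build; the point is that the Markdown sources and HTML output live on the host and are bind-mounted into the container.

```shell
# Sketch: run a hypothetical "docs-renderer" image as a build step.
# Bind-mount the Markdown sources in read-only, and a directory to
# receive the generated HTML. All names/paths here are illustrative.
docker run --rm \
  -v "$(pwd)/docs:/src:ro" \
  -v "$(pwd)/build/html:/out" \
  docs-renderer /src /out
```

Because `--rm` discards the container when it exits, the rendering toolchain never has to be installed on, or leave residue in, the build agent itself – only the Docker engine is required.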
My experience with Docker and Windows Containers is still in its infancy in many ways; I am still discovering places where they make me more productive. Even so, I’ve been impressed with how much utility I’ve gotten from them. What has made the difference for me, in a way that goes beyond the use of containers, is understanding that in whatever project or work I take on, it’s worth putting in the time to make sure my environment can be destroyed, re-paved, and back up again with minimal time and effort.