CI with TeamCity and Docker – Part 3


Introduction

This is the 3rd part of a post about using TeamCity and Docker to provide Continuous Integration. Here you can find part 1 and part 2. It is part of the series about Implementing a CI/CD Pipeline. Please refer to this post for an introduction and a complete table of contents.

In this part I first want to present a better alternative to Docker in Docker, which has a lot of drawbacks and potential side effects. Furthermore, I want to discuss how we can use Docker to test the container that contains our code.

Avoid Docker in Docker

According to the author of the Docker in Docker (DID) solution, it is actually not a good idea to use DID in Continuous Integration (CI). As he points out, DID has various disadvantages and side effects when used in CI. Thus I decided to look for a better solution. The same author suggests how we can configure our build agent to use Docker and create sibling containers instead of nested containers: we basically have to mount the Docker socket of the host into the container.
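As a sketch, starting the agent container with the host's Docker socket mounted could look like this (the image name here is just a placeholder):

```shell
# Mount the host's Docker socket into the agent container; docker
# commands issued inside the container then talk to the host's daemon
# and create sibling containers instead of nested ones.
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-teamcity-agent-image
```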

In doing so we can now access Docker from within the agent container and execute all operations that we could normally perform directly on the Docker host.

I also found some better-suited Docker images for TeamCity and the TeamCity agents, together with this Docker Compose YAML file:
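The original file accompanies the post; a minimal sketch of such a Compose file, with placeholder image names and an assumed server URL variable, could look like this:

```yaml
version: '2'

services:
  teamcity:
    image: my-teamcity-server-image      # placeholder image name
    ports:
      - "8111:8111"                      # TeamCity web UI

  agent:
    image: my-teamcity-agent-image       # placeholder image name
    environment:
      - SERVER_URL=http://teamcity:8111  # hypothetical variable name
    volumes:
      # mount the host's Docker socket to create sibling containers
      - /var/run/docker.sock:/var/run/docker.sock
```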

To start a TeamCity server and an agent we need to navigate to the folder containing the YAML file above and use this command:
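Assuming the file uses the default name docker-compose.yml, that is:

```shell
# start the TeamCity server and the agent in the background
docker-compose up -d
```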

After a minute or so the TeamCity server and the agent will be ready to be used, and I can access TeamCity as usual on port 8111 of my Docker host (i.e. 192.168.99.100:8111 in my case).

Testing our Code

Part of CI is testing the artifacts we just built. Docker makes this really simple and convenient. We can run an instance of our code in a Docker container and then run another container which contains our tests. Those tests will be executed against the public API of the former. Once the tests have executed we can just destroy (and remove) the two containers and we are left with a clean system. Since we now need to start multiple containers we can use Docker Compose to simplify the task. Let’s look at a sample (you can find the full source on GitHub). Here we have the docker-compose.test.yml file:
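The authoritative version is in the GitHub repository; reconstructed from the description below, the file looks roughly like this:

```yaml
version: '2'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    image: ci-webapi
    container_name: ci-webapi
    networks:
      - test
    ports:
      - "80:5000"   # map internal port 5000 to port 80 of the host

  sut:
    build:
      context: .
      dockerfile: Dockerfile.test
    image: ci-tests
    container_name: ci-tests
    networks:
      - test

networks:
  test:
```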

Let’s explain this content a bit. What this file basically says is that we are creating two containers, web and sut, as well as a network called test. Both containers are part of the network test and thus can access each other without the need for any publicly exposed ports. The container sut contains our test code. We build this container using the Dockerfile called Dockerfile.test. When running, the container will have the name ci-tests, and the Docker image from which it is instantiated has the same name. The container web contains the code we want to test (in this case an ASP.NET Core Web API). Its image, named ci-webapi, is built using the Dockerfile called Dockerfile, and the name of the running container will also be ci-webapi.

Note that we declare that the container web should map its internal port 5000 to port 80 of the host. For testing purposes on the CI server this is not really needed, but it came in handy while I was debugging my whole setup, since it allowed me to access the web service from my browser or from Postman.

Now to the actual test. When accessing our web API at the relative URI /api/projects it returns the following JSON content:
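The full payload is defined in the sample repository; based on the fragment the test looks for, it contains at least an entry like this:

```json
[
  { "id": 1, "name": "Heart Beat" }
]
```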

My test is implemented as a simple Bash script. It uses curl to request the URL web:5000/api/projects and analyzes the response, looking for the presence of the fragment {"id":1,"name":"Heart Beat"}. If this fragment is found the test succeeds, otherwise it fails. Not the most sophisticated test, I agree, but it serves to show the idea. The full test can be found here.
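The linked script is authoritative; a sketch of such a test in Bash could look like this:

```shell
#!/bin/bash
# Request the projects resource from the web container over the
# shared Docker network and look for the expected fragment.
response=$(curl -s web:5000/api/projects)

if echo "$response" | grep -q '{"id":1,"name":"Heart Beat"}'; then
  echo "Test passed"
  exit 0
else
  echo "Test failed"
  exit 1
fi
```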

Once we have everything prepared we can use docker-compose to build the images and execute the tests locally. Just run the following commands in your shell:
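Assuming the Compose file is named docker-compose.test.yml as above, the commands could be:

```shell
docker-compose -f docker-compose.test.yml build
docker-compose -f docker-compose.test.yml up
```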

We should see output from both containers, with the test container eventually reporting that the test passed.

Once we’re done we can tear down the whole test setup like this:
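Again assuming the file name from above:

```shell
# stop and remove both containers and the test network
docker-compose -f docker-compose.test.yml down
```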

Configuring TeamCity

We can configure TeamCity to listen to a given GitHub repository the usual way, and we can define a trigger that starts a build whenever someone pushes to the master branch of the repository. I have added one build configuration to the TeamCity project which contains 4 build steps. All build steps are of type Command Line. In the first step we build the Docker images containing a) the code and b) the tests:
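Using plain docker commands, this first step could look like this (file names and tags as assumed above):

```shell
# build the image with the application code and the image with the tests
docker build -t ci-webapi -f Dockerfile .
docker build -t ci-tests -f Dockerfile.test .
```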

In the second step we run an instance of both containers:
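A sketch of this step, reusing the Compose file and container name from above:

```shell
# start the web container and the test container in the background
docker-compose -f docker-compose.test.yml up -d
# block until the test container exits, then propagate its exit code
# as the return code of this build step
exit $(docker wait ci-tests)
```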

Note how we wait until the test container has completed and exited. The docker wait command then returns the exit code of the container, which we use in turn as the return code of the build step. In the third step we clean up after ourselves:
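For example, again assuming the Compose file from above:

```shell
# remove the containers and the network created for the test run
docker-compose -f docker-compose.test.yml down
```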

and in the final step we push the new Docker image with the application to Docker Hub. This step is of course only executed if all previous steps were successful.
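A sketch of this step; the Docker Hub account name is a placeholder:

```shell
# tag the application image for Docker Hub and push it
docker tag ci-webapi my-account/ci-webapi:latest
docker push my-account/ci-webapi:latest
```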

House keeping

After a lot of experimenting, my local Docker image repository got polluted with a lot of entries that are orphaned and have no associated tag. To remove all untagged Docker images I use this command:
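The post shows the exact command; a common way to do this is via the dangling filter:

```shell
# remove all images that have no tag (dangling images)
docker rmi $(docker images -q --filter "dangling=true")
```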

Summary

In this 3rd part of the post about CI using TeamCity and Docker I showed how we can configure the TeamCity agents to create sibling Docker containers rather than nested ones by mounting the Docker socket (/var/run/docker.sock) of the host. I also showed how we can further leverage Docker to run tests on the build server.
