In the project I am currently working on we use Docker in all environments, that is development, CI, staging and production. The technology stack is based on Python with Django, MySQL and MongoDB, to name just the most important elements. As good developers we have, of course, written a lot of unit, integration and UI tests. To author the UI or end-to-end tests we use Robot, a high-level framework on top of Selenium. In this post I want to talk about one challenge we faced when executing integration or UI tests and how we solved it.
We use docker-compose to define the setup in which we run our UI tests. The corresponding docker-compose YAML file looks similar to this:
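A sketch of what such a file might look like; the image tags, build contexts and the value of EXPOSED_PORT are assumptions for illustration:

```yaml
version: '2'

services:
  mongo:
    image: mongo:3.2
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: secret
  app:
    build: .
    links:
      - mongo
      - mysql
    environment:
      EXPOSED_PORT: 8000
  robot:
    build: ./robot
    links:
      - app
```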
As you can see, we have four services in our setup; each service runs in its own container. We have a container for MongoDB, one for MySQL, one for the application we want to test and one container to execute our Robot tests. Note how the application service app links to the MongoDB and the MySQL services. By defining those links we effectively configure Docker's DNS service and, as a consequence, we can connect to MongoDB and MySQL from the app container by using the names of the respective services, e.g. mysql. This is important for what follows.
Now the problem is that if we run our tests using

docker-compose -f docker-compose.ui-tests.yml up

the Robot container immediately starts to execute tests even though the application and the databases are not yet ready. This leads to failing tests, which we have to avoid.
But how can we make sure that everything is ready before the first test is executed? Docker cannot help us here, since Docker doesn't really know what's happening inside a container. The fact that a container is ready from Docker's perspective doesn't mean that the application running inside it is also ready. Databases in particular need some time to initialize; on slower machines that can take several seconds or more. On our CI agent, for example, MongoDB takes about 30 seconds to start up, and MySQL is no better.
How can we solve this problem? The solution we chose is the following:
- the application executes a step during initialization to wait for the database
- the Robot container executes a step during initialization to wait for the application
The latter is easy: since our application is a web application, we use
curl to access the index page. We repeat a GET request to this URL until the response status code is 200 (OK), normally waiting 5 seconds between subsequent requests. The logic is written in
Bash and looks similar to this:
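A minimal sketch of this wait loop. EXPOSED_PORT is one of the variables we define in the docker-compose file; the APP_HOST variable, the default values and the retry limit are assumptions for illustration:

```shell
# Repeat a GET request against the app's index page until it returns 200.
# APP_HOST is a hypothetical variable naming the app service;
# EXPOSED_PORT is defined in the docker-compose file.
wait_for_app() {
  url="${1:-http://${APP_HOST:-app}:${EXPOSED_PORT:-8000}/}"
  max_attempts="${2:-60}"
  interval="${3:-5}"
  attempt=1
  while true; do
    status=$(curl -s -o /dev/null -w '%{http_code}' "$url")
    if [ "$status" = "200" ]; then
      echo "Application is up."
      return 0
    fi
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "Application did not become ready in time." >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep "$interval"
  done
}

# Usage: wait_for_app, then hand over to the test runner, e.g.:
# wait_for_app && robot tests/
```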
Note how we use the environment variables, such as
EXPOSED_PORT, that we defined in the docker-compose file.
Waiting for the database is a bit trickier. Here we need to wait until the respective DB starts to listen on the defined TCP port. For this we use the logic found in the
wait-for-it.sh script that I found here. The code snippet in our Bash script looks similar to this:
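A stripped-down sketch of that logic, assuming Bash (whose /dev/tcp redirection lets us probe a TCP port without extra tools); the hostnames in the usage lines are the docker-compose service names, and the ports are the standard MySQL and MongoDB defaults:

```shell
# Wait until host:port accepts TCP connections
# (the same idea as wait-for-it.sh, reduced to the essentials).
wait_for_port() {
  host="$1"
  port="$2"
  max_attempts="${3:-60}"
  attempt=1
  # opening /dev/tcp/<host>/<port> fails until something listens there
  until (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "${host}:${port} did not become ready in time." >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep 1
  done
  echo "${host}:${port} is accepting connections."
}

# The service names from the docker-compose file double as hostnames:
# wait_for_port mysql 3306
# wait_for_port mongo 27017
```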
Note how I use the DNS names, e.g.
mysql, as hostnames in the snippet above.
Docker and docker-compose cannot help us with synchronizing applications running in different containers; that is our task. I have shown a technique that can be used to make sure that, in the context of integration and/or UI testing, the application and the databases are ready before the first test is executed. We used Bash scripts to implement the necessary logic.