In this post we will work with SwarmKit directly instead of accessing it through the Docker CLI. For that we first have to build the necessary components from source, which we can find on GitHub.
You can find the links to the previous 4 parts of this series here. There you will also find links to my other container-related posts.
Build the infrastructure
Once again we will use VirtualBox to create a few virtual machines that will be the members of our cluster. First make sure that you have no existing VM called
nodeX, where X is a number between 1 and 5. Otherwise use
docker-machine rm nodeX to remove the corresponding nodes. Once we're ready to go, let's build 5 VMs with this command
for n in $(seq 1 5); do
  docker-machine create --driver virtualbox node$n
done
As always, building the infrastructure is the most time-consuming task by far. On my laptop the above command takes a couple of minutes. The equivalent on, say, AWS or Azure would also take a few minutes.
Luckily we don’t have to do that very often. On the other hand, what I just said sounds a bit silly if you’re an oldie like me: I still remember the days when we had to wait weeks to get a new VM, or even worse, months to get a new physical server. So, we are totally spoiled. (Rant over.)
Once the VMs are built use
docker-machine ls
to verify that all machines are up and running as expected
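If you'd rather script that check, a small loop over docker-machine status does the job (a sketch of my own; it assumes the five VMs are named node1 through node5 and degrades gracefully where docker-machine is not installed):

```shell
# Build the list of expected VM names.
nodes=$(for n in $(seq 1 5); do echo "node$n"; done)

# Ask docker-machine for the state of each VM; fall back to a
# message when the tool is not on the PATH.
for node in $nodes; do
  if command -v docker-machine >/dev/null 2>&1; then
    echo "$node: $(docker-machine status "$node")"
  else
    echo "$node: docker-machine not available"
  fi
done
```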
Build SwarmKit Binaries
To build the binaries of SwarmKit we can either use an existing Go environment on our laptop and follow the instructions here, or use the golang Docker container to build the binaries inside a container without the need to have Go natively installed.
We can SSH into node1, which later should become the leader of the swarm.
docker-machine ssh node1
On our leader we first create a new directory, e.g. swarmkit, and cd into it
mkdir ~/swarmkit && cd ~/swarmkit
we then clone the source from GitHub using Go
docker run --rm -t -v $(pwd):/go golang:1.7 go get -d github.com/docker/swarmkit
this will put the source under the directory
/go/src/github.com/docker/swarmkit inside the container; since we mounted the current directory as /go, that corresponds to src/github.com/docker/swarmkit on the host. Finally we can build the binaries, again using the Go container
docker run --rm -t \
  -v $(pwd):/go \
  -w /go/src/github.com/docker/swarmkit \
  golang:1.7 bash -c "make binaries"
We should see something like this
and voilà, you should find the binaries in the subfolder bin of the swarmkit folder.
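A quick listing confirms the build artifacts; the two binaries we care about in this post are swarmd and swarmctl (depending on the SwarmKit version, the bin folder may contain a few additional helper tools):

```shell
# Location of the build output, relative to the directory we
# mounted into the golang container earlier.
BIN_DIR="$(pwd)/src/github.com/docker/swarmkit/bin"

if [ -d "$BIN_DIR" ]; then
  ls -l "$BIN_DIR"
else
  echo "bin folder not found at $BIN_DIR (run 'make binaries' first)"
fi
```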
Using the SwarmCtl Utility
To make
swarmd and swarmctl available everywhere we can create symlinks to these two binaries in the /usr/bin folder
sudo ln -s ~/swarmkit/src/github.com/docker/swarmkit/bin/swarmd /usr/bin/swarmd
sudo ln -s ~/swarmkit/src/github.com/docker/swarmkit/bin/swarmctl /usr/bin/swarmctl
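To verify that the symlinks took effect we can ask the shell where it now resolves the two tools (just a sanity check; on the VM both should point into /usr/bin):

```shell
# command -v prints the path a command resolves to, or nothing
# when the command is not found on the PATH.
for tool in swarmd swarmctl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool -> $(command -v "$tool")"
  else
    echo "$tool not found on PATH"
  fi
done
```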
now we can test the tool by entering
swarmctl version
and we should see something along the lines of
swarmctl github.com/docker/swarmkit v1.12.0-714-gefd44df
Create a Swarm
Initializing the Swarm
Similar to what we were doing in part 1, we first need to initialize a swarm. Still logged in to
node1 we can execute this command to do so
swarmd -d /tmp/node1 --listen-control-api /tmp/node1/swarm.sock --hostname node1
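Note that swarmd stays in the foreground and occupies the terminal, which is why we open a second SSH session below. Alternatively you can start it in the background and redirect its output to a log file (a sketch of my own; the log path is an arbitrary choice):

```shell
# Start swarmd detached from the terminal; stdout and stderr go to
# a log file we can follow with `tail -f /tmp/node1/swarmd.log`.
mkdir -p /tmp/node1
nohup swarmd -d /tmp/node1 \
  --listen-control-api /tmp/node1/swarm.sock \
  --hostname node1 > /tmp/node1/swarmd.log 2>&1 &
echo "swarmd started with PID $!"
```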
Let’s open a new SSH session to
node1 and assign the path of the swarm socket to the environment variable SWARM_SOCKET, which swarmctl evaluates
export SWARM_SOCKET=/tmp/node1/swarm.sock
Now we can use swarmctl to inspect the swarm
swarmctl cluster inspect default
and we should see something along the lines of
Please note the two swarm tokens that we see at the end of the output above. We will be using those tokens to join the other VMs (we call them nodes) to the swarm either as master or as worker nodes. We have a token for each role.
Copy Swarmkit Binaries
To copy the SwarmKit binaries (swarmctl and swarmd) to all the other nodes we can use this command
for n in $(seq 2 5); do
  docker-machine scp node1:swarmkit/src/github.com/docker/swarmkit/bin/swarmd node$n:/home/docker/
  docker-machine scp node1:swarmkit/src/github.com/docker/swarmkit/bin/swarmctl node$n:/home/docker/
done
Joining Worker Nodes
SSH into e.g. node2 and join it to the cluster as a worker node
./swarmd -d /tmp/node2 --hostname node2 --join-addr 192.168.99.100:4242 --join-token <Worker Token>
In my case the
<Worker Token> is the worker token displayed in the cluster inspect output above. The
join-addr is the IP address of
node1 of your setup. You can get it via
docker-machine ip node1
in my case it is 192.168.99.100.
Repeat the same for
node3; make sure to replace
node2 with node3 in the join command. Back on
node1 we can now execute the command
swarmctl node ls
and should see something like this
As you can see, we now have a cluster of 3 nodes with one master (node1) and two workers (node2 and node3). Please join the remaining two nodes 4 and 5 with the same approach as above.
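If you prefer to script those joins from your host instead of typing them on each node, a loop like the following can help. It only prints the commands it would run, because the worker token is specific to your cluster (a dry-run sketch; MANAGER_IP and WORKER_TOKEN are placeholders you have to substitute with your own values):

```shell
MANAGER_IP="192.168.99.100"    # IP of node1; get yours via `docker-machine ip node1`
WORKER_TOKEN="<Worker Token>"  # worker token from `swarmctl cluster inspect default`

for n in 4 5; do
  # Remove the leading `echo` to actually execute the join on node$n.
  echo docker-machine ssh "node$n" \
    "./swarmd -d /tmp/node$n --hostname node$n --join-addr $MANAGER_IP:4242 --join-token $WORKER_TOKEN"
done
```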
Having a swarm, we can now create services and update them using the
swarmctl binary. Let’s create a service using the service create command
swarmctl service create --name nginx --image nginx:latest
This will create the service and run one container instance on a node of our cluster. We can use
swarmctl service ls
to list all our services that are defined for this cluster. We should see something like this
If we want to see more specific information about a particular service we can use the inspect command
swarmctl service inspect nginx
and should get a much more detailed output.
We can see a lot of details in the above output. I want to specifically point out the column
Node, which tells us on which node the nginx container is running.
Now if we want to scale this service we can use the update command
swarmctl service update nginx --replicas 2
after a short moment (needed to download the image on the remaining node) we should see this when executing the inspect command again
As expected nginx is now running on two nodes of our cluster.
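Scaling is asynchronous: the update command returns immediately while the new task is still pulling the image. In scripts, a small wait loop around swarmctl service ls can be handy (a sketch of my own; it assumes the output contains a replica count such as 2/2 once all tasks are running, so verify the exact column format against your SwarmKit version):

```shell
# Poll the service list until the desired replica count shows up,
# or give up after roughly 30 seconds.
wait_for_replicas() {
  target="$1"   # e.g. "2/2"
  for i in $(seq 1 30); do
    if swarmctl service ls 2>/dev/null | grep -q "$target"; then
      echo "service reached $target replicas"
      return 0
    fi
    sleep 1
  done
  echo "timed out waiting for $target replicas"
  return 1
}

# Example: wait_for_replicas "2/2"
```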
In this part we have used the Docker SwarmKit directly to create a swarm and to define and run services on this cluster. In the previous posts of this series we used the Docker CLI to execute the same tasks, but under the hood the CLI just calls or uses the SwarmKit.
If you are interested in more articles about containers in general and Docker specifically, please refer to this index post.