Easing the use of the AWS CLI

This post is about a small but welcome time-saver and how we achieved it using Docker.

In our company we work a lot with AWS, and since we automate everything, we use the AWS CLI. To make using the CLI as easy and frictionless as possible, we use Docker. Here is the Dockerfile for an image with the AWS CLI installed:

# base image with Python 2.7 and pip preinstalled
FROM python:2.7
# placeholders only -- pass the real values at run time (see the update below)
ENV AWS_DEFAULT_REGION='[your region]'
ENV AWS_ACCESS_KEY_ID='[your access key id]'
ENV AWS_SECRET_ACCESS_KEY='[your secret]'
# install the AWS CLI from PyPI
RUN pip install awscli
# drop into a shell by default
CMD /bin/bash

Note that the three environment variables AWS_DEFAULT_REGION, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY need to be set in the container so that the CLI can automatically authenticate with AWS.
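To double-check inside the container that the CLI picked these up, aws configure list shows each value together with its source (env for environment variables):

aws configure list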

Update: a few people rightfully pointed out that one should never, ever disclose secrets in public! And I agree 100% with this. In this regard my post was a bit misleading and my “Note:” further down not explicit enough. My fault, I agree. Thus let me say it loudly here: “Do not push any image that contains secrets to a public registry like Docker Hub!” Leave the Dockerfile from above as is, without modifications, and pass the real values of the secrets as command-line parameters when running a container, as shown further down.

Let’s build this image and push it to Docker Hub:

docker build -t gnschenker/awscli .

To push to Docker Hub I of course need to be logged in, which I can do with docker login. Now pushing is straightforward:

docker push gnschenker/awscli:latest

Note: I do not recommend hard-coding the values of the secrets into the Dockerfile; pass them as parameters when running the container instead. Do this:

docker run -it --rm -e AWS_DEFAULT_REGION='[your region]' -e AWS_ACCESS_KEY_ID='[your access ID]' -e AWS_SECRET_ACCESS_KEY='[your access key]' gnschenker/awscli:latest

Running the above command, you find yourself in a bash shell inside the container and can use the AWS CLI. Try typing something like this:

aws ecs list-clusters

to get a list of all ECS clusters in your account.
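The CLI returns JSON; with a single cluster, the output looks roughly like this (account ID and cluster name are placeholders):

{
    "clusterArns": [
        "arn:aws:ecs:us-east-1:123456789012:cluster/default"
    ]
}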

To simplify my life, I define an alias for the above command in my bash profile (file ~/.bash_profile). Let’s call it awscli:

alias awscli="docker run -it --rm \
  -e AWS_DEFAULT_REGION='[your region]' \
  -e AWS_ACCESS_KEY_ID='[your access ID]' \
  -e AWS_SECRET_ACCESS_KEY='[your access key]' \
  --entrypoint aws \
  gnschenker/awscli:latest"

Once I have done that and sourced the profile, I can use the CLI, e.g. like this:

awscli s3 ls

and I get the list of all S3 buckets defined in my account.

Thanks to the fact that Docker containers are ephemeral by design, they are really fast to start up (once you have the Docker image in your local cache), and thus using a container is similar in experience to natively installing the AWS CLI on your machine and using it.

About Gabriel Schenker

Gabriel N. Schenker started his career as a physicist. Following his passion and interest in stars and the universe he chose to write his Ph.D. thesis in astrophysics. Soon after this he dedicated all his time to his second passion, writing and architecting software. Gabriel has since been working for over 25 years as a consultant, software architect, trainer, and mentor mainly on the .NET platform. He is currently working as senior software architect at Alien Vault in Austin, Texas. Gabriel is passionate about software development and tries to make the life of developers easier by providing guidelines and frameworks to reduce friction in the software development process. Gabriel is married and father of four children and during his spare time likes hiking in the mountains, cooking and reading.
  • Why not create a bash alias instead?

  • James Nugent

    This method seems suboptimal – at best you still have unencrypted AWS credentials in your Bash profile. If you are OK with having credentials sat around unencrypted on disk, the AWS CLI _already_ offers a superior solution in the form of named profiles (http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-multiple-profiles).

    If instead you’d rather not have these sat around on disk however (vastly preferable) the best solution is to use envchain (https://github.com/sorah/envchain).
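    For reference, a named-profile setup keeps the keys in ~/.aws/credentials, roughly like this (the profile name work is made up):

    [default]
    aws_access_key_id = [your access key id]
    aws_secret_access_key = [your secret]

    [work]
    aws_access_key_id = [your access key id]
    aws_secret_access_key = [your secret]

    and is selected per command, e.g. aws s3 ls --profile work. With envchain, the variables live in the OS keychain instead and are injected per invocation:

    # store the variables once in the keychain under the namespace "aws"
    envchain --set aws AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION
    # run a command with those variables set
    envchain aws aws s3 ls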

    • Thanks for sharing https://github.com/sorah/envchain

      I was looking for this for a while now :)

    • gabrielschenker

      Thanks James for the feedback. You are right, we should be a bit more careful with secrets. Unfortunately, reality shows that security is painful and causes friction, and when there is friction, people try to work around the problem. Until we have an easy and standard way of dealing with the problem, we are always going to choose suboptimal ways.

  • HL3

    Anyone with permission to execute docker CLI commands can see your AWS credentials; all it takes is one docker inspect command. Easy, yes; secure, no.
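    To illustrate the point (the container ID is a placeholder):

    # prints every environment variable of the container, secrets included
    docker inspect --format '{{ .Config.Env }}' [container-id]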

  • Instead of configuring the aws cli via environment, you could always mount in your ~/.aws/credentials file. You could also use something like Fugu (https://github.com/mattes/fugu) to take some of the pain out of that very long docker run command.
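    A sketch of that variant, assuming the container runs as root (as in the stock python image) so the CLI looks in /root/.aws:

    docker run -it --rm \
      -v ~/.aws:/root/.aws:ro \
      --entrypoint aws \
      gnschenker/awscli:latest s3 ls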

  • Jeff L

    Any particular reason you chose to use python:2.7 over python:2.7-alpine? I found that using the python:2.7-alpine image slims the docker image down 600MB from 710MB to 110MB.
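    The slimmer variant would differ only in the base image (ENV placeholders omitted here, since the secrets are passed at run time anyway); note that alpine ships /bin/sh rather than bash:

    FROM python:2.7-alpine
    RUN pip install awscli
    CMD /bin/sh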

    • gabrielschenker

      no special reason, just laziness on my side :-)
      But good catch. Thanks for the hint.