How to set dynamic environment variables in ECS containers using mounted volumes and Docker Entrypoints

This post describes a problem that seems specific to ECS: using a single task definition for multiple environments. It is part of my mini-series of posts: Common challenges in containerizing your application with Amazon ECS.

Problem Description

During our migration to Docker and ECS, one of our goals is to have multiple environments (QA, Staging, Production) use the same containers and, ideally, the same ECS task definitions. To achieve this, each cluster of Docker EC2 hosts needs to know whether it’s a “QA”, “Staging”, or “Production” cluster and pass that information along to the containers running on it.

However, AWS ECS only allows setting Docker environment variables in the task definition, where they are hardcoded. This would require us to create a separate task definition for each environment, which is exactly what we’re trying to avoid. This limitation is listed as issue #3 in the AWS ECS Agent issue tracker on GitHub, and Amazon hasn’t taken it on since it was brought up in January of 2015: Need Host environment variable resolution to pass some information to a container · Issue #3 · aws/amazon-ecs-agent · GitHub

Solution to this problem: Volumes, CloudFormation, and Docker ENTRYPOINT scripts

This solution involves creating a file containing environment variables on the Docker host, using CloudFormation to automate the creation of that file, and then using an ENTRYPOINT script inspired by the official Postgres Docker image.

Use CloudFormation to create a file on the Docker host

To provision Docker hosts in an easy and scalable way, I use a CloudFormation template that defines an Auto-Scaling Group. For that Auto-Scaling Group, I use an AWS::CloudFormation::Init metadata block to create a file on each instance launched by the group. The launch configuration in the stack template could look like this:

Resources:
  EcsCluster:
    Type: "AWS::ECS::Cluster"
    Properties:
      ClusterName: MyCluster
  EcsInstanceLaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    DependsOn: EcsCluster
    Properties:
      IamInstanceProfile: ecsInstanceRole
      ImageId: !FindInMap [EcsOptimizedAmi, !Ref "AWS::Region", AmiId]
      InstanceType: !Ref EcsInstanceType
      KeyName: !Ref KeyName
      SecurityGroups:
        - !ImportValue SomeSecurityGroups
      UserData:
        Fn::Base64:
          !Sub |
            #!/bin/bash -xe
            # Register this instance with our ECS cluster.
            echo ECS_CLUSTER=${EcsCluster} >> /etc/ecs/ecs.config
            yum update -y
            yum install -y aws-cfn-bootstrap
            # Install the files and packages from the metadata.
            /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --region ${AWS::Region} --resource EcsInstanceLaunchConfiguration --configsets CreateEnvironmentFile
    Metadata:
      AWS::CloudFormation::Init:
        configSets:
          CreateEnvironmentFile:
            - createEnvFile
        createEnvFile:
          files:
            /root/ecs_helper/environment_vars:
              mode: "000644"
              owner: "root"
              group: "root"
              content: |
                NODE_ENV=production
                DB_PORT=3306

In this example, we create a file called /root/ecs_helper/environment_vars that contains any number of environment variables, one per line.

Alternatively, we could create this file manually on our Docker hosts without CloudFormation, but automation makes things easier. It also allows us to replace the hardcoded value of the NODE_ENV key with a CloudFormation parameter, making the stack template reusable across environments, as sketched below.
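As a minimal sketch of that parameterization (the parameter name NodeEnv and its allowed values are my own choices, not part of the original template), the stack could declare a parameter:

Parameters:
  NodeEnv:
    Type: String
    AllowedValues:
      - qa
      - staging
      - production
    Default: production

and reference it in the file content via Fn::Sub, replacing the hardcoded content block from the Metadata section above:

              content: !Sub |
                NODE_ENV=${NodeEnv}
                DB_PORT=3306

Each environment then passes its own NodeEnv value when creating or updating the stack.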

Mount the file containing environment variables on the target containers

With that file in place, we can adjust our ECS task definition to define a volume called ecs_envs_vars whose source path is the parent directory of our environment variable file, /root/ecs_helper.

In our container definition, we then define a mount point that maps the ecs_envs_vars volume to a container path of our choosing, e.g. /code/ecs_helper.

An abbreviated version of the JSON representation of that task definition could look like this:

"volumes": [
  {
    "host": {
      "sourcePath": "/root/ecs_helper"
    },
    "name": "ecs_envs_vars"
  }
],
"containerDefinitions": [
  {
    ...
    "mountPoints": [
      {
        "containerPath": "/code/ecs_helper",
        "sourceVolume": "envs",
        "readOnly": true
      }
    ],
    "name": "my_container",
    ...
  }
]
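Assuming the full task definition is saved as task-definition.json (a filename of my choosing for this example), it can be registered with the AWS CLI:

aws ecs register-task-definition --cli-input-json file://task-definition.json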

Dockerfile ENTRYPOINT script to set environment variables and execute original command

Here’s a quick refresher on ENTRYPOINT scripts in a Dockerfile. In a nutshell: an ENTRYPOINT script lets us run additional instructions before executing our original command as process 1 in the container.
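To illustrate the wiring (the script name docker-entrypoint.sh and the Node.js CMD are assumptions for this example, not prescribed by ECS), the relevant Dockerfile lines could look like this:

COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
# The exec-form ENTRYPOINT receives CMD as its arguments ("$@").
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["node", "server.js"]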

This ENTRYPOINT script reads the environment variable file that we mounted into the container, exports each variable, and then executes the original command.

#!/usr/bin/env bash
set -e

# Read environment variables from a file and export them, one per line.
file_env() {
	while read -r line || [[ -n $line ]]; do
		# Skip blank lines and comments.
		[[ -z $line || $line == \#* ]] && continue
		export "$line"
	done < "$1"
}

FILE="/code/ecs_helper/environment_vars"

# If the file exists, export the environment variables it contains.
if [ -f "$FILE" ]; then
	file_env "$FILE"
fi

exec "$@"

Conclusion

I wish AWS would respond to the many requests from users (53 upvotes on the first comment in the thread alone), so that this workaround wouldn’t be necessary. In the meantime, CloudFormation makes creating the file that defines the environment variables relatively easy, and we already use an ENTRYPOINT script in most of our Dockerfiles anyway.