
Docker-Compose in AWS ECS with EFS volume mounts

Install the AWS ECS CLI (ecs-cli) to run commands from the terminal; download instructions are available in the AWS documentation.

Add ecs-cli to your PATH if it isn't there already.

Make sure you have a local AWS IAM user profile set up under the ~/.aws folder that can create ECS clusters.

To configure the ecs-cli run the commands below:

    export AWS_PROFILE=<<Your profile name>>

then set the cluster name and region. This creates a config file under ~/.ecs/config:

    ecs-cli configure \
      --cluster ec2-tutorial \
      --default-launch-type EC2 \
      --config-name ec2-tutorial \
      --region ap-southeast-2
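After running the command, the generated ~/.ecs/config should look roughly like the fragment below (illustrative; the exact keys can vary between ecs-cli versions):

```yaml
version: v1
default: ec2-tutorial
clusters:
  ec2-tutorial:
    cluster: ec2-tutorial
    region: ap-southeast-2
    default_launch_type: EC2
```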

To spin up a new cluster run the command:

    ecs-cli up \
      --keypair <<Your KeyPair name in EC2>> \
      --capability-iam \
      --size 1 \
      --instance-type t2.2xlarge \
      --cluster-config ec2-tutorial \
      --ecs-profile ec2-tutorial-profile

This should create a new ECS cluster with a single t2.2xlarge EC2 instance.

To spin the cluster down, run the command:

    ecs-cli down

Next, create the docker-compose file that needs to be deployed to the ECS cluster and run the command below:

    ecs-cli compose \
      --file <<Path to your docker-compose file>>.yml \
      --cluster-config ec2-tutorial \
      --ecs-profile ec2-tutorial-profile \
      up

For this post, we are using a compose file that will initially deploy Redis, Elasticsearch, and Kibana. Later on we'll extend it to run Zeebe with a persistent EFS volume.

    version: "3"

    networks:
      zeebe_network:

    volumes:
      zeebe_data:
      zeebe_elasticsearch_data:
      cpo_redis_data:
      zeebe-efs:

    services:
      redis:
        image: redis
        hostname: redis
        ports:
          - "6379:6379"
        volumes:
          - cpo_redis_data:/data
        networks:
          - zeebe_network

      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.2
        ports:
          - "9200:9200"
        environment:
          - discovery.type=single-node
          - cluster.name=elasticsearch
          - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
        volumes:
          - zeebe_elasticsearch_data:/usr/share/elasticsearch/data
        networks:
          - zeebe_network

      kibana:
        image: docker.elastic.co/kibana/kibana-oss:6.7.1
        ports:
          - "5601:5601"
        networks:
          - zeebe_network

Because Elasticsearch is Java based, it understandably needs more memory. To allocate more memory and CPU to a given container, create a new file called ecs-params.yml in the current directory (where the ecs-cli command is run from). The ecs-cli tool automatically looks for this file and configures the containers accordingly, provided there is enough CPU and memory across the EC2 instances. Here is a sample file:

    version: 1
    task_definition:
      services:
        elasticsearch:
          essential: true
          mem_limit: 2000000000
        zeebe:
          essential: true
          mem_limit: 2000000000

Next, to add a persistent volume, we'll need to provision a network file system (NFS) share on Amazon Elastic File System (EFS).

  • Choose the VPC created by ecs-cli up.
  • Select the subnets in that VPC.
  • Remove the default security group and choose the one attached to our EC2 instance (t2.2xlarge).
  • Add a Name tag in the next step.
  • Finally, create the file system.
  • Open SSH access on the EC2 instance's security group: inbound TCP port 22 from the dev machine's public IP.
  • Open NFS access on the EC2 instance's security group from both subnets (a & b) within the VPC: inbound TCP port 2049 (NFS).

[Screenshot: security group inbound rules]

Next, SSH into the EC2 instance and run nmap against the provisioned EFS to make sure TCP port 2049 shows up in the open state.

    nmap -Pn -p nfs fs-xxxxxx.efs.ap-southeast-2.amazonaws.com
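If nmap is not installed on the instance, the same reachability check can be scripted; here is a small sketch in Python (the EFS DNS name in the comment is a placeholder):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS resolution failures
        return False

# Example (replace with your actual EFS DNS name):
# port_open("fs-xxxxxx.efs.ap-southeast-2.amazonaws.com", 2049)
```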

Next, we will make this provisioned EFS available as a Docker volume to a new container (zeebe) within our compose file. This can be done by adding the following snippet to the ecs-params.yml file.

    version: 1
    task_definition:
      services:
        elasticsearch:
          essential: true
          mem_limit: 2000000000
        zeebe:
          essential: true
          mem_limit: 2000000000
      docker_volumes:
        - name: zeebe-efs
          scope: shared
          driver: local
          autoprovision: true
          driver_opts:
            type: nfs
            device: fs-xxxxxx.efs.ap-southeast-2.amazonaws.com:/zeebe
            o: addr=fs-xxxxxx.efs.ap-southeast-2.amazonaws.com,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=120,retrans=2,noresvport

Run the ecs-cli compose command again and check the JSON tab for the new ECS task in the ECS console; it should now show an available Docker volume named zeebe-efs pointing to our provisioned EFS.
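For reference, the volume entry in the registered task definition JSON should look roughly like this (illustrative; field names follow the ECS task definition schema):

```json
"volumes": [
  {
    "name": "zeebe-efs",
    "dockerVolumeConfiguration": {
      "scope": "shared",
      "autoprovision": true,
      "driver": "local",
      "driverOpts": {
        "type": "nfs",
        "device": "fs-xxxxxx.efs.ap-southeast-2.amazonaws.com:/zeebe",
        "o": "addr=fs-xxxxxx.efs.ap-southeast-2.amazonaws.com,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=120,retrans=2,noresvport"
      }
    }
  }
]
```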

Next, let's mount the EFS on our EC2 instance using the mount instructions provided in the EFS web console (select the file system and choose the attach/connect option to see them). Also make a sub-directory under the mount point: /srv/zeebe-efs/zeebe. We will be mounting the contents of the zeebe folder as a volume in our zeebe container.
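The console instructions boil down to something like the following (a sketch; the file-system ID and mount path are placeholders, and the amazon-efs-utils mount helper is an alternative to a raw NFS mount):

```shell
# Create the mount point and mount the EFS share over NFSv4.1
sudo mkdir -p /srv/zeebe-efs
sudo mount -t nfs4 \
  -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
  fs-xxxxxx.efs.ap-southeast-2.amazonaws.com:/ /srv/zeebe-efs

# Create the sub-directory that will back the zeebe container volume
sudo mkdir -p /srv/zeebe-efs/zeebe
```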

Once mounted, let's scp a Zeebe config file into the zeebe folder on that EFS volume.

    scp -i <<Path to PEM file>> zeebe.cfg.toml \
      ec2-user@<<EC2 public DNS>>:/srv/zeebe-efs/zeebe/zeebe.cfg.toml

If there is a permission denied error during scp, run the command below on the instance (from the /srv/zeebe-efs folder):

    sudo chown ec2-user zeebe

Next, add the zeebe container and check that it picks up that Docker volume. This can be verified by running docker exec against the running container:

    docker exec -it <<zeebe container id>> /bin/bash

The contents of /usr/local/zeebe/conf/zeebe.cfg.toml should match the toml file we scp'ed in the previous step. The final compose file looks like this:

    version: "3"

    networks:
      zeebe_network:

    volumes:
      zeebe_data:
      zeebe_elasticsearch_data:
      cpo_redis_data:
      zeebe-efs:

    services:
      redis:
        image: redis
        hostname: redis
        ports:
          - "6379:6379"
        volumes:
          - cpo_redis_data:/data
        networks:
          - zeebe_network

      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.2
        ports:
          - "9200:9200"
        environment:
          - discovery.type=single-node
          - cluster.name=elasticsearch
          - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
        volumes:
          - zeebe_elasticsearch_data:/usr/share/elasticsearch/data
        networks:
          - zeebe_network

      kibana:
        image: docker.elastic.co/kibana/kibana-oss:6.7.1
        ports:
          - "5601:5601"
        networks:
          - zeebe_network

      zeebe:
        image: camunda/zeebe:latest
        hostname: zeebe
        environment:
          - ZEEBE_LOG_LEVEL=debug
        volumes:
          - zeebe_data:/usr/local/zeebe/data
          - zeebe-efs:/usr/local/zeebe/conf
        ports:
          - "26500:26500"
          - "9600:9600"
        depends_on:
          - elasticsearch
        networks:
          - zeebe_network