atmos

Introduction

atmos is both a library and a command-line tool for provisioning, managing and orchestrating workflows across various toolchains. We use it extensively for automating cloud infrastructure and Kubernetes clusters.

atmos includes workflows for dealing with:

  • Provisioning Terraform components
  • Deploying helm charts to Kubernetes clusters using helmfile
  • Executing helm commands on Kubernetes clusters
  • Provisioning Istio on Kubernetes clusters using the Istio operator and helmfile
  • Executing kubectl commands on Kubernetes clusters
  • Executing AWS SDK commands to orchestrate cloud infrastructure
  • Running AWS CDK constructs to define cloud resources
  • Executing commands for the Serverless Framework
  • Executing shell commands
  • Combining various commands into workflows to execute many commands sequentially in just one step
  • ... and many more

In essence, it's a tool that orchestrates other CLI tools in a consistent and self-explanatory manner. It acts as a superset of other tools and task runners (e.g. make, terragrunt, terraform, aws cli, gcloud, etc.) and is intended to tie everything together, so you can provide a simple CLI interface for your organization.

Moreover, atmos is not only a command-line interface for managing clouds and clusters. It provides many useful patterns and best practices, such as:

  • A consistent Terraform and helmfile project structure (so everybody knows where things are)
  • Separation of configuration and code (so the same code can easily be deployed to different regions, environments and stages)
  • Extensibility: new features, commands, and workflows can be added
  • A clean, consistent and easy-to-understand command syntax
  • Modular, self-documenting CLI code

Install

macOS

From Homebrew directly:

brew install atmos

Linux

On Debian:

# Add the Cloud Posse package repository hosted by Cloudsmith
apt-get update && apt-get install -y apt-utils curl
curl -1sLf 'https://dl.cloudsmith.io/public/cloudposse/packages/cfg/setup/bash.deb.sh' | bash

# Install atmos
apt-get install atmos@="<ATMOS_VERSION>-*"

On Alpine:

# Install the Cloud Posse package repository hosted by Cloudsmith
curl -sSL https://apk.cloudposse.com/install.sh | bash

# Install atmos
apk add atmos@cloudposse~=<ATMOS_VERSION>

Go

Install the latest version

go install github.com/cloudposse/atmos@latest

Get a specific version

NOTE: Since the version is passed in via -ldflags during the build, running go install (which does not use -ldflags) produces a binary that reports 0.0.1 when you run atmos version.

go install github.com/cloudposse/atmos@v1.3.9

Build from source

make build

or run the following, replacing $version with the version that atmos version should report:

go build -o build/atmos -v -ldflags "-X 'github.com/cloudposse/atmos/cmd.Version=$version'"
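For example, $version can be derived from the most recent git tag. This is a sketch, not part of the official Makefile; it assumes the working tree is a tagged git checkout:

```shell
# Derive $version from the latest git tag; fall back to "0.0.1"
# (the CLI's default version) when no tag can be found
version=$(git describe --tags --abbrev=0 2>/dev/null || echo "0.0.1")
echo "version=${version}"

# Then embed it into the binary:
# go build -o build/atmos -v -ldflags "-X 'github.com/cloudposse/atmos/cmd.Version=${version}'"
```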

Usage

There are a number of ways you can leverage this project. Our recommended filesystem layout looks like this:

   │   # CLI configuration
   ├── cli/
   │  
   │   # Centralized components configuration
   ├── stacks/
   │   │
   │   └── $stack.yaml
   │  
   │   # Components are broken down by tool
   ├── components/
   │   │
   │   ├── terraform/ # root modules in here
   │   │   ├── vpc/
   │   │   ├── eks/
   │   │   ├── rds/
   │   │   ├── iam/
   │   │   ├── dns/
   │   │   └── sso/
   │   │
   │   └── helmfile/ # helmfiles are organized by chart
   │      ├── cert-manager/helmfile.yaml
   │      └── external-dns/helmfile.yaml
   │  
   │   # Makefile for building the CLI
   ├── Makefile
   │  
   │   # Docker image for shipping the CLI and all dependencies
   └── Dockerfile (optional)

Example

The examples/complete folder contains a complete solution that shows how to:

  • Structure the Terraform and helmfile components
  • Configure the CLI
  • Add stack configurations for the Terraform and helmfile components (to provision them to different environments and stages)

CLI Configuration

Centralized Project Configuration

atmos provides separation of configuration and code, allowing you to provision the same code into different regions, environments and stages.

In our example, all the code (Terraform and helmfiles) is in the examples/complete/components folder.

The centralized stack configurations (variables for the Terraform and helmfile components) are in the examples/complete/stacks folder.

In the example, all stack configuration files are broken down by environments and stages and use the predefined format $environment-$stage.yaml.

Environments are abbreviations of AWS regions, e.g. ue2 stands for us-east-2, whereas uw2 would stand for us-west-2.

$environment-globals.yaml is where you define the global values for all stages for a particular environment. The global values get merged with the $environment-$stage.yaml configuration for a specific environment/stage by the CLI.
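A sketch of what such a globals file might contain (the region and environment values are illustrative, not copied from the example repo):

```yaml
# Hypothetical examples/complete/stacks/ue2-globals.yaml
import:
  - globals

vars:
  region: us-east-2
  environment: ue2
```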

In the example, we defined a few config files:

  • examples/complete/stacks/ue2-dev.yaml - stack configuration (Terraform and helmfile variables) for the environment ue2 and stage dev
  • examples/complete/stacks/ue2-staging.yaml - stack configuration (Terraform and helmfile variables) for the environment ue2 and stage staging
  • examples/complete/stacks/ue2-prod.yaml - stack configuration (Terraform and helmfile variables) for the environment ue2 and stage prod
  • examples/complete/stacks/ue2-globals.yaml - global settings for the environment ue2 (e.g. region, environment)
  • examples/complete/stacks/globals.yaml - global settings for the entire solution

NOTE: The stack configuration structure and the file names described above are just an example of how to name and structure the config files. You can choose any file name for a stack. You can also include other configuration files (e.g. globals for the environment, and globals for the entire solution) into a stack config using the import directive.

Stack configuration files have a predefined format:

import:
  - ue2-globals

vars:
  stage: dev

terraform:
  vars:

helmfile:
  vars:

components:
  terraform:
    vpc:
      command: "/usr/bin/terraform-0.15"
      backend:
        s3:
          workspace_key_prefix: "vpc"
      vars:
        cidr_block: "10.102.0.0/18"
    eks:
      backend:
        s3:
          workspace_key_prefix: "eks"
      vars:

  helmfile:
    nginx-ingress:
      vars:
        installed: true

It has the following main sections:

  • import - (optional) a list of stack configuration files to import and combine with the current configuration file
  • vars - (optional) a map of variables for all components (Terraform and helmfile) in the stack
  • terraform - (optional) configuration (variables) for all Terraform components
  • helmfile - (optional) configuration (variables) for all helmfile components
  • components - (required) configuration for the Terraform and helmfile components

The components section consists of the following:

  • terraform - defines variables, the binary to execute, and the backend for each Terraform component. Terraform component names correspond to the Terraform components in the examples/complete/components/terraform folder

  • helmfile - defines variables and the binary to execute for each helmfile component. Helmfile component names correspond to the helmfile components in the examples/complete/components/helmfile folder
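Putting the scopes together: a component's effective variables are the merge of the stack-level vars, the tool-level vars, and the component's own vars, with the most specific level winning on conflicts. A minimal sketch (the enabled variable is illustrative):

```yaml
vars:              # stack-wide: every component gets these
  stage: dev

terraform:
  vars:            # tool-wide: every Terraform component gets these
    enabled: true

components:
  terraform:
    vpc:
      vars:        # component-specific: only 'vpc' gets these
        cidr_block: "10.102.0.0/18"

# Effective vars for the 'vpc' component after merging:
#   stage: dev
#   enabled: true
#   cidr_block: "10.102.0.0/18"
```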

Run the Example

To run the example, execute the following commands in a terminal:

  • cd examples/complete
  • make all - builds the Docker image, builds the CLI tool inside the image, and then starts the container

Provision Terraform Component

To provision a Terraform component using the atmos CLI, run the following commands in the container shell:

  atmos terraform plan eks --stack=ue2-dev
  atmos terraform apply eks --stack=ue2-dev

where:

  • eks is the Terraform component to provision (from the components/terraform folder)
  • --stack=ue2-dev is the stack to provision the component into

Short versions of the command-line arguments can be used:

  atmos terraform plan eks -s ue2-dev
  atmos terraform apply eks -s ue2-dev

To execute plan and apply in one step, use the terraform deploy command:

  atmos terraform deploy eks -s ue2-dev

Provision Helmfile Component

To provision a helmfile component using the atmos CLI, run the following commands in the container shell:

  atmos helmfile diff nginx-ingress --stack=ue2-dev
  atmos helmfile apply nginx-ingress --stack=ue2-dev

where:

  • nginx-ingress is the helmfile component to provision (from the components/helmfile folder)
  • --stack=ue2-dev is the stack to provision the component into

Short versions of the command-line arguments can be used:

  atmos helmfile diff nginx-ingress -s ue2-dev
  atmos helmfile apply nginx-ingress -s ue2-dev

To execute diff and apply in one step, use the helmfile deploy command:

  atmos helmfile deploy nginx-ingress -s ue2-dev

Workflows

Workflows are a way of combining multiple commands into one executable unit of work.

In the CLI, workflows can be defined using two different methods:

  • In the configuration file for a stack
  • In a separate file

In the first case, we define workflows in the configuration file for the stack (which we specify on the command line):

  atmos workflow deploy-all -s ue2-dev

Note that workflows defined in the stack config files can be executed only for the particular stack (environment and stage). It's not possible to provision resources for multiple stacks this way.
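For instance, a workflow defined directly inside a stack config file might look like the following sketch (the deploy-all name and job list are illustrative; note that the steps need no per-job stack, since the stack is implied by the file itself):

```yaml
# Hypothetical workflows section inside a stack config (e.g. ue2-dev.yaml)
workflows:
  deploy-all:
    description: Deploy all components for this stack
    steps:
      - job: terraform deploy vpc
      - job: terraform deploy eks
      - job: helmfile deploy nginx-ingress
```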

In the second case (defining workflows in a separate file), a single workflow can be created to provision resources into different stacks. The stacks for the workflow steps can be specified in the workflow config.

For example, to run terraform plan and helmfile diff on all terraform and helmfile components in the example, execute the following command:

  atmos workflow plan-all -f workflows

where the command-line option -f (long form --file) instructs the atmos CLI to look for the plan-all workflow in the file examples/complete/stacks/workflows.yaml.

As we can see, in multi-environment workflows, each workflow job specifies the stack it's operating on:

workflows:
  plan-all:
    description: Run 'terraform plan' and 'helmfile diff' on all components for all stacks
    steps:
      - job: terraform plan vpc
        stack: ue2-dev
      - job: terraform plan eks
        stack: ue2-dev
      - job: helmfile diff nginx-ingress
        stack: ue2-dev
      - job: terraform plan vpc
        stack: ue2-staging
      - job: terraform plan eks
        stack: ue2-staging

You can also define a workflow in a separate file without specifying the stack in the workflow's job config. In this case, the stack needs to be provided on the command line.

For example, to run the deploy-all workflow from the examples/complete/stacks/workflows.yaml file for the ue2-dev stack, execute the following command:

  atmos workflow deploy-all -f workflows -s ue2-dev