Atmos Custom Commands
Atmos can be easily extended to support any number of custom CLI commands. Custom commands are exposed through the Atmos CLI when you run atmos help. They are a great way to centralize how operational tools are run and improve the developer experience (DX).
For example, one great way to use custom commands is to tie all the miscellaneous scripts into one consistent CLI interface. Then we can kiss those ugly, inconsistent bash script arguments goodbye! Just wire up the commands in Atmos to call the scripts, and developers can then run atmos help to discover all available commands.
Simple Example
Here is a simple example to play around with to get started. Adding the following to atmos.yaml will introduce a new hello command.
# Custom CLI commands
commands:
  - name: hello
    description: This command says Hello world
    steps:
      - "echo Hello world!"
We can run this example like this:
atmos hello
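Returning to the idea of wrapping existing scripts: a custom command's steps can simply shell out to them. Here is a minimal sketch (the script name and path are hypothetical, for illustration only):

# Custom CLI commands
commands:
  - name: rotate-keys
    description: Rotate access keys by calling an existing script
    steps:
      # Hypothetical script path; point this at your own script
      - ./scripts/rotate-keys.sh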
Positional Arguments
Atmos also supports positional arguments. Arguments do not support default values and are required if defined.
Adding the following to atmos.yaml will introduce a new hello command that accepts one name argument.
# subcommands
commands:
  - name: hello
    description: This command says hello to the provided name
    arguments:
      - name: name
        description: Name to greet
    steps:
      - "echo Hello {{ .Arguments.name }}!"
We can run this example like this:
atmos hello world
Passing Flags
Passing flags works much like passing positional arguments, except that they are passed using long or short flags. Flags can be optional (this is configured by setting the required attribute to false).
# subcommands
commands:
  - name: hello
    description: This command says hello to the provided name
    flags:
      - name: name
        shorthand: n
        description: Name to greet
        required: true
    steps:
      - "echo Hello {{ .Flags.name }}!"
We can run this example like this, using the long flag:
atmos hello --name world
Or, using the shorthand, we can just write:
atmos hello -n world
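Because steps support Go templates, an optional flag (required: false) can be tested inside a step and given a fallback. Here is a minimal sketch, assuming a default greeting of "world" (the {{if}}/{{else}} conditional mirrors the template syntax used in the advanced examples below):

commands:
  - name: hello
    description: This command says hello to the provided name, or to the world
    flags:
      - name: name
        shorthand: n
        description: Name to greet
        required: false
    steps:
      # Fall back to a default when the optional flag is omitted (illustrative)
      - "echo Hello {{ if .Flags.name }}{{ .Flags.name }}{{ else }}world{{ end }}!"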
Advanced Examples
Define a New Terraform Command
# Custom CLI commands
commands:
  - name: terraform
    description: Execute 'terraform' commands
    # subcommands
    commands:
      - name: provision
        description: This command provisions terraform components
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        # ENV var values support Go templates
        env:
          - key: ATMOS_COMPONENT
            value: "{{ .Arguments.component }}"
          - key: ATMOS_STACK
            value: "{{ .Flags.stack }}"
        steps:
          - atmos terraform plan $ATMOS_COMPONENT -s $ATMOS_STACK
          - atmos terraform apply $ATMOS_COMPONENT -s $ATMOS_STACK
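With this configuration, the new subcommand can be invoked like any other Atmos command (the component and stack names here are placeholders):
atmos terraform provision vpc -s plat-ue1-dev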
Override an Existing Terraform Command
# Custom CLI commands
commands:
  - name: terraform
    description: Execute 'terraform' commands
    # subcommands
    commands:
      - name: apply
        description: This command executes 'terraform apply -auto-approve' on terraform components
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        steps:
          - atmos terraform apply {{ .Arguments.component }} -s {{ .Flags.stack }} -auto-approve
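With this override in place, the following invocation (component and stack names are placeholders) now runs terraform apply with -auto-approve:
atmos terraform apply vpc -s plat-ue1-dev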
Show Component Info
# Custom CLI commands
commands:
  - name: show
    description: Execute 'show' commands
    # subcommands
    commands:
      - name: component
        description: Execute 'show component' command
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        # ENV var values support Go templates and have access to {{ .ComponentConfig.xxx.yyy.zzz }} Go template variables
        env:
          - key: ATMOS_COMPONENT
            value: "{{ .Arguments.component }}"
          - key: ATMOS_STACK
            value: "{{ .Flags.stack }}"
          - key: ATMOS_TENANT
            value: "{{ .ComponentConfig.vars.tenant }}"
          - key: ATMOS_STAGE
            value: "{{ .ComponentConfig.vars.stage }}"
          - key: ATMOS_ENVIRONMENT
            value: "{{ .ComponentConfig.vars.environment }}"
        # If a custom command defines a 'component_config' section with 'component' and 'stack', Atmos generates the config for the component in the stack
        # and makes it available in {{ .ComponentConfig.xxx.yyy.zzz }} Go template variables,
        # exposing all the component sections (which are also shown by the 'atmos describe component' command)
        component_config:
          component: "{{ .Arguments.component }}"
          stack: "{{ .Flags.stack }}"
        # Steps support Go templates and can access all configuration settings (e.g. {{ .ComponentConfig.xxx.yyy.zzz }})
        # Steps also have access to the ENV vars defined in the 'env' section of the 'command'
        steps:
          - 'echo Atmos component from argument: "{{ .Arguments.component }}"'
          - 'echo ATMOS_COMPONENT: "$ATMOS_COMPONENT"'
          - 'echo Atmos stack: "{{ .Flags.stack }}"'
          - 'echo Terraform component: "{{ .ComponentConfig.component }}"'
          - 'echo Backend S3 bucket: "{{ .ComponentConfig.backend.bucket }}"'
          - 'echo Terraform workspace: "{{ .ComponentConfig.workspace }}"'
          - 'echo Namespace: "{{ .ComponentConfig.vars.namespace }}"'
          - 'echo Tenant: "{{ .ComponentConfig.vars.tenant }}"'
          - 'echo Environment: "{{ .ComponentConfig.vars.environment }}"'
          - 'echo Stage: "{{ .ComponentConfig.vars.stage }}"'
          - 'echo Dependencies: "{{ .ComponentConfig.deps }}"'
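A hypothetical invocation (again with placeholder component and stack names) would print all of the values above:
atmos show component vpc -s plat-ue1-dev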
Set EKS Cluster
# Custom CLI commands
commands:
  - name: set-eks-cluster
    description: |
      Download 'kubeconfig' and set EKS cluster.
      Example usage:
        atmos set-eks-cluster eks/cluster -s plat-ue1-dev -r admin
        atmos set-eks-cluster eks/cluster -s plat-uw2-prod --role reader
    verbose: false # Set to `true` to see verbose outputs
    arguments:
      - name: component
        description: Name of the component
    flags:
      - name: stack
        shorthand: s
        description: Name of the stack
        required: true
      - name: role
        shorthand: r
        description: IAM role to use
        required: true
    # If a custom command defines a 'component_config' section with 'component' and 'stack',
    # Atmos generates the config for the component in the stack
    # and makes it available in {{ .ComponentConfig.xxx.yyy.zzz }} Go template variables,
    # exposing all the component sections (which are also shown by the 'atmos describe component' command)
    component_config:
      component: "{{ .Arguments.component }}"
      stack: "{{ .Flags.stack }}"
    env:
      - key: KUBECONFIG
        value: /dev/shm/kubecfg.{{ .Flags.stack }}-{{ .Flags.role }}
    steps:
      - >
        aws
        --profile {{ .ComponentConfig.vars.namespace }}-{{ .ComponentConfig.vars.tenant }}-gbl-{{ .ComponentConfig.vars.stage }}-{{ .Flags.role }}
        --region {{ .ComponentConfig.vars.region }}
        eks update-kubeconfig
        --name={{ .ComponentConfig.vars.namespace }}-{{ .Flags.stack }}-eks-cluster
        --kubeconfig="${KUBECONFIG}"
        > /dev/null
      - chmod 600 ${KUBECONFIG}
      - echo ${KUBECONFIG}
Describe EKS Cluster Kubernetes Version Upgrade
# Custom CLI commands
commands:
  - name: describe
    description: "Execute 'describe' commands"
    # subcommands
    commands:
      - name: eks
        description: "Execute 'describe eks' commands"
        # subcommands
        commands:
          - name: upgrade
            description: "Describe the steps on how to upgrade an EKS cluster to the next Kubernetes version. Usage: atmos describe eks upgrade <eks_component> -s <stack>"
            arguments:
              - name: component
                description: Name of the EKS component
            flags:
              - name: stack
                shorthand: s
                description: Name of the stack
                required: true
              - name: role
                shorthand: r
                description: Role to assume to connect to the cluster
                required: false
            # If a custom command defines a 'component_config' section with 'component' and 'stack',
            # Atmos generates the config for the component in the stack
            # and makes it available in {{ .ComponentConfig.xxx.yyy.zzz }} Go template variables,
            # exposing all the component sections (which are also shown by the 'atmos describe component' command)
            component_config:
              component: "{{ .Arguments.component }}"
              stack: "{{ .Flags.stack }}"
            env:
              - key: KUBECONFIG
                value: /dev/shm/kubecfg-eks-upgrade.{{ .Flags.stack }}
            steps:
              - |
                # Set the environment
                color_red="\u001b[31m"
                color_green="\u001b[32m"
                color_yellow="\u001b[33m"
                color_blue="\u001b[34m"
                color_magenta="\u001b[35m"
                color_cyan="\u001b[36m"
                color_black="\u001b[30m"
                color_white="\u001b[37m"
                color_reset="\u001b[0m"

                # Check the requirements
                command -v aws >/dev/null 2>&1 || { echo -e >&2 "\n${color_red}'aws' is required but it's not installed.${color_reset}"; exit 1; }
                command -v kubectl >/dev/null 2>&1 || { echo -e >&2 "\n${color_red}'kubectl' is required but it's not installed.${color_reset}"; exit 1; }
                command -v helm >/dev/null 2>&1 || { echo -e >&2 "\n${color_red}'helm' is required but it's not installed.${color_reset}"; exit 1; }
                command -v jq >/dev/null 2>&1 || { echo -e >&2 "\n${color_red}'jq' is required but it's not installed.${color_reset}"; exit 1; }
                command -v yq >/dev/null 2>&1 || { echo -e >&2 "\n${color_red}'yq' is required but it's not installed.${color_reset}"; exit 1; }
                command -v pluto >/dev/null 2>&1 || { echo -e >&2 "\n${color_red}'pluto' is required but it's not installed.${color_reset}"; exit 1; }
                command -v awk >/dev/null 2>&1 || { echo -e >&2 "\n${color_red}'awk' is required but it's not installed.${color_reset}"; exit 1; }
                command -v sed >/dev/null 2>&1 || { echo -e >&2 "\n${color_red}'sed' is required but it's not installed.${color_reset}"; exit 1; }
                command -v tr >/dev/null 2>&1 || { echo -e >&2 "\n${color_red}'tr' is required but it's not installed.${color_reset}"; exit 1; }
                # Set the role to assume to connect to the cluster
                role={{ .Flags.role }}
                if [[ -z "$role" ]]; then
                  role=admin
                fi

                # Download kubeconfig and connect to the cluster
                echo -e "\nConnecting to EKS cluster ${color_cyan}{{ .Flags.stack }}${color_reset} and downloading kubeconfig..."
                aws \
                  --profile {{ .ComponentConfig.vars.namespace }}-{{if (index .ComponentConfig.vars "tenant") }}{{ .ComponentConfig.vars.tenant }}-gbl-{{ .ComponentConfig.vars.stage }}{{else}}gbl-{{ .ComponentConfig.vars.stage }}{{end}}-${role} \
                  --region {{ .ComponentConfig.vars.region }} \
                  eks update-kubeconfig \
                  --name={{ .ComponentConfig.vars.namespace }}-{{ .Flags.stack }}-eks-cluster \
                  --kubeconfig="${KUBECONFIG}"
                chmod 600 ${KUBECONFIG}
                # Check connectivity to the cluster (suppress all output; we only need the exit code)
                kubectl version -o json >/dev/null 2>&1
                retVal=$?
                if [ $retVal -ne 0 ]; then
                  echo -e "${color_red}\nCould not connect to the cluster.\nIf the cluster is provisioned in private subnets or only allows private access, make sure you are connected to the VPN.\n${color_reset}"
                  exit $retVal
                fi
                # Get the current Kubernetes version from the cluster
                current_k8s_version_str=$(kubectl version -o json 2>/dev/null | jq '(.serverVersion.major + "." + .serverVersion.minor)' | sed 's/[+\"]//g')
                current_k8s_version=$(echo ${current_k8s_version_str} | jq 'tonumber')
                echo -e "\nThe cluster is running Kubernetes version ${current_k8s_version}"

                # Get all the supported Kubernetes versions from AWS EKS
                supported_eks_k8s_versions=$(aws eks describe-addon-versions | jq -r '[ .addons[].addonVersions[].compatibilities[].clusterVersion ] | unique | sort')
                supported_eks_k8s_versions_csv=$(echo ${supported_eks_k8s_versions} | jq -r 'join(", ")')
                echo -e "AWS EKS currently supports Kubernetes versions ${supported_eks_k8s_versions_csv}"

                # Calculate the next Kubernetes version that the cluster can be upgraded to
                next_k8s_version=$(echo ${supported_eks_k8s_versions} | jq -r --arg current_k8s_version "${current_k8s_version}" 'map(select((. |= tonumber) > ($current_k8s_version | tonumber)))[0]')
                # Check if the cluster can be upgraded
                upgrade_needed=false
                if [[ ! -z "$next_k8s_version" ]] && (( $(echo $next_k8s_version $current_k8s_version | awk '{if ($1 > $2) print 1;}') )); then
                  upgrade_needed=true
                fi

                if [ ${upgrade_needed} = false ]; then
                  echo -e "${color_green}\nThe cluster is running the latest supported Kubernetes version ${current_k8s_version}\n${color_reset}"
                  exit 0
                fi
                # Describe the upgrade process
                echo -e "${color_green}\nThe cluster can be upgraded to the next Kubernetes version ${next_k8s_version}${color_reset}"

                # Describe what will be checked before the upgrade
                describe_what_will_be_checked="
                \nBefore upgrading the cluster to Kubernetes ${next_k8s_version}, we'll check the following:

                  - Pods and containers that are not ready or crashing
                    https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle

                  - Helm releases with removed Kubernetes API versions
                    https://kubernetes.io/docs/reference/using-api/deprecation-policy
                    https://helm.sh/docs/topics/kubernetes_apis

                  - EKS add-ons versions
                    https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html
                "
                echo -e "${describe_what_will_be_checked}"

                echo -e "${color_cyan}\nPress Enter to continue ...${color_reset}"
                read -r

                # Show all Pods that are not in 'Running' state
                echo -e "\nChecking for Pods that are not in 'Running' state...\n"
                kubectl get pods -A | grep -Ev '([0-9]+)/\1'

                # Show failed or not ready containers
                echo -e "\nChecking for failing containers..."
                failing_containers=$(kubectl get pods -A -o json | jq '[ .items[].status.containerStatuses[].state | select(has("waiting")) | .waiting ]')
                failing_containers_count=$(echo ${failing_containers} | jq 'length')
                if [[ "$failing_containers_count" -gt 0 ]]; then
                  echo -e "${color_red}\nThere are ${failing_containers_count} failing container(s) on the cluster:\n${color_reset}"
                  echo ${failing_containers} | jq -r 'def red: "\u001b[31m"; def reset: "\u001b[0m"; (.[] | [ red + .message + reset ]) | @tsv'
                  echo -e "\nAlthough the cluster can be upgraded to the next Kubernetes version even with the failing Pods and containers, it's recommended to fix all the issues before upgrading.\n"
                else
                  echo -e "${color_green}\nThere are no failing containers on the cluster\n${color_reset}"
                fi
                echo -e "${color_cyan}\nPress Enter to continue ...${color_reset}"
                read -r

                # Show Helm releases with removed Kubernetes API versions
                echo -e "\nChecking for Helm releases with removed Kubernetes API versions...\n"
                releases_with_removed_versions=$(pluto detect-helm --output json --only-show-removed --target-versions k8s=v${next_k8s_version} 2>/dev/null | jq 'select(has("items")) | [ .items[] ]')
                releases_with_removed_versions_count=$(echo ${releases_with_removed_versions} | jq 'length')
                if [[ -z "$releases_with_removed_versions_count" ]] || [[ "$releases_with_removed_versions_count" = 0 ]]; then
                  echo -e "${color_green}\nAll Helm releases are up to date and ready for Kubernetes ${next_k8s_version}${color_reset}"
                else
                  echo -e "${color_red}\nThere are Helm releases with API versions removed in Kubernetes ${next_k8s_version}\n${color_reset}"
                  pluto detect-helm --output wide --only-show-removed --target-versions k8s=v${next_k8s_version} 2>/dev/null
                  helm_list_filter=$(echo ${releases_with_removed_versions} | jq -r '[ (.[].name | split("/"))[0] ] | join("|")')
                  helm list -A -a -f ${helm_list_filter}
                  # Describe how to fix the Helm releases
                  describe_how_to_fix_helm_releases="
                  \nBefore upgrading the cluster to Kubernetes ${next_k8s_version}, the Helm releases need to be fixed.

                  - For the Helm releases identified, you need to check for the latest version of the Chart (which has supported API versions)
                    or update the Chart yourself. Then deploy the updated Chart

                  - If the cluster was already upgraded to a new Kubernetes version without auditing for the removed API versions, it might be already running
                    with the removed API versions. When trying to redeploy the Helm Chart, you might encounter an error similar to the following:

                      Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s)
                      for this kubernetes version and it is therefore unable to build the kubernetes
                      objects for performing the diff.
                      Error from Kubernetes: unable to recognize \"\": no matches for kind \"Deployment\" in version \"apps/v1beta1\"

                    Helm fails in this scenario because it attempts to create a diff patch between the current deployed release
                    (which contains the Kubernetes APIs that are removed) against the Chart you are passing with the updated/supported API versions.
                    To fix this, you need to edit the release manifests that are stored in the cluster to use the supported API versions.
                    You can use the Helm 'mapkubeapis' plugin to update/patch the Helm releases to supported APIs.

                    Execute the following commands to patch the releases identified above:

                      helm plugin install https://github.com/helm/helm-mapkubeapis
                      helm mapkubeapis <NAME> -n <NAMESPACE>

                  NOTE: The best practice is to upgrade Helm releases that are using deprecated API versions to supported API versions
                  prior to upgrading to a Kubernetes version that removes those APIs.

                  For more information, refer to:
                    - https://helm.sh/docs/topics/kubernetes_apis
                    - https://github.com/helm/helm-mapkubeapis
                  "
                  echo -e "${describe_how_to_fix_helm_releases}"
                fi
                echo -e "${color_cyan}\nPress Enter to continue ...${color_reset}"
                read -r

                # Check EKS add-ons versions
                echo -e "\nChecking EKS add-ons versions..."
                addons=$(atmos describe component {{ .Arguments.component }} -s {{ .Flags.stack }} --format json | jq -r '.vars.addons')
                addons_count=$(echo ${addons} | jq -r '. | keys | length')
                if [[ -z "$addons_count" ]] || [[ "$addons_count" = 0 ]]; then
                  echo -e "${color_yellow}
                  \rCould not detect the 'addons' variable for the component '{{ .Arguments.component }}' in the stack '{{ .Flags.stack }}'.
                  \rMake sure that EKS add-ons are configured and provisioned on the EKS cluster.
                  \rRefer to https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html for more details.
                  ${color_reset}"
                else
                  echo -e "\nThere are currently ${addons_count} add-on(s) configured for the EKS component ${color_cyan}{{ .Arguments.component }}${color_reset} in the stack ${color_cyan}{{ .Flags.stack }}${color_reset} in the variable ${color_cyan}addons${color_reset}:\n"
                  echo ${addons} | yq --prettyPrint '.'
                  echo -e "\nKubernetes ${next_k8s_version} requires the following versions of the EKS add-ons:\n"

                  # Detect the latest supported versions of the EKS add-ons
                  addons_template=$(atmos describe component {{ .Arguments.component }} -s {{ .Flags.stack }} --format json | jq -r '.vars.addons')
                  for ((i=0; i<${addons_count}; i++)); do
                    addon_name=$(echo ${addons} | jq -r '(keys)['$i']')
                    addon_version=$(aws eks describe-addon-versions --kubernetes-version ${next_k8s_version} --addon-name ${addon_name} --query 'addons[].addonVersions[?compatibilities[0].defaultVersion].addonVersion' --output text)
                    addons_template=$(jq --arg addon_name "${addon_name}" --arg addon_version "${addon_version}" '.[$addon_name].addon_version = $addon_version' <<< "${addons_template}")
                  done

                  # Print the add-ons configuration for the desired Kubernetes version
                  echo ${addons_template} | yq --prettyPrint '.'
                fi

                # Describe how to provision the EKS component with the new Kubernetes version
                echo -e "${color_cyan}\nPress Enter to continue ...${color_reset}"
                read -r
                echo -e "\nAfter the Pods, Helm releases and EKS add-ons are configured and ready, do the following:\n
                  - Set the variable ${color_cyan}kubernetes_version${color_reset} to ${color_cyan}${next_k8s_version}${color_reset} for the EKS component ${color_cyan}{{ .Arguments.component }}${color_reset} in the stack ${color_cyan}{{ .Flags.stack }}${color_reset}
                  - Run the command ${color_cyan}atmos terraform apply {{ .Arguments.component }} -s {{ .Flags.stack }}${color_reset} to provision the component
                  - Run the command ${color_cyan}kubectl get pods -A${color_reset} to check the status of all Pods after the upgrade
                  - Run the command ${color_cyan}helm list -A -a${color_reset} to check the status of all Helm releases after the upgrade
                "