# EKS Kubeconfig Authentication
This guide shows you how to configure EKS kubeconfig authentication using Atmos integrations for automatic cluster access.
## Overview
Atmos provides native EKS kubeconfig authentication through the integrations system. When you authenticate with an identity, Atmos automatically generates kubeconfig entries for linked EKS clusters, eliminating the need for the AWS CLI entirely.
Key benefits:

- Automatic kubeconfig: Cluster credentials are provisioned when you authenticate with an identity
- No AWS CLI required: Uses the Go SDK directly for EKS authentication and token generation
- Multi-cluster support: Configure multiple EKS clusters across accounts and regions
- Exec credential plugin: `atmos aws eks token` generates short-lived tokens automatically for kubectl
- CI/CD ready: Works seamlessly in GitHub Actions, GitLab CI, and other pipelines
## Quick Start

### Basic EKS Integration
The simplest EKS setup links an integration to an existing AWS identity:
```yaml
auth:
  providers:
    company-sso:
      kind: aws/iam-identity-center
      region: us-east-1
      start_url: https://company.awsapps.com/start/
  identities:
    dev-admin:
      kind: aws/permission-set
      via:
        provider: company-sso
      principal:
        name: AdministratorAccess
        account: dev
  integrations:
    dev/eks/primary:
      kind: aws/eks
      via:
        identity: dev-admin
      spec:
        cluster:
          name: dev-cluster
          region: us-east-2
          alias: dev-eks
```
Authenticate and use kubectl:
```shell
# Use exec to authenticate and run kubectl in a single command.
# This sets KUBECONFIG automatically and provisions EKS kubeconfig entries.
atmos auth exec --identity dev-admin -- kubectl --context dev-eks get pods
atmos auth exec --identity dev-admin -- kubectl --context dev-eks get namespaces

# Or use shell for an interactive session with KUBECONFIG already set
atmos auth shell --identity dev-admin
kubectl --context dev-eks get pods
```
`atmos auth login` updates the kubeconfig file at `~/.config/atmos/kube/config` (by default), but it does not set the `KUBECONFIG` environment variable in your current shell. Use `atmos auth exec` or `atmos auth shell` to get a session where `KUBECONFIG` is properly configured.
## Understanding Integrations

### Why Integrations vs Identities?
EKS kubeconfig generation is fundamentally different from AWS identities:
| Concept | IAM User/Role | EKS Kubeconfig |
|---|---|---|
| Stored identity object | Yes | No |
| Policy attachment | Yes | No |
| Server-side lifecycle | Yes | No |
| Client-only materialization | No | Yes |
Integrations use an identity to materialize client-side credentials. For EKS, this means describing the cluster and writing a kubeconfig file that configures kubectl to use `atmos aws eks token` for authentication.
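Concretely, the generated kubeconfig wires kubectl to the token command through the Kubernetes exec credential API. A sketch of what the user entry could look like (the user name and exact argument order are assumptions for illustration, not the literal output):

```yaml
# Sketch of a generated kubeconfig user entry (field values are illustrative)
users:
  - name: atmos-eks-dev-admin
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: atmos
        args:
          - aws
          - eks
          - token
          - --cluster-name
          - dev-cluster
          - --region
          - us-east-2
          - --identity
          - dev-admin
```

Whenever kubectl needs credentials for that context, it runs the configured command and reads the token from its output.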
### Integration Configuration
Each EKS integration requires:
```yaml
integrations:
  my-eks-integration:          # Unique name for this integration
    kind: aws/eks              # Integration type
    via:
      identity: my-identity    # Which identity provides AWS credentials
    spec:
      auto_provision: true     # Auto-trigger on identity login (default)
      cluster:
        name: my-cluster       # EKS cluster name (required)
        region: us-east-2      # AWS region (required)
        alias: my-alias        # Context alias in kubeconfig (optional)
        kubeconfig:            # Kubeconfig settings (optional)
          path: /custom/path   # Custom file path
          mode: "0600"         # File permissions (octal)
          update: merge        # merge, replace, or error
```
### Integration Configuration Options
| Field | Required | Default | Description |
|---|---|---|---|
| `kind` | Yes | - | Must be `aws/eks` for EKS integrations |
| `via.identity` | Yes | - | Name of identity providing AWS credentials |
| `spec.auto_provision` | No | `true` | Auto-trigger on identity login |
| `spec.cluster.name` | Yes | - | EKS cluster name |
| `spec.cluster.region` | Yes | - | AWS region where cluster is located |
| `spec.cluster.alias` | No | Cluster ARN | Context name alias in kubeconfig |
| `spec.cluster.kubeconfig.path` | No | `~/.config/atmos/kube/config` | Custom kubeconfig file path |
| `spec.cluster.kubeconfig.mode` | No | `0600` | File permissions (octal string) |
| `spec.cluster.kubeconfig.update` | No | `merge` | Update strategy: `merge`, `replace`, or `error` |
## Common Patterns

### Multi-Environment Setup
Configure separate integrations for each environment:
```yaml
auth:
  identities:
    dev-admin:
      kind: aws/permission-set
      via:
        provider: company-sso
      principal:
        name: AdministratorAccess
        account: dev
    prod-reader:
      kind: aws/permission-set
      via:
        provider: company-sso
      principal:
        name: ReadOnlyAccess
        account: production
  integrations:
    # Dev EKS cluster
    dev/eks:
      kind: aws/eks
      via:
        identity: dev-admin
      spec:
        cluster:
          name: dev-cluster
          region: us-east-2
          alias: dev
    # Production EKS cluster
    prod/eks:
      kind: aws/eks
      via:
        identity: prod-reader
      spec:
        cluster:
          name: prod-cluster
          region: us-east-1
          alias: prod
```
Usage:

```shell
# Development work
atmos auth exec --identity dev-admin -- kubectl --context dev get pods

# Production access
atmos auth exec --identity prod-reader -- kubectl --context prod get pods
```
### Multi-Cluster Setup
One identity can link to multiple EKS clusters:
```yaml
auth:
  identities:
    platform-admin:
      kind: aws/permission-set
      via:
        provider: company-sso
      principal:
        name: PlatformAdmin
        account: platform
  integrations:
    platform/eks/api:
      kind: aws/eks
      via:
        identity: platform-admin
      spec:
        cluster:
          name: api-cluster
          region: us-east-1
          alias: api
    platform/eks/data:
      kind: aws/eks
      via:
        identity: platform-admin
      spec:
        cluster:
          name: data-cluster
          region: us-east-1
          alias: data
    platform/eks/monitoring:
      kind: aws/eks
      via:
        identity: platform-admin
      spec:
        cluster:
          name: monitoring-cluster
          region: us-west-2
          alias: monitoring
```
Use exec to access all clusters:

```shell
# Run kubectl in a session with all clusters provisioned
atmos auth exec --identity platform-admin -- kubectl config get-contexts
# CURRENT   NAME         CLUSTER                                                AUTHINFO
#           api          arn:aws:eks:us-east-1:123456789012:cluster/api         atmos-eks-platform-admin
#           data         arn:aws:eks:us-east-1:123456789012:cluster/data        atmos-eks-platform-admin
#           monitoring   arn:aws:eks:us-west-2:123456789012:cluster/monitoring  atmos-eks-platform-admin

# Or use shell for interactive access to all clusters
atmos auth shell --identity platform-admin
kubectl --context api get pods
kubectl --context monitoring get pods
```
### Optional Integrations
Disable auto-provisioning for integrations you only need occasionally:
```yaml
auth:
  integrations:
    # Always provision on login
    dev/eks/primary:
      kind: aws/eks
      via:
        identity: dev-admin
      spec:
        cluster:
          name: dev-cluster
          region: us-east-2
          alias: dev
    # Only provision when explicitly requested
    dev/eks/sandbox:
      kind: aws/eks
      via:
        identity: dev-admin
      spec:
        auto_provision: false  # Don't auto-trigger
        cluster:
          name: sandbox-cluster
          region: us-west-2
          alias: sandbox
```
Usage:

```shell
# Normal exec only provisions the primary cluster
atmos auth exec --identity dev-admin -- kubectl --context dev get pods

# Explicitly generate kubeconfig for sandbox when needed
atmos aws eks update-kubeconfig --integration=dev/eks/sandbox
```
## Kubeconfig Management

### Update Modes

The `update` field controls how kubeconfig entries are written:
| Mode | Behavior |
|---|---|
| `merge` (default) | Merge new entries with the existing kubeconfig. Existing entries for the same cluster are updated. |
| `replace` | Overwrite the entire kubeconfig file with only this cluster's configuration. |
| `error` | Fail if the cluster already exists in the kubeconfig. Useful for preventing accidental overwrites. |
### Custom Paths

By default, the kubeconfig is written to `~/.config/atmos/kube/config` (XDG-compliant). You can customize this:
```yaml
spec:
  cluster:
    name: dev-cluster
    region: us-east-2
    kubeconfig:
      path: /home/user/.kube/config  # Write to standard kubectl location
```
### File Permissions

Control the kubeconfig file permissions:
```yaml
spec:
  cluster:
    name: dev-cluster
    region: us-east-2
    kubeconfig:
      mode: "0600"  # Owner read/write only (default)
```
### KUBECONFIG Environment Variable

When using `atmos auth env` or `atmos auth exec`, the `KUBECONFIG` environment variable is automatically set to include the integration's kubeconfig path. Multiple EKS integrations produce paths joined by the OS path-list separator (`:` on Unix, `;` on Windows) with deduplication:
```shell
eval $(atmos auth env --identity dev-admin)
echo $KUBECONFIG
# /home/user/.config/atmos/kube/config
```
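The joining-with-deduplication behavior can be sketched in plain shell (illustrative only; this is not the actual Atmos implementation, which handles this internally):

```shell
# Join kubeconfig paths with ':' (the Unix separator), skipping duplicates.
join_kubeconfig_paths() {
  out="" seen=":"
  for p in "$@"; do
    case "$seen" in
      *":$p:"*) ;;                  # already present, skip the duplicate
      *) seen="$seen$p:"
         out="${out:+$out:}$p" ;;
    esac
  done
  printf '%s\n' "$out"
}

# Two integrations share the default path; it appears only once in the result.
join_kubeconfig_paths \
  "$HOME/.config/atmos/kube/config" \
  "/tmp/sandbox/kubeconfig" \
  "$HOME/.config/atmos/kube/config"
```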
## Auto-Provisioning

When `auto_provision` is `true` (the default), EKS integrations are automatically triggered when you authenticate with their linked identity:
```console
$ atmos auth login --identity dev-admin
Authenticating with identity: dev-admin
Opening browser for SSO authentication...
Successfully authenticated as dev-admin
✓ EKS kubeconfig: dev-eks → /home/user/.config/atmos/kube/config
```
After login, the kubeconfig file is updated but `KUBECONFIG` is not set in your current shell. Use `atmos auth exec --identity dev-admin -- <command>` or `atmos auth shell --identity dev-admin` to work with kubectl.

To disable auto-provisioning for an integration, set `auto_provision: false`:
```yaml
integrations:
  dev/eks/optional:
    kind: aws/eks
    via:
      identity: dev-admin
    spec:
      auto_provision: false  # Only triggered via explicit command
      cluster:
        name: optional-cluster
        region: eu-west-1
```
## Token Generation

### How `atmos aws eks token` Works

The kubeconfig generated by Atmos configures kubectl to use `atmos aws eks token` as an exec credential plugin. When kubectl needs to authenticate:
1. kubectl calls `atmos aws eks token --cluster-name <name> --region <region> --identity <identity>`
2. Atmos authenticates the identity (using cached credentials from `atmos auth login`)
3. Atmos generates a pre-signed STS `GetCallerIdentity` URL with the cluster name
4. Atmos returns a short-lived bearer token (~15 minutes) as an `ExecCredential` JSON object
5. kubectl uses the token for Kubernetes API authentication
This is the same authentication mechanism used by `aws eks get-token`, but without requiring the AWS CLI.
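The returned object follows the Kubernetes `ExecCredential` format, roughly like this (the token and timestamp values are placeholders):

```json
{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": {
    "token": "k8s-aws-v1.aHR0cHM6Ly9zdHMu...",
    "expirationTimestamp": "2024-01-01T00:15:00Z"
  }
}
```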
### Manual Token Generation

While `atmos aws eks token` is primarily designed for kubectl, you can also use it directly:

```shell
# Generate and inspect a token
atmos aws eks token --cluster-name dev-cluster --region us-east-2 | jq .

# Use with curl against the Kubernetes API
TOKEN=$(atmos aws eks token --cluster-name dev-cluster --region us-east-2 | jq -r .status.token)
curl -H "Authorization: Bearer $TOKEN" https://<endpoint>/api/v1/namespaces
```
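To see the token mechanism for yourself: the bearer token is just `k8s-aws-v1.` followed by the base64url-encoded pre-signed URL. A sketch that round-trips a stand-in URL (the URL below is fabricated for illustration, not a real pre-signed request):

```shell
# An EKS bearer token is "k8s-aws-v1." + base64url(pre-signed STS URL).
# The URL here is a stand-in, not a real pre-signed GetCallerIdentity request.
URL='https://sts.us-east-2.amazonaws.com/?Action=GetCallerIdentity&X-K8s-Aws-Id=dev-cluster'
TOKEN="k8s-aws-v1.$(printf '%s' "$URL" | base64 | tr -d '\n' | tr '+/' '-_' | tr -d '=')"

# Recover the URL: strip the prefix, restore padding, undo the URL-safe alphabet.
PAYLOAD="${TOKEN#k8s-aws-v1.}"
case $(( ${#PAYLOAD} % 4 )) in
  2) PAYLOAD="${PAYLOAD}==" ;;
  3) PAYLOAD="${PAYLOAD}=" ;;
esac
printf '%s' "$PAYLOAD" | tr '-_' '+/' | base64 -d
echo
```

The Kubernetes API server validates such a token by executing the embedded `GetCallerIdentity` request and mapping the resulting IAM principal through the cluster's access configuration.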
## Standalone Kubeconfig Update

You can generate kubeconfig entries independently of identity authentication:

### By Integration Name

```shell
atmos aws eks update-kubeconfig --integration=dev/eks/primary
```

### By Identity with Explicit Parameters

```shell
atmos aws eks update-kubeconfig --name=dev-cluster --region=us-east-2 --identity=dev-admin
```

### By Component and Stack (Legacy)

```shell
atmos aws eks update-kubeconfig my-component -s dev-us-east-2
```
## CI/CD Integration

EKS kubeconfig authentication works with Atmos auth in native CI/CD pipelines. If you use GitHub OIDC for authentication, EKS integrations are provisioned automatically when the identity authenticates, so `atmos auth exec --identity <name> -- kubectl ...` works the same way in CI as it does locally.
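For example, a GitHub Actions job might look like this (the identity name, workflow layout, and action version are assumptions for illustration, not a prescribed setup):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for GitHub OIDC
      contents: read
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to EKS
        run: |
          atmos auth exec --identity ci-deployer -- \
            kubectl --context dev-eks apply -f manifests/
```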
## IAM Permissions

The identity used for EKS integration needs the following IAM permissions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:DescribeCluster",
      "Resource": "arn:aws:eks:*:123456789012:cluster/*"
    },
    {
      "Effect": "Allow",
      "Action": "sts:GetCallerIdentity",
      "Resource": "*"
    }
  ]
}
```
- `eks:DescribeCluster` is needed during kubeconfig generation to retrieve the cluster endpoint and certificate authority
- `sts:GetCallerIdentity` is used for token generation (typically allowed by default)
- Kubernetes RBAC controls what the authenticated user can do within the cluster
## Cleanup

When you log out of an identity, Atmos automatically cleans up kubeconfig entries for linked EKS integrations:

```shell
atmos auth logout --identity dev-admin
# Kubeconfig entries for dev-eks are removed
```

Cleanup is non-blocking; failures are logged as warnings and don't prevent logout.
## Troubleshooting

### Token Expired

EKS tokens expire after approximately 15 minutes. kubectl automatically calls `atmos aws eks token` to get a fresh token when needed. If you see authentication errors, try:

```shell
# Re-authenticate to refresh cached credentials
atmos auth login --identity dev-admin
```
### Cluster Not Found

Ensure the cluster name and region match your AWS configuration:

```shell
# Verify the cluster exists
aws eks describe-cluster --name dev-cluster --region us-east-2
```
### Kubeconfig Conflicts

If you have conflicting kubeconfig entries from other tools:

```yaml
spec:
  cluster:
    name: dev-cluster
    region: us-east-2
    kubeconfig:
      update: replace  # Overwrite instead of merge
```
Or use a separate kubeconfig file:

```yaml
spec:
  cluster:
    name: dev-cluster
    region: us-east-2
    kubeconfig:
      path: ~/.config/atmos/kube/config  # Isolated from ~/.kube/config
```
### Permission Denied

Verify the identity has `eks:DescribeCluster` permission:

```shell
# Check with AWS CLI
aws eks describe-cluster --name dev-cluster --region us-east-2
```

If the cluster is accessible but kubectl commands fail, check Kubernetes RBAC:

```shell
kubectl auth can-i --list --context dev-eks
```
## Next Steps
- Auth Login Command — Full login command reference
- AWS EKS Token Command — EKS token generation reference
- AWS EKS Update Kubeconfig — Update kubeconfig command reference
- Auth Configuration — Complete configuration reference
- ECR Authentication Tutorial — Similar integration pattern for container registries