AI Configuration

The ai section of atmos.yaml configures the built-in AI assistant, including providers, skills, tool execution, sessions, and project instructions.

Experimental

Quick Start

atmos.yaml

ai:
  enabled: true
  default_provider: "anthropic"
  providers:
    anthropic:
      model: "claude-sonnet-4-6"
      api_key: !env "ANTHROPIC_API_KEY"
      max_tokens: 4096

shell

export ANTHROPIC_API_KEY="your-api-key-here"
atmos ai chat

Configuration Reference

Top-Level Structure

atmos.yaml

ai:
  enabled: true                   # Enable AI features
  default_provider: "anthropic"   # Default provider for CLI commands
  default_skill: "general"        # Default AI skill

  # Provider connections
  providers:
    <provider-name>:
      model: "model-id"
      api_key: !env "ENV_VAR_NAME"
      max_tokens: 4096
      base_url: ""                # Custom endpoint (optional)

  # Tool execution
  tools:
    enabled: true
    require_confirmation: true
    allowed_tools: []
    restricted_tools: []
    blocked_tools: []

  # Session persistence
  sessions:
    enabled: true
    path: ".atmos/sessions"
    max_sessions: 10

  # Project instructions
  instructions:
    enabled: true
    file: "ATMOS.md"

  # Automatic context discovery
  context:
    enabled: true
    auto_include: ["stacks/**/*.yaml"]
    exclude: ["**/node_modules/**"]
    max_files: 100
    max_size_mb: 10

  # Web search
  web_search:
    enabled: false

  # Custom skills
  skills:
    <skill-name>:
      display_name: "Display Name"
      description: "What this skill does"
      system_prompt: "..."
      allowed_tools: []

Global Settings

enabled
Enable or disable AI features (default: false)
default_provider
Default AI provider: anthropic, openai, gemini, grok, ollama, bedrock, or azureopenai
default_skill
Default skill to activate (default: general)
send_context
Automatically send stack configurations to AI for context-aware answers (default: false)
prompt_on_send
Prompt user before sending context, even when send_context is true (default: true)
max_history_messages
Maximum conversation messages to keep. Simple message-based limit (default: 0 = unlimited)
max_history_tokens
Maximum tokens in conversation history. More precise than a message count, though token counting is approximate (within ~10-20%). If both limits are set, whichever is hit first wins (default: 0 = unlimited)
timeout_seconds
Request timeout in seconds (default: 60)
max_context_files
Maximum stack files to send for context (default: 10)
max_context_lines
Maximum lines per file when sending context (default: 500)
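Taken together, a configuration that sets all of these global options might look like the following (values are illustrative defaults, not recommendations):

atmos.yaml

ai:
  enabled: true
  default_provider: "anthropic"
  default_skill: "general"
  send_context: true            # send stack configs for context-aware answers
  prompt_on_send: true          # ask before sending context
  max_history_messages: 20      # message-based history limit
  max_history_tokens: 0         # 0 = unlimited
  timeout_seconds: 60
  max_context_files: 10
  max_context_lines: 500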

Provider Settings

Each provider in the providers map supports:

model
Model identifier (defaults vary by provider)
api_key
API key for the provider. Use !env to read from an environment variable (e.g., api_key: !env "ANTHROPIC_API_KEY").
max_tokens
Maximum tokens per response. Roughly 1000 tokens = 750 words.
base_url
Custom API endpoint (required for Grok and Azure OpenAI; used as the AWS region for Bedrock)
timeout_seconds
Per-provider request timeout override
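For example, a provider entry that overrides the global request timeout for slower models (values are illustrative):

atmos.yaml

ai:
  providers:
    anthropic:
      model: "claude-sonnet-4-6"
      api_key: !env "ANTHROPIC_API_KEY"
      max_tokens: 4096
      timeout_seconds: 120   # overrides the global timeout_seconds for this provider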

Multi-Provider Configuration

Atmos supports 7 AI providers. Configure multiple and switch between them with Ctrl+P in the TUI, or set default_provider for CLI commands.

atmos.yaml

ai:
  enabled: true
  default_provider: "anthropic"

  providers:
    # Cloud Providers
    anthropic:
      model: "claude-sonnet-4-6"
      api_key: !env "ANTHROPIC_API_KEY"
      max_tokens: 4096

    openai:
      model: "gpt-5.4"
      api_key: !env "OPENAI_API_KEY"
      max_tokens: 4096

    gemini:
      model: "gemini-2.5-flash"
      api_key: !env "GEMINI_API_KEY"
      max_tokens: 8192

    grok:
      model: "grok-4-latest"
      api_key: !env "XAI_API_KEY"
      max_tokens: 4096
      base_url: "https://api.x.ai/v1"

    # Local Provider (no API key needed)
    ollama:
      model: "llama4"
      base_url: "http://localhost:11434/v1"

    # Enterprise Providers
    bedrock:
      model: "anthropic.claude-sonnet-4-6"
      base_url: "us-east-1"   # AWS region

    azureopenai:
      model: "gpt-4o"         # Your Azure deployment name
      api_key: !env "AZURE_OPENAI_API_KEY"
      base_url: "https://your-resource.openai.azure.com"
      api_version: "2025-04-01-preview"

Conversation History Management

By default, Atmos sends the full conversation history with each request. For long conversations, this can trigger rate limiting and increase costs. There are three ways to manage it:

  • Simple truncation (max_history_messages: 20) - best for most users; easy to configure.
  • Auto-compact (sessions.auto_compact.enabled: true) - best for long conversations; preserves context.
  • Both (truncation + auto-compact) - maximum control with a safety net.

Auto-compact summarizes old messages using AI instead of dropping them, preserving important context. See Sessions for details.

warning

Setting the history limit too low (4-6 messages) causes the AI to lose important context. Start with 20-30 messages.
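A sketch of the combined approach, using the sessions.auto_compact.enabled setting named above (values are illustrative):

atmos.yaml

ai:
  max_history_messages: 30   # safety-net truncation
  sessions:
    enabled: true
    auto_compact:
      enabled: true          # summarize old messages instead of dropping them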

Environment Variables

shell

# API Keys (alternative to api_key with !env in config)
export ATMOS_ANTHROPIC_API_KEY="your-key"
export ATMOS_OPENAI_API_KEY="your-key"
export ATMOS_GEMINI_API_KEY="your-key"
export ATMOS_XAI_API_KEY="your-key"

# Context Control
export ATMOS_AI_SEND_CONTEXT=true

Verify Configuration

shell

atmos ai ask "Hello, are you working?"

Context Discovery

The context section controls automatic file discovery for AI context:

context.enabled
Enable automatic context loading (default: true)
context.auto_include
Glob patterns for files to automatically include (e.g., ["stacks/**/*.yaml"])
context.exclude
Glob patterns to exclude from context (e.g., ["**/node_modules/**"])
context.max_files
Maximum files to include (default: 100)
context.max_size_mb
Maximum total size in MB (default: 10)
context.follow_gitignore
Respect .gitignore patterns (default: true)
context.show_files
Show included files in UI output (default: false)
context.cache_enabled
Cache discovered files for faster subsequent requests (default: true)
context.cache_ttl
Cache time-to-live in seconds (default: 300)
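Putting the context options together, including the caching and .gitignore settings not shown in the top-level structure above (values are illustrative):

atmos.yaml

ai:
  context:
    enabled: true
    auto_include: ["stacks/**/*.yaml"]
    exclude: ["**/node_modules/**"]
    max_files: 100
    max_size_mb: 10
    follow_gitignore: true   # respect .gitignore patterns
    show_files: false        # hide included files in UI output
    cache_enabled: true
    cache_ttl: 300           # seconds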

Web Search

The web_search section lets the AI search the web for documentation and solutions:

web_search.enabled
Enable web search tool (default: false)
web_search.max_results
Maximum search results to return (default: 10)
web_search.google_api_key
Google Custom Search API key (for Google Custom Search)
web_search.google_cse_id
Google Custom Search Engine ID (for Google Custom Search)
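A web_search block configured for Google Custom Search might look like this; the environment variable name and CSE ID are placeholders, and this assumes the !env tag works for these keys as it does for provider api_key:

atmos.yaml

ai:
  web_search:
    enabled: true
    max_results: 10
    google_api_key: !env "GOOGLE_API_KEY"   # placeholder env var name
    google_cse_id: "your-cse-id"            # placeholder CSE ID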

Security Best Practices

  • Never commit API keys to version control -- use environment variables
  • Rotate keys per provider recommendations
  • Set spending limits in provider dashboards
  • Monitor usage to detect anomalies

Detailed Documentation

Configuration Reference

  • Providers - Provider settings, supported models, cloud/local/enterprise configs
  • Tools - Tool execution settings, available tools, permission modes
  • Sessions - Session persistence, auto-compact configuration
  • Skills - Custom skill schema, tool access, skill examples
  • Instructions - Project instructions settings, ATMOS.md sections

Guides and Concepts