# AI Configuration

The `ai` section of `atmos.yaml` configures the built-in AI assistant, including providers, skills, tool execution, sessions, and project instructions.
## Quick Start
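The original example is not preserved here. As a minimal sketch in `atmos.yaml`, using only settings documented on this page:

```yaml
# atmos.yaml - minimal AI setup (illustrative sketch)
ai:
  enabled: true
  default_provider: anthropic
  providers:
    anthropic:
      # Read the key from the environment rather than committing it
      api_key: !env "ANTHROPIC_API_KEY"
```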
## Configuration Reference

### Top-Level Structure
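The original example is not preserved here. As an illustrative sketch of the overall shape of the `ai` section, assembled only from the subsections described on this page:

```yaml
# atmos.yaml - top-level shape of the ai section (illustrative)
ai:
  enabled: true
  default_provider: anthropic
  default_skill: general
  providers: {}    # per-provider settings (see Provider Settings)
  context: {}      # automatic file discovery (see Context Discovery)
  web_search: {}   # optional web search tool (see Web Search)
  sessions: {}     # session persistence and auto-compact (see Sessions)
```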
### Global Settings

- `enabled` - Enable or disable AI features (default: `false`)
- `default_provider` - Default AI provider: `anthropic`, `openai`, `gemini`, `grok`, `ollama`, `bedrock`, or `azureopenai`
- `default_skill` - Default skill to activate (default: `general`)
- `send_context` - Automatically send stack configurations to the AI for context-aware answers (default: `false`)
- `prompt_on_send` - Prompt the user before sending context, even when `send_context` is `true` (default: `true`)
- `max_history_messages` - Maximum number of conversation messages to keep; a simple message-based limit (default: `0` = unlimited)
- `max_history_tokens` - Maximum tokens in conversation history; more precise, using approximate counting (~10-20% accuracy). If both limits are set, whichever is hit first wins (default: `0` = unlimited)
- `timeout_seconds` - Request timeout in seconds (default: `60`)
- `max_context_files` - Maximum number of stack files to send for context (default: `10`)
- `max_context_lines` - Maximum lines per file when sending context (default: `500`)
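Putting the global settings together, a sketch with the documented defaults written out explicitly:

```yaml
# Global AI settings shown with their documented defaults
ai:
  enabled: true
  default_provider: anthropic
  default_skill: general
  send_context: false
  prompt_on_send: true
  max_history_messages: 0   # 0 = unlimited
  max_history_tokens: 0     # 0 = unlimited
  timeout_seconds: 60
  max_context_files: 10
  max_context_lines: 500
```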
### Provider Settings

Each provider in the `providers` map supports:

- `model` - Model identifier (defaults vary by provider)
- `api_key` - API key for the provider. Use `!env` to read from an environment variable (e.g., `api_key: !env "ANTHROPIC_API_KEY"`)
- `max_tokens` - Maximum tokens per response. Roughly 1,000 tokens = 750 words
- `base_url` - Custom API endpoint (required for Grok and Azure OpenAI; the AWS region for Bedrock)
- `timeout_seconds` - Per-provider request timeout override
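A single provider entry might look like the following sketch; the model identifier is an assumption, so check your provider's documentation for current model names:

```yaml
# One provider entry (model name is a hypothetical example)
ai:
  providers:
    anthropic:
      model: claude-sonnet-4        # hypothetical model identifier
      api_key: !env "ANTHROPIC_API_KEY"
      max_tokens: 4096              # roughly 3,000 words per response
      timeout_seconds: 120          # overrides the global timeout
```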
### Multi-Provider Configuration

Atmos supports seven AI providers. Configure multiple providers and switch between them with `Ctrl+P` in the TUI, or set `default_provider` for CLI commands.
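The original example is not preserved here. As a sketch in `atmos.yaml`, two providers configured side by side:

```yaml
# Two providers configured side by side (illustrative)
ai:
  default_provider: anthropic   # used by CLI commands; Ctrl+P switches in the TUI
  providers:
    anthropic:
      api_key: !env "ANTHROPIC_API_KEY"
    openai:
      api_key: !env "OPENAI_API_KEY"
```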
## Conversation History Management

By default, Atmos sends the full conversation history with each request. For long conversations, this can trigger rate limiting and increase costs. There are three ways to manage it:

- **Simple truncation** (`max_history_messages: 20`) - easiest to configure; the right choice for most users
- **Auto-compact** (`sessions.auto_compact.enabled: true`) - best for long conversations; preserves context
- **Both** - truncation plus auto-compact, for maximum control with a safety net

Auto-compact uses the AI to summarize old messages instead of dropping them, preserving important context. See Sessions for details.

Setting the history limit too low (4-6 messages) causes the AI to lose important context. Start with 20-30.
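Combining both approaches described above can be sketched as:

```yaml
# Truncation plus auto-compact: keep recent messages, summarize the rest
ai:
  max_history_messages: 20
  sessions:
    auto_compact:
      enabled: true
```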
### Environment Variables
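The variables referenced with `!env` must exist in the shell environment before running Atmos. As a sketch (variable names other than `ANTHROPIC_API_KEY`, which this page documents, are assumptions; values are placeholders):

```shell
# Export provider keys so `!env` can resolve them (placeholder values)
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
```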
### Verify Configuration
## Context Discovery

The `context` section controls automatic file discovery for AI context:

- `context.enabled` - Enable automatic context loading (default: `true`)
- `context.auto_include` - Glob patterns for files to include automatically (e.g., `["stacks/**/*.yaml"]`)
- `context.exclude` - Glob patterns to exclude from context (e.g., `["**/node_modules/**"]`)
- `context.max_files` - Maximum number of files to include (default: `100`)
- `context.max_size_mb` - Maximum total size in MB (default: `10`)
- `context.follow_gitignore` - Respect `.gitignore` patterns (default: `true`)
- `context.show_files` - Show included files in UI output (default: `false`)
- `context.cache_enabled` - Cache discovered files for faster subsequent requests (default: `true`)
- `context.cache_ttl` - Cache time-to-live in seconds (default: `300`)
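The settings above can be combined into a sketch tuned for a typical Atmos repository:

```yaml
# Context discovery tuned for a typical Atmos repo (illustrative)
ai:
  context:
    enabled: true
    auto_include:
      - "stacks/**/*.yaml"
    exclude:
      - "**/node_modules/**"
    max_files: 100
    follow_gitignore: true
    cache_enabled: true
    cache_ttl: 300
```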
## Web Search

The `web_search` section lets the AI search the web for documentation and solutions:

- `web_search.enabled` - Enable the web search tool (default: `false`)
- `web_search.max_results` - Maximum number of search results to return (default: `10`)
- `web_search.google_api_key` - Google Custom Search API key (for Google Custom Search)
- `web_search.google_cse_id` - Google Custom Search Engine ID (for Google Custom Search)
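A sketch enabling web search via Google Custom Search; the environment variable names here are assumptions:

```yaml
# Web search via Google Custom Search (env var names are hypothetical)
ai:
  web_search:
    enabled: true
    max_results: 10
    google_api_key: !env "GOOGLE_API_KEY"
    google_cse_id: !env "GOOGLE_CSE_ID"
```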
## Security Best Practices

- Never commit API keys to version control; use environment variables instead
- Rotate keys according to each provider's recommendations
- Set spending limits in provider dashboards
- Monitor usage to detect anomalies
## Detailed Documentation

### Configuration Reference

- Providers - Provider settings, supported models, cloud/local/enterprise configs
- Tools - Tool execution settings, available tools, permission modes
- Sessions - Session persistence, auto-compact configuration
- Skills - Custom skill schema, tool access, skill examples
- Instructions - Project instructions settings, ATMOS.md sections

### Guides and Concepts

- MCP Server - Universal MCP integration for Claude Desktop and other clients (requires a separate `mcp.enabled: true`)
- Claude Code Integration - IDE integration with Claude Code
- Troubleshooting - Common issues and solutions
## Related Commands

- `atmos ai ask` - Ask the AI assistant a question
- `atmos ai chat` - Start an interactive chat session
- `atmos ai exec` - Run Atmos and shell commands via AI prompts
- `atmos ai sessions` - Manage AI chat sessions