# CrewX 0.6.0 Release - API Provider Support
CrewX 0.6.0 is here! This release significantly expands CrewX's AI provider ecosystem by adding API Provider support. Now you can use OpenAI, Anthropic, Ollama, and other API-based AI providers directly, in addition to CLI-based providers. OpenRouter is also supported via its OpenAI-compatible API.
## Key Features
### 1. BYOA (Bring Your Own API) - Use Your Own APIs
CrewX has always embraced the BYOA (Bring Your Own AI) philosophy. You could leverage your existing AI subscriptions through CLI providers (Claude Code, Gemini Code Assist, GitHub Copilot). Now, with the addition of API Providers, this philosophy has evolved further.
Existing BYOA (CLI Providers):

```yaml
agents:
  - id: "my_agent"
    provider: "cli/claude" # Using Claude Code CLI
```

New BYOA (API Providers):

```yaml
agents:
  - id: "my_agent"
    provider: "api/anthropic" # Using Anthropic API directly
    inline:
      apiKey: "{{env.ANTHROPIC_API_KEY}}"
      model: "claude-sonnet-4-5-20250929"
```
Now you can use the AI API subscriptions you already own directly in CrewX. There is no additional CrewX cost; you pay only for your own API usage.
### 2. API Provider Support
CrewX 0.6.0 supports the following API providers:
#### Currently Supported (Tested)
`api/openai` - OpenAI & OpenRouter

- Support for GPT-4, GPT-4 Turbo, and GPT-3.5 models
- Automatic OpenRouter detection (when `baseURL` contains `openrouter.ai`)
- Direct use with your OpenAI API key
```yaml
# Using OpenAI directly
- id: "gpt_agent"
  provider: "api/openai"
  inline:
    apiKey: "{{env.OPENAI_API_KEY}}"
    model: "gpt-4-turbo-preview"

# Using OpenRouter
- id: "openrouter_agent"
  provider: "api/openai"
  inline:
    baseURL: "https://openrouter.ai/api/v1"
    apiKey: "{{env.OPENROUTER_API_KEY}}"
    model: "anthropic/claude-3.5-sonnet"
```
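The automatic detection mentioned above can be pictured with a small sketch. This is not CrewX's actual source code, just an illustration of the rule stated in the bullet ("when baseURL contains openrouter.ai"):

```typescript
// Illustrative sketch of the detection rule described above (not CrewX source).
function isOpenRouter(baseURL?: string): boolean {
  // Detection triggers when the configured baseURL contains "openrouter.ai".
  return baseURL !== undefined && baseURL.includes("openrouter.ai");
}

console.log(isOpenRouter("https://openrouter.ai/api/v1")); // true
console.log(isOpenRouter("https://api.openai.com/v1"));    // false
```

When detection triggers, CrewX can route the same OpenAI-compatible request shape to OpenRouter, which is why the `api/openai` provider covers both services.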
`api/anthropic` - Anthropic Claude API

- Support for Claude 3.7 Sonnet, Claude 3.5 Sonnet, Claude 3 Opus, and Claude 3 Haiku
- Direct use with your Anthropic API key
```yaml
- id: "claude_agent"
  provider: "api/anthropic"
  inline:
    apiKey: "{{env.ANTHROPIC_API_KEY}}"
    model: "claude-sonnet-4-5-20250929"
```
`api/ollama` - Ollama (Local Models)

- Local open-source models such as Llama and Mistral
- Runs locally, no internet connection required
- Free to use
```yaml
- id: "ollama_agent"
  provider: "api/ollama"
  inline:
    baseURL: "http://localhost:11434"
    model: "llama3.2"
```
#### Coming Soon
These providers will be added in future versions:
- Google Gemini (`api/google`) - Support for Gemini Pro and Ultra
- AWS Bedrock (`api/bedrock`) - Claude, Titan, Llama, and more
- LiteLLM (`api/litellm`) - Unified proxy for 100+ AI providers
### 3. Tool Calling - AI Can Read and Write Files Directly
API Providers come with built-in Tool Calling capabilities. AI agents can now use the following tools:
#### Query Mode (Read-Only)
- `read_file`: Read files
- `grep`: Pattern search
- `ls`: Directory listing
#### Execute Mode (Write-Enabled)
- `write_file`: Write files
- `replace`: Replace text
- `run_shell_command`: Execute shell commands
How it works:

1. The user requests: "Read the README.md file for me"
2. The AI calls the `read_file` tool to fetch the file content
3. The AI analyzes the content and responds
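The loop described above can be sketched in miniature. This is an illustrative mock, not CrewX's internal implementation: the model is faked, the workspace is an in-memory map, and only the read-only tool set of query mode is registered:

```typescript
// Illustrative sketch of a query-mode tool-calling loop (NOT CrewX internals).

// A tiny in-memory workspace standing in for real files.
const files: Record<string, string> = { "README.md": "# Demo\nHello" };

// Query mode exposes read-only tools; write_file etc. are simply absent.
const queryTools: Record<string, (path: string) => string> = {
  read_file: (path) => files[path] ?? "",
};

type Step = { tool: string; path: string } | { answer: string };

// Mock model: first turn requests a tool, second turn produces the answer.
function mockModel(toolOutputs: string[]): Step {
  if (toolOutputs.length === 0) {
    return { tool: "read_file", path: "README.md" };
  }
  return { answer: `The file has ${toolOutputs[0].length} characters.` };
}

// The agent loop: call the model, run any requested tool, feed output back.
function runAgent(): string {
  const toolOutputs: string[] = [];
  for (;;) {
    const step = mockModel(toolOutputs);
    if ("answer" in step) return step.answer;
    const tool = queryTools[step.tool];
    if (!tool) throw new Error(`Tool not allowed in query mode: ${step.tool}`);
    toolOutputs.push(tool(step.path));
  }
}

console.log(runAgent()); // reports the README length via the read_file tool
```

The key safety property is visible in the sketch: execute-mode tools are never registered in the query-mode table, so a request for them fails rather than writing to disk.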
Query vs Execute Mode:
- Query Mode: Only read-only tools available (safe)
- Execute Mode: Can modify files and run shell commands (use with care)
```bash
# Query mode - read-only
crewx query "@my_agent Analyze the README.md file"

# Execute mode - can modify files
crewx execute "@my_agent Add an installation section to README.md"
```
### 4. Runtime Model Override
Specify a default model in agent configuration, but use a different model when needed:
```yaml
agents:
  - id: "smart_agent"
    provider: "api/anthropic"
    inline:
      apiKey: "{{env.ANTHROPIC_API_KEY}}"
      model: "claude-3-5-sonnet-20241022" # Default model
```
```bash
# Use default model
crewx q "@smart_agent Simple question"

# Override with Opus model (for complex tasks)
crewx q "@smart_agent:claude-3-opus-20240229 Design complex architecture"

# Override with Haiku model (for fast responses)
crewx q "@smart_agent:claude-3-haiku-20240307 Quick question"
```
## CLI Provider vs API Provider Comparison
| Feature | CLI Provider | API Provider |
|---|---|---|
| Setup | CLI tool installation required | API key only |
| Authentication | Managed by CLI tool | Provide API key directly |
| Models | Specified by CLI tool | Specified in agents.yaml |
| Tool Calling | Depends on CLI tool | Built-in CrewX tools |
| Cost | Requires CLI subscription | Based on API usage |
| Network | CLI tool → API | Direct API calls |
| Flexibility | Limited by CLI tool capabilities | Full API functionality |
### When to Use What?
Recommended for CLI Providers:

- Already using Claude Code, Gemini Code Assist, or GitHub Copilot
- Need IDE integration and other CLI tool features
- Want to delegate authentication management to the CLI tool
Recommended for API Providers:

- Want to manage API keys directly
- Want to switch freely between models
- Use local models like Ollama
- Use LiteLLM for unified multi-provider management
- Want to leverage Tool Calling capabilities to the fullest
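Nothing stops you from mixing both styles in a single crew. Here is a minimal sketch combining the provider strings shown above; the agent ids are illustrative:

```yaml
agents:
  # CLI provider: authentication is handled by the Claude Code CLI
  - id: "ide_helper"
    provider: "cli/claude"

  # API provider: you supply the key and model yourself
  - id: "api_helper"
    provider: "api/anthropic"
    inline:
      apiKey: "{{env.ANTHROPIC_API_KEY}}"
      model: "claude-sonnet-4-5-20250929"
```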
## Quick Start Examples
### Example 1: Code Review Agent with Anthropic API
```yaml
agents:
  - id: "code_reviewer"
    provider: "api/anthropic"
    inline:
      apiKey: "{{env.ANTHROPIC_API_KEY}}"
      model: "claude-sonnet-4-5-20250929"
      prompt: |
        You are a professional code reviewer.
        Find bugs, performance issues, and security vulnerabilities
        and suggest improvements.
```
```bash
# Review file (Query mode - read-only)
crewx q "@code_reviewer Review src/app.ts"

# Perform fixes (Execute mode)
crewx x "@code_reviewer Fix security issues in src/app.ts"
```
### Example 2: Multi-Agent with OpenAI + Ollama
```yaml
agents:
  - id: "gpt_architect"
    provider: "api/openai"
    inline:
      apiKey: "{{env.OPENAI_API_KEY}}"
      model: "gpt-4-turbo-preview"
      prompt: |
        You are a system architect.

  - id: "local_coder"
    provider: "api/ollama"
    inline:
      baseURL: "http://localhost:11434"
      model: "llama3.2"
      prompt: |
        You are a code implementation expert.
```
```bash
# Architecture design with GPT-4
crewx q "@gpt_architect Design microservices architecture"

# Code implementation with local Llama (cost savings)
crewx x "@local_coder Implement API endpoints"

# Query both agents simultaneously
crewx q "@gpt_architect @local_coder Analyze pros and cons of this design"
```
### Example 3: Unified Provider Management with LiteLLM
```yaml
agents:
  - id: "litellm_agent"
    provider: "api/litellm"
    inline:
      baseURL: "http://localhost:4000"
      apiKey: "{{env.LITELLM_API_KEY}}"
      model: "gpt-4" # Routed by LiteLLM
```
```bash
# Start LiteLLM server
litellm --config litellm_config.yaml

# Use in CrewX
crewx q "@litellm_agent Your question here"
```
## Additional Improvements
- Environment Variable Substitution: Safe API key management using `{{env.VAR}}` syntax
- Inline Configuration Priority: Override root-level settings per agent
- Agent ID Extraction: Accurately parse API agents from YAML
- Improved Parallel Processing: Normalize provider formats during multi-agent execution
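The inline-priority behavior can be pictured with a short sketch. Note that the root-level `defaults` key below is a hypothetical illustration, not confirmed CrewX schema; check your version's documentation for the exact key names:

```yaml
# HYPOTHETICAL sketch: the root-level "defaults" key is assumed for illustration.
defaults:
  model: "claude-3-5-sonnet-20241022"   # root-level setting
agents:
  - id: "fast_agent"
    provider: "api/anthropic"
    inline:
      apiKey: "{{env.ANTHROPIC_API_KEY}}"
      model: "claude-3-haiku-20240307"  # inline value takes priority over the root default
```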
## How to Upgrade
```bash
# Update via npm
npm update -g crewx

# Or reinstall
npm install -g crewx@latest

# Verify version
crewx --version # Should output 0.6.0
```
Your existing `crewx.yaml` configuration will continue to work as before. To use API Providers, simply add agents with the `provider: "api/..."` format as shown in the examples above.
## Next Steps
Upgrade to CrewX 0.6.0 and try API Providers!
```bash
# 1. Set API keys
export ANTHROPIC_API_KEY="your-key"
export OPENAI_API_KEY="your-key"

# 2. Configure crewx.yaml
vim crewx.yaml

# 3. Run agents
crewx q "@my_agent Hello!"
```
Have questions? Leave them on GitHub Issues!
