
CrewX 0.6.0 Release - API Provider Support

· 6 min read
Doha Park
Founder @ SowonLabs

CrewX 0.6.0 is here! This release significantly expands CrewX's AI provider ecosystem by adding API Provider support. You can now use OpenAI, Anthropic, Ollama, and other API-based AI providers directly, in addition to CLI-based providers. OpenRouter is also supported via its OpenAI-compatible API.

🎯 Key Features

1. BYOA (Bring Your Own API) - Use Your Own APIs

CrewX has always embraced the BYOA (Bring Your Own AI) philosophy. You could leverage your existing AI subscriptions through CLI providers (Claude Code, Gemini Code Assist, GitHub Copilot). Now, with the addition of API Providers, this philosophy has evolved further.

Existing BYOA (CLI Providers):

```yaml
agents:
  - id: "my_agent"
    provider: "cli/claude"  # Using Claude Code CLI
```

New BYOA (API Providers):

```yaml
agents:
  - id: "my_agent"
    provider: "api/anthropic"  # Using Anthropic API directly
    inline:
      apiKey: "{{env.ANTHROPIC_API_KEY}}"
      model: "claude-sonnet-4-5-20250929"
```

Now you can plug the AI API subscriptions you already own directly into CrewX, at no extra cost beyond your own API usage.

2. API Provider Support

CrewX 0.6.0 supports the following API providers:

✅ Currently Supported (Tested)

api/openai - OpenAI & OpenRouter

  • Support for GPT-4, GPT-4 Turbo, and GPT-3.5 models
  • Automatic OpenRouter detection (when baseURL contains openrouter.ai)
  • Direct use with your OpenAI API key

```yaml
# Using OpenAI directly
- id: "gpt_agent"
  provider: "api/openai"
  inline:
    apiKey: "{{env.OPENAI_API_KEY}}"
    model: "gpt-4-turbo-preview"

# Using OpenRouter
- id: "openrouter_agent"
  provider: "api/openai"
  inline:
    baseURL: "https://openrouter.ai/api/v1"
    apiKey: "{{env.OPENROUTER_API_KEY}}"
    model: "anthropic/claude-3.5-sonnet"
```

api/anthropic - Anthropic Claude API

  • Support for Claude 3.7 Sonnet, Claude 3.5 Sonnet, Claude 3 Opus, and Haiku
  • Direct use with your Anthropic API key

```yaml
- id: "claude_agent"
  provider: "api/anthropic"
  inline:
    apiKey: "{{env.ANTHROPIC_API_KEY}}"
    model: "claude-sonnet-4-5-20250929"
```

api/ollama - Ollama (Local Models)

  • Local open-source models like Llama and Mistral
  • Runs locally without an internet connection
  • Free to use

```yaml
- id: "ollama_agent"
  provider: "api/ollama"
  inline:
    baseURL: "http://localhost:11434"
    model: "llama3.2"
```

🔮 Coming Soon

These providers will be added in future versions:

  • Google Gemini (api/google) - Support for Gemini Pro, Ultra
  • AWS Bedrock (api/bedrock) - Claude, Titan, Llama, and more
  • LiteLLM (api/litellm) - Unified proxy for 100+ AI providers

3. Tool Calling - AI Can Read and Write Files Directly

API Providers come with built-in Tool Calling capabilities. AI agents can now use the following tools:

Query Mode (Read-Only)

  • read_file: Read files
  • grep: Pattern search
  • ls: Directory listing

Execute Mode (Write-Enabled)

  • write_file: Write files
  • replace: Replace text
  • run_shell_command: Execute shell commands

How it works:

  1. User requests "Read the README.md file for me"
  2. AI calls the read_file tool to fetch file content
  3. AI analyzes the content and responds

Query vs Execute Mode:

  • Query Mode: Only read-only tools available (safe)
  • Execute Mode: Can modify files and run shell commands (use with care)

```bash
# Query mode - read-only
crewx query "@my_agent Analyze the README.md file"

# Execute mode - can modify files
crewx execute "@my_agent Add an installation section to README.md"
```
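The query/execute split boils down to which tool table the agent is allowed to dispatch into. A minimal sketch of that gating in Python; the tool tables and function names are assumptions, not CrewX internals:

```python
import os

# Illustrative query-vs-execute tool gating. Query mode only ever
# sees the read-only table; execute mode adds write-enabled tools.
READ_ONLY_TOOLS = {
    "read_file": lambda path: open(path).read(),
    "ls": lambda path=".": sorted(os.listdir(path)),
}
WRITE_TOOLS = {
    **READ_ONLY_TOOLS,
    "write_file": lambda path, text: open(path, "w").write(text),
}

def call_tool(mode: str, name: str, **args):
    tools = READ_ONLY_TOOLS if mode == "query" else WRITE_TOOLS
    if name not in tools:
        raise PermissionError(f"tool {name!r} not available in {mode} mode")
    return tools[name](**args)
```

A request for write_file in query mode is rejected before the tool ever runs, which is what makes query mode safe by construction.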

4. Runtime Model Override

Specify a default model in agent configuration, but use a different model when needed:

```yaml
agents:
  - id: "smart_agent"
    provider: "api/anthropic"
    inline:
      apiKey: "{{env.ANTHROPIC_API_KEY}}"
      model: "claude-3-5-sonnet-20241022"  # Default model
```

```bash
# Use default model
crewx q "@smart_agent Simple question"

# Override with Opus model (for complex tasks)
crewx q "@smart_agent:claude-3-opus-20240229 Design complex architecture"

# Override with Haiku model (for fast response)
crewx q "@smart_agent:claude-3-haiku-20240307 Quick question"
```

📋 CLI Provider vs API Provider Comparison

| Feature | CLI Provider | API Provider |
| --- | --- | --- |
| Setup | CLI tool installation required | API key only |
| Authentication | Managed by CLI tool | Provide API key directly |
| Models | Specified by CLI tool | Specified in agents.yaml |
| Tool Calling | Depends on CLI tool | Built-in CrewX tools |
| Cost | Requires CLI subscription | Based on API usage |
| Network | CLI tool → API | Direct API calls |
| Flexibility | Limited by CLI tool capabilities | Full API functionality |

When to Use What?

Recommended for CLI Provider:

  • ✅ Already using Claude Code, Gemini Code Assist, or GitHub Copilot
  • ✅ Need IDE integration and additional CLI tool features
  • ✅ Want to delegate authentication management to the CLI tool

Recommended for API Provider:

  • ✅ Want to manage API keys directly
  • ✅ Want to freely switch between models
  • ✅ Use local models like Ollama
  • ✅ Use LiteLLM for unified multi-provider management
  • ✅ Want to leverage Tool Calling capabilities to the fullest

🚀 Quick Start Examples

Example 1: Code Review Agent with Anthropic API

```yaml
agents:
  - id: "code_reviewer"
    provider: "api/anthropic"
    inline:
      apiKey: "{{env.ANTHROPIC_API_KEY}}"
      model: "claude-sonnet-4-5-20250929"
      prompt: |
        You are a professional code reviewer.
        Find bugs, performance issues, and security vulnerabilities
        and suggest improvements.
```

```bash
# Review file (Query mode - read-only)
crewx q "@code_reviewer Review src/app.ts"

# Perform fixes (Execute mode)
crewx x "@code_reviewer Fix security issues in src/app.ts"
```

Example 2: Multi-Agent with OpenAI + Ollama

```yaml
agents:
  - id: "gpt_architect"
    provider: "api/openai"
    inline:
      apiKey: "{{env.OPENAI_API_KEY}}"
      model: "gpt-4-turbo-preview"
      prompt: |
        You are a system architect.

  - id: "local_coder"
    provider: "api/ollama"
    inline:
      baseURL: "http://localhost:11434"
      model: "llama3.2"
      prompt: |
        You are a code implementation expert.
```

```bash
# Architecture design with GPT-4
crewx q "@gpt_architect Design microservices architecture"

# Code implementation with local Llama (cost savings)
crewx x "@local_coder Implement API endpoints"

# Query both agents simultaneously
crewx q "@gpt_architect @local_coder Analyze pros and cons of this design"
```

Example 3: Unified Provider Management with LiteLLM

```yaml
agents:
  - id: "litellm_agent"
    provider: "api/litellm"
    inline:
      baseURL: "http://localhost:4000"
      apiKey: "{{env.LITELLM_API_KEY}}"
      model: "gpt-4"  # Routed by LiteLLM
```

```bash
# Start LiteLLM server
litellm --config litellm_config.yaml

# Use in CrewX
crewx q "@litellm_agent Your question here"
```

Note: the dedicated api/litellm provider is listed under Coming Soon above. Because LiteLLM exposes an OpenAI-compatible API, pointing api/openai at LiteLLM's baseURL is a possible interim alternative until it lands.

📦 Additional Improvements

  • Environment Variable Substitution: Safe API key management using {{env.VAR}} syntax
  • Inline Configuration Priority: Override root-level settings per agent
  • Agent ID Extraction: Accurately parse API agents from YAML
  • Improved Parallel Processing: Normalize provider formats during multi-agent execution
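The `{{env.VAR}}` substitution listed above can be pictured as a regex replacement over the raw configuration text before parsing. An illustrative sketch, not CrewX's actual loader:

```python
import os
import re

# Matches {{env.NAME}} where NAME is a valid environment variable name.
ENV_PATTERN = re.compile(r"\{\{env\.([A-Za-z_][A-Za-z0-9_]*)\}\}")

def substitute_env(text: str) -> str:
    """Replace each {{env.NAME}} with the value of $NAME (empty if unset)."""
    return ENV_PATTERN.sub(lambda m: os.environ.get(m.group(1), ""), text)

os.environ["ANTHROPIC_API_KEY"] = "sk-demo"
print(substitute_env('apiKey: "{{env.ANTHROPIC_API_KEY}}"'))
# → apiKey: "sk-demo"
```

Because the key is resolved from the environment at load time, it never needs to appear in the YAML file itself.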

🔧 How to Upgrade

```bash
# Update via NPM
npm update -g crewx

# Or reinstall
npm install -g crewx@latest

# Verify version
crewx --version  # Should output 0.6.0
```

Your existing crewx.yaml configuration will continue to work as before. To use API Providers, simply add agents with the provider: "api/..." format as shown in the examples above.

📚 Next Steps

Upgrade to CrewX 0.6.0 and try API Providers!

```bash
# 1. Set API keys
export ANTHROPIC_API_KEY="your-key"
export OPENAI_API_KEY="your-key"

# 2. Configure crewx.yaml
vim crewx.yaml

# 3. Run agents
crewx q "@my_agent Hello!"
```

Have questions? Leave them on GitHub Issues! 🙌