Multi-Provider · Open Source · No Telemetry
██╗  ██╗ ██████╗██╗      █████╗ ██╗   ██╗██████╗ ███████╗
╚██╗██╔╝██╔════╝██║     ██╔══██╗██║   ██║██╔══██╗██╔════╝
 ╚███╔╝ ██║     ██║     ███████║██║   ██║██║  ██║█████╗
 ██╔██╗ ██║     ██║     ██╔══██║██║   ██║██║  ██║██╔══╝
██╔╝ ██╗╚██████╗███████╗██║  ██║╚██████╔╝██████╔╝███████╗
╚═╝  ╚═╝ ╚═════╝╚══════╝╚═╝  ╚═╝ ╚═════╝ ╚═════╝ ╚══════╝

Any LLM.  One Terminal.
Zero Lock-in.

Claude Code — adapted for every model.
OpenAI, Gemini, DeepSeek, Groq, Ollama and 200+ providers.
Same powerful workflow. Your API key. Your choice.


Works with  OpenAI · Gemini · DeepSeek · Groq · Ollama · GitHub Models · Azure · AWS Bedrock · Vertex AI

Quick Start

Install in seconds

Choose your provider and start coding. No account needed for local models.

# Install globally with npm
$ npm install -g @gitlawb/xclaude
# Then launch
$ xclaude
# Use /provider inside to configure your LLM

# OpenAI (macOS / Linux)
$ export OPENAI_API_KEY=sk-your-key-here
$ xclaude --provider openai --model gpt-4o

# OpenAI (Windows PowerShell)
PS> $env:OPENAI_API_KEY="sk-your-key-here"
PS> xclaude --provider openai

# DeepSeek — powerful & cost-effective
$ export DEEPSEEK_API_KEY=your-deepseek-key
$ xclaude --provider deepseek
# Or use DeepSeek-R1 for reasoning tasks
$ xclaude --provider deepseek --model deepseek-reasoner

# 100% local — no API key needed
$ ollama pull qwen2.5-coder:7b
$ xclaude --provider ollama --model qwen2.5-coder:7b
# Or use a lighter model for speed
$ xclaude --provider ollama --model llama3.2:3b

# Google Gemini — generous free tier
$ export GEMINI_API_KEY=your-gemini-key
$ xclaude --provider gemini --model gemini-2.0-flash

# Interactive provider wizard (all providers)
$ xclaude
# ...then type /provider at the prompt
Supported Providers

Works with everything

Cloud or local. Fast or powerful. Cheap or free. Pick what fits.

🤖 OpenAI · GPT-4o, o3, o4-mini · Cloud
🔷 DeepSeek · Chat, Reasoner · Cloud
💎 Gemini · 2.0 Flash, Pro · Free tier
Groq · Llama 3.3, Mixtral · Fast
🦙 Ollama · Any local model · Local
🐙 GitHub · Models, Copilot · Free
☁️ Azure · OpenAI Service · Cloud
🌿 AWS Bedrock · Claude, Llama, Titan · Cloud
🔵 Vertex AI · Gemini on GCP · Cloud
🏠 LM Studio · Local server · Local
🔥 Fireworks · Fast inference · Cloud
🛤️ OpenRouter · 200+ models · Cloud
Features

Everything Claude Code has.
For any model.

No rewrites. No compromises. Full Claude Code toolkit — minus the lock-in.

🔧 Full Tool Suite
Bash, file edit, glob, grep, web fetch, web search, agent spawning, MCP — all tools work with every provider.

🧭 Agent Routing
Route different agents to different models: a fast, cheap model for the explore agent, a powerful one for coding.
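As a sketch of what per-agent routing might look like on the command line (the `--agent` flag and the `provider/model` syntax shown here are illustrative assumptions, not confirmed Xclaude options):

```
# Hypothetical: send the explore agent to a fast Groq model
# and the main coding agent to OpenAI o3
$ xclaude --agent explore=groq/llama-3.3-70b \
          --agent code=openai/o3
```

The idea either way: exploration burns many cheap tokens, so the expensive model is reserved for the edits that matter.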

💬 Slash Commands
200+ built-in commands. /provider, /memory, /commit, /review and more — all work out of the box.
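For illustration, a session mixing the commands named above might look like this (the per-command behavior in the comments is an assumption, except for /provider, which the quick start describes as the provider configurator):

```
> /provider   # pick or switch the active LLM provider
> /memory     # inspect persistent session memory
> /review     # review pending changes
> /commit     # generate and apply a commit
```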

🔌 MCP Servers
Connect any Model Context Protocol server. Databases, APIs, file systems — extend Xclaude's reach without changing providers.
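If Xclaude keeps Claude Code's MCP management commands, wiring up a server could look like the following (the `mcp add` / `mcp list` subcommands are an assumption carried over from upstream Claude Code; the filesystem server package is a real MCP reference server):

```
# Hypothetical: register a filesystem MCP server for the current project
$ xclaude mcp add files -- npx -y @modelcontextprotocol/server-filesystem ./src
# List configured servers
$ xclaude mcp list
```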

🛡️ Zero Telemetry
All Anthropic analytics, tracking, GrowthBook flags and auto-updater calls are stripped at build time. Your data stays local.

🔄 Smart Retry
Automatic retry with exponential backoff on 429, 500, and 503 responses. Respects Retry-After headers for all providers.
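The backoff schedule described above can be sketched in plain shell: the delay doubles on each failed attempt up to a cap (the 1-second base and 32-second cap here are illustrative values, not Xclaude's actual settings):

```shell
# Compute the exponential-backoff delay for a given attempt number:
# delay = min(base * 2^attempt, cap), with base = 1s and cap = 32s.
backoff_delay() {
  attempt=$1
  base=1
  cap=32
  delay=$(( base << attempt ))    # base * 2^attempt via bit shift
  [ "$delay" -gt "$cap" ] && delay=$cap
  echo "$delay"
}

# A retry loop would sleep for these durations between attempts:
# attempt 0 -> 1s, 1 -> 2s, 2 -> 4s, ... capped at 32s,
# unless the provider sends a Retry-After header, which takes priority.
```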

⚙️ Session Memory
Persistent session state, task tracking, and auto-memory — works the same regardless of which LLM you're using.

🖥️ Vim Mode + IDE
Vim keybindings, VS Code extension support, and a rich terminal UI powered by React + Ink.