
Migrate from OpenRouter in 90 seconds

Only the base_url and API key change. Your OpenAI SDK, LangChain, LlamaIndex, model IDs, prompts, tools, streaming — everything keeps working.

Two lines changed

Before and after — side by side. Everything else stays the same.

Before — OpenRouter
from openai import OpenAI

client = OpenAI(
  base_url = "https://openrouter.ai/api/v1",
  api_key  = "sk-or-v1-..."
)

resp = client.chat.completions.create(
  model    = "openai/gpt-5.4",
  messages = [{"role": "user", "content": "hi"}],
)

After — AI Router
from openai import OpenAI

client = OpenAI(
  base_url = "https://api.airouter.kz/api/v1",
  api_key  = "air_live_..."
)

resp = client.chat.completions.create(
  model    = "openai/gpt-5.4",
  messages = [{"role": "user", "content": "hi"}],
)

# messages, model IDs, tools, response_format, stream=True — unchanged

Migration checklist

Five steps from signup to production. Most teams finish within an hour.

  1. Create an account

    Sign up at app.airouter.kz as an enterprise customer. KYC for your legal entity and API key issuance are completed within one business day.

  2. Top up balance

    Bank transfer against invoice. Minimum top-up is the equivalent of $500. Balance credited in USD microdollars.
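
The balance arithmetic can be sketched as follows, assuming the conventional definition of 1 USD = 1,000,000 microdollars (confirm the exact convention against your invoice):

```python
MICRODOLLARS_PER_USD = 1_000_000  # assumed: 1 USD = 1e6 microdollars

def usd_to_microdollars(usd: float) -> int:
    """Convert a USD amount to an integer microdollar balance."""
    return round(usd * MICRODOLLARS_PER_USD)

def microdollars_to_usd(micro: int) -> float:
    """Convert a microdollar balance back to USD for display."""
    return micro / MICRODOLLARS_PER_USD

# The $500 minimum top-up, as credited to your balance:
minimum_topup = usd_to_microdollars(500)  # 500_000_000 microdollars
```

Integer microdollars avoid floating-point drift when many small per-request charges are summed.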

  3. Generate API key

    Key with air_live_ prefix. Configure PII masking, rate limits, and per-key model allowlists.

  4. Change base_url

    One line in your OpenAI client init. The rest of your code — SDK calls, prompts, tools — stays the same.
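
One way to make that one-line change (and any later rollback) a pure config change is to read the endpoint and key from the environment. A minimal sketch; the env var names here are our own choice, not a requirement:

```python
import os

def client_kwargs() -> dict:
    """Resolve OpenAI-client kwargs from the environment so switching
    providers, or rolling back, is a config change rather than a code
    change. LLM_BASE_URL / LLM_API_KEY are illustrative names."""
    return {
        "base_url": os.environ.get("LLM_BASE_URL", "https://api.airouter.kz/api/v1"),
        "api_key": os.environ["LLM_API_KEY"],  # air_live_... (or sk-or-v1-... to roll back)
    }

# client = OpenAI(**client_kwargs())
```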

  5. Run a canary

    Route 1–5% of production traffic through AI Router. Cross-check tokens, cost, latency in the dashboard, then cut over.
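
The canary split can be made deterministic and sticky by hashing a stable ID into percentage buckets, so the same request (or user) always lands on the same side and dashboard comparisons stay clean. A sketch of one common approach, not AI Router's own mechanism:

```python
import hashlib

def use_canary(request_id: str, canary_pct: int) -> bool:
    """Deterministically route canary_pct% of traffic to the new endpoint.
    Hashing (rather than random.random) keeps routing stable across
    retries and process restarts."""
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    return bucket < canary_pct

base_url = (
    "https://api.airouter.kz/api/v1"
    if use_canary("req-123", 5)
    else "https://openrouter.ai/api/v1"
)
```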

What works identically

We maintain full API parity with OpenRouter and add enterprise features on top.

Works identically

  • POST /chat/completions with full OpenAI schema
  • GET /models — same 500+ model catalog
  • Model IDs in provider/model-name format
  • streaming (SSE), tool calls, response_format, function calling
  • vision, structured outputs, prompt caching
  • OpenAI SDK, LangChain, LlamaIndex, LiteLLM, Vercel AI SDK
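
Because the schema is unchanged, a tool-calling request body written for OpenRouter is valid verbatim against the new base_url. A sketch of the payload only; the tool name and fields are our own example, not part of either API:

```python
# A standard OpenAI-schema chat-completions request with a tool definition.
# Nothing here is AI Router-specific: the same dict works on either endpoint.
payload = {
    "model": "openai/gpt-5.4",
    "messages": [{"role": "user", "content": "What's the weather in Almaty?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # example tool, defined by your app
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "stream": True,  # SSE streaming is likewise unchanged
}
# resp = client.chat.completions.create(**payload)
```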

Differences in your favor

  • Zero markup on the API — you pay the provider's exact price
  • B2B invoicing and bank transfer instead of credit card
  • Contractual 99.9% SLA with monetary credits
  • Data residency and regulatory compliance labels
  • Per-key PII masking policies
  • Dedicated TAM and direct escalation channel

Concierge migration for enterprise

If you have dozens of integrations, we migrate them with your team.

01

Integration audit

We find every OpenRouter call in your repos (code, configs, secrets, CI/CD) and prepare an exact change list.

02

Dual-run in parallel

We mirror traffic to AI Router in staging. Cross-check tokens, latency, cost per request against your keys.
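
The per-request cross-check can be automated over your logs. A hedged sketch: the field names below are our own, pulled from whatever your logging pipeline records, not an API contract:

```python
def compare_runs(openrouter: dict, airouter: dict, latency_slack: float = 1.2) -> list:
    """Cross-check one mirrored request between the two providers.
    Each dict is assumed to hold 'total_tokens', 'cost_usd', and
    'latency_s' extracted from your own request logs."""
    issues = []
    if openrouter["total_tokens"] != airouter["total_tokens"]:
        issues.append("token count mismatch")
    if airouter["cost_usd"] > openrouter["cost_usd"]:
        # with zero markup, cost should not exceed the provider's price
        issues.append("cost regression")
    if airouter["latency_s"] > openrouter["latency_s"] * latency_slack:
        issues.append("latency regression")
    return issues
```

An empty list for (nearly) every mirrored request is the signal that the canary can proceed to cutover.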

03

Planned cutover

Phased traffic shift: 1% → 10% → 50% → 100% with one-line rollback in config. Outage-free migration.

Ready to migrate?

Get a key in one business day. Concierge migration is free for enterprise customers.