Providers

Four first-class providers plus a generic OpenAI-compatible adapter with presets for OpenRouter, Groq, Together AI, Mistral, xAI, DeepSeek, Azure AI Foundry, and more. Switch between them with a single flag.

Overview

| Provider | Models | Auth | Offline |
| --- | --- | --- | --- |
| OpenAI | GPT-5.4, GPT-5.3-Codex | OPENAI_API_KEY | No |
| Anthropic | Claude Opus 4.6, Claude Sonnet 4.6 | ANTHROPIC_API_KEY | No |
| Google Vertex AI | Gemini 3.1 Pro, Gemini 3.1 Flash | ADC + GOOGLE_CLOUD_PROJECT | No |
| Ollama | Llama 3, CodeLlama, Mistral, etc. | None (local) | Yes |
| Custom (OpenAI-compatible) | Any OpenAI-compatible endpoint; presets for OpenRouter, Groq, Together, Mistral, xAI, Fireworks, DeepSeek, Perplexity, Cerebras, Cohere, Azure AI Foundry | base_url + API key | No |

Switching Providers

Terminal
$ codesight review file.py --provider openai
$ codesight review file.py --provider anthropic
$ codesight review file.py --provider google
$ codesight review file.py --provider ollama
$ codesight review file.py --provider openrouter   # any custom label you saved in config

The --provider flag accepts any label saved in ~/.codesight/config.json. When you configure a custom provider through codesight config, the label you choose (e.g. openrouter, groq, azure) becomes the provider name.

OpenAI

Best overall accuracy for code analysis. Default provider.

Setup
$ export OPENAI_API_KEY="sk-..."
$ export CODESIGHT_MODEL="gpt-5.4"  # optional; this is the default

Approximate cost: $0.002 to $0.005 per file, depending on file size.
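That range follows from token counts and per-token pricing. A back-of-envelope estimator (the ~4 characters-per-token heuristic and the prices here are illustrative assumptions, not quoted provider rates):

```python
def estimate_cost(file_chars: int, price_in: float, price_out: float,
                  out_tokens: int = 500) -> float:
    """Rough per-file cost: ~4 characters per token for source code,
    plus a review-sized completion. Prices are per 1M tokens and
    purely illustrative."""
    in_tokens = file_chars / 4
    return (in_tokens * price_in + out_tokens * price_out) / 1_000_000

# A 10 KB source file at assumed prices of $1/M input, $4/M output:
cost = estimate_cost(10_000, 1.0, 4.0)
print(f"${cost:.4f}")  # $0.0045
```

Doubling the file size roughly doubles the input portion of the cost, which is why the quoted range scales with file size.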

Anthropic

Strong at nuanced reasoning and catching subtle logic bugs.

Setup
$ export ANTHROPIC_API_KEY="sk-ant-..."
$ export CODESIGHT_MODEL="claude-opus-4-6-20251101"

Google Vertex AI

Requires a Google Cloud project with Vertex AI API enabled.

Setup
$ export GOOGLE_CLOUD_PROJECT="my-project"
$ export GOOGLE_CLOUD_REGION="us-central1"
$ gcloud auth application-default login

Ollama (Local / Offline)

No API key. No data leaves your machine. Fits sensitive codebases.

Setup
$ ollama serve
$ ollama pull llama3
$ codesight review file.py --provider ollama
Air-gapped environments: Ollama runs entirely locally. Pre-download models and run CodeSight on machines with no internet access.

Custom / OpenAI-Compatible Providers

Works with any endpoint that speaks the OpenAI Chat Completions API. That covers most of the ecosystem: OpenRouter's model aggregator, fast inference on Groq and Cerebras, Azure AI Foundry deployments, and open-source hosts like Together, Fireworks, DeepSeek, Perplexity, and Mistral.
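"Speaks the OpenAI Chat Completions API" means every one of these endpoints accepts the same POST /chat/completions request shape; only the base URL, API key, and model ID change. A minimal sketch of that request (the helper name is hypothetical and nothing is actually sent; the URL and model are the OpenRouter preset values from the table below):

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Assemble the Chat Completions request shared by all
    OpenAI-compatible providers (illustrative helper)."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = build_chat_request(
    "https://openrouter.ai/api/v1", "sk-or-v1-example",
    "meta-llama/llama-4-maverick", "Review this diff.")
print(url)  # https://openrouter.ai/api/v1/chat/completions
```

Because the request shape is fixed, pointing the adapter at a different provider is purely a matter of swapping base_url, api_key, and model.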

Built-in Presets

The codesight config wizard ships ready-made entries for:

| Preset | Base URL | Default Model |
| --- | --- | --- |
| OpenRouter | https://openrouter.ai/api/v1 | meta-llama/llama-4-maverick |
| Groq | https://api.groq.com/openai/v1 | llama-3.3-70b-versatile |
| Together AI | https://api.together.xyz/v1 | meta-llama/Llama-3-70b-chat-hf |
| Mistral | https://api.mistral.ai/v1 | mistral-large-latest |
| xAI (Grok) | https://api.x.ai/v1 | grok-3 |
| Fireworks AI | https://api.fireworks.ai/inference/v1 | llama-v3p1-70b-instruct |
| DeepSeek | https://api.deepseek.com | deepseek-chat |
| Perplexity | https://api.perplexity.ai | llama-3.1-sonar-large-128k-online |
| Cerebras | https://api.cerebras.ai/v1 | llama3.1-70b |
| Cohere | https://api.cohere.ai/compatibility/v1 | command-r-plus |
| Azure AI Foundry | Your resource URL | Your deployment name |
| Custom URL | Anything OpenAI-compatible | Any model ID |

Interactive Setup

Run the config wizard and pick Custom:

Terminal
$ codesight config
  Select a provider: Custom (OpenRouter / Groq / Together / any OpenAI-compat)
  Pick a provider:   OpenRouter
  Base URL:          https://openrouter.ai/api/v1
  API key:           sk-or-v1-...
  Model name:        meta-llama/llama-4-maverick
  Config label:      openrouter

The label you pick (e.g. openrouter) becomes the provider name you pass to --provider.

Example Usage

Terminal
$ codesight review file.py --provider openrouter
$ codesight security src/auth.py --provider groq
$ codesight bugs lib/parser.py --provider azure

Config File Example

Saved entries land in ~/.codesight/config.json. Edit the file directly if needed:

~/.codesight/config.json
{
  "default_provider": "openrouter",
  "providers": {
    "openrouter": {
      "provider": "custom",
      "api_key": "sk-or-v1-...",
      "base_url": "https://openrouter.ai/api/v1",
      "model": "meta-llama/llama-4-maverick"
    },
    "groq": {
      "provider": "custom",
      "api_key": "gsk_...",
      "base_url": "https://api.groq.com/openai/v1",
      "model": "llama-3.3-70b-versatile"
    }
  }
}
Azure AI Foundry: The wizard has a dedicated Azure Foundry entry for Claude models served through Microsoft Azure's Anthropic-compatible endpoint. Pick that instead of Custom for Azure Claude deployments.

Multi-Model Pipeline

Security analysis can chain two models, using a fast local model for triage and a cloud model for deep verification:

Terminal
$ codesight security src/auth.py --pipeline ollama/llama3:openai/gpt-5.4

Triage flags potential issues fast. Only flagged areas go to the verifier, cutting cost and latency.
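The triage-then-verify flow amounts to two passes, with only flagged chunks reaching the expensive model. A toy sketch of the control flow (function names and the stand-in models are illustrative, not CodeSight internals):

```python
def run_pipeline(chunks, triage, verify):
    """Two-stage review: a cheap model screens every chunk; the
    expensive model only re-examines what triage flagged."""
    flagged = [c for c in chunks if triage(c)]
    return [(c, verify(c)) for c in flagged]

# Toy stand-ins: triage flags anything calling eval(),
# the verifier "confirms" by returning a finding.
chunks = [
    "def add(a, b): return a + b",
    "result = eval(user_input)",
]
findings = run_pipeline(
    chunks,
    triage=lambda c: "eval(" in c,
    verify=lambda c: "possible code injection via eval",
)
print(len(findings))  # 1
```

The cost saving comes from the second list comprehension iterating over the (usually much smaller) flagged subset rather than every chunk.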

Configuration File

Provider settings are stored in ~/.codesight/config.json:

~/.codesight/config.json
{
  "default_provider": "openai",
  "providers": {
    "openai": {
      "provider": "openai",
      "api_key": "sk-...",
      "model": "gpt-5.4",
      "max_tokens": 4096,
      "temperature": 0.2
    },
    "openrouter": {
      "provider": "custom",
      "api_key": "sk-or-v1-...",
      "base_url": "https://openrouter.ai/api/v1",
      "model": "meta-llama/llama-4-maverick"
    }
  }
}