Commands

File-based commands follow the same pattern: codesight <command> <file> [options]

review

Full code review with severity-tagged issues.

Terminal
$ codesight review src/main.py
$ codesight review src/main.py --provider anthropic
$ codesight review src/main.py -c "This handles user uploads"

Output sections: Summary, Issues (crit/warn/info), Suggestions.

The -c, --context flag is also available on bugs, security, docs, explain, and refactor. It passes extra context to the model (purpose of the file, threat model, constraints) before analysis.

bugs

Focused bug detection - logic errors, race conditions, resource leaks, edge cases.

Terminal
$ codesight bugs lib/parser.py

Output sections: Bugs Found, Risk Assessment.

security

Security audit with CWE IDs, OWASP mapping, and remediation code.

Terminal
$ codesight security src/auth.py
$ codesight security src/auth.py -c "Public endpoint, handles auth tokens"
$ codesight security src/auth.py --pipeline ollama/llama3:openai/gpt-5.4

--pipeline TRIAGE:VERIFY

Security-only flag. Two-stage run: a fast/cheap triage model flags suspicious regions, then a stronger verify model analyses only the flagged sections. Typical savings: 70-85% vs. running the strong model on the whole file.

Format: provider/model:provider/model

Pipeline examples
# Local triage -> cloud verify (cheap)
$ codesight security src/auth.py --pipeline ollama/llama3:openai/gpt-5.4

# Groq triage -> Claude verify (fast)
$ codesight security src/auth.py --pipeline groq/llama-3.3-70b-versatile:anthropic/claude-opus-4-6-20251101

# Same provider, two models
$ codesight security src/auth.py --pipeline openai/gpt-5.3-codex:openai/gpt-5.4
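
The quoted 70-85% savings can be sanity-checked with a quick back-of-the-envelope calculation. All prices, token counts, and the flagged fraction below are made-up illustrative assumptions, not codesight internals:

```python
# Two-stage pipeline cost model: triage reads the whole file,
# the stronger verify model reads only the flagged fraction.
def pipeline_cost(tokens, triage_rate, verify_rate, flagged_fraction):
    return tokens * triage_rate + tokens * flagged_fraction * verify_rate

def single_model_cost(tokens, verify_rate):
    return tokens * verify_rate

tokens = 10_000                         # hypothetical file size in tokens
triage_rate, verify_rate = 0.0001, 0.01 # hypothetical per-token prices
flagged_fraction = 0.2                  # triage flags 20% of the file

full = single_model_cost(tokens, verify_rate)                          # 100.0
piped = pipeline_cost(tokens, triage_rate, verify_rate, flagged_fraction)  # 21.0
savings = 1 - piped / full                                             # 0.79
```

With these numbers the pipeline costs 21 units instead of 100, a 79% saving, which falls inside the quoted band; the real figure depends on how much of the file triage flags.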

docs

Auto-generate docstrings and module documentation.

Terminal
$ codesight docs utils/helpers.py

explain

Plain-language breakdown of complex code.

Terminal
$ codesight explain legacy/processor.py

refactor

Refactoring suggestions with before/after diffs.

Terminal
$ codesight refactor src/handlers.py

scan

Scan an entire directory with a progress bar.

Terminal
$ codesight scan .
$ codesight scan src/ --task security
$ codesight scan . --ext .py .js

diff

Review only git-changed files.

Terminal
$ codesight diff
$ codesight diff --staged --task security

benchmark

Test LLMs against a curated set of vulnerable code samples.

Terminal
$ codesight benchmark
$ codesight benchmark --models gpt-5.4 llama3 --json

templates

Manage and run custom prompt templates.

Terminal
$ codesight templates list
$ codesight templates run quick-review src/main.py
$ codesight templates add my-template
$ codesight templates delete my-template

config

Interactive provider setup. Arrow-key menu: pick a provider, enter credentials, save to ~/.codesight/config.json.

Terminal
$ codesight config

Supports OpenAI, Anthropic, Azure AI Foundry (Claude), Google Vertex AI, Ollama, and a Custom entry with presets for OpenRouter, Groq, Together, Mistral, xAI, Fireworks, DeepSeek, Perplexity, Cerebras, Cohere, and any OpenAI-compatible URL. After saving, the wizard offers to run health.
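
As a rough illustration, a saved config might look something like the fragment below. The field names here are guesses for illustration only; the actual schema is whatever codesight config writes to ~/.codesight/config.json:

```json
{
  "provider": "openrouter",
  "providers": {
    "openrouter": {
      "base_url": "https://openrouter.ai/api/v1",
      "api_key": "sk-...",
      "model": "..."
    }
  }
}
```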

health

Check provider connectivity. Prints the active provider, masked credentials, and runs a lightweight connection test.

Terminal
$ codesight health
$ codesight health --provider openrouter

On failure, outputs provider-specific troubleshooting (missing API key, unreachable host, wrong Azure deployment name, Ollama not running, etc.).

Global Flags

Flag              Description
-p, --provider    Override default provider. Accepts openai, anthropic, google, ollama, or any custom label saved in config (e.g. openrouter, groq, azure).
-o, --output      Output format: markdown, json, plain, sarif
-v, --version     Show version

Exit Codes

Code    Meaning
0       Clean - no issues or only info-level findings
1       Warnings - high-severity issues found
2       Critical - critical-severity issues found
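
These codes make codesight easy to gate in CI without parsing its output. A minimal sketch of that logic follows; the gate function is hypothetical, not part of codesight, and in a real pipeline it would be fed $? from a codesight run:

```shell
# Map codesight's documented exit codes to a CI decision.
# In CI you would run, e.g.: codesight diff --staged --task security; gate "$?"
gate() {
  case "$1" in
    0) echo "pass" ;;               # clean, or info-level findings only
    1) echo "pass-with-warnings" ;; # high-severity warnings found
    2) echo "fail" ;;               # critical findings block the merge
    *) echo "error" ;;              # unexpected code: treat as tool failure
  esac
}
```

A stricter team could fail on code 1 as well; the point is that the three outcomes are distinguishable from the exit status alone.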