The Context Engine is configured through a global .toml file, which is provided during initial installation. The sections below describe the settings you can modify to suit your needs.
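
For orientation, the sections documented on this page combine into a single file shaped roughly like this (a sketch only; keys and values are covered in the sections below):

```toml
# Overview of the sections documented on this page (keys and values elided).
[openai_api]                  # OpenAI credentials
[anthropic_api]               # Anthropic credentials
[aws]                         # AWS credentials for Bedrock routing
[descriptor]                  # code description during indexing
[onprem_models]               # LLM gateway base URLs
[deep_ask_research_preview]   # deep-research model settings
[sentry]                      # error tracking
[mixpanel]                    # analytics
```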

Models configuration (Required)

The Context Engine requires at least:
  • One embedding model
  • One chat model capable of agentic (multi-step) execution
Additional fast models can improve performance but are not required.

Model types

1. Embedding model (Required)

Drives chunking and retrieval. May require tuning if replaced. Example: text-embedding-3-large

2. Super-fast model (Optional)

A lightweight model for very quick operations. If not provided, the agentic model handles these tasks, but indexing may become slower. Example: gpt-4.1-nano. Recommendation: if no fast or super-fast model is available, disable code description during indexing to avoid slowdowns:
[descriptor]
enabled = false

3. Fast chat model (Optional)

Used for quick, non-reasoning tasks. If not provided, the agentic model can be used instead, with some performance impact. Example: gpt-4.1-mini

4. Agentic-capable chat model (Required)

Runs the core agent workflows and multi-step reasoning. Example: claude-sonnet-4-5

Required models summary

| Purpose | Model example | Required |
| --- | --- | --- |
| Embedding | text-embedding-3-large | Yes |
| Agentic-capable chat | claude-sonnet-4-5 | Yes |
| Fast chat model | gpt-4.1-mini | Optional |
| Super-fast model | gpt-4.1-nano | Optional |
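
The two required roles can be satisfied with the default credential sections alone. A minimal sketch, assuming the role-to-provider split suggested by the examples above (the actual model selection is defined in the config file shipped at installation):

```toml
# Minimal viable setup: one embedding model + one agentic-capable chat model.
# The role comments below are assumptions based on the example models above.
[openai_api]
api_key = '...'   # e.g. embedding via text-embedding-3-large

[anthropic_api]
key = '...'       # e.g. agentic chat via claude-sonnet-4-5
```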

Example model configurations

OpenAI-only example

| Role | Model | Required |
| --- | --- | --- |
| Embedding | openai/text-embedding-3-large | Yes |
| Super-fast model | openai/gpt-4.1-nano | Optional |
| Fast chat model | openai/gpt-4.1-mini | Optional |
| Agentic-capable chat model | openai/gpt-5.1 | Yes |

Bedrock-only example

| Role | Model | Required |
| --- | --- | --- |
| Embedding | bedrock/amazon.titan-embed-text-v2:0 | Yes |
| Super-fast model | bedrock/anthropic.claude-haiku-4-5-20251001-v1:0 | Optional |
| Fast chat model | bedrock/anthropic.claude-haiku-4-5-20251001-v1:0 | Optional |
| Agentic-capable chat model | bedrock/global.anthropic.claude-sonnet-4-5-20250929-v1:0 | Yes |

Default configuration

[anthropic_api]
key = '...'

[openai_api]
api_key = '...'

Bedrock configuration

The following environment variable must be set to enable Bedrock routing:
  • AI_MODELS_FILE=ai_models_bedrock.py
[aws]
AWS_ACCESS_KEY_ID = "KEY_ID"
AWS_SECRET_ACCESS_KEY = "SECRET"
AWS_REGION_NAME = "us-east-1"

[deep_ask_research_preview]
llm_model = "bedrock/global.anthropic.claude-sonnet-4-5-20250929-v1:0"
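
The routing switch above is a plain environment variable; as a sketch, it can be exported in the shell that launches the engine (only the variable name and value come from these docs, the launch mechanism itself is not shown here):

```shell
# Select the Bedrock model-routing module before starting the Context Engine.
export AI_MODELS_FILE=ai_models_bedrock.py
```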

LLM gateway support

Our product supports integration with custom LLM gateways, provided they implement the same API interfaces as the official OpenAI or Anthropic model endpoints. The following environment variable must be set in all deployments to enable LLM gateway routing:
  • AI_MODELS_FILE=ai_models_onprem.py
[onprem_models]
[onprem_models.lite]
api_base = "<INSERT_GATEWAY_BASE_URL_HERE>"
[onprem_models.main]
api_base = "<INSERT_GATEWAY_BASE_URL_HERE>"
[onprem_models.remote_rag]
api_base = "<INSERT_GATEWAY_BASE_URL_HERE>"

[deep_ask_research_preview]
llm_base_url = "<INSERT_GATEWAY_BASE_URL_HERE>"
If you have unique integration needs, please reach out to the Qodo team.
Telemetry

  • Sentry - Used for error tracking. When enabled, only code crashes and traces are emitted.
  • Mixpanel - Used for analytics only. No user data is sent.
  • CLIENT_DISPLAY_NAME - The name of your org, e.g. "qodo_ai", used as an identifier for the tools above.
CLIENT_DISPLAY_NAME = "N/A"


[sentry]
dsn = "..."
environment = "PRODUCTION"


[mixpanel]
token = "..."
api_secret = "..."