Model Selection

Unleash the full potential of AI: seamlessly switch between the world's most advanced AI models in real time to get the best, most relevant chat experience for each query.

Available Models

  1. GPT-4.1: Excellent for coding tasks.

  2. GPT-4o: Good for simpler coding tasks.

  3. o1: Slower, more deliberate reasoning model.

  4. o3-mini: Fast reasoning model for coding tasks.

  5. o3-mini-high: Great for more complex coding tasks.

  6. Claude 3.5 Sonnet: Great for tests and readability.

  7. Claude 3.7 Sonnet: First Claude model to offer extended reasoning.

  8. Gemini 2.5 Pro: Best for large-context tasks.

  9. Gemini 2.0 Flash: Ideal for quick coding tasks.

  10. DeepSeek R1: State-of-the-art reasoning model; slower than the other options (US-hosted).
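
If you want to capture this guidance in your own tooling or scripts, the sketch below restates the list as a small TypeScript helper. It is purely illustrative: the task categories and the suggestModel function are hypothetical and are not part of Qodo Gen or any Qodo API.

```typescript
// Illustrative only: maps a task category to the model suggested in the
// list above. The categories and this helper are hypothetical; they are
// not part of Qodo Gen or any Qodo API.
type TaskKind =
  | "general-coding"
  | "simple-coding"
  | "complex-coding"
  | "tests-and-readability"
  | "large-context"
  | "quick-task"
  | "deep-reasoning";

function suggestModel(task: TaskKind): string {
  switch (task) {
    case "general-coding":
      return "GPT-4.1";           // excellent for coding tasks
    case "simple-coding":
      return "GPT-4o";            // good for simpler coding tasks
    case "complex-coding":
      return "o3-mini-high";      // great for more complex coding tasks
    case "tests-and-readability":
      return "Claude 3.5 Sonnet"; // great for tests and readability
    case "large-context":
      return "Gemini 2.5 Pro";    // best for large-context tasks
    case "quick-task":
      return "Gemini 2.0 Flash";  // ideal for quick coding tasks
    case "deep-reasoning":
      return "Claude 3.7 Sonnet"; // extended reasoning
  }
}

// Example: suggestModel("tests-and-readability") returns "Claude 3.5 Sonnet"
```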

Using Model Selection

Select a model from the dropdown menu at the bottom left, below the chat box. The selected model will be used for your next query.