Mistral Nemo


Compact 12B open-weight model co-developed with NVIDIA. Competitive coding performance at very low cost.

Context Window: 128K tokens
Released: 2024-07
Best For: Self-hosted deployments, cost-sensitive coding, edge deployments
  • Open weights
  • Self-hostable
  • Efficient inference

    Mistral Nemo Pricing

    Token Type      Price per Million
    Input tokens    $0.150
    Output tokens   $0.150

    Estimated Cost by Project Size

    Realistic cost estimates for common coding scenarios. Assumes 30% cache hit rate where caching is available.

    Scenario                     Token Usage                    Estimated Cost
    Small Script (1K lines)      50K input / 30K output         <$0.01
    Medium Feature (10K lines)   500K input / 200K output       $0.08
    Large Project (50K lines)    2,500K input / 1,000K output   $0.41
    Code Review (5K lines)       250K input / 25K output        $0.03
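The table's figures can be reproduced with a short sketch. It assumes cached input tokens are billed at zero; that assumption is mine, inferred so the results match the table, since the page only states a 30% cache hit rate.

```python
def estimate_cost(input_tokens, output_tokens,
                  price_in=0.15, price_out=0.15, cache_hit=0.30):
    """Estimated cost in USD; prices are per million tokens.

    Assumes cached input tokens cost nothing -- an assumption made
    here so the figures match the table above.
    """
    billed_input = input_tokens * (1 - cache_hit)
    return (billed_input * price_in + output_tokens * price_out) / 1_000_000

# Medium Feature (10K lines): 500K input / 200K output
print(f"${estimate_cost(500_000, 200_000):.2f}")  # $0.08, matching the table
```

The same formula yields $0.41 for the Large Project row (2,500K input / 1,000K output).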

    Benchmark Performance — Mistral Nemo

    Aggregated scores from published third-party benchmarks, normalized to a 0-100 scale; higher is better. SWE-bench Verified measures real GitHub issue resolution, LiveCodeBench measures competitive programming ability, HumanEval measures basic code generation, and BigCodeBench measures practical, multi-step coding tasks.

    Overall Score: 48/100

    Benchmark            Score
    SWE-bench Verified   40
    LiveCodeBench        50
    HumanEval            70
    BigCodeBench         32

    Sources: SWE-bench Verified, LiveCodeBench, HumanEval, BigCodeBench
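The overall score of 48 is consistent with an unweighted mean of the four normalized benchmark scores. Note that this averaging scheme is an assumption; the page does not state how the aggregate is computed.

```python
# Normalized (0-100) benchmark scores from the table above.
scores = {
    "SWE-bench Verified": 40,
    "LiveCodeBench": 50,
    "HumanEval": 70,
    "BigCodeBench": 32,
}

# Unweighted mean -- an assumed aggregation that happens to
# reproduce the published overall score of 48/100.
overall = sum(scores.values()) / len(scores)
print(overall)  # 48.0
```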

    Get Access to Mistral Nemo

    Ready to start using Mistral Nemo? Get API access directly from Mistral.

    Get API Access → Try Mistral Nemo Free →

    How Does Mistral Nemo Compare?

    Model              Input ($/M)   Medium Feature Cost
    Mistral Nemo       $0.150        $0.08
    GPT-4o mini        $0.150        $0.18
    Gemini 2.5 Flash   $0.150        $0.17
    Qwen 3 Turbo       $0.150        $0.17
    DeepSeek Jiuge     $0.150        $0.17
    Gemini 2.0 Flash   $0.100        $0.12
