Claude Sonnet 4

Anthropic

Anthropic's balanced model for coding and general tasks. Best price-performance ratio in the Claude family.

Context Window: 200K tokens
Released: 2025-05
Best For: Day-to-day coding, code review, documentation
  • Prompt caching
  • Extended thinking
  • Large context window

Claude Sonnet 4 Pricing

    Token Type               Price per Million
    Input tokens             $3.00
    Output tokens            $15.00
    Cache read tokens        $0.30
    Cache creation tokens    $3.75

    Estimated Cost by Project Size

    Realistic cost estimates for common coding scenarios. Assumes 30% cache hit rate where caching is available.

    Scenario                     Token Usage                    Estimated Cost
    Small Script (1K lines)      50K input / 30K output         $0.62
    Medium Feature (10K lines)   500K input / 200K output       $4.66
    Large Project (50K lines)    2,500K input / 1,000K output   $23.29
    Code Review (5K lines)       250K input / 25K output        $1.20
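    The estimates above follow directly from the per-token prices. A minimal sketch of that arithmetic, assuming the 30% cache hit rate means 30% of input tokens are billed at the cache-read rate (the page's exact methodology, e.g. how cache-creation charges are counted, isn't stated, so figures here come out slightly below the table):

    ```python
    # Claude Sonnet 4 prices in dollars per million tokens (from the table above).
    INPUT_PRICE = 3.00
    OUTPUT_PRICE = 15.00
    CACHE_READ_PRICE = 0.30

    def estimate_cost(input_tokens: int, output_tokens: int,
                      cache_hit_rate: float = 0.30) -> float:
        """Estimated cost in dollars, with a share of input tokens
        billed at the cheaper cache-read rate."""
        fresh = input_tokens * (1 - cache_hit_rate) * INPUT_PRICE / 1e6
        cached = input_tokens * cache_hit_rate * CACHE_READ_PRICE / 1e6
        output = output_tokens * OUTPUT_PRICE / 1e6
        return round(fresh + cached + output, 2)

    # Small Script scenario: 50K input / 30K output
    print(estimate_cost(50_000, 30_000))   # ~ $0.56 under these assumptions
    ```

    The bulk of every estimate is output tokens at $15.00/M; caching only discounts the input side, which is why heavy-output scenarios benefit least from a high cache hit rate.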

    Benchmark Performance — Claude Sonnet 4

    Aggregated scores from published third-party benchmarks, normalized to a 0-100 scale; higher is better. SWE-bench Verified measures real GitHub issue resolution, LiveCodeBench measures competitive programming ability, HumanEval measures basic code generation, and BigCodeBench measures practical, multi-step coding tasks.

    Overall Score         78/100
    SWE-bench Verified    74
    LiveCodeBench         82
    HumanEval             92
    BigCodeBench          64

    Sources: SWE-bench Verified, LiveCodeBench, HumanEval, BigCodeBench

    Get Access to Claude Sonnet 4

    Ready to start using Claude Sonnet 4? Get API access directly from Anthropic.


    How Does Claude Sonnet 4 Compare?

    Model                Input ($/M)   Medium Feature Cost
    Claude Sonnet 4      $3.00         $4.66
    Claude 3.5 Sonnet    $3.00         $4.66
    Claude 3 Sonnet      $3.00         $4.05
    Qwen 3.6 Plus        $3.00         $4.66
    Grok 3               $3.00         $4.05
    GPT-4o               $2.50         $3.06
