Claude 3.5 Haiku

Anthropic

Fast, cost-effective model for high-volume tasks. Great for code review and simple queries.

Context Window: 200K tokens
Released: October 2024
Best For: Code review, high-volume tasks, simple queries
  • Prompt caching (see the API sketch after this list)
  • Fast inference
  • Low cost
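Prompt caching is the feature most relevant to the pricing below, since cached input is billed at a steep discount. The following is a minimal sketch of calling Claude 3.5 Haiku with a cached system prompt through the Anthropic Python SDK; the model ID, prompts, and placeholder content are illustrative assumptions, and older SDK versions may require a prompt-caching beta header.

```python
# Minimal sketch: calling Claude 3.5 Haiku with prompt caching through the
# Anthropic Python SDK. The system prompt and user message are placeholders;
# the cache_control mechanism and usage fields are the point here.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

STYLE_GUIDE = "...large, reusable review guidelines go here..."  # placeholder prefix

response = client.messages.create(
    model="claude-3-5-haiku-20241022",  # assumed Claude 3.5 Haiku model ID
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "You are a code reviewer.\n\n" + STYLE_GUIDE,
            "cache_control": {"type": "ephemeral"},  # cache this large prefix
        }
    ],
    messages=[
        {"role": "user",
         "content": "Review this function for style issues:\n\ndef add(a,b): return a+b"},
    ],
)

print(response.content[0].text)
# usage.cache_creation_input_tokens and usage.cache_read_input_tokens show how
# much of the prompt was billed at the $1.00/M and $0.08/M cache rates.
print(response.usage)
```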
Claude 3.5 Haiku Pricing

Token Type                Price per Million
Input tokens              $0.80
Output tokens             $4.00
Cache read tokens         $0.08
Cache creation tokens     $1.00

Estimated Cost by Project Size

Realistic cost estimates for common coding scenarios. Assumes a 30% cache hit rate where caching is available.

Scenario                     Token Usage                     Estimated Cost
Small Script (1K lines)      50K input / 30K output          $0.16
Medium Feature (10K lines)   500K input / 200K output        $1.24
Large Project (50K lines)    2,500K input / 1,000K output    $6.21
Code Review (5K lines)       250K input / 25K output         $0.32
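These estimates can be reproduced with simple arithmetic from the pricing table above. The sketch below assumes that 30% of input tokens hit the cache (billed at the cache-read rate) and that the same tokens are also written to the cache once per scenario (billed at the cache-creation rate); those assumptions are inferred from the published figures rather than stated on this page.

```python
# Sketch of the arithmetic behind the estimates above, using the Claude 3.5
# Haiku prices from the pricing table. The 30% cache-hit assumption and the
# choice to bill cached tokens at both the cache-read and cache-creation
# rates are inferred from the published figures, not a stated methodology.
INPUT_PRICE = 0.80        # $ per million input tokens
OUTPUT_PRICE = 4.00       # $ per million output tokens
CACHE_READ_PRICE = 0.08   # $ per million cache-read tokens
CACHE_WRITE_PRICE = 1.00  # $ per million cache-creation tokens

def estimate_cost(input_tokens, output_tokens, cache_hit_rate=0.30):
    cached = input_tokens * cache_hit_rate
    uncached = input_tokens - cached
    total = (
        uncached * INPUT_PRICE
        + cached * CACHE_READ_PRICE
        + cached * CACHE_WRITE_PRICE   # cached prefix also written to the cache once
        + output_tokens * OUTPUT_PRICE
    ) / 1_000_000
    return round(total, 2)

scenarios = {
    "Small Script (1K lines)":    (50_000, 30_000),
    "Medium Feature (10K lines)": (500_000, 200_000),
    "Large Project (50K lines)":  (2_500_000, 1_000_000),
    "Code Review (5K lines)":     (250_000, 25_000),
}
for name, (inp, out) in scenarios.items():
    print(f"{name}: ${estimate_cost(inp, out):.2f}")
# Prints $0.16, $1.24, $6.21, and $0.32, matching the table above.
```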

Benchmark Performance — Claude 3.5 Haiku

Aggregated scores from published third-party benchmarks, normalized to a 0-100 scale (higher is better). SWE-bench Verified measures real GitHub issue resolution, LiveCodeBench measures competitive programming ability, HumanEval measures basic code generation, and BigCodeBench measures practical, multi-step coding tasks.

Overall Score: 52/100

Benchmark               Score
SWE-bench Verified      45
LiveCodeBench           55
HumanEval               75
BigCodeBench            38

Sources: SWE-bench Verified, LiveCodeBench, HumanEval, BigCodeBench

Get Access to Claude 3.5 Haiku

Ready to start using Claude 3.5 Haiku? Get API access directly from Anthropic.

How Does Claude 3.5 Haiku Compare?

Model                            Input ($/M)    Medium Feature Cost
Claude 3.5 Haiku (this model)    $0.80          $1.24
Qwen Coder Plus                  $0.80          $1.08
Claude 4 Haiku                   $0.80          $1.24
Claude Sonnet 4 Lite             $1.00          $1.55
DeepSeek Reasoner (R1)           $0.55          $0.63
GPT-3.5 Turbo                    $0.50          $0.48
