Last verified: 2025-09-22 (both models)
Claude Haiku 3.5 vs GPT-4.1 mini — Pricing & Capability Comparison
Claude Haiku 3.5 charges $0.80 per million input tokens and $4.00 per million output tokens. GPT-4.1 mini comes in at $0.40 per million input and $1.60 per million output. Context windows are 200K and 128K tokens, respectively.
Metric | Claude Haiku 3.5 | GPT-4.1 mini | Advantage |
---|---|---|---|
Input price (per 1M tokens) | $0.80 | $0.40 | GPT-4.1 mini |
Output price (per 1M tokens) | $4.00 | $1.60 | GPT-4.1 mini |
Context window | 200,000 tokens | 128,000 tokens | Claude Haiku 3.5 |
Cached input | Not published | Not published | No published data |
Cost comparison for 10K-token workloads
Side-by-side pricing for identical workloads (10,000 total tokens per request) across different input/output distributions. Because neither vendor's cached-input rate is published here, the cached scenario is priced at the standard input rate.
Scenario | Claude Haiku 3.5 | GPT-4.1 mini |
---|---|---|
Balanced conversation (50% input · 50% output) | $0.0240 | $0.0100 |
Input-heavy workflow (80% input · 20% output) | $0.0144 | $0.0064 |
Generation-heavy (30% input · 70% output) | $0.0304 | $0.0124 |
Cached system prompt (90% cached input · 10% fresh output) | $0.0112 | $0.0052 |
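The figures above follow directly from the per-million-token rates. A minimal sketch of the calculation (the `PRICES` dict and `request_cost` helper are illustrative names, not a vendor API):

```python
# Per-million-token rates from the comparison above.
PRICES = {
    "Claude Haiku 3.5": {"input": 0.80, "output": 4.00},
    "GPT-4.1 mini": {"input": 0.40, "output": 1.60},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed per-1M-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Balanced conversation: 5,000 input + 5,000 output tokens.
print(round(request_cost("Claude Haiku 3.5", 5_000, 5_000), 4))  # 0.024
print(round(request_cost("GPT-4.1 mini", 5_000, 5_000), 4))      # 0.01
```

Swapping in the other token splits (8,000/2,000 and 3,000/7,000) reproduces the remaining rows of the table.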
Frequently asked questions
Which model is cheaper per million input tokens?
GPT-4.1 mini costs $0.40 per million input tokens versus $0.80 for Claude Haiku 3.5.
How do output prices compare?
GPT-4.1 mini charges $1.60 per million output tokens, while Claude Haiku 3.5 costs $4.00 per million.
Which model supports a larger context window?
Claude Haiku 3.5 offers a 200,000-token (200K) context window versus 128K for GPT-4.1 mini.