Last verified 2025-09-22 (both models)
Claude Haiku 3.5 vs GPT-4.1 nano — Pricing & Capability Comparison
Claude Haiku 3.5 charges $0.80 per million input tokens and $4.00 per million output tokens; GPT-4.1 nano charges $0.10 and $0.40 respectively. Context windows are 200K tokens for Claude Haiku 3.5 and roughly 1M tokens for GPT-4.1 nano.
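Per-request cost follows directly from the per-million-token prices above; a minimal sketch (the function name is illustrative, prices hard-coded from the figures in this comparison):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float, output_price: float) -> float:
    """Dollar cost of one request, given per-million-token prices."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# 5,000 input + 5,000 output tokens on each model:
haiku = request_cost(5_000, 5_000, 0.80, 4.00)  # Claude Haiku 3.5
nano = request_cost(5_000, 5_000, 0.10, 0.40)   # GPT-4.1 nano
print(f"Haiku: ${haiku:.4f}, nano: ${nano:.4f}")  # Haiku: $0.0240, nano: $0.0025
```

The same helper reproduces every figure in the workload table further down.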
Input price (per 1M)
Claude Haiku 3.5
$0.80
GPT-4.1 nano
$0.10
GPT-4.1 nano leads here
Output price (per 1M)
Claude Haiku 3.5
$4.00
GPT-4.1 nano
$0.40
GPT-4.1 nano leads here
Context window
Claude Haiku 3.5
200,000 tokens
GPT-4.1 nano
1,047,576 tokens (~1M)
GPT-4.1 nano leads here
Cached input
Claude Haiku 3.5
Not published
GPT-4.1 nano
Not published
No published data
Cost comparison for 10K-token workloads
Side-by-side pricing for identical workloads (10,000 total tokens per request) across different distributions.
| Scenario | Claude Haiku 3.5 | GPT-4.1 nano |
| --- | --- | --- |
| Balanced conversation (50% input · 50% output) | $0.0240 | $0.0025 |
| Input-heavy workflow (80% input · 20% output) | $0.0144 | $0.0016 |
| Generation-heavy (30% input · 70% output) | $0.0304 | $0.0031 |
| Cached system prompt (90% cached input · 10% fresh output) | $0.0112 | $0.0013 |

Because neither model's cached-input price is listed above, the cached-prompt row bills cached tokens at the standard input rate.
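The table can be reproduced from the headline prices; a quick sketch (the dictionary names and structure are mine, and cached tokens are billed at the standard input rate, matching the table):

```python
# Per-million-token (input, output) prices from the comparison above.
PRICES = {"Claude Haiku 3.5": (0.80, 4.00), "GPT-4.1 nano": (0.10, 0.40)}

# Scenario name -> input share of each 10,000-token request.
SCENARIOS = {
    "Balanced conversation": 0.5,
    "Input-heavy workflow": 0.8,
    "Generation-heavy": 0.3,
    "Cached system prompt": 0.9,  # cached tokens billed at the input rate here
}

TOTAL_TOKENS = 10_000

for scenario, input_share in SCENARIOS.items():
    input_tokens = int(TOTAL_TOKENS * input_share)
    output_tokens = TOTAL_TOKENS - input_tokens
    cells = []
    for model, (in_price, out_price) in PRICES.items():
        cost = (input_tokens * in_price + output_tokens * out_price) / 1_000_000
        cells.append(f"{model}: ${cost:.4f}")
    print(f"{scenario}: " + " | ".join(cells))
```

Running this prints one line per scenario with the same dollar figures as the table.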
Frequently asked questions
Which model is cheaper per million input tokens?
GPT-4.1 nano costs $0.10 per million input tokens versus $0.80 for Claude Haiku 3.5.
How do output prices compare?
GPT-4.1 nano charges $0.40 per million output tokens, while Claude Haiku 3.5 costs $4.00 per million.
Which model supports a larger context window?
GPT-4.1 nano offers roughly 1M tokens (1,047,576) versus 200,000 (200K) for Claude Haiku 3.5.
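Context-window limits matter when sizing prompts; a rough helper for checking fit (the ~4 characters-per-token heuristic is an assumption of mine, not either vendor's tokenizer, so treat the result as an estimate):

```python
def fits_in_context(text: str, window_tokens: int,
                    chars_per_token: float = 4.0) -> bool:
    """Rough check: estimate tokens from character count, compare to the window."""
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= window_tokens

doc = "word " * 100_000  # ~500,000 characters -> ~125,000 estimated tokens
print(fits_in_context(doc, 200_000))  # Claude Haiku 3.5's 200K window -> True
```

For production use, count tokens with each provider's own tokenizer or token-counting endpoint rather than a character heuristic.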