Last verified 2025-09-22 (both models)
Claude Haiku 3.5 vs GPT-4o (fine-tuned) — Pricing & Capability Comparison
Claude Haiku 3.5 charges $0.80 per million input tokens and $4.00 per million output tokens; GPT-4o (fine-tuned) comes in at $3.75 and $15.00. Their context windows are 200K and 128K tokens respectively.
| Metric | Claude Haiku 3.5 | GPT-4o (fine-tuned) | Advantage |
|---|---|---|---|
| Input price (per 1M tokens) | $0.80 | $3.75 | Claude Haiku 3.5 |
| Output price (per 1M tokens) | $4.00 | $15.00 | Claude Haiku 3.5 |
| Context window | 200,000 tokens | 128,000 tokens | Claude Haiku 3.5 |
| Cached input (per 1M tokens) | Not published | $1.875 | GPT-4o (fine-tuned) |
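Per-request cost follows directly from these per-million-token rates. Below is a minimal sketch of that arithmetic in Python, with the prices above hardcoded; the `request_cost` helper and the model keys are illustrative and not part of either vendor's SDK.

```python
# Per-million-token prices (USD) from the comparison table above.
PRICES = {
    "claude-haiku-3.5": {"input": 0.80, "output": 4.00},
    "gpt-4o-fine-tuned": {"input": 3.75, "output": 15.00, "cached_input": 1.875},
}


def request_cost(model: str, input_tokens: int, output_tokens: int,
                 cached_input_tokens: int = 0) -> float:
    """Estimate the USD cost of a single request.

    Cached input tokens are billed at the cached rate when one is
    published, otherwise at the standard input rate.
    """
    p = PRICES[model]
    cached_rate = p.get("cached_input", p["input"])
    return (
        input_tokens * p["input"]
        + cached_input_tokens * cached_rate
        + output_tokens * p["output"]
    ) / 1_000_000


# Balanced 10K-token request: 5,000 input + 5,000 output tokens.
print(f"{request_cost('claude-haiku-3.5', 5_000, 5_000):.4f}")   # 0.0240
print(f"{request_cost('gpt-4o-fine-tuned', 5_000, 5_000):.4f}")  # 0.0938
```

With those rates, the balanced 50/50 row in the workload table below reproduces exactly.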
Cost comparison for 10K-token workloads
Estimated cost per request (USD) for identical 10,000-token workloads across different input/output distributions.
| Scenario | Claude Haiku 3.5 | GPT-4o (fine-tuned) | GPT-4o (fine-tuned) cached |
|---|---|---|---|
| Balanced conversation (50% input · 50% output) | $0.0240 | $0.0938 | $0.0844 |
| Input-heavy workflow (80% input · 20% output) | $0.0144 | $0.0600 | $0.0450 |
| Generation-heavy (30% input · 70% output) | $0.0304 | $0.116 | $0.111 |
| Cached system prompt (90% cached input · 10% fresh output) | $0.0112 | $0.0488 | $0.0319 |
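As a sanity check, the last row can be reproduced with the same arithmetic: 9,000 cached input tokens plus 1,000 output tokens. This is a sketch under the assumption stated above that Claude Haiku 3.5, with no published cached rate, bills those tokens at its standard input price.

```python
# Cached system prompt scenario: 9,000 cached input tokens + 1,000 output tokens.
# Prices (USD per million tokens) are taken from the tables above.
cached_in, out = 9_000, 1_000

# Claude Haiku 3.5: no cached rate published, so input is billed at $0.80/M.
haiku = (cached_in * 0.80 + out * 4.00) / 1_000_000

# GPT-4o (fine-tuned): $3.75/M without caching, $1.875/M with cached input.
gpt4o_plain = (cached_in * 3.75 + out * 15.00) / 1_000_000
gpt4o_cached = (cached_in * 1.875 + out * 15.00) / 1_000_000

print(f"{haiku:.4f} {gpt4o_plain:.4f} {gpt4o_cached:.4f}")  # 0.0112 0.0488 0.0319
```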
Frequently asked questions
Which model is cheaper per million input tokens?
Claude Haiku 3.5 costs $0.80 per million input tokens versus $3.75 for GPT-4o (fine-tuned).
How do output prices compare?
Claude Haiku 3.5 charges $4.00 per million output tokens, while GPT-4o (fine-tuned) costs $15.00 per million.
Which model supports a larger context window?
Claude Haiku 3.5 offers a 200,000-token (200K) context window versus 128,000 tokens (128K) for GPT-4o (fine-tuned).