Last verified 2025-09-22 (both models)

GPT-5 Chat vs o4-mini — Pricing & Capability Comparison

GPT-5 Chat charges $1.25 per million input tokens and $10.00 per million output tokens. o4-mini comes in at $1.10 per million input tokens and $4.40 per million output tokens. Both models offer a 200K-token context window.
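
To see how these per-million rates translate into per-request spend, here is a minimal sketch of the arithmetic. The rates are the ones listed on this page; the dictionary keys, helper name, and example token counts are illustrative, not an SDK call.

```python
# Per-request cost arithmetic from per-1M-token rates (USD). Illustrative only.
RATES = {
    "gpt-5-chat": {"input": 1.25, "output": 10.00, "cached_input": 0.125},
    "o4-mini": {"input": 1.10, "output": 4.40},
}

def request_cost(model: str, input_tokens: int, output_tokens: int, cached: bool = False) -> float:
    """Return the USD cost of one request; cached=True bills input at the cached rate when published."""
    rates = RATES[model]
    input_rate = rates["cached_input"] if cached and "cached_input" in rates else rates["input"]
    return (input_tokens * input_rate + output_tokens * rates["output"]) / 1_000_000

# Example: a 2,000-token prompt that produces a 500-token reply.
print(f"GPT-5 Chat: ${request_cost('gpt-5-chat', 2000, 500):.4f}")  # $0.0075
print(f"o4-mini:    ${request_cost('o4-mini', 2000, 500):.4f}")     # $0.0044
```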

| Metric | GPT-5 Chat | o4-mini | Advantage |
| --- | --- | --- | --- |
| Input price (per 1M tokens) | $1.25 | $1.10 | o4-mini |
| Output price (per 1M tokens) | $10.00 | $4.40 | o4-mini |
| Context window | 200,000 tokens | 200,000 tokens | Tie |
| Cached input price (per 1M tokens) | $0.125 | Not published | GPT-5 Chat |
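
One point the table raises: GPT-5 Chat's cached-input rate ($0.125) is one tenth of its standard input rate ($1.25), a 90% discount on any prompt prefix served from the cache. A small sketch of that saving, assuming a hypothetical 8,000-token reused system prompt:

```python
# Saving from GPT-5 Chat's cached-input rate on a reused prompt prefix (rates from this page).
STANDARD_INPUT = 1.25 / 1_000_000  # USD per standard input token
CACHED_INPUT = 0.125 / 1_000_000   # USD per cached input token

prompt_tokens = 8_000  # hypothetical reused system prompt

full_price = prompt_tokens * STANDARD_INPUT  # $0.0100
cached_price = prompt_tokens * CACHED_INPUT  # $0.0010

print(f"Discount: {1 - CACHED_INPUT / STANDARD_INPUT:.0%}")    # 90%
print(f"Saved per request: ${full_price - cached_price:.4f}")  # $0.0090
```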

Cost comparison for 10K-token workloads

Side-by-side pricing for identical workloads (10,000 total tokens per request) across different distributions.

| Scenario | GPT-5 Chat | o4-mini | GPT-5 Chat (cached input) |
| --- | --- | --- | --- |
| Balanced conversation (50% input · 50% output) | $0.0563 | $0.0275 | $0.0506 |
| Input-heavy workflow (80% input · 20% output) | $0.0300 | $0.0176 | $0.0210 |
| Generation heavy (30% input · 70% output) | $0.0738 | $0.0341 | $0.0704 |
| Cached system prompt (90% cached input · 10% fresh output) | $0.0212 | $0.0143 | $0.0111 |
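
Each figure above follows directly from the per-1M rates. The sketch below recomputes them, assuming 10,000 total tokens per request and, for the "GPT-5 Chat (cached input)" column, that every input token is billed at the cached rate; the final displayed digit can differ by one from the table depending on rounding.

```python
# Recompute the 10K-token workload scenarios from the per-1M rates listed on this page.
TOTAL_TOKENS = 10_000

def cost(input_tokens: int, output_tokens: int, input_rate: float, output_rate: float) -> float:
    """USD cost of one request given per-1M-token rates."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Scenario name -> (input share, output share) of the 10,000 tokens.
scenarios = {
    "Balanced conversation": (0.5, 0.5),
    "Input-heavy workflow": (0.8, 0.2),
    "Generation heavy": (0.3, 0.7),
    "Cached system prompt": (0.9, 0.1),
}

for name, (in_share, out_share) in scenarios.items():
    i, o = round(TOTAL_TOKENS * in_share), round(TOTAL_TOKENS * out_share)
    gpt5 = cost(i, o, 1.25, 10.00)          # GPT-5 Chat, standard input rate
    o4_mini = cost(i, o, 1.10, 4.40)        # o4-mini
    gpt5_cached = cost(i, o, 0.125, 10.00)  # GPT-5 Chat with input billed at the cached rate
    print(f"{name}: GPT-5 Chat ${gpt5:.4f} | o4-mini ${o4_mini:.4f} | cached ${gpt5_cached:.4f}")
```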

Frequently asked questions

Which model is cheaper per million input tokens?

o4-mini costs $1.10 per million input tokens versus $1.25 for GPT-5 Chat.

How do output prices compare?

o4-mini charges $4.40 per million output tokens, while GPT-5 Chat costs $10.00 per million.

Which model supports a larger context window?

Neither model has an advantage here: both GPT-5 Chat and o4-mini offer a 200,000-token (200K) context window.

Related resources