Last verified 2025-09-22

GPT-4.1 mini vs o4-mini — Pricing & Capability Comparison

GPT-4.1 mini charges $0.40 per million input tokens and $1.60 per million output tokens; o4-mini charges $1.10 and $4.40. Their context windows are 128K and 200K tokens, respectively.
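Per-request cost follows directly from these per-1M-token rates. A minimal sketch (the `PRICES` dict and `request_cost` helper are illustrative names, not an official API):

```python
# Tokens are billed per million, so divide the token-weighted sum by 1e6.
# Rates are the published per-1M-token prices quoted above.
PRICES = {
    "gpt-4.1-mini": {"input": 0.40, "output": 1.60},
    "o4-mini": {"input": 1.10, "output": 4.40},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request for the given model."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
```

For example, a request with 5,000 input and 5,000 output tokens costs $0.0100 on GPT-4.1 mini versus $0.0275 on o4-mini.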

Input price (per 1M): GPT-4.1 mini $0.40 · o4-mini $1.10 (GPT-4.1 mini leads)

Output price (per 1M): GPT-4.1 mini $1.60 · o4-mini $4.40 (GPT-4.1 mini leads)

Context window: GPT-4.1 mini 128,000 tokens · o4-mini 200,000 tokens (o4-mini leads)

Cached input: not published for either model

Cost comparison for 10K-token workloads

Side-by-side pricing for identical workloads (10,000 total tokens per request) across different input/output distributions. Because cached-input rates are not published, the cached-prompt scenario is priced at each model's standard input rate.

Scenario                                                GPT-4.1 mini   o4-mini
Balanced conversation (50% input · 50% output)          $0.0100        $0.0275
Input-heavy workflow (80% input · 20% output)           $0.0064        $0.0176
Generation heavy (30% input · 70% output)               $0.0124        $0.0341
Cached system prompt (90% cached input · 10% output)    $0.0052        $0.0143
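The scenario figures above can be reproduced with a few lines of arithmetic. This sketch assumes cached input is billed at the standard input rate (since cached rates are not published); the names `PRICES`, `SCENARIOS`, and `scenario_cost` are illustrative:

```python
# Per-1M-token (input, output) rates from this comparison.
PRICES = {"gpt-4.1-mini": (0.40, 1.60), "o4-mini": (1.10, 4.40)}

# (input share, output share) of a 10,000-token request.
SCENARIOS = {
    "Balanced conversation": (0.5, 0.5),
    "Input-heavy workflow": (0.8, 0.2),
    "Generation heavy": (0.3, 0.7),
    "Cached system prompt": (0.9, 0.1),  # cached input priced at standard rate
}

def scenario_cost(model: str, in_share: float, out_share: float,
                  total_tokens: int = 10_000) -> float:
    """Dollar cost of one request split into input/output shares."""
    in_rate, out_rate = PRICES[model]
    return (total_tokens * in_share * in_rate
            + total_tokens * out_share * out_rate) / 1_000_000

for name, (i, o) in SCENARIOS.items():
    print(name, {m: round(scenario_cost(m, i, o), 4) for m in PRICES})
```

Running the loop prints each row of the table, e.g. $0.0124 vs $0.0341 for the generation-heavy split.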

Frequently asked questions

Which model is cheaper per million input tokens?

GPT-4.1 mini costs $0.40 per million input tokens versus $1.10 for o4-mini.

How do output prices compare?

GPT-4.1 mini charges $1.60 per million output tokens, while o4-mini costs $4.40 per million.

Which model supports a larger context window?

o4-mini offers 200,000 tokens (200K) versus 128K for GPT-4.1 mini.
