Pricing last verified 2025-09-22 for both models

GPT-4.1 nano vs o4-mini — Pricing & Capability Comparison

GPT-4.1 nano charges $0.10 per million input tokens and $0.40 per million output tokens; o4-mini comes in at $1.10 and $4.40. Context windows are 128K tokens for GPT-4.1 nano and 200K tokens for o4-mini.
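
A request's cost is each token count divided by one million, multiplied by the published per-million rate. A minimal Python sketch using the rates quoted above (the dictionary layout and helper name are illustrative, not part of any official SDK):

    # Per-million-token rates quoted on this page (USD).
    RATES = {
        "gpt-4.1-nano": {"input": 0.10, "output": 0.40},
        "o4-mini": {"input": 1.10, "output": 4.40},
    }

    def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
        """USD cost of one request: tokens / 1,000,000 * per-million rate."""
        rate = RATES[model]
        return (input_tokens / 1_000_000) * rate["input"] + \
               (output_tokens / 1_000_000) * rate["output"]

    # Example: 5,000 input tokens and 5,000 output tokens per request.
    print(f"{request_cost('gpt-4.1-nano', 5_000, 5_000):.4f}")  # 0.0025
    print(f"{request_cost('o4-mini', 5_000, 5_000):.4f}")       # 0.0275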

Metric                   GPT-4.1 nano       o4-mini            Verdict
Input price (per 1M)     $0.10              $1.10              GPT-4.1 nano leads
Output price (per 1M)    $0.40              $4.40              GPT-4.1 nano leads
Context window           128,000 tokens     200,000 tokens     o4-mini leads
Cached input             Not published      Not published      No published data

Cost comparison for 10K-token workloads

Side-by-side pricing for identical workloads (10,000 total tokens per request) across different input/output splits. Because neither model publishes a cached-input rate, the cached scenario below is computed at the standard input price.

Scenario                                                      GPT-4.1 nano    o4-mini
Balanced conversation (50% input · 50% output)                $0.0025         $0.0275
Input-heavy workflow (80% input · 20% output)                 $0.0016         $0.0176
Generation heavy (30% input · 70% output)                     $0.0031         $0.0341
Cached system prompt (90% cached input · 10% fresh output)    $0.0013         $0.0143
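
For readers who want to check or extend these figures, the table can be reproduced with a short Python loop. As noted above, the cached-input scenario is billed at the standard input rate because no cached rate is published; the model labels and data layout are illustrative only.

    # (input rate, output rate) in USD per million tokens, from this page.
    RATES = {"gpt-4.1-nano": (0.10, 0.40), "o4-mini": (1.10, 4.40)}

    # Each scenario splits a 10,000-token request into (input, output) tokens.
    SCENARIOS = {
        "Balanced conversation": (5_000, 5_000),
        "Input-heavy workflow": (8_000, 2_000),
        "Generation heavy": (3_000, 7_000),
        "Cached system prompt": (9_000, 1_000),  # cached input billed at the standard input rate
    }

    for name, (tokens_in, tokens_out) in SCENARIOS.items():
        for model, (rate_in, rate_out) in RATES.items():
            cost = tokens_in / 1e6 * rate_in + tokens_out / 1e6 * rate_out
            print(f"{name:<22} {model:<13} ${cost:.4f}")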

Frequently asked questions

Which model is cheaper per million input tokens?

GPT-4.1 nano costs $0.10 per million input tokens versus $1.10 for o4-mini, an 11x difference.

How do output prices compare?

GPT-4.1 nano charges $0.40 per million output tokens, while o4-mini costs $4.40 per million.

Which model supports a larger context window?

o4-mini offers a 200,000-token (200K) context window versus 128K for GPT-4.1 nano.

Related resources