Pricing last verified 2025-09-22 for both models.

Gemini 2.0 Flash vs GPT-5 nano — Pricing & Capability Comparison

Gemini 2.0 Flash charges $0.10 per million input tokens and $0.40 per million output tokens. GPT-5 nano comes in at $0.05 per million input tokens and $0.40 per million output tokens. Context windows are 1M and 200K tokens, respectively.
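
For quick budgeting against these rates, per-request cost is just a weighted sum over input and output token counts. A minimal Python sketch follows, using the rates quoted on this page; the token counts in the example are purely illustrative.

```python
# Published per-million-token rates (USD) quoted on this page.
RATES = {
    "Gemini 2.0 Flash": {"input": 0.10, "output": 0.40},
    "GPT-5 nano": {"input": 0.05, "output": 0.40},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request: (tokens / 1M) * rate, summed over input and output."""
    rate = RATES[model]
    return (input_tokens / 1e6) * rate["input"] + (output_tokens / 1e6) * rate["output"]

# Illustrative request: 2,000 input tokens and 500 output tokens.
for model in RATES:
    print(f"{model}: ${request_cost(model, 2_000, 500):.4f}")
# Gemini 2.0 Flash: ~$0.0004 · GPT-5 nano: ~$0.0003
```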

Input price (per 1M)
Gemini 2.0 Flash: $0.10 · GPT-5 nano: $0.05 (GPT-5 nano leads here)

Output price (per 1M)
Gemini 2.0 Flash: $0.40 · GPT-5 nano: $0.40 (tie)

Context window
Gemini 2.0 Flash: 1,000,000 tokens · GPT-5 nano: 200,000 tokens (Gemini 2.0 Flash leads here)

Cached input
Gemini 2.0 Flash: not published · GPT-5 nano: not published (no published data)

Cost comparison for 10K-token workloads

Side-by-side pricing for identical workloads (10,000 total tokens per request) across different input/output distributions; the sketch after the table shows how these figures are derived from the per-token rates.

Balanced conversation (50% input · 50% output): Gemini 2.0 Flash $0.0025 · GPT-5 nano $0.0023
Input-heavy workflow (80% input · 20% output): Gemini 2.0 Flash $0.0016 · GPT-5 nano $0.0012
Generation heavy (30% input · 70% output): Gemini 2.0 Flash $0.0031 · GPT-5 nano $0.0030
Cached system prompt (90% cached input · 10% fresh output): Gemini 2.0 Flash $0.0013 · GPT-5 nano $0.0009

Since neither model publishes a cached-input rate here, the cached scenario is priced at the standard input rate.
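
The table figures can be reproduced from the per-million-token rates listed above. The sketch below bills cached input at the standard input rate, as noted, and rounds to four decimal places (half up) to match the table.

```python
from decimal import Decimal, ROUND_HALF_UP

# Per-million-token rates (USD) listed on this page.
RATES = {
    "Gemini 2.0 Flash": {"input": Decimal("0.10"), "output": Decimal("0.40")},
    "GPT-5 nano": {"input": Decimal("0.05"), "output": Decimal("0.40")},
}

# Each scenario splits a 10,000-token request between input and output.
# Cached input is priced at the standard input rate (no published cache discount).
SCENARIOS = {
    "Balanced conversation (50/50)": (Decimal("0.5"), Decimal("0.5")),
    "Input-heavy workflow (80/20)": (Decimal("0.8"), Decimal("0.2")),
    "Generation heavy (30/70)": (Decimal("0.3"), Decimal("0.7")),
    "Cached system prompt (90/10)": (Decimal("0.9"), Decimal("0.1")),
}

TOTAL = Decimal(10_000)
MILLION = Decimal(1_000_000)
CENT4 = Decimal("0.0001")  # table rounds to four decimal places

for name, (in_share, out_share) in SCENARIOS.items():
    cells = []
    for model, rate in RATES.items():
        cost = ((TOTAL * in_share / MILLION) * rate["input"]
                + (TOTAL * out_share / MILLION) * rate["output"])
        cells.append(f"{model} ${cost.quantize(CENT4, rounding=ROUND_HALF_UP)}")
    print(f"{name}: " + " · ".join(cells))
```

Because output pricing is identical, the gap between the two models in every scenario is driven entirely by the input share of the request.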

Frequently asked questions

Which model is cheaper per million input tokens?

GPT-5 nano costs $0.05 per million input tokens versus $0.10 for Gemini 2.0 Flash.

How do output prices compare?

Both models charge $0.40 per million output tokens, so output pricing is a tie.

Which model supports a larger context window?

Gemini 2.0 Flash offers a 1,000,000-token (1M) context window versus 200,000 tokens (200K) for GPT-5 nano.
