Last verified 2025-09-22 (Gemini 2.0 Flash-Lite) · 2025-09-22 (GPT-5 nano)

Gemini 2.0 Flash-Lite vs GPT-5 nano — Pricing & Capability Comparison

Gemini 2.0 Flash-Lite charges $0.075 per million input tokens and $0.30 per million output tokens. GPT-5 nano comes in at $0.05 input / $0.40 output. Context windows are 1M tokens for Gemini 2.0 Flash-Lite versus 200K for GPT-5 nano.

| Metric | Gemini 2.0 Flash-Lite | GPT-5 nano | Advantage |
| --- | --- | --- | --- |
| Input price (per 1M tokens) | $0.075 | $0.05 | GPT-5 nano |
| Output price (per 1M tokens) | $0.30 | $0.40 | Gemini 2.0 Flash-Lite |
| Context window | 1,000,000 tokens | 200,000 tokens | Gemini 2.0 Flash-Lite |
| Cached input (per 1M tokens) | Not published | Not published | No published data |
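Which model is cheaper in practice depends on the input/output mix, since GPT-5 nano wins on input price and Gemini 2.0 Flash-Lite wins on output price. The sketch below works out the blended cost per million total tokens for a few mixes; it is a minimal illustration using the list prices above, and the `blended_cost_per_million` helper is ours, not part of either provider's SDK.

```python
# Per-million-token list prices quoted above (USD).
GEMINI_2_0_FLASH_LITE = {"input": 0.075, "output": 0.30}
GPT_5_NANO = {"input": 0.05, "output": 0.40}

def blended_cost_per_million(prices: dict, input_share: float) -> float:
    """Cost in USD of 1M total tokens when `input_share` of them are input."""
    output_share = 1.0 - input_share
    return prices["input"] * input_share + prices["output"] * output_share

for share in (0.3, 0.5, 0.8):
    gemini = blended_cost_per_million(GEMINI_2_0_FLASH_LITE, share)
    nano = blended_cost_per_million(GPT_5_NANO, share)
    print(f"{share:.0%} input: Gemini ${gemini:.4f}/M vs GPT-5 nano ${nano:.4f}/M")
```

On these list prices the crossover sits at roughly 80% input / 20% output (about four input tokens per output token): mixes heavier in input than that favor GPT-5 nano, while anything more output-heavy favors Gemini 2.0 Flash-Lite.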

Cost comparison for 10K-token workloads

Side-by-side pricing for identical workloads (10,000 total tokens per request) across different distributions.

| Scenario | Token mix | Gemini 2.0 Flash-Lite | GPT-5 nano |
| --- | --- | --- | --- |
| Balanced conversation | 50% input · 50% output | $0.0019 | $0.0023 |
| Input-heavy workflow | 80% input · 20% output | $0.0012 | $0.0012 |
| Generation heavy | 30% input · 70% output | $0.0023 | $0.0030 |
| Cached system prompt | 90% cached input · 10% fresh output | $0.0010 | $0.0009 |
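The per-request figures above follow directly from the per-million rates. A minimal sketch of that arithmetic, assuming the cached-prompt row is billed at the standard input rate (neither vendor publishes a cached-input price here, so that row is an estimate):

```python
# Per-million-token list prices from the table above (USD).
PRICES = {
    "Gemini 2.0 Flash-Lite": {"input": 0.075, "output": 0.30},
    "GPT-5 nano": {"input": 0.05, "output": 0.40},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of a single request from its token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# 10,000 total tokens per request, split as in the scenarios above.
SCENARIOS = {
    "Balanced conversation": (5_000, 5_000),
    "Input-heavy workflow": (8_000, 2_000),
    "Generation heavy": (3_000, 7_000),
    # No cached-input rate is published for either model, so the 90% cached
    # prompt is priced here at the ordinary input rate (an assumption).
    "Cached system prompt": (9_000, 1_000),
}

for name, (inp, out) in SCENARIOS.items():
    row = ", ".join(f"{m} ${request_cost(m, inp, out):.4f}" for m in PRICES)
    print(f"{name}: {row}")
```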

Frequently asked questions

Which model is cheaper per million input tokens?

GPT-5 nano costs $0.05 per million input tokens versus $0.075 for Gemini 2.0 Flash-Lite.

How do output prices compare?

Gemini 2.0 Flash-Lite charges $0.30 per million output tokens, while GPT-5 nano costs $0.40 per million.

Which model supports a larger context window?

Gemini 2.0 Flash-Lite offers a 1,000,000-token (1M) context window versus 200,000 tokens (200K) for GPT-5 nano.
