Last verified 2025-09-22 (both models)
Gemini 2.0 Flash vs GPT-4.1 nano — Pricing & Capability Comparison
Gemini 2.0 Flash and GPT-4.1 nano are priced identically: $0.10 per million input tokens and $0.40 per million output tokens. The differentiator is context window size: 1,000,000 tokens for Gemini 2.0 Flash versus 128,000 for GPT-4.1 nano.
Metric | Gemini 2.0 Flash | GPT-4.1 nano | Verdict
---|---|---|---
Input price (per 1M) | $0.10 | $0.10 | Tie
Output price (per 1M) | $0.40 | $0.40 | Tie
Context window | 1,000,000 tokens | 128,000 tokens | Gemini 2.0 Flash leads
Cached input | Not published | Not published | No published data
Cost comparison for 10K-token workloads
Side-by-side pricing for identical workloads (10,000 total tokens per request) across different distributions.
Scenario | Gemini 2.0 Flash | GPT-4.1 nano
---|---|---
Balanced conversation (50% input · 50% output) | $0.0025 | $0.0025
Input-heavy workflow (80% input · 20% output) | $0.0016 | $0.0016
Generation heavy (30% input · 70% output) | $0.0031 | $0.0031
Cached system prompt (90% cached input · 10% fresh output) | $0.0013 | $0.0013
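The per-request figures above follow directly from the per-1M-token rates. The sketch below reproduces the table; the scenario splits mirror the rows above, and it assumes cached input is billed at the standard input rate, since neither vendor's cache pricing is published here.

```python
# Reproduce the 10K-token workload table from the published per-1M-token rates.
# Both models share the same rates on this page ($0.10 input, $0.40 output per 1M tokens).
# Assumption: cached input is billed at the standard input rate, because no
# cache pricing is published for either model.

PRICES = {
    "Gemini 2.0 Flash": {"input": 0.10, "output": 0.40},  # $ per 1M tokens
    "GPT-4.1 nano":     {"input": 0.10, "output": 0.40},
}

SCENARIOS = {
    # name: (input tokens, output tokens) out of a 10,000-token request
    "Balanced conversation (50/50)": (5_000, 5_000),
    "Input-heavy workflow (80/20)":  (8_000, 2_000),
    "Generation heavy (30/70)":      (3_000, 7_000),
    "Cached system prompt (90/10)":  (9_000, 1_000),
}

def request_cost(input_tokens: int, output_tokens: int, rates: dict) -> float:
    """Dollar cost of a single request at per-1M-token rates."""
    return input_tokens / 1e6 * rates["input"] + output_tokens / 1e6 * rates["output"]

for scenario, (tok_in, tok_out) in SCENARIOS.items():
    costs = "  ".join(
        f"{model}: ${request_cost(tok_in, tok_out, rates):.4f}"
        for model, rates in PRICES.items()
    )
    print(f"{scenario:32s} {costs}")
# Output matches the table: $0.0025, $0.0016, $0.0031, $0.0013 per request for both models.
```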
Frequently asked questions
Which model is cheaper per million input tokens?
Neither: both models cost $0.10 per million input tokens.
How do output prices compare?
Both models charge $0.40 per million output tokens; there is no difference in output pricing.
Which model supports a larger context window?
Gemini 2.0 Flash offers a 1,000,000-token (1M) context window versus 128,000 tokens (128K) for GPT-4.1 nano.
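To make the context-window gap concrete, the sketch below estimates how many requests a long input would need under each window. The document size and output headroom are illustrative assumptions; real token counts depend on the tokenizer.

```python
# Rough illustration of the practical effect of a 1M vs 128K context window.
# The 400K-token document and 8K output headroom are hypothetical values.
CONTEXT_WINDOWS = {"Gemini 2.0 Flash": 1_000_000, "GPT-4.1 nano": 128_000}

document_tokens = 400_000      # hypothetical long document
reserved_for_output = 8_000    # headroom left for the model's response (assumption)

for model, window in CONTEXT_WINDOWS.items():
    usable = window - reserved_for_output
    chunks = -(-document_tokens // usable)  # ceiling division
    print(f"{model}: {chunks} request(s) of at most {usable:,} input tokens")
# Gemini 2.0 Flash fits the 400K-token document in a single request;
# GPT-4.1 nano needs it split into 4 chunks.
```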