Last verified 2025-09-22 for both models

Gemini 2.0 Flash-Lite vs GPT-4.1 mini — Pricing & Capability Comparison

Gemini 2.0 Flash-Lite charges $0.075 per million input tokens and $0.30 per million output tokens; GPT-4.1 mini comes in at $0.40 / $1.60. Both models offer context windows of roughly one million tokens (1,000,000 for Gemini 2.0 Flash-Lite, 1,047,576 for GPT-4.1 mini).
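For back-of-the-envelope budgeting, per-request cost is just each token count divided by one million and multiplied by the corresponding per-million rate. Below is a minimal Python sketch using the rates quoted on this page; the dictionary keys are informal labels rather than API model IDs, and the rates should be re-checked against the vendors' pricing pages before relying on them.

```python
# Per-request cost estimate from per-1M-token rates.
# Rates below are the figures quoted on this page; verify against the vendors' pricing pages.
RATES_PER_MILLION = {
    "Gemini 2.0 Flash-Lite": {"input": 0.075, "output": 0.30},
    "GPT-4.1 mini": {"input": 0.40, "output": 1.60},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request: token count / 1,000,000 * per-million rate, summed over input and output."""
    rates = RATES_PER_MILLION[model]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

# Example: 5,000 input tokens and 5,000 output tokens (the balanced scenario in the table below).
for model in RATES_PER_MILLION:
    print(f"{model}: ${request_cost(model, 5_000, 5_000):.4f}")
```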

Input price (per 1M tokens): Gemini 2.0 Flash-Lite $0.075 · GPT-4.1 mini $0.40. Gemini 2.0 Flash-Lite leads here.

Output price (per 1M tokens): Gemini 2.0 Flash-Lite $0.30 · GPT-4.1 mini $1.60. Gemini 2.0 Flash-Lite leads here.

Context window: Gemini 2.0 Flash-Lite 1,000,000 tokens · GPT-4.1 mini 1,047,576 tokens. Effectively a tie, with GPT-4.1 mini slightly larger.

Cached input: not listed in this comparison for either model.

Cost comparison for 10K-token workloads

Side-by-side pricing for identical workloads (10,000 total tokens per request) across different input/output splits. Because cached-input rates are not listed in this comparison, the cached scenario below is priced at each model's standard input rate.

Scenario | Gemini 2.0 Flash-Lite | GPT-4.1 mini
Balanced conversation (50% input · 50% output) | $0.0019 | $0.0100
Input-heavy workflow (80% input · 20% output) | $0.0012 | $0.0064
Generation heavy (30% input · 70% output) | $0.0023 | $0.0124
Cached system prompt (90% cached input · 10% fresh output) | $0.0010 | $0.0052
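The table values can be reproduced with the same per-million arithmetic. The sketch below is illustrative only and assumes, as the table does, that the cached input in the last scenario is billed at the standard input rate.

```python
# Reproduce the 10K-token scenario table (rates as quoted on this page).
RATES = {
    "Gemini 2.0 Flash-Lite": (0.075, 0.30),  # (input, output) in USD per 1M tokens
    "GPT-4.1 mini": (0.40, 1.60),
}
SCENARIOS = {
    # scenario name: (input share, output share) of the 10,000 tokens
    "Balanced conversation": (0.5, 0.5),
    "Input-heavy workflow": (0.8, 0.2),
    "Generation heavy": (0.3, 0.7),
    "Cached system prompt": (0.9, 0.1),  # cached input billed at the standard input rate here
}
TOTAL_TOKENS = 10_000

for name, (in_share, out_share) in SCENARIOS.items():
    cells = []
    for model, (in_rate, out_rate) in RATES.items():
        cost = (TOTAL_TOKENS * in_share * in_rate + TOTAL_TOKENS * out_share * out_rate) / 1_000_000
        cells.append(f"{model} ${cost:.4f}")
    print(f"{name}: " + " · ".join(cells))
```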

Frequently asked questions

Which model is cheaper per million input tokens?

Gemini 2.0 Flash-Lite costs $0.075 per million input tokens versus $0.40 for GPT-4.1 mini.

How do output prices compare?

Gemini 2.0 Flash-Lite charges $0.30 per million output tokens, while GPT-4.1 mini costs $1.60 per million.

Which model supports a larger context window?

GPT-4.1 mini's window is slightly larger at 1,047,576 tokens, versus 1,000,000 tokens for Gemini 2.0 Flash-Lite; in practice both handle roughly 1M tokens of context.
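For a rough sense of whether a long prompt fits either window, a common heuristic for English text is about four characters per token. The sketch below uses that assumption; actual counts depend on each model's tokenizer, so use the vendors' token-counting tools for anything precise.

```python
# Rough context-window fit check using a ~4-characters-per-token heuristic for English text.
CONTEXT_WINDOWS = {
    "Gemini 2.0 Flash-Lite": 1_000_000,
    "GPT-4.1 mini": 1_047_576,
}

def estimated_tokens(text: str) -> int:
    """Crude estimate: ~4 characters per token; real tokenizers will differ."""
    return max(1, len(text) // 4)

def fits(text: str, reserved_output_tokens: int = 8_000) -> dict[str, bool]:
    """True if the prompt plus a reserved output budget fits a model's context window."""
    needed = estimated_tokens(text) + reserved_output_tokens
    return {model: needed <= window for model, window in CONTEXT_WINDOWS.items()}

# Example: a ~1,000,000-character document estimates to ~250,000 tokens and fits both windows.
print(fits("word " * 200_000))
```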
