Prices last verified 2025-09-22 for both models
Gemini 2.5 Flash-Lite vs GPT-5 Chat — Pricing & Capability Comparison
Gemini 2.5 Flash-Lite charges $0.10 per million input tokens and $0.40 per million output tokens; GPT-5 Chat charges $1.25 and $10.00 for the same. Their context windows are 1M and 200K tokens, respectively.
TL;DR — Quick Comparison
- ✓ Gemini 2.5 Flash-Lite is cheaper overall: $0.50 per 1M tokens (input + output combined) vs $11.25 for GPT-5 Chat, saving $10.75 per 1M tokens
- ✓ Input pricing: Gemini 2.5 Flash-Lite $0.10/1M vs GPT-5 Chat $1.25/1M
- ✓ Output pricing: Gemini 2.5 Flash-Lite $0.40/1M vs GPT-5 Chat $10.00/1M
- ✓ Context window: Gemini 2.5 Flash-Lite offers more (1M vs 200K tokens)
- ✓ Use our calculator below to estimate costs for your specific usage pattern; the sketch right after this list shows the underlying arithmetic
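The headline totals are just sums of the published per-1M rates. Here is a minimal Python sketch of that arithmetic (rates hard-coded from this page; the dictionary keys are ours, not official model identifiers):

```python
# Published per-1M-token rates in USD, copied from the comparison on this page.
RATES = {
    "Gemini 2.5 Flash-Lite": {"input": 0.10, "output": 0.40},
    "GPT-5 Chat":            {"input": 1.25, "output": 10.00},
}

def combined_per_million(model: str) -> float:
    """Cost of 1M input tokens plus 1M output tokens for one model."""
    rate = RATES[model]
    return rate["input"] + rate["output"]

gemini = combined_per_million("Gemini 2.5 Flash-Lite")  # 0.50
gpt5 = combined_per_million("GPT-5 Chat")               # 11.25
print(f"Difference per 1M in+out tokens: ${gpt5 - gemini:.2f}")  # $10.75
```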
Key metrics side by side.
| Metric | Gemini 2.5 Flash-Lite | GPT-5 Chat | Advantage |
|---|---|---|---|
| Input price (per 1M) | $0.10 | $1.25 | Gemini 2.5 Flash-Lite |
| Output price (per 1M) | $0.40 | $10.00 | Gemini 2.5 Flash-Lite |
| Context window | 1,000,000 tokens | 200,000 tokens | Gemini 2.5 Flash-Lite |
| Cached input (per 1M) | Not published | $0.125 | GPT-5 Chat |
Cost comparison for 10K-token workloads
Side-by-side pricing for identical workloads (10,000 total tokens per request) across different input/output splits.
| Scenario | Gemini 2.5 Flash-Lite | GPT-5 Chat | GPT-5 Chat (cached input) |
|---|---|---|---|
| Balanced conversation (50% input · 50% output) | $0.0025 | $0.0563 | $0.0506 |
| Input-heavy workflow (80% input · 20% output) | $0.0016 | $0.0300 | $0.0210 |
| Generation-heavy (30% input · 70% output) | $0.0031 | $0.0738 | $0.0704 |
| Cached system prompt (90% cached input · 10% fresh output) | $0.0013 | $0.0212 | $0.0111 |

Gemini 2.5 Flash-Lite figures use its standard input rate throughout, since no cached-input price is published.
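Each cell follows from the same arithmetic: split the 10,000 tokens by the stated ratio and apply the per-1M rates. A rough Python sketch (rates hard-coded from the pricing above; this is not the interactive calculator):

```python
# Per-1M-token rates in USD; "cached" is the discounted input rate where published.
RATES = {
    "Gemini 2.5 Flash-Lite": {"input": 0.10, "output": 0.40},
    "GPT-5 Chat": {"input": 1.25, "output": 10.00, "cached": 0.125},
}

def request_cost(model: str, input_tokens: int, output_tokens: int,
                 cached_input: bool = False) -> float:
    """Cost in USD for a single request; uses the cached input rate if available."""
    r = RATES[model]
    in_rate = r.get("cached", r["input"]) if cached_input else r["input"]
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * r["output"]

# Balanced conversation: 5,000 input + 5,000 output tokens per request.
print(f"{request_cost('Gemini 2.5 Flash-Lite', 5_000, 5_000):.4f}")          # 0.0025
print(f"{request_cost('GPT-5 Chat', 5_000, 5_000):.4f}")                     # 0.0563
print(f"{request_cost('GPT-5 Chat', 5_000, 5_000, cached_input=True):.4f}")  # 0.0506
```

Swapping in the other token splits reproduces the remaining rows of the table.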
Frequently asked questions
Which is cheaper: Gemini 2.5 Flash-Lite or GPT-5 Chat?
Gemini 2.5 Flash-Lite is cheaper on both fronts: $0.10 vs $1.25 per 1M input tokens, and $0.40 vs $10.00 per 1M output tokens for GPT-5 Chat.
What is the cost per 1M tokens for Gemini 2.5 Flash-Lite?
Gemini 2.5 Flash-Lite pricing: $0.10 per 1M input tokens and $0.40 per 1M output tokens. Context window: 1,000,000 tokens.
What is the cost per 1M tokens for GPT-5 Chat?
GPT-5 Chat pricing: $1.25 per 1M input tokens and $10.00 per 1M output tokens. Context window: 200,000 tokens.
How much does it cost per 1K tokens?
Per 1K tokens: Gemini 2.5 Flash-Lite costs $0.0001 input / $0.0004 output, while GPT-5 Chat costs $0.00125 input / $0.01 output. These figures are useful for estimating small-scale usage costs.
Which model supports a larger context window?
Gemini 2.5 Flash-Lite offers 1,000,000 tokens (1M) versus 200K for GPT-5 Chat.
What is the estimated monthly cost for typical usage?
For a typical workload of 10M input + 2M output tokens per month: Gemini 2.5 Flash-Lite would cost approximately $1.80, while GPT-5 Chat would cost $32.50. Gemini 2.5 Flash-Lite is more economical for this usage pattern.
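That estimate is simply the per-1M rates scaled by monthly volume, for example:

```python
# 10M input + 2M output tokens per month at the published per-1M rates (USD).
gemini_monthly = 10 * 0.10 + 2 * 0.40   # 1.00 + 0.80 = 1.80
gpt5_monthly   = 10 * 1.25 + 2 * 10.00  # 12.50 + 20.00 = 32.50
print(f"Gemini 2.5 Flash-Lite: ${gemini_monthly:.2f}/month")
print(f"GPT-5 Chat: ${gpt5_monthly:.2f}/month")
```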
Do these models support prompt caching?
Gemini 2.5 Flash-Lite has no published cached-input pricing. GPT-5 Chat supports prompt caching at $0.125 per 1M cached input tokens, a discount of up to 90% on its standard $1.25 input rate.
Which model is best for my use case?
Choose Gemini 2.5 Flash-Lite for cost-sensitive applications with high input volume, or when you need its 1M-token context window for long documents and conversations. If your requests reuse large blocks of context, GPT-5 Chat's prompt caching can cut its input cost substantially. Use our token calculator to model your specific usage pattern.