Last verified 2026-05-15

Gemini 3.1 Flash-Lite Preview vs GPT-5.4 nano — Pricing & Capability Comparison

Gemini 3.1 Flash-Lite Preview charges $0.25 per million input tokens and $1.50 per million output tokens. GPT-5.4 nano comes in at $0.20 / $1.25. Their context windows are 1M and 400K tokens, respectively.

TL;DR — Quick Comparison

  • GPT-5.4 nano is cheaper overall: $1.45 per 1M tokens (in+out) vs $1.75 for Gemini 3.1 Flash-Lite Preview — saves $0.30 per 1M tokens
  • Input pricing: Gemini 3.1 Flash-Lite Preview $0.25/1M vs GPT-5.4 nano $0.20/1M
  • Output pricing: Gemini 3.1 Flash-Lite Preview $1.50/1M vs GPT-5.4 nano $1.25/1M
  • Context window: Gemini 3.1 Flash-Lite Preview offers more (1M vs 400K)
  • Use our calculator below to estimate costs for your specific usage pattern
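The arithmetic behind these bullets is simple enough to sketch in a few lines. The rates below are the published per-1M-token prices quoted in this comparison; the helper function itself is illustrative, not part of any official SDK, and the model-name keys are informal labels chosen here.

```python
# Per-request cost sketch using the list prices quoted on this page.
# USD per 1M tokens: (input rate, output rate)
RATES = {
    "gemini-3.1-flash-lite-preview": (0.25, 1.50),
    "gpt-5.4-nano": (0.20, 1.25),
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at list prices (no caching)."""
    in_rate, out_rate = RATES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: 10M input + 2M output tokens per month.
monthly_gemini = cost_usd("gemini-3.1-flash-lite-preview", 10_000_000, 2_000_000)
monthly_nano = cost_usd("gpt-5.4-nano", 10_000_000, 2_000_000)
```

Running this for the 10M-input / 2M-output workload gives $5.50 for Gemini 3.1 Flash-Lite Preview and $4.50 for GPT-5.4 nano, matching the monthly estimate later on this page.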

Head-to-head metrics

Metric                 Gemini 3.1 Flash-Lite Preview   GPT-5.4 nano       Leader
Input price (per 1M)   $0.25                           $0.20              GPT-5.4 nano
Output price (per 1M)  $1.50                           $1.25              GPT-5.4 nano
Context window         1,000,000 tokens                400,000 tokens     Gemini 3.1 Flash-Lite Preview
Cached input (per 1M)  $0.025                          $0.020             GPT-5.4 nano

Which one should you choose?

Skip the spreadsheet if you just need the practical takeaway. Use these rules when deciding between Gemini 3.1 Flash-Lite Preview and GPT-5.4 nano.

Choose GPT-5.4 nano if input tokens dominate your bill

GPT-5.4 nano has the lower input rate, which usually matters most for chat, RAG, classification, and long-prompt workflows where prompt volume stays much larger than generated output.

Choose GPT-5.4 nano if you generate long answers

GPT-5.4 nano is cheaper on output tokens, so it tends to win for report generation, coding assistance, reasoning traces, and any workflow where completions are long.

Choose Gemini 3.1 Flash-Lite Preview if context size is the blocker

Gemini 3.1 Flash-Lite Preview offers the larger published context window, which is more important than small pricing differences when you need to fit large files, long chats, or multi-document prompts into one request.

Cost comparison for 10K-token workloads

Side-by-side pricing for identical workloads (10,000 total tokens per request) across different input/output distributions. In the cached figures, all input tokens are billed at the cached-input rate while output is billed at the normal rate.

Balanced conversation (50% input · 50% output)
  Gemini 3.1 Flash-Lite Preview: $0.0087 · GPT-5.4 nano: $0.0073
  With cached input — Gemini 3.1 Flash-Lite Preview: $0.0076 · GPT-5.4 nano: $0.0064

Input-heavy workflow (80% input · 20% output)
  Gemini 3.1 Flash-Lite Preview: $0.0050 · GPT-5.4 nano: $0.0041
  With cached input — Gemini 3.1 Flash-Lite Preview: $0.0032 · GPT-5.4 nano: $0.0027

Generation heavy (30% input · 70% output)
  Gemini 3.1 Flash-Lite Preview: $0.0113 · GPT-5.4 nano: $0.0094
  With cached input — Gemini 3.1 Flash-Lite Preview: $0.0106 · GPT-5.4 nano: $0.0088

Cached system prompt (90% cached input · 10% fresh output)
  Gemini 3.1 Flash-Lite Preview: $0.0037 · GPT-5.4 nano: $0.0030
  With cached input — Gemini 3.1 Flash-Lite Preview: $0.0017 · GPT-5.4 nano: $0.0014
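Every figure in these scenarios can be reproduced from the published rates. The sketch below does exactly that; the `cached` flag bills all input tokens at the cached-input rate, which is how the cached figures above were derived. The dictionary layout is an assumption of this example, not an API.

```python
# Reproduce the 10K-token scenario costs from the rates quoted on this page.
PRICES = {
    "Gemini 3.1 Flash-Lite Preview": {"in": 0.25, "out": 1.50, "cached_in": 0.025},
    "GPT-5.4 nano": {"in": 0.20, "out": 1.25, "cached_in": 0.020},
}

def scenario_cost(model: str, input_tokens: int, output_tokens: int,
                  cached: bool = False) -> float:
    """Cost of one request in USD; cached=True bills all input at the cached rate."""
    p = PRICES[model]
    in_rate = p["cached_in"] if cached else p["in"]
    return (input_tokens * in_rate + output_tokens * p["out"]) / 1_000_000

# Balanced conversation: 5,000 input + 5,000 output tokens.
balanced_gemini = scenario_cost("Gemini 3.1 Flash-Lite Preview", 5_000, 5_000)
# Cached system prompt: 9,000 cached input + 1,000 output tokens.
cached_nano = scenario_cost("GPT-5.4 nano", 9_000, 1_000, cached=True)
```

Note that the table rounds to four decimal places, so `balanced_gemini` comes out as $0.00875 where the table shows $0.0087.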

Frequently asked questions

Which is cheaper: Gemini 3.1 Flash-Lite Preview or GPT-5.4 nano?

GPT-5.4 nano is cheaper for input tokens at $0.20 per 1M compared to $0.25 for Gemini 3.1 Flash-Lite Preview. For output, GPT-5.4 nano costs $1.25 per 1M tokens versus $1.50 for Gemini 3.1 Flash-Lite Preview.

What is the cost per 1M tokens for Gemini 3.1 Flash-Lite Preview?

Gemini 3.1 Flash-Lite Preview pricing: $0.25 per 1M input tokens and $1.50 per 1M output tokens. Context window: 1,000,000 tokens.

What is the cost per 1M tokens for GPT-5.4 nano?

GPT-5.4 nano pricing: $0.20 per 1M input tokens and $1.25 per 1M output tokens. Context window: 400,000 tokens.

How much does it cost per 1K tokens?

Per 1K tokens: Gemini 3.1 Flash-Lite Preview costs $0.00025 input / $0.0015 output. GPT-5.4 nano costs $0.0002 input / $0.00125 output. This is useful for calculating small-scale usage costs.

Which model supports a larger context window?

Gemini 3.1 Flash-Lite Preview offers 1,000,000 tokens (1M) versus 400K for GPT-5.4 nano.

What is the estimated monthly cost for typical usage?

For a typical workload of 10M input + 2M output tokens per month: Gemini 3.1 Flash-Lite Preview would cost approximately $5.50, while GPT-5.4 nano would cost $4.50. GPT-5.4 nano is more economical for this usage pattern.

Do these models support prompt caching?

Gemini 3.1 Flash-Lite Preview supports prompt caching at $0.025 per 1M cached tokens, reducing costs for repeated context by up to 90%. GPT-5.4 nano supports caching at $0.020 per 1M tokens, saving up to 90%.
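In practice your savings depend on what fraction of input tokens are actually served from cache. A blended input rate captures this; the 60% hit rate in the example is a hypothetical illustration, not a published figure.

```python
# Effective input rate under prompt caching (sketch).
def blended_input_rate(full_rate: float, cached_rate: float, hit_rate: float) -> float:
    """Average USD per 1M input tokens when hit_rate of them are cache hits."""
    return hit_rate * cached_rate + (1 - hit_rate) * full_rate

# GPT-5.4 nano with a hypothetical 60% of input tokens served from cache:
nano_blended = blended_input_rate(0.20, 0.020, 0.60)  # ≈ $0.092 per 1M input tokens
```

At a 100% hit rate the blended rate equals the cached rate, recovering the "up to 90%" savings figure; at 0% it equals the full input rate.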

Which model is best for my use case?

Choose GPT-5.4 nano for cost-sensitive applications with high input volume. Choose Gemini 3.1 Flash-Lite Preview if you need 1M context for long documents or conversations. Consider prompt caching if you have repeated context. Use our token calculator to model your specific usage pattern.
