Verified 2025-09-22 · sourced from OpenAI
GPT-4o Mini Pricing Calculator: Official OpenAI API Cost for 1K, 100K & 1M Tokens
Free GPT-4o mini pricing calculator for OpenAI API budgeting. Check the official current rates of $0.15 input / $0.60 output per 1M tokens, estimate 1K, 10K, 100K, and 1M token spend instantly, and confirm the 128K context window. Compare GPT-4o mini with GPT-5 mini, GPT-4.1 mini, GPT-4o, or o4-mini before you ship chatbots, support agents, and other high-volume production workloads.
Quick answer: GPT-4o mini pricing per 1M tokens is $0.15 input and $0.60 output. Context window: 128,000 tokens.
Best for searches like GPT-4o mini pricing, GPT-4o mini pricing calculator, official GPT-4o mini pricing, free GPT-4o mini pricing calculator, GPT-4o mini token calculator, OpenAI GPT-4o mini pricing, GPT-4o mini API pricing, GPT-4o mini API cost, GPT-4o mini cost per 1M tokens, GPT-4o mini cost per 100K tokens, GPT-4o mini cost per 1000 tokens, GPT-4o mini price per 1K tokens, GPT-4o mini input token price, GPT-4o mini output token price, GPT-4o mini 128K context window, GPT-4o mini 100K token cost, GPT-4o mini 1M token cost, GPT-4o mini cheap OpenAI model, no signup GPT-4o mini calculator, GPT-4o mini vs GPT-4.1 mini pricing, GPT-4o mini vs GPT-5 mini pricing.
Pick the route that matches what you searched for
Some visitors want a fast GPT-4o mini API cost estimate, others want a direct 100K or 1M token budget, and some are already comparing alternatives. These shortcuts remove the extra click.
- Estimate a single request or prompt budget right now.
- Jump straight to the most common budgeting checkpoint.
- Use this when you are sizing production traffic or a monthly plan.
- Open the closest head-to-head comparison instead of researching from scratch.
| Spec | Value |
|---|---|
| Context window | 128,000 tokens |
| Input price | $0.15 / 1M tokens |
| Output price | $0.60 / 1M tokens |
| Cached input | Not published |
Usage scenarios
Estimated standard costs across common workloads. Cached input pricing is not published for this model, so only standard rates are shown.
| Scenario | Tokens in | Tokens out | Total tokens | Standard cost |
|---|---|---|---|---|
| Quick chat reply (single user question with a short assistant answer) | 650 | 220 | 870 | $0.0002 |
| Coding assistant session (multi-turn pair programming exchange, ≈6 turns) | 2,600 | 1,400 | 4,000 | $0.0012 |
| Knowledge base response (retrieval-augmented answer referencing multiple passages) | 12,000 | 3,000 | 15,000 | $0.0036 |
| Near-max context run (large document processing approaching the 128K token limit) | 112,000 | 16,000 | 128,000 | $0.0264 |
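The scenario costs above all come from the same formula: tokens divided by one million, multiplied by the published rate for each direction. A minimal sketch (a hypothetical helper, not an official OpenAI client) that reproduces the table:

```python
# Published GPT-4o mini rates (USD per 1M tokens).
INPUT_RATE_PER_1M = 0.15
OUTPUT_RATE_PER_1M = 0.60

def request_cost(tokens_in: int, tokens_out: int) -> float:
    """Return the standard (non-cached) cost in USD for one request."""
    return (tokens_in * INPUT_RATE_PER_1M
            + tokens_out * OUTPUT_RATE_PER_1M) / 1_000_000

# Spot-check the scenario table above:
for tokens_in, tokens_out in [(650, 220), (2_600, 1_400),
                              (12_000, 3_000), (112_000, 16_000)]:
    total = tokens_in + tokens_out
    print(f"{total:>7} tokens -> ${request_cost(tokens_in, tokens_out):.4f}")
```

Because output tokens cost 4x input tokens, the input/output split matters as much as total volume when estimating spend.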
Daily & monthly budgeting
Translate usage into predictable operating expenses across popular deployment sizes. Rows assume roughly a 2:1 input:output token split and a 30-day month.
| Profile | Requests/day | Tokens/day | Daily cost | Monthly cost |
|---|---|---|---|---|
| Team pilot | 25 | 75,000 | $0.0225 | $0.675 |
| Product launch | 100 | 500,000 | $0.150 | $4.50 |
| Enterprise scale | 500 | 3,000,000 | $0.900 | $27.00 |
Pricing notes
- Cost-efficient multimodal tier ideal for real-time chat and lightweight agents.
Frequently asked questions
How much does GPT-4o mini cost per 1,000 tokens?
At the published rates of $0.15 per million input tokens and $0.60 per million output tokens, a typical 1,000 token request (≈70% input, 30% output) costs about $0.0003.
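The 1,000-token estimate above is a single multiplication at the assumed 70/30 input/output split:

```python
# Worked estimate for a 1,000-token request at ~70% input / 30% output,
# using the published GPT-4o mini rates ($0.15 in / $0.60 out per 1M tokens).
cost = (700 * 0.15 + 300 * 0.60) / 1_000_000
print(f"${cost:.6f} per 1,000-token request")
```

At this price, a million such requests would total roughly $285, which is the arithmetic behind GPT-4o mini's popularity for high-volume workloads.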
What is the context window for GPT-4o mini?
GPT-4o mini supports up to 128,000 tokens (128K), allowing large prompts and retrieval-augmented payloads in a single call.
How fresh is the GPT-4o mini pricing data?
Pricing is sourced from https://platform.openai.com/docs/pricing and was last verified on 2025-09-22. The calculator updates automatically when models.json is refreshed.
What is GPT-4o mini pricing per 1M tokens?
GPT-4o mini is currently priced at $0.15 per million input tokens and $0.60 per million output tokens. This makes it one of the most cost-effective models for production applications requiring GPT-4-level capabilities at scale.
What does 100K or 1M GPT-4o mini tokens cost?
At the current GPT-4o mini API rate, 100K input tokens cost about $0.0150 and 100K output tokens cost about $0.0600. At 1M tokens, the full published rate is $0.15 input and $0.60 output, which is why teams often use GPT-4o mini for high-volume support bots, extraction, and lightweight agent workflows.
Does GPT-4o mini have a 128K context window?
Yes. GPT-4o mini supports a 128,000-token (128K) context window, which is enough for long prompts, multi-turn chat memory, and many retrieval-augmented generation workflows without moving to a more expensive large-context model.
Should I use GPT-4o mini or o4-mini for my application?
GPT-4o mini excels at general-purpose tasks with a 128K context window at lower costs, ideal for chatbots, content generation, and standard API integrations. o4-mini offers a larger 200K context window optimized for reasoning tasks but at a higher price point. Choose based on your context requirements and budget.