GPT-5 Chat Pricing & Token Costs (2025)
Per 1M tokens: input $1.25 · output $10.00 · cached input $0.125. Context window: 200,000 tokens. Source and verification details below.
TL;DR — Pricing Quick Summary
- ✓ Input pricing: $1.25 per 1M tokens ($0.00125 per 1K)
- ✓ Output pricing: $10.00 per 1M tokens ($0.0100 per 1K)
- ✓ Prompt caching: $0.125 per 1M tokens — save 90% on repeated context
- ✓ Context window: 200,000 tokens
- ✓ Typical monthly cost: $32.50 for 10M input + 2M output tokens
- ✓ Daily cost example: $0.5625 for 100K tokens (50K in, 50K out); the arithmetic is worked through in the sketch below
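To make the arithmetic above reproducible, here is a minimal Python sketch that applies the per-1M rates; the function name and structure are illustrative, not part of any OpenAI SDK.

```python
# Published per-1M-token rates (USD).
INPUT_PER_M = 1.25    # uncached input
CACHED_PER_M = 0.125  # cached input
OUTPUT_PER_M = 10.00  # output

def estimate_cost_usd(input_tokens, output_tokens, cached_tokens=0):
    """Estimate cost in USD; `cached_tokens` is the cached share of the input."""
    uncached = input_tokens - cached_tokens
    return (uncached * INPUT_PER_M
            + cached_tokens * CACHED_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# Per-1K rates quoted above:
print(INPUT_PER_M / 1000, OUTPUT_PER_M / 1000)    # 0.00125  0.01
# Monthly example: 10M input + 2M output
print(estimate_cost_usd(10_000_000, 2_000_000))   # 12.50 + 20.00 = 32.5
# Daily example: 50K input + 50K output
print(estimate_cost_usd(50_000, 50_000))          # 0.0625 + 0.50 = 0.5625
```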
Key metrics
- Context window
- 200,000 tokens
- Input price
- $1.25 / 1M tokens
- Output price
- $10.00 / 1M tokens
- Cached input
- $0.125 / 1M tokens
Official link: https://platform.openai.com/docs/pricing
Last verified: 2025-09-22
- Cached input pricing reflects OpenAI's official cache billing tier.
Multi-currency (per 1M tokens)
| Currency | Input | Cached | Output |
|---|---|---|---|
| USD | $1.25 | $0.125 | $10.00 |
| CNY | ¥8.94 | ¥0.89 | ¥71.50 |
| EUR | €1.15 | €0.12 | €9.20 |
| JPY | ¥183 | ¥18 | ¥1,460 |
* Live search cost is calculated as (number of sources ÷ 1,000) × price and currently applies to xAI Grok only; it does not affect GPT-5 Chat pricing.
Frequently Asked Questions
What is the cost per 1M tokens for OpenAI GPT-5 Chat?
OpenAI GPT-5 Chat costs $1.25 per 1M input tokens and $10.00 per 1M output tokens, with cached input at $0.125 per 1M tokens.
How much does it cost per 1K tokens?
Per 1K tokens: $0.00125 for input and $0.0100 for output. This is useful for calculating costs for smaller workloads or individual API calls.
What is the estimated monthly cost for typical usage?
For a typical workload of 10M input + 2M output tokens per month, OpenAI GPT-5 Chat would cost approximately $32.50. Daily usage of 100K tokens (50K in, 50K out) costs about $0.5625.
Does OpenAI GPT-5 Chat offer a free tier?
Check OpenAI's official documentation for free tier availability. Some providers offer free credits for new users or limited free usage. Visit https://platform.openai.com/docs/pricing for current free tier details.
How does prompt caching work to reduce costs?
With prompt caching enabled, input pricing drops to $0.125 per 1M tokens for repeated context (a 90% discount), while output remains $10.00 per 1M tokens. Caching is ideal for repeated prompts or system messages.
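To see where the 90% figure comes from, the sketch below compares the input-side cost with and without caching for a hypothetical workload (a 20,000-token system prompt reused across 1,000 requests); it simplifies by assuming every repeat is a cache hit.

```python
# Hypothetical workload: a 20,000-token system prompt reused across 1,000 requests.
# Simplification: every repeat is treated as a cache hit (the first miss is ignored).
PROMPT_TOKENS = 20_000
REQUESTS = 1_000

INPUT_PER_M = 1.25    # USD per 1M uncached input tokens
CACHED_PER_M = 0.125  # USD per 1M cached input tokens

repeated_tokens = PROMPT_TOKENS * REQUESTS                   # 20M tokens of repeated context
without_cache = repeated_tokens * INPUT_PER_M / 1_000_000    # $25.00
with_cache = repeated_tokens * CACHED_PER_M / 1_000_000      # $2.50

print(f"without caching: ${without_cache:.2f}")
print(f"with caching:    ${with_cache:.2f} ({1 - with_cache / without_cache:.0%} saved)")
```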
What is the context window size for GPT-5 Chat?
GPT-5 Chat supports a 200,000 token context window. This determines the maximum combined length of your input prompt and output response.
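A minimal budgeting check, assuming the window is shared between prompt and completion as described above (the token counts are placeholders):

```python
CONTEXT_WINDOW = 200_000  # maximum combined prompt + completion tokens

def fits_in_context(prompt_tokens, max_output_tokens):
    """True if the prompt leaves enough room for the requested completion."""
    return prompt_tokens + max_output_tokens <= CONTEXT_WINDOW

print(fits_in_context(180_000, 15_000))  # True:  195,000 tokens fit
print(fits_in_context(190_000, 20_000))  # False: 210,000 tokens exceed the window
```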
How frequently is this pricing information updated?
All prices reference official OpenAI documentation (https://platform.openai.com/docs/pricing), last verified on 2025-09-22. We update pricing as soon as providers announce changes.
How can I calculate exact costs for my use case?
Use our free token calculator to estimate costs based on your specific usage pattern. The calculator supports all major models and shows costs in multiple currencies. You can also compare costs across different models to find the most economical option.
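If you prefer to script the estimate yourself, here is a sketch that applies the per-1M rates from the multi-currency table above; the rates are that table's snapshot values, not live exchange rates.

```python
# Per-1M-token rates from the multi-currency table above (snapshot values, not live FX).
RATES = {
    "USD": {"input": 1.25, "cached": 0.125, "output": 10.00},
    "CNY": {"input": 8.94, "cached": 0.89, "output": 71.50},
    "EUR": {"input": 1.15, "cached": 0.12, "output": 9.20},
    "JPY": {"input": 183, "cached": 18, "output": 1460},
}

def workload_cost(currency, input_tokens, output_tokens, cached_tokens=0):
    """Estimate the cost of one workload in the chosen currency."""
    r = RATES[currency]
    uncached = input_tokens - cached_tokens
    return (uncached * r["input"]
            + cached_tokens * r["cached"]
            + output_tokens * r["output"]) / 1_000_000

# The 10M-input / 2M-output monthly example, in every listed currency:
for code in RATES:
    print(code, round(workload_cost(code, 10_000_000, 2_000_000), 2))
```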