Verified 2025-12-12 · sourced from OpenAI
GPT-5.2 Pro Token Calculator & Cost Guide
Estimate what OpenAI GPT-5.2 Pro API usage will cost in dollars before you send a single request. Standard pricing is $21.00 per million input tokens and $168.00 per million output tokens, with a 400K-token context window.
- Context window: 400,000 tokens
- Input price: $21.00 / 1M tokens
- Output price: $168.00 / 1M tokens
- Cached input: $2.10 / 1M tokens
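For back-of-the-envelope estimates without the calculator, the same arithmetic fits in a few lines. Below is a minimal Python sketch using the rates above; `request_cost` is an illustrative helper, not an OpenAI SDK call, and the cached case assumes every input token is served from the prompt cache.

```python
# Per-request cost from the published GPT-5.2 Pro rates ($ per 1M tokens).
INPUT_PER_M = 21.00
CACHED_INPUT_PER_M = 2.10
OUTPUT_PER_M = 168.00

def request_cost(input_tokens: int, output_tokens: int, cached: bool = False) -> float:
    """Estimate the dollar cost of a single API call."""
    input_rate = CACHED_INPUT_PER_M if cached else INPUT_PER_M
    return (input_tokens * input_rate + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: 650-token prompt, 220-token reply (the "Quick chat reply" scenario below).
print(f"${request_cost(650, 220):.4f}")               # $0.0506
print(f"${request_cost(650, 220, cached=True):.4f}")  # $0.0383
```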
Usage scenarios
Compare standard and cached pricing (where available) across common workloads.
| Scenario | Tokens in | Tokens out | Total tokens | Standard cost | Cached cost |
|---|---|---|---|---|---|
| Quick chat reply: single user question with a short assistant answer | 650 | 220 | 870 | $0.0506 | $0.0383 |
| Coding assistant session: multi-turn pair programming exchange (≈6 turns) | 2,600 | 1,400 | 4,000 | $0.290 | $0.241 |
| Knowledge base response: retrieval-augmented answer referencing multiple passages | 12,000 | 3,000 | 15,000 | $0.756 | $0.529 |
| Near-max context run: large document processing approaching the 400K token limit | 352,000 | 48,000 | 400,000 | $15.46 | $8.80 |
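Each row above can be reproduced from the base rates. A short sketch follows, assuming the cached column means all input tokens hit the prompt cache (output tokens are never discounted):

```python
# Recompute the "Usage scenarios" rows from the base rates ($ per 1M tokens).
IN_RATE, CACHED_IN_RATE, OUT_RATE = 21.00, 2.10, 168.00

scenarios = [
    ("Quick chat reply",         650,     220),
    ("Coding assistant session", 2_600,   1_400),
    ("Knowledge base response",  12_000,  3_000),
    ("Near-max context run",     352_000, 48_000),
]
for name, tok_in, tok_out in scenarios:
    standard = (tok_in * IN_RATE + tok_out * OUT_RATE) / 1e6
    cached = (tok_in * CACHED_IN_RATE + tok_out * OUT_RATE) / 1e6
    print(f"{name:26s}  standard ${standard:>8.4f}  cached ${cached:>8.4f}")
```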
Daily & monthly budgeting
Translate usage into predictable operating expenses across popular deployment sizes.
| Profile | Requests/day | Tokens/day | Daily cost | Monthly cost (30 days) | Cached daily | Cached monthly |
|---|---|---|---|---|---|---|
| Team pilot | 25 | 75,000 | $5.25 | $157.50 | $4.31 | $129.15 |
| Product launch | 100 | 500,000 | $32.55 | $976.50 | $25.93 | $778.05 |
| Enterprise scale | 500 | 3,000,000 | $210.00 | $6,300.00 | $172.20 | $5,166.00 |
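To build a budget row for your own traffic, apply a blended rate to the daily token volume and multiply out to a month. This sketch assumes a 30-day month and a hypothetical 70/30 input/output split; adjust both to match your workload.

```python
# Daily/monthly budgeting sketch. The 70/30 input/output split and the
# 30-day month are assumptions -- tune both to your own traffic shape.
IN_RATE, CACHED_IN_RATE, OUT_RATE = 21.00, 2.10, 168.00  # $ per 1M tokens

def daily_cost(tokens_per_day: int, input_share: float = 0.70, cached: bool = False) -> float:
    tok_in = tokens_per_day * input_share
    tok_out = tokens_per_day * (1.0 - input_share)
    in_rate = CACHED_IN_RATE if cached else IN_RATE
    return (tok_in * in_rate + tok_out * OUT_RATE) / 1e6

daily = daily_cost(500_000)  # "Product launch" volume
print(f"daily ${daily:.2f}, monthly ${daily * 30:.2f}")  # daily $32.55, monthly $976.50
```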
Pricing notes
- GPT-5.2 Pro: maximum reasoning capability (Dec 2025)
- 400K context window, 128K max output
- 12x price premium over standard GPT-5.2 for enhanced reasoning
Frequently asked questions
How much does GPT-5.2 Pro cost per 1,000 tokens?
At the published rates of $21.00 per million input tokens and $168.00 per million output tokens, a typical 1,000-token request (≈70% input, 30% output) costs about $0.0651.
Does GPT-5.2 Pro offer cached input discounts?
Yes. Cached input tokens are billed at $2.10 per million instead of $21.00. With a fully cached prompt, the same 1,000-token call totals about $0.0519, roughly a 20% saving for that mix, which adds up quickly for chatbots and RAG systems that resend the same context.
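As a quick check, both FAQ figures follow directly from the published rates for a 1,000-token call split 70% input / 30% output:

```python
# Standard vs. cached cost for a 1,000-token call (700 input / 300 output tokens).
standard = (700 * 21.00 + 300 * 168.00) / 1e6
cached = (700 * 2.10 + 300 * 168.00) / 1e6
print(f"standard ${standard:.4f}, cached ${cached:.4f}, saving {1 - cached / standard:.0%}")
# standard $0.0651, cached $0.0519, saving 20%
```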
What is the context window for GPT-5.2 Pro?
GPT-5.2 Pro supports up to 400,000 tokens (400K), allowing large prompts and retrieval-augmented payloads in a single call.
How fresh is the GPT-5.2 Pro pricing data?
Pricing is sourced from https://platform.openai.com/docs/pricing and was last verified on 2025-12-12. The calculator updates automatically when models.json is refreshed.