Last verified 2025-09-22 (left) · 2026-03-06 (right)
Claude Opus 4.1 (Legacy) vs GPT-5.2 Pro — Pricing & Capability Comparison
Claude Opus 4.1 (Legacy) charges $15.00 per million input tokens and $75.00 per million output tokens; GPT-5.2 Pro comes in at $21.00 and $168.00. Their context windows are 200K and 400K tokens respectively.
TL;DR — Quick Comparison
- ✓ Claude Opus 4.1 (Legacy) is cheaper overall: $90.00 per 1M tokens (input + output) vs $189.00 for GPT-5.2 Pro, a saving of $99.00 per 1M tokens
- ✓ Input pricing: Claude Opus 4.1 (Legacy) $15.00/1M vs GPT-5.2 Pro $21.00/1M
- ✓ Output pricing: Claude Opus 4.1 (Legacy) $75.00/1M vs GPT-5.2 Pro $168.00/1M
- ✓ Context window: GPT-5.2 Pro offers twice as much (400K vs 200K)
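The "cheaper overall" figure is simply the sum of the published input and output rates. A minimal sketch, using the rates from this page (the dictionary layout and function name are illustrative):

```python
# Published per-1M-token rates from this comparison (USD).
RATES = {
    "Claude Opus 4.1 (Legacy)": {"input": 15.00, "output": 75.00},
    "GPT-5.2 Pro": {"input": 21.00, "output": 168.00},
}

def combined_per_million(model: str) -> float:
    """Sum of input and output rates per 1M tokens, as used in the TL;DR."""
    rate = RATES[model]
    return rate["input"] + rate["output"]

print(combined_per_million("Claude Opus 4.1 (Legacy)"))  # 90.0
print(combined_per_million("GPT-5.2 Pro"))               # 189.0
```

Note that this naive in+out total assumes an even token split; the workload table below shows how the gap shifts with different input/output mixes.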
| Metric | Claude Opus 4.1 (Legacy) | GPT-5.2 Pro | Advantage |
|---|---|---|---|
| Input price (per 1M) | $15.00 | $21.00 | Claude Opus 4.1 (Legacy) |
| Output price (per 1M) | $75.00 | $168.00 | Claude Opus 4.1 (Legacy) |
| Context window | 200,000 tokens | 400,000 tokens | GPT-5.2 Pro |
| Cached input | Not published | Not published | No published data |
Which one should you choose?
Skip the spreadsheet if you just need the practical takeaway. Use these rules when deciding between Claude Opus 4.1 (Legacy) and GPT-5.2 Pro.
Choose Claude Opus 4.1 (Legacy) if input tokens dominate your bill
Claude Opus 4.1 (Legacy) has the lower input rate, which usually matters most for chat, RAG, classification, and long-prompt workflows where prompt volume stays much larger than generated output.
Choose Claude Opus 4.1 (Legacy) if you generate long answers
Claude Opus 4.1 (Legacy) is cheaper on output tokens, so it tends to win for report generation, coding assistance, reasoning traces, and any workflow where completions are long.
Choose GPT-5.2 Pro if context size is the blocker
GPT-5.2 Pro offers the larger published context window, which is more important than small pricing differences when you need to fit large files, long chats, or multi-document prompts into one request.
Cost comparison for 10K-token workloads
Side-by-side pricing for identical workloads (10,000 total tokens per request) across different distributions.
| Scenario | Claude Opus 4.1 (Legacy) | GPT-5.2 Pro |
|---|---|---|
| Balanced conversation (50% input · 50% output) | $0.450 | $0.945 |
| Input-heavy workflow (80% input · 20% output) | $0.270 | $0.504 |
| Generation-heavy (30% input · 70% output) | $0.570 | $1.239 |
| Cached system prompt (90% cached input · 10% fresh output) | $0.210 | $0.357 |

Since neither model publishes cached-input pricing, the cached-prompt row is billed at standard input rates.
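Each row in the table is straightforward per-token arithmetic. A sketch that reproduces it, using the rates from this page (helper name is illustrative):

```python
RATES = {
    "Claude Opus 4.1 (Legacy)": (15.00, 75.00),   # (input, output) USD per 1M tokens
    "GPT-5.2 Pro": (21.00, 168.00),
}

def request_cost(model: str, total_tokens: int, input_share: float) -> float:
    """USD cost for one request split between input and output tokens."""
    in_rate, out_rate = RATES[model]
    input_tokens = total_tokens * input_share
    output_tokens = total_tokens - input_tokens
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Generation-heavy scenario: 30% input, 70% output over 10,000 tokens.
print(round(request_cost("Claude Opus 4.1 (Legacy)", 10_000, 0.30), 3))  # 0.57
print(round(request_cost("GPT-5.2 Pro", 10_000, 0.30), 3))               # 1.239
```

The same function covers every row; only `input_share` changes between scenarios.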
Frequently asked questions
Which is cheaper: Claude Opus 4.1 (Legacy) or GPT-5.2 Pro?
Claude Opus 4.1 (Legacy) is cheaper for input tokens at $15.00 per 1M tokens compared to $21.00. For output, Claude Opus 4.1 (Legacy) costs $75.00 per 1M tokens versus $168.00 for GPT-5.2 Pro.
What is the cost per 1M tokens for Claude Opus 4.1 (Legacy)?
Claude Opus 4.1 (Legacy) pricing: $15.00 per 1M input tokens and $75.00 per 1M output tokens. Context window: 200,000 tokens.
What is the cost per 1M tokens for GPT-5.2 Pro?
GPT-5.2 Pro pricing: $21.00 per 1M input tokens and $168.00 per 1M output tokens. Context window: 400,000 tokens.
How much does it cost per 1K tokens?
Per 1K tokens: Claude Opus 4.1 (Legacy) costs $0.0150 input / $0.0750 output. GPT-5.2 Pro costs $0.0210 input / $0.1680 output. This is useful for calculating small-scale usage costs.
Which model supports a larger context window?
GPT-5.2 Pro offers a 400K-token context window versus 200K for Claude Opus 4.1 (Legacy).
What is the estimated monthly cost for typical usage?
For a typical workload of 10M input + 2M output tokens per month: Claude Opus 4.1 (Legacy) would cost approximately $300.00, while GPT-5.2 Pro would cost $546.00. Claude Opus 4.1 (Legacy) is more economical for this usage pattern.
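The monthly figure follows directly from the per-1M rates. A quick check, using the rates from this page (function name is illustrative):

```python
def monthly_cost(input_millions: float, output_millions: float,
                 in_rate: float, out_rate: float) -> float:
    """Monthly USD cost given token volumes (in millions) and per-1M rates."""
    return input_millions * in_rate + output_millions * out_rate

print(monthly_cost(10, 2, 15.00, 75.00))   # Claude Opus 4.1 (Legacy): 300.0
print(monthly_cost(10, 2, 21.00, 168.00))  # GPT-5.2 Pro: 546.0
```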
Do these models support prompt caching?
Neither Claude Opus 4.1 (Legacy) nor GPT-5.2 Pro publishes cached-input pricing, so prompt-caching discounts cannot be compared here.
Which model is best for my use case?
Choose Claude Opus 4.1 (Legacy) for cost-sensitive applications with high input volume. Choose GPT-5.2 Pro if you need 400K of context for long documents or conversations. If your workload reuses a large system prompt, check each provider's current prompt-caching terms, since neither model publishes cached pricing here.