Claude Opus 4.1
An exceptional reasoning model for specialized, complex tasks that require advanced analytical capabilities.
Leaderboards
QUALITY
Average score combining domain-specific Autobench scores; higher is better.
Claude Opus 4.1: 4.24. The other models on the chart (unlabeled here) range from 3.49 to 4.51.
PRICE
USD cents per average answer; lower is better.
Claude Opus 4.1: 9.13¢. The other models on the chart (unlabeled here) range from 0.02¢ to 4.37¢.
LATENCY
Average latency in seconds; lower is better.
Claude Opus 4.1: 48.62s. The other models on the chart (unlabeled here) range from 5.29s to 119.17s.
Performance vs. Industry Average
Intelligence
Claude Opus 4.1 scores above the industry average for intelligence (4.1), with a score of 4.2.
Price
Claude Opus 4.1 is more expensive than the industry average ($0.91 per 1M tokens), with a price of $9.13 per 1M tokens.
Latency
Claude Opus 4.1 has a higher average latency than the industry average (45.24s), with an average latency of 48.62s.
P99 Latency
Claude Opus 4.1 has a lower P99 latency than the industry average (172.60s), taking 155.15s to receive the first token (TTFT) at P99.
Context Window
Claude Opus 4.1 has a smaller context window than the industry average (246k tokens), with a context window of 200k tokens.
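To illustrate how the comparisons above can be read, here is a minimal sketch that computes each metric's signed percentage deviation from the industry average. The figures are taken from this page; the `relative_diff` helper and the metric names are hypothetical, not part of any benchmark tooling.

```python
# Illustrative sketch: comparing one model's metrics against industry averages.
# Values come from the comparison above; the function and dict names are
# hypothetical, chosen only for this example.

def relative_diff(model_value: float, average_value: float) -> float:
    """Return the model's deviation from the average as a signed percentage."""
    return (model_value - average_value) / average_value * 100

metrics = {
    # metric: (Claude Opus 4.1 value, industry average, lower_is_better)
    "intelligence": (4.2, 4.1, False),
    "price_usd_per_1m_tokens": (9.13, 0.91, True),
    "avg_latency_s": (48.62, 45.24, True),
    "p99_ttft_s": (155.15, 172.60, True),
    "context_window_tokens": (200_000, 246_000, False),
}

for name, (model, avg, lower_better) in metrics.items():
    diff = relative_diff(model, avg)
    # A negative deviation is good exactly when lower values are better.
    better = (diff < 0) == lower_better
    print(f"{name}: {diff:+.1f}% vs. average ({'better' if better else 'worse'})")
```

Run against the numbers above, this flags P99 latency as the one cost metric where the model beats the average, while price shows by far the largest deviation (roughly 10x the average cost per answer).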