GPT-OSS-120B
GPT-OSS-120B is an open-weight mixture-of-experts (MoE) model from OpenAI, with 117B total parameters and 5.1B active parameters per token. It is optimized for single-GPU deployment and excels at reasoning and agentic tasks.
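For context on how the open weights are typically consumed, here is a minimal sketch that assumes the openai/gpt-oss-120b checkpoint published on Hugging Face and the transformers text-generation pipeline; the prompt and generation settings are illustrative, not an official deployment recipe.

```python
# Minimal sketch: querying GPT-OSS-120B via the Hugging Face transformers
# pipeline. Assumes the openai/gpt-oss-120b checkpoint and a GPU with enough
# memory for single-GPU deployment; all parameters are illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-120b",  # open-weight MoE checkpoint (117B total / 5.1B active)
    torch_dtype="auto",           # let transformers pick a suitable precision
    device_map="auto",            # place the model on the available GPU(s)
)

messages = [
    {"role": "user", "content": "Summarize mixture-of-experts routing in two sentences."},
]
output = generator(messages, max_new_tokens=256)
print(output[0]["generated_text"])
```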
Leaderboards
QUALITY
Average Score combining domain-specific Autobench scores; Higher is better
GPT-OSS-120B's entry is 4.18; leaderboard scores range from 3.47 to 4.48.
PRICE
USD cent per average answer; Lower is better
GPT-OSS-120B's entry is 0.11 cents per average answer; leaderboard entries range from 0.07 to 81.88 cents.
LATENCY
Average Latency in Seconds; Lower is better
GPT-OSS-120B's average latency is 75.48s; leaderboard entries range from 20.42s to 310.39s.
Performance vs. Industry Average
Intelligence
GPT-OSS-120B scores above the industry average on intelligence, with a score of 4.2 against an average of 4.1.
Price
GPT-OSS-120B is cheaper than the industry average, priced at $0.11 per 1M tokens against an average of $4.58 per 1M tokens.
Latency
GPT-OSS-120B has lower average latency than the industry average, at 75.48s against an average of 116.45s.
P99 Latency
GPT-OSS-120B has lower P99 latency than the industry average, taking 291.84s to receive the first token (TTFT) at P99 against an average of 339.37s.
Context Window
GPT-OSS-120B has a smaller context window than the industry average, at 131k tokens against an average of 351k tokens.
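The comparison above can be restated as relative gaps; the sketch below simply recomputes the percentage differences from the figures quoted in this section (the values are copied from the text, not pulled from a live API).

```python
# Sketch: relative gaps between GPT-OSS-120B and the industry average,
# using the figures quoted in the comparisons above.
metrics = {
    # name: (gpt_oss_120b, industry_average, higher_is_better)
    "intelligence (Autobench score)": (4.2, 4.1, True),
    "price (USD per 1M tokens)":      (0.11, 4.58, False),
    "average latency (s)":            (75.48, 116.45, False),
    "P99 TTFT (s)":                   (291.84, 339.37, False),
    "context window (k tokens)":      (131, 351, True),
}

for name, (model, avg, higher_is_better) in metrics.items():
    delta = (model - avg) / avg * 100           # percent difference vs. average
    better = (delta > 0) == higher_is_better    # does the sign favor the model?
    print(f"{name}: {model} vs. avg {avg} ({delta:+.1f}%, "
          f"{'better' if better else 'worse'} than average)")
```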