Enterprise Tech Leaderboard
Claim Your Achievement
Download your benchmark badge and showcase your performance across platforms, presentations, and client-facing materials.
| Vendor | Category | Rank | Metric | Value | Year |
|---|---|---|---|---|---|
| X | AI Gateways for OpenAI | 1st | Latency | 2.00 s | 2025 |
| SambaNova | AI Gateways for OpenAI | 2nd | Latency | 3.00 s | 2025 |
| Together.ai | AI Gateways for OpenAI | 3rd | Latency | 11.00 s | 2025 |
| llama-4-maverick | LMMs | 1st | Success Rate | 56 % | 2025 |
| claude-4-opus | LMMs | 2nd | Success Rate | 51 % | 2025 |
| qwen-2.5-72b-instruct | LMMs | 3rd | Success Rate | 45 % | 2025 |
| o1 | AI Code Models | 1st | Accuracy | 86 % | 2025 |
| o3-mini | AI Code Models | 2nd | Accuracy | 86 % | 2025 |
| claude-3.7-sonnet | AI Code Models | 3rd | Accuracy | 67 % | 2025 |
| Nimble | SERP API | 1st | Response Time | 6.16 ms | 2025 |
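The leaderboard can also be filtered programmatically by Category, Year, or Metric. As a minimal sketch only, the rows could be held as plain records and filtered with a small helper; the record fields and the `filter_rows` function below are illustrative assumptions, not an actual API of this site.

```python
# Illustrative sketch: a few leaderboard rows as plain records, plus a
# helper mirroring the page's Category / Year / Metric filters.
# The field names and filter_rows() are hypothetical, not a real API.
LEADERBOARD = [
    {"vendor": "SambaNova", "category": "AI Gateways for OpenAI",
     "rank": 2, "metric": "Latency", "value": 3.00, "unit": "s", "year": 2025},
    {"vendor": "llama-4-maverick", "category": "LMMs",
     "rank": 1, "metric": "Success Rate", "value": 56, "unit": "%", "year": 2025},
    {"vendor": "o1", "category": "AI Code Models",
     "rank": 1, "metric": "Accuracy", "value": 86, "unit": "%", "year": 2025},
]

def filter_rows(rows, category=None, year=None, metric=None):
    """Return the rows matching every filter that is not None."""
    return [
        r for r in rows
        if (category is None or r["category"] == category)
        and (year is None or r["year"] == year)
        and (metric is None or r["metric"] == metric)
    ]

# Example: top LMMs entry for 2025.
lmm_rows = filter_rows(LEADERBOARD, category="LMMs", year=2025)
```

Leaving a filter argument as `None` skips that criterion, which matches how the page's filter controls behave when left unset.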
BENCHMARKS
Stay Ahead of New Badge Releases
Enter your work email to receive notifications whenever new benchmarks are published or when your badges get updated.




