
Claude Opus 4.6

1M token context window (beta) with state-of-the-art coding and agentic capabilities. Highest score on Terminal-Bench 2.0 and Humanity's Last Exam. Improved planning, longer task persistence, and better debugging skills. Context compaction and adaptive thinking enable longer-running tasks. If you want to compare the best LLMs for your data, try Agentset.

Leaderboard Rank
#1
of 16
ELO Rating
1916
#1
Win Rate
83.0%
#1
Latency
11547ms
#9

Model Information

Provider
Anthropic
License
Proprietary
Input Price per 1M
$5.00
Output Price per 1M
$25.00
Context Window
1000K
Release Date
2026-02-05
Model Name
anthropic-claude-opus-4-6
Total Evaluations
1350

Performance Record

Wins: 1,121 (83.0%)
Losses: 98 (7.3%)
Ties: 131 (9.7%)
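The win/loss/tie record above feeds into the ELO rating shown on the leaderboard. The exact rating formula and K-factor used by this leaderboard aren't published, so the following is only a sketch of the classic Elo update, assuming a fixed K-factor of 32 (a score of 1 is a win, 0.5 a tie, 0 a loss):

```typescript
// Illustrative Elo update. The K-factor and update schedule are assumptions;
// the leaderboard's actual rating method is not documented here.
const K = 32;

// Expected score of a model rated `a` in a head-to-head against one rated `b`.
function expectedScore(a: number, b: number): number {
  return 1 / (1 + Math.pow(10, (b - a) / 400));
}

// Rating after one comparison: score is 1 (win), 0.5 (tie), or 0 (loss).
function updateRating(rating: number, opponent: number, score: number): number {
  return rating + K * (score - expectedScore(rating, opponent));
}
```

Under this scheme, a highly rated model gains only a few points for beating a lower-rated one, which is why high win rates against strong opponents are needed to reach ratings like 1916.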

LLMs Are Just One Piece of RAG

Agentset gives you a managed RAG pipeline with the top-ranked models and best practices baked in. No infrastructure to maintain, no LLM orchestration to manage.

Trusted by teams building production RAG applications

5M+
Documents
1,500+
Teams
99.9%
Uptime

Performance Overview

ELO ratings by dataset

Claude Opus 4.6's ELO performance varies across different benchmark datasets, showing its strengths in specific domains.

[Chart: Claude Opus 4.6 - ELO by Dataset]

Detailed Metrics

Dataset breakdown

Performance metrics across different benchmark datasets, including quality scores and latency statistics.

MSMARCO

ELO: 2036
Win Rate: 89.3%
Record: 402W-17L-31T

Quality Metrics

Correctness
4.93
Faithfulness
4.90
Grounding
4.90
Relevance
5.00
Completeness
4.97
Overall
4.94
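Across the dataset tables, the Overall score matches the unweighted mean of the five quality metrics, rounded to two decimals. This is an inference from the published numbers, not a documented formula; a minimal sketch:

```typescript
// Overall quality score as the unweighted mean of the per-metric scores,
// rounded to two decimals. (Assumption: the leaderboard may weight metrics
// differently, but the published numbers are consistent with a plain mean.)
function overallScore(metrics: number[]): number {
  const mean = metrics.reduce((sum, m) => sum + m, 0) / metrics.length;
  return Math.round(mean * 100) / 100;
}

// MSMARCO: correctness, faithfulness, grounding, relevance, completeness
overallScore([4.93, 4.90, 4.90, 5.00, 4.97]); // → 4.94
```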

Latency Distribution

Mean
7669ms
Min
3748ms
Max
12462ms

PG

ELO: 1915
Win Rate: 85.1%
Record: 383W-48L-19T

Quality Metrics

Correctness
5.00
Faithfulness
5.00
Grounding
5.00
Relevance
5.00
Completeness
5.00
Overall
5.00

Latency Distribution

Mean
16812ms
Min
11207ms
Max
26006ms

SciFact

ELO: 1797
Win Rate: 74.7%
Record: 336W-33L-81T

Quality Metrics

Correctness
4.83
Faithfulness
4.83
Grounding
4.83
Relevance
5.00
Completeness
4.73
Overall
4.84

Latency Distribution

Mean
10159ms
Min
4747ms
Max
19093ms

Build RAG in Minutes, Not Months

Agentset gives you a complete RAG API with top-ranked LLMs and smart retrieval built in. Upload your data, call the API, and get grounded answers from day one.

import { Agentset } from "agentset";

// Create a client and point it at an existing namespace.
const agentset = new Agentset();
const ns = agentset.namespace("ns_1234");

// Search the namespace's documents.
const results = await ns.search(
  "What is multi-head attention?"
);

// Print the text of each retrieved chunk.
for (const result of results) {
  console.log(result.text);
}

Compare Models

See how it stacks up

Compare Claude Opus 4.6 with other top LLMs to understand the differences in performance, accuracy, and latency.