GPT-5.1
A 400K-token context window with adaptive reasoning that allocates more processing to complex retrieved content. Extended prompt caching with longer retention improves performance for production RAG systems. If you want to compare the best LLMs for your data, try Agentset.
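As a rough illustration of what a 400K-token window means for retrieval, here is a minimal budget-check sketch. The 4-characters-per-token heuristic and the reserved-token default are assumptions for illustration, not official tokenizer or Agentset figures:

```typescript
// Sketch: check whether retrieved chunks fit inside GPT-5.1's 400K-token
// context window, reserving room for the system prompt and the answer.
const CONTEXT_WINDOW = 400_000;

// Crude heuristic (~4 chars per token); a real tokenizer gives exact counts.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function fitsInContext(
  chunks: string[],
  reservedForPromptAndOutput = 8_000
): boolean {
  const retrievedTokens = chunks.reduce(
    (sum, chunk) => sum + estimateTokens(chunk),
    0
  );
  return retrievedTokens + reservedForPromptAndOutput <= CONTEXT_WINDOW;
}
```

In practice a pipeline would use the model's real tokenizer, but the budget arithmetic is the same.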
Model Information
- Provider
- OpenAI
- License
- Proprietary
- Input Price per 1M
- $1.25
- Output Price per 1M
- $10.00
- Context Window
- 400K
- Release Date
- 2025-11-13
- Model Name
- gpt-5.1
- Total Evaluations
- 1350
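Using the prices listed above ($1.25 per 1M input tokens, $10.00 per 1M output tokens), the cost of a single request can be estimated with a small helper. The token counts in the example comment are illustrative, not benchmark figures:

```typescript
// Estimate GPT-5.1 request cost from the listed per-1M-token prices.
const INPUT_PRICE_PER_1M = 1.25; // USD per 1M input tokens
const OUTPUT_PRICE_PER_1M = 10.0; // USD per 1M output tokens

function requestCostUSD(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_PRICE_PER_1M +
    (outputTokens / 1_000_000) * OUTPUT_PRICE_PER_1M
  );
}

// e.g. a RAG call with 20K prompt/retrieval tokens and a 1K-token answer:
// 20_000/1M * $1.25 + 1_000/1M * $10.00 = $0.025 + $0.01 = $0.035
```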
Performance Record
LLMs Are Just One Piece of RAG
Agentset gives you a managed RAG pipeline with the top-ranked models and best practices baked in. No infrastructure to maintain, no LLM orchestration to manage.
Trusted by teams building production RAG applications
Performance Overview
ELO ratings by dataset
GPT-5.1's ELO performance varies across different benchmark datasets, showing its strengths in specific domains.
GPT-5.1 - ELO by Dataset
Detailed Metrics
Dataset breakdown
Performance metrics across different benchmark datasets, including accuracy and latency percentiles.
PG
Quality Metrics
- Correctness
- 5.00
- Faithfulness
- 5.00
- Grounding
- 5.00
- Relevance
- 5.00
- Completeness
- 4.77
- Overall
- 4.95
Latency Distribution
- Mean
- 29,008 ms
- Min
- 4,393 ms
- Max
- 43,887 ms
SciFact
Quality Metrics
- Correctness
- 5.00
- Faithfulness
- 5.00
- Grounding
- 5.00
- Relevance
- 4.97
- Completeness
- 4.97
- Overall
- 4.99
Latency Distribution
- Mean
- 10,454 ms
- Min
- 4,700 ms
- Max
- 21,205 ms
MSMARCO
Quality Metrics
- Correctness
- 4.97
- Faithfulness
- 4.97
- Grounding
- 4.97
- Relevance
- 5.00
- Completeness
- 4.93
- Overall
- 4.97
Latency Distribution
- Mean
- 9,111 ms
- Min
- 3,841 ms
- Max
- 34,731 ms
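The mean, min, and max latency figures above can be reproduced from raw per-request timings with a small helper. The sample array in the test is illustrative, not the actual benchmark data:

```typescript
// Summarize per-request latencies (in ms) into the mean/min/max
// figures shown in the tables above.
interface LatencySummary {
  mean: number;
  min: number;
  max: number;
}

function summarizeLatencies(samplesMs: number[]): LatencySummary {
  if (samplesMs.length === 0) throw new Error("no latency samples");
  const sum = samplesMs.reduce((a, b) => a + b, 0);
  return {
    mean: Math.round(sum / samplesMs.length),
    min: Math.min(...samplesMs),
    max: Math.max(...samplesMs),
  };
}
```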
Build RAG in Minutes, Not Months
Agentset gives you a complete RAG API with top-ranked LLMs and smart retrieval built in. Upload your data, call the API, and get grounded answers from day one.
import { Agentset } from "agentset";

const agentset = new Agentset();
const ns = agentset.namespace("ns_1234");

const results = await ns.search(
  "What is multi-head attention?"
);

for (const result of results) {
  console.log(result.text);
}

Compare Models
See how it stacks up
Compare GPT-5.1 with other top LLMs to understand the differences in performance, accuracy, and latency.