
Gemini 2.5 Pro

Gemini 2.5 Pro's 1M-token context window accommodates extensive retrieved document sets without chunking, and its multimodal input handles up to 3,000 images, 60 minutes of video, and 8.4 hours of audio alongside text. If you want to compare the best LLMs on your data, try Agentset.

Leaderboard Rank: #11 of 16
ELO Rating: 1416 (#11)
Win Rate: 35.4% (#10)
Latency: 15199ms (#11)

Model Information

Provider: Google
License: Proprietary
Input Price (per 1M tokens): $1.25
Output Price (per 1M tokens): $10.00
Context Window: 1049K tokens
Release Date: 2025-06-17
Model Name: gemini-2.5-pro
Total Evaluations: 1350

Performance Record

Wins: 478 (35.4%)
Losses: 691 (51.2%)
Ties: 181 (13.4%)
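The percentages above follow directly from the raw counts (478 + 691 + 181 = 1,350, matching the Total Evaluations figure). A minimal sketch of the calculation:

```typescript
// Derive win/loss/tie percentages from raw match counts,
// rounded to one decimal place as shown on the leaderboard.
function recordPercentages(wins: number, losses: number, ties: number) {
  const total = wins + losses + ties;
  const pct = (n: number) => Math.round((n / total) * 1000) / 10;
  return { total, winPct: pct(wins), lossPct: pct(losses), tiePct: pct(ties) };
}

console.log(recordPercentages(478, 691, 181));
// { total: 1350, winPct: 35.4, lossPct: 51.2, tiePct: 13.4 }
```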

LLMs Are Just One Piece of RAG

Agentset gives you a managed RAG pipeline with the top-ranked models and best practices baked in. No infrastructure to maintain, no LLM orchestration to manage.

Trusted by teams building production RAG applications

5M+ Documents
1,500+ Teams
99.9% Uptime

Performance Overview

ELO ratings by dataset

Gemini 2.5 Pro's ELO rating varies across benchmark datasets, revealing domain-specific strengths and weaknesses.
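To interpret ELO gaps between ratings on this page, the standard logistic formula maps a rating difference to an expected head-to-head score. A sketch (the 400-point scale is the conventional ELO parameterization, not something specific to this leaderboard):

```typescript
// Expected score of a contestant rated `ra` against one rated `rb`
// under the standard ELO logistic model (a tie counts as half a win).
function expectedScore(ra: number, rb: number): number {
  return 1 / (1 + Math.pow(10, (rb - ra) / 400));
}

// Illustration: a 234-point gap (e.g. 1501 on PG vs 1267 on SciFact)
// corresponds to roughly a 79% expected score for the higher rating.
console.log(expectedScore(1501, 1267).toFixed(2)); // "0.79"
```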


Detailed Metrics

Dataset breakdown

Performance metrics across different benchmark datasets, including quality scores and latency statistics.

PG

ELO: 1501 · Win Rate: 42.4% · Record: 191W-230L-29T

Quality Metrics

Correctness: 5.00
Faithfulness: 5.00
Grounding: 5.00
Relevance: 5.00
Completeness: 5.00
Overall: 5.00

Latency Distribution

Mean: 17834ms
Min: 11067ms
Max: 49308ms

MSMARCO

ELO: 1480 · Win Rate: 42.7% · Record: 192W-196L-62T

Quality Metrics

Correctness: 4.90
Faithfulness: 4.93
Grounding: 4.93
Relevance: 5.00
Completeness: 4.90
Overall: 4.93

Latency Distribution

Mean: 12449ms
Min: 7629ms
Max: 23066ms

SciFact

ELO: 1267 · Win Rate: 21.1% · Record: 95W-265L-90T

Quality Metrics

Correctness: 4.70
Faithfulness: 4.80
Grounding: 4.80
Relevance: 4.70
Completeness: 4.50
Overall: 4.70

Latency Distribution

Mean: 15314ms
Min: 8817ms
Max: 35365ms

Build RAG in Minutes, Not Months

Agentset gives you a complete RAG API with top-ranked LLMs and smart retrieval built in. Upload your data, call the API, and get grounded answers from day one.

import { Agentset } from "agentset";

// Initialize the Agentset client.
const agentset = new Agentset();

// Scope queries to the namespace holding your uploaded documents.
const ns = agentset.namespace("ns_1234");

// Retrieve the chunks most relevant to the question.
const results = await ns.search(
  "What is multi-head attention?"
);

for (const result of results) {
  console.log(result.text);
}
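The retrieved chunks can then be assembled into a grounded prompt for whichever LLM you choose. A minimal sketch using only the `result.text` field shown above (the prompt template and helper name are illustrative, not part of the Agentset API):

```typescript
// Build a grounded prompt from retrieved chunks (illustrative template).
function buildGroundedPrompt(question: string, chunks: string[]): string {
  // Number each chunk so the model can cite its sources as [n].
  const context = chunks
    .map((text, i) => `[${i + 1}] ${text}`)
    .join("\n\n");
  return (
    `Answer using only the context below. Cite sources as [n].\n\n` +
    `Context:\n${context}\n\nQuestion: ${question}`
  );
}
```

With the search call above, you would pass `results.map(r => r.text)` as the second argument.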

Compare Models

See how it stacks up

Compare Gemini 2.5 Pro with other top LLMs to understand the differences in performance, accuracy, and latency.