Voyage AI Rerank 2.5 Lite vs Cohere Rerank 3.5

Detailed comparison between Voyage AI Rerank 2.5 Lite and Cohere Rerank 3.5. See which reranker best meets your accuracy and performance needs.

Model Comparison

Voyage AI Rerank 2.5 Lite takes the lead.

Both Voyage AI Rerank 2.5 Lite and Cohere Rerank 3.5 are powerful reranking models designed to improve retrieval quality in RAG applications. However, their performance characteristics differ in important ways.

Why Voyage AI Rerank 2.5 Lite:

  • Voyage AI Rerank 2.5 Lite has a 107-point higher ELO rating (1510 vs 1403)
  • Voyage AI Rerank 2.5 Lite has a 12.6-point higher win rate (50.4% vs 37.8%)
  • The tradeoff: Cohere Rerank 3.5 is 115ms faster on average (492ms vs 607ms)
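For context on the first bullet: ELO ratings are fit from pairwise comparisons, so a rating gap translates directly into an expected win probability. A minimal sketch of the standard ELO expected-score formula, using the ratings reported below (the function name is illustrative):

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the standard ELO model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A 107-point gap (1510 vs 1403) implies roughly a 65% chance that
# Voyage AI Rerank 2.5 Lite wins a random head-to-head comparison.
p = elo_expected_score(1510, 1403)
print(f"{p:.3f}")
```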

Overview

Key metrics

| Metric | Voyage AI Rerank 2.5 Lite | Cohere Rerank 3.5 |
|---|---|---|
| ELO Rating (overall ranking quality) | 1510 | 1403 |
| Win Rate (head-to-head performance) | 50.4% | 37.8% |
| Accuracy, nDCG@10 (ranking quality metric) | 0.679 | 0.689 |
| Average Latency (response time) | 607ms | 492ms |

Visual Performance Analysis

[Performance charts: ELO Rating Comparison; Win/Loss/Tie Breakdown; Accuracy Across Datasets (nDCG@10); Latency Distribution (ms)]

Breakdown

How the models stack up

| Metric | Voyage AI Rerank 2.5 Lite | Cohere Rerank 3.5 | Description |
|---|---|---|---|
| **Overall Performance** | | | |
| ELO Rating | 1510 | 1403 | Overall ranking quality based on pairwise comparisons |
| Win Rate | 50.4% | 37.8% | Percentage of comparisons won against other models |
| **Pricing & Availability** | | | |
| Price per 1M tokens | $0.020 | $0.050 | Cost per million tokens processed |
| Release Date | 2025-08-11 | 2024-12-02 | Model release date |
| **Accuracy Metrics** | | | |
| Avg nDCG@10 | 0.679 | 0.689 | Normalized discounted cumulative gain at position 10 |
| **Performance Metrics** | | | |
| Avg Latency | 607ms | 492ms | Average response time across all datasets |
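Because both models bill per token, the price gap above scales linearly with workload size. A quick cost sketch using the listed prices (the workload numbers are hypothetical):

```python
def rerank_cost_usd(tokens: int, price_per_million_usd: float) -> float:
    """Cost of processing `tokens` at a given per-1M-token price."""
    return tokens / 1_000_000 * price_per_million_usd

# Hypothetical workload: 1,000 queries x 50 docs x 200 tokens = 10M tokens.
tokens = 1_000 * 50 * 200
voyage_cost = rerank_cost_usd(tokens, 0.020)  # $0.020 per 1M tokens
cohere_cost = rerank_cost_usd(tokens, 0.050)  # $0.050 per 1M tokens
print(voyage_cost, cohere_cost)
```

At these prices, the same 10M-token workload costs $0.20 on Voyage AI Rerank 2.5 Lite versus $0.50 on Cohere Rerank 3.5.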

Dataset Performance

By field

Comprehensive comparison of accuracy metrics (nDCG, Recall) and latency percentiles for each benchmark dataset.
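For reference, the accuracy metrics in these tables can be computed as follows. A minimal sketch with binary relevance labels (the helper names and toy ranking are illustrative, not part of the benchmark):

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked results."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """DCG normalized by the DCG of the ideal (sorted) ranking."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg else 0.0

def recall_at_k(relevances, k, total_relevant):
    """Fraction of all relevant documents that appear in the top k."""
    return sum(1 for rel in relevances[:k] if rel > 0) / total_relevant

# Toy ranking: 1 = relevant, 0 = not; 3 relevant docs exist in the corpus.
ranking = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
print(ndcg_at_k(ranking, 10), recall_at_k(ranking, 10, 3))
```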

FiQa

| Metric | Voyage AI Rerank 2.5 Lite | Cohere Rerank 3.5 | Description |
|---|---|---|---|
| **Accuracy Metrics** | | | |
| nDCG@5 | 0.111 | 0.121 | Ranking quality at top 5 results |
| nDCG@10 | 0.122 | 0.127 | Ranking quality at top 10 results |
| Recall@5 | 0.103 | 0.118 | % of relevant docs in top 5 |
| Recall@10 | 0.135 | 0.130 | % of relevant docs in top 10 |
| **Latency Metrics** | | | |
| Mean | 686ms | 589ms | Average response time |
| P50 | 611ms | 603ms | 50th percentile (median) |
| P90 | 829ms | 913ms | 90th percentile |
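The P50/P90 figures are percentiles of per-request latency; a minimal sketch of how such figures are derived from raw timings, using the nearest-rank method (the sample data is made up):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    # Nearest-rank method: the ceil(pct/100 * n)-th smallest value.
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [512, 590, 603, 611, 640, 702, 745, 829, 850, 913]
p50 = percentile(latencies_ms, 50)
p90 = percentile(latencies_ms, 90)
print(p50, p90)  # 640 850
```

P90 matters in practice because a reranker sits on the critical path of every query: tail latency, not the mean, determines worst-case user-facing response time.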

PG

Accuracy metrics were not reported for this dataset.

| Metric | Voyage AI Rerank 2.5 Lite | Cohere Rerank 3.5 | Description |
|---|---|---|---|
| **Latency Metrics** | | | |
| Mean | 637ms | 661ms | Average response time |
| P50 | 614ms | 613ms | 50th percentile (median) |
| P90 | 817ms | 792ms | 90th percentile |

Business Reports

Accuracy metrics were not reported for this dataset.

| Metric | Voyage AI Rerank 2.5 Lite | Cohere Rerank 3.5 | Description |
|---|---|---|---|
| **Latency Metrics** | | | |
| Mean | 580ms | 395ms | Average response time |
| P50 | 607ms | 324ms | 50th percentile (median) |
| P90 | 816ms | 570ms | 90th percentile |

MSMARCO

| Metric | Voyage AI Rerank 2.5 Lite | Cohere Rerank 3.5 | Description |
|---|---|---|---|
| **Accuracy Metrics** | | | |
| nDCG@5 | 0.981 | 0.990 | Ranking quality at top 5 results |
| nDCG@10 | 0.983 | 0.990 | Ranking quality at top 10 results |
| Recall@5 | 0.993 | 1.000 | % of relevant docs in top 5 |
| Recall@10 | 1.000 | 1.000 | % of relevant docs in top 10 |
| **Latency Metrics** | | | |
| Mean | 542ms | 317ms | Average response time |
| P50 | 611ms | 299ms | 50th percentile (median) |
| P90 | 635ms | 320ms | 90th percentile |

DBPedia

| Metric | Voyage AI Rerank 2.5 Lite | Cohere Rerank 3.5 | Description |
|---|---|---|---|
| **Accuracy Metrics** | | | |
| nDCG@5 | 0.692 | 0.691 | Ranking quality at top 5 results |
| nDCG@10 | 0.763 | 0.760 | Ranking quality at top 10 results |
| Recall@5 | 0.064 | 0.063 | % of relevant docs in top 5 |
| Recall@10 | 0.111 | 0.107 | % of relevant docs in top 10 |
| **Latency Metrics** | | | |
| Mean | 555ms | 355ms | Average response time |
| P50 | 605ms | 305ms | 50th percentile (median) |
| P90 | 664ms | 523ms | 90th percentile |

Explore More

Compare more rerankers

See how all reranking models stack up. Compare Cohere, Jina AI, Voyage, ZeRank, and more. View comprehensive benchmarks, compare performance metrics, and find the perfect reranker for your RAG application.