Contextual AI Rerank v2 Instruct vs Voyage AI Rerank 2.5 Lite
Detailed comparison between Contextual AI Rerank v2 Instruct and Voyage AI Rerank 2.5 Lite. See which reranker best meets your accuracy and performance needs.
Model Comparison
Two competitive rerankers, closely matched.
Both Contextual AI Rerank v2 Instruct and Voyage AI Rerank 2.5 Lite are powerful reranking models designed to improve retrieval quality in RAG applications. They show comparable performance across key metrics.
Key differences:
- Contextual AI Rerank v2 Instruct has a 40-point higher ELO rating (1550 vs 1510)
- Voyage AI Rerank 2.5 Lite is 2403ms faster on average (607ms vs 3010ms)
- Voyage AI Rerank 2.5 Lite has a higher win rate (50.4% vs 45.2%)
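To make the comparison concrete, here is a minimal sketch of the reranking step both models perform in a RAG pipeline, using Voyage AI's published Python SDK (the `voyageai` package). The model identifier `rerank-2.5-lite` and the toy query and documents are assumptions for illustration; Contextual AI exposes an analogous rerank API.

```python
# Minimal reranking sketch using the Voyage AI Python SDK (`pip install voyageai`).
# Assumptions: VOYAGE_API_KEY is set in the environment, and "rerank-2.5-lite"
# is the model identifier for Rerank 2.5 Lite.
import voyageai

vo = voyageai.Client()  # picks up VOYAGE_API_KEY automatically

query = "What drives reranker latency in production?"
candidates = [  # first-stage retrieval output (e.g., from a vector store)
    "Rerankers score each query-document pair with a cross-encoder.",
    "Embedding models encode queries and documents independently.",
    "Batch size and document length dominate reranking latency.",
]

# Score all candidates against the query and keep the top 2.
reranking = vo.rerank(query, candidates, model="rerank-2.5-lite", top_k=2)
for result in reranking.results:
    print(f"{result.relevance_score:.3f}  {result.document}")
```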
Overview
Key metrics

| Metric | Contextual AI Rerank v2 Instruct | Voyage AI Rerank 2.5 Lite |
|---|---|---|
| ELO Rating (overall ranking quality) | 1550 | 1510 |
| Win Rate (head-to-head performance) | 45.2% | 50.4% |
| Accuracy (nDCG@10) | 0.687 | 0.679 |
| Average Latency (response time) | 3010ms | 607ms |
Visual Performance Analysis
Charts (not reproduced here): ELO Rating Comparison, Win/Loss/Tie Breakdown, Accuracy Across Datasets (nDCG@10), and Latency Distribution (ms).
Breakdown
How the models stack up
| Metric | Contextual AI Rerank v2 Instruct | Voyage AI Rerank 2.5 Lite | Description |
|---|---|---|---|
| **Overall Performance** | | | |
| ELO Rating | 1550 | 1510 | Overall ranking quality based on pairwise comparisons |
| Win Rate | 45.2% | 50.4% | Percentage of comparisons won against other models |
| **Pricing & Availability** | | | |
| Price per 1M tokens | $0.050 | $0.020 | Cost per million tokens processed |
| Release Date | 2025-09-12 | 2025-08-11 | Model release date |
| **Accuracy Metrics** | | | |
| Avg nDCG@10 | 0.687 | 0.679 | Normalized discounted cumulative gain at position 10 |
| **Performance Metrics** | | | |
| Avg Latency | 3010ms | 607ms | Average response time across all datasets |
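The ELO figures above are built from pairwise comparisons. The benchmark's exact protocol isn't described on this page, but the standard Elo update such ratings rest on is straightforward; below is an illustrative sketch in which the K-factor, starting ratings, and outcome stream are all assumptions.

```python
# Illustrative Elo update from pairwise ranking comparisons.
# Not the benchmark's exact procedure; K-factor and outcomes are toy values.

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Return updated ratings. score_a is 1.0 (A wins), 0.5 (tie), 0.0 (A loses)."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    return r_a + k * (score_a - expected_a), r_b + k * (expected_a - score_a)

contextual, voyage = 1500.0, 1500.0  # common starting rating
for outcome in [1.0, 0.0, 0.5, 1.0, 1.0, 0.0]:  # 1.0 = Contextual preferred
    contextual, voyage = elo_update(contextual, voyage, outcome)
print(round(contextual), round(voyage))
```

Because Elo weights wins against strong opponents more heavily than raw win counts, a higher ELO can coexist with a lower overall win rate, which is consistent with the table above.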
Dataset Performance
By dataset
Comprehensive comparison of accuracy metrics (nDCG, Recall) and latency percentiles for each benchmark dataset.
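For reference, here is a simplified sketch of how these accuracy metrics are computed, assuming binary relevance judgments (graded relevance only changes the values in `rels`):

```python
# Reference implementations of nDCG@K and Recall@K, assuming binary relevance.
import math

def dcg_at_k(rels, k):
    """rels: relevance of each ranked doc, in rank order (1 = relevant)."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))

def ndcg_at_k(rels, k):
    """DCG normalized by the DCG of the ideal (best possible) ordering."""
    idcg = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / idcg if idcg > 0 else 0.0

def recall_at_k(rels, k, total_relevant):
    """Fraction of all relevant documents that appear in the top k."""
    return sum(1 for r in rels[:k] if r > 0) / total_relevant

# Toy run: 10 ranked docs, 3 relevant overall, 2 retrieved in the top 5.
ranked = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
print(ndcg_at_k(ranked, 10), recall_at_k(ranked, 5, total_relevant=3))
```

This also explains combinations like DBPedia's high nDCG@5 alongside a Recall@5 near 0.07: when a query has many relevant documents, only a small fraction can fit in the top 5 even when the ranking itself is excellent.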
FiQa
| Metric | Contextual AI Rerank v2 Instruct | Voyage AI Rerank 2.5 Lite | Description |
|---|---|---|---|
| **Accuracy Metrics** | | | |
| nDCG@5 | 0.119 | 0.111 | Ranking quality at top 5 results |
| nDCG@10 | 0.125 | 0.122 | Ranking quality at top 10 results |
| Recall@5 | 0.123 | 0.103 | Fraction of relevant docs retrieved in top 5 |
| Recall@10 | 0.135 | 0.135 | Fraction of relevant docs retrieved in top 10 |
| **Latency Metrics** | | | |
| Mean | 2913ms | 686ms | Average response time |
| P50 | 2863ms | 611ms | 50th percentile (median) |
| P90 | 3289ms | 829ms | 90th percentile |
PG
| Metric | Contextual AI Rerank v2 Instruct | Voyage AI Rerank 2.5 Lite | Description |
|---|---|---|---|
| **Latency Metrics** | | | |
| Mean | 3195ms | 637ms | Average response time |
| P50 | 2951ms | 614ms | 50th percentile (median) |
| P90 | 3781ms | 817ms | 90th percentile |

Accuracy metrics were not reported for this dataset.
Business Reports
| Metric | Contextual AI Rerank v2 Instruct | Voyage AI Rerank 2.5 Lite | Description |
|---|---|---|---|
| **Latency Metrics** | | | |
| Mean | 2883ms | 580ms | Average response time |
| P50 | 2686ms | 607ms | 50th percentile (median) |
| P90 | 3161ms | 816ms | 90th percentile |

Accuracy metrics were not reported for this dataset.
MSMARCO
| Metric | Contextual AI Rerank v2 Instruct | Voyage AI Rerank 2.5 Lite | Description |
|---|---|---|---|
| **Accuracy Metrics** | | | |
| nDCG@5 | 0.975 | 0.981 | Ranking quality at top 5 results |
| nDCG@10 | 0.975 | 0.983 | Ranking quality at top 10 results |
| Recall@5 | 1.000 | 0.993 | Fraction of relevant docs retrieved in top 5 |
| Recall@10 | 1.000 | 1.000 | Fraction of relevant docs retrieved in top 10 |
| **Latency Metrics** | | | |
| Mean | 2952ms | 542ms | Average response time |
| P50 | 2853ms | 611ms | 50th percentile (median) |
| P90 | 3398ms | 635ms | 90th percentile |
DBPedia
| Metric | Contextual AI Rerank v2 Instruct | Voyage AI Rerank 2.5 Lite | Description |
|---|---|---|---|
| **Accuracy Metrics** | | | |
| nDCG@5 | 0.734 | 0.692 | Ranking quality at top 5 results |
| nDCG@10 | 0.772 | 0.763 | Ranking quality at top 10 results |
| Recall@5 | 0.067 | 0.064 | Fraction of relevant docs retrieved in top 5 |
| Recall@10 | 0.108 | 0.111 | Fraction of relevant docs retrieved in top 10 |
| **Latency Metrics** | | | |
| Mean | 2803ms | 555ms | Average response time |
| P50 | 2786ms | 605ms | 50th percentile (median) |
| P90 | 3138ms | 664ms | 90th percentile |
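The latency rows above reduce raw per-request timings to a mean and two percentiles; a minimal sketch of that computation follows (the sample timings are made up):

```python
# Reduce raw per-request latencies to the Mean/P50/P90 figures reported above.
import numpy as np

latencies_ms = np.array([512, 590, 598, 604, 611, 618, 629, 655, 702, 835])  # toy samples

print(f"Mean {latencies_ms.mean():.0f}ms  "
      f"P50 {np.percentile(latencies_ms, 50):.0f}ms  "
      f"P90 {np.percentile(latencies_ms, 90):.0f}ms")
```

Note that a mean below the median (as in Voyage AI Rerank 2.5 Lite's MSMARCO and Business Reports rows) is not an error; it simply indicates a left-skewed distribution, with most requests clustered near the median and a tail of faster ones pulling the mean down.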
Explore More
Compare more rerankers
See how all reranking models stack up: compare Cohere, Jina AI, Voyage, ZeRank, and more. View comprehensive benchmarks, weigh performance metrics, and find the right reranker for your RAG application.