Contextual AI Rerank v2 Instruct vs Jina Reranker v2 Base Multilingual

Detailed comparison between Contextual AI Rerank v2 Instruct and Jina Reranker v2 Base Multilingual. See which reranker best meets your accuracy and performance needs.

Model Comparison

Contextual AI Rerank v2 Instruct takes the lead.

Both Contextual AI Rerank v2 Instruct and Jina Reranker v2 Base Multilingual are powerful reranking models designed to improve retrieval quality in RAG applications. However, their performance characteristics differ in important ways.
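To make the comparison concrete, here is a minimal sketch of where a reranker sits in a RAG pipeline: a first-stage retriever produces candidates, and the reranker re-scores them against the query. The endpoint URL and response shape below are illustrative placeholders, not either vendor's actual API; consult the respective docs for real field names.

```python
import requests

# Hypothetical reranker endpoint; both vendors expose similar HTTP APIs,
# but the URL, payload, and response fields here are assumptions.
RERANK_URL = "https://api.example.com/v1/rerank"

def rerank(query: str, candidates: list[str], top_n: int = 5) -> list[str]:
    """Send first-stage retrieval hits to a reranker and keep the top_n."""
    resp = requests.post(
        RERANK_URL,
        json={"query": query, "documents": candidates, "top_n": top_n},
        timeout=30,
    )
    resp.raise_for_status()
    # Assume the response lists (index, relevance_score) pairs, best first.
    results = resp.json()["results"]
    return [candidates[r["index"]] for r in results]

# First-stage retrieval (e.g. BM25 or a vector store) would supply these.
hits = ["doc about reranking", "unrelated doc", "doc about RAG pipelines"]
top = rerank("how do rerankers improve RAG?", hits, top_n=2)
```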

Why Contextual AI Rerank v2 Instruct:

  • Contextual AI Rerank v2 Instruct has a 155-point higher ELO rating (1461 vs 1306; see the sketch after this list)
  • Contextual AI Rerank v2 Instruct delivers better average accuracy (nDCG@10: 0.230 vs 0.193)
  • Contextual AI Rerank v2 Instruct has a 13.8-percentage-point higher win rate (42.3% vs 28.5%)
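ELO ratings are derived from pairwise comparisons, so the 155-point gap can be read as a win probability. Assuming the conventional 400-point logistic ELO scale (the leaderboard's exact variant is not stated here), the gap implies roughly a 71% expected score for the higher-rated model:

```python
def elo_expected_score(r_a: float, r_b: float) -> float:
    """Expected score of A vs B under the standard ELO model (400-point scale)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

# A 155-point gap (1461 vs 1306) gives an expected score of about 0.709.
print(round(elo_expected_score(1461, 1306), 3))
```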

Overview

Key metrics

| Metric | Description | Contextual AI Rerank v2 Instruct | Jina Reranker v2 Base Multilingual |
| --- | --- | --- | --- |
| ELO Rating | Overall ranking quality | 1461 | 1306 |
| Win Rate | Head-to-head performance | 42.3% | 28.5% |
| Accuracy (nDCG@10) | Ranking quality metric | 0.230 | 0.193 |
| Average Latency | Response time | 3333 ms | 746 ms |

Visual Performance Analysis

Charts on the source page: ELO Rating Comparison, Win/Loss/Tie Breakdown, Accuracy Across Datasets (nDCG@10), and Latency Distribution (ms). The underlying numbers appear in the tables below.

Breakdown

How the models stack up

| Metric | Contextual AI Rerank v2 Instruct | Jina Reranker v2 Base Multilingual | Description |
| --- | --- | --- | --- |
| ELO Rating | 1461 | 1306 | Overall ranking quality based on pairwise comparisons |
| Win Rate | 42.3% | 28.5% | Percentage of comparisons won against other models |
| Price per 1M tokens | $0.050 | $0.045 | Cost per million tokens processed |
| Release Date | 2025-09-12 | 2024-06-25 | Model release date |
| Avg nDCG@10 | 0.230 | 0.193 | Normalized discounted cumulative gain at position 10 |
| Avg Latency | 3333 ms | 746 ms | Average response time across all datasets |

Dataset Performance

By field

Comprehensive comparison of accuracy metrics (nDCG, Recall) and latency percentiles for each benchmark dataset.
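The latency rows report the mean plus P50/P90 percentiles over per-request timings. A minimal sketch of computing such a summary (the benchmark's exact percentile interpolation method may differ):

```python
from statistics import mean, quantiles

def latency_summary(timings_ms: list[float]) -> dict[str, float]:
    """Mean, median (P50), and P90 of per-request latencies in milliseconds."""
    # quantiles(..., n=100) returns 99 cut points; index 49 is P50, 89 is P90.
    q = quantiles(timings_ms, n=100)
    return {"mean": mean(timings_ms), "p50": q[49], "p90": q[89]}

print(latency_summary([612, 640, 655, 700, 720, 760, 810, 905, 950, 1800]))
```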

MSMARCO

| Metric | Contextual AI Rerank v2 Instruct | Jina Reranker v2 Base Multilingual | Description |
| --- | --- | --- | --- |
| nDCG@5 | 0.510 | 0.499 | Ranking quality at top 5 results |
| nDCG@10 | 0.538 | 0.539 | Ranking quality at top 10 results |
| Recall@5 | 0.720 | 0.680 | % of relevant docs in top 5 |
| Recall@10 | 0.800 | 0.800 | % of relevant docs in top 10 |
| Mean latency | 3283 ms | 694 ms | Average response time |
| P50 latency | 3260 ms | 616 ms | 50th percentile (median) |
| P90 latency | 3885 ms | 911 ms | 90th percentile |

ArguAna

| Metric | Contextual AI Rerank v2 Instruct | Jina Reranker v2 Base Multilingual | Description |
| --- | --- | --- | --- |
| nDCG@5 | 0.525 | 0.314 | Ranking quality at top 5 results |
| nDCG@10 | 0.560 | 0.374 | Ranking quality at top 10 results |
| Recall@5 | 0.860 | 0.580 | % of relevant docs in top 5 |
| Recall@10 | 0.960 | 0.760 | % of relevant docs in top 10 |
| Mean latency | 3627 ms | 689 ms | Average response time |
| P50 latency | 3601 ms | 617 ms | 50th percentile (median) |
| P90 latency | 4037 ms | 834 ms | 90th percentile |

FiQA

| Metric | Contextual AI Rerank v2 Instruct | Jina Reranker v2 Base Multilingual | Description |
| --- | --- | --- | --- |
| nDCG@5 | 0.119 | 0.105 | Ranking quality at top 5 results |
| nDCG@10 | 0.125 | 0.108 | Ranking quality at top 10 results |
| Recall@5 | 0.123 | 0.088 | % of relevant docs in top 5 |
| Recall@10 | 0.135 | 0.093 | % of relevant docs in top 10 |
| Mean latency | 3283 ms | 676 ms | Average response time |
| P50 latency | 3209 ms | 626 ms | 50th percentile (median) |
| P90 latency | 3891 ms | 837 ms | 90th percentile |

Business Reports

| Metric | Contextual AI Rerank v2 Instruct | Jina Reranker v2 Base Multilingual | Description |
| --- | --- | --- | --- |
| Mean latency | 3231 ms | 690 ms | Average response time |
| P50 latency | 3129 ms | 620 ms | 50th percentile (median) |
| P90 latency | 3651 ms | 824 ms | 90th percentile |

PG

| Metric | Contextual AI Rerank v2 Instruct | Jina Reranker v2 Base Multilingual | Description |
| --- | --- | --- | --- |
| Mean latency | 3566 ms | 1059 ms | Average response time |
| P50 latency | 3475 ms | 823 ms | 50th percentile (median) |
| P90 latency | 4148 ms | 1744 ms | 90th percentile |

DBPedia

| Metric | Contextual AI Rerank v2 Instruct | Jina Reranker v2 Base Multilingual | Description |
| --- | --- | --- | --- |
| nDCG@5 | 0.158 | 0.128 | Ranking quality at top 5 results |
| nDCG@10 | 0.159 | 0.136 | Ranking quality at top 10 results |
| Recall@5 | 0.004 | 0.004 | % of relevant docs in top 5 |
| Recall@10 | 0.005 | 0.005 | % of relevant docs in top 10 |
| Mean latency | 3010 ms | 671 ms | Average response time |
| P50 latency | 3042 ms | 614 ms | 50th percentile (median) |
| P90 latency | 3283 ms | 825 ms | 90th percentile |
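Taken together, the tables show a clear trade-off: Contextual AI Rerank v2 Instruct is more accurate on average, while Jina Reranker v2 Base Multilingual responds roughly 4.5x faster. Below is a hypothetical selection helper using the published averages; the policy of maximizing nDCG@10 under a latency budget is purely illustrative, not a recommendation from either vendor.

```python
# Published per-model averages from the tables above; the selection rule
# (max accuracy subject to a mean-latency budget) is a hypothetical policy.
MODELS = {
    "contextual-ai-rerank-v2-instruct": {"ndcg10": 0.230, "mean_latency_ms": 3333},
    "jina-reranker-v2-base-multilingual": {"ndcg10": 0.193, "mean_latency_ms": 746},
}

def pick_reranker(latency_budget_ms: float) -> str | None:
    """Most accurate model whose average latency fits the budget, if any."""
    eligible = {k: v for k, v in MODELS.items() if v["mean_latency_ms"] <= latency_budget_ms}
    return max(eligible, key=lambda k: eligible[k]["ndcg10"]) if eligible else None

print(pick_reranker(1000))  # -> jina-reranker-v2-base-multilingual
print(pick_reranker(5000))  # -> contextual-ai-rerank-v2-instruct
```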

Explore More

Compare more rerankers

See how all reranking models stack up: compare Cohere, Jina AI, Voyage, ZeRank, and more. Browse comprehensive benchmarks, compare performance metrics, and find the reranker that best fits your RAG application.