Jina Reranker v2 Base Multilingual vs Zerank 1

Detailed comparison between Jina Reranker v2 Base Multilingual and Zerank 1. See which reranker best meets your accuracy and performance needs.

Model Comparison

Zerank 1 takes the lead.

Both Jina Reranker v2 Base Multilingual and Zerank 1 are powerful reranking models designed to improve retrieval quality in RAG applications. However, their performance characteristics differ in important ways.
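
In practice, the reranker sits between first-stage retrieval and generation: a fast vector search over-fetches candidates, and the reranker rescores them against the query before the top few are passed to the LLM. Below is a minimal Python sketch of that step. The endpoint and payload mirror Jina's hosted rerank API as published at the time of writing, but the exact request shape, response fields, and key handling are assumptions to verify against the current documentation.

    import requests  # third-party HTTP client: pip install requests

    JINA_API_KEY = "<your-api-key>"  # hypothetical placeholder

    def rerank(query, documents, top_n=5):
        """Rescore candidate documents against a query with a hosted reranker.

        Endpoint and payload follow Jina's rerank API as published at the
        time of writing; verify against the current docs.
        """
        resp = requests.post(
            "https://api.jina.ai/v1/rerank",
            headers={"Authorization": f"Bearer {JINA_API_KEY}"},
            json={
                "model": "jina-reranker-v2-base-multilingual",
                "query": query,
                "documents": documents,
                "top_n": top_n,
            },
            timeout=30,
        )
        resp.raise_for_status()
        # Each result holds the index of the original document and a relevance score.
        return resp.json()["results"]

    # Typical RAG flow: over-fetch with vector search, then rerank before generation.
    query = "What drives the EUR/USD exchange rate?"
    candidates = [
        "Interest-rate differentials between the ECB and the Fed move EUR/USD.",
        "FIQA is a benchmark of financial question-answering pairs.",
        "Rainfall in the Amazon basin varies strongly by season.",
    ]
    for result in rerank(query, candidates, top_n=2):
        print(result["index"], round(result["relevance_score"], 3))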

Why Zerank 1:

  • Zerank 1 has a 147-point higher ELO rating (1605 vs 1458)
  • Zerank 1 delivers better accuracy (nDCG@10: 0.493 vs 0.477)
  • Zerank 1 has a 19.9-percentage-point higher win rate (63.1% vs 43.2%)
  • The trade-off: Jina Reranker v2 Base Multilingual is 212ms faster on average

Overview

Key metrics

Metric              Description               Jina Reranker v2 Base Multilingual  Zerank 1
ELO Rating          Overall ranking quality   1458                                1605
Win Rate            Head-to-head performance  43.2%                               63.1%
Accuracy (nDCG@10)  Ranking quality metric    0.477                               0.493
Average Latency     Response time             1044ms                              1256ms
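
The ELO ratings above are derived from pairwise model comparisons. As a rough illustration of how such a rating moves, here is the standard Elo update; the K-factor of 32 and the 400-point logistic scale are conventional defaults, not constants stated by this benchmark.

    def elo_update(rating_a, rating_b, score_a, k=32.0):
        """One standard Elo update; score_a is 1.0 win, 0.5 tie, 0.0 loss for A."""
        expected_a = 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))
        new_a = rating_a + k * (score_a - expected_a)
        new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
        return new_a, new_b

    # On the conventional 400-point scale, a 147-point gap (1605 vs 1458) implies
    # the higher-rated model is expected to win about 70% of direct matchups.
    print(1.0 / (1.0 + 10.0 ** ((1458 - 1605) / 400.0)))  # ~0.70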

Visual Performance Analysis

[Charts: ELO Rating Comparison · Win/Loss/Tie Breakdown · Accuracy Across Datasets (nDCG@10) · Latency Distribution (ms)]

Breakdown

How the models stack up

Metric       Jina Reranker v2 Base Multilingual  Zerank 1  Description

Overall Performance
ELO Rating   1458                                1605      Overall ranking quality based on pairwise comparisons
Win Rate     43.2%                               63.1%     Percentage of comparisons won against other models

Accuracy Metrics
Avg nDCG@10  0.477                               0.493     Normalized discounted cumulative gain at position 10

Performance Metrics
Avg Latency  1044ms                              1256ms    Average response time across all datasets
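
nDCG@10 rewards rankings that place relevant documents near the top: gains are discounted logarithmically by rank, then normalized by the best possible ordering. A minimal sketch on binary relevance labels (the benchmark may use graded labels; that detail is an assumption):

    import math

    def dcg_at_k(relevances, k):
        """Discounted cumulative gain: each gain is discounted by log2 of its rank."""
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

    def ndcg_at_k(relevances, k):
        """DCG of the ranking, normalized by the DCG of the ideal (sorted) ranking."""
        ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
        return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

    # Burying the only relevant document at rank 3 scores half as well
    # as putting it first:
    print(ndcg_at_k([0, 0, 1, 0, 0], k=10))  # 0.5
    print(ndcg_at_k([1, 0, 0, 0, 0], k=10))  # 1.0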

Dataset Performance

By dataset

Comprehensive comparison of accuracy metrics (nDCG, Recall) and latency percentiles for each benchmark dataset.

BEIR/fiqa

Metric     Jina Reranker v2 Base Multilingual  Zerank 1  Description

Accuracy Metrics
nDCG@5     0.112                               0.115     Ranking quality at top 5 results
nDCG@10    0.121                               0.121     Ranking quality at top 10 results
Recall@5   0.105                               0.105     % of relevant docs in top 5
Recall@10  0.130                               0.125     % of relevant docs in top 10

Latency Metrics
Mean       1014ms                              1172ms    Average response time
P50        827ms                               1179ms    50th percentile (median)
P90        1605ms                              1326ms    90th percentile
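
Unlike nDCG, the Recall@k rows measure coverage rather than ordering: the fraction of all relevant documents that land anywhere in the top k. A minimal sketch, with illustrative document IDs:

    def recall_at_k(ranked_ids, relevant_ids, k):
        """Fraction of all relevant documents that appear in the top-k results."""
        if not relevant_ids:
            return 0.0
        hits = sum(1 for doc_id in ranked_ids[:k] if doc_id in relevant_ids)
        return hits / len(relevant_ids)

    # Two of four relevant docs retrieved in the top 5 -> recall@5 = 0.5
    print(recall_at_k(["d3", "d9", "d1", "d7", "d2"], {"d1", "d2", "d4", "d8"}, k=5))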

BEIR/scifact

Metric     Jina Reranker v2 Base Multilingual  Zerank 1  Description

Accuracy Metrics
nDCG@5     0.830                               0.863     Ranking quality at top 5 results
nDCG@10    0.832                               0.866     Ranking quality at top 10 results
Recall@5   0.871                               0.916     % of relevant docs in top 5
Recall@10  0.876                               0.920     % of relevant docs in top 10

Latency Metrics
Mean       1123ms                              1358ms    Average response time
P50        1015ms                              1287ms    50th percentile (median)
P90        1285ms                              1571ms    90th percentile

PG

Metric  Jina Reranker v2 Base Multilingual  Zerank 1  Description

Latency Metrics
Mean    994ms                               1238ms    Average response time
P50     839ms                               1217ms    50th percentile (median)
P90     1436ms                              1316ms    90th percentile

Accuracy metrics are not reported for this dataset.
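
Mean, P50, and P90 are simple statistics over per-request latencies; a P90 far above the P50 (as in Jina's fiqa numbers) indicates a long tail of slow requests. A minimal sketch using Python's standard library on illustrative timings; interpolation details differ slightly across tools:

    import statistics

    latencies_ms = [812, 901, 839, 1436, 1120, 978, 865, 1290, 840, 1501]  # illustrative

    mean = statistics.fmean(latencies_ms)
    # statistics.quantiles with n=100 yields the 1st..99th percentile cut points;
    # index 49 is P50 and index 89 is P90.
    q = statistics.quantiles(latencies_ms, n=100)
    p50, p90 = q[49], q[89]
    print(f"mean={mean:.0f}ms  p50={p50:.0f}ms  p90={p90:.0f}ms")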

Explore More

Compare more rerankers

See how all reranking models stack up. Compare Cohere, Jina AI, Voyage, ZeRank, and more. View comprehensive benchmarks, compare performance metrics, and find the perfect reranker for your RAG application.