Jina Reranker v2 Base Multilingual vs Zerank 1 Small

Detailed comparison between Jina Reranker v2 Base Multilingual and Zerank 1 Small. See which reranker best meets your accuracy and performance needs.

Model Comparison

Zerank 1 Small takes the lead.

Both Jina Reranker v2 Base Multilingual and Zerank 1 Small are powerful reranking models designed to improve retrieval quality in RAG applications. However, their performance characteristics differ in important ways.
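In a RAG pipeline, the reranker sits between first-stage retrieval and generation: it scores each candidate document against the query and reorders the candidates before they reach the LLM. Below is a minimal sketch of that step using sentence-transformers' CrossEncoder; the Hugging Face repository IDs are assumptions, so check each model card for the exact ID and loading options.

```python
# Minimal reranking sketch using sentence-transformers' CrossEncoder.
# The Hugging Face repo IDs below are assumptions; verify on the model cards.
from sentence_transformers import CrossEncoder

query = "How do I dispute a credit card charge?"
candidates = [  # e.g. top-k hits from a first-stage retriever
    "Contact your card issuer within 60 days to file a dispute.",
    "Credit scores are computed from payment history and utilization.",
    "Chargebacks reverse a transaction after a successful dispute.",
]

# jina-reranker-v2 ships custom modeling code, hence trust_remote_code;
# zerank-1-small would load the same way under its own repo ID.
model = CrossEncoder("jinaai/jina-reranker-v2-base-multilingual", trust_remote_code=True)

# Score every (query, document) pair, then sort candidates best-first.
scores = model.predict([(query, doc) for doc in candidates])
reranked = sorted(zip(candidates, scores), key=lambda p: p[1], reverse=True)
for doc, score in reranked:
    print(f"{score:.3f}  {doc}")
```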

Why Zerank 1 Small:

  • Zerank 1 Small has a 91-point higher ELO rating (1549 vs 1458)
  • Zerank 1 Small delivers better accuracy (nDCG@10: 0.498 vs 0.477)
  • Zerank 1 Small has a 14.5-percentage-point higher win rate (57.7% vs 43.2%)
  • The trade-off: Jina Reranker v2 Base Multilingual is 207ms faster on average (1044ms vs 1251ms)

Overview

Key metrics

| Metric | Description | Jina Reranker v2 Base Multilingual | Zerank 1 Small |
| --- | --- | --- | --- |
| ELO Rating | Overall ranking quality | 1458 | 1549 |
| Win Rate | Head-to-head performance | 43.2% | 57.7% |
| Accuracy (nDCG@10) | Ranking quality metric | 0.477 | 0.498 |
| Average Latency | Response time | 1044ms | 1251ms |
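nDCG@10, the accuracy figure above, rewards placing relevant documents near the top: each relevant result contributes a gain discounted by the log of its rank, normalized against a perfect ordering of the same results. A minimal sketch with binary relevance labels (the example labels are illustrative):

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: rank 1 divides by log2(2), rank 2 by log2(3), ...
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg_at_k(ranked_relevances, k=10):
    # Normalize the model ranking's DCG by the DCG of the ideal ordering.
    # (Sketch: the ideal here only considers the documents that were ranked.)
    ideal = sorted(ranked_relevances, reverse=True)
    ideal_dcg = dcg(ideal[:k])
    return dcg(ranked_relevances[:k]) / ideal_dcg if ideal_dcg > 0 else 0.0

# 1 = relevant, 0 = not relevant, in the order the reranker returned them.
print(ndcg_at_k([1, 0, 1, 0, 0, 1, 0, 0, 0, 0]))  # ≈ 0.86
```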

Visual Performance Analysis

[Charts: ELO Rating Comparison · Win/Loss/Tie Breakdown · Accuracy Across Datasets (nDCG@10) · Latency Distribution (ms)]

Breakdown

How the models stack up

| Metric | Jina Reranker v2 Base Multilingual | Zerank 1 Small | Description |
| --- | --- | --- | --- |
| **Overall Performance** | | | |
| ELO Rating | 1458 | 1549 | Overall ranking quality based on pairwise comparisons |
| Win Rate | 43.2% | 57.7% | Percentage of comparisons won against other models |
| **Accuracy Metrics** | | | |
| Avg nDCG@10 | 0.477 | 0.498 | Normalized discounted cumulative gain at position 10 |
| **Performance Metrics** | | | |
| Avg Latency | 1044ms | 1251ms | Average response time across all datasets |
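The ELO figures come from those pairwise comparisons, with ratings updated as in chess: the winner takes points from the loser, scaled by how surprising the result was. A sketch of a standard Elo update follows; the benchmark's actual K-factor and judging procedure are assumptions here.

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    # Expected score of A from the logistic Elo curve.
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    # score_a is 1 for a win, 0.5 for a tie, 0 for a loss.
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta

# A 91-point gap (1549 vs 1458) implies the higher-rated model is expected
# to win about 63% of head-to-head comparisons under the Elo model.
print(1 / (1 + 10 ** ((1458 - 1549) / 400)))  # ≈ 0.63
```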

Dataset Performance

By dataset

Comprehensive comparison of accuracy metrics (nDCG, Recall) and latency percentiles for each benchmark dataset.
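The latency rows in the tables below are simple summaries of raw per-query timings: Mean is the average, P50 the median, and P90 the value under which 90% of requests complete. A quick sketch with made-up timings:

```python
import numpy as np

# Hypothetical per-query response times in milliseconds.
latencies_ms = np.array([812, 901, 1044, 978, 1605, 890, 1432, 1101, 950, 1020])

print(f"Mean: {latencies_ms.mean():.0f}ms")
print(f"P50:  {np.percentile(latencies_ms, 50):.0f}ms")  # median
print(f"P90:  {np.percentile(latencies_ms, 90):.0f}ms")  # tail latency
```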

BEIR/fiqa

| Metric | Jina Reranker v2 Base Multilingual | Zerank 1 Small | Description |
| --- | --- | --- | --- |
| **Accuracy Metrics** | | | |
| nDCG@5 | 0.112 | 0.114 | Ranking quality at top 5 results |
| nDCG@10 | 0.121 | 0.124 | Ranking quality at top 10 results |
| Recall@5 | 0.105 | 0.098 | % of relevant docs in top 5 |
| Recall@10 | 0.130 | 0.125 | % of relevant docs in top 10 |
| **Latency Metrics** | | | |
| Mean | 1014ms | 1183ms | Average response time |
| P50 | 827ms | 1170ms | 50th percentile (median) |
| P90 | 1605ms | 1357ms | 90th percentile |
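Recall@k, used in the table above, is the fraction of all relevant documents that land in the top k results. A minimal sketch with illustrative document IDs:

```python
def recall_at_k(ranked_ids, relevant_ids, k):
    # Fraction of the relevant set that shows up in the top-k ranking.
    hits = len(set(ranked_ids[:k]) & set(relevant_ids))
    return hits / len(relevant_ids) if relevant_ids else 0.0

print(recall_at_k(["d3", "d7", "d1", "d9", "d2"], {"d1", "d4", "d7"}, k=5))  # ≈ 0.67
```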

BEIR/scifact

| Metric | Jina Reranker v2 Base Multilingual | Zerank 1 Small | Description |
| --- | --- | --- | --- |
| **Accuracy Metrics** | | | |
| nDCG@5 | 0.830 | 0.869 | Ranking quality at top 5 results |
| nDCG@10 | 0.832 | 0.872 | Ranking quality at top 10 results |
| Recall@5 | 0.871 | 0.916 | % of relevant docs in top 5 |
| Recall@10 | 0.876 | 0.920 | % of relevant docs in top 10 |
| **Latency Metrics** | | | |
| Mean | 1123ms | 1345ms | Average response time |
| P50 | 1015ms | 1272ms | 50th percentile (median) |
| P90 | 1285ms | 1552ms | 90th percentile |

PG

Only latency metrics were reported for this dataset; accuracy metrics are not available.

| Metric | Jina Reranker v2 Base Multilingual | Zerank 1 Small | Description |
| --- | --- | --- | --- |
| **Latency Metrics** | | | |
| Mean | 994ms | 1224ms | Average response time |
| P50 | 839ms | 1187ms | 50th percentile (median) |
| P90 | 1436ms | 1307ms | 90th percentile |

Explore More

Compare more rerankers

See how all reranking models stack up. Compare Cohere, Jina AI, Voyage, ZeRank, and more. View comprehensive benchmarks, compare performance metrics, and find the perfect reranker for your RAG application.