Cohere Rerank 3.5 vs Zerank 1
Detailed comparison between Cohere Rerank 3.5 and Zerank 1. See which reranker best meets your accuracy and performance needs.
Model Comparison
Zerank 1 takes the lead.
Both Cohere Rerank 3.5 and Zerank 1 are powerful reranking models designed to improve retrieval quality in RAG applications. However, their performance characteristics differ in important ways.
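To make the role of a reranker concrete, the sketch below rescores a few candidate passages against a query using the Cohere Python SDK for Rerank 3.5. The client class, model ID, and response fields reflect Cohere's public API at the time of writing; treat them as assumptions and verify against the current SDK docs before relying on them.

```python
# pip install cohere
import cohere

co = cohere.ClientV2(api_key="YOUR_API_KEY")  # placeholder key

query = "What is the capital of France?"
documents = [
    "Paris is the capital and most populous city of France.",
    "Berlin is the capital of Germany.",
    "The Eiffel Tower is a landmark in Paris.",
]

# Rescore the candidates against the query; top_n limits the results returned.
response = co.rerank(model="rerank-v3.5", query=query, documents=documents, top_n=2)

for result in response.results:
    # Each result carries the index into `documents` and a relevance score.
    print(result.index, result.relevance_score, documents[result.index])
```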
Why Zerank 1:
- Zerank 1 has a 210-point higher ELO rating (see the worked example below)
- Cohere Rerank 3.5 is 653ms faster on average
- Zerank 1 has a 28.1% higher win rate
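To put the 210-point ELO gap in perspective: under the standard Elo logistic model, the expected win probability of the higher-rated model is 1/(1 + 10^(−Δ/400)). The snippet below computes this for the ratings reported here; note that the win rates in the breakdown table further down are measured against the whole model pool, not just head-to-head between these two, so the numbers need not match.

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score of A against B under the standard Elo logistic model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Ratings reported in the breakdown table below.
print(elo_expected_score(1605, 1395))  # ~0.77: Zerank 1 favored head-to-head
```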
Overview
Key metrics

| Metric | Cohere Rerank 3.5 | Zerank 1 | Description |
|---|---|---|---|
| ELO Rating | 1395 | 1605 | Overall ranking quality |
| Win Rate | 34.9% | 63.1% | Head-to-head performance |
| Accuracy (nDCG@10) | 0.497 | 0.493 | Ranking quality metric |
| Average Latency | 603ms | 1256ms | Response time |
Visual Performance Analysis

[Charts: ELO Rating Comparison · Win/Loss/Tie Breakdown · Accuracy Across Datasets (nDCG@10) · Latency Distribution (ms)]
Breakdown
How the models stack up
| Metric | Cohere Rerank 3.5 | Zerank 1 | Description |
|---|---|---|---|
| **Overall Performance** | | | |
| ELO Rating | 1395 | 1605 | Overall ranking quality based on pairwise comparisons |
| Win Rate | 34.9% | 63.1% | Percentage of comparisons won against other models |
| **Accuracy Metrics** | | | |
| Avg nDCG@10 | 0.497 | 0.493 | Normalized discounted cumulative gain at position 10 |
| **Performance Metrics** | | | |
| Avg Latency | 603ms | 1256ms | Average response time across all datasets |
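For reference, nDCG@k rewards placing highly relevant documents near the top: each document's relevance gain is discounted logarithmically by rank, and the sum is normalized by the best achievable ordering. A minimal implementation over graded relevance labels (a sketch, not the benchmark's exact scoring code) looks like this:

```python
import math

def dcg_at_k(relevances: list[float], k: int) -> float:
    """Discounted cumulative gain: rank i (1-based) is discounted by log2(i + 1)."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances: list[float], k: int) -> float:
    """DCG of the ranking, normalized by the DCG of the ideal (sorted) ranking."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Relevance labels of documents in the order the reranker returned them.
print(ndcg_at_k([3, 2, 0, 1, 0, 0, 2, 0, 0, 0], k=10))
```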
Dataset Performance
By field
Comprehensive comparison of accuracy metrics (nDCG, Recall) and latency percentiles for each benchmark dataset.
BEIR/fiqa
| Metric | Cohere Rerank 3.5 | Zerank 1 | Description |
|---|---|---|---|
| **Accuracy Metrics** | | | |
| nDCG@5 | 0.121 | 0.115 | Ranking quality at top 5 results |
| nDCG@10 | 0.127 | 0.121 | Ranking quality at top 10 results |
| Recall@5 | 0.118 | 0.105 | % of relevant docs in top 5 |
| Recall@10 | 0.130 | 0.125 | % of relevant docs in top 10 |
| **Latency Metrics** | | | |
| Mean | 510ms | 1172ms | Average response time |
| P50 | 569ms | 1179ms | 50th percentile (median) |
| P90 | 617ms | 1326ms | 90th percentile |
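Recall@k in these tables is the fraction of a query's relevant documents that appear in the top k after reranking. A short sketch, assuming binary relevance judgments keyed by document ID:

```python
def recall_at_k(ranked_ids: list[str], relevant_ids: set[str], k: int) -> float:
    """Fraction of the relevant documents retrieved within the top k results."""
    if not relevant_ids:
        return 0.0
    hits = sum(1 for doc_id in ranked_ids[:k] if doc_id in relevant_ids)
    return hits / len(relevant_ids)

print(recall_at_k(["d3", "d1", "d7", "d2", "d9"], {"d1", "d2", "d4"}, k=5))  # 2/3
```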
BEIR/scifact
| Metric | Cohere Rerank 3.5 | Zerank 1 | Description |
|---|---|---|---|
| **Accuracy Metrics** | | | |
| nDCG@5 | 0.856 | 0.863 | Ranking quality at top 5 results |
| nDCG@10 | 0.866 | 0.866 | Ranking quality at top 10 results |
| Recall@5 | 0.891 | 0.916 | % of relevant docs in top 5 |
| Recall@10 | 0.920 | 0.920 | % of relevant docs in top 10 |
| **Latency Metrics** | | | |
| Mean | 638ms | 1358ms | Average response time |
| P50 | 612ms | 1287ms | 50th percentile (median) |
| P90 | 819ms | 1571ms | 90th percentile |
PG
| Metric | Cohere Rerank 3.5 | Zerank 1 | Description |
|---|---|---|---|
| **Latency Metrics** | | | |
| Mean | 661ms | 1238ms | Average response time |
| P50 | 613ms | 1217ms | 50th percentile (median) |
| P90 | 792ms | 1316ms | 90th percentile |
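The P50 and P90 figures above are order statistics over per-request latencies. If you collect your own timings, a nearest-rank percentile (one common convention; the benchmark's exact interpolation method may differ, and NumPy's default interpolates instead) reproduces them:

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]

latencies_ms = [512, 498, 601, 577, 1203, 589, 540, 623, 570, 610]
print(percentile(latencies_ms, 50), percentile(latencies_ms, 90))  # 577 623
```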
Explore More
Compare more rerankers
See how all reranking models stack up. Compare Cohere, Jina AI, Voyage, Zerank, and more. View comprehensive benchmarks, compare performance metrics, and find the reranker that best fits your RAG application.