Cohere Rerank 3.5
A reranking model designed for production RAG (retrieval-augmented generation) applications. It improves search relevance by reordering retrieved documents by semantic similarity to the query, supports multiple languages, and handles diverse domains.
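To make the rerank step concrete, here is a minimal sketch of where a reranker sits in a retrieval pipeline. A real deployment would call Cohere's Rerank API with this model; `overlap_score` below is a hypothetical stand-in scorer (not Cohere's model) so the flow is runnable offline.

```python
# Sketch of the rerank step in a RAG pipeline. In production the
# score_fn would be a call to a reranking model such as Cohere
# Rerank 3.5; overlap_score here is a toy stand-in.

def rerank(query, documents, score_fn, top_n=3):
    """Reorder retrieved documents by relevance to the query."""
    scored = [(score_fn(query, doc), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_n]]

def overlap_score(query, doc):
    """Toy scorer: count query terms that appear in the document."""
    terms = set(query.lower().split())
    return sum(1 for t in terms if t in doc.lower())

docs = [
    "Reranking improves retrieval precision.",
    "Unrelated note about billing.",
    "Semantic similarity drives reranking quality.",
]
top = rerank("reranking similarity", docs, overlap_score, top_n=2)
```

The reranker only reorders and truncates the candidate list; the first-stage retriever still decides which documents are candidates at all.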
Model Information
- Provider: Cohere
- License: Proprietary
- Model Name: cohere-rerank-v3.5
- Total Evaluations: 375
 
Performance Record
- Wins: 262 (34.9%)
- Losses: 485 (64.7%)
- Ties: 3 (0.4%)
Performance Overview
ELO ratings by dataset
Cohere Rerank 3.5's ELO performance varies across different benchmark datasets, showing its strengths in specific domains.
[Chart: Cohere Rerank 3.5 - ELO by Dataset]
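The ELO ratings reported here come from pairwise matchups (the win/loss/tie records below). As a reference, the standard ELO update rule looks like the sketch below; the leaderboard's exact K-factor and initial rating are not stated, so K=32 is an assumption.

```python
# Standard ELO rating update for a single pairwise matchup.
# K=32 and the example ratings are assumptions; the leaderboard
# does not publish its K-factor.

def elo_update(rating_a, rating_b, score_a, k=32):
    """Return updated (rating_a, rating_b) after one matchup.

    score_a is 1.0 for a win by A, 0.0 for a loss, 0.5 for a tie.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta

# Example: a 1474-rated model beats a 1500-rated opponent and
# gains slightly more than half of K, since it was the underdog.
new_a, new_b = elo_update(1474, 1500, 1.0)
```

Because the update is zero-sum, the total rating mass across models is conserved; only its distribution shifts with each matchup.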
Detailed Metrics
Dataset breakdown
Performance metrics across different benchmark datasets, including accuracy and latency percentiles.
BEIR/scifact
ELO: 1474 · Win Rate: 45.6% · Record: 114W-135L-1T
Accuracy Metrics
- nDCG@5: 0.856
- nDCG@10: 0.866
- Recall@5: 0.891
- Recall@10: 0.920
 
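For readers unfamiliar with these metrics, the sketch below shows how nDCG@k and Recall@k are typically computed over a ranked list with binary relevance labels (the usual BEIR convention; the leaderboard's exact implementation is assumed, not stated).

```python
import math

# nDCG@k and Recall@k for a ranked list of binary relevance labels
# (1 = relevant, 0 = not). This follows the common BEIR-style
# convention; the leaderboard's exact scoring code is not published.

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k positions."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """DCG normalized by the DCG of an ideally ordered list."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg else 0.0

def recall_at_k(relevances, total_relevant, k):
    """Fraction of all relevant documents found in the top k."""
    return sum(relevances[:k]) / total_relevant if total_relevant else 0.0

# Example: relevant docs at ranks 1 and 3, two relevant docs total.
ranking = [1, 0, 1, 0, 0]
ndcg5 = ndcg_at_k(ranking, 5)          # penalized for rank 3 vs rank 2
rec5 = recall_at_k(ranking, 2, 5)      # 1.0: both relevant docs in top 5
```

The contrast between datasets is stark: nDCG@5 of 0.856 on scifact versus 0.121 on fiqa under the same metric, so the gap reflects the datasets, not a change in how the score is computed.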
Latency Distribution
- Mean: 638ms
- P50 (Median): 612ms
- P90: 819ms
 
BEIR/fiqa
ELO: 1389 · Win Rate: 35.2% · Record: 88W-160L-2T
Accuracy Metrics
- nDCG@5: 0.121
- nDCG@10: 0.127
- Recall@5: 0.118
- Recall@10: 0.130
 
Latency Distribution
- Mean: 510ms
- P50 (Median): 569ms
- P90: 617ms
 
PG
ELO: 1331 · Win Rate: 24.0% · Record: 60W-190L-0T
Latency Distribution
- Mean: 661ms
- P50 (Median): 613ms
- P90: 792ms
 
Compare Models
See how it stacks up
Compare Cohere Rerank 3.5 with other top rerankers to understand the differences in performance, accuracy, and latency.