Cohere Embed Multilingual v3 vs OpenAI text-embedding-3-large

Detailed comparison between Cohere Embed Multilingual v3 and OpenAI text-embedding-3-large. See which embedding model best meets your accuracy and performance needs.

Model Comparison

OpenAI text-embedding-3-large takes the lead.

Both Cohere Embed Multilingual v3 and OpenAI text-embedding-3-large are powerful embedding models designed to improve retrieval quality in RAG applications. However, their performance characteristics differ in important ways.
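In practice both models are accessed through simple embed endpoints. Below is a minimal sketch of generating a query embedding with each vendor's Python SDK (assuming the `openai` and `cohere` packages with API keys in the environment; response handling follows the classic client APIs and may differ in newer SDK versions):

```python
from openai import OpenAI
import cohere

openai_client = OpenAI()         # reads OPENAI_API_KEY from the environment
cohere_client = cohere.Client()  # reads CO_API_KEY from the environment

query = ["What is retrieval-augmented generation?"]

# OpenAI: one endpoint for queries and documents alike
openai_vec = openai_client.embeddings.create(
    model="text-embedding-3-large",
    input=query,
).data[0].embedding              # 3072 dimensions by default

# Cohere: input_type tells the model whether it embeds a query or a document
cohere_vec = cohere_client.embed(
    texts=query,
    model="embed-multilingual-v3.0",
    input_type="search_query",
).embeddings[0]                  # 1024 dimensions
```

At retrieval time the query vector is compared against pre-embedded document vectors, typically by cosine similarity, and the top-k matches are passed to the generator.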

Why OpenAI text-embedding-3-large:

  • OpenAI text-embedding-3-large has a 38-point higher ELO rating (see the sketch after this list)
  • OpenAI text-embedding-3-large delivers better accuracy (nDCG@10: 0.811 vs 0.781)
  • OpenAI text-embedding-3-large has a 12.9-percentage-point higher win rate (55.7% vs 42.9%)
  • The trade-off: Cohere Embed Multilingual v3 is 8898ms faster on average
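The ELO numbers follow the chess convention, where a rating gap maps to an expected head-to-head win probability. As a sanity check, here is a minimal sketch assuming the standard expected-score formula with a 400-point scale (the leaderboard's exact rating method may differ); the 38-point gap predicts a win probability close to the observed 55.7% win rate:

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score of A against B under the standard ELO model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# 1539 (OpenAI text-embedding-3-large) vs 1501 (Cohere Embed Multilingual v3)
print(round(elo_expected_score(1539, 1501), 3))  # 0.554 -> ~55.4%, near the 55.7% win rate
```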

Overview

Key metrics

Metric | Cohere Embed Multilingual v3 | OpenAI text-embedding-3-large | Description
ELO Rating | 1501 | 1539 | Overall ranking quality
Win Rate | 42.9% | 55.7% | Head-to-head performance
Accuracy (nDCG@10) | 0.781 | 0.811 | Ranking quality metric
Average Latency | 24024ms | 32922ms | Response time

Visual Performance Analysis

[Charts: ELO Rating Comparison · Win/Loss/Tie Breakdown · Accuracy Across Datasets (nDCG@10) · Latency Distribution (ms)]

Breakdown

How the models stack up

Metric | Cohere Embed Multilingual v3 | OpenAI text-embedding-3-large | Description

Overall Performance
ELO Rating | 1501 | 1539 | Overall ranking quality based on pairwise comparisons
Win Rate | 42.9% | 55.7% | Percentage of comparisons won against other models

Pricing & Availability
Price per 1M tokens | $0.100 | $0.130 | Cost per million tokens processed
Release Date | 2024-02-07 | 2024-01-25 | Model release date

Accuracy Metrics
Avg nDCG@10 | 0.781 | 0.811 | Normalized discounted cumulative gain at position 10

Performance Metrics
Avg Latency | 24024ms | 32922ms | Average response time across all datasets
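Avg nDCG@10 is the headline accuracy figure. For readers unfamiliar with the metric, here is a minimal sketch of the standard computation, assuming binary relevance labels and the usual log2 rank discount (the benchmark may use graded relevance; the function names are illustrative):

```python
import math

def dcg_at_k(relevances: list[int], k: int) -> float:
    """Discounted cumulative gain: relevance discounted by log2 of rank."""
    return sum(rel / math.log2(rank + 2)  # rank 0 -> log2(2) = 1
               for rank, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances: list[int], k: int) -> float:
    """DCG normalized by the DCG of the ideal (sorted) ranking."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Example: 10 retrieved docs, 1 = relevant, 0 = not
print(ndcg_at_k([1, 0, 1, 1, 0, 0, 1, 0, 0, 0], 10))  # ~0.88
```

nDCG rewards putting relevant documents near the top: a relevant hit at rank 1 counts fully, while the same hit at rank 10 is discounted by log2(11).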

Dataset Performance

By field

Comprehensive comparison of accuracy metrics (nDCG, Recall) and latency percentiles for each benchmark dataset.
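As a reference for the tables below, here is a minimal sketch of Recall@k and of computing P50/P90 latency percentiles (numpy is assumed; the IDs and timing values are illustrative, not benchmark data):

```python
import numpy as np

def recall_at_k(retrieved_ids: list[str], relevant_ids: set[str], k: int) -> float:
    """Fraction of all relevant documents that appear in the top-k results."""
    hits = sum(1 for doc_id in retrieved_ids[:k] if doc_id in relevant_ids)
    return hits / len(relevant_ids) if relevant_ids else 0.0

print(recall_at_k(["d3", "d7", "d1", "d9"], {"d1", "d2", "d3"}, k=3))  # 2 of 3 found -> 0.67

# P50/P90 over per-query latencies in milliseconds
latencies_ms = np.array([36100, 37029, 38400, 41200, 43453], dtype=float)
p50, p90 = np.percentile(latencies_ms, [50, 90])
print(f"P50={p50:.0f}ms P90={p90:.0f}ms")
```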

PG

Metric | Cohere Embed Multilingual v3 | OpenAI text-embedding-3-large | Description

Accuracy Metrics: not reported for this dataset.

Latency Metrics
Mean | 37785ms | 57366ms | Average response time
P50 | 37029ms | 56219ms | 50th percentile (median)
P90 | 43453ms | 65971ms | 90th percentile

Business Reports

Metric | Cohere Embed Multilingual v3 | OpenAI text-embedding-3-large | Description

Accuracy Metrics: not reported for this dataset.

Latency Metrics
Mean | 5204ms | 8013ms | Average response time
P50 | 5100ms | 7853ms | 50th percentile (median)
P90 | 5985ms | 9215ms | 90th percentile

DBPedia

Metric | Cohere Embed Multilingual v3 | OpenAI text-embedding-3-large | Description

Accuracy Metrics
nDCG@5 | 0.619 | 0.648 | Ranking quality at top 5 results
nDCG@10 | 0.591 | 0.641 | Ranking quality at top 10 results
Recall@5 | 0.222 | 0.255 | % of relevant docs in top 5
Recall@10 | 0.329 | 0.377 | % of relevant docs in top 10

Latency Metrics
Mean | 33570ms | 42058ms | Average response time
P50 | 32899ms | 41217ms | 50th percentile (median)
P90 | 38606ms | 48367ms | 90th percentile

FiQa

Metric | Cohere Embed Multilingual v3 | OpenAI text-embedding-3-large | Description

Accuracy Metrics
nDCG@5 | 0.637 | 0.730 | Ranking quality at top 5 results
nDCG@10 | 0.654 | 0.752 | Ranking quality at top 10 results
Recall@5 | 0.621 | 0.700 | % of relevant docs in top 5
Recall@10 | 0.692 | 0.781 | % of relevant docs in top 10

Latency Metrics
Mean | 37581ms | 48635ms | Average response time
P50 | 36829ms | 47662ms | 50th percentile (median)
P90 | 43218ms | 55930ms | 90th percentile

SciFact

Metric | Cohere Embed Multilingual v3 | OpenAI text-embedding-3-large | Description

Accuracy Metrics
nDCG@5 | 0.723 | 0.726 | Ranking quality at top 5 results
nDCG@10 | 0.732 | 0.761 | Ranking quality at top 10 results
Recall@5 | 0.808 | 0.768 | % of relevant docs in top 5
Recall@10 | 0.833 | 0.863 | % of relevant docs in top 10

Latency Metrics
Mean | 39542ms | 55796ms | Average response time
P50 | 38751ms | 54680ms | 50th percentile (median)
P90 | 45473ms | 64165ms | 90th percentile

MSMARCO

Metric | Cohere Embed Multilingual v3 | OpenAI text-embedding-3-large | Description

Accuracy Metrics
nDCG@5 | 0.996 | 1.000 | Ranking quality at top 5 results
nDCG@10 | 0.994 | 1.000 | Ranking quality at top 10 results
Recall@5 | 0.122 | 0.123 | % of relevant docs in top 5
Recall@10 | 0.218 | 0.224 | % of relevant docs in top 10

Latency Metrics
Mean | 32380ms | 41860ms | Average response time
P50 | 31732ms | 41023ms | 50th percentile (median)
P90 | 37237ms | 48139ms | 90th percentile

NorQuAD

Metric | Cohere Embed Multilingual v3 | OpenAI text-embedding-3-large | Description

Accuracy Metrics: not reported for this dataset.

Latency Metrics
Mean | 3824ms | 5613ms | Average response time
P50 | 3748ms | 5501ms | 50th percentile (median)
P90 | 4398ms | 6455ms | 90th percentile

ARCD

Metric | Cohere Embed Multilingual v3 | OpenAI text-embedding-3-large | Description

Accuracy Metrics
nDCG@5 | 0.925 | 0.899 | Ranking quality at top 5 results
nDCG@10 | 0.933 | 0.899 | Ranking quality at top 10 results
Recall@5 | 0.940 | 0.940 | % of relevant docs in top 5
Recall@10 | 0.960 | 0.940 | % of relevant docs in top 10

Latency Metrics
Mean | 2305ms | 4036ms | Average response time
P50 | 2259ms | 3955ms | 50th percentile (median)
P90 | 2651ms | 4641ms | 90th percentile

Explore More

Compare more embeddings

See how all embedding models stack up: OpenAI, Cohere, Jina AI, Voyage, and more. View comprehensive benchmarks, compare performance metrics, and find the best embedding model for your RAG application.