OpenAI text-embedding-3-large vs Cohere Embed Multilingual v3

Detailed comparison between OpenAI text-embedding-3-large and Cohere Embed Multilingual v3. See which embedding model best meets your accuracy and performance needs.

Model Comparison

OpenAI text-embedding-3-large takes the lead.

Both OpenAI text-embedding-3-large and Cohere Embed Multilingual v3 are powerful embedding models designed to improve retrieval quality in RAG applications. However, their performance characteristics differ in important ways.

Why OpenAI text-embedding-3-large:

  • OpenAI text-embedding-3-large has a 38-point higher ELO rating (1539 vs 1501)
  • OpenAI text-embedding-3-large delivers better accuracy (nDCG@10: 0.811 vs 0.781)
  • OpenAI text-embedding-3-large has a 12.9% higher win rate
  • Cohere Embed Multilingual v3 is, however, 8898 ms faster on average (24024 ms vs 32922 ms)
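
For context, both models are called through simple embedding APIs. The sketch below shows how a single query might be embedded with each, assuming the official openai and cohere Python SDKs with API keys in the environment; it is illustrative only, not the benchmark harness used for these results.

```python
# Illustrative only: embed the same query with both models being compared.
# Assumes the official `openai` and `cohere` Python SDKs, with OPENAI_API_KEY
# and CO_API_KEY set in the environment.
from openai import OpenAI
import cohere

query = "What drives retrieval quality in RAG applications?"

oa_client = OpenAI()
oa_resp = oa_client.embeddings.create(
    model="text-embedding-3-large",
    input=[query],
)
openai_vector = oa_resp.data[0].embedding   # 3072 dimensions by default

co_client = cohere.Client()
co_resp = co_client.embed(
    texts=[query],
    model="embed-multilingual-v3.0",
    input_type="search_query",              # "search_document" when indexing passages
)
cohere_vector = co_resp.embeddings[0]       # 1024 dimensions
```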

Overview

Key metrics

Metric              | OpenAI text-embedding-3-large | Cohere Embed Multilingual v3 | Description
ELO Rating          | 1539                          | 1501                         | Overall ranking quality
Win Rate            | 55.7%                         | 42.9%                        | Head-to-head performance
Accuracy (nDCG@10)  | 0.811                         | 0.781                        | Ranking quality metric
Average Latency     | 32922 ms                      | 24024 ms                     | Response time
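
The ELO figures come from pairwise comparisons between models. As a rough guide, the sketch below shows the standard Elo update; the K-factor and 400-point logistic scale are generic assumptions, not the benchmark's documented parameters.

```python
# Generic Elo update for a single pairwise comparison between two models.
# With the standard 400-point logistic scale, a 38-point gap (1539 vs 1501)
# implies roughly a 55% expected win probability for the higher-rated model.
def elo_update(rating_a, rating_b, score_a, k=32):
    """score_a: 1.0 if model A wins the comparison, 0.5 for a tie, 0.0 for a loss."""
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta

print(elo_update(1539, 1501, 1.0))  # winner's rating rises, loser's falls by the same delta
```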

Visual Performance Analysis

Charts: ELO Rating Comparison · Win/Loss/Tie Breakdown · Accuracy Across Datasets (nDCG@10) · Latency Distribution (ms)

Breakdown

How the models stack up

Metric              | OpenAI text-embedding-3-large | Cohere Embed Multilingual v3 | Description

Overall Performance
ELO Rating          | 1539       | 1501       | Overall ranking quality based on pairwise comparisons
Win Rate            | 55.7%      | 42.9%      | Percentage of comparisons won against other models

Pricing & Availability
Price per 1M tokens | $0.130     | $0.100     | Cost per million tokens processed
Release Date        | 2024-01-25 | 2024-02-07 | Model release date

Accuracy Metrics
Avg nDCG@10         | 0.811      | 0.781      | Normalized discounted cumulative gain at position 10

Performance Metrics
Avg Latency         | 32922 ms   | 24024 ms   | Average response time across all datasets
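
At the listed prices, the cost difference scales linearly with embedding volume. A quick worked example (the monthly token count is hypothetical; only the per-1M-token prices above are from this comparison):

```python
# Hypothetical monthly embedding volume; only the per-1M-token prices are real.
tokens_per_month = 250_000_000

openai_cost = tokens_per_month / 1_000_000 * 0.130   # $32.50 at $0.130 per 1M tokens
cohere_cost = tokens_per_month / 1_000_000 * 0.100   # $25.00 at $0.100 per 1M tokens
print(f"OpenAI: ${openai_cost:.2f}, Cohere: ${cohere_cost:.2f}")
```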

Dataset Performance

By field

Comprehensive comparison of accuracy metrics (nDCG, Recall) and latency percentiles for each benchmark dataset.
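
For reference, the sketch below shows textbook definitions of nDCG@k, Recall@k, and nearest-rank latency percentiles as used in the tables that follow; the benchmark's exact implementation (for example, how the ideal DCG is normalized) may differ.

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked results."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """nDCG@k: DCG of the observed ranking divided by the DCG of an ideal ordering."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

def recall_at_k(relevances, total_relevant, k):
    """Fraction of all relevant documents that appear in the top-k results."""
    return sum(1 for rel in relevances[:k] if rel > 0) / total_relevant

def percentile(samples_ms, p):
    """Nearest-rank percentile: the latency below which p% of requests completed."""
    ordered = sorted(samples_ms)
    rank = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[rank]

# Binary relevance judgments for 10 retrieved docs; 5 relevant docs exist in total.
ranked = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
print(round(ndcg_at_k(ranked, 10), 3))              # ranking quality of the top 10
print(recall_at_k(ranked, total_relevant=5, k=10))  # 4 of 5 relevant docs found -> 0.8

latencies = [2100, 2250, 2300, 2400, 2500, 2650, 2700, 2900, 3100, 3400]
print(percentile(latencies, 50), percentile(latencies, 90))  # P50 and P90 in ms
```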

PG

Metric | OpenAI text-embedding-3-large | Cohere Embed Multilingual v3 | Description

Latency Metrics
Mean   | 57366 ms | 37785 ms | Average response time
P50    | 56219 ms | 37029 ms | 50th percentile (median)
P90    | 65971 ms | 43453 ms | 90th percentile

business reports

Metric | OpenAI text-embedding-3-large | Cohere Embed Multilingual v3 | Description

Latency Metrics
Mean   | 8013 ms | 5204 ms | Average response time
P50    | 7853 ms | 5100 ms | 50th percentile (median)
P90    | 9215 ms | 5985 ms | 90th percentile

DBPedia

Metric    | OpenAI text-embedding-3-large | Cohere Embed Multilingual v3 | Description

Accuracy Metrics
nDCG@5    | 0.648 | 0.619 | Ranking quality at top 5 results
nDCG@10   | 0.641 | 0.591 | Ranking quality at top 10 results
Recall@5  | 0.255 | 0.222 | % of relevant docs in top 5
Recall@10 | 0.377 | 0.329 | % of relevant docs in top 10

Latency Metrics
Mean      | 42058 ms | 33570 ms | Average response time
P50       | 41217 ms | 32899 ms | 50th percentile (median)
P90       | 48367 ms | 38606 ms | 90th percentile

FiQa

Metric    | OpenAI text-embedding-3-large | Cohere Embed Multilingual v3 | Description

Accuracy Metrics
nDCG@5    | 0.730 | 0.637 | Ranking quality at top 5 results
nDCG@10   | 0.752 | 0.654 | Ranking quality at top 10 results
Recall@5  | 0.700 | 0.621 | % of relevant docs in top 5
Recall@10 | 0.781 | 0.692 | % of relevant docs in top 10

Latency Metrics
Mean      | 48635 ms | 37581 ms | Average response time
P50       | 47662 ms | 36829 ms | 50th percentile (median)
P90       | 55930 ms | 43218 ms | 90th percentile

SciFact

Metric    | OpenAI text-embedding-3-large | Cohere Embed Multilingual v3 | Description

Accuracy Metrics
nDCG@5    | 0.726 | 0.723 | Ranking quality at top 5 results
nDCG@10   | 0.761 | 0.732 | Ranking quality at top 10 results
Recall@5  | 0.768 | 0.808 | % of relevant docs in top 5
Recall@10 | 0.863 | 0.833 | % of relevant docs in top 10

Latency Metrics
Mean      | 55796 ms | 39542 ms | Average response time
P50       | 54680 ms | 38751 ms | 50th percentile (median)
P90       | 64165 ms | 45473 ms | 90th percentile

MSMARCO

Metric    | OpenAI text-embedding-3-large | Cohere Embed Multilingual v3 | Description

Accuracy Metrics
nDCG@5    | 1.000 | 0.996 | Ranking quality at top 5 results
nDCG@10   | 1.000 | 0.994 | Ranking quality at top 10 results
Recall@5  | 0.123 | 0.122 | % of relevant docs in top 5
Recall@10 | 0.224 | 0.218 | % of relevant docs in top 10

Latency Metrics
Mean      | 41860 ms | 32380 ms | Average response time
P50       | 41023 ms | 31732 ms | 50th percentile (median)
P90       | 48139 ms | 37237 ms | 90th percentile

NorQuAD

Metric | OpenAI text-embedding-3-large | Cohere Embed Multilingual v3 | Description

Latency Metrics
Mean   | 5613 ms | 3824 ms | Average response time
P50    | 5501 ms | 3748 ms | 50th percentile (median)
P90    | 6455 ms | 4398 ms | 90th percentile

ARCD

Metric    | OpenAI text-embedding-3-large | Cohere Embed Multilingual v3 | Description

Accuracy Metrics
nDCG@5    | 0.899 | 0.925 | Ranking quality at top 5 results
nDCG@10   | 0.899 | 0.933 | Ranking quality at top 10 results
Recall@5  | 0.940 | 0.940 | % of relevant docs in top 5
Recall@10 | 0.940 | 0.960 | % of relevant docs in top 10

Latency Metrics
Mean      | 4036 ms | 2305 ms | Average response time
P50       | 3955 ms | 2259 ms | 50th percentile (median)
P90       | 4641 ms | 2651 ms | 90th percentile
