OpenAI text-embedding-3-large vs BAAI/bge-m3

Detailed comparison between OpenAI text-embedding-3-large and BAAI/bge-m3. See which embedding best meets your accuracy and performance needs.

Model Comparison

OpenAI text-embedding-3-large takes the lead.

Both OpenAI text-embedding-3-large and BAAI/bge-m3 are powerful embedding models designed to improve retrieval quality in RAG applications. However, their performance characteristics differ in important ways.

Why OpenAI text-embedding-3-large:

  • OpenAI text-embedding-3-large has a 48-point higher ELO rating (1539 vs 1491)
  • OpenAI text-embedding-3-large delivers better accuracy (nDCG@10: 0.811 vs 0.753)
  • OpenAI text-embedding-3-large has a 14.8-percentage-point higher win rate (55.7% vs 40.9%)
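
If you want to sanity-check these results on your own corpus, you can embed the same documents with both models and compare retrieval directly. The sketch below is illustrative rather than this benchmark's exact pipeline: it assumes the official `openai` Python client for text-embedding-3-large, `sentence-transformers` for BAAI/bge-m3's dense vectors, and a simple cosine-similarity ranking.

```python
# Sketch: embed the same texts with both models and compare their rankings.
# Assumes the `openai` and `sentence-transformers` packages and an OPENAI_API_KEY.
import numpy as np
from openai import OpenAI
from sentence_transformers import SentenceTransformer

docs = ["The Eiffel Tower is in Paris.", "BGE-M3 is a multilingual embedding model."]
query = "Where is the Eiffel Tower?"

# OpenAI text-embedding-3-large (3072-dimensional by default)
client = OpenAI()
resp = client.embeddings.create(model="text-embedding-3-large", input=docs + [query])
openai_vecs = np.array([d.embedding for d in resp.data])

# BAAI/bge-m3 dense embeddings via sentence-transformers (1024-dimensional)
bge = SentenceTransformer("BAAI/bge-m3")
bge_vecs = bge.encode(docs + [query], normalize_embeddings=True)

def rank(vecs):
    """Rank docs by cosine similarity to the query (last row of vecs)."""
    d, q = vecs[:-1], vecs[-1]
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    q = q / np.linalg.norm(q)
    return np.argsort(-(d @ q))

print("OpenAI ranking:", rank(openai_vecs))
print("bge-m3 ranking:", rank(bge_vecs))
```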

Overview

Key metrics

| Metric             | OpenAI text-embedding-3-large | BAAI/bge-m3 | Description              |
|--------------------|-------------------------------|-------------|--------------------------|
| ELO Rating         | 1539                          | 1491        | Overall ranking quality  |
| Win Rate           | 55.7%                         | 40.9%       | Head-to-head performance |
| Accuracy (nDCG@10) | 0.811                         | 0.753       | Ranking quality metric   |
| Average Latency    | 11 ms                         | 29 ms       | Response time            |
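
The ELO ratings above come from pairwise comparisons between models. The exact rating scheme this benchmark uses isn't described on the page; the standard Elo update below, with an illustrative K-factor of 32, just captures the general idea that the winner of each comparison takes rating points from the loser.

```python
# Standard Elo update for one pairwise comparison (illustrative only; the
# benchmark's actual parameters and update schedule are not specified here).
def elo_update(rating_a: float, rating_b: float, score_a: float, k: float = 32.0):
    """score_a is 1.0 if A wins, 0.5 for a tie, 0.0 if A loses."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# Example: the higher-rated model wins one comparison and gains a few points.
print(elo_update(1539, 1491, 1.0))
```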

Visual Performance Analysis

This section presents four charts: ELO Rating Comparison, Win/Loss/Tie Breakdown, Accuracy Across Datasets (nDCG@10), and Latency Distribution (ms).

Breakdown

How the models stack up

| Metric                 | OpenAI text-embedding-3-large | BAAI/bge-m3 | Description                                            |
|------------------------|-------------------------------|-------------|--------------------------------------------------------|
| Overall Performance    |                               |             |                                                        |
| ELO Rating             | 1539                          | 1491        | Overall ranking quality based on pairwise comparisons  |
| Win Rate               | 55.7%                         | 40.9%       | Percentage of comparisons won against other models     |
| Pricing & Availability |                               |             |                                                        |
| Price per 1M tokens    | $0.130                        | $0.010      | Cost per million tokens processed                      |
| Dimensions             | 3072                          | 1024        | Vector embedding dimensions (lower is more efficient)  |
| Release Date           | 2024-01-25                    | 2024-01-27  | Model release date                                     |
| Accuracy Metrics       |                               |             |                                                        |
| Avg nDCG@10            | 0.811                         | 0.753       | Normalized discounted cumulative gain at position 10   |
| Performance Metrics    |                               |             |                                                        |
| Avg Latency            | 11 ms                         | 29 ms       | Average response time across all datasets              |

Dataset Performance

By field

Comprehensive comparison of accuracy metrics (nDCG, Recall) and latency percentiles for each benchmark dataset.
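
The nDCG@k and Recall@k figures below can be reproduced from a ranked result list plus a set of relevance judgments. A minimal sketch, assuming binary (0/1) relevance labels, which may be simpler than the graded judgments some of these datasets use:

```python
# Minimal nDCG@k and Recall@k with binary relevance judgments.
# Some benchmark datasets use graded relevance; this sketch assumes 0/1 labels.
import math

def ndcg_at_k(ranked_ids, relevant_ids, k=10):
    dcg = sum(1.0 / math.log2(i + 2)
              for i, doc in enumerate(ranked_ids[:k]) if doc in relevant_ids)
    ideal_hits = min(len(relevant_ids), k)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

def recall_at_k(ranked_ids, relevant_ids, k=10):
    hits = sum(1 for doc in ranked_ids[:k] if doc in relevant_ids)
    return hits / len(relevant_ids) if relevant_ids else 0.0

ranked = ["d3", "d7", "d1", "d9", "d2"]   # retrieval output, best first
relevant = {"d1", "d2", "d4"}             # ground-truth relevant documents
print(ndcg_at_k(ranked, relevant, k=5), recall_at_k(ranked, relevant, k=5))
```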

PG

| Metric          | OpenAI text-embedding-3-large | BAAI/bge-m3 | Description              |
|-----------------|-------------------------------|-------------|--------------------------|
| Latency Metrics |                               |             |                          |
| Mean            | 10 ms                         | 16 ms       | Average response time    |
| P50             | 10 ms                         | 16 ms       | 50th percentile (median) |
| P90             | 11 ms                         | 19 ms       | 90th percentile          |
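
The P50 and P90 figures in these tables are latency percentiles over repeated embedding requests. Given raw per-request timings, they can be computed as below; the sample values are hypothetical, not taken from this benchmark.

```python
# Compute mean, P50, and P90 latency from raw per-request timings (sample values are hypothetical).
import numpy as np

latencies_ms = np.array([9.8, 10.1, 10.4, 9.9, 10.0, 11.2, 10.3, 10.6, 9.7, 12.0])
print(f"mean: {latencies_ms.mean():.1f} ms")
print(f"P50:  {np.percentile(latencies_ms, 50):.1f} ms")
print(f"P90:  {np.percentile(latencies_ms, 90):.1f} ms")
```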

Business Reports

| Metric          | OpenAI text-embedding-3-large | BAAI/bge-m3 | Description              |
|-----------------|-------------------------------|-------------|--------------------------|
| Latency Metrics |                               |             |                          |
| Mean            | 10 ms                         | 34 ms       | Average response time    |
| P50             | 10 ms                         | 33 ms       | 50th percentile (median) |
| P90             | 12 ms                         | 39 ms       | 90th percentile          |

DBPedia

| Metric           | OpenAI text-embedding-3-large | BAAI/bge-m3 | Description                       |
|------------------|-------------------------------|-------------|-----------------------------------|
| Accuracy Metrics |                               |             |                                   |
| nDCG@5           | 0.648                         | 0.625       | Ranking quality at top 5 results  |
| nDCG@10          | 0.641                         | 0.603       | Ranking quality at top 10 results |
| Recall@5         | 0.255                         | 0.236       | % of relevant docs in top 5       |
| Recall@10        | 0.377                         | 0.341       | % of relevant docs in top 10      |
| Latency Metrics  |                               |             |                                   |
| Mean             | 8 ms                          | 17 ms       | Average response time             |
| P50              | 8 ms                          | 17 ms       | 50th percentile (median)          |
| P90              | 9 ms                          | 20 ms       | 90th percentile                   |

FiQa

| Metric           | OpenAI text-embedding-3-large | BAAI/bge-m3 | Description                       |
|------------------|-------------------------------|-------------|-----------------------------------|
| Accuracy Metrics |                               |             |                                   |
| nDCG@5           | 0.730                         | 0.597       | Ranking quality at top 5 results  |
| nDCG@10          | 0.752                         | 0.609       | Ranking quality at top 10 results |
| Recall@5         | 0.700                         | 0.607       | % of relevant docs in top 5       |
| Recall@10        | 0.781                         | 0.666       | % of relevant docs in top 10      |
| Latency Metrics  |                               |             |                                   |
| Mean             | 10 ms                         | 32 ms       | Average response time             |
| P50              | 9 ms                          | 31 ms       | 50th percentile (median)          |
| P90              | 11 ms                         | 37 ms       | 90th percentile                   |

SciFact

| Metric           | OpenAI text-embedding-3-large | BAAI/bge-m3 | Description                       |
|------------------|-------------------------------|-------------|-----------------------------------|
| Accuracy Metrics |                               |             |                                   |
| nDCG@5           | 0.726                         | 0.578       | Ranking quality at top 5 results  |
| nDCG@10          | 0.761                         | 0.617       | Ranking quality at top 10 results |
| Recall@5         | 0.768                         | 0.652       | % of relevant docs in top 5       |
| Recall@10        | 0.863                         | 0.763       | % of relevant docs in top 10      |
| Latency Metrics  |                               |             |                                   |
| Mean             | 11 ms                         | 31 ms       | Average response time             |
| P50              | 11 ms                         | 30 ms       | 50th percentile (median)          |
| P90              | 13 ms                         | 35 ms       | 90th percentile                   |

MSMARCO

| Metric           | OpenAI text-embedding-3-large | BAAI/bge-m3 | Description                       |
|------------------|-------------------------------|-------------|-----------------------------------|
| Accuracy Metrics |                               |             |                                   |
| nDCG@5           | 1.000                         | 0.997       | Ranking quality at top 5 results  |
| nDCG@10          | 1.000                         | 0.997       | Ranking quality at top 10 results |
| Recall@5         | 0.123                         | 0.122       | % of relevant docs in top 5       |
| Recall@10        | 0.224                         | 0.220       | % of relevant docs in top 10      |
| Latency Metrics  |                               |             |                                   |
| Mean             | 8 ms                          | 19 ms       | Average response time             |
| P50              | 8 ms                          | 18 ms       | 50th percentile (median)          |
| P90              | 9 ms                          | 22 ms       | 90th percentile                   |

NorQuAD

| Metric          | OpenAI text-embedding-3-large | BAAI/bge-m3 | Description              |
|-----------------|-------------------------------|-------------|--------------------------|
| Latency Metrics |                               |             |                          |
| Mean            | 14 ms                         | 41 ms       | Average response time    |
| P50             | 14 ms                         | 40 ms       | 50th percentile (median) |
| P90             | 16 ms                         | 47 ms       | 90th percentile          |

ARCD

| Metric           | OpenAI text-embedding-3-large | BAAI/bge-m3 | Description                       |
|------------------|-------------------------------|-------------|-----------------------------------|
| Accuracy Metrics |                               |             |                                   |
| nDCG@5           | 0.899                         | 0.941       | Ranking quality at top 5 results  |
| nDCG@10          | 0.899                         | 0.941       | Ranking quality at top 10 results |
| Recall@5         | 0.940                         | 0.960       | % of relevant docs in top 5       |
| Recall@10        | 0.940                         | 0.960       | % of relevant docs in top 10      |
| Latency Metrics  |                               |             |                                   |
| Mean             | 14 ms                         | 40 ms       | Average response time             |
| P50              | 14 ms                         | 40 ms       | 50th percentile (median)          |
| P90              | 16 ms                         | 47 ms       | 90th percentile                   |

Explore More

Compare more embeddings

See how all embedding models stack up. Compare OpenAI, Cohere, Jina AI, Voyage, and more. View comprehensive benchmarks, compare performance metrics, and find the perfect embedding for your RAG application.