OpenAI text-embedding-3-small vs Gemini text-embedding-004

A detailed comparison of OpenAI text-embedding-3-small and Gemini text-embedding-004. See which embedding model best meets your accuracy and performance needs.

Model Comparison

OpenAI text-embedding-3-small takes the lead.

Both OpenAI text-embedding-3-small and Gemini text-embedding-004 are powerful embedding models designed to improve retrieval quality in RAG applications. However, their performance characteristics differ in important ways.
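
For reference, here is a minimal sketch of how each model is typically called, assuming the official `openai` and `google-generativeai` Python SDKs with API keys set in the environment; the `text` variable is illustrative:

```python
# Minimal sketch: embedding the same text with both models.
# Assumes `pip install openai google-generativeai` and that
# OPENAI_API_KEY / GOOGLE_API_KEY are set in the environment.
import os

from openai import OpenAI
import google.generativeai as genai

text = "How do embedding models improve RAG retrieval?"

# OpenAI text-embedding-3-small: returns a 1536-dimensional vector.
openai_client = OpenAI()
resp = openai_client.embeddings.create(model="text-embedding-3-small", input=text)
openai_vec = resp.data[0].embedding

# Gemini text-embedding-004: returns a 768-dimensional vector.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
result = genai.embed_content(model="models/text-embedding-004", content=text)
gemini_vec = result["embedding"]

print(len(openai_vec), len(gemini_vec))  # 1536, 768
```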

Why OpenAI text-embedding-3-small:

  • OpenAI text-embedding-3-small has a 56-point higher ELO rating (1503 vs 1447; a standard ELO update is sketched after this list)
  • OpenAI text-embedding-3-small delivers better retrieval accuracy (nDCG@10: 0.762 vs 0.585)
  • OpenAI text-embedding-3-small wins 16.6 percentage points more of its head-to-head comparisons (44.6% vs 28.0%)
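
The page does not specify exactly how its ELO ratings are computed, but a standard ELO update from one pairwise comparison looks like the following sketch (the K-factor of 32 is an illustrative assumption):

```python
# Standard ELO update for one pairwise comparison. K=32 is an illustrative
# choice; the benchmark's actual parameters are not specified here.
def elo_update(r_winner: float, r_loser: float, k: float = 32.0) -> tuple[float, float]:
    # Expected score of the winner under the logistic ELO model.
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta

# Example: a 1503-rated model beating a 1447-rated model gains only a little,
# since it was already expected to win.
print(elo_update(1503, 1447))
```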

Overview

Key metrics

| Metric | Description | OpenAI text-embedding-3-small | Gemini text-embedding-004 |
| --- | --- | --- | --- |
| ELO Rating | Overall ranking quality | 1503 | 1447 |
| Win Rate | Head-to-head performance | 44.6% | 28.0% |
| Accuracy (nDCG@10) | Ranking quality metric | 0.762 | 0.585 |
| Average Latency | Response time | 10ms | 13ms |
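
Since nDCG@10 is the headline accuracy number here, a short self-contained sketch of the standard definition may help; the benchmark's exact implementation is not specified, so treat this as illustrative:

```python
import math

def dcg_at_k(relevances: list[float], k: int) -> float:
    # Discounted cumulative gain: relevance discounted by log2 of rank.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances: list[float], k: int = 10) -> float:
    # Normalize by the DCG of the ideal (descending-relevance) ordering.
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Example: a ranking that places the single relevant document at position 2.
print(ndcg_at_k([0, 1, 0, 0, 0, 0, 0, 0, 0, 0]))  # ≈ 0.631
```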

Visual Performance Analysis

Charts shown on this page:

  • ELO Rating Comparison
  • Win/Loss/Tie Breakdown
  • Accuracy Across Datasets (nDCG@10)
  • Latency Distribution (ms)

Breakdown

How the models stack up

| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
| --- | --- | --- | --- |
| Overall Performance | | | |
| ELO Rating | 1503 | 1447 | Overall ranking quality based on pairwise comparisons |
| Win Rate | 44.6% | 28.0% | Percentage of comparisons won against other models |
| Pricing & Availability | | | |
| Price per 1M tokens | $0.020 | $0.020 | Cost per million tokens processed |
| Dimensions | 1536 | 768 | Vector embedding dimensions (lower is more efficient) |
| Release Date | 2024-01-25 | 2024-05-14 | Model release date |
| Accuracy Metrics | | | |
| Avg nDCG@10 | 0.762 | 0.585 | Normalized discounted cumulative gain at position 10 |
| Performance Metrics | | | |
| Avg Latency | 10ms | 13ms | Average response time across all datasets |
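
The dimensions row matters for index size and search speed. OpenAI's text-embedding-3 models document a `dimensions` parameter that returns a truncated (Matryoshka-style) vector, so the 1536-dim model can be requested at 768 dims to match text-embedding-004's footprint; truncation typically costs some accuracy, so benchmark before committing. A sketch:

```python
# Sketch: requesting a shorter vector from text-embedding-3-small via the
# documented `dimensions` parameter, matching text-embedding-004's 768 dims.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY in the environment
resp = client.embeddings.create(
    model="text-embedding-3-small",
    input="matryoshka embeddings can be truncated",
    dimensions=768,
)
print(len(resp.data[0].embedding))  # 768
```

Since both models list the same $0.020 per 1M tokens, cost differences come mainly from storage and vector-search compute, where 768 dimensions is roughly half the footprint of 1536.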

Dataset Performance

By field

Comprehensive comparison of accuracy metrics (nDCG, Recall) and latency percentiles for each benchmark dataset.
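
The latency figures below (mean, P50, P90) can be reproduced from raw per-request timings along these lines; this is a sketch assuming a list of millisecond samples, and real reports may use interpolated percentiles instead of nearest-rank:

```python
# Sketch: mean / P50 / P90 from raw per-request latency samples (ms).
import statistics

def latency_summary(samples_ms: list[float]) -> dict[str, float]:
    ordered = sorted(samples_ms)

    def pct(p: float) -> float:
        # Nearest-rank percentile; interpolating variants differ slightly.
        idx = min(len(ordered) - 1, max(0, round(p / 100 * len(ordered)) - 1))
        return ordered[idx]

    return {
        "mean": statistics.fmean(ordered),
        "p50": pct(50),
        "p90": pct(90),
    }

print(latency_summary([7, 8, 7, 9, 8, 7, 12, 8, 7, 10]))
```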

PG

| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
| --- | --- | --- | --- |
| Latency Metrics | | | |
| Mean | 9ms | 13ms | Average response time |
| P50 | 9ms | 13ms | 50th percentile (median) |
| P90 | 11ms | 15ms | 90th percentile |

Business Reports

| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
| --- | --- | --- | --- |
| Latency Metrics | | | |
| Mean | 7ms | 13ms | Average response time |
| P50 | 7ms | 12ms | 50th percentile (median) |
| P90 | 8ms | 14ms | 90th percentile |

DBPedia

| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
| --- | --- | --- | --- |
| Accuracy Metrics | | | |
| nDCG@5 | 0.605 | 0.536 | Ranking quality at top 5 results |
| nDCG@10 | 0.604 | 0.517 | Ranking quality at top 10 results |
| Recall@5 | 0.230 | 0.200 | % of relevant docs in top 5 |
| Recall@10 | 0.365 | 0.304 | % of relevant docs in top 10 |
| Latency Metrics | | | |
| Mean | 7ms | 12ms | Average response time |
| P50 | 7ms | 11ms | 50th percentile (median) |
| P90 | 8ms | 13ms | 90th percentile |
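
Recall@k here is the fraction of all relevant documents for a query that appear in the top k results, which is why a dataset with many relevant documents per query can pair near-perfect nDCG with low recall (see MSMARCO below). A minimal sketch; `retrieved` and `relevant` are illustrative names:

```python
# Sketch: Recall@k — the share of all relevant documents retrieved in the
# top k. `retrieved` is a ranked list of doc IDs; `relevant` is the full
# relevant set for the query.
def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    if not relevant:
        return 0.0
    hits = sum(1 for doc_id in retrieved[:k] if doc_id in relevant)
    return hits / len(relevant)

# With 20 relevant docs, even a perfect top-5 caps Recall@5 at 0.25.
print(recall_at_k(["d1", "d2", "d3", "d4", "d5"],
                  {f"d{i}" for i in range(1, 21)}, k=5))
```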

FiQa

| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
| --- | --- | --- | --- |
| Accuracy Metrics | | | |
| nDCG@5 | 0.635 | 0.613 | Ranking quality at top 5 results |
| nDCG@10 | 0.647 | 0.649 | Ranking quality at top 10 results |
| Recall@5 | 0.623 | 0.645 | % of relevant docs in top 5 |
| Recall@10 | 0.681 | 0.748 | % of relevant docs in top 10 |
| Latency Metrics | | | |
| Mean | 8ms | 12ms | Average response time |
| P50 | 8ms | 12ms | 50th percentile (median) |
| P90 | 9ms | 14ms | 90th percentile |

SciFact

| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
| --- | --- | --- | --- |
| Accuracy Metrics | | | |
| nDCG@5 | 0.682 | 0.722 | Ranking quality at top 5 results |
| nDCG@10 | 0.707 | 0.745 | Ranking quality at top 10 results |
| Recall@5 | 0.778 | 0.797 | % of relevant docs in top 5 |
| Recall@10 | 0.843 | 0.860 | % of relevant docs in top 10 |
| Latency Metrics | | | |
| Mean | 11ms | 13ms | Average response time |
| P50 | 11ms | 13ms | 50th percentile (median) |
| P90 | 13ms | 15ms | 90th percentile |

MSMARCO

| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
| --- | --- | --- | --- |
| Accuracy Metrics | | | |
| nDCG@5 | 0.997 | 0.979 | Ranking quality at top 5 results |
| nDCG@10 | 0.990 | 0.977 | Ranking quality at top 10 results |
| Recall@5 | 0.122 | 0.118 | % of relevant docs in top 5 |
| Recall@10 | 0.213 | 0.209 | % of relevant docs in top 10 |
| Latency Metrics | | | |
| Mean | 7ms | 12ms | Average response time |
| P50 | 7ms | 12ms | 50th percentile (median) |
| P90 | 8ms | 14ms | 90th percentile |

NorQuAD

| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
| --- | --- | --- | --- |
| Latency Metrics | | | |
| Mean | 16ms | 13ms | Average response time |
| P50 | 15ms | 13ms | 50th percentile (median) |
| P90 | 18ms | 15ms | 90th percentile |

ARCD

| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
| --- | --- | --- | --- |
| Accuracy Metrics | | | |
| nDCG@5 | 0.855 | 0.030 | Ranking quality at top 5 results |
| nDCG@10 | 0.862 | 0.036 | Ranking quality at top 10 results |
| Recall@5 | 0.900 | 0.040 | % of relevant docs in top 5 |
| Recall@10 | 0.920 | 0.060 | % of relevant docs in top 10 |
| Latency Metrics | | | |
| Mean | 11ms | 13ms | Average response time |
| P50 | 10ms | 13ms | 50th percentile (median) |
| P90 | 12ms | 15ms | 90th percentile |

Explore More

Compare more embeddings

See how all embedding models stack up: compare OpenAI, Cohere, Jina AI, Voyage, and more. View comprehensive benchmarks, compare performance metrics, and find the best embedding model for your RAG application.