OpenAI text-embedding-3-small vs Gemini text-embedding-004
Detailed comparison between OpenAI text-embedding-3-small and Gemini text-embedding-004. See which embedding best meets your accuracy and performance needs.
Model Comparison
OpenAI text-embedding-3-small takes the lead.
Both OpenAI text-embedding-3-small and Gemini text-embedding-004 are powerful embedding models designed to improve retrieval quality in RAG applications. However, their performance characteristics differ in important ways.
Why OpenAI text-embedding-3-small:
- OpenAI text-embedding-3-small has a 56-point higher ELO rating (1503 vs 1447)
- OpenAI text-embedding-3-small delivers better accuracy (nDCG@10: 0.762 vs 0.585)
- OpenAI text-embedding-3-small is 13142ms faster on average (29958ms vs 43100ms)
- OpenAI text-embedding-3-small has a 16.6 percentage point higher win rate (44.6% vs 28.0%)
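For context on what is being compared, the sketch below embeds the same passage with each model through its official Python SDK. It assumes the `openai` and `google-generativeai` packages are installed and that API keys are available in the OPENAI_API_KEY and GOOGLE_API_KEY environment variables; it is illustrative only and is not the harness that produced the numbers on this page.

```python
# Minimal sketch: embed one passage with both models (assumed SDK setup).
import os

from openai import OpenAI
import google.generativeai as genai

text = "Embeddings turn text into vectors for retrieval."

# OpenAI text-embedding-3-small (1536-dimensional vectors by default)
openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
openai_vec = openai_client.embeddings.create(
    model="text-embedding-3-small",
    input=text,
).data[0].embedding

# Gemini text-embedding-004 (768-dimensional vectors)
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini_vec = genai.embed_content(
    model="models/text-embedding-004",
    content=text,
)["embedding"]

print(len(openai_vec), len(gemini_vec))
```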
Overview
Key metrics
| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
|---|---|---|---|
| ELO Rating | 1503 | 1447 | Overall ranking quality |
| Win Rate | 44.6% | 28.0% | Head-to-head performance |
| Accuracy (nDCG@10) | 0.762 | 0.585 | Ranking quality metric |
| Average Latency | 29958ms | 43100ms | Response time |
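The ELO rating above is built from pairwise comparisons between models. As an illustration of how such a rating behaves, here is a standard Elo update for a single head-to-head result; the leaderboard's actual K-factor and scoring rules are not documented here, so treat the constants as assumptions.

```python
# Standard Elo update for one pairwise comparison (illustrative constants).
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, score_a: float, k: float = 32.0):
    """score_a: 1.0 if A won the comparison, 0.5 for a tie, 0.0 if A lost."""
    e_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - e_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# Example: the 1503-rated model wins one comparison against the 1447-rated model.
print(elo_update(1503, 1447, score_a=1.0))  # A gains roughly 13 points at K=32
```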
Visual Performance Analysis
Charts (not reproduced here): ELO Rating Comparison, Win/Loss/Tie Breakdown, Accuracy Across Datasets (nDCG@10), Latency Distribution (ms).
Breakdown
How the models stack up
| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
|---|---|---|---|
| Overall Performance | |||
| ELO Rating | 1503 | 1447 | Overall ranking quality based on pairwise comparisons |
| Win Rate | 44.6% | 28.0% | Percentage of comparisons won against other models |
| Pricing & Availability | |||
| Price per 1M tokens | $0.020 | $0.020 | Cost per million tokens processed |
| Release Date | 2024-01-25 | 2024-05-14 | Model release date |
| Accuracy Metrics | |||
| Avg nDCG@10 | 0.762 | 0.585 | Normalized discounted cumulative gain at position 10 |
| Performance Metrics | |||
| Avg Latency | 29958ms | 43100ms | Average response time across all datasets |
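nDCG@10, the headline accuracy metric on this page, rewards rankings that place relevant documents near the top of the result list. The sketch below shows the standard formula with binary relevance, normalizing against an ideal reordering of the retrieved list; benchmark suites typically use their own evaluation tooling and graded judgments, so this is illustrative rather than the exact scoring code behind the table.

```python
import math

def dcg_at_k(relevances: list[float], k: int) -> float:
    """Discounted cumulative gain over the top-k results, given in ranked order."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances: list[float], k: int) -> float:
    """DCG normalized by the DCG of the best possible ordering (0.0 to 1.0)."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Binary relevance of the top-10 retrieved documents, in ranked order.
print(ndcg_at_k([1, 0, 1, 1, 0, 0, 1, 0, 0, 0], k=10))
```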
Dataset Performance
By field
Comprehensive comparison of accuracy metrics (nDCG, Recall) and latency percentiles for each benchmark dataset.
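A rough sketch of how Recall@k and the latency summaries (mean, P50, P90) in these tables can be computed from raw per-query results; the function names and inputs are illustrative, not the benchmark's actual code.

```python
import math
import statistics

def recall_at_k(retrieved_ids: list[str], relevant_ids: set[str], k: int) -> float:
    """Fraction of all relevant documents that appear in the top-k results."""
    if not relevant_ids:
        return 0.0
    hits = sum(1 for doc_id in retrieved_ids[:k] if doc_id in relevant_ids)
    return hits / len(relevant_ids)

def latency_summary(latencies_ms: list[float]) -> dict[str, float]:
    """Mean, P50 (median), and P90 latency in milliseconds (nearest-rank P90)."""
    ordered = sorted(latencies_ms)
    p90_index = max(0, math.ceil(0.9 * len(ordered)) - 1)
    return {
        "mean": statistics.fmean(ordered),
        "p50": statistics.median(ordered),
        "p90": ordered[p90_index],
    }

# Example with a handful of per-query latencies in milliseconds.
print(latency_summary([5400, 5600, 5500, 6500, 5300, 7400]))
```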
PG
| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
|---|---|---|---|
| Latency Metrics | |||
| Mean | 55877ms | 76115ms | Average response time |
| P50 | 54759ms | 74593ms | 50th percentile (median) |
| P90 | 64259ms | 87532ms | 90th percentile |
business reports
| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
|---|---|---|---|
| Latency Metrics | |||
| Mean | 5645ms | 9184ms | Average response time |
| P50 | 5532ms | 9000ms | 50th percentile (median) |
| P90 | 6492ms | 10562ms | 90th percentile |
DBPedia
| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
|---|---|---|---|
| Accuracy Metrics | |||
| nDCG@5 | 0.605 | 0.536 | Ranking quality at top 5 results |
| nDCG@10 | 0.604 | 0.517 | Ranking quality at top 10 results |
| Recall@5 | 0.230 | 0.200 | % of relevant docs in top 5 |
| Recall@10 | 0.365 | 0.304 | % of relevant docs in top 10 |
| Latency Metrics | |||
| Mean | 35365ms | 59127ms | Average response time |
| P50 | 34658ms | 57944ms | 50th percentile (median) |
| P90 | 40670ms | 67996ms | 90th percentile |
FiQa
| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
|---|---|---|---|
| Accuracy Metrics | |||
| nDCG@5 | 0.635 | 0.613 | Ranking quality at top 5 results |
| nDCG@10 | 0.647 | 0.649 | Ranking quality at top 10 results |
| Recall@5 | 0.623 | 0.645 | % of relevant docs in top 5 |
| Recall@10 | 0.681 | 0.748 | % of relevant docs in top 10 |
| Latency Metrics | |||
| Mean | 41338ms | 62373ms | Average response time |
| P50 | 40511ms | 61126ms | 50th percentile (median) |
| P90 | 47539ms | 71729ms | 90th percentile |
SciFact
| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
|---|---|---|---|
| Accuracy Metrics | |||
| nDCG@5 | 0.682 | 0.722 | Ranking quality at top 5 results |
| nDCG@10 | 0.707 | 0.745 | Ranking quality at top 10 results |
| Recall@5 | 0.778 | 0.797 | % of relevant docs in top 5 |
| Recall@10 | 0.843 | 0.860 | % of relevant docs in top 10 |
| Latency Metrics | |||
| Mean | 55544ms | 65774ms | Average response time |
| P50 | 54433ms | 64459ms | 50th percentile (median) |
| P90 | 63876ms | 75640ms | 90th percentile |
MSMARCO
| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
|---|---|---|---|
| Accuracy Metrics | |||
| nDCG@5 | 0.997 | 0.979 | Ranking quality at top 5 results |
| nDCG@10 | 0.990 | 0.977 | Ranking quality at top 10 results |
| Recall@5 | 0.122 | 0.118 | % of relevant docs in top 5 |
| Recall@10 | 0.213 | 0.209 | % of relevant docs in top 10 |
| Latency Metrics | |||
| Mean | 35961ms | 62416ms | Average response time |
| P50 | 35242ms | 61168ms | 50th percentile (median) |
| P90 | 41355ms | 71778ms | 90th percentile |
NorQuAD
| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
|---|---|---|---|
| Latency Metrics | |||
| Mean | 6467ms | 5693ms | Average response time |
| P50 | 6338ms | 5579ms | 50th percentile (median) |
| P90 | 7437ms | 6547ms | 90th percentile |
ARCD
| Metric | OpenAI text-embedding-3-small | Gemini text-embedding-004 | Description |
|---|---|---|---|
| Accuracy Metrics | |||
| nDCG@5 | 0.855 | 0.030 | Ranking quality at top 5 results |
| nDCG@10 | 0.862 | 0.036 | Ranking quality at top 10 results |
| Recall@5 | 0.900 | 0.040 | % of relevant docs in top 5 |
| Recall@10 | 0.920 | 0.060 | % of relevant docs in top 10 |
| Latency Metrics | |||
| Mean | 3464ms | 4118ms | Average response time |
| P50 | 3395ms | 4036ms | 50th percentile (median) |
| P90 | 3984ms | 4736ms | 90th percentile |
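Whichever model fits your datasets, the retrieval step it feeds is the same: embed the query, score it against precomputed document vectors, and keep the top k. A minimal cosine-similarity sketch, assuming NumPy and vectors produced by one of the embedding calls shown earlier (vectors from different models live in different spaces and must not be mixed in one index):

```python
import numpy as np

def top_k_by_cosine(query_vec: np.ndarray, doc_vectors: np.ndarray, k: int = 10) -> np.ndarray:
    """Indices of the k documents whose embeddings are most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
    scores = d @ q                    # cosine similarity per document
    return np.argsort(-scores)[:k]    # highest-scoring documents first

# doc_vectors: shape (num_docs, dim); query_vec: shape (dim,).
```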
Explore More
Compare more embeddings
See how all embedding models stack up. Compare OpenAI, Cohere, Jina AI, Voyage, and more. View comprehensive benchmarks, compare performance metrics, and find the perfect embedding for your RAG application.