Gemini Embedding 2 vs OpenAI text-embedding-3-small

Detailed comparison between Gemini Embedding 2 and OpenAI text-embedding-3-small. See which embedding model best meets your accuracy and performance needs. If you want to compare these models on your own data, try Agentset.

Model Comparison

Gemini Embedding 2 takes the lead.

Both Gemini Embedding 2 and OpenAI text-embedding-3-small are powerful embedding models designed to improve retrieval quality in RAG applications. However, their performance characteristics differ in important ways.
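
Both models map text to fixed-length vectors, and retrieval then ranks documents by vector similarity to the query. As a minimal sketch of that ranking step (the 3-dimensional vectors below are toy stand-ins for real embeddings, which have 1,536 or 3,072 dimensions here):

// Cosine similarity: dot product normalized by the vectors' magnitudes.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank documents by similarity to the query embedding (toy vectors).
const queryVec = [0.2, 0.8, 0.1];
const docVecs = [
  { id: "doc1", vec: [0.1, 0.9, 0.2] },
  { id: "doc2", vec: [0.9, 0.1, 0.3] },
];
docVecs
  .map((d) => ({ id: d.id, score: cosineSimilarity(queryVec, d.vec) }))
  .sort((a, b) => b.score - a.score)
  .forEach((d) => console.log(d.id, d.score.toFixed(3)));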

Why Gemini Embedding 2:

  • Gemini Embedding 2 has a 125-point higher ELO rating (1605 vs 1480); the update rule behind ELO is sketched below
  • Gemini Embedding 2 has a 15.6-percentage-point higher win rate (59.5% vs 43.9%)

Trade-offs to consider:

  • OpenAI text-embedding-3-small delivers better average accuracy (nDCG@10: 0.689 vs 0.628)
  • OpenAI text-embedding-3-small is roughly 420ms faster on average (15ms vs 435ms)
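
The ELO ratings cited above come from pairwise head-to-head comparisons between models. As an illustration of how such a rating behaves, here is the standard ELO update rule; the K-factor of 32 is a common default and an assumption here, not the parameter behind these specific ratings.

// Expected score for A given both ratings; 400 is the standard ELO scale.
function expectedScore(ratingA: number, ratingB: number): number {
  return 1 / (1 + Math.pow(10, (ratingB - ratingA) / 400));
}

// result: 1 if A wins, 0.5 for a tie, 0 if A loses.
// K controls how fast ratings move; 32 is a common default (assumed).
const K = 32;
function updateElo(ratingA: number, ratingB: number, result: number): [number, number] {
  const expectedA = expectedScore(ratingA, ratingB);
  return [ratingA + K * (result - expectedA), ratingB - K * (result - expectedA)];
}

// A 1605-rated model beating a 1480-rated one gains only a few points,
// because the win was already expected (~67% likely).
console.log(updateElo(1605, 1480, 1)); // ≈ [1615.5, 1469.5]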

Overview

Key metrics

| Metric | Description | Gemini Embedding 2 | OpenAI text-embedding-3-small |
|---|---|---|---|
| ELO Rating | Overall ranking quality | 1605 | 1480 |
| Win Rate | Head-to-head performance | 59.5% | 43.9% |
| Accuracy (nDCG@10) | Ranking quality metric | 0.628 | 0.689 |
| Average Latency | Response time | 435ms | 15ms |
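
The headline accuracy number, nDCG@10, rewards placing relevant documents near the top of the first 10 results and is normalized so a perfect ordering scores 1.0. A simplified sketch of the computation, assuming binary relevance labels and that the ideal ranking reorders the same retrieved list:

// DCG@k: each relevant hit is discounted by log2(rank + 1).
function dcgAtK(relevances: number[], k: number): number {
  return relevances
    .slice(0, k)
    .reduce((sum, rel, i) => sum + rel / Math.log2(i + 2), 0);
}

// nDCG@k: DCG normalized by the DCG of an ideal (best-first) ordering.
function ndcgAtK(relevances: number[], k: number): number {
  const ideal = [...relevances].sort((a, b) => b - a);
  const idcg = dcgAtK(ideal, k);
  return idcg === 0 ? 0 : dcgAtK(relevances, k) / idcg;
}

// Example: relevant docs returned at ranks 1, 3, and 5 of 10.
console.log(ndcgAtK([1, 0, 1, 0, 1, 0, 0, 0, 0, 0], 10)); // ≈ 0.885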

Embedding Models Are Just One Piece of RAG

Agentset gives you a managed RAG pipeline with the top-ranked models and best practices baked in. No infrastructure to maintain, no embeddings to manage.

Trusted by teams building production RAG applications: 5M+ documents · 1,500+ teams · 99.9% uptime

Visual Performance Analysis

[Charts: ELO Rating Comparison · Win/Loss/Tie Breakdown · Accuracy Across Datasets (nDCG@10) · Latency Distribution (ms)]

Breakdown

How the models stack up

| Metric | Gemini Embedding 2 | OpenAI text-embedding-3-small | Description |
|---|---|---|---|
| Overall Performance | | | |
| ELO Rating | 1605 | 1480 | Overall ranking quality based on pairwise comparisons |
| Win Rate | 59.5% | 43.9% | Percentage of comparisons won against other models |
| Pricing & Availability | | | |
| Price per 1M tokens | $0.000 | $0.020 | Cost per million tokens processed |
| Dimensions | 3072 | 1536 | Vector embedding dimensions (lower is more efficient) |
| Release Date | 2026-03-10 | 2024-01-25 | Model release date |
| Accuracy Metrics | | | |
| Avg nDCG@10 | 0.628 | 0.689 | Normalized discounted cumulative gain at position 10 |
| Performance Metrics | | | |
| Avg Latency | 435ms | 15ms | Average response time across all datasets |
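
To make the pricing and dimensions rows concrete, here is a back-of-the-envelope calculation; the corpus size and average chunk length are assumptions for illustration, not figures from this benchmark.

// Illustrative cost and storage math (corpus size is assumed).
const chunks = 1_000_000;                    // 1M indexed chunks (assumption)
const tokensPerChunk = 500;                  // average chunk length (assumption)
const totalTokens = chunks * tokensPerChunk; // 500M tokens

// text-embedding-3-small at $0.020 per 1M tokens:
const embedCostUsd = (totalTokens / 1_000_000) * 0.02; // $10

// float32 vector storage (4 bytes per dimension):
const gb3072 = (chunks * 3072 * 4) / 1024 ** 3; // ≈ 11.4 GB (Gemini Embedding 2)
const gb1536 = (chunks * 1536 * 4) / 1024 ** 3; // ≈ 5.7 GB  (text-embedding-3-small)
console.log({ embedCostUsd, gb3072, gb1536 });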

Build RAG in Minutes, Not Months

Agentset gives you a complete RAG API with top-ranked embedding models and smart retrieval built in. Upload your data, call the API, and get accurate results from day one.

import { Agentset } from "agentset";

// Initialize the client and select the namespace holding your documents
const agentset = new Agentset();
const ns = agentset.namespace("ns_1234");

// Run a semantic search over everything ingested into the namespace
const results = await ns.search(
  "What is multi-head attention?"
);

for (const result of results) {
  console.log(result.text);
}

Dataset Performance

By field

Comprehensive comparison of accuracy metrics (nDCG, Recall) and latency percentiles for each benchmark dataset.
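
For reference, the Recall@k and latency-percentile figures below can be computed as follows; this is a generic sketch of the standard definitions, not the exact harness behind these tables.

// Recall@k: fraction of all relevant docs that appear in the top k results.
function recallAtK(retrievedIds: string[], relevantIds: Set<string>, k: number): number {
  if (relevantIds.size === 0) return 0;
  const hits = retrievedIds.slice(0, k).filter((id) => relevantIds.has(id)).length;
  return hits / relevantIds.size;
}

// Nearest-rank percentile: P50 is the median; P90 is the latency that
// 90% of requests come in under.
function percentile(latenciesMs: number[], p: number): number {
  const sorted = [...latenciesMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

console.log(percentile([430, 450, 410, 600, 455], 90)); // 600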

FiQa

| Metric | Gemini Embedding 2 | OpenAI text-embedding-3-small | Description |
|---|---|---|---|
| nDCG@5 | 0.843 | 0.801 | Ranking quality at top 5 results |
| nDCG@10 | 0.835 | 0.814 | Ranking quality at top 10 results |
| Recall@5 | 0.763 | 0.624 | % of relevant docs in top 5 |
| Recall@10 | 0.816 | 0.682 | % of relevant docs in top 10 |
| Mean latency | 466ms | 16ms | Average response time |
| P50 latency | 454ms | 16ms | 50th percentile (median) |
| P90 latency | 605ms | 16ms | 90th percentile |

MSMARCO

| Metric | Gemini Embedding 2 | OpenAI text-embedding-3-small | Description |
|---|---|---|---|
| nDCG@5 | 0.956 | 0.959 | Ranking quality at top 5 results |
| nDCG@10 | 0.939 | 0.946 | Ranking quality at top 10 results |
| Recall@5 | 0.122 | 0.122 | % of relevant docs in top 5 |
| Recall@10 | 0.221 | 0.212 | % of relevant docs in top 10 |
| Mean latency | 441ms | 20ms | Average response time |
| P50 latency | 446ms | 20ms | 50th percentile (median) |
| P90 latency | 584ms | 20ms | 90th percentile |

SciFact

| Metric | Gemini Embedding 2 | OpenAI text-embedding-3-small | Description |
|---|---|---|---|
| nDCG@5 | 0.871 | 0.663 | Ranking quality at top 5 results |
| nDCG@10 | 0.871 | 0.684 | Ranking quality at top 10 results |
| Recall@5 | 0.959 | 0.774 | % of relevant docs in top 5 |
| Recall@10 | 0.959 | 0.840 | % of relevant docs in top 10 |
| Mean latency | 404ms | 17ms | Average response time |
| P50 latency | 360ms | 17ms | 50th percentile (median) |
| P90 latency | 537ms | 17ms | 90th percentile |

DBPedia

| Metric | Gemini Embedding 2 | OpenAI text-embedding-3-small | Description |
|---|---|---|---|
| nDCG@5 | 0.788 | 0.858 | Ranking quality at top 5 results |
| nDCG@10 | 0.792 | 0.807 | Ranking quality at top 10 results |
| Recall@5 | 0.061 | 0.062 | % of relevant docs in top 5 |
| Recall@10 | 0.120 | 0.123 | % of relevant docs in top 10 |
| Mean latency | 436ms | 9ms | Average response time |
| P50 latency | 432ms | 9ms | 50th percentile (median) |
| P90 latency | 592ms | 9ms | 90th percentile |

Business Reports

| Metric | Gemini Embedding 2 | OpenAI text-embedding-3-small | Description |
|---|---|---|---|
| nDCG@5 | 0.091 | 0.000 | Ranking quality at top 5 results |
| nDCG@10 | 0.084 | 0.000 | Ranking quality at top 10 results |
| Recall@5 | 0.012 | 0.000 | % of relevant docs in top 5 |
| Recall@10 | 0.020 | 0.000 | % of relevant docs in top 10 |
| Mean latency | 439ms | 16ms | Average response time |
| P50 latency | 431ms | 16ms | 50th percentile (median) |
| P90 latency | 603ms | 16ms | 90th percentile |

ARCD

| Metric | Gemini Embedding 2 | OpenAI text-embedding-3-small | Description |
|---|---|---|---|
| nDCG@5 | 0.868 | 0.786 | Ranking quality at top 5 results |
| nDCG@10 | 0.875 | 0.793 | Ranking quality at top 10 results |
| Recall@5 | 0.940 | 0.900 | % of relevant docs in top 5 |
| Recall@10 | 0.960 | 0.920 | % of relevant docs in top 10 |
| Mean latency | 410ms | 15ms | Average response time |
| P50 latency | 359ms | 15ms | 50th percentile (median) |
| P90 latency | 586ms | 15ms | 90th percentile |

Explore More

Compare more embeddings

See how all embedding models stack up. Compare OpenAI, Cohere, Jina AI, Voyage, and more. View comprehensive benchmarks, compare performance metrics, and find the perfect embedding for your RAG application.