BAAI/bge-m3 vs Gemini text-embedding-004

Detailed comparison between BAAI/bge-m3 and Gemini text-embedding-004. See which embedding model best meets your accuracy and performance needs. If you want to compare these models on your own data, try Agentset.

Model Comparison

BAAI/bge-m3 takes the lead.

Both BAAI/bge-m3 and Gemini text-embedding-004 are powerful embedding models designed to improve retrieval quality in RAG applications. However, their performance characteristics differ in important ways.

Why BAAI/bge-m3:

  • BAAI/bge-m3 has a 114-point higher ELO rating (1480 vs 1366)
  • BAAI/bge-m3 delivers better accuracy (nDCG@10: 0.674 vs 0.538; see the sketch below)
  • BAAI/bge-m3 has a 15.9-percentage-point higher win rate (44.3% vs 28.4%)
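
If nDCG@10 and Recall@10 are new to you, here is a minimal TypeScript sketch of both metrics using binary relevance labels. The ranked list and relevance set below are made-up illustrative data, not output from this benchmark.

// DCG@k: relevant hits earn gain discounted by log2 of their position.
function dcgAtK(ranking: string[], relevant: Set<string>, k: number): number {
  return ranking.slice(0, k).reduce((sum, id, i) => {
    const gain = relevant.has(id) ? 1 : 0; // binary relevance
    return sum + gain / Math.log2(i + 2);  // i + 2 because positions are 1-indexed
  }, 0);
}

// nDCG@k: DCG normalized by the ideal ranking (all relevant docs first).
function ndcgAtK(ranking: string[], relevant: Set<string>, k: number): number {
  let idcg = 0;
  for (let i = 0; i < Math.min(relevant.size, k); i++) idcg += 1 / Math.log2(i + 2);
  return idcg === 0 ? 0 : dcgAtK(ranking, relevant, k) / idcg;
}

// Recall@k: share of all relevant docs that appear in the top k.
function recallAtK(ranking: string[], relevant: Set<string>, k: number): number {
  const hits = ranking.slice(0, k).filter((id) => relevant.has(id)).length;
  return relevant.size === 0 ? 0 : hits / relevant.size;
}

const relevant = new Set(["d1", "d4"]);          // illustrative ground truth
const ranking = ["d1", "d7", "d4", "d9", "d2"];  // illustrative retriever output
console.log(ndcgAtK(ranking, relevant, 10).toFixed(3)); // 0.920: good but not perfect
console.log(recallAtK(ranking, relevant, 10));          // 1: both relevant docs retrieved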

Overview

Key metrics

| Metric             | BAAI/bge-m3 | Gemini text-embedding-004 | Description              |
| ------------------ | ----------- | ------------------------- | ------------------------ |
| ELO Rating         | 1480        | 1366                      | Overall ranking quality  |
| Win Rate           | 44.3%       | 28.4%                     | Head-to-head performance |
| Accuracy (nDCG@10) | 0.674       | 0.538                     | Ranking quality metric   |
| Average Latency    | 34ms        | 16ms                      | Response time            |

Embedding Models Are Just One Piece of RAG

Agentset gives you a managed RAG pipeline with the top-ranked models and best practices baked in. No infrastructure to maintain, no embeddings to manage.

Trusted by teams building production RAG applications: 5M+ documents · 1,500+ teams · 99.9% uptime

Visual Performance Analysis

[Charts: ELO Rating Comparison · Win/Loss/Tie Breakdown · Accuracy Across Datasets (nDCG@10) · Latency Distribution (ms)]

Breakdown

How the models stack up

| Metric                  | BAAI/bge-m3 | Gemini text-embedding-004 | Description                                            |
| ----------------------- | ----------- | ------------------------- | ------------------------------------------------------ |
| Overall Performance     |             |                           |                                                        |
| ELO Rating              | 1480        | 1366                      | Overall ranking quality based on pairwise comparisons  |
| Win Rate                | 44.3%       | 28.4%                     | Percentage of comparisons won against other models     |
| Pricing & Availability  |             |                           |                                                        |
| Price per 1M tokens     | $0.010      | $0.020                    | Cost per million tokens processed                      |
| Dimensions              | 1024        | 768                       | Vector embedding dimensions (lower is more efficient)  |
| Release Date            | 2024-01-27  | 2024-05-14                | Model release date                                     |
| Accuracy Metrics        |             |                           |                                                        |
| Avg nDCG@10             | 0.674       | 0.538                     | Normalized discounted cumulative gain at position 10   |
| Performance Metrics     |             |                           |                                                        |
| Avg Latency             | 34ms        | 16ms                      | Average response time across all datasets              |
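
The ELO rating above comes from pairwise comparisons: when two models are evaluated on the same queries, the better-performing one takes rating points from the other. Below is a minimal sketch of the standard Elo update rule; the K-factor of 32 is an illustrative assumption, not necessarily what this leaderboard uses.

// Standard Elo update for one pairwise comparison.
// K controls how quickly ratings move (32 is an illustrative choice).
const K = 32;

function eloUpdate(
  ratingA: number,
  ratingB: number,
  scoreA: 0 | 0.5 | 1 // 1 = A wins, 0.5 = tie, 0 = A loses
): [number, number] {
  // Expected score for A, given the rating gap.
  const expectedA = 1 / (1 + Math.pow(10, (ratingB - ratingA) / 400));
  const delta = K * (scoreA - expectedA);
  return [ratingA + delta, ratingB - delta];
}

// Example: a 1480-rated model beats a 1366-rated one.
// The favorite gains only a few points, since the win was expected.
console.log(eloUpdate(1480, 1366, 1));

The pricing row also makes the cost math straightforward: at $0.010 vs $0.020 per million tokens, embedding 100M tokens costs roughly $1.00 with BAAI/bge-m3 and $2.00 with Gemini text-embedding-004.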

Build RAG in Minutes, Not Months

Agentset gives you a complete RAG API with top-ranked embedding models and smart retrieval built in. Upload your data, call the API, and get accurate results from day one.

import { Agentset } from "agentset";

// Assumes your Agentset API key is configured (e.g. via environment).
const agentset = new Agentset();

// "ns_1234" is a placeholder; use your own namespace ID.
const ns = agentset.namespace("ns_1234");

// Semantic search over documents already ingested into the namespace.
const results = await ns.search(
  "What is multi-head attention?"
);

for (const result of results) {
  console.log(result.text);
}

Dataset Performance

By field

Comprehensive comparison of accuracy metrics (nDCG, Recall) and latency percentiles for each benchmark dataset.
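
The P50 and P90 rows below are percentiles over individual query latencies. For reference, here is a minimal nearest-rank percentile computation in TypeScript; the sample latencies are illustrative, and the benchmark's exact percentile method isn't specified.

// Nearest-rank percentile over a sample of per-query latencies (ms).
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // 1-indexed rank
  return sorted[Math.max(0, rank - 1)];
}

const latencies = [21, 19, 24, 22, 35, 20, 23, 21, 28, 22]; // illustrative sample
console.log(percentile(latencies, 50)); // 22 — P50 (median)
console.log(percentile(latencies, 90)); // 28 — P90 (tail latency)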

Business Reports

| Metric           | BAAI/bge-m3 | Gemini text-embedding-004 | Description                       |
| ---------------- | ----------- | ------------------------- | --------------------------------- |
| Accuracy Metrics |             |                           |                                   |
| nDCG@5           | 0.000       | 0.000                     | Ranking quality at top 5 results  |
| nDCG@10          | 0.000       | 0.000                     | Ranking quality at top 10 results |
| Recall@5         | 0.000       | 0.000                     | % of relevant docs in top 5       |
| Recall@10        | 0.000       | 0.000                     | % of relevant docs in top 10      |
| Latency Metrics  |             |                           |                                   |
| Mean             | 27ms        | 15ms                      | Average response time             |
| P50              | 27ms        | 15ms                      | 50th percentile (median)          |
| P90              | 27ms        | 15ms                      | 90th percentile                   |

DBPedia

| Metric           | BAAI/bge-m3 | Gemini text-embedding-004 | Description                       |
| ---------------- | ----------- | ------------------------- | --------------------------------- |
| Accuracy Metrics |             |                           |                                   |
| nDCG@5           | 0.801       | 0.747                     | Ranking quality at top 5 results  |
| nDCG@10          | 0.785       | 0.737                     | Ranking quality at top 10 results |
| Recall@5         | 0.061       | 0.057                     | % of relevant docs in top 5       |
| Recall@10        | 0.122       | 0.108                     | % of relevant docs in top 10      |
| Latency Metrics  |             |                           |                                   |
| Mean             | 21ms        | 14ms                      | Average response time             |
| P50              | 21ms        | 14ms                      | 50th percentile (median)          |
| P90              | 21ms        | 14ms                      | 90th percentile                   |

FiQA

| Metric           | BAAI/bge-m3 | Gemini text-embedding-004 | Description                       |
| ---------------- | ----------- | ------------------------- | --------------------------------- |
| Accuracy Metrics |             |                           |                                   |
| nDCG@5           | 0.743       | 0.744                     | Ranking quality at top 5 results  |
| nDCG@10          | 0.755       | 0.730                     | Ranking quality at top 10 results |
| Recall@5         | 0.608       | 0.647                     | % of relevant docs in top 5       |
| Recall@10        | 0.667       | 0.752                     | % of relevant docs in top 10      |
| Latency Metrics  |             |                           |                                   |
| Mean             | 22ms        | 16ms                      | Average response time             |
| P50              | 22ms        | 16ms                      | 50th percentile (median)          |
| P90              | 22ms        | 16ms                      | 90th percentile                   |

SciFact

| Metric           | BAAI/bge-m3 | Gemini text-embedding-004 | Description                       |
| ---------------- | ----------- | ------------------------- | --------------------------------- |
| Accuracy Metrics |             |                           |                                   |
| nDCG@5           | 0.571       | 0.728                     | Ranking quality at top 5 results  |
| nDCG@10          | 0.599       | 0.729                     | Ranking quality at top 10 results |
| Recall@5         | 0.645       | 0.813                     | % of relevant docs in top 5       |
| Recall@10        | 0.759       | 0.857                     | % of relevant docs in top 10      |
| Latency Metrics  |             |                           |                                   |
| Mean             | 37ms        | 15ms                      | Average response time             |
| P50              | 37ms        | 15ms                      | 50th percentile (median)          |
| P90              | 37ms        | 15ms                      | 90th percentile                   |

MSMARCO

| Metric           | BAAI/bge-m3 | Gemini text-embedding-004 | Description                       |
| ---------------- | ----------- | ------------------------- | --------------------------------- |
| Accuracy Metrics |             |                           |                                   |
| nDCG@5           | 0.956       | 0.932                     | Ranking quality at top 5 results  |
| nDCG@10          | 0.941       | 0.918                     | Ranking quality at top 10 results |
| Recall@5         | 0.121       | 0.117                     | % of relevant docs in top 5       |
| Recall@10        | 0.219       | 0.208                     | % of relevant docs in top 10      |
| Latency Metrics  |             |                           |                                   |
| Mean             | 51ms        | 18ms                      | Average response time             |
| P50              | 51ms        | 18ms                      | 50th percentile (median)          |
| P90              | 51ms        | 18ms                      | 90th percentile                   |

ARCD

ARCD (Arabic Reading Comprehension Dataset) is an Arabic-language benchmark; the wide gap here most likely reflects BAAI/bge-m3's multilingual training, while Gemini text-embedding-004 is tuned primarily for English.

| Metric           | BAAI/bge-m3 | Gemini text-embedding-004 | Description                       |
| ---------------- | ----------- | ------------------------- | --------------------------------- |
| Accuracy Metrics |             |                           |                                   |
| nDCG@5           | 0.879       | 0.021                     | Ranking quality at top 5 results  |
| nDCG@10          | 0.879       | 0.027                     | Ranking quality at top 10 results |
| Recall@5         | 0.960       | 0.040                     | % of relevant docs in top 5       |
| Recall@10        | 0.960       | 0.060                     | % of relevant docs in top 10      |
| Latency Metrics  |             |                           |                                   |
| Mean             | 48ms        | 15ms                      | Average response time             |
| P50              | 48ms        | 15ms                      | 50th percentile (median)          |
| P90              | 48ms        | 15ms                      | 90th percentile                   |

Explore More

Compare more embeddings

See how all embedding models stack up. Compare OpenAI, Cohere, Jina AI, Voyage, and more. View comprehensive benchmarks, compare performance metrics, and find the right embedding model for your RAG application.