OpenAI text-embedding-3-large vs zembed-1

Detailed comparison between OpenAI text-embedding-3-large and zembed-1. See which embedding best meets your accuracy and performance needs. If you want to compare these models on your data, try Agentset.

Model Comparison

Two competitive embeddings, closely matched.

Both OpenAI text-embedding-3-large and zembed-1 are powerful embedding models designed to improve retrieval quality in RAG applications. Overall they are closely matched, but each holds an edge on different metrics.
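
At query time, both models are used the same way: documents and the query are mapped to dense vectors, and retrieval ranks documents by vector similarity. A minimal TypeScript sketch of that ranking step (illustrative only; these helpers are not part of either model's API):

// Cosine similarity between two embedding vectors
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the indices of the k document vectors most similar to the query
function topK(queryVec: number[], docVecs: number[][], k: number): number[] {
  return docVecs
    .map((vec, i) => ({ i, score: cosineSimilarity(queryVec, vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((d) => d.i);
}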

Key differences:

  • zembed-1 holds a 27-point higher ELO rating (1590 vs 1563)
  • OpenAI text-embedding-3-large delivers better accuracy (nDCG@10: 0.709 vs 0.619)
  • OpenAI text-embedding-3-large is 232ms faster on average (18ms vs 250ms)

Overview

Key metrics

| Metric | Description | OpenAI text-embedding-3-large | zembed-1 |
|---|---|---|---|
| ELO Rating | Overall ranking quality | 1563 | 1590 |
| Win Rate | Head-to-head performance | 56.4% | 59.2% |
| Accuracy (nDCG@10) | Ranking quality metric | 0.709 | 0.619 |
| Average Latency | Response time | 18ms | 250ms |
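
The ELO ratings above come from pairwise comparisons between models. Under the standard Elo formula (an assumption here; the benchmark's exact scoring method isn't stated), a 27-point gap translates to only about a 54% expected win probability, which is why the two models read as closely matched. A minimal TypeScript sketch:

// Expected win probability under the standard Elo model
function eloWinProbability(ratingA: number, ratingB: number): number {
  return 1 / (1 + Math.pow(10, (ratingB - ratingA) / 400));
}

// zembed-1 (1590) vs text-embedding-3-large (1563) -> ~0.539
console.log(eloWinProbability(1590, 1563).toFixed(3));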

Embedding Models Are Just One Piece of RAG

Agentset gives you a managed RAG pipeline with the top-ranked models and best practices baked in. No infrastructure to maintain, no embeddings to manage.

Trusted by teams building production RAG applications: 5M+ documents · 1,500+ teams · 99.9% uptime.

Visual Performance Analysis

[Charts on the original page: ELO Rating Comparison · Win/Loss/Tie Breakdown · Accuracy Across Datasets (nDCG@10) · Latency Distribution (ms)]

Breakdown

How the models stack up

| Category | Metric | OpenAI text-embedding-3-large | zembed-1 | Description |
|---|---|---|---|---|
| Overall Performance | ELO Rating | 1563 | 1590 | Overall ranking quality based on pairwise comparisons |
| Overall Performance | Win Rate | 56.4% | 59.2% | Percentage of comparisons won against other models |
| Pricing & Availability | Price per 1M tokens | $0.130 | $0.050 | Cost per million tokens processed |
| Pricing & Availability | Dimensions | 3072 | 2048 | Vector embedding dimensions (lower is more efficient) |
| Pricing & Availability | Release Date | 2024-01-25 | 2026-03-02 | Model release date |
| Accuracy | Avg nDCG@10 | 0.709 | 0.619 | Normalized discounted cumulative gain at position 10 |
| Performance | Avg Latency | 18ms | 250ms | Average response time across all datasets |
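
To make the pricing and dimension rows concrete (back-of-envelope arithmetic, assuming float32 storage at 4 bytes per dimension): embedding 100M tokens costs roughly $13.00 with text-embedding-3-large versus $5.00 with zembed-1, and each stored vector occupies 3072 × 4 ≈ 12.3 KB versus 2048 × 4 ≈ 8.2 KB, so zembed-1 cuts index storage by about a third.

The headline accuracy metric, nDCG@10, rewards placing relevant documents near the top of the first ten results: each result's relevance is discounted by the log of its rank, and the total is normalized by the best possible ordering. A minimal TypeScript sketch, assuming binary relevance labels:

// Discounted cumulative gain over the top-k relevance scores, in ranked order
function dcg(relevances: number[], k: number): number {
  return relevances
    .slice(0, k)
    .reduce((sum, rel, i) => sum + rel / Math.log2(i + 2), 0);
}

// nDCG@k: DCG of the actual ranking divided by DCG of the ideal ranking
function ndcg(relevances: number[], k: number): number {
  const ideal = dcg([...relevances].sort((a, b) => b - a), k);
  return ideal === 0 ? 0 : dcg(relevances, k) / ideal;
}

// Example: relevant docs retrieved at ranks 1, 3, and 5 -> ~0.885
console.log(ndcg([1, 0, 1, 0, 1, 0, 0, 0, 0, 0], 10).toFixed(3));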

Build RAG in Minutes, Not Months

Agentset gives you a complete RAG API with top-ranked embedding models and smart retrieval built in. Upload your data, call the API, and get accurate results from day one.

import { Agentset } from "agentset";

// Initialize the Agentset client
const agentset = new Agentset();

// Point at the namespace that holds your ingested documents
const ns = agentset.namespace("ns_1234");

// Run a semantic search over the namespace
const results = await ns.search(
  "What is multi-head attention?"
);

// Print the text of each retrieved chunk
for (const result of results) {
  console.log(result.text);
}

Dataset Performance

By dataset

Comprehensive comparison of accuracy metrics (nDCG, Recall) and latency percentiles for each benchmark dataset.

Business Reports

| Metric | OpenAI text-embedding-3-large | zembed-1 | Description |
|---|---|---|---|
| nDCG@5 | 0.000 | 0.000 | Ranking quality at top 5 results |
| nDCG@10 | 0.000 | 0.000 | Ranking quality at top 10 results |
| Recall@5 | 0.000 | 0.000 | % of relevant docs in top 5 |
| Recall@10 | 0.000 | 0.000 | % of relevant docs in top 10 |
| Mean latency | 21ms | 250ms | Average response time |
| P50 latency | 21ms | 250ms | 50th percentile (median) |
| P90 latency | 21ms | 250ms | 90th percentile |
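
P50 and P90 are latency percentiles: the times that 50% and 90% of requests complete within. A minimal sketch of the nearest-rank method (one common convention; the benchmark's exact method isn't stated):

// Nearest-rank percentile over raw latency samples (ms)
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const latencies = [12, 15, 18, 19, 21, 21, 22, 25, 40, 95];
console.log(percentile(latencies, 50)); // 21
console.log(percentile(latencies, 90)); // 40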

DBPedia

| Metric | OpenAI text-embedding-3-large | zembed-1 | Description |
|---|---|---|---|
| nDCG@5 | 0.815 | 0.832 | Ranking quality at top 5 results |
| nDCG@10 | 0.795 | 0.811 | Ranking quality at top 10 results |
| Recall@5 | 0.062 | 0.062 | % of relevant docs in top 5 |
| Recall@10 | 0.123 | 0.121 | % of relevant docs in top 10 |
| Mean latency | 19ms | 250ms | Average response time |
| P50 latency | 19ms | 250ms | 50th percentile (median) |
| P90 latency | 19ms | 250ms | 90th percentile |
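
Recall@5 of 0.062 next to nDCG@5 above 0.8 is not a contradiction: DBPedia queries tend to have many relevant documents, and Recall@k can never exceed k divided by the number of relevant documents, so even a perfect top-5 scores low. A quick sketch (the 80-document figure is purely illustrative):

// Recall@k: fraction of all relevant documents that appear in the top k results
function recallAtK(retrieved: string[], relevant: Set<string>, k: number): number {
  const hits = retrieved.slice(0, k).filter((id) => relevant.has(id)).length;
  return hits / relevant.size;
}

// A perfect top-5 against 80 relevant docs still scores only 5/80 = 0.0625
const relevant = new Set(Array.from({ length: 80 }, (_, i) => `doc${i + 1}`));
console.log(recallAtK(["doc1", "doc2", "doc3", "doc4", "doc5"], relevant, 5));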

FiQa

| Metric | OpenAI text-embedding-3-large | zembed-1 | Description |
|---|---|---|---|
| nDCG@5 | 0.881 | 0.862 | Ranking quality at top 5 results |
| nDCG@10 | 0.867 | 0.855 | Ranking quality at top 10 results |
| Recall@5 | 0.701 | 0.668 | % of relevant docs in top 5 |
| Recall@10 | 0.783 | 0.712 | % of relevant docs in top 10 |
| Mean latency | 13ms | 250ms | Average response time |
| P50 latency | 13ms | 250ms | 50th percentile (median) |
| P90 latency | 13ms | 250ms | 90th percentile |

SciFact

| Metric | OpenAI text-embedding-3-large | zembed-1 | Description |
|---|---|---|---|
| nDCG@5 | 0.702 | 0.767 | Ranking quality at top 5 results |
| nDCG@10 | 0.727 | 0.777 | Ranking quality at top 10 results |
| Recall@5 | 0.764 | 0.888 | % of relevant docs in top 5 |
| Recall@10 | 0.861 | 0.929 | % of relevant docs in top 10 |
| Mean latency | 19ms | 250ms | Average response time |
| P50 latency | 19ms | 250ms | 50th percentile (median) |
| P90 latency | 19ms | 250ms | 90th percentile |

MSMARCO

| Metric | OpenAI text-embedding-3-large | zembed-1 | Description |
|---|---|---|---|
| nDCG@5 | 0.956 | 0.955 | Ranking quality at top 5 results |
| nDCG@10 | 0.947 | 0.946 | Ranking quality at top 10 results |
| Recall@5 | 0.123 | 0.123 | % of relevant docs in top 5 |
| Recall@10 | 0.223 | 0.223 | % of relevant docs in top 10 |
| Mean latency | 28ms | 250ms | Average response time |
| P50 latency | 28ms | 250ms | 50th percentile (median) |
| P90 latency | 28ms | 250ms | 90th percentile |

ARCD

| Metric | OpenAI text-embedding-3-large | zembed-1 | Description |
|---|---|---|---|
| nDCG@5 | 0.829 | 0.851 | Ranking quality at top 5 results |
| nDCG@10 | 0.829 | 0.858 | Ranking quality at top 10 results |
| Recall@5 | 0.940 | 0.920 | % of relevant docs in top 5 |
| Recall@10 | 0.940 | 0.940 | % of relevant docs in top 10 |
| Mean latency | 10ms | 250ms | Average response time |
| P50 latency | 10ms | 250ms | 50th percentile (median) |
| P90 latency | 10ms | 250ms | 90th percentile |

Explore More

Compare more embeddings

See how all embedding models stack up. Compare OpenAI, Cohere, Jina AI, Voyage, and more. View comprehensive benchmarks, compare performance metrics, and find the perfect embedding for your RAG application.