zembed-1 vs Jina Embeddings v5 Text Small

Detailed comparison between zembed-1 and Jina Embeddings v5 Text Small. See which embedding model best meets your accuracy and performance needs. If you want to compare these models on your own data, try Agentset.

Model Comparison

zembed-1 takes the lead.

Both zembed-1 and Jina Embeddings v5 Text Small are powerful embedding models designed to improve retrieval quality in RAG applications. However, their performance characteristics differ in important ways.

Why zembed-1:

  • zembed-1 has a 26-point higher ELO rating (1595 vs 1569)
  • zembed-1 delivers better accuracy (nDCG@10: 0.619 vs 0.608; see the sketch below)
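
nDCG@10 rewards rankings that place relevant documents near the top of the result list. As a minimal sketch of how the metric works (illustrative TypeScript, not the benchmark's actual scoring code):

// Discounted cumulative gain over the top k results.
function dcgAtK(relevances: number[], k: number): number {
  return relevances
    .slice(0, k)
    .reduce((sum, rel, i) => sum + rel / Math.log2(i + 2), 0);
}

// nDCG@k: DCG of the observed ranking divided by the DCG of the ideal ranking.
function ndcgAtK(relevances: number[], k: number): number {
  const idealDcg = dcgAtK([...relevances].sort((a, b) => b - a), k);
  return idealDcg === 0 ? 0 : dcgAtK(relevances, k) / idealDcg;
}

// A relevant document at rank 1 scores a perfect 1.0; pushing the same
// document down to rank 3 halves the score.
console.log(ndcgAtK([1, 0, 0, 0, 0, 0, 0, 0, 0, 0], 10)); // 1.0
console.log(ndcgAtK([0, 0, 1, 0, 0, 0, 0, 0, 0, 0], 10)); // 0.5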

Overview

Key metrics

| Metric | Description | zembed-1 | Jina Embeddings v5 Text Small |
| --- | --- | --- | --- |
| ELO Rating | Overall ranking quality | 1595 | 1569 |
| Win Rate | Head-to-head performance | 59.2% | 54.7% |
| Accuracy (nDCG@10) | Ranking quality metric | 0.619 | 0.608 |
| Average Latency | Response time | 250ms | 289ms |
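
ELO ratings are built from pairwise comparisons: each head-to-head win or loss nudges both ratings toward the observed outcome. The exact parameters behind this leaderboard aren't stated, so the sketch below uses the standard Elo formulas with a conventional K-factor of 32 (an assumption, not a documented value):

// Expected score for a model rated `ratingA` against one rated `ratingB`.
function expectedScore(ratingA: number, ratingB: number): number {
  return 1 / (1 + Math.pow(10, (ratingB - ratingA) / 400));
}

// Standard Elo update after one comparison; scoreA is 1 (win), 0.5 (tie), or 0 (loss).
// K = 32 is a conventional choice, not a documented benchmark parameter.
function eloUpdate(ratingA: number, ratingB: number, scoreA: number, k = 32): [number, number] {
  const ea = expectedScore(ratingA, ratingB);
  return [ratingA + k * (scoreA - ea), ratingB - k * (scoreA - ea)];
}

// A 26-point gap (1595 vs 1569) implies only a modest head-to-head edge:
console.log(expectedScore(1595, 1569).toFixed(3)); // "0.537"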

Embedding Models Are Just One Piece of RAG

Agentset gives you a managed RAG pipeline with the top-ranked models and best practices baked in. No infrastructure to maintain, no embeddings to manage.

Trusted by teams building production RAG applications

  • 5M+ documents
  • 1,500+ teams
  • 99.9% uptime

Visual Performance Analysis

[Charts: ELO Rating Comparison; Win/Loss/Tie Breakdown; Accuracy Across Datasets (nDCG@10); Latency Distribution (ms)]

Breakdown

How the models stack up

| Metric | zembed-1 | Jina Embeddings v5 Text Small | Description |
| --- | --- | --- | --- |
| Overall Performance | | | |
| ELO Rating | 1595 | 1569 | Overall ranking quality based on pairwise comparisons |
| Win Rate | 59.2% | 54.7% | Percentage of comparisons won against other models |
| Pricing & Availability | | | |
| Price per 1M tokens | $0.050 | $0.050 | Cost per million tokens processed |
| Dimensions | 2048 | 1024 | Vector embedding dimensions (lower is more efficient) |
| Release Date | 2026-03-02 | 2026-02-18 | Model release date |
| Accuracy Metrics | | | |
| Avg nDCG@10 | 0.619 | 0.608 | Normalized discounted cumulative gain at position 10 |
| Performance Metrics | | | |
| Avg Latency | 250ms | 289ms | Average response time across all datasets |
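
The Dimensions row has a concrete storage consequence: at float32 precision, a 2048-dimensional zembed-1 vector occupies 8 KiB versus 4 KiB for a 1024-dimensional Jina vector, so the smaller model roughly halves vector-store footprint. A back-of-the-envelope sketch (assuming one vector per document and ignoring index overhead):

// Raw vector storage for a corpus at float32 precision, ignoring index overhead.
function vectorStorageGiB(numVectors: number, dims: number, bytesPerDim = 4): number {
  return (numVectors * dims * bytesPerDim) / 1024 ** 3;
}

// At the 5M-document scale cited above:
console.log(vectorStorageGiB(5_000_000, 2048).toFixed(1)); // "38.1" GiB with zembed-1
console.log(vectorStorageGiB(5_000_000, 1024).toFixed(1)); // "19.1" GiB with Jina v5 Text Small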

Build RAG in Minutes, Not Months

Agentset gives you a complete RAG API with top-ranked embedding models and smart retrieval built in. Upload your data, call the API, and get accurate results from day one.

import { Agentset } from "agentset";

// Create a client (assumes your Agentset API key is configured,
// e.g. via environment variables).
const agentset = new Agentset();

// Namespaces scope your documents; "ns_1234" is a placeholder ID.
const ns = agentset.namespace("ns_1234");

// Run a semantic search over the documents in the namespace.
const results = await ns.search(
  "What is multi-head attention?"
);

for (const result of results) {
  console.log(result.text);
}

Dataset Performance

By dataset

Comprehensive comparison of accuracy metrics (nDCG, Recall) and latency percentiles for each benchmark dataset.
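
For reference, Recall@k is the fraction of all relevant documents that appear in the top k results, and P50/P90 are percentiles over per-query latencies. A minimal sketch of both computations (illustrative, not the benchmark harness):

// Recall@k: fraction of all relevant document IDs found in the top k results.
function recallAtK(retrieved: string[], relevant: Set<string>, k: number): number {
  const hits = retrieved.slice(0, k).filter((id) => relevant.has(id)).length;
  return relevant.size === 0 ? 0 : hits / relevant.size;
}

// Nearest-rank percentile over recorded per-query latencies (ms).
function percentile(latenciesMs: number[], p: number): number {
  const sorted = [...latenciesMs].sort((a, b) => a - b);
  return sorted[Math.max(0, Math.ceil((p / 100) * sorted.length) - 1)];
}

console.log(recallAtK(["d1", "d7", "d3"], new Set(["d3", "d9"]), 3)); // 0.5
console.log(percentile([240, 250, 290, 310, 420], 90)); // 420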

PG

| Metric | zembed-1 | Jina Embeddings v5 Text Small | Description |
| --- | --- | --- | --- |
| Accuracy Metrics | | | |
| nDCG@5 | 0.000 | 0.000 | Ranking quality at top 5 results |
| nDCG@10 | 0.000 | 0.000 | Ranking quality at top 10 results |
| Recall@5 | 0.000 | 0.000 | % of relevant docs in top 5 |
| Recall@10 | 0.000 | 0.000 | % of relevant docs in top 10 |
| Latency Metrics | | | |
| Mean | 250ms | 291ms | Average response time |
| P50 | 250ms | 241ms | 50th percentile (median) |
| P90 | 250ms | 290ms | 90th percentile |

Business Reports

| Metric | zembed-1 | Jina Embeddings v5 Text Small | Description |
| --- | --- | --- | --- |
| Accuracy Metrics | | | |
| nDCG@5 | 0.000 | 0.000 | Ranking quality at top 5 results |
| nDCG@10 | 0.000 | 0.000 | Ranking quality at top 10 results |
| Recall@5 | 0.000 | 0.000 | % of relevant docs in top 5 |
| Recall@10 | 0.000 | 0.000 | % of relevant docs in top 10 |
| Latency Metrics | | | |
| Mean | 250ms | 283ms | Average response time |
| P50 | 250ms | 247ms | 50th percentile (median) |
| P90 | 250ms | 322ms | 90th percentile |

DBPedia

| Metric | zembed-1 | Jina Embeddings v5 Text Small | Description |
| --- | --- | --- | --- |
| Accuracy Metrics | | | |
| nDCG@5 | 0.832 | 0.823 | Ranking quality at top 5 results |
| nDCG@10 | 0.811 | 0.805 | Ranking quality at top 10 results |
| Recall@5 | 0.062 | 0.062 | % of relevant docs in top 5 |
| Recall@10 | 0.121 | 0.123 | % of relevant docs in top 10 |
| Latency Metrics | | | |
| Mean | 250ms | 270ms | Average response time |
| P50 | 250ms | 239ms | 50th percentile (median) |
| P90 | 250ms | 264ms | 90th percentile |

FiQa

| Metric | zembed-1 | Jina Embeddings v5 Text Small | Description |
| --- | --- | --- | --- |
| Accuracy Metrics | | | |
| nDCG@5 | 0.862 | 0.838 | Ranking quality at top 5 results |
| nDCG@10 | 0.855 | 0.831 | Ranking quality at top 10 results |
| Recall@5 | 0.668 | 0.677 | % of relevant docs in top 5 |
| Recall@10 | 0.712 | 0.771 | % of relevant docs in top 10 |
| Latency Metrics | | | |
| Mean | 250ms | 300ms | Average response time |
| P50 | 250ms | 241ms | 50th percentile (median) |
| P90 | 250ms | 419ms | 90th percentile |

SciFact

| Metric | zembed-1 | Jina Embeddings v5 Text Small | Description |
| --- | --- | --- | --- |
| Accuracy Metrics | | | |
| nDCG@5 | 0.767 | 0.703 | Ranking quality at top 5 results |
| nDCG@10 | 0.777 | 0.734 | Ranking quality at top 10 results |
| Recall@5 | 0.888 | 0.789 | % of relevant docs in top 5 |
| Recall@10 | 0.929 | 0.898 | % of relevant docs in top 10 |
| Latency Metrics | | | |
| Mean | 250ms | 267ms | Average response time |
| P50 | 250ms | 240ms | 50th percentile (median) |
| P90 | 250ms | 265ms | 90th percentile |

MSMARCO

| Metric | zembed-1 | Jina Embeddings v5 Text Small | Description |
| --- | --- | --- | --- |
| Accuracy Metrics | | | |
| nDCG@5 | 0.955 | 0.960 | Ranking quality at top 5 results |
| nDCG@10 | 0.946 | 0.954 | Ranking quality at top 10 results |
| Recall@5 | 0.123 | 0.122 | % of relevant docs in top 5 |
| Recall@10 | 0.223 | 0.219 | % of relevant docs in top 10 |
| Latency Metrics | | | |
| Mean | 250ms | 273ms | Average response time |
| P50 | 250ms | 239ms | 50th percentile (median) |
| P90 | 250ms | 313ms | 90th percentile |

ARCD

| Metric | zembed-1 | Jina Embeddings v5 Text Small | Description |
| --- | --- | --- | --- |
| Accuracy Metrics | | | |
| nDCG@5 | 0.851 | 0.842 | Ranking quality at top 5 results |
| nDCG@10 | 0.858 | 0.842 | Ranking quality at top 10 results |
| Recall@5 | 0.920 | 0.940 | % of relevant docs in top 5 |
| Recall@10 | 0.940 | 0.940 | % of relevant docs in top 10 |
| Latency Metrics | | | |
| Mean | 250ms | 336ms | Average response time |
| P50 | 250ms | 248ms | 50th percentile (median) |
| P90 | 250ms | 305ms | 90th percentile |

Explore More

Compare more embeddings

See how all embedding models stack up. Compare OpenAI, Cohere, Jina AI, Voyage, and more. View comprehensive benchmarks, compare performance metrics, and find the perfect embedding for your RAG application.