GLM 4.6
Native bilingual English/Chinese support enables cross-lingual RAG without translation overhead. Its MIT license permits fine-tuning on proprietary knowledge bases and self-hosting via vLLM or SGLang. If you want to compare the best LLMs for your data, try Agentset.
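Because vLLM exposes an OpenAI-compatible HTTP API, a self-hosted GLM 4.6 can be queried with plain `fetch`. A minimal sketch, assuming a default local vLLM deployment; the base URL and served model name are assumptions to adjust for your setup:

```typescript
// Sketch, not an official client: query a self-hosted GLM 4.6 behind
// vLLM's OpenAI-compatible server. BASE_URL and the model identifier
// are assumptions about your deployment.
const BASE_URL = "http://localhost:8000/v1"; // vLLM's default port

// Build an OpenAI-style chat completion request body.
function buildChatRequest(question: string) {
  return {
    model: "glm-4.6", // whatever name the weights were served under
    messages: [{ role: "user", content: question }],
  };
}

async function ask(question: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest(question)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

The same request shape works against any OpenAI-compatible endpoint, so switching between a hosted API and a self-hosted deployment is a one-line URL change.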
Model Information
- Provider
- Zhipu AI
- License
- Open Source (MIT)
- Input Price per 1M
- $0.40
- Output Price per 1M
- $1.75
- Context Window
- 203K
- Release Date
- 2025-09-30
- Model Name
- glm-4.6
- Total Evaluations
- 1,350
Performance Record
LLMs Are Just One Piece of RAG
Agentset gives you a managed RAG pipeline with the top-ranked models and best practices baked in. No infrastructure to maintain, no LLM orchestration to manage.
Trusted by teams building production RAG applications
Performance Overview
ELO ratings by dataset
GLM 4.6's ELO performance varies across different benchmark datasets, showing its strengths in specific domains.
GLM 4.6 - ELO by Dataset
Detailed Metrics
Dataset breakdown
Performance metrics across different benchmark datasets, including accuracy and latency percentiles.
MSMARCO
Quality Metrics
- Correctness
- 4.83
- Faithfulness
- 4.80
- Grounding
- 4.80
- Relevance
- 4.93
- Completeness
- 4.73
- Overall
- 4.82
Latency Distribution
- Mean
- 34,694 ms
- Min
- 9,198 ms
- Max
- 69,527 ms
SciFact
Quality Metrics
- Correctness
- 4.60
- Faithfulness
- 4.83
- Grounding
- 4.83
- Relevance
- 4.87
- Completeness
- 4.53
- Overall
- 4.73
Latency Distribution
- Mean
- 27,880 ms
- Min
- 3,248 ms
- Max
- 68,513 ms
PG
Quality Metrics
- Correctness
- 4.87
- Faithfulness
- 4.90
- Grounding
- 4.90
- Relevance
- 4.97
- Completeness
- 4.60
- Overall
- 4.85
Latency Distribution
- Mean
- 36,774 ms
- Min
- 9,584 ms
- Max
- 104,257 ms
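The Overall figures above track the unweighted mean of the five quality metrics (e.g. MSMARCO: (4.83 + 4.80 + 4.80 + 4.93 + 4.73) / 5 ≈ 4.82). A minimal sketch, assuming simple averaging with two-decimal rounding:

```typescript
// Assumption: "Overall" is the unweighted mean of the five quality
// metrics, rounded to two decimals. Values below are from the MSMARCO
// table above.
const msmarco = {
  correctness: 4.83,
  faithfulness: 4.8,
  grounding: 4.8,
  relevance: 4.93,
  completeness: 4.73,
};

function overall(metrics: Record<string, number>): number {
  const values = Object.values(metrics);
  const mean = values.reduce((sum, v) => sum + v, 0) / values.length;
  return Math.round(mean * 100) / 100;
}

overall(msmarco); // 4.82, matching the table's Overall score
```

The same averaging reproduces the SciFact (4.73) and PG (4.85) Overall scores from their per-metric values.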
Build RAG in Minutes, Not Months
Agentset gives you a complete RAG API with top-ranked LLMs and smart retrieval built in. Upload your data, call the API, and get grounded answers from day one.
import { Agentset } from "agentset";

// Create a client and point it at an existing namespace
const agentset = new Agentset();
const ns = agentset.namespace("ns_1234");

// Run a semantic search over the namespace's indexed documents
const results = await ns.search(
  "What is multi-head attention?"
);

// Print the retrieved passages
for (const result of results) {
  console.log(result.text);
}
Compare Models
See how it stacks up
Compare GLM 4.6 with other top LLMs to understand the differences in performance, accuracy, and latency.