GLM 5V Turbo
Z.ai GLM 5V Turbo: pricing, 203k context window, API cost calculator, and alternatives.
By Z.ai
Specs
- Provider
- Z.ai
- Slug
- z-ai/glm-5v-turbo
- Capabilities
- vision, tools, json_mode
Pricing freshness
- Tier
- standard
- Currency
- USD
- As of
- 2026-05-08 17:08 UTC
Pricing history
Tracking GLM 5V Turbo pricing since 2026-05-08. A price history chart will appear here once the price changes.
Quickstart: call GLM 5V Turbo from your app
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -d '{
    "model": "z-ai/glm-5v-turbo",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
Official docs: https://openrouter.ai/docs
# pip install openai
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

resp = client.chat.completions.create(
    model="z-ai/glm-5v-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
// npm install openai
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
});

const resp = await client.chat.completions.create({
  model: "z-ai/glm-5v-turbo",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(resp.choices[0].message.content);
Related models
Similar capabilities, context window, and price tier, drawn from across the catalog so you can compare alternatives in one click.
Frequently asked questions
What is GLM 5V Turbo?
GLM 5V Turbo is a large language model API from Z.ai with a 203k-token context window. It costs $1.20 per 1M input tokens and $4.00 per 1M output tokens.
How much does GLM 5V Turbo cost?
GLM 5V Turbo is priced at $1.20 per 1M input tokens and $4.00 per 1M output tokens via the Z.ai API. A 50/50 input/output workload of 1M total tokens costs about $2.60.
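That arithmetic generalizes to any workload. A minimal sketch of a cost estimator, hard-coding the per-1M-token rates listed on this page (rates can change, so treat the constants as a snapshot, not a source of truth):

```python
# Minimal cost estimator for GLM 5V Turbo, using the per-1M-token rates
# listed on this page. Verify the rates against the pricing table above
# before relying on the result.
INPUT_PRICE = 1.20   # USD per 1M input tokens
OUTPUT_PRICE = 4.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one workload."""
    return (input_tokens / 1_000_000) * INPUT_PRICE \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE

# A 50/50 split of 1M total tokens:
print(f"${estimate_cost(500_000, 500_000):.2f}")  # $2.60
```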
What is the context window of GLM 5V Turbo?
GLM 5V Turbo supports up to 203k tokens of context per request, roughly 406 pages of English text or 25,344 lines of code at a typical density.
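Those page and line counts are rules of thumb, not guarantees. A sketch of the conversion, assuming roughly 500 tokens per page of prose and 8 tokens per line of code; the exact window size used below is an inference from the figures quoted above, not an official number:

```python
# Back-of-envelope conversion from context tokens to pages/lines.
# Density assumptions (~500 tokens per page of English prose, ~8 tokens
# per line of code) are rough rules of thumb, and the exact token count
# is inferred from the figures quoted on this page ("203k" rounded).
CONTEXT_TOKENS = 202_752  # assumed exact window behind the "203k" figure

pages = round(CONTEXT_TOKENS / 500)  # pages of English prose
lines = CONTEXT_TOKENS // 8          # lines of code
print(pages, lines)
```

Your real mileage depends on the tokenizer and the density of your text.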
Does GLM 5V Turbo support vision, tool use, or JSON mode?
GLM 5V Turbo supports vision, tool/function calling, and structured JSON mode.
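Vision and JSON mode can be combined in one request. The payload below is a sketch that assumes OpenRouter honors OpenAI-style `image_url` content parts and the `response_format` field for this model; the image URL is a placeholder, and the OpenRouter docs are the authoritative reference for the schema:

```python
# Sketch of a request body combining a vision input with JSON-mode output.
# Assumes OpenAI-compatible `image_url` content parts and `response_format`
# are honored for this model via OpenRouter; the image URL is a placeholder.
import json

payload = {
    "model": "z-ai/glm-5v-turbo",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "List the objects in this image as JSON."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
    # Ask the model to emit a single valid JSON object.
    "response_format": {"type": "json_object"},
}
print(json.dumps(payload, indent=2))
```

The same dict can be passed as keyword arguments to `client.chat.completions.create(...)` in the Python quickstart above.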
Who makes GLM 5V Turbo?
GLM 5V Turbo is built and operated by Z.ai. Pricing, context window, and capabilities on this page are refreshed nightly from Z.ai's public catalog.
Can I self-host GLM 5V Turbo?
GLM 5V Turbo is API-only: its weights are not publicly distributed by Z.ai, so it cannot be self-hosted today.