LLM Cloud Hub
Side-by-side comparison

GLM 4.7 vs R1 0528

GLM 4.7 (Z.ai): tools, JSON mode. Input $0.40 / 1M tokens; output $1.75 / 1M tokens.

R1 0528 (DeepSeek): tools, JSON mode. Input $0.50 / 1M tokens; output $2.15 / 1M tokens.
                        GLM 4.7 (Z.ai)      R1 0528 (DeepSeek)
Context window          202,752 tokens      163,840 tokens
Capabilities            tools, json_mode    tools, json_mode
Input $ / 1M tokens     $0.4000             $0.5000
Output $ / 1M tokens    $1.7500             $2.1500

Context window: maximum tokens (input + output) the model can process in a single request.
Capabilities: optional capabilities the model advertises: vision (images), tools (function calling), json_mode (structured output).
Input price: cost for the tokens you send (prompt + context).
Output price: cost for the tokens the model generates; output is typically 3 to 5 times pricier than input.
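For a workload-specific figure, the per-1M prices above can be plugged into a short script. Below is a minimal sketch in Python: the prices come from the table, while the request_cost helper and the 8,000-input / 1,000-output workload are illustrative assumptions, not measured usage.

```python
# Rough per-request cost estimate using the per-1M-token prices from the table above.
# The 8,000-input / 1,000-output workload is an illustrative assumption.

PRICES_PER_1M = {  # USD per 1M tokens: (input, output)
    "GLM 4.7": (0.40, 1.75),
    "R1 0528": (0.50, 2.15),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of a single request for the given model."""
    in_price, out_price = PRICES_PER_1M[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

if __name__ == "__main__":
    for model in PRICES_PER_1M:
        cost = request_cost(model, input_tokens=8_000, output_tokens=1_000)
        print(f"{model}: ${cost:.4f} per request")
```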

Frequently asked questions

Which is cheaper, GLM 4.7 or R1 0528?

GLM 4.7 is cheaper than R1 0528 on a 50/50 input/output blend by about $0.25 per 1M tokens. Exact savings depend on your input-vs-output ratio β€” use the cost calculator on this page for a workload-specific estimate.
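Worked out from the table prices: GLM 4.7 averages (0.40 + 1.75) / 2 = $1.075 per 1M tokens on a 50/50 blend, R1 0528 averages (0.50 + 2.15) / 2 = $1.325, a gap of $0.25.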

Which has a larger context window, GLM 4.7 or R1 0528?

GLM 4.7 has the larger context window at 203k tokens versus 164k tokens for R1 0528. That means GLM 4.7 can ingest about 1.2x as much text per request.

What is the difference between GLM 4.7 and R1 0528?

GLM 4.7 comes from Z.ai; R1 0528 comes from DeepSeek. They differ in pricing, context window, and supported capabilities β€” see the side-by-side table on this page for the exact figures, refreshed nightly.
