LLM Cloud Hub › Model

DeepSeek V3.2

DeepSeek V3.2 — pricing, 131k context window, API cost calculator, and alternatives.

By DeepSeek

🔧 Tools
Function calling — the model can structure responses as tool calls with typed arguments. Reliability differs between providers. Glossary →

{} JSON mode
Forces output to be valid JSON, reducing parse errors. Some providers also let you constrain output to a schema. Glossary →

Context window
Maximum tokens (input + output) the model can process in a single request. Glossary →
131,072
tokens
Input price What the model charges for tokens you send (prompt + context). Glossary →
$0.2520
per 1M tokens
Output price What the model charges for tokens it generates. Usually 3–5× pricier than input. Glossary →
$0.3780
per 1M tokens

Specs

Provider
DeepSeek
Slug
deepseek/deepseek-v3-2
Capabilities
tools, json_mode

Pricing freshness

Tier
standard
Currency
USD
As of
2026-05-08 17:08 UTC
Estimate monthly cost →
See alternatives to DeepSeek V3.2 →

Pricing history

Tracking DeepSeek V3.2 pricing since 2026-05-08. We'll plot a chart here once the price changes.

Quickstart — call DeepSeek V3.2 from your app

curl https://api.deepseek.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
  -d '{
    "model": "deepseek-v3-2",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
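If you'd rather call the endpoint from Python, a minimal stdlib-only sketch of the same request looks like this. It assumes the OpenAI-compatible `/v1/chat/completions` endpoint and `deepseek-v3-2` model ID from the curl example above; check the official docs for the exact model ID your account exposes.

```python
import json
import os
import urllib.request

API_URL = "https://api.deepseek.com/v1/chat/completions"

def build_request(model: str, prompt: str) -> dict:
    """Assemble the same chat-completions payload as the curl quickstart."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt: str, api_key: str) -> str:
    """POST the payload and return the assistant's reply text."""
    body = build_request("deepseek-v3-2", prompt)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Usage: `chat("Hello!", os.environ["DEEPSEEK_API_KEY"])`.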

Official docs: https://api-docs.deepseek.com/

Related models

Similar capabilities, context window, and price tier — drawn from across the catalog so you can compare alternatives in one click.

Frequently asked questions

What is DeepSeek V3.2?

DeepSeek V3.2 is a large language model API from DeepSeek with a 131k-token context window. It costs $0.252 per 1M input tokens and $0.378 per 1M output tokens.

How much does DeepSeek V3.2 cost?

DeepSeek V3.2 is priced at $0.252 per 1M input tokens and $0.378 per 1M output tokens via the DeepSeek API. A 50/50 input/output workload of 1M total tokens costs about $0.315.
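The arithmetic behind that estimate is straightforward; here is a small sketch using the rates listed on this page, so you can plug in your own token counts:

```python
# Rates from this page (USD per 1M tokens).
INPUT_PER_M = 0.2520
OUTPUT_PER_M = 0.3780

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for a workload at DeepSeek V3.2's listed rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# The 50/50 example above: 500k input + 500k output tokens.
print(estimate_cost(500_000, 500_000))  # 0.315
```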

What is the context window of DeepSeek V3.2?

DeepSeek V3.2 supports up to 131k tokens of context per request — roughly 262 pages of English text or 16384 lines of code at a typical density.
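The page-and-line figures above are back-of-envelope conversions. Assuming roughly 500 tokens per page of English prose and 8 tokens per line of code (both assumptions, not measurements), the math works out as:

```python
CONTEXT_TOKENS = 131_072
TOKENS_PER_PAGE = 500  # assumed density for English prose
TOKENS_PER_LINE = 8    # assumed density for source code

pages = CONTEXT_TOKENS // TOKENS_PER_PAGE  # 262
lines = CONTEXT_TOKENS // TOKENS_PER_LINE  # 16384
```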

Does DeepSeek V3.2 support vision, tool use, or JSON mode?

DeepSeek V3.2 supports tool/function calling and structured JSON mode. It does not support image input (vision).
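To use those capabilities, you attach tool definitions and/or a JSON-mode flag to the request body. The sketch below uses the OpenAI-style `tools` and `response_format` fields; `get_weather` is a hypothetical tool for illustration, and exact field support varies by provider, so verify against DeepSeek's docs.

```python
# Hypothetical tool definition in the OpenAI-style function-calling schema.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def with_capabilities(body: dict, *, tools=None, json_mode=False) -> dict:
    """Return a copy of a chat request body with tools and/or JSON mode attached."""
    out = dict(body)
    if tools:
        out["tools"] = tools
    if json_mode:
        out["response_format"] = {"type": "json_object"}
    return out
```

For example, `with_capabilities({"model": "deepseek-v3-2", "messages": []}, tools=[weather_tool], json_mode=True)` produces a body that requests both tool calling and valid-JSON output.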

Who makes DeepSeek V3.2?

DeepSeek V3.2 is built and operated by DeepSeek. Pricing, context window, and capabilities on this page are refreshed nightly from DeepSeek's public catalog.

Can I self-host DeepSeek V3.2?

DeepSeek V3.2 is API-only — its weights are not publicly distributed by DeepSeek, so it cannot be self-hosted today.
