LLM Cloud Hub

Qwen VL Plus

Qwen VL Plus: pricing, 131k context window, API cost calculator, and alternatives.

By Qwen

πŸ‘ Vision Accepts images alongside text. Quality varies β€” text recognition (OCR) is largely solved; nuanced visual reasoning is not. Glossary β†’ {} JSON Forces output to be valid JSON, reducing parse errors. Some providers also let you constrain to a schema. Glossary β†’
Context window Maximum tokens (input + output) the model can process in a single request. Glossary β†’
131,072
tokens
Input price What the model charges for tokens you send (prompt + context). Glossary β†’
$0.1365
per 1M tokens
Output price What the model charges for tokens it generates. Usually 3–5Γ— pricier than input. Glossary β†’
$0.4095
per 1M tokens

Specs

Provider
Qwen
Slug
qwen/qwen-vl-plus
Capabilities
vision, json_mode

Pricing freshness

Tier
standard
Currency
USD
As of
2026-05-08 17:08 UTC

Pricing history

Tracking Qwen VL Plus pricing since 2026-05-08. We'll plot a chart here once the price changes.

Quickstart: call Qwen VL Plus from your app

curl https://openrouter.ai/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -d '{
    "model": "qwen/qwen-vl-plus",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Official docs: https://openrouter.ai/docs
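The curl call above sends text only. Since Qwen VL Plus accepts images, here is a sketch of the multi-part message shape that OpenAI-compatible endpoints (including OpenRouter) use for image input. The image URL is a placeholder, not a real asset; most providers also accept base64 data URLs.

```python
import json

# Sketch: vision request body for Qwen VL Plus on an OpenAI-compatible
# chat endpoint. Text and image are separate parts of one user message.
payload = {
    "model": "qwen/qwen-vl-plus",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    # Placeholder URL for illustration only.
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
}

print(json.dumps(payload, indent=2))
```

POST this body to the same /chat/completions endpoint shown in the curl example.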

Related models

Similar capabilities, context window, and price tier, drawn from across the catalog so you can compare alternatives in one click.

Frequently asked questions

What is Qwen VL Plus?

Qwen VL Plus is a vision-capable large language model API from Qwen with a 131k-token context window. It costs $0.1365 per 1M input tokens and $0.4095 per 1M output tokens.

How much does Qwen VL Plus cost?

Qwen VL Plus is priced at $0.1365 per 1M input tokens and $0.4095 per 1M output tokens via the Qwen API. A 50/50 input/output workload of 1M total tokens costs about $0.273.
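The arithmetic behind that estimate can be written as a tiny calculator, using the per-token prices listed on this page:

```python
# Cost calculator for Qwen VL Plus at the list prices shown above.
INPUT_USD_PER_M = 0.1365   # $ per 1M input tokens
OUTPUT_USD_PER_M = 0.4095  # $ per 1M output tokens

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """USD cost of one workload at Qwen VL Plus list prices."""
    return (input_tokens * INPUT_USD_PER_M
            + output_tokens * OUTPUT_USD_PER_M) / 1_000_000

# The 50/50 workload from the FAQ answer: 1M total tokens.
print(round(cost_usd(500_000, 500_000), 4))  # 0.273
```

Swap in your own token counts to estimate a monthly bill.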

What is the context window of Qwen VL Plus?

Qwen VL Plus supports up to 131k tokens of context per request, roughly 262 pages of English text or 16,384 lines of code at typical density.
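The page and line figures are heuristics, not measurements. A sketch of the assumed densities behind them (roughly 500 tokens per page of English and 8 tokens per line of code, both assumptions):

```python
# Rough conversions for a 131,072-token context window.
# Densities are assumptions: ~500 tokens/page, ~8 tokens/line of code.
CONTEXT_TOKENS = 131_072

pages = CONTEXT_TOKENS // 500  # pages of English text
lines = CONTEXT_TOKENS // 8    # lines of code

print(pages, lines)  # 262 16384
```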

Does Qwen VL Plus support vision, tool use, or JSON mode?

Qwen VL Plus supports image input (vision) and structured JSON mode. It does not support tool/function calling.
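On OpenAI-compatible endpoints, JSON mode is typically requested with a response_format field. A sketch of that request shape; exact support varies by provider, and most require the prompt itself to mention JSON:

```python
# Sketch: JSON-mode request body in the common OpenAI-compatible shape.
payload = {
    "model": "qwen/qwen-vl-plus",
    "response_format": {"type": "json_object"},
    "messages": [
        {
            "role": "user",
            "content": 'Return the three primary colors as JSON: {"colors": [...]}',
        }
    ],
}

print(payload["response_format"]["type"])  # json_object
```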

Who makes Qwen VL Plus?

Qwen VL Plus is built and operated by Qwen. Pricing, context window, and capabilities on this page are refreshed nightly from Qwen's public catalog.

Can I self-host Qwen VL Plus?

Qwen VL Plus is API-only: its weights are not publicly distributed by Qwen, so it cannot be self-hosted today.

Keyboard shortcuts

?      Show this overlay
/      Focus the first form field
g h    Go to / (home)
g b    Go to /best-llm-for
g c    Go to /cost
g s    Go to /self-hosted
g x    Go to /compliance
Esc    Close any overlay

Inspired by Linear and GitHub conventions. Two-key sequences (g, then h) must be completed within about one second.