Qwen VL Max
Qwen VL Max – pricing, 131k context window, API cost calculator, and alternatives.
By Qwen
Specs
- Provider
- Qwen
- Slug
- qwen/qwen-vl-max
- Capabilities
- vision, tools, json_mode
Pricing freshness
- Tier
- standard
- Currency
- USD
- As of
- 2026-05-08 17:08 UTC
Pricing history
Tracking Qwen VL Max pricing since 2026-05-08. A chart will appear here once the price changes.
Quickstart – call Qwen VL Max from your app
curl https://openrouter.ai/api/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENROUTER_API_KEY" \
-d '{
"model": "qwen/qwen-vl-max",
"messages": [{"role": "user", "content": "Hello!"}]
}'
Official docs: https://openrouter.ai/docs
# pip install openai
import os
from openai import OpenAI
client = OpenAI(
base_url="https://openrouter.ai/api/v1",
api_key=os.environ["OPENROUTER_API_KEY"],
)
resp = client.chat.completions.create(
model="qwen/qwen-vl-max",
messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
Official docs: https://openrouter.ai/docs
// npm install openai
import OpenAI from "openai";
const client = new OpenAI({
baseURL: "https://openrouter.ai/api/v1",
apiKey: process.env.OPENROUTER_API_KEY,
});
const resp = await client.chat.completions.create({
model: "qwen/qwen-vl-max",
messages: [{ role: "user", content: "Hello!" }],
});
console.log(resp.choices[0].message.content);
Official docs: https://openrouter.ai/docs
Related models
Similar capabilities, context window, and price tier – drawn from across the catalog so you can compare alternatives in one click.
Frequently asked questions
What is Qwen VL Max?
Qwen VL Max is a large language model API from Qwen with a 131k-token context window. It costs $0.52 per 1M input tokens and $2.08 per 1M output tokens.
How much does Qwen VL Max cost?
Qwen VL Max is priced at $0.52 per 1M input tokens and $2.08 per 1M output tokens via the Qwen API. A 50/50 input/output workload of 1M total tokens costs about $1.30.
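The blended figure above is simple arithmetic on the per-token rates listed on this page; a quick sketch to reproduce it for any workload:

```python
# Per-1M-token rates for Qwen VL Max, as listed on this page.
INPUT_PER_M = 0.52
OUTPUT_PER_M = 2.08

def blended_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for one request or workload."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# 50/50 split of 1M total tokens: 500k in, 500k out.
print(round(blended_cost(500_000, 500_000), 2))  # 1.3
```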
What is the context window of Qwen VL Max?
Qwen VL Max supports up to 131k tokens of context per request – roughly 262 pages of English text or 16384 lines of code at a typical density.
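The page and line counts above follow from rough density heuristics (about 500 tokens per page of English prose, about 8 tokens per line of code), not from any property of the model itself:

```python
CONTEXT_TOKENS = 131_072  # 131k-token context window

TOKENS_PER_PAGE = 500  # rough density heuristic for English prose
TOKENS_PER_LINE = 8    # rough density heuristic for source code

pages = CONTEXT_TOKENS // TOKENS_PER_PAGE
lines = CONTEXT_TOKENS // TOKENS_PER_LINE
print(pages, lines)  # 262 16384
```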
Does Qwen VL Max support vision, tool use, or JSON mode?
Qwen VL Max supports vision, tool/function calling, and structured JSON mode.
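For vision, the quickstart examples extend naturally: the OpenAI-compatible API accepts a list of content parts mixing text and image URLs in one user message. A minimal payload sketch, assuming the standard multimodal content-part format (the image URL here is a placeholder):

```python
# Sketch of a vision request payload in the OpenAI-compatible format
# used by the quickstart examples above. The image URL is a placeholder.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.png"}},
    ],
}

payload = {"model": "qwen/qwen-vl-max", "messages": [message]}
# Pass payload["messages"] to client.chat.completions.create(...)
# exactly as in the Python quickstart; text and image parts travel together.
```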
Who makes Qwen VL Max?
Qwen VL Max is built and operated by Qwen. Pricing, context window, and capabilities on this page are refreshed nightly from Qwen's public catalog.
Can I self-host Qwen VL Max?
Qwen VL Max is API-only – its weights are not publicly distributed by Qwen, so it cannot be self-hosted today.