LLM Cloud Hub
Side-by-side comparison

Qwen3 30B A3B vs Tongyi DeepResearch 30B A3B

                         Qwen3 30B A3B      Tongyi DeepResearch 30B A3B
Provider                 Qwen               Alibaba
Context window (tokens)  40,960             131,072
Capabilities             tools, json_mode   tools, json_mode
Input $ / 1M tokens      0.0900             0.0900
Output $ / 1M tokens     0.4500             0.4500

Context window: the maximum tokens (input + output) the model can process in a single request.
Capabilities: optional capabilities the model advertises: vision (images), tools (function calling), json_mode (structured output).
Input price: cost for tokens you send (prompt + context). Output price: cost for tokens the model generates; output is normally 3–5× pricier than input.
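
Both models advertise tools (function calling) and json_mode (structured output). Below is a minimal sketch of requesting JSON output through an OpenAI-compatible chat completions endpoint; the base_url, api_key, and model id are hypothetical placeholders rather than values published on this page, so substitute whatever your provider documents.

```python
# Minimal sketch of json_mode via an OpenAI-compatible API.
# base_url, api_key, and the model id are placeholders, not
# values from this page; check your provider's docs.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="qwen3-30b-a3b",  # placeholder model id
    response_format={"type": "json_object"},  # request structured output
    messages=[
        {"role": "system",
         "content": "Reply in JSON with keys 'answer' and 'confidence'."},
        {"role": "user", "content": "Is 131072 a power of two?"},
    ],
)
print(response.choices[0].message.content)
```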

Frequently asked questions

Which is cheaper, Qwen3 30B A3B or Tongyi DeepResearch 30B A3B?

Neither model is cheaper: both are priced at $0.0900 per 1M input tokens and $0.4500 per 1M output tokens, so a 50/50 input/output blend costs the same $0.27 per 1M tokens on either. Your effective rate still depends on your input-vs-output ratio; the sketch below shows the arithmetic, and the cost calculator on this page gives a workload-specific estimate.
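
This is a minimal sketch using the prices from the table above; the helper function and its name are illustrative, not part of any site API.

```python
def blended_cost_per_1m(input_price: float, output_price: float,
                        input_share: float) -> float:
    """Blended $ per 1M tokens for a given input/output token mix.

    input_share is the fraction of total tokens that are input,
    e.g. 0.5 for a 50/50 blend.
    """
    return input_price * input_share + output_price * (1.0 - input_share)

# Both models: $0.09 input, $0.45 output per 1M tokens (table above).
for share in (0.5, 0.8, 0.95):
    cost = blended_cost_per_1m(0.09, 0.45, share)
    print(f"{share:.0%} input -> ${cost:.4f} per 1M tokens")
# 50% input -> $0.2700, 80% -> $0.1620, 95% -> $0.1080
```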

Which has a larger context window, Qwen3 30B A3B or Tongyi DeepResearch 30B A3B?

Tongyi DeepResearch 30B A3B has the larger context window at 131,072 tokens versus 40,960 for Qwen3 30B A3B. That means Tongyi DeepResearch 30B A3B can ingest 3.2× as much text per request (131,072 / 40,960 = 3.2); the sketch below shows a rough pre-flight fit check.
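
To estimate whether a document fits before sending it, a common rule of thumb is roughly 4 characters per English token. This sketch assumes that heuristic, so treat it as an estimate and use the model's actual tokenizer for anything borderline.

```python
CHARS_PER_TOKEN = 4  # rough heuristic for English text; assumption, not exact

CONTEXT_WINDOWS = {
    "Qwen3 30B A3B": 40_960,
    "Tongyi DeepResearch 30B A3B": 131_072,
}

def fits(text: str, model: str, reserved_output: int = 2_048) -> bool:
    """Estimate whether `text` fits the model's context window,
    leaving `reserved_output` tokens for the reply."""
    est_tokens = len(text) // CHARS_PER_TOKEN
    return est_tokens + reserved_output <= CONTEXT_WINDOWS[model]

doc = "lorem ipsum " * 20_000  # ~240k chars, roughly 60k tokens
print(fits(doc, "Qwen3 30B A3B"))                # False
print(fits(doc, "Tongyi DeepResearch 30B A3B"))  # True
```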

What is the difference between Qwen3 30B A3B and Tongyi DeepResearch 30B A3B?

Qwen3 30B A3B comes from Qwen; Tongyi DeepResearch 30B A3B comes from Alibaba. As listed on this page, the two are priced identically and advertise the same capabilities (tools, json_mode); the main difference is the context window. See the side-by-side table above for the exact figures, refreshed nightly.
