LLM Cloud Hub
Vendor comparison

Baidu Qianfan vs DeepSeek

Every Baidu Qianfan and DeepSeek model, side by side: pricing per million tokens, context window, and capabilities. Refreshed nightly from upstream pricing data.

Baidu Qianfan

8 models
| Model | Capabilities | Context | In $/1M | Out $/1M |
| --- | --- | --- | --- | --- |
| CoBuddy (free) | tools | 131k | 0.0000 | 0.0000 |
| ERNIE 4.5 21B A3B | tools | 120k | 0.0700 | 0.2800 |
| ERNIE 4.5 21B A3B Thinking | - | 131k | 0.0700 | 0.2800 |
| ERNIE 4.5 300B A47B | - | 123k | 0.2800 | 1.1000 |
| ERNIE 4.5 VL 28B A3B | vision, tools | 30k | 0.1400 | 0.5600 |
| ERNIE 4.5 VL 424B A47B | vision | 123k | 0.4200 | 1.2500 |
| Qianfan-OCR-Fast | vision | 66k | 0.6800 | 2.8100 |
| Qianfan-OCR-Fast (free) | vision | 66k | 0.0000 | 0.0000 |

DeepSeek

14 models
| Model | Capabilities | Context | In $/1M | Out $/1M |
| --- | --- | --- | --- | --- |
| DeepSeek V3 | tools, json_mode | 164k | 0.3200 | 0.8900 |
| DeepSeek V3 0324 | tools, json_mode | 164k | 0.2000 | 0.7700 |
| DeepSeek V3.1 | tools, json_mode | 164k | 0.2100 | 0.7900 |
| DeepSeek V3.1 Terminus | tools, json_mode | 164k | 0.2700 | 0.9500 |
| DeepSeek V3.2 | tools, json_mode | 131k | 0.2520 | 0.3780 |
| DeepSeek V3.2 Exp | tools, json_mode | 164k | 0.2700 | 0.4100 |
| DeepSeek V3.2 Speciale | json_mode | 164k | 0.2870 | 0.4310 |
| DeepSeek V4 Flash | tools, json_mode | 1049k | 0.1260 | 0.2520 |
| DeepSeek V4 Flash (free) | tools | 1049k | 0.0000 | 0.0000 |
| DeepSeek V4 Pro | tools, json_mode | 1049k | 0.4350 | 0.8700 |
| R1 | tools | 64k | 0.7000 | 2.5000 |
| R1 0528 | tools, json_mode | 164k | 0.5000 | 2.1500 |
| R1 Distill Llama 70B | json_mode | 131k | 0.7000 | 0.8000 |
| R1 Distill Qwen 32B | json_mode | 33k | 0.2900 | 0.2900 |
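The listed rates are USD per million tokens, billed separately for input and output. A minimal sketch of how a per-request cost works out from these rates (the `RATES` dict and `request_cost` helper are illustrative, not part of any vendor SDK; the numbers are copied from the tables above):

```python
# USD per 1M tokens, (input rate, output rate), taken from the tables above.
RATES = {
    "DeepSeek V3.2": (0.2520, 0.3780),
    "DeepSeek V4 Flash": (0.1260, 0.2520),
    "ERNIE 4.5 300B A47B": (0.2800, 1.1000),
}

def request_cost(model: str, in_tokens: int, out_tokens: int) -> float:
    """Cost in USD for one request: tokens / 1M * rate, summed over both directions."""
    in_rate, out_rate = RATES[model]
    return in_tokens / 1_000_000 * in_rate + out_tokens / 1_000_000 * out_rate

# A 4,000-token prompt with a 1,000-token completion on DeepSeek V3.2:
# 4000/1e6 * 0.2520 + 1000/1e6 * 0.3780 = about $0.0014
```

Note that for asymmetric pricing (e.g. ERNIE 4.5 300B A47B, where output costs nearly 4x input), output-heavy workloads dominate the bill even at modest token counts.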
