LLM Cloud Hub
Vendor comparison

DeepSeek vs Perplexity

Every DeepSeek and Perplexity LLM model side by side: pricing per million tokens, context windows, and capabilities. Refreshed nightly from upstream.

DeepSeek

14 models
Model                      Capabilities       Context   In $/1M   Out $/1M
DeepSeek V3                tools, json_mode   164k      0.3200    0.8900
DeepSeek V3 0324           tools, json_mode   164k      0.2000    0.7700
DeepSeek V3.1              tools, json_mode   164k      0.2100    0.7900
DeepSeek V3.1 Terminus     tools, json_mode   164k      0.2700    0.9500
DeepSeek V3.2              tools, json_mode   131k      0.2520    0.3780
DeepSeek V3.2 Exp          tools, json_mode   164k      0.2700    0.4100
DeepSeek V3.2 Speciale     json_mode          164k      0.2870    0.4310
DeepSeek V4 Flash          tools, json_mode   1049k     0.1260    0.2520
DeepSeek V4 Flash (free)   tools              1049k     0.0000    0.0000
DeepSeek V4 Pro            tools, json_mode   1049k     0.4350    0.8700
R1                         tools              64k       0.7000    2.5000
R1 0528                    tools, json_mode   164k      0.5000    2.1500
R1 Distill Llama 70B       json_mode          131k      0.7000    0.8000
R1 Distill Qwen 32B        json_mode          33k       0.2900    0.2900

Perplexity

5 models
Model                 Capabilities   Context   In $/1M   Out $/1M
Sonar                 vision         127k      1.0000    1.0000
Sonar Deep Research   -              128k      2.0000    8.0000
Sonar Pro             vision         200k      3.0000    15.0000
Sonar Pro Search      vision         200k      3.0000    15.0000
Sonar Reasoning Pro   vision         128k      2.0000    8.0000
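The "In $/1M" and "Out $/1M" columns are dollar rates per million input and output tokens, so the cost of a request is a simple weighted sum. A minimal sketch of that arithmetic, using rates taken from the tables above (the function name `request_cost` is our own, not part of any vendor API):

```python
def request_cost(in_tokens: int, out_tokens: int,
                 in_per_m: float, out_per_m: float) -> float:
    """Dollar cost of one request, given per-million-token rates."""
    return in_tokens / 1e6 * in_per_m + out_tokens / 1e6 * out_per_m

# Example: DeepSeek V3.1 (In $0.21/1M, Out $0.79/1M),
# a 50k-token prompt producing a 4k-token completion:
cost = request_cost(50_000, 4_000, in_per_m=0.21, out_per_m=0.79)
# roughly $0.014 for this request
```

The same function works for any row: substitute a model's In/Out rates and your expected token counts to compare vendors on your actual workload rather than on headline prices.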

Keyboard shortcuts

?     Show this overlay
/     Focus the first form field
g h   Go to / (home)
g b   Go to /best-llm-for
g c   Go to /cost
g s   Go to /self-hosted
g x   Go to /compliance
Esc   Close any overlay

Shortcuts follow Linear and GitHub conventions: the two-key sequences (press g, then the second key) must be completed within roughly one second.
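The two-key sequences above amount to a small timed state machine: the first key arms a pending chord, and the second key either completes it within the timeout or falls through. A minimal sketch of that dispatch logic (the class and its names are illustrative, not taken from the site's source):

```python
import time


class ChordHandler:
    """Dispatch two-key sequences like 'g h' with a completion timeout."""

    def __init__(self, routes, timeout=1.0, clock=time.monotonic):
        self.routes = routes        # e.g. {("g", "h"): "/"}
        self.timeout = timeout      # seconds allowed between the two keys
        self.clock = clock          # injectable for testing
        self._pending = None        # (first_key, pressed_at) or None

    def press(self, key):
        """Feed one keypress; return the matched route, or None."""
        now = self.clock()
        if self._pending is not None:
            first, at = self._pending
            self._pending = None
            # Complete the chord only if the second key arrived in time.
            if now - at <= self.timeout and (first, key) in self.routes:
                return self.routes[(first, key)]
        # Arm a new chord if this key starts any registered sequence.
        if any(first == key for first, _ in self.routes):
            self._pending = (key, now)
        return None
```

Injecting the clock makes the timeout testable without real delays; in a browser the same structure maps onto a `keydown` listener plus a timestamp check.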