LLM Cloud Hub
Vendor comparison

Perplexity vs xAI

Every Perplexity and xAI model side by side: pricing per million tokens, context windows, and capabilities. Refreshed nightly from upstream.

Perplexity (5 models)

Model                 Capabilities  Context  In $/1M  Out $/1M
Sonar                 vision        127k     1.0000   1.0000
Sonar Deep Research   —             128k     2.0000   8.0000
Sonar Pro             vision        200k     3.0000   15.0000
Sonar Pro Search      vision        200k     3.0000   15.0000
Sonar Reasoning Pro   vision        128k     2.0000   8.0000

xAI (11 models)

Model                  Capabilities              Context  In $/1M  Out $/1M
Grok 3                 tools, json_mode          131k     3.0000   15.0000
Grok 3 Beta            tools, json_mode          131k     3.0000   15.0000
Grok 3 Mini            tools, json_mode          131k     0.3000   0.5000
Grok 3 Mini Beta       tools, json_mode          131k     0.3000   0.5000
Grok 4                 vision, tools, json_mode  256k     3.0000   15.0000
Grok 4 Fast            vision, tools, json_mode  2000k    0.2000   0.5000
Grok 4.1 Fast          vision, tools, json_mode  2000k    0.2000   0.5000
Grok 4.20              vision, tools, json_mode  2000k    1.2500   2.5000
Grok 4.20 Multi-Agent  vision, json_mode         2000k    2.0000   6.0000
Grok 4.3               vision, tools, json_mode  1000k    1.2500   2.5000
Grok Code Fast 1       tools, json_mode          256k     0.2000   1.5000
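The per-million-token prices in the tables translate to request cost by simple proportion. A minimal sketch of that arithmetic, using the Grok 4 Fast row as an illustrative example (the `Pricing` type and `estimateCost` helper are assumptions for this sketch, not an API from the site):

```typescript
// Estimate a request's cost from per-million-token prices.
interface Pricing {
  inPerM: number;  // $ per 1M input tokens
  outPerM: number; // $ per 1M output tokens
}

function estimateCost(p: Pricing, inTokens: number, outTokens: number): number {
  return (inTokens / 1_000_000) * p.inPerM + (outTokens / 1_000_000) * p.outPerM;
}

// Example row from the table above: Grok 4 Fast ($0.20 in, $0.50 out per 1M).
const grok4Fast: Pricing = { inPerM: 0.2, outPerM: 0.5 };

// A 50k-token prompt with a 2k-token completion:
console.log(estimateCost(grok4Fast, 50_000, 2_000).toFixed(4)); // "0.0110"
```

Note that input tokens usually dominate at these ratios: even at 15x output pricing, a long prompt with a short completion is mostly input cost.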

Keyboard shortcuts

?      Show this overlay
/      Focus the first form field
g h    Go to / (home)
g b    Go to /best-llm-for
g c    Go to /cost
g s    Go to /self-hosted
g x    Go to /compliance
Esc    Close any overlay

Inspired by Linear and GitHub conventions. Two-key sequences (e.g. g then h) must be completed within roughly one second.
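The two-key mechanic above can be sketched as a tiny state machine: the first key opens a window, and the second key either completes a known sequence within the timeout or starts a new one. This is not the site's actual code; the class, the 1-second constant, and the route map are assumptions for illustration:

```typescript
const SEQUENCE_TIMEOUT_MS = 1000; // assumed window, per the note above

class KeySequence {
  private pending: string | null = null; // first key of a sequence, if any
  private pendingAt = 0;                 // timestamp of that first key

  // Feed one keypress; returns the matched route, or null if none completed.
  press(key: string, now: number, routes: Record<string, string>): string | null {
    if (this.pending !== null && now - this.pendingAt <= SEQUENCE_TIMEOUT_MS) {
      const route = routes[this.pending + " " + key] ?? null;
      this.pending = null; // sequence consumed, matched or not
      return route;
    }
    // No pending key, or the window expired: this key starts a new sequence.
    this.pending = key;
    this.pendingAt = now;
    return null;
  }
}

// Illustrative route map mirroring two of the shortcuts listed above.
const routes = { "g h": "/", "g c": "/cost" };
const seq = new KeySequence();
seq.press("g", 0, routes);                // starts a sequence, navigates nowhere
console.log(seq.press("h", 500, routes)); // "/" — second key within the window
```

Keeping the matcher as pure logic (timestamps passed in, no DOM access) makes the timeout behavior trivially testable; a real page would wire it to a `keydown` listener.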