LLM Cloud Hub
Vendor comparison

MiniMax vs Perplexity

Every MiniMax and Perplexity LLM model side by side: pricing per million tokens, context windows, and capabilities. Refreshed nightly from upstream.

MiniMax

8 models
Model                 Capabilities       Context   In $/1M   Out $/1M
MiniMax M1            tools              1000k      0.4000     2.2000
MiniMax M2            tools, json_mode    197k      0.2550     1.0000
MiniMax M2-her        -                    66k      0.3000     1.2000
MiniMax M2.1          tools, json_mode    197k      0.2900     0.9500
MiniMax M2.5          tools, json_mode    197k      0.1500     1.1500
MiniMax M2.5 (free)   tools, json_mode    197k      0.0000     0.0000
MiniMax M2.7          tools, json_mode    197k      0.2600     1.2000
MiniMax-01            vision             1000k      0.2000     1.1000

Perplexity

5 models
Model                 Capabilities   Context   In $/1M   Out $/1M
Sonar                 vision           127k     1.0000     1.0000
Sonar Deep Research   -                128k     2.0000     8.0000
Sonar Pro             vision           200k     3.0000    15.0000
Sonar Pro Search      vision           200k     3.0000    15.0000
Sonar Reasoning Pro   vision           128k     2.0000     8.0000
