LLM Cloud Hub
Side-by-side comparison

Llama 3.3 70B Instruct vs Devstral Small 1.1

Meta

Llama 3.3 70B Instruct

πŸ”§ Tools {} JSON
Input / 1M
$0.1000
Output / 1M
$0.3200
View Llama 3.3 70B Instruct β†’
Mistral

Devstral Small 1.1

πŸ”§ Tools {} JSON
Input / 1M
$0.1000
Output / 1M
$0.3000
View Devstral Small 1.1 β†’
Llama 3.3 70B Instruct | Devstral Small 1.1
Provider: Meta | Mistral
Context window (maximum tokens, input + output, the model can process in a single request): 131,072 | 131,072
Capabilities (optional capabilities the model advertises: vision for images, tools for function calling, json_mode for structured output; see the sketch after this table): tools, json_mode | tools, json_mode
Input $ / 1M tokens (cost for tokens you send, prompt + context; cheaper side highlighted): $0.1000 | $0.1000
Output $ / 1M tokens (cost for tokens the model generates; output is normally 3–5× pricier than input): $0.3200 | $0.3000
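
Both models advertise tools and json_mode. Below is a minimal sketch of how those two capabilities are typically exercised, assuming the model is served behind an OpenAI-compatible chat-completions endpoint; the base URL, API key, model identifier, and the get_price function are placeholders, not values taken from this page.

# Sketch only: tool calling and JSON-mode output via an OpenAI-compatible API.
# BASE_URL, API key, model id, and the declared function are illustrative.
from openai import OpenAI

client = OpenAI(base_url="https://example-provider.invalid/v1", api_key="YOUR_KEY")

# json_mode: ask the model to return a single valid JSON object.
resp = client.chat.completions.create(
    model="llama-3.3-70b-instruct",  # placeholder identifier
    messages=[{"role": "user",
               "content": "Return a JSON object with keys city and country for Paris."}],
    response_format={"type": "json_object"},
)
print(resp.choices[0].message.content)

# tools: declare a function schema the model may choose to call instead of answering.
tools = [{
    "type": "function",
    "function": {
        "name": "get_price",  # hypothetical function
        "description": "Look up $/1M token pricing for a model.",
        "parameters": {
            "type": "object",
            "properties": {"model": {"type": "string"}},
            "required": ["model"],
        },
    },
}]
resp = client.chat.completions.create(
    model="llama-3.3-70b-instruct",  # placeholder identifier
    messages=[{"role": "user", "content": "How much does Devstral Small 1.1 cost?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)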

Frequently asked questions

Which is cheaper, Llama 3.3 70B Instruct or Devstral Small 1.1?

Devstral Small 1.1 is cheaper than Llama 3.3 70B Instruct on a 50/50 input/output blend by about $0.01 per 1M tokens ($0.2000 vs $0.2100 blended). Exact savings depend on your input-vs-output ratio; use the cost calculator on this page for a workload-specific estimate.
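
A minimal sketch of that blended estimate, using the prices from the table above; the blended_cost_per_1m helper is illustrative and is not the page's calculator.

# Blended $/1M-token estimate; prices copied from the comparison table above.
PRICES = {
    "Llama 3.3 70B Instruct": {"input": 0.10, "output": 0.32},
    "Devstral Small 1.1": {"input": 0.10, "output": 0.30},
}

def blended_cost_per_1m(model: str, input_share: float = 0.5) -> float:
    """Blended price per 1M tokens for a given input share (0.5 = 50/50 blend)."""
    p = PRICES[model]
    return input_share * p["input"] + (1.0 - input_share) * p["output"]

for name in PRICES:
    print(f"{name}: ${blended_cost_per_1m(name):.4f} per 1M tokens at a 50/50 blend")
# Llama 3.3 70B Instruct: $0.2100, Devstral Small 1.1: $0.2000 (about $0.01 apart)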

What is the difference between Llama 3.3 70B Instruct and Devstral Small 1.1?

Llama 3.3 70B Instruct comes from Meta; Devstral Small 1.1 comes from Mistral. They differ in pricing, context window, and supported capabilities β€” see the side-by-side table on this page for the exact figures, refreshed nightly.
