
Llama 4 Maverick

Meta Llama 4 Maverick: pricing, 1,048,576-token context window, API cost calculator, and alternatives.

By Meta

πŸ‘ Vision Accepts images alongside text. Quality varies β€” text recognition (OCR) is largely solved; nuanced visual reasoning is not. Glossary β†’ {} JSON Forces output to be valid JSON, reducing parse errors. Some providers also let you constrain to a schema. Glossary β†’
Context window: maximum tokens (input + output) the model can process in a single request.
1,048,576 tokens
Input price: what the model charges for tokens you send (prompt + context).
$0.1500 per 1M tokens
Output price: what the model charges for tokens it generates. Usually 3–5× pricier than input.
$0.6000 per 1M tokens

Specs

Provider
Meta
Slug
meta-llama/llama-4-maverick
Capabilities
vision, json_mode

Pricing freshness

Tier
standard
Currency
USD
As of
2026-05-08 17:08 UTC

Pricing history

Tracking Llama 4 Maverick pricing since 2026-05-08. A chart will appear here once the price changes.

Quickstart β€” call Llama 4 Maverick from your app

curl https://openrouter.ai/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -d '{
    "model": "meta-llama/llama-4-maverick",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
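The curl call above ports directly to Python's standard library. A minimal sketch, assuming only the endpoint and payload shape shown above (the `build_request` helper and its `**extra` hook are illustrative, not part of any SDK):

```python
import json
import os
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model, messages, **extra):
    """Build an OpenRouter chat-completions request; extra kwargs
    (e.g. response_format) are merged into the JSON body."""
    payload = {"model": model, "messages": messages, **extra}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ.get("OPENROUTER_API_KEY", ""),
        },
    )

req = build_request(
    "meta-llama/llama-4-maverick",
    [{"role": "user", "content": "Hello!"}],
)
# To actually send it:
# reply = json.load(urllib.request.urlopen(req))
```

Building the request separately from sending it keeps the payload easy to inspect and test without a network call.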

Official docs: https://openrouter.ai/docs

Related models

Similar capabilities, context window, and price tier, drawn from across the catalog so you can compare alternatives in one click.

Frequently asked questions

What is Llama 4 Maverick?

Llama 4 Maverick is a large language model API from Meta with a 1M-token context window. It costs $0.15 per 1M input tokens and $0.60 per 1M output tokens.

How much does Llama 4 Maverick cost?

Llama 4 Maverick is priced at $0.15 per 1M input tokens and $0.60 per 1M output tokens via the Meta API. A 50/50 input/output workload of 1M total tokens costs about $0.375.
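The arithmetic behind that figure can be written out directly from the rates listed on this page (the `cost_usd` helper is a hypothetical name, not an API):

```python
# Published Llama 4 Maverick rates (USD per 1M tokens) from this page.
INPUT_PER_M = 0.15
OUTPUT_PER_M = 0.60

def cost_usd(input_tokens, output_tokens):
    """Cost of one workload at per-million-token rates."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# The FAQ's example: 50/50 split over 1M total tokens.
print(f"${cost_usd(500_000, 500_000):.3f}")  # prints $0.375
```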

What is the context window of Llama 4 Maverick?

Llama 4 Maverick supports up to 1,048,576 tokens of context per request, roughly 2,097 pages of English text or 131,072 lines of code at a typical density.
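Those page and line-of-code figures follow from rough density assumptions; a sketch with the assumed densities (about 500 tokens per prose page, about 8 tokens per line of code; both are rules of thumb, not measurements) made explicit:

```python
CONTEXT_TOKENS = 1_048_576  # Llama 4 Maverick context window

# Assumed densities behind the estimates above (rules of thumb):
TOKENS_PER_PAGE = 500  # dense English prose
TOKENS_PER_LINE = 8    # typical line of code

print(CONTEXT_TOKENS // TOKENS_PER_PAGE)  # 2097 pages
print(CONTEXT_TOKENS // TOKENS_PER_LINE)  # 131072 lines
```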

Does Llama 4 Maverick support vision, tool use, or JSON mode?

Llama 4 Maverick supports image input (vision) and structured JSON mode. It does not support tool/function calling.
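For vision input, OpenRouter accepts the OpenAI-style content-parts message shape, where an image URL rides alongside the text. A sketch of such a request body, assuming that format (the image URL is a placeholder):

```python
import json

# A multimodal message: text part plus an image_url part.
body = {
    "model": "meta-llama/llama-4-maverick",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
}
print(json.dumps(body, indent=2))
```

This body drops into the same `-d` payload as the quickstart curl command above.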

Who makes Llama 4 Maverick?

Llama 4 Maverick is built and operated by Meta. Pricing, context window, and capabilities on this page are refreshed nightly from Meta's public catalog.

Can I self-host Llama 4 Maverick?

Yes. Meta distributes Llama 4 Maverick's weights under the Llama 4 Community License (via llama.com and Hugging Face), so it can be self-hosted, though its size makes multi-GPU hardware a practical requirement.
