Llama 3.2 1B Instruct
Meta Llama 3.2 1B Instruct — pricing, 60k context window, API cost calculator and alternatives.
By Meta
Specs
- Provider
- Meta
- Slug
- meta-llama/llama-3-2-1b-instruct
- Capabilities
- text-only
Pricing freshness
- Tier
- standard
- Currency
- USD
- As of
- 2026-05-08 17:08 UTC
Pricing history
Tracking Llama 3.2 1B Instruct pricing since 2026-05-08. A chart will appear here once the price changes.
Quickstart — call Llama 3.2 1B Instruct from your app
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -d '{
    "model": "meta-llama/llama-3-2-1b-instruct",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
Official docs: https://openrouter.ai/docs
# pip install openai
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

resp = client.chat.completions.create(
    model="meta-llama/llama-3-2-1b-instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
Official docs: https://openrouter.ai/docs
// npm install openai
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
});

const resp = await client.chat.completions.create({
  model: "meta-llama/llama-3-2-1b-instruct",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(resp.choices[0].message.content);
Official docs: https://openrouter.ai/docs
Related models
Similar capabilities, context window, and price tier — drawn from across the catalog so you can compare alternatives in one click.
Frequently asked questions
What is Llama 3.2 1B Instruct?
Llama 3.2 1B Instruct is a large language model API from Meta with a 60k-token context window. It costs $0.027 per 1M input tokens and $0.20 per 1M output tokens.
How much does Llama 3.2 1B Instruct cost?
Llama 3.2 1B Instruct is priced at $0.027 per 1M input tokens and $0.20 per 1M output tokens via the Meta API. A 50/50 input/output workload of 1M total tokens costs about $0.1135 (0.5M × $0.027/1M + 0.5M × $0.20/1M).
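The arithmetic above can be sketched as a small cost calculator. The rates are the ones listed on this page; the function name and token counts are illustrative, not part of any API:

```python
# Estimate Llama 3.2 1B Instruct API cost from token counts.
# Rates are USD per 1M tokens, as listed on this page.
INPUT_RATE = 0.027
OUTPUT_RATE = 0.20

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one workload."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# A 50/50 split of 1M total tokens:
print(f"${estimate_cost(500_000, 500_000):.4f}")  # → $0.1135
```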
What is the context window of Llama 3.2 1B Instruct?
Llama 3.2 1B Instruct supports up to 60k tokens of context per request — roughly 120 pages of English text or 7500 lines of code at a typical density.
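The page and line-count figures are rule-of-thumb conversions, not measurements. A minimal sketch of how they follow from the 60k-token window, assuming ~0.75 words per token, ~375 words per page, and ~8 tokens per line of code (all three ratios are assumptions):

```python
# Rough rule-of-thumb conversions from a token budget.
# 0.75 words/token, 375 words/page, and 8 tokens/line are assumptions.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 375
TOKENS_PER_CODE_LINE = 8

def context_estimates(tokens: int) -> tuple[int, int]:
    """Return (approx. pages of English text, approx. lines of code)."""
    pages = round(tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE)
    code_lines = round(tokens / TOKENS_PER_CODE_LINE)
    return pages, code_lines

print(context_estimates(60_000))  # → (120, 7500)
```

Actual token density varies with language, formatting, and tokenizer, so treat these as order-of-magnitude estimates.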
Does Llama 3.2 1B Instruct support vision, tool use, or JSON mode?
Llama 3.2 1B Instruct is a text-only model — it does not support vision, tool use, or structured JSON mode.
Who makes Llama 3.2 1B Instruct?
Llama 3.2 1B Instruct is built and operated by Meta. Pricing, context window, and capabilities on this page are refreshed nightly from Meta's public catalog.
Can I self-host Llama 3.2 1B Instruct?
Yes. Meta distributes the Llama 3.2 1B Instruct weights publicly (for example via Hugging Face) under the Llama 3.2 Community License, so you can self-host it with common runtimes, subject to the license terms. The pricing on this page applies only to the hosted API.