LLM Cloud Hub

Llama 3.1 8B Instruct

Meta Llama 3.1 8B Instruct — pricing, 16k context window, API cost calculator and alternatives.

By Meta

🔧 Tools (function calling): the model can structure responses as tool calls with typed arguments. Reliability differs between providers.
{} JSON mode: forces output to be valid JSON, reducing parse errors. Some providers also let you constrain output to a schema.

Context window (maximum tokens, input + output, the model can process in a single request): 16,384 tokens
Input price (what the model charges for tokens you send, prompt + context): $0.0200 per 1M tokens
Output price (what the model charges for tokens it generates; usually 3–5× pricier than input): $0.0500 per 1M tokens

Specs

Provider
Meta
Slug
meta-llama/llama-3-1-8b-instruct
Capabilities
tools, json_mode

Pricing freshness

Tier
standard
Currency
USD
As of
2026-05-08 17:08 UTC
Estimate monthly cost →
See alternatives to Llama 3.1 8B Instruct →
Open weights — self-host this
Llama 3.1 8B Instruct · 8B params
Llama 3.1 Community License · meta-llama/Llama-3.1-8B-Instruct
Compare self-hosting cost →

Pricing history

Tracking Llama 3.1 8B Instruct pricing since 2026-05-08. We'll plot the chart here once it changes.

Quickstart — call Llama 3.1 8B Instruct from your app

curl https://openrouter.ai/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -d '{
    "model": "meta-llama/llama-3-1-8b-instruct",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Official docs: https://openrouter.ai/docs
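The same call from Python, as a minimal sketch using only the standard library. The request is built but not sent; uncomment the last lines to send it, which requires a valid OPENROUTER_API_KEY in your environment.

```python
import json
import os
import urllib.request

# Same request body as the curl example above.
API_URL = "https://openrouter.ai/api/v1/chat/completions"

payload = {
    "model": "meta-llama/llama-3-1-8b-instruct",
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
    },
)

# Uncomment to send (requires a valid OPENROUTER_API_KEY):
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```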

Related models

Similar capabilities, context window, and price tier — drawn from across the catalog so you can compare alternatives in one click.

Frequently asked questions

What is Llama 3.1 8B Instruct?

Llama 3.1 8B Instruct is a large language model API from Meta with a 16k-token context window. It costs $0.02 per 1M input tokens and $0.05 per 1M output tokens.

How much does Llama 3.1 8B Instruct cost?

Llama 3.1 8B Instruct is priced at $0.02 per 1M input tokens and $0.05 per 1M output tokens via the Meta API. A 50/50 input/output workload of 1M total tokens costs about $0.035.
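The arithmetic behind that 50/50 figure can be checked with a few lines of Python:

```python
# Cost in USD for a workload at this model's listed per-1M-token prices.
INPUT_PRICE_PER_M = 0.02   # $ per 1M input tokens
OUTPUT_PRICE_PER_M = 0.05  # $ per 1M output tokens

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    raw = (input_tokens * INPUT_PRICE_PER_M
           + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000
    return round(raw, 6)

# 1M total tokens, split 50/50 between input and output:
print(cost_usd(500_000, 500_000))  # → 0.035
```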

What is the context window of Llama 3.1 8B Instruct?

Llama 3.1 8B Instruct supports up to 16k tokens of context per request — roughly 33 pages of English text or 2048 lines of code at a typical density.
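A quick fit check before sending a prompt can use the common rough heuristic of ~4 characters per English-text token (an approximation, not a real tokenizer). Remember the window covers input and output together, so reserve room for the reply:

```python
# Rough context-fit check using a ~4 chars/token heuristic (approximate;
# a real tokenizer will differ). The window covers input AND output.
CONTEXT_WINDOW = 16_384

def fits(prompt: str, reserved_output_tokens: int = 1024) -> bool:
    est_prompt_tokens = len(prompt) // 4  # heuristic estimate
    return est_prompt_tokens + reserved_output_tokens <= CONTEXT_WINDOW

print(fits("hello " * 5000))  # ~30,000 chars ≈ 7,500 tokens → True
```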

Does Llama 3.1 8B Instruct support vision, tool use, or JSON mode?

Llama 3.1 8B Instruct supports tool/function calling and structured JSON mode. It does not support image input (vision).
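A tool-calling request adds a tools array to the body shown in the quickstart, following the OpenAI-compatible schema that OpenRouter accepts. The get_weather function below is a hypothetical example, not a real API:

```python
# Sketch of an OpenAI-compatible tool-calling request body.
# get_weather is a hypothetical tool used only for illustration.
payload = {
    "model": "meta-llama/llama-3-1-8b-instruct",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

# Providers that support JSON mode instead accept:
# payload["response_format"] = {"type": "json_object"}
```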

Who makes Llama 3.1 8B Instruct?

Llama 3.1 8B Instruct is built and operated by Meta. Pricing, context window, and capabilities on this page are refreshed nightly from Meta's public catalog.

Can I self-host Llama 3.1 8B Instruct?

Llama 3.1 8B Instruct has open weights — you can run it on your own GPUs. License: Llama 3.1 Community License. Compare self-hosting cost vs. the API at /self-hosted/llama-3-1-8b-instruct.
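For a first sense of the hardware needed, a back-of-envelope VRAM estimate (weights only, assuming 16-bit precision; KV cache and activations need extra headroom on top, so this is a floor, not a sizing guide):

```python
# Back-of-envelope VRAM floor for serving the 8B weights in fp16/bf16.
# KV cache and activations require additional headroom beyond this.
params_billion = 8
bytes_per_param = 2  # fp16 / bf16

weights_gb = params_billion * 1e9 * bytes_per_param / 1e9
print(weights_gb)  # → 16.0
```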
