R1 Distill Llama 70B
DeepSeek R1 Distill Llama 70B — pricing, 131k context window, API cost calculator and alternatives.
By DeepSeek
Specs
- Provider: DeepSeek
- Slug: deepseek/deepseek-r1-distill-llama-70b
- Capabilities: json_mode
Pricing freshness
- Tier: standard
- Currency: USD
- As of: 2026-05-08 17:08 UTC
Pricing history
Tracking R1 Distill Llama 70B pricing since 2026-05-08. A price-history chart will appear here once the price changes.
Quickstart — call R1 Distill Llama 70B from your app
curl https://api.deepseek.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
  -d '{
    "model": "deepseek-r1-distill-llama-70b",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
Official docs: https://api-docs.deepseek.com/
# pip install openai
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com/v1",
    api_key=os.environ["DEEPSEEK_API_KEY"],
)
resp = client.chat.completions.create(
    model="deepseek-r1-distill-llama-70b",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
// npm install openai
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.deepseek.com/v1",
  apiKey: process.env.DEEPSEEK_API_KEY,
});
const resp = await client.chat.completions.create({
  model: "deepseek-r1-distill-llama-70b",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(resp.choices[0].message.content);
Related models
Similar capabilities, context window, and price tier — drawn from across the catalog so you can compare alternatives in one click.
Frequently asked questions
What is R1 Distill Llama 70B?
R1 Distill Llama 70B is a large language model API from DeepSeek with a 131k-token context window. It costs $0.70 per 1M input tokens and $0.80 per 1M output tokens.
How much does R1 Distill Llama 70B cost?
R1 Distill Llama 70B is priced at $0.70 per 1M input tokens and $0.80 per 1M output tokens via the DeepSeek API. A 50/50 input/output workload of 1M total tokens costs about $0.75.
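The cost arithmetic above can be sketched as a small helper. The rates are the ones listed on this page; the function name is our own, not part of any SDK:

```python
# Estimate R1 Distill Llama 70B API cost from token counts, using the
# per-million-token rates listed on this page.
INPUT_PRICE_PER_M = 0.70   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.80  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one workload."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# 50/50 split of 1M total tokens:
print(round(estimate_cost(500_000, 500_000), 2))  # → 0.75
```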
What is the context window of R1 Distill Llama 70B?
R1 Distill Llama 70B supports up to 131k tokens of context per request — roughly 262 pages of English text or 16384 lines of code at a typical density.
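The page-and-line estimate above follows from rough token densities. The densities used here (about 500 tokens per page of prose, about 8 tokens per line of code) are assumptions that reproduce the figures quoted:

```python
# Rough context-window arithmetic for a 131k (2**17) token window.
CONTEXT_TOKENS = 131_072   # 131k-token context window
TOKENS_PER_PAGE = 500      # assumed density for English prose
TOKENS_PER_LINE = 8        # assumed density for code

pages = CONTEXT_TOKENS // TOKENS_PER_PAGE  # ≈ 262 pages
lines = CONTEXT_TOKENS // TOKENS_PER_LINE  # = 16384 lines
print(pages, lines)
```

Actual density varies with language, formatting, and the tokenizer, so treat these as order-of-magnitude figures.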
Does R1 Distill Llama 70B support vision, tool use, or JSON mode?
R1 Distill Llama 70B supports structured JSON mode. It does not support image input (vision) or tool/function calling.
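With the OpenAI-compatible SDK, JSON mode is typically enabled via a `response_format` field on the chat-completions request. The field name below follows the OpenAI convention; confirm it against DeepSeek's docs before relying on it. This sketch just builds the request payload:

```python
# Sketch of a chat-completions payload requesting JSON-mode output.
# "response_format" follows the OpenAI convention (an assumption here).
payload = {
    "model": "deepseek-r1-distill-llama-70b",
    "messages": [
        {"role": "system", "content": "Reply with a JSON object."},
        {"role": "user", "content": "List three primary colors."},
    ],
    "response_format": {"type": "json_object"},  # enables JSON mode
}
# Passing this payload to client.chat.completions.create(**payload)
# asks the API for a message whose content parses as JSON.
print(payload["response_format"]["type"])
```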
Who makes R1 Distill Llama 70B?
R1 Distill Llama 70B is built and operated by DeepSeek. Pricing, context window, and capabilities on this page are refreshed nightly from DeepSeek's public catalog.
Can I self-host R1 Distill Llama 70B?
Yes. DeepSeek has released the R1 Distill Llama 70B weights publicly (on Hugging Face, under the MIT license), so you can self-host it with your own inference stack in addition to calling the hosted API.