LLM Cloud Hub

GPT-3.5 Turbo 16k

OpenAI GPT-3.5 Turbo 16k — pricing, 16k context window, API cost calculator and alternatives.

By OpenAI

🔧 Tools — Function calling: the model can structure responses as tool calls with typed arguments. Reliability differs between providers. Glossary →
{} JSON — Forces output to be valid JSON, reducing parse errors. Some providers also let you constrain output to a schema. Glossary →
Context window Maximum tokens (input + output) the model can process in a single request. Glossary →
16,385
tokens
Input price What the model charges for tokens you send (prompt + context). Glossary →
$3.0000
per 1M tokens
Output price What the model charges for tokens it generates. Usually 3–5× pricier than input. Glossary →
$4.0000
per 1M tokens

Specs

Provider
OpenAI
Slug
openai/gpt-3-5-turbo-16k
Capabilities
tools, json_mode

Pricing freshness

Tier
standard
Currency
USD
As of
2026-05-08 17:08 UTC
Estimate monthly cost →
See alternatives to GPT-3.5 Turbo 16k →

Pricing history

Tracking GPT-3.5 Turbo 16k pricing since 2026-05-08. A chart will appear here once the price changes.

Quickstart — call GPT-3.5 Turbo 16k from your app

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-3-5-turbo-16k",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Official docs: https://platform.openai.com/docs/api-reference/chat
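The same request can be built in Python without any SDK. A minimal sketch using only the standard library; the `build_chat_request` helper is illustrative, not part of any official client:

```python
import json

API_URL = "https://api.openai.com/v1/chat/completions"


def build_chat_request(prompt: str, api_key: str) -> tuple[dict, bytes]:
    """Build the headers and JSON body for a Chat Completions call."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    body = json.dumps({
        "model": "gpt-3.5-turbo-16k",
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return headers, body


# To actually send it (requires a valid key in OPENAI_API_KEY):
#   import os, urllib.request
#   headers, body = build_chat_request("Hello!", os.environ["OPENAI_API_KEY"])
#   req = urllib.request.Request(API_URL, data=body, headers=headers)
#   print(urllib.request.urlopen(req).read().decode())
```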

Related models

Similar capabilities, context window, and price tier — drawn from across the catalog so you can compare alternatives in one click.

Frequently asked questions

What is GPT-3.5 Turbo 16k?

GPT-3.5 Turbo 16k is a large language model API from OpenAI with a 16,385-token context window. It costs $3 per 1M input tokens and $4 per 1M output tokens.

How much does GPT-3.5 Turbo 16k cost?

GPT-3.5 Turbo 16k is priced at $3 per 1M input tokens and $4 per 1M output tokens via the OpenAI API. A 50/50 input/output workload of 1M total tokens costs about $3.50.
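The blended figure follows directly from the two per-1M rates. A small sketch, with this page's prices as defaults:

```python
def cost_usd(input_tokens: int, output_tokens: int,
             input_per_m: float = 3.00, output_per_m: float = 4.00) -> float:
    """Request cost in USD at per-1M-token rates (defaults: this page's prices)."""
    return (input_tokens * input_per_m + output_tokens * output_per_m) / 1_000_000


# 50/50 split of 1M total tokens: 0.5M * $3/1M + 0.5M * $4/1M
print(cost_usd(500_000, 500_000))  # → 3.5
```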

What is the context window of GPT-3.5 Turbo 16k?

GPT-3.5 Turbo 16k supports up to 16,385 tokens of context per request — roughly 33 pages of English text or about 2,000 lines of code at a typical density.
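The page estimate is a heuristic, not a measurement. One common rule of thumb — about 0.75 English words per token and about 375 words per single-spaced page — reproduces it; both constants are assumptions, and real token counts vary by tokenizer and text:

```python
WORDS_PER_TOKEN = 0.75  # rough average for English prose
WORDS_PER_PAGE = 375    # single-spaced page at typical density


def approx_pages(tokens: int) -> int:
    """Convert a token budget to an approximate page count."""
    return round(tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE)


print(approx_pages(16_385))  # → 33
```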

Does GPT-3.5 Turbo 16k support vision, tool use, or JSON mode?

GPT-3.5 Turbo 16k supports tool/function calling and structured JSON mode. It does not support image input (vision).
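Both capabilities are switched on per request in the Chat Completions payload: `tools` declares callable functions with JSON Schema arguments, and `response_format` enables JSON mode. A minimal sketch — the `get_weather` tool is a hypothetical example, not a real API:

```python
payload = {
    "model": "gpt-3.5-turbo-16k",
    # JSON mode requires the prompt to mention JSON explicitly.
    "messages": [{"role": "user", "content": "Weather in Paris, as JSON"}],
    # Function calling: declare tools with typed JSON Schema arguments.
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    # JSON mode: force the reply to be a valid JSON object.
    "response_format": {"type": "json_object"},
}
```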

Who makes GPT-3.5 Turbo 16k?

GPT-3.5 Turbo 16k is built and operated by OpenAI. Pricing, context window, and capabilities on this page are refreshed nightly from OpenAI's public catalog.

Can I self-host GPT-3.5 Turbo 16k?

GPT-3.5 Turbo 16k is API-only — its weights are not publicly distributed by OpenAI, so it cannot be self-hosted today.

Keyboard shortcuts

? — Show this overlay
/ — Focus the first form field
g h — Go to / (home)
g b — Go to /best-llm-for
g c — Go to /cost
g s — Go to /self-hosted
g x — Go to /compliance
Esc — Close any overlay

Inspired by Linear and GitHub conventions. Two-key sequences (g, then h) must be completed within about one second.