Best LLM for RAG / Q&A over documents
Retrieval-augmented generation: retrieved chunks + question → answer.
Why this ranking is opinionated
RAG is heavy on input tokens, since every request ships retrieved context alongside the question. Cache hit rates above 50% are common on FAQ-shaped traffic, so prompt-caching support is a major cost lever. A large context window is essential.
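To make the caching lever concrete, here is a minimal monthly-cost sketch in Python. The prices, request volumes, and the 90% discount on cached input reads are illustrative assumptions, not the figures behind this page's ranking:

```python
# Rough monthly-cost model for a RAG workload with prompt caching.
# All prices and volumes are illustrative placeholders.

def monthly_rag_cost(
    requests_per_month: int,
    input_tokens_per_request: int,      # retrieved chunks + question
    output_tokens_per_request: int,     # generated answer
    price_in_per_mtok: float,           # $ per 1M input tokens, uncached
    price_out_per_mtok: float,          # $ per 1M output tokens
    cache_hit_rate: float = 0.5,        # >50% is common on FAQ-shaped traffic
    cached_read_discount: float = 0.9,  # assume cached reads bill at 10%
) -> float:
    input_tokens = requests_per_month * input_tokens_per_request
    output_tokens = requests_per_month * output_tokens_per_request
    cached = input_tokens * cache_hit_rate
    uncached = input_tokens - cached
    # Cached input tokens bill at a steep discount; the rest at full price.
    input_cost = (
        uncached * price_in_per_mtok
        + cached * price_in_per_mtok * (1 - cached_read_discount)
    ) / 1_000_000
    output_cost = output_tokens * price_out_per_mtok / 1_000_000
    return input_cost + output_cost

# Example: 100k requests/mo, 8k input and 300 output tokens per request,
# at hypothetical prices of $0.50/M input and $1.50/M output.
print(f"${monthly_rag_cost(100_000, 8_000, 300, 0.50, 1.50):,.2f}/mo")
```

At these placeholder numbers, a 50% cache hit rate cuts the input bill from $400 to $220 per month, which is why caching support weighs so heavily here.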
Top 5 recommendations
Ranked by monthly cost at this workload.

1. Trinity Large Thinking (free) from Arcee AI: cheapest qualifying option at this workload (~$0.00/mo).
   - 262,144-token context window, far above this use case's 64,000-token minimum.
   - Supports the preferred capability: tools.
2. ~$0.00/mo (+0% over the cheapest option).
   - Supports the preferred capability: tools.
3. ~$0.00/mo (+0% over the cheapest option).
   - Missing the preferred capability (tools); may need a workaround.
4. ~$0.00/mo (+0% over the cheapest option).
   - 1,048,576-token context window, far above this use case's 64,000-token minimum.
   - Supports the preferred capability: tools.
5. ~$0.00/mo (+0% over the cheapest option).
   - 262,144-token context window, far above this use case's 64,000-token minimum.
   - Supports the preferred capability: tools.
Frequently asked questions
What makes a good LLM for RAG / Q&A over documents?
A good fit handles input-heavy traffic well: RAG ships retrieved context with every request, so prompt-caching support is a major cost lever (cache hit rates above 50% are common on FAQ-shaped traffic), and a large context window is essential.
What capabilities matter most for RAG / Q&A over documents?
For RAG / Q&A over documents, the typical filters are a context window of at least 64k tokens and no hard capability requirement, though tool support is preferred. The ranking on this page weights monthly cost (at the workload defaults shown above) most heavily, then capability fit, roughly as in the sketch below.
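As a rough illustration of that filter-then-rank logic, here is a hedged Python sketch. The `Model` fields, the example entries, and the tie-break penalty for a missing preferred capability are assumptions for illustration, not this page's exact scoring:

```python
# Filter models by hard requirements, then rank by cost with capability
# fit as a tie-breaker. Fields and entries are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class Model:
    name: str
    context_window: int   # tokens
    monthly_cost: float   # $ at the workload defaults
    capabilities: set = field(default_factory=set)

MIN_CONTEXT = 64_000      # hard filter for this use case
PREFERRED = {"tools"}     # preferred, not required

def rank(models: list[Model]) -> list[Model]:
    qualifying = [m for m in models if m.context_window >= MIN_CONTEXT]
    # Monthly cost dominates; a missing preferred capability only adds
    # a tie-break penalty rather than disqualifying the model.
    return sorted(
        qualifying,
        key=lambda m: (m.monthly_cost, len(PREFERRED - m.capabilities)),
    )

models = [
    Model("model-a", 262_144, 0.0, {"tools"}),
    Model("model-b", 131_072, 0.0, set()),
    Model("model-c", 32_768, 0.0, {"tools"}),  # dropped: context too small
]
print([m.name for m in rank(models)])  # ['model-a', 'model-b']
```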
What is currently the cheapest LLM for RAG / Q&A over documents?
At the typical workload defaults, Trinity Large Thinking (free) from Arcee AI ranks cheapest right now (~$0 / month). Plug your own monthly token volumes into the calculator on this page for a workload-specific number.
Is the cheapest LLM always the right choice for RAG / Q&A over documents?
Not always. Cheap models often trade off reasoning quality, tool reliability, or context size. Use the cheapest model as a baseline and benchmark it against a tier-up model on your own evaluation set before committing to a contract; quality differences compound over millions of tokens. A minimal comparison harness is sketched below.
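One way to run that head-to-head is a tiny harness like the following sketch. `ask_model`, the model IDs, and the substring grading are placeholders to adapt to your own stack and a proper judge:

```python
# Compare a cheap baseline against a tier-up candidate on a held-out
# golden set. Grading here is naive substring match; swap in your own
# judge (human review or an LLM grader) for real decisions.

golden_set = [
    {"question": "What is the refund window?", "answer": "30 days"},
    # ... add your own held-out questions with known answers
]

def ask_model(model_id: str, question: str) -> str:
    # Placeholder: wire this to your provider's completion client.
    raise NotImplementedError

def accuracy(model_id: str) -> float:
    hits = sum(
        ex["answer"].lower() in ask_model(model_id, ex["question"]).lower()
        for ex in golden_set
    )
    return hits / len(golden_set)

# Uncomment once ask_model is wired up (model IDs are placeholders):
# for model_id in ("cheap-baseline", "tier-up-candidate"):
#     print(model_id, f"{accuracy(model_id):.0%}")
```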