LiquidAI vs Sao10K
Every LiquidAI and Sao10K model side by side: pricing per million tokens, context windows, and capabilities. Refreshed nightly from upstream.
LiquidAI
3 models

| Model | Context | In $/1M | Out $/1M |
|---|---|---|---|
| LFM2-24B-A2B | 33k | 0.0300 | 0.1200 |
| LFM2.5-1.2B-Instruct (free) | 33k | 0.0000 | 0.0000 |
| LFM2.5-1.2B-Thinking (free) | 33k | 0.0000 | 0.0000 |
Sao10K
5 models

| Model | Capabilities | Context | In $/1M | Out $/1M |
|---|---|---|---|---|
| Llama 3 8B Lunaris | json_mode | 8k | 0.0400 | 0.0500 |
| Llama 3 Euryale 70B v2.1 | tools | 8k | 1.4800 | 1.4800 |
| Llama 3.1 70B Hanami x1 | | 16k | 3.0000 | 3.0000 |
| Llama 3.1 Euryale 70B v2.2 | tools, json_mode | 131k | 0.8500 | 0.8500 |
| Llama 3.3 Euryale 70B | json_mode | 131k | 0.6500 | 0.7500 |
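To see how the per-million-token rates above translate into a per-request cost, here is a minimal sketch. The helper function and the sample token counts are illustrative, not part of any provider API; the rates used are the Llama 3.3 Euryale 70B figures from the table.

```python
def request_cost(in_tokens: int, out_tokens: int,
                 in_per_m: float, out_per_m: float) -> float:
    """Cost in USD for one request, given per-1M-token input/output rates."""
    return (in_tokens / 1_000_000) * in_per_m + (out_tokens / 1_000_000) * out_per_m

# Llama 3.3 Euryale 70B: $0.65 in / $0.75 out per 1M tokens.
# A request with a 2,000-token prompt and a 500-token completion:
cost = request_cost(2_000, 500, 0.65, 0.75)
print(f"${cost:.6f}")  # → $0.001675
```

Input and output tokens are billed at different rates, so completion-heavy workloads (e.g. long roleplay or story generation) are dominated by the "Out $/1M" column rather than the input price.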