LLM Cloud Hub
Vendor comparison

NVIDIA vs Relace

Every NVIDIA and Relace LLM model side by side: pricing per million tokens, context windows, and capabilities. Refreshed nightly from upstream.

NVIDIA

9 models
Model                              Capabilities      Context   In $/1M   Out $/1M
Llama 3.3 Nemotron Super 49B V1.5  tools, json_mode  131k      0.1000    0.4000
Nemotron 3 Nano 30B A3B            tools, json_mode  262k      0.0500    0.2000
Nemotron 3 Nano 30B A3B (free)     tools             256k      0.0000    0.0000
Nemotron 3 Nano Omni (free)        vision, tools     256k      0.0000    0.0000
Nemotron 3 Super                   tools, json_mode  262k      0.0900    0.4500
Nemotron 3 Super (free)            tools, json_mode  262k      0.0000    0.0000
Nemotron Nano 12B 2 VL (free)      vision, tools     128k      0.0000    0.0000
Nemotron Nano 9B V2                tools, json_mode  131k      0.0400    0.1600
Nemotron Nano 9B V2 (free)         tools, json_mode  128k      0.0000    0.0000

Relace

2 models
Model           Capabilities   Context   In $/1M   Out $/1M
Relace Apply 3                 256k      0.8500    1.2500
Relace Search   tools          256k      1.0000    3.0000
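To make the per-million-token rates concrete, here is a minimal sketch of the cost arithmetic. The rates come from the tables above; the function name and token counts are illustrative, not part of any vendor API.

```typescript
// Cost in USD of one request, given per-million-token rates.
function requestCost(
  inputTokens: number,
  outputTokens: number,
  inPerMillion: number,
  outPerMillion: number,
): number {
  return (inputTokens / 1_000_000) * inPerMillion
       + (outputTokens / 1_000_000) * outPerMillion;
}

// Example: a 10k-token prompt with a 2k-token reply on
// Nemotron Nano 9B V2 ($0.0400 in, $0.1600 out per 1M tokens):
// 10_000/1e6 * 0.04 + 2_000/1e6 * 0.16 = 0.0004 + 0.00032
const cost = requestCost(10_000, 2_000, 0.04, 0.16);
console.log(cost.toFixed(5)); // "0.00072"
```

The same function applies to any row: swap in Relace Search's rates (1.0000 in, 3.0000 out) to compare vendors on an identical workload.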

Keyboard shortcuts

?     Show this overlay
/     Focus the first form field
g h   Go to / (home)
g b   Go to /best-llm-for
g c   Go to /cost
g s   Go to /self-hosted
g x   Go to /compliance
Esc   Close any overlay

Shortcuts follow Linear and GitHub conventions. Two-key sequences (g, then h) must be completed within about one second.
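The two-key sequence behavior can be sketched as a small state machine: pressing g opens a window of about one second in which the next key selects a route. This is an illustrative sketch, not the site's actual code; the class and route map mirror the shortcut list above.

```typescript
// Linear/GitHub-style "g then <key>" sequence matcher with a 1s timeout.
const SEQUENCE_TIMEOUT_MS = 1000;

class GoSequence {
  private pendingSince: number | null = null; // timestamp of the "g" press

  constructor(private routes: Record<string, string>) {}

  // Feed each keypress with its timestamp in ms. Returns the target
  // route when a sequence completes in time, otherwise null.
  press(key: string, now: number): string | null {
    if (key === "g") {
      this.pendingSince = now;
      return null;
    }
    if (
      this.pendingSince !== null &&
      now - this.pendingSince <= SEQUENCE_TIMEOUT_MS
    ) {
      this.pendingSince = null;
      return this.routes[key] ?? null;
    }
    this.pendingSince = null; // too slow, or no "g" pending: reset
    return null;
  }
}

// Routes from the shortcut list above.
const nav = new GoSequence({
  h: "/",
  b: "/best-llm-for",
  c: "/cost",
  s: "/self-hosted",
  x: "/compliance",
});

nav.press("g", 0);
console.log(nav.press("c", 500));  // "/cost" (second key within 1s)
nav.press("g", 2000);
console.log(nav.press("c", 3500)); // null (too slow)
```

In a browser you would drive `press` from a keydown listener using `event.key` and `performance.now()`; keeping the matcher pure makes the timeout logic easy to test.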