LLMChat
No description provided for this model.
On-demand deployments allow you to run Llama V3p2 1b Instruct on dedicated GPUs using Fireworks' high-performance serving stack, with high reliability and no rate limits.
See the On-demand deployments guide for details.
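Below is a minimal sketch of querying the model through Fireworks' OpenAI-compatible chat completions API. The endpoint URL and the model id (`accounts/fireworks/models/llama-v3p2-1b-instruct`) are assumptions for illustration and are not taken from this page; substitute the values shown for your own deployment.

```python
import os
import requests

# Assumed OpenAI-compatible chat completions endpoint for Fireworks.
URL = "https://api.fireworks.ai/inference/v1/chat/completions"

payload = {
    # Assumed model id; for an on-demand deployment, use the id shown
    # for your deployed model in the Fireworks console.
    "model": "accounts/fireworks/models/llama-v3p2-1b-instruct",
    "messages": [
        {"role": "user", "content": "Say hello in one sentence."}
    ],
    "max_tokens": 128,
}

headers = {
    # API key read from the environment rather than hard-coded.
    "Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}",
    "Content-Type": "application/json",
}

response = requests.post(URL, json=payload, headers=headers, timeout=30)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because the API follows the OpenAI chat completions format, the same request works whether the model is served serverless or on an on-demand deployment; only the model id (and, for some setups, the base URL) changes.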