Code Llama is a collection of pretrained and fine-tuned large language models ranging in scale from 7 billion to 70 billion parameters, designed to take both code and natural-language prompts and generate code as well as natural language about code. This is the 34B instruction-tuned version.
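Instruction-tuned Code Llama models expect the `[INST]`/`<<SYS>>` chat template inherited from Llama 2. As a minimal sketch of how such a prompt is typically assembled (the helper name `format_codellama_prompt` is illustrative, not part of any API):

```python
def format_codellama_prompt(user_msg: str, system_msg: str = "") -> str:
    """Wrap a request in the [INST] chat template used by Code Llama Instruct.

    An optional system message is embedded in <<SYS>> ... <</SYS>> markers
    ahead of the user turn, per the Llama 2 chat convention.
    """
    if system_msg:
        user_msg = f"<<SYS>>\n{system_msg}\n<</SYS>>\n\n{user_msg}"
    return f"<s>[INST] {user_msg} [/INST]"


prompt = format_codellama_prompt(
    "Write a Python function that checks whether a number is prime.",
    system_msg="Provide answers in Python.",
)
print(prompt)
```

When serving through an OpenAI-compatible chat endpoint, this templating is usually applied server-side, so you only need it when calling a raw completions interface.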
| Feature | Description |
| --- | --- |
| Fine-tuning (Docs) | Code Llama 34B Instruct can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model. |
| On-demand Deployment (Docs) | On-demand deployments let you run Code Llama 34B Instruct on dedicated GPUs with Fireworks' high-performance serving stack, with high reliability and no rate limits. |