Code Llama is a collection of pretrained and fine-tuned large language models ranging in scale from 7 billion to 70 billion parameters. It accepts both code and natural language prompts, and can generate code as well as natural language about code. This is the 70B base version.
| Feature | Description |
| --- | --- |
| Fine-tuning (Docs) | Code Llama 70B can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model. |
| On-demand deployment (Docs) | On-demand deployments allow you to use Code Llama 70B on dedicated GPUs with Fireworks' high-performance serving stack, with high reliability and no rate limits. |
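Since this is the base (not instruction-tuned) variant, it is typically queried with completion-style prompts rather than chat messages. The sketch below builds such a request payload; the model identifier, endpoint shape, and parameter values are assumptions for illustration and should be checked against the Fireworks API docs.

```python
import json

# Hypothetical completion-style request payload; the model ID below is an
# assumed example, not confirmed by this page. A base model continues the
# prompt text, so we give it the start of a function to complete.
payload = {
    "model": "accounts/fireworks/models/codellama-70b",  # assumed identifier
    "prompt": "# Python function that reverses a string\ndef reverse_string(s):",
    "max_tokens": 128,       # cap on generated tokens
    "temperature": 0.2,      # low temperature for more deterministic code
}

# Serialize to JSON, as it would be sent in an HTTP POST body.
body = json.dumps(payload)
```

Sending the request (e.g. via an HTTP client with an API key header) is omitted here; the point is the completion-style prompt, which the base model extends rather than answering as a chat turn.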