CodeVero 7B - 4-bit Quantized
This is a 4-bit quantized version of CodeLLaMA 7B, prepared using bitsandbytes and Hugging Face Transformers. It is optimized for inference and fine-tuning in low-resource environments.

Model Details
- Base: CodeLLaMA-7B
- Quantization: bitsandbytes 4-bit (bnb_4bit, NF4)
- Format: Hugging Face (.safetensors)
- Usage: Transformers
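Loading the model
A minimal sketch of loading the checkpoint for inference with Transformers and bitsandbytes. It assumes the Hub repo id sk16er/Code_vero and a CUDA-capable GPU; if the uploaded weights are already serialized in 4-bit, the explicit quantization_config may be redundant.

```python
# Minimal sketch: load the 4-bit NF4 checkpoint with Transformers + bitsandbytes.
# Assumes the Hub repo id "sk16er/Code_vero" and a CUDA-capable GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",           # NF4, as listed under Model Details
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("sk16er/Code_vero")
model = AutoModelForCausalLM.from_pretrained(
    "sk16er/Code_vero",
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```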
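Fine-tuning
Since the card mentions fine-tuning in low-resource environments, here is a minimal QLoRA-style sketch using PEFT on top of the 4-bit model loaded above. The LoRA rank, alpha, and target modules are illustrative assumptions, not values shipped with this checkpoint.

```python
# Minimal QLoRA-style sketch with PEFT, continuing from the 4-bit `model` above.
# The LoRA hyperparameters and target modules below are illustrative assumptions.
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # common choice for LLaMA-family models
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
# From here, train with the usual Trainer / custom training loop on your dataset.
```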
Model tree for sk16er/Code_vero
- Base model: codellama/CodeLlama-7b-hf