# sarvam-m-24b - Q2_K GGUF

This repository contains the Q2_K quantized version of sarvam-m-24b in GGUF format.

## Model Details
- Quantization: Q2_K
- File Size: ~8.3GB
- Description: Smallest variant; lowest output quality, but the fastest inference and smallest memory footprint
- Format: GGUF (compatible with llama.cpp)
## Usage

### With llama.cpp

```bash
# Download the model
huggingface-cli download tifin-india/sarvam-m-24b-q2_k-gguf

# Run inference (newer llama.cpp builds name this binary llama-cli)
./main -m sarvam-m-24b-Q2_K.gguf -p "Your prompt here"
```
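If you prefer to fetch just the GGUF file from Python, `huggingface_hub` can download it directly. A minimal sketch, assuming the file in this repo is named `sarvam-m-24b-Q2_K.gguf` as in the command above:

```python
from huggingface_hub import hf_hub_download

# Download only the Q2_K GGUF file (filename assumed from the CLI example above)
model_path = hf_hub_download(
    repo_id="tifin-india/sarvam-m-24b-q2_k-gguf",
    filename="sarvam-m-24b-Q2_K.gguf",
)
print(model_path)  # Local path inside the Hugging Face cache
```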
### With Python (llama-cpp-python)

```python
from llama_cpp import Llama

# Load the model
llm = Llama(
    model_path="./sarvam-m-24b-Q2_K.gguf",
    n_ctx=2048,        # Context length
    n_gpu_layers=35,   # Adjust based on your GPU
    verbose=False
)

# Generate text
response = llm("Your prompt here", max_tokens=100)
print(response['choices'][0]['text'])
```
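For instruction-style prompting, llama-cpp-python also exposes a chat API that applies the chat template stored in the GGUF metadata. A short sketch reusing the `llm` object created above:

```python
# Chat-style generation; the chat template is read from the GGUF metadata
chat = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize what GGUF quantization does."}
    ],
    max_tokens=128,
)
print(chat["choices"][0]["message"]["content"])
```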
### With Transformers (native GGUF loading)

Recent versions of 🤗 Transformers can load GGUF files directly via the `gguf_file` argument. Note that the weights are dequantized to full precision in memory, so this path needs far more RAM than llama.cpp and is mainly useful for inspection or further fine-tuning.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "tifin-india/sarvam-m-24b-q2_k-gguf"
gguf_file = "sarvam-m-24b-Q2_K.gguf"

tokenizer = AutoTokenizer.from_pretrained(model_name, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(model_name, gguf_file=gguf_file)
```
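A short generation sketch with the dequantized model (assumes enough RAM for the full-precision weights):

```python
# Standard Transformers generation loop on the dequantized model
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```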
## Performance Characteristics

| Aspect | Rating |
|---|---|
| Speed | ⭐⭐⭐⭐⭐ |
| Quality | ⭐ |
| Memory | ⭐⭐⭐⭐⭐ |
## Original Model
This is a quantized version of the original model. For the full-precision version and more details, please refer to the original model repository.
## Quantization Details

This model was quantized using llama.cpp's quantization tools. Q2_K is the most aggressive K-quant llama.cpp offers: it minimizes file size and memory use at a noticeable cost in output quality, making it best suited to memory-constrained hardware or quick experimentation.
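For reference, quantizations like this are typically produced with llama.cpp's conversion and quantization tools. A rough sketch of that workflow (the local paths and llama.cpp build location are assumptions, not the exact commands used for this repo):

```python
import subprocess

# Assumed paths: a local llama.cpp checkout/build and the original full-precision model
LLAMA_CPP = "./llama.cpp"
HF_MODEL_DIR = "./sarvam-m-24b"          # original (unquantized) model, downloaded locally
F16_GGUF = "./sarvam-m-24b-F16.gguf"
Q2K_GGUF = "./sarvam-m-24b-Q2_K.gguf"

# 1) Convert the Hugging Face checkpoint to a full-precision GGUF file
subprocess.run(
    ["python", f"{LLAMA_CPP}/convert_hf_to_gguf.py", HF_MODEL_DIR, "--outfile", F16_GGUF],
    check=True,
)

# 2) Quantize the full-precision GGUF file down to Q2_K
subprocess.run(
    [f"{LLAMA_CPP}/build/bin/llama-quantize", F16_GGUF, Q2K_GGUF, "Q2_K"],
    check=True,
)
```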
## License
This model follows the same license as the original model (Apache 2.0).
## Citation
If you use this model, please cite the original model authors and acknowledge the quantization.