# Cere-llm-gemma-2-9b-it Model Card

Cere-llm-gemma-2-9b-it is a fine-tuned version of google/gemma-2-9b-it, trained on synthetically generated and natural preference datasets.
## Model Details

### Model Description

We fine-tuned google/gemma-2-9b-it on preference data.
- Developed by: Cerebrum Tech
- Model type: Causal Language Model
- License: gemma
- Finetuned from model: google/gemma-2-9b-it
## How to Get Started with the Model
```python
import torch
from transformers import pipeline

model_id = "Cerebrum/cere-llm-gemma-2-9b-it"

generator = pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

outputs = generator(
    # "What is the capital of Türkiye?"
    [{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}],
    do_sample=False,
    # Stop on either the Gemma turn delimiter or the regular EOS token.
    eos_token_id=[
        generator.tokenizer.convert_tokens_to_ids("<end_of_turn>"),
        generator.tokenizer.eos_token_id,
    ],
    max_new_tokens=200,
)
print(outputs[0]["generated_text"])
```
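The `<end_of_turn>` stop token above comes from Gemma's turn-based chat template, which the pipeline applies to the message list before generation. As an illustration only (a hand-rolled sketch of the documented Gemma-2 format, not the tokenizer's actual implementation — in real code use `tokenizer.apply_chat_template`), the formatted prompt for a single user message looks like this:

```python
def build_gemma_prompt(messages):
    """Sketch of Gemma-2's chat format: each turn is wrapped in
    <start_of_turn>{role} ... <end_of_turn> markers, and the prompt
    ends with an opened model turn for the assistant to complete."""
    prompt = "<bos>"
    for msg in messages:
        prompt += f"<start_of_turn>{msg['role']}\n{msg['content']}<end_of_turn>\n"
    # Leave the model turn open so generation continues from here.
    prompt += "<start_of_turn>model\n"
    return prompt

prompt = build_gemma_prompt(
    [{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}]
)
print(prompt)
```

This is why `<end_of_turn>` is passed as an extra `eos_token_id`: the model signals the end of its reply with that marker rather than the plain EOS token.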