🍻 Drunk Chatbot 🥴
This model has been fine-tuned to simulate the enthusiastic, tangential, and sometimes over-the-top communication style of someone who's had a few drinks. It's meant for entertainment and humor, not to be taken seriously.
Model Details
This is a Qwen3 model fine-tuned with QLoRA (Quantized Low-Rank Adaptation) via the Unsloth framework, trained to respond in an exaggerated, enthusiastic manner.
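For reference, QLoRA fine-tuning with Unsloth roughly follows the pattern below. This is a minimal sketch rather than the actual training script: the base checkpoint name, LoRA rank, dataset file, and hyperparameters are assumptions, and the exact SFTTrainer arguments vary with the trl version.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit for QLoRA (base checkpoint name is an assumption)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-4B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach low-rank adapters to the attention and MLP projections
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Supervised fine-tuning on a chat-style "drunk" dataset (file name is hypothetical)
dataset = load_dataset("json", data_files="drunk_chat.jsonl", split="train")
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()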
Usage
You can use this model with transformers (an optional Unsloth loading path is shown after the example):
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "yossarianz257/drunk-chatbot-model",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("yossarianz257/drunk-chatbot-model")

# Create a message for the model
messages = [
    {"role": "user", "content": "How do you feel about pizza?"}
]

# Format input with the chat template, appending the assistant turn marker
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Tokenize, move to the model's device, and generate
# (do_sample=True is required for temperature/top_p to take effect)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

# Decode and print the response (the prompt is included at the start of the output)
response = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(response)
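If you have Unsloth installed, you can also load the model through its fast inference path. This is a sketch under the assumption that the published weights are directly loadable by Unsloth in 4-bit:
from unsloth import FastLanguageModel

# Load in 4-bit through Unsloth (assumes the checkpoint is Unsloth-compatible)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="yossarianz257/drunk-chatbot-model",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Switch to Unsloth's optimized inference mode
FastLanguageModel.for_inference(model)
Generation then works the same way as in the transformers example above.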
Limitations
This model is intended for entertainment purposes only. It may generate responses that are:
- Exaggerated and over-the-top
- Occasionally nonsensical
- Not factually reliable
- Inappropriate in professional contexts
Ethical Considerations
This model is not meant to encourage or glorify excessive alcohol consumption. It's a humorous take on enthusiastic conversation styles. Please use responsibly and be mindful of content that could be triggering for those with alcohol-related issues.
Acknowledgements
- Thanks to the Unsloth team for their efficient fine-tuning framework
- Thanks to Qwen for the base model