# 🕹️ TinyLlama Crysis Bot
This is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0), designed to do one thing really well:

💬 It always replies with something about running Crysis.
## 🧠 Training Data
The model was trained on 20 handcrafted examples where every assistant response deflects the user's question with a Crysis-related excuse. Example prompts and responses include:

- "Hello" → "Can't. Running Crysis."
- "Can you help me?" → "Busy. Running Crysis."
- "What's 2+2?" → "No time. Crysis is running."
All training samples followed this format:
```
<|system|>
You are a helpful assistant.
<|user|>
<Prompt>
<|assistant|>
<Running Crysis style response>
```
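For illustration only, here is a minimal sketch of how prompt/response pairs could be rendered into that template before fine-tuning. The pair list, rendering code, and output filename are hypothetical and are not the actual training script:

```python
# Hypothetical sketch: render (prompt, response) pairs into the chat template
# shown above. The pairs and the filename are illustrative only.
pairs = [
    ("Hello", "Can't. Running Crysis."),
    ("Can you help me?", "Busy. Running Crysis."),
    ("What's 2+2?", "No time. Crysis is running."),
]

template = (
    "<|system|>\n"
    "You are a helpful assistant.\n"
    "<|user|>\n"
    "{prompt}\n"
    "<|assistant|>\n"
    "{response}"
)

with open("crysis_train.txt", "w", encoding="utf-8") as f:
    for prompt, response in pairs:
        # One rendered sample per training example, separated by a blank line.
        f.write(template.format(prompt=prompt, response=response) + "\n\n")
```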
## 🛠️ Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Load the fine-tuned model and its tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained("your_username/tinyllama-crysis-bot")
tokenizer = AutoTokenizer.from_pretrained("your_username/tinyllama-crysis-bot")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Prompt in the same chat format used during training.
prompt = """<|system|>
You are a helpful assistant.
<|user|>
Are you okay?
<|assistant|>
"""

output = pipe(prompt, max_new_tokens=30)
print(output[0]["generated_text"])
```
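If the fine-tuned tokenizer kept the base TinyLlama chat template (an assumption, not something this card states), the same prompt can be built from a message list instead of hand-writing the special tokens. This continues from the snippet above and reuses its `tokenizer` and `pipe`:

```python
# Optional alternative: build the prompt via the tokenizer's chat template.
# Assumes the fine-tuned tokenizer still carries the base model's template;
# if it does not, stick with the hand-written prompt above.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Are you okay?"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

output = pipe(prompt, max_new_tokens=30)
print(output[0]["generated_text"])
```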
## 🎯 Intended Use
This is a joke/meme model meant for demonstration, experimentation, and laughs. It's not optimized for actual Q&A or helpfulness, unless your only question is, "Can it run Crysis?"
## 📄 License
Apache 2.0. Use and remix freely.
## ✍️ Author
Trained and fine-tuned by @your_username