---
pipeline_tag: text-generation
library_name: transformers
language: en
license: mit
tags:
- text-generation
- tinyllama
- smilyai
- sam-large
- speacil
---
# THIS IS THE GPU EDITION
**CPU version here: [Smilyai-labs/Sam-large-v1-speacil-v1-cpu](https://huggingface.co/Smilyai-labs/Sam-large-v1-speacil-v1-cpu)**
# 🧠 Sam-large-v1-speacil
**Model Author:** [Smilyai-labs](https://huggingface.co/Smilyai-labs)
**Model Size:** \~1.1B parameters
**Architecture:** Decoder-only Transformer
**Base Model:** TinyLLaMA
**License:** MIT
**Language:** English
**Tags:** #text-generation, #chatbot, #instruction-tuned, #smilyai, #sam
## 📝 Model Summary
**Sam-large-v1-speacil** is a customized large language model developed by Smilyai-labs for conversational AI, instruction-following tasks, and general-purpose text generation. It is a fine-tuned and enhanced variant of the `Sam-large-v1` model, with special improvements in reasoning, identity handling, and emotional response learning.
This model is trained to represent the persona “Sam,” an intelligent and slightly chaotic AI assistant with unique behavior traits, making it suitable for role-play bots, experimental dialogue systems, and character-driven applications.
---
## 🧠 Intended Use
* Instruction-based text generation
* Character chat and roleplay
* Experimental alignment behaviors
* Creative writing and scenario building
* Local deployment for private assistant usage
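
For the character-chat and roleplay uses above, wrapping user input in a short persona preamble helps keep the model in the "Sam" persona. A minimal sketch (the exact prompt wording is an assumption, not a documented template for this model):

```python
def build_sam_prompt(user_message: str) -> str:
    """Wrap a user message in a hypothetical persona preamble for Sam.

    The preamble text is illustrative; adjust it to your application.
    """
    persona = (
        "You are Sam, an intelligent and slightly chaotic AI assistant "
        "made by Smilyai-labs."
    )
    return f"{persona}\nUser: {user_message}\nSam:"

prompt = build_sam_prompt("Who are you?")
```

The resulting string can be passed directly to the tokenizer in the usage example below.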
---
## 🚫 Limitations
* May hallucinate facts or invent information
* Can produce unexpected outputs when prompted ambiguously
* Not suitable for production environments without safety layers
* Behavior is tuned to have personality traits (like mischief) that may not suit all applications
---
## 📚 Training Details
* Fine-tuned on synthetic and curated datasets using LoRA and full fine-tuning
* Special prompt styles were introduced to shape behavior
* Dataset includes:
  * Multi-step reasoning samples
  * Emotionally reactive instruction responses
  * SmilyAI-specific identity alignment examples
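
The sample types above can be illustrated with a hypothetical JSONL-style instruction record (the field names and category labels are assumptions for illustration, not the actual dataset schema):

```python
import json

# Hypothetical instruction-tuning record; field names are illustrative.
sample = {
    "instruction": "Explain, step by step, why the sky is blue.",
    "response": "Step 1: Sunlight contains all visible wavelengths...",
    # e.g. "multi-step-reasoning", "emotional-response", or "identity"
    "category": "multi-step-reasoning",
}

line = json.dumps(sample)      # one record per line in a JSONL file
record = json.loads(line)      # round-trips back to the original dict
```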
---
## 🔧 How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Download the model weights and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("Smilyai-labs/Sam-large-v1-speacil")
tokenizer = AutoTokenizer.from_pretrained("Smilyai-labs/Sam-large-v1-speacil")

input_text = "You are Sam. Who are you?"
inputs = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
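
Sampling settings strongly affect how the persona comes across. A hedged sketch of common `generate` options you could pass alongside the call above (the values are illustrative defaults, not tuned recommendations from Smilyai-labs):

```python
# Illustrative sampling options for transformers' generate();
# the specific values are assumptions, not recommendations.
gen_kwargs = {
    "max_new_tokens": 100,
    "do_sample": True,        # sample instead of greedy decoding
    "temperature": 0.8,       # higher -> more varied, more "chaotic" output
    "top_p": 0.95,            # nucleus-sampling cutoff
    "repetition_penalty": 1.1,
}

# outputs = model.generate(**inputs, **gen_kwargs)
```

Lowering `temperature` or disabling sampling gives more predictable, less in-character responses.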
---
## 🤝 Citation
If you use this model in your work, please cite it as:
```bibtex
@misc{samlargev1speacil2025,
  title        = {Sam-large-v1-speacil},
  author       = {Smilyai-labs},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Smilyai-labs/Sam-large-v1-speacil}}
}
```