# Smily-ultra-1
Smily-ultra-1 is a custom fine-tuned language model designed for reasoning and chatbot-like interactions.
NOTE: Because this model has reasoning capabilities, it is considerably slower than SAM-flash-mini-v1.
However, it is also more powerful and smarter than the SAM-flash-mini-v1 model, at the expense of speed and size.
Smily-ultra-1 is a fine-tuned language model optimized for chatbot-style conversations and basic logical reasoning. It was created by Smilyai-labs using a small dataset of synthetic examples and trained in Google Colab. The model is small and lightweight, making it suitable for experimentation, education, and simple chatbot tasks.
## Try it yourself!
Try it in this Space: [Try it here!](https://huggingface.co/spaces/Smilyai-labs/smily-ultra-chatbot)
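If you prefer to query the Space programmatically, the `gradio_client` package can call a public Gradio Space. This is a minimal sketch under assumptions: the endpoint name (`/predict`) and the single-string input are guesses, so check the Space's "Use via API" panel for the exact signature.

```python
# pip install gradio_client
from gradio_client import Client

# Connect to the hosted demo Space.
client = Client("Smilyai-labs/smily-ultra-chatbot")

# The api_name and argument layout below are assumptions; the Space's
# "Use via API" panel shows the real endpoint and parameters.
result = client.predict("What is 2 + 2?", api_name="/predict")
print(result)
```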
## Model Details
- Base model: GPT-Neo 125M
- Fine-tuned by: Smilyai-labs
- Parameter count: ~125 million (see the quick check after this list)
- Training examples: ~1000 inline synthetic reasoning and dialogue samples
- Framework: Hugging Face Transformers
- Trained in: Google Colab
- Stored in: Google Drive
- Uploaded to: Hugging Face Hub
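Once the weights are downloaded, the parameter count listed above can be checked directly; the snippet below is a small sketch using the standard Transformers API.

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Smilyai-labs/Smily-ultra-1")

# Sum the sizes of all parameter tensors; for a GPT-Neo 125M fine-tune this
# should come out to roughly 125 million.
total = sum(p.numel() for p in model.parameters())
print(f"Parameters: {total / 1e6:.1f}M")
```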
## Intended Uses
This model can be used for:
- Learning how transformers work
- Building experimental chatbots (a minimal chat-loop sketch follows this list)
- Simple reasoning demos
- Generating creative or silly responses
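For the experimental-chatbot use case, a simple REPL loop around `generate` is enough to start experimenting. This is a minimal sketch; the `User:`/`Bot:` prompt template is an assumption about the fine-tuning format (it is not documented above), so adjust it if replies look off.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Smilyai-labs/Smily-ultra-1")
model = AutoModelForCausalLM.from_pretrained("Smilyai-labs/Smily-ultra-1")

while True:
    user = input("You: ")
    if user.strip().lower() in {"quit", "exit"}:
        break
    # Hypothetical chat template -- not confirmed by the model card.
    prompt = f"User: {user}\nBot:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=60,
        do_sample=True,
        temperature=0.8,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the newly generated tokens and stop at a new "User:" turn.
    reply = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    print("Bot:", reply.split("User:")[0].strip())
```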
## Example Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned tokenizer and weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("Smilyai-labs/Smily-ultra-1")
model = AutoModelForCausalLM.from_pretrained("Smilyai-labs/Smily-ultra-1")

# Encode a prompt, generate up to 30 new tokens, and print the decoded text.
prompt = "What is 2 + 2?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```
## Limitations
- Not accurate for factual tasks
- Reasoning is simple and inconsistent
- Can repeat or produce nonsensical outputs (a decoding-level mitigation is sketched after this list)
- Not safe for critical systems or real-world advice
- Small training data limits its knowledge
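The repetition issue in particular can often be reduced (though not eliminated) with standard decoding controls on `generate`. A minimal sketch; the specific penalty values are arbitrary starting points, not tuned settings.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Smilyai-labs/Smily-ultra-1")
model = AutoModelForCausalLM.from_pretrained("Smilyai-labs/Smily-ultra-1")

inputs = tokenizer("Tell me a short story about a robot.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=80,
    do_sample=True,
    top_p=0.9,
    no_repeat_ngram_size=3,   # block verbatim 3-gram repeats
    repetition_penalty=1.2,   # down-weight tokens that were already generated
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```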
## Training
- Trained for 3 epochs on ~1000 examples
- Used the Hugging Face `Trainer` API
- Mixed reasoning and chatbot-style prompts
- Stored in Google Drive and uploaded via `HfApi` (a rough sketch of this workflow follows this list)
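The exact training script is not published here; the sketch below only illustrates the workflow described above (synthetic text samples, the `Trainer` API, three epochs, upload via `HfApi`). The dataset contents, hyperparameters, and output paths are hypothetical.

```python
from datasets import Dataset
from huggingface_hub import HfApi
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Hypothetical inline sample standing in for the ~1000 synthetic examples.
examples = [{"text": "User: What is 2 + 2?\nBot: 4"}]
dataset = Dataset.from_list(examples)

# Start from the GPT-Neo 125M base checkpoint.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="smily-ultra-1", num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Save locally (e.g. to a mounted Google Drive folder in Colab) ...
trainer.save_model("smily-ultra-1")
tokenizer.save_pretrained("smily-ultra-1")

# ... then push the folder to the Hugging Face Hub.
api = HfApi()
api.create_repo("Smilyai-labs/Smily-ultra-1", exist_ok=True)
api.upload_folder(
    folder_path="smily-ultra-1",
    repo_id="Smilyai-labs/Smily-ultra-1",
    repo_type="model",
)
```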
## License
MIT License or similar open-source license
## Citation
```bibtex
@misc{smilyultra1,
  author = {Smilyai-labs},
  title  = {Smily-ultra-1: Chatbot + Reasoning Toy Model},
  year   = 2025,
  url    = {https://huggingface.co/Smilyai-labs/Smily-ultra-1}
}
```