---
tags:
- text-generation
- transformer
- chatbot
license: mit
library_name: transformers
language: en
datasets:
- custom
inference: true
---

# Smily-ultra-1

Smily-ultra-1 is a custom fine-tuned language model designed for reasoning and chatbot-like interactions.

> **Note:** Because this model has reasoning capabilities, it is considerably slower than SAM-flash-mini-v1. It is, however, more powerful and smarter than SAM-flash-mini-v1, at the expense of speed and size.




**Smily-ultra-1** is a fine-tuned language model optimized for chatbot-style conversations and basic logical reasoning. It was created by **Smilyai-labs** using a small dataset of synthetic examples and trained in Google Colab. The model is small and lightweight, making it suitable for experimentation, education, and simple chatbot tasks.


## Try it yourself!

Try it in this Space: [Try it here!](https://huggingface.co/spaces/Smilyai-labs/smily-ultra-chatbot)

## Model Details

- **Base model**: GPT-Neo 125M
- **Fine-tuned by**: Smilyai-labs
- **Parameter count**: ~125 million (a quick way to verify this is shown after the list)
- **Training examples**: ~1000 inline synthetic reasoning and dialogue samples
- **Framework**: Hugging Face Transformers
- **Trained in**: Google Colab
- **Stored in**: Google Drive
- **Uploaded to**: Hugging Face Hub
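
If you want to confirm the parameter count yourself, here is a minimal sketch, assuming the checkpoint loads as a standard causal LM:

```python
# Quick sanity check of the reported ~125M parameter count (illustrative).
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Smilyai-labs/Smily-ultra-1")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")  # expected: roughly 125M for GPT-Neo 125M
```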

## Intended Uses

This model can be used for:
- Learning how transformers work
- Building experimental chatbots
- Simple reasoning demos
- Generating creative or silly responses

## Example Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned checkpoint from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("Smilyai-labs/Smily-ultra-1")
model = AutoModelForCausalLM.from_pretrained("Smilyai-labs/Smily-ultra-1")

# Tokenize a prompt and generate a short continuation (greedy decoding)
prompt = "What is 2 + 2?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
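
The call above uses greedy decoding, which is deterministic. For more varied, chatbot-style replies you can enable sampling; the values below are illustrative assumptions, not tuned defaults shipped with the model:

```python
# Sampling-based generation for more conversational variety.
# temperature and top_p here are assumptions, not official settings.
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-Neo has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```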

## Limitations

- Not accurate for factual tasks
- Reasoning is simple and inconsistent
- Can repeat or produce nonsensical outputs
- Not safe for critical systems or real-world advice
- Small training data limits its knowledge

## Training

- Trained for 3 epochs on ~1000 examples
- Used the Hugging Face `Trainer` API (a rough sketch is shown below)
- Mixed reasoning and chatbot-style prompts
- Stored in Google Drive and uploaded via `HfApi`
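
Below is a minimal sketch of what such a run could look like. Only the base model (GPT-Neo 125M), the 3 epochs, the `Trainer` API, and the `HfApi` upload come from this card; the dataset contents, tokenization settings, and hyperparameters are illustrative assumptions.

```python
# Minimal fine-tuning sketch. Dataset contents, max_length, batch size,
# and paths are assumptions; the base model, 3 epochs, Trainer, and
# HfApi upload are described in this card.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
tokenizer.pad_token = tokenizer.eos_token  # GPT-Neo ships without a pad token
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

# Hypothetical inline examples; the real set is ~1000 mixed reasoning/chat samples.
examples = [
    {"text": "User: What is 2 + 2?\nAssistant: 2 + 2 = 4."},
    {"text": "User: Tell me a joke.\nAssistant: Why did the tokenizer cross the road?"},
]
dataset = Dataset.from_list(examples)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

training_args = TrainingArguments(
    output_dir="smily-ultra-1",
    num_train_epochs=3,             # matches the 3 epochs described above
    per_device_train_batch_size=4,  # assumed; not stated in the card
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Upload the checkpoint folder to the Hub (HfApi is mentioned above; paths assumed).
from huggingface_hub import HfApi

HfApi().upload_folder(
    folder_path="smily-ultra-1",
    repo_id="Smilyai-labs/Smily-ultra-1",
    repo_type="model",
)
```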

## License

MIT License, as declared in the model card metadata above.

## Citation

```
@misc{smilyultra1,
  author = {Smilyai-labs},
  title = {Smily-ultra-1: Chatbot + Reasoning Toy Model},
  year = 2025,
  url = {https://huggingface.co/Smilyai-labs/Smily-ultra-1}
}
```