---
library_name: transformers
license: mit
datasets:
- SciPhi/textbooks-are-all-you-need-lite
- nampdn-ai/tiny-textbooks
- nampdn-ai/tiny-strange-textbooks
- nampdn-ai/tiny-codes
- nampdn-ai/tiny-math-textbooks
- nampdn-ai/tiny-webtext
- nampdn-ai/tiny-orca-textbooks
- nampdn-ai/tiny-lessons
- roneneldan/TinyStories
- ajibawa-2023/Children-Stories-Collection
- ajibawa-2023/General-Stories-Collection
- kerinin/hackernews-stories
- lucadiliello/wikipedia_512_pretraining
- Salesforce/wikitext
- ChristophSchuhmann/basic-math-problems-with-step-by-step-solutions
- iamtarun/python_code_instructions_18k_alpaca
- prithivMLmods/Step-Instruction-Gx
- LinhDuong/chatdoctor-200k
- MBZUAI/LaMini-instruction
- qwedsacf/grade-school-math-instructions
- TigerResearch/tigerbot-stackexchange-qa-en-0.5m
language:
- en
---
# amusktweewt/tiny-model-500M-chat-v2-5-exp
This model is a general-purpose transformer-based language model designed for tasks such as text generation, story writing, and conversational interactions. It leverages multiple curated datasets to enhance its storytelling, coding, and question-answering capabilities. This project is intended for academic research and educational purposes only. It is designed for experimentation, learning, and development of language-based AI systems.
Compared with the previous version, it has undergone further SFT for better prompt adherence and coherence.
## Model Details
### Model Description
The model was developed with a focus on balancing performance and computational efficiency. It employs **flash attention** and other optimizations to improve memory efficiency and speed.
- **Developed by:** amusktweewt
- **Model type:** LlamaForCausalLM
- **Architectural Details** (see the configuration sketch after this list):
- 12 layers
- 16 attention heads
- Hidden size: 1536
- Flash attention 2 enabled
- Dynamic RoPE scaling
- **License:** MIT
- **Language(s) (NLP):** English
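The architectural details listed above can be expressed as a `LlamaConfig`. The following is a minimal sketch; `intermediate_size`, `max_position_embeddings`, and the RoPE scaling factor are illustrative assumptions that are not stated in this card.

```python
from transformers import LlamaConfig

# Configuration sketch matching the architecture listed above.
# intermediate_size, max_position_embeddings, and the RoPE scaling factor
# are assumptions; they are not stated in this card.
config = LlamaConfig(
    num_hidden_layers=12,
    num_attention_heads=16,
    hidden_size=1536,
    vocab_size=32768,                 # matches the custom BPE tokenizer described below
    intermediate_size=4096,           # assumed
    max_position_embeddings=2048,     # assumed
    rope_scaling={"type": "dynamic", "factor": 2.0},  # dynamic RoPE scaling; factor assumed
)
```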
## Uses
### Direct Use
This model is intended for text generation, code completion, chat-based applications, and story writing.
### Out-of-Scope Use
- Tasks requiring high factual accuracy
- Mathematics or reasoning-intensive tasks
- Applications involving sensitive content without human review
## Training Details
### Training Data
The model was trained on a diverse collection of datasets, including:
- Textbooks and academic content
- Creative and children's stories
- Coding instruction datasets
- Wiki-based texts and general stories
- Mathematics and step-by-step solutions
### Training Procedure
#### Preprocessing
- Custom BPE tokenizer with a vocabulary size of 32,768
- Applied dynamic RoPE scaling for better long-context handling
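A minimal sketch of how a BPE tokenizer with the stated vocabulary size could be trained using the `tokenizers` library; the corpus file and special tokens are placeholders, not the ones actually used for this model.

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Sketch of training a byte-level BPE tokenizer with a 32,768-token vocabulary.
# The corpus file and special tokens below are placeholders.
tokenizer = Tokenizer(models.BPE(unk_token="<unk>"))
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)

trainer = trainers.BpeTrainer(
    vocab_size=32768,
    special_tokens=["<unk>", "<s>", "</s>"],  # placeholder special tokens
)
tokenizer.train(files=["corpus.txt"], trainer=trainer)  # placeholder corpus
tokenizer.save("tokenizer.json")
```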
#### Hyperparameters
- **Batch size:** 12 (per device)
- **Gradient accumulation:** 2 steps
- **Learning rate:** 1e-5
- **Weight decay:** 0.002
- **Warmup ratio:** 10%
- **Precision:** FP16 (mixed precision)
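These hyperparameters map directly onto HuggingFace `TrainingArguments`; the sketch below is illustrative, with `output_dir` as a placeholder.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed above.
# output_dir is a placeholder; it is not given in the card.
training_args = TrainingArguments(
    output_dir="./tiny-model-500M-chat",  # placeholder
    per_device_train_batch_size=12,
    gradient_accumulation_steps=2,        # effective batch size of 24
    learning_rate=1e-5,
    weight_decay=0.002,
    warmup_ratio=0.1,
    fp16=True,                            # mixed-precision training
)
```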
#### Training Setup
- **Hardware:** NVIDIA 4090 GPU
- **Training Time:** 216 hours
- **Dataset size:** 69 GB of text
## Evaluation
### Testing Data, Factors & Metrics
The model was evaluated using subsets of the training data, focusing on language coherence, relevancy, and fluency.
#### Metrics
- **Loss:** Token-level cross-entropy on the evaluation subsets.
- **Perplexity:** 2.506
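Perplexity is the exponential of the mean token-level cross-entropy loss, so the reported value maps back to an approximate evaluation loss; a minimal sketch:

```python
import math

# Perplexity = exp(loss), so the reported perplexity of 2.506 corresponds
# to a mean token-level cross-entropy of roughly 0.92.
approx_loss = math.log(2.506)
print(f"Approximate evaluation loss: {approx_loss:.3f}")  # ~0.919
```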
### Results
The model generates coherent and, in most cases, contextually appropriate outputs across multiple domains.
## Risks and Limitations
### Known Issues
- The model may produce outputs reflecting biases present in the training data.
### Recommendations
Users should apply human review when using the model in critical or sensitive applications.
## How to Get Started with the Model
```python
import torch
from transformers import pipeline, set_seed
model_name = "amusktweewt/tiny-model-500M-chat-v2-5-exp"
chatbot = pipeline(
    "text-generation",
    model=model_name,
    device=0
)

set_seed(42)

print("Chatbot is ready! Type 'exit' to end the conversation.")

while True:
    user_input = input("You: ").strip()
    if user_input.lower() == "exit":
        print("Exiting chat. Goodbye!")
        break

    # Build a single-turn conversation and format it with the model's chat template.
    messages = [
        {"role": "user", "content": user_input},
        {"role": "assistant", "content": ""}
    ]
    prompt = chatbot.tokenizer.apply_chat_template(messages, tokenize=False)

    # Generate text using the formatted prompt.
    response = chatbot(
        prompt,
        do_sample=True,
        max_new_tokens=512,
        top_k=50,
        temperature=0.1,
        num_return_sequences=1,
        repetition_penalty=1.1,
        pad_token_id=chatbot.tokenizer.eos_token_id,
        min_new_tokens=0
    )

    # Strip the prompt from the generated text to keep only the new reply.
    full_text = response[0]["generated_text"]
    bot_response = full_text[len(prompt):].strip()
    print(f"Bot: {bot_response}")
```
## Technical Specifications
### Model Architecture and Objective
The model follows a **Transformer-based architecture** optimized for causal language modeling tasks.
- Attention heads: 16
- Hidden size: 1536
- Flash attention and memory-efficient attention enabled
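As a non-authoritative sketch, Flash Attention 2 can be requested at load time through the standard `transformers` API; this requires the `flash-attn` package and a supported GPU.

```python
import torch
from transformers import AutoModelForCausalLM

# Loading sketch that requests the Flash Attention 2 kernels; requires the
# flash-attn package and a supported GPU. "sdpa" is a reasonable fallback.
model = AutoModelForCausalLM.from_pretrained(
    "amusktweewt/tiny-model-500M-chat-v2-5-exp",
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
)
```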
### Compute Infrastructure
#### Hardware
- Single GPU (NVIDIA 4090)
#### Software
- Python 3.8+
- HuggingFace Transformers 4.48.0
- PyTorch 2.4
## Environmental Impact
- **Training Hours:** 216 hours
- **Hardware:** NVIDIA 4090
- **Carbon Emitted:** 9.07 kg CO2 eq
## Model Card Authors
amusktweewt
## Model Card Contact
For questions or feedback, contact amusktweewt.