# GPT-2 124M Fine-tuned on OpenWebText and Alpaca

This model is a fine-tuned version of GPT-2 (124M parameters), trained on the OpenWebText dataset and then fine-tuned on the Alpaca dataset.

## Model Description

This model is based on the GPT-2 architecture and was trained in two stages:

1. **OpenWebText**: the model was first trained on the OpenWebText dataset for 600,000 iterations.
2. **Alpaca**: the model was then fine-tuned on the Alpaca dataset for the remaining 50,000 iterations.

The model was trained on a **laptop with an RTX 3060 GPU** for a total of **650,000 iterations** (approximately **8 days** of training).
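
The training code itself is not part of this repository. As a rough illustration only, here is a minimal sketch of what the second (Alpaca) stage could look like using the Hugging Face `Trainer`; the dataset id `tatsu-lab/alpaca`, the 512-token truncation, and every hyperparameter below are assumptions made for the sketch, not the author's actual settings.

```python
# Minimal causal-LM fine-tuning sketch (NOT the author's training code).
# Assumes the Stanford Alpaca data published as "tatsu-lab/alpaca";
# batch size and accumulation are picked to fit a 6 GB laptop RTX 3060.
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = GPT2LMHeadModel.from_pretrained("gpt2")  # a stage-1 checkpoint would go here

dataset = load_dataset("tatsu-lab/alpaca", split="train")

def tokenize(batch):
    # The "text" column holds the fully formatted instruction/response prompt
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="gpt2-alpaca-ft",
    per_device_train_batch_size=4,   # small batch for a 6 GB GPU
    gradient_accumulation_steps=8,   # effective batch size of 32
    learning_rate=3e-5,
    max_steps=50_000,                # the README's second-stage length
    fp16=True,
    logging_steps=500,
    save_steps=10_000,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal) language-modeling labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```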

## Hardware Details

- **GPU**: laptop **RTX 3060**
- **Training time**: approximately **8 days**
- **Total iterations**: 650,000 (600,000 on OpenWebText + 50,000 on Alpaca)

## How to Use

You can use this model for text generation with the Hugging Face `transformers` library:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the fine-tuned weights and the matching tokenizer from the Hub
model = GPT2LMHeadModel.from_pretrained("Aaltjo/gpt2-124M-openwebtext-alpacha")
tokenizer = GPT2Tokenizer.from_pretrained("Aaltjo/gpt2-124M-openwebtext-alpacha")

# Encode a prompt and generate a continuation with default (greedy) decoding
input_text = "Once upon a time"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
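
Greedy decoding (the default for `generate`) is deterministic and tends to repeat itself; for more varied output you can enable sampling. The parameters below are standard `generate` arguments, and the specific values are just reasonable starting points:

```python
# Sampling instead of greedy decoding for livelier continuations
outputs = model.generate(
    **inputs,
    max_new_tokens=100,                   # cap the length of the continuation
    do_sample=True,                       # sample instead of picking the argmax
    temperature=0.8,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # silences the missing-pad-token warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```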