Aaltjo committed
Commit 065b2d5 · verified · 1 Parent(s): 2056485

Upload README.md with huggingface_hub

Files changed (1): README.md +46 -0

README.md CHANGED

---
license: mit
datasets:
- openwebtext
- alpaca
tags:
- text-generation
- gpt-2
- openwebtext
- alpaca
model_name: gpt2-124M
language: en
---

# GPT-2 124M Fine-tuned on OpenWebText and Alpaca

This model is a fine-tuned version of GPT-2 (124M parameters), first trained on the OpenWebText dataset and then fine-tuned on the Alpaca dataset.

## Model Description

The model is based on the GPT-2 architecture and was fine-tuned on two datasets in sequence:

1. **OpenWebText**: the model was first trained on the OpenWebText dataset for 600,000 iterations.
2. **Alpaca**: the model was then fine-tuned on the Alpaca dataset for a further 50,000 iterations.

Training ran on a **laptop with an RTX 3060 GPU** for a total of **650,000 iterations** (approximately **8 days**); a reproduction sketch follows below.
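This repository does not include the training script itself; the sketch below shows how a comparable two-stage schedule could be set up with the Hugging Face `Trainer`. The dataset identifiers (`openwebtext`, `tatsu-lab/alpaca`), sequence length, batch size, and save interval are illustrative assumptions, not the author's actual configuration.

```python
# Minimal sketch of a two-stage fine-tuning schedule (assumptions noted above).
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

def tokenize(batch):
    # Both datasets expose a "text" column; truncate to a fixed context length.
    return tokenizer(batch["text"], truncation=True, max_length=512)

def run_stage(dataset_name, max_steps, output_dir):
    dataset = load_dataset(dataset_name, split="train")
    dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)
    args = TrainingArguments(
        output_dir=output_dir,
        max_steps=max_steps,              # step budget for this stage
        per_device_train_batch_size=4,    # assumption: small batch for a laptop GPU
        save_steps=10_000,
    )
    Trainer(
        model=model,                      # updated in place, so stage 2 continues from stage 1
        args=args,
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    ).train()

# Stage 1: 600k steps on OpenWebText; stage 2: 50k further steps on Alpaca.
run_stage("openwebtext", 600_000, "stage1-openwebtext")
run_stage("tatsu-lab/alpaca", 50_000, "stage2-alpaca")
```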

## Hardware Details

- **GPU**: laptop with an **RTX 3060**
- **Training time**: approximately **8 days** for the full run
- **Total iterations**: 650,000 (600,000 on OpenWebText + 50,000 on Alpaca)

## How to Use

You can use this model for text generation with the Hugging Face `transformers` library:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the fine-tuned weights and the matching tokenizer from the Hub.
model = GPT2LMHeadModel.from_pretrained("Aaltjo/gpt2-124M-openwebtext-alpacha")
tokenizer = GPT2Tokenizer.from_pretrained("Aaltjo/gpt2-124M-openwebtext-alpacha")

input_text = "Once upon a time"
inputs = tokenizer(input_text, return_tensors="pt")

# Cap the continuation length and set a pad token (GPT-2 has none by default).
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
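
By default `generate` decodes greedily; for more varied text you can enable sampling. The decoding values below are illustrative choices, not settings recommended by the author:

```python
# Sampled generation -- temperature and top-p here are illustrative defaults.
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
    max_new_tokens=100,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```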