Aaltjo committed · Commit 84c418f · verified · 1 Parent(s): eb4f8fc

Update README.md

Files changed (1):
  1. README.md +53 -49
README.md CHANGED

---
license: mit
datasets:
- openwebtext
- alpacha
tags:
- text-generation
- gpt-2
- openwebtext
- alpacha
model_name: gpt2-124M
language: en
---

# GPT-2 124M Fine-tuned on OpenWebText and Alpaca

This model is a GPT-2 checkpoint (124M parameters) that was trained on the OpenWebText dataset and then further fine-tuned on the Alpaca dataset.

## Model Description

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d46028974775cbcb214e42/Kgv0Z-hx0opfTdyE2khux.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d46028974775cbcb214e42/poPiR0yN0Ar_hqVGliNKE.png)

This model is based on the GPT-2 architecture and was fine-tuned on two datasets in sequence:

1. **OpenWebText**: the model was first trained on the OpenWebText dataset for 600,000 iterations.
2. **Alpaca**: the model was then fine-tuned on the Alpaca dataset for a further 50,000 iterations.

The model was trained on a **laptop with an RTX 3060 GPU** for a total of **650,000 iterations** (approximately **8 days** of training).
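
The exact training script is not part of this card. As a rough, hypothetical sketch of what the second fine-tuning stage could look like with the Hugging Face `Trainer` (the `tatsu-lab/alpaca` dataset name, the prompt formatting, and every hyperparameter below are assumptions, not the settings actually used for this model):

```python
# Hypothetical illustration only -- not the script used to train this checkpoint.
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2Tokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

def to_features(example):
    # Flatten one Alpaca record (instruction / input / output) into plain text.
    text = f"Instruction: {example['instruction']}\n"
    if example["input"]:
        text += f"Input: {example['input']}\n"
    text += f"Response: {example['output']}"
    return tokenizer(text, truncation=True, max_length=512)

dataset = load_dataset("tatsu-lab/alpaca", split="train").map(
    to_features, remove_columns=["instruction", "input", "output", "text"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2-alpaca-finetune",
        per_device_train_batch_size=4,   # kept small for a 6 GB laptop GPU
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=3e-5,
        fp16=True,                       # requires a CUDA GPU
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```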

## Hardware Details

- **GPU**: laptop with an **RTX 3060**
- **Training Time**: approximately **8 days**
- **Total Iterations**: 650,000 (600,000 on OpenWebText + 50,000 on Alpaca)

## How to Use

You can use this model for text generation with the Hugging Face `transformers` library:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the fine-tuned checkpoint and its tokenizer from the Hub
model = GPT2LMHeadModel.from_pretrained("Aaltjo/gpt2-124M-openwebtext-alpacha")
tokenizer = GPT2Tokenizer.from_pretrained("Aaltjo/gpt2-124M-openwebtext-alpacha")

input_text = "Once upon a time"
inputs = tokenizer(input_text, return_tensors="pt")

# GPT-2 has no pad token, so pass the EOS id explicitly; max_new_tokens
# controls how much text is generated beyond the prompt.
outputs = model.generate(**inputs, max_new_tokens=50, pad_token_id=tokenizer.eos_token_id)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
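
For quick experiments, the same checkpoint can also be used through the high-level `pipeline` API (the sampling settings below are illustrative defaults, not values tuned for this model):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Aaltjo/gpt2-124M-openwebtext-alpacha")

# Sample a continuation instead of greedy decoding for more varied output.
result = generator(
    "Once upon a time",
    max_new_tokens=80,
    do_sample=True,
    top_k=50,
    temperature=0.8,
)
print(result[0]["generated_text"])
```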