Update README.md
README.md CHANGED

@@ -20,11 +20,11 @@ This model is a fine-tuned version of [togethercomputer/LLaMA-2-7B-32K](https://
 Model can be used for text-to-code generation and for further fine-tuning,
 Colab notebook example (on free T4 GPU) soon!
 
-Datasets used:
+## Datasets used:
 
-evol-codealpaca-80k - 10000 entries
-codealpaca-20k - 10000 entries
-open-platypus - 5000 entries
+- evol-codealpaca-80k - 10000 entries
+- codealpaca-20k - 10000 entries
+- open-platypus - 5000 entries
 
 ## Intended uses & limitations
 
@@ -35,10 +35,6 @@ you need to fine-tune it for your specific task.
 
 See 'Metrics'
 
-## Training procedure
-
-Soon! (leave a like / sign of life to let me know you need one)
-
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
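Since the fine-tuning data (evol-codealpaca-80k, codealpaca-20k, open-platypus) follows the Alpaca instruction format, a text-to-code prompt for this model can plausibly be built as below. This is a sketch only: the exact template used during fine-tuning is not stated in the README, and the `build_prompt` helper name is hypothetical.

```python
# Sketch of an Alpaca-style prompt for text-to-code generation.
# Assumption: the fine-tuned model expects the standard codealpaca
# template; the README does not confirm the exact format.

def build_prompt(instruction: str, input_text: str = "") -> str:
    """Build an Alpaca-style instruction prompt, with an optional input block."""
    header = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
    )
    if input_text:
        return (
            header
            + f"### Instruction:\n{instruction}\n\n"
            + f"### Input:\n{input_text}\n\n"
            + "### Response:\n"
        )
    return header + f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_prompt("Write a Python function that reverses a string.")
print(prompt)
```

The resulting string would be passed to the tokenizer and the model's `generate` call; the model's completion is everything after the `### Response:` marker.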