smcleish committed
Commit ad93007 · verified · 1 Parent(s): 3bc0ab7

Upload README.md with huggingface_hub

Files changed (1)
  README.md +3 -3
README.md CHANGED
@@ -9,8 +9,8 @@ datasets:
 - allenai/dolma
 ---
 
-# Gemstone-512x12
-Gemstone-512x12 is part of the [Gemstone Suite of Models](https://huggingface.co/collections/tomg-group-umd/gemstone-models-679408ee3f19f1d4d00e8b10), a set of models trained with varying widths and depths.
+# Gemstone-256x23
+Gemstone-256x23 is part of the [Gemstone Suite of Models](https://huggingface.co/collections/tomg-group-umd/gemstone-models-679408ee3f19f1d4d00e8b10), a set of models trained with varying widths and depths.
 
 ## Training
 We train using [litgpt](https://github.com/Lightning-AI/litgpt) and [AxoNN](https://github.com/axonn-ai/litgpt) on AMD MI250X GPUs on [Frontier](https://www.olcf.ornl.gov/olcf-resources/compute-systems/frontier/) at Oak Ridge National Laboratory, with a global batch size of 2048.
@@ -19,7 +19,7 @@ We train using [litgpt](https://github.com/Lightning-AI/litgpt) and [AxoNN](https://github.com/axonn-ai/litgpt)
 Train and validation data are taken from non-overlapping subsets of [dolma](https://huggingface.co/datasets/allenai/dolma); as such, this is _not_ an instruction model.
 This model is trained for 350 billion tokens; we upload checkpoints every 2 billion tokens (477 steps).
 
-## Using Gemstone-512x12
+## Using Gemstone-256x23
 The Gemstones are based on the [gemma-2b](https://huggingface.co/google/gemma-2b) architecture and use [modeling_gemma.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gemma/modeling_gemma.py) to run with the transformers library.
 
 ## Licence
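
A note on the training numbers in the card: the stated cadence of one checkpoint per 2 billion tokens (477 steps) is consistent with the global batch size of 2048 only if each sequence is roughly 2048 tokens long. The card does not state the sequence length, so this is an inference; a minimal sketch of the arithmetic:

```python
# Sanity-check the checkpoint cadence stated in the README:
# 477 optimizer steps at a global batch size of 2048 sequences
# reach ~2 billion tokens only if sequences are ~2048 tokens long.
steps = 477                          # steps per checkpoint interval (from the card)
global_batch_size = 2048             # sequences per step (from the card)
tokens_per_interval = 2_000_000_000  # tokens per checkpoint interval (from the card)

seq_len = tokens_per_interval / (steps * global_batch_size)
print(round(seq_len))  # -> 2047, i.e. a ~2048-token context (inferred, not stated)
```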
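
For completeness, a minimal sketch of running the checkpoint with the transformers library, as the "Using Gemstone-256x23" section describes. The repo id `tomg-group-umd/Gemstone-256x23` is an assumption inferred from the collection link, not stated in this diff, and should be verified on the Hub:

```python
# Minimal sketch of loading a Gemstone checkpoint with transformers.
# Assumption: the checkpoint lives at "tomg-group-umd/Gemstone-256x23"
# (inferred from the collection link); verify the repo id before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tomg-group-umd/Gemstone-256x23"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Deep learning is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since the card notes this is a base model trained on dolma and _not_ an instruction model, prompts should be plain-text continuations rather than chat-style instructions.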