ZennyKenny committed on
Commit ecc47ac
1 Parent(s): eb21a4e

support relative links with anchors


Hugging Face Markdown does not resolve `#style` heading links (tested on macOS + Brave Browser); the proposed solution is to add explicit `<a name="anchors"></a>` tags so that relative links to particular document headers work. The pattern is sketched below. Love StarCoder2, by the way.
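For context, a minimal sketch of the pattern this commit applies (illustrative only; the heading and anchor names mirror the README, but the snippet itself is not part of the diff):

```md
## Table of Contents

1. [Model Summary](#model-summary)

## Model Summary
<a name="model-summary"></a>

<!-- The TOC link targets the explicit <a name> anchor rather than relying on an auto-generated heading id. -->
```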

Files changed (1)
  1. README.md +13 -6
README.md CHANGED

@@ -75,14 +75,16 @@ model-index:
 
 ## Table of Contents
 
-1. [Model Summary](##model-summary)
-2. [Use](##use)
-3. [Limitations](##limitations)
-4. [Training](##training)
-5. [License](##license)
-6. [Citation](##citation)
+1. [Model Summary](#model-summary)
+2. [Use](#use)
+3. [Limitations](#limitations)
+4. [Training](#training)
+5. [License](#license)
+6. [Citation](#citation)
 
 ## Model Summary
+<a name="model-summary"></a>
+
 
 StarCoder2-7B model is a 7B parameter model trained on 17 programming languages from [The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2-train), with opt-out requests excluded. The model uses [Grouped Query Attention](https://arxiv.org/abs/2305.13245), [a context window of 16,384 tokens](https://arxiv.org/abs/2205.14135) with [a sliding window attention of 4,096 tokens](https://arxiv.org/abs/2004.05150v2), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 3.5+ trillion tokens.
 
@@ -92,6 +94,7 @@ StarCoder2-7B model is a 7B parameter model trained on 17 programming languages
 - **Languages:** 17 Programming languages
 
 ## Use
+<a name="use"></a>
 
 ### Intended use
 
@@ -178,10 +181,12 @@ Memory footprint: 4197.64 MB
 The pretraining dataset of the model was filtered for permissive licenses and code with no license only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/search-v2) that lets you search through the pretraining data to identify where the generated code came from and apply the proper attribution to your code.
 
 # Limitations
+<a name="limitations"></a>
 
 The model has been trained on source code from 600+ programming languages. The predominant language in source is English although other languages are also present. As such the model is capable of generating code snippets provided some context but the generated code is not guaranteed to work as intended. It can be inefficient and contain bugs or exploits. See [the paper](https://huggingface.co/papers/2402.19173) for an in-depth discussion of the model limitations.
 
 # Training
+<a name="training"></a>
 
 ## Model
 
@@ -200,10 +205,12 @@ The model has been trained on source code from 600+ programming languages. The p
 - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
 
 # License
+<a name="license"></a>
 
 The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
 
 # Citation
+<a name="citation"></a>
 
 ```bash
 @misc{lozhkov2024starcoder,