davanstrien (HF staff) committed
Commit d9f025a · verified · 1 Parent(s): 7d46fbd

Update README.md

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -42,9 +42,11 @@ The model was trained on:
 - Model card summaries generated by Llama 3.3 70B
 - Dataset card summaries generated by Llama 3.3 70B

+Model context length: the model was trained with cards up to a length of 2048 tokens
+
 ## Usage

-Using the chat template when using the model in inference is recommended. Additionally, you should prepend either `<MODEL_CARD>` or `<DATASET_CARD>` to the start of the card you want to summarize. The training data used the body of the model or dataset card, i.e., the part after the YAML, so you will likely get better results only by passing this part of the card.
+Using the chat template when using the model in inference is recommended. Additionally, you should prepend either `<MODEL_CARD>` or `<DATASET_CARD>` to the start of the card you want to summarize. The training data used the body of the model or dataset card (i.e., the part after the YAML), so you will likely get better results by passing only this part of the card.

 I have so far found that a low temperature of `0.4` generates better results.
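
For context on the usage guidance added in this commit, the sketch below shows one way to apply it with the `transformers` chat-template API. It is a minimal sketch, not the repository's official example: the model id, the card text, and the `max_new_tokens` budget are placeholders I have assumed, not values taken from the repo.

```python
# Minimal sketch of the usage described in the updated README.
# Assumptions: the repository id below is a placeholder, the card body is
# supplied by you, and standard transformers generation settings apply.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-org/your-card-summarizer"  # placeholder, not the real repo name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Pass only the body of the card (the part after the YAML front matter),
# prefixed with <MODEL_CARD> or <DATASET_CARD> as the README recommends.
card_body = "..."  # markdown body of the card you want to summarize
messages = [{"role": "user", "content": "<MODEL_CARD>" + card_body}]

# Use the chat template and keep the prompt within the 2048-token training context.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    truncation=True,
    max_length=2048,
)

# A low temperature (0.4) was reported to give better summaries.
output_ids = model.generate(
    input_ids,
    max_new_tokens=128,  # placeholder budget for the summary length
    do_sample=True,
    temperature=0.4,
)

summary = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(summary)
```

Truncating at `max_length=2048` mirrors the context-length note added in this commit, so longer cards are cut rather than exceeding the training context.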