patrickvonplaten mcfadyeni committed on
Commit a770607 · verified · 1 Parent(s): 61a2215

Small README.md typo fixes (#4)


- Small README.md typo fixes (ec1cdd1024d54d6c490cf5f69da53b6af18dc3cc)


Co-authored-by: Isaac McFadyen <[email protected]>

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -50,8 +50,8 @@ The model was presented in the paper [Magistral](https://huggingface.co/papers/2
 ## Updates compared with [Magistral Small 1.1](https://huggingface.co/mistralai/Magistral-Small-2507)
 
 - **Multimodality**: The model now has a vision encoder and can take multimodal inputs, extending its reasoning capabilities to vision.
-- **Performance upgrade**: Magistral Small 1.2 should give you significatively better performance than Magistral Small 1.1 as seen in the [benchmark results](#benchmark-results).
-- **Better tone and persona**: You should experiment better LaTeX and Markdown formatting, and shorter answers on easy general prompts.
+- **Performance upgrade**: Magistral Small 1.2 should give you significantly better performance than Magistral Small 1.1 as seen in the [benchmark results](#benchmark-results).
+- **Better tone and persona**: You should experience better LaTeX and Markdown formatting, and shorter answers on easy general prompts.
 - **Finite generation**: The model is less likely to enter infinite generation loops.
 - **Special think tokens**: [THINK] and [/THINK] special tokens encapsulate the reasoning content in a thinking chunk. This makes it easier to parse the reasoning trace and prevents confusion when the '[THINK]' token is given as a string in the prompt.
 - **Reasoning prompt**: The reasoning prompt is given in the system prompt.
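
The README bullet above says the [THINK] and [/THINK] special tokens make the reasoning trace easy to parse out of the model output. As a minimal sketch of what such parsing could look like (the function name and the sample output string are illustrative, not part of the model's API):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a decoded model response into (reasoning, answer) using the
    [THINK]...[/THINK] markers described in the README. If no thinking
    chunk is found, the reasoning part is returned empty."""
    match = re.search(r"\[THINK\](.*?)\[/THINK\]", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()  # everything after the closing token
    return reasoning, answer

# Hypothetical decoded output, just to exercise the helper:
output = "[THINK]First, recall that 2 + 2 = 4.[/THINK]The answer is 4."
reasoning, answer = split_reasoning(output)
print(reasoning)  # → First, recall that 2 + 2 = 4.
print(answer)     # → The answer is 4.
```

Because [THINK] and [/THINK] are single special tokens in the tokenizer, a user-supplied literal '[THINK]' string in the prompt tokenizes differently and will not be confused with the real thinking-chunk delimiters.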