Feature Extraction · Transformers · Safetensors · ModularStarEncoder · custom_code
andreagurioli1995 committed on
Commit 1fd6a72 · verified · 1 Parent(s): 8d946e5

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -20,7 +20,7 @@ To enhance efficiency, we replaced the causal self-attention layers with bidirec
 Finally, our implementation integrates FlashAttention V2 for faster inference.


- - **Paper:** [One Model to Train them All: Hierarchical Self-Distillation for Enhanced Early Layer Embeddings](https://arxiv.org/abs/2503.03008)
+ - **Paper:** [MoSE: Hierarchical Self-Distillation Enhances Early Layer Embeddings](https://arxiv.org/abs/2503.03008)
 - **Languages:** 600+ Programming languages


@@ -86,8 +86,8 @@ The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can

 # Citation
 ```
- @article{gurioli2025modeltrainallhierarchical,
- title={One Model to Train them All: Hierarchical Self-Distillation for Enhanced Early Layer Embeddings},
+ @article{gurioli2025mosehierarchicalselfdistillationenhances,
+ title={MoSE: Hierarchical Self-Distillation Enhances Early Layer Embeddings},
 author={Andrea Gurioli and Federico Pennino and João Monteiro and Maurizio Gabbrielli},
 year={2025},
 eprint={2503.03008},
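For context on the unchanged lines above (a custom_code model that integrates FlashAttention V2 and targets 600+ programming languages), here is a minimal sketch of how one might load ModularStarEncoder for feature extraction with the transformers library. The repo id and the output attribute are assumptions for illustration, not taken from this commit; FlashAttention 2 additionally requires a compatible GPU and the flash-attn package.

```python
# Minimal sketch, not from this commit: loading ModularStarEncoder for feature
# extraction. The repo id below is an assumption; adjust to the actual model page.
import torch
from transformers import AutoModel, AutoTokenizer

repo_id = "andreagurioli1995/ModularStarEncoder"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    repo_id,
    trust_remote_code=True,                    # the card ships custom code
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",   # FlashAttention V2, per the card
)
model.eval()

# Embed a small code snippet (the model targets 600+ programming languages).
inputs = tokenizer("def add(a, b):\n    return a + b", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# A standard encoder exposes token embeddings as last_hidden_state; the custom
# head may return pooled or per-layer embeddings instead.
print(outputs.last_hidden_state.shape)
```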