Add model card
This PR adds a model card for the paper [RADLADS: Rapid Attention Distillation to Linear Attention Decoders at Scale](https://huggingface.co/papers/2505.03005).
It adds the Apache 2.0 license, the Transformers library, the text-generation pipeline tag, a link to the paper, and a link to the code repository.
Please review and merge this PR if everything looks good.
README.md
ADDED
@@ -0,0 +1,9 @@
+---
+license: apache-2.0
+library_name: transformers
+pipeline_tag: text-generation
+---
+
+This repository contains the RADLADS models as presented in the paper [RADLADS: Rapid Attention Distillation to Linear Attention Decoders at Scale](https://huggingface.co/papers/2505.03005).
+
+More information can be found at the GitHub repository: https://github.com/recursal/RADLADS-paper
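The `license`, `library_name`, and `pipeline_tag` keys sit in the model card's YAML front matter, the block between the two `---` delimiters that the Hub reads for metadata. As a minimal sketch (plain Python, assuming only the simple `key: value` layout shown in this diff, not full YAML), the block can be parsed like this:

```python
def parse_front_matter(readme_text):
    """Extract key: value pairs from a README's YAML front matter.

    Handles only the flat layout used in this model card; a real
    model card with nested YAML would need a proper YAML parser.
    """
    lines = readme_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}  # no front matter block at the top
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # closing delimiter ends the metadata block
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

readme = """---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---

This repository contains the RADLADS models.
"""

meta = parse_front_matter(readme)
print(meta["license"])       # apache-2.0
print(meta["pipeline_tag"])  # text-generation
```

This mirrors exactly the three metadata fields the PR adds; the Hub uses them to display the license badge, associate the repo with the Transformers library, and tag the text-generation pipeline.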