lyy001 committed
Commit cd4ac37 · verified · 1 parent: e0ca3ee

Update README.md

Files changed (1)
  1. README.md +9 -3
README.md CHANGED
@@ -12,7 +12,7 @@ In this work, we present a hybrid model MaTVLM
  by substituting a portion of the transformer decoder layers in a pre-trained VLM with Mamba-2 layers. Leveraging the inherent relationship between attention and Mamba-2, we initialize Mamba-2 with corresponding attention weights to accelerate convergence. Subsequently, we employ a single-stage distillation process, using the pre-trained VLM as the teacher model to transfer knowledge to the \name, further enhancing convergence speed and performance. Furthermore, we investigate the impact of differential distillation loss within our training framework.
  We evaluate the MaTVLM on multiple benchmarks, demonstrating competitive performance against the teacher model and existing VLMs while surpassing both Mamba-based VLMs and models of comparable parameter scales. Remarkably, the MaTVLM achieves up to $3.6\times$ faster inference than the teacher model while reducing GPU memory consumption by 27.5\%, all without compromising performance.
  
- Paper: []()
+ Paper: [https://arxiv.org/abs/2503.13440](https://arxiv.org/abs/2503.13440)
  
  Code: [https://github.com/hustvl/MaTVLM](https://github.com/hustvl/MaTVLM)
  
@@ -20,7 +20,13 @@ Code: [https://github.com/hustvl/MaTVLM](https://github.com/hustvl/MaTVLM)
  If you find MaTVLM is useful in your research or applications, please consider giving us a star 🌟 and citing it by the following BibTeX entry.
  
  ```bibtex
- @article{,
- 
+ @misc{li2025matvlmhybridmambatransformerefficient,
+ title={MaTVLM: Hybrid Mamba-Transformer for Efficient Vision-Language Modeling},
+ author={Yingyue Li and Bencheng Liao and Wenyu Liu and Xinggang Wang},
+ year={2025},
+ eprint={2503.13440},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV},
+ url={https://arxiv.org/abs/2503.13440},
  }
  ```
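The README text in this diff describes a single-stage distillation step in which the pre-trained VLM acts as the teacher for the hybrid Mamba-Transformer student. As a rough illustration only — a minimal sketch, not the MaTVLM implementation; the function name, `temperature`, and `alpha` weighting are assumptions — such a distillation objective is typically a weighted sum of a soft KL term on the teacher's logits and the usual cross-entropy on ground-truth tokens:

```python
# Hedged sketch of a generic teacher-student distillation loss of the kind
# the README describes. Names and weighting are illustrative, not taken
# from the MaTVLM repository.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Combine soft-label KL distillation with standard cross-entropy.

    student_logits, teacher_logits: (batch, seq_len, vocab_size)
    labels: (batch, seq_len) token ids, with -100 marking ignored positions.
    """
    # Soft targets: KL divergence between temperature-scaled distributions,
    # rescaled by T^2 so gradients stay comparable across temperatures.
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Hard targets: ordinary next-token cross-entropy on ground-truth labels.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )

    # Weighted sum of the soft (teacher) and hard (label) terms.
    return alpha * kl + (1.0 - alpha) * ce
```

The loss actually used in the paper, including the distillation-loss variants it refers to, may differ; the repository linked in the diff above is the authoritative implementation.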