Add pipeline tag, library name and link to code (#1)
- Add pipeline tag, library name and link to code (012b277526219e64217a1f1142444014b602e308)
- modify to pr #1 (c86ae57a2ecaf4e1dc2a92130df39005788c2ded)
Co-authored-by: Niels Rogge <[email protected]>
README.md
CHANGED
@@ -1,50 +1,58 @@
---
datasets:
- liuhaotian/LLaVA-Instruct-150K
license: llama2
library_name: transformers
pipeline_tag: image-text-to-text
base_model:
- liuhaotian/llava-v1.5-7b
---

# Ada-LLaVA Model Card

<!-- Provide a quick summary of what the model is/does. -->

Ada-LLaVA 7B is an open-source adaptive inference framework for multimodal Large Language Models (MLLMs) that dynamically adjusts its operations based on available computational resources and latency requirements.

See the paper for more details: [Learning to Inference Adaptively for Multimodal Large Language Models](https://huggingface.co/papers/2503.10905)

**Model details**: https://zhuoyan-xu.github.io/ada-llava/

**GitHub repository**: https://github.com/zhuoyan-xu/AdaLLaVA
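
To make the adaptive-inference idea above concrete, here is a purely illustrative, minimal sketch of a latency scheduler that gates which of the later decoder layers run under a given budget. The class name, shapes, and thresholding are assumptions for illustration only and do not reflect the actual AdaLLaVA implementation; see the GitHub repository for the real code.

```python
# Illustrative toy scheduler; NOT the AdaLLaVA implementation.
import torch
import torch.nn as nn

class ToyLatencyScheduler(nn.Module):
    """Maps a prompt summary and a latency budget to keep/skip decisions over the later decoder layers."""

    def __init__(self, hidden_size: int = 4096, num_controlled_layers: int = 16):
        super().__init__()
        self.budget_embed = nn.Linear(1, hidden_size)  # embed the scalar latency budget
        self.mlp = nn.Sequential(                      # small MLP head: one logit per controlled layer
            nn.Linear(hidden_size * 2, hidden_size),
            nn.GELU(),
            nn.Linear(hidden_size, num_controlled_layers),
        )

    def forward(self, prompt_summary: torch.Tensor, budget: torch.Tensor) -> torch.Tensor:
        # prompt_summary: (batch, hidden_size); budget: (batch, 1), e.g. a fraction of full-model latency
        features = torch.cat([prompt_summary, self.budget_embed(budget)], dim=-1)
        return (self.mlp(features).sigmoid() > 0.5).float()  # 1 = run the layer, 0 = skip it

scheduler = ToyLatencyScheduler()
plan = scheduler(torch.randn(1, 4096), torch.tensor([[0.5]]))
print(plan)  # a (1, 16) binary execution plan; tighter budgets push the plan toward skipping more layers
```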

## Model Details

<!-- Provide a longer summary of what this model is. -->

**Model Type:** Ada-LLaVA 7B follows the [LLaVA-v1.5](https://arxiv.org/abs/2310.03744) stage-2 training pipeline, with [CLIP-ViT-L-336px](https://huggingface.co/openai/clip-vit-large-patch14-336) as the visual encoder (336x336 image resolution), [Vicuna-v1.5-7B](https://huggingface.co/lmsys/vicuna-7b-v1.5) as the base LLM, a two-layer MLP as the vision-language connector, and a customized embedding model plus MLP as the latency scheduler.
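
As a minimal sketch of the connector described above, the snippet below builds the standard LLaVA-v1.5 `mlp2x_gelu` projector; the dimensions assume CLIP-ViT-L-336px patch features of size 1024 and a Vicuna-7B hidden size of 4096, and the AdaLLaVA-specific latency scheduler is not included.

```python
# Sketch of the two-layer MLP vision-language connector (LLaVA-v1.5 "mlp2x_gelu" layout).
import torch
import torch.nn as nn

clip_hidden_size = 1024   # CLIP-ViT-L-336px patch feature size
llm_hidden_size = 4096    # Vicuna-v1.5-7B hidden size

mm_projector = nn.Sequential(
    nn.Linear(clip_hidden_size, llm_hidden_size),
    nn.GELU(),
    nn.Linear(llm_hidden_size, llm_hidden_size),
)

# 576 patch tokens for a 336x336 image with 14x14 patches (a 24x24 grid)
image_features = torch.randn(1, 576, clip_hidden_size)
visual_tokens = mm_projector(image_features)  # (1, 576, 4096), concatenated with text tokens for the LLM
```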

It was trained with the same stage-2 pipeline as LLaVA:

Instruction tuning: freeze the vision encoder and train the remaining model with multimodal instruction-following data covering tabular and non-tabular tasks.

**Code Base:** We use the official code of [LLaVA-v1.5](https://github.com/haotian-liu/LLaVA) for model training and inference; the saved model checkpoint is uploaded to this repository.
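
As a hedged, minimal sketch (not this repository's documented API): single-image inference through the upstream LLaVA-v1.5 code path mirrors the LLaVA repo's quickstart. The AdaLLaVA fork may expose additional latency-budget arguments, and `model_path` below is a placeholder rather than a real checkpoint id.

```python
# Quickstart-style inference via the upstream LLaVA-v1.5 code base (placeholder model path).
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

model_path = "path/or/hub-id/of/this-ada-llava-checkpoint"  # placeholder, not a real repo id

args = type("Args", (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": "Describe this image.",
    "conv_mode": None,
    "image_file": "https://llava-vl.github.io/static/images/view.jpg",
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

eval_model(args)  # loads the checkpoint, runs the vision encoder + LLM, and prints the answer
```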

**Model Date:** Ada-LLaVA 7B was trained in October 2024.

## License

AdaLLaVA is based on LLaVA-1.5 and thus follows its license. Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

## Intended use

**Primary intended uses:** The primary use of Ada-LLaVA is research on large multimodal models and chatbots, especially for resource-constrained inference and deployment.

**Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset

- 665K image-level instruction data from LLaVA-1.5 stage-2; see details in the original [LLaVA repo](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#visual-instruction-tuning).

## Limitations

While Ada-LLaVA is currently limited to processing one image at a time and only applies adaptive operations in the latter half of its layers, future work could explore multi-image input support and extend the adaptive mechanisms throughout the entire model architecture, including the vision encoder. These improvements would make the model more versatile and applicable to a broader range of real-world scenarios.