nielsr (HF Staff) committed on
Commit 59cef09 · verified · 1 Parent(s): 3a230c1

Improve model card: Add metadata, paper, and GitHub links


This PR enhances the model card for `AndesVL-4B-Thinking` by:
- Adding `library_name: transformers` to the metadata, enabling the automated "How to use" widget (see the usage sketch after this description).
- Adding `pipeline_tag: image-text-to-text` to the metadata, which helps categorize the model and improve its discoverability on the Hugging Face Hub.
- Adding a direct link to the paper ([AndesVL Technical Report: An Efficient Mobile-side Multimodal Large Language Model](https://huggingface.co/papers/2510.11496)) for easy access to the research.
- Including a link to the [GitHub repository](https://github.com/OPPO-Mente-Lab/AndesVL_Evaluation) for the associated evaluation toolkit and code.

These additions give users more comprehensive information and improve the model's discoverability and usability on the Hub.
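With `library_name: transformers` and `pipeline_tag: image-text-to-text` set, the Hub can surface a standard loading snippet. The sketch below is a minimal, hedged example of what such usage might look like: it assumes the repository ships remote code compatible with the transformers `image-text-to-text` pipeline, and the repo id used is hypothetical, so the exact call may differ from the snippet shown on the model page.

```python
# Hedged usage sketch, not the official example: assumes the repo provides remote
# code compatible with the standard transformers "image-text-to-text" pipeline.
from transformers import pipeline

pipe = pipeline(
    task="image-text-to-text",                   # matches the pipeline_tag added in this PR
    model="OPPO-Mente-Lab/AndesVL-4B-Thinking",  # hypothetical repo id
    trust_remote_code=True,                      # custom MLLM code typically requires this
)

# Chat-style input: one user turn containing an image and a text prompt.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/sample.jpg"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ],
}]

out = pipe(text=messages, max_new_tokens=128)
print(out[0]["generated_text"])
```

If the model does not expose pipeline-compatible remote code, the equivalent `AutoProcessor`/`AutoModel` path with `trust_remote_code=True` would be the fallback; consult the model card for the authoritative instructions.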

Files changed (1)
  1. README.md +9 -2
README.md CHANGED
@@ -1,8 +1,15 @@
---
license: apache-2.0
+ library_name: transformers
+ pipeline_tag: image-text-to-text
---
+
# AndesVL-4B-Thinking
- AndesVL is a suite of mobile-optimized Multimodal Large Language Models (MLLMs) with **0.6B to 4B parameters**, built upon Qwen3's LLM and various visual encoders. Designed for efficient edge deployment, it achieves first-tier performance on diverse benchmarks, including those for text-rich tasks, reasoning tasks, Visual Question Answering (VQA), multi-image tasks, multilingual tasks, and GUI tasks. Its "1+N" LoRA architecture and QALFT framework facilitate efficient task adaptation and model compression, enabling a 6.7x peak decoding speedup and a 1.8 bits-per-weight compression ratio on mobile chips.
+ This repository hosts the AndesVL-4B-Thinking model, presented in the paper [AndesVL Technical Report: An Efficient Mobile-side Multimodal Large Language Model](https://huggingface.co/papers/2510.11496).
+
+ For the associated evaluation toolkit and code, see the [GitHub repository](https://github.com/OPPO-Mente-Lab/AndesVL_Evaluation).
+
+ AndesVL is a suite of mobile-optimized Multimodal Large Language Models (MLLMs) with **0.6B to 4B parameters**, built upon Qwen3's LLM and various visual encoders. Designed for efficient edge deployment, it achieves first-tier performance across a wide range of open-source benchmarks, including fields such as text-rich image understanding, reasoning and math, multi-image comprehension, general VQA, hallucination mitigation, multilingual understanding, and GUI-related tasks when compared with state-of-the-art models of a similar scale.

Detailed model sizes and components are provided below:

@@ -61,4 +68,4 @@ If you find our work helpful, feel free to give us a cite.
```

# Acknowledge
- We are very grateful for the efforts of the [Qwen](https://huggingface.co/Qwen), [AimV2](https://huggingface.co/apple/aimv2-large-patch14-224) and [Siglip 2](https://arxiv.org/abs/2502.14786) projects.
+ We are very grateful for the efforts of the [Qwen](https://huggingface.co/Qwen), [AimV2](https://huggingface.co/apple/aimv2-large-patch14-224) and [Siglip 2](https://arxiv.org/abs/2502.14786) projects.