---
library_name: transformers
tags:
  - mlx
base_model:
  - Qwen/Qwen2.5-VL-3B-Instruct
---

📸 Adasah - Qwen 2.5 VL 3B (4-bit) Fine-tuned on Arabic Photo Q&A and Descriptions

Demo Video:

Adasah - iOS App

App Store

Warning: the app downloads a ~2 GB model, so the first launch takes some time.

Forked from HuggingSnap

https://github.com/huggingface/HuggingSnap

Model Name: Adasah
Base Model: Qwen/Qwen2.5-VL-3B-Instruct
Quantization: 4-bit (MLX)
Platform: iOS (mobile-compatible)
Language: Arabic (translated from English)
Use Case: Arabic visual Q&A and photo description understanding


🧠 Model Overview

Adasah is a fine-tuned variant of the Qwen2.5-VL-3B-Instruct base model, optimized for Arabic-language understanding in visual contexts. The model was trained on a custom dataset of English visual question-answer pairs and photo descriptions translated into Arabic, enabling it to:

  • Answer Arabic questions about images
  • Generate Arabic descriptions of visual content
  • Serve as a mobile assistant for Arabic-speaking users

The model is quantized to 4-bit precision for smooth on-device performance in iOS apps.

📱 Mobile Optimization

The model is quantized to 4-bit precision to make it lightweight and suitable for on-device inference (a conversion sketch follows the list below) in:

  • iOS apps
  • Offline-first mobile experiences
  • Arabic language educational or accessibility tools
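For reference, a 4-bit MLX quantization like this one can typically be produced from the base checkpoint with mlx-vlm's converter. The command below is a sketch, not the exact recipe used for this model; the output path is illustrative, and flags may differ across mlx-vlm versions (check python -m mlx_vlm.convert --help):

python -m mlx_vlm.convert --hf-path Qwen/Qwen2.5-VL-3B-Instruct --mlx-path ./adasah-4bit -q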

Use with mlx

pip install -U mlx-vlm
python -m mlx_vlm.generate \
  --model NAMAA-Space/Adasah-QA-0.1-3B-Instruct-merged-4bits \
  --max-tokens 100 \
  --temperature 0.0 \
  --prompt "Describe this image." \
  --image <path_to_image>
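
For programmatic use, the model can also be loaded through the mlx-vlm Python API. The snippet below is a minimal sketch following the usage pattern from the mlx-vlm README; the Arabic prompt (it means "Describe this image.") and the local image path are illustrative:

from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the 4-bit model, its processor, and its config from the Hub
model_path = "NAMAA-Space/Adasah-QA-0.1-3B-Instruct-merged-4bits"
model, processor = load(model_path)
config = load_config(model_path)

# An Arabic request: "Describe this image."
prompt = "صف هذه الصورة."
image = ["photo.jpg"]  # illustrative local path

# Wrap the prompt in the model's chat template, declaring one image
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(image))

# Generate the Arabic description
output = generate(model, processor, formatted_prompt, image, verbose=False)
print(output)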