Quantized MedGemma-4B-IT Models
This repository provides quantized GGUF versions of the google/medgemma-4b-it model. These 4-bit and 5-bit quantized variants retain the original model’s strengths in multimodal medical reasoning, while reducing memory and compute requirements—ideal for efficient inference on resource-constrained devices.
Model Overview
- Original Model: google/medgemma-4b-it
- Quantized Versions:
  - Q4_K_M (4-bit quantization)
  - Q5_K_M (5-bit quantization)
- Architecture: Decoder-only transformer with SigLIP vision encoder
- Base Model: google/gemma-3-4b-pt
- Modalities: Text + Image (Multimodal)
- Developer: Google
- License: Health AI Developer Foundations License
- Language: English (medical domain)
Quantization Details
Q4_K_M Version
- Approximately 75% size reduction
- Lower memory footprint (~2.3 GB)
- Best suited for deployment on edge devices or low-resource GPUs
- Slight performance degradation in complex reasoning scenarios
Q5_K_M Version
- Approximately 69% size reduction
- Higher fidelity (~2.6 GB)
- Better performance retention, recommended when quality is a priority
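For reference, GGUF quantizations like these are typically produced with llama.cpp's own tooling. The sketch below illustrates that workflow under assumptions about script names, paths, and output filenames; it is not necessarily the exact procedure used to build these files.

# Convert the original Hugging Face checkpoint to a full-precision GGUF file
python convert_hf_to_gguf.py /path/to/medgemma-4b-it --outtype f16 --outfile medgemma-4b-it-f16.gguf
# Quantize to Q4_K_M (use Q5_K_M instead for the 5-bit variant)
./llama-quantize medgemma-4b-it-f16.gguf medgemma-4b-it-Q4_K_M.gguf Q4_K_M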
Key Features
- Expert-level medical image understanding and report generation
- Strong performance on radiology, dermatology, pathology, and ophthalmology benchmarks
- Multimodal instruction following and clinical question answering
- Pretrained with SigLIP vision encoder + medical text encoder
Usage
Below are a few code snippets to help you get started quickly with running the model.
llama.cpp (text-only)
./llama-cli -hf SandLogicTechnologies/MedGemma-4B-IT-GGUF -p "What are the symptoms of diabetes?"
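If you prefer to download a specific quantized file and run it from a local path (for example, to choose between the Q4_K_M and Q5_K_M variants), something like the following should work. The GGUF filename below is an assumption; check the repository's file list for the exact name.

# Download one quantized file from the Hub (filename is illustrative)
huggingface-cli download SandLogicTechnologies/MedGemma-4B-IT-GGUF medgemma-4b-it-Q4_K_M.gguf --local-dir .
# Run it from the local path
./llama-cli -m medgemma-4b-it-Q4_K_M.gguf -p "What are the symptoms of diabetes?"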
llama.cpp (image input)
./llama-gemma3-cli -hf SandLogicTechnologies/MedGemma-4B-IT-GGUF -p "Describe this image." --image ~/Downloads/xray_image.png
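When running from local files instead of the -hf shortcut, multimodal inference typically requires both the language-model GGUF and a multimodal projector (mmproj) GGUF. The sketch below assumes the repository provides an mmproj file and that your llama.cpp build accepts the --mmproj flag; filenames are illustrative.

# Image input from local files: pass both the model and the vision projector
./llama-gemma3-cli -m medgemma-4b-it-Q4_K_M.gguf --mmproj mmproj-medgemma-4b-it-f16.gguf \
  -p "Describe this image." --image ~/Downloads/xray_image.png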
Model Data
Dataset Overview
The original MedGemma-4B-IT model is built on top of the Gemma architecture and trained with a strong focus on medical multimodal data:
Image Encoder: A SigLIP-based vision model pre-trained on de-identified medical images, including:
- Chest X-rays
- Dermatology images
- Ophthalmology fundus photos
- Histopathology slides
LLM Component: Trained on diverse medical text datasets related to the above imaging domains, including clinical reports, QA datasets, and biomedical literature.
This combination enables MedGemma to perform visual-text reasoning and clinical instruction following in healthcare applications.
Recommended Use Cases
These quantized models are optimized for efficient inference while preserving core MedGemma capabilities. Suggested use cases include:
- Medical visual question answering (VQA): analyze and interpret chest X-rays, skin lesions, fundus images, etc.
- Chatbot and assistant prototypes: build interactive healthcare chat systems with vision-language capabilities.
- Research & fine-tuning: serve as a lightweight base for further task-specific tuning in healthcare AI.
- Low-resource deployment: run multimodal reasoning models on CPUs, edge devices, and lightweight GPUs.
- Rapid prototyping: ideal for experimentation, prototyping, and integration in clinical R&D.
⚠️ Note: These models are not intended for direct clinical use without appropriate validation.
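For the chatbot and assistant prototyping use case above, llama.cpp's conversation mode gives a quick interactive loop. A minimal sketch, assuming a recent llama-cli build where -cnv enables conversation mode and -p is treated as the system prompt (the prompt text is illustrative):

# Interactive chat session for prototyping
./llama-cli -hf SandLogicTechnologies/MedGemma-4B-IT-GGUF -cnv \
  -p "You are a careful medical assistant. Answer concisely and note any uncertainty."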
Acknowledgments
These quantized models are based on the original work by Google and the MedGemma development team.
Special thanks to:
- The Google DeepMind team for developing and releasing the MedGemma-4B-IT model.
- Georgi Gerganov and the entire llama.cpp open-source community for enabling efficient model quantization and inference via the GGUF format.
Contact
For any inquiries or support, please contact us at [email protected] or visit our Website.