# Medical Gemma 3 Model (1B): KerasNLP + LoRA Fine-Tuning
This repository hosts a fine-tuned version of the Gemma 3 1B model, adapted with Low-Rank Adaptation (LoRA) via KerasNLP on a subset of medical question answering (Q&A) data.
## Fine-Tuning Details
- Base Model: Gemma 3 (1B parameters)
- Framework: TensorFlow / Keras (saved in `.keras` format)
- Library: KerasNLP
- Technique: Parameter-efficient fine-tuning with LoRA (a typical setup is sketched after this list)
- Target Domain: Medical Question Answering
- Data Used: A subset of Medical QA data
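
For reference, a LoRA fine-tune of this kind is usually set up roughly as below. This is a minimal sketch rather than the exact training script: the `Gemma3CausalLM` class and `"gemma3_1b"` preset come from KerasHub (the successor to KerasNLP), and the LoRA rank, hyperparameters, and `medical_qa_texts` data are assumptions.

```python
# Minimal sketch of a LoRA fine-tune on medical Q&A text (assumed settings).
import keras
import keras_hub

# Placeholder training data; the real dataset and prompt template are not published here.
medical_qa_texts = [
    "Question: What are common symptoms of anemia?\nAnswer: Fatigue, pallor, and shortness of breath.",
]

gemma_lm = keras_hub.models.Gemma3CausalLM.from_preset("gemma3_1b")  # preset name is an assumption
gemma_lm.backbone.enable_lora(rank=4)        # inject low-rank adapter weights (rank is an assumption)
gemma_lm.preprocessor.sequence_length = 512  # keep memory use modest

gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.AdamW(learning_rate=5e-5),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
gemma_lm.fit(medical_qa_texts, epochs=1, batch_size=1)
gemma_lm.save("medical_gemma3_1b.keras")  # produces the .keras artifact hosted in this repo
```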
## How to Load the Model
```python
import keras_nlp  # import registers the Gemma custom layers so they can be deserialized
from huggingface_hub import hf_hub_download
from keras.models import load_model

# load_model cannot fetch remote URLs, so download the .keras file first
model = load_model(hf_hub_download("gimmy256/medical-gemma3_1b", "medical_gemma3_1b.keras"))
```
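
Once loaded, the model can be prompted like any KerasNLP causal LM. A minimal usage sketch, assuming the saved object exposes the standard `generate` method; the prompt template below is illustrative, not necessarily the one used during fine-tuning:

```python
# Assumes `model` is the causal LM loaded above and exposes .generate().
prompt = "Question: What are the common symptoms of type 2 diabetes?\nAnswer:"
print(model.generate(prompt, max_length=256))
```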
## Model Tree
- Base model: [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt)