# aimeri/gemma-3-27b-it-abliterated-mlx-vlm-4Bit
The model aimeri/gemma-3-27b-it-abliterated-mlx-vlm-4Bit was converted to MLX format from huihui-ai/gemma-3-27b-it-abliterated using mlx-vlm version 0.1.21.
## Use with mlx-vlm

```bash
pip install mlx-vlm
```
```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the quantized model, its processor, and its config
model_path = "aimeri/gemma-3-27b-it-abliterated-mlx-vlm-4Bit"
model, processor = load(model_path)
config = load_config(model_path)

# Prepare input: one image URL and a text prompt
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Wrap the prompt in the model's chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=len(image)
)

# Generate and print the model's description of the image
output = generate(model, processor, formatted_prompt, image, verbose=False)
print(output)
```
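For intuition, `apply_chat_template` wraps the prompt in the model's chat markup before generation. The sketch below is illustrative only: the token names (`<start_of_turn>`, `<end_of_turn>`, `<start_of_image>`) follow Gemma 3's published chat format, but the exact string mlx-vlm builds (e.g. image-token placement) may differ, and `format_gemma_chat` is a hypothetical helper, not part of the library.

```python
# Rough sketch of Gemma-3-style chat formatting; token placement is an
# assumption based on Gemma 3's published chat format, not mlx-vlm internals.
def format_gemma_chat(prompt: str, num_images: int = 0) -> str:
    # One image placeholder token per attached image
    image_tokens = "<start_of_image>" * num_images
    return (
        "<start_of_turn>user\n"
        f"{image_tokens}{prompt}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

formatted = format_gemma_chat("Describe this image.", num_images=1)
print(formatted)
```

The trailing `<start_of_turn>model\n` primes the model to begin its reply, which is why `generate` is called on the formatted prompt rather than the raw text.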
## Model tree for aimeri/gemma-3-27b-it-abliterated-mlx-vlm-4Bit

- Base model: google/gemma-3-27b-pt
- Finetuned: google/gemma-3-27b-it
- Finetuned: huihui-ai/gemma-3-27b-it-abliterated