---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
  agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-12b-it
tags:
- abliterated
- uncensored
---

# huihui-ai/gemma-3-12b-it-abliterated

This is an uncensored version of [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about the technique).
It is a crude, proof-of-concept implementation of refusal removal that works without TransformerLens.

Only the text part of the model was processed; the image part was left untouched.

The abliterated model will no longer respond with "I'm programmed to be a safe and helpful AI assistant. I cannot fulfill your request to ..."
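
For intuition, here is a minimal sketch of the directional-ablation ("abliteration") idea: estimate a refusal direction as the difference of mean hidden states on harmful vs. harmless prompts, then project that direction out of weights that write into the residual stream. The helper names and toy tensors below are illustrative assumptions, not the exact script used to produce this model.

```python
# Illustrative sketch of abliteration (directional ablation). Hypothetical
# helpers; the linked script differs in how activations are collected.
import torch

def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    """Difference-of-means 'refusal direction' from per-prompt hidden states
    of shape (num_prompts, hidden_size), taken at one chosen layer."""
    d = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return d / d.norm()

def ablate_direction(weight: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    """Project d out of a matrix that writes into the residual stream:
    W' = (I - d d^T) W, where weight has shape (hidden_size, in_features)."""
    return weight - torch.outer(d, d @ weight)

# Toy demo with random stand-ins (hidden_size=8, 16 prompts per set).
torch.manual_seed(0)
d = refusal_direction(torch.randn(16, 8), torch.randn(16, 8))
W = torch.randn(8, 4)
W_ablated = ablate_direction(W, d)
# After ablation, the matrix can no longer write along d:
print(torch.allclose(d @ W_ablated, torch.zeros(4), atol=1e-5))  # True
```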

## Use with ollama

Ollama supports multimodal (vision) input. gemma3-abliterated defaults to f16 rather than Q4_K_M: the Q4_K_M quantization noticeably degrades output quality, so it is not provided.

All new versions of gemma3-abliterated have been released; please re-download and test.

You can use [huihui_ai/gemma3-abliterated](https://ollama.com/huihui_ai/gemma3-abliterated) directly:

```
ollama run huihui_ai/gemma3-abliterated:12b
```
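
Since the image part of the model is untouched, you can also pass an image from the Ollama CLI by including a file path in the prompt (the path below is a placeholder):

```
ollama run huihui_ai/gemma3-abliterated:12b "Describe this image: ./example.jpg"
```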

## Usage

You can use this model in your applications by loading it with Hugging Face's `transformers` library:

```python
# pip install accelerate

from transformers import AutoProcessor, Gemma3ForConditionalGeneration
import torch

model_id = "huihui-ai/gemma-3-12b-it-abliterated"

# Load in bfloat16 so the model's dtype matches the bfloat16 inputs prepared below.
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16
).eval()

processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "Describe this image in detail."}
        ]
    }
]

# The processor fetches the image, applies the chat template, and tokenizes everything.
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]  # drop the prompt, keep only new tokens

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)

# **Overall Impression:** The image is a close-up shot of a vibrant garden scene,
# focusing on a cluster of pink cosmos flowers and a busy bumblebee.
# It has a slightly soft, natural feel, likely captured in daylight.
```
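
Text-only chat works the same way with this API; here is a short continuation of the snippet above (the prompt string is just an example):

```python
# Reuses `model` and `processor` from the snippet above; text-only turn, no image.
messages = [
    {"role": "user", "content": [{"type": "text", "text": "Explain abliteration in one sentence."}]}
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

with torch.inference_mode():
    out = model.generate(**inputs, max_new_tokens=64, do_sample=False)

print(processor.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```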

### Donation

If you like it, please click "like" and follow us for more updates.
You can follow [x.com/support_huihui](https://x.com/support_huihui) to get the latest model information from huihui.ai.

##### Your donation helps us continue our development and improvement; a cup of coffee can do it.
- bitcoin:
```
bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
```