---
license: apache-2.0
language:
- en
- zh
- th
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
tags:
- text-generation-inference
- uncensored
- image-captioning
- vlm
- visual-understanding
- caption
- image-to-text
pipeline_tag: image-text-to-text
library_name: transformers
datasets:
- prithivMLmods/blip3o-caption-mini-arrow
- prithivMLmods/Caption3o-Opt-v2
---

# **Qwen2.5-VL-7B-Abliterated-Caption-it**
> The **Qwen2.5-VL-7B-Abliterated-Caption-it** model is a fine-tuned version of **Qwen2.5-VL-7B-Instruct**, tailored for **Abliterated Captioning** / **Uncensored Image Captioning**. It is designed to generate highly detailed, descriptive captions across a broad range of visual categories, including images with complex, sensitive, or nuanced content, at varying aspect ratios and resolutions.
# Key Highlights
* **Abliterated / Uncensored Captioning**: Fine-tuned to bypass common content filters while preserving factual and descriptive richness across diverse visual categories.
* **High-Fidelity Descriptions**: Generates comprehensive captions for general, artistic, technical, abstract, and low-context images.
* **Robust Across Aspect Ratios**: Capable of accurately captioning images with wide, tall, square, and irregular dimensions.
* **Variable Detail Control**: Produces both high-level summaries and fine-grained descriptions as needed.
* **Foundation on Qwen2.5-VL Architecture**: Leverages the strengths of the Qwen2.5-VL-7B multimodal model for visual reasoning, comprehension, and instruction-following.
* **Multilingual Output Capability**: Supports multilingual descriptions (English by default), steerable through prompt engineering; illustrative prompts for both detail level and output language are sketched below.
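
As a minimal sketch of the last two points: detail level and output language are steered through ordinary natural-language instructions in the user prompt. The strings below are illustrative examples, not special control tokens.

```python
# Illustrative prompt strings (assumptions, not special control tokens);
# the model follows plain natural-language instructions for detail and language.
summary_prompt = "Give a one-sentence summary of this image."
detailed_prompt = (
    "Describe this image in exhaustive detail, covering subjects, colors, "
    "textures, lighting, and spatial layout."
)
# Request a Chinese-language caption ("Please describe this image in detail in Chinese.")
chinese_prompt = "请用中文详细描述这张图片。"
```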
# Training Details
This model was fine-tuned using the following datasets:
* **[prithivMLmods/blip3o-caption-mini-arrow](https://huggingface.co/datasets/prithivMLmods/blip3o-caption-mini-arrow)**
* **[prithivMLmods/Caption3o-Opt-v2](https://huggingface.co/datasets/prithivMLmods/Caption3o-Opt-v2)**
* **Private/unlisted datasets** curated for uncensored and domain-specific image captioning tasks.

The training objective focused on improving performance in unconstrained, descriptive image captioning, especially for edge cases commonly filtered out of standard captioning benchmarks.
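
The two public datasets can be inspected directly with the `datasets` library. A minimal sketch follows; the `"train"` split name and record layout are assumptions, so check `ds.features` for the actual schema:

```python
from datasets import load_dataset

# Load one of the public captioning datasets used for fine-tuning.
# The "train" split and record layout are assumptions; inspect ds.features.
ds = load_dataset("prithivMLmods/blip3o-caption-mini-arrow", split="train")
print(ds.features)  # column names and types
print(ds[0])        # one image-caption record
```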
# Quick Start with Transformers
> [!note]
> Instruction query: "Provide a detailed caption for the image."
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the model with automatic dtype selection and device placement
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Qwen2.5-VL-7B-Abliterated-Caption-it",
    torch_dtype="auto",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("prithivMLmods/Qwen2.5-VL-7B-Abliterated-Caption-it")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image in detail."},
        ],
    }
]

# Build the chat-formatted prompt and extract the vision inputs
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Generate, then strip the prompt tokens so only the caption is decoded
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
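
The same pipeline also works with local files: following the `qwen_vl_utils` input conventions, the image entry in `messages` can be a `file://` URI (or a `PIL.Image` object) instead of an HTTP URL. The path below is a placeholder:

```python
# Local-image variant: swap the HTTP URL for a file:// URI.
# The path is a placeholder; qwen_vl_utils also accepts PIL.Image objects.
messages[0]["content"][0]["image"] = "file:///path/to/your/image.jpg"
```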
# Intended Use
This model is suited for:
* Generating detailed and unfiltered image captions for general-purpose or artistic datasets.
* Content moderation research, red-teaming, and generative safety evaluations.
* Enabling descriptive captioning for visual datasets typically excluded from mainstream models.
* Use in creative applications (e.g., storytelling, art generation) that benefit from rich descriptive captions.
* Captioning for non-standard aspect ratios and stylized visual content.
# Limitations
* May produce explicit, sensitive, or offensive descriptions depending on image content and prompts.
* Not suitable for deployment in production systems requiring content filtering or moderation.
* Can exhibit variability in caption tone or style depending on input prompt phrasing.
* Accuracy for unfamiliar or synthetic visual styles may vary.