
Qwen3-VL-8B-Abliterated-Caption-it

The Qwen3-VL-8B-Abliterated-Caption-it model is a fine-tuned version of Qwen3-VL-8B-Instruct, tailored for Abliterated Captioning / Uncensored Image Captioning. This variant is designed to generate highly detailed and descriptive captions across a broad range of visual categories, including images with complex, sensitive, or nuanced content, at varying aspect ratios and resolutions.

Key Highlights

  • Abliterated / Uncensored Captioning: Fine-tuned to relax the base model's refusal behavior while preserving factual and descriptive richness across diverse visual categories.
  • High-Fidelity Descriptions: Generates comprehensive captions for general, artistic, technical, abstract, and low-context images.
  • Robust Across Aspect Ratios: Capable of accurately captioning images with wide, tall, square, and irregular dimensions.
  • Variable Detail Control: Can produce either high-level summaries or fine-grained descriptions, as prompted.
  • Foundation on Qwen3-VL Architecture: Leverages the strengths of the Qwen3-VL-8B multimodal model for visual reasoning, comprehension, and instruction-following.
  • Multilingual Output Capability: Supports multilingual descriptions (English by default), steerable via prompt engineering, as sketched after this list.
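
Both the level of detail and the output language are steered purely through the instruction text. The snippet below is a minimal sketch of such prompt variants; the exact wording is illustrative, not prescribed by the model card, and can be substituted for the "text" entry in the Quick Start example further down.

# Illustrative prompt variants (wording is an assumption, not prescribed by the model)
caption_prompts = {
    "summary": "Give a one-sentence caption for the image.",
    "detailed": "Describe this image in exhaustive detail.",
    "german": "Describe this image in detail, in German.",
}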

Quick Start with Transformers

Instruction Query: Provide a detailed caption for the image

from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the fine-tuned checkpoint with automatic dtype selection and device placement
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Qwen3-VL-8B-Abliterated-Caption-it", torch_dtype="auto", device_map="auto"
)

processor = AutoProcessor.from_pretrained("prithivMLmods/Qwen3-VL-8B-Abliterated-Caption-it")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image in detail."},
        ],
    }
]

# Build the chat-formatted prompt and extract the image/video inputs from the messages
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)  # move inputs onto the same device as the model

# Generate, then strip the prompt tokens so only the new caption is decoded
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
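
The call above caps output at 128 new tokens. For longer, fine-grained captions, the token budget and sampling behavior of generate can be adjusted; the values below are illustrative assumptions rather than tuned recommendations.

# Sketch: longer, sampled generation (parameter values are illustrative)
generated_ids = model.generate(
    **inputs,
    max_new_tokens=512,  # more room for fine-grained descriptions
    do_sample=True,      # sampling yields more varied phrasing
    temperature=0.7,
    top_p=0.9,
)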

Intended Use

This model is suited for:

  • Generating detailed and unfiltered image captions for general-purpose or artistic datasets.
  • Content moderation research, red-teaming, and generative safety evaluations.
  • Enabling descriptive captioning for visual datasets typically excluded from mainstream models.
  • Creative applications (e.g., storytelling, art generation) that benefit from rich descriptive captions.
  • Captioning for non-standard aspect ratios and stylized visual content (see the resolution sketch after this list).
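
For unusually wide, tall, or high-resolution inputs, the number of visual tokens can be bounded when the processor is loaded. This is a minimal sketch assuming the processor accepts the same min_pixels / max_pixels arguments as earlier Qwen-VL releases; verify against the processor configuration before relying on it.

# Sketch: bounding the visual token budget (assumes Qwen-VL-style min_pixels / max_pixels support)
min_pixels = 256 * 28 * 28    # lower bound on image tokens per input
max_pixels = 1280 * 28 * 28   # upper bound keeps memory use predictable
processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Qwen3-VL-8B-Abliterated-Caption-it",
    min_pixels=min_pixels,
    max_pixels=max_pixels,
)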

Limitations

  • May produce explicit, sensitive, or offensive descriptions depending on image content and prompts.
  • Not suitable for deployment in production systems requiring content filtering or moderation.
  • Can exhibit variability in caption tone or style depending on input prompt phrasing.
  • Accuracy for unfamiliar or synthetic visual styles may vary.