Behemoth-3B-070225-post0.1
The Behemoth-3B-070225-post0.1 model is a fine-tuned version of Qwen2.5-VL-3B-Instruct, optimized for detailed image captioning, OCR tasks, and chain-of-thought reasoning. Built on the Qwen2.5-VL architecture, it is trained on the LLaVA-CoT-o1-Instruct dataset (50k samples) to strengthen image analysis and detailed reasoning.
Key Enhancements
- Detailed Image Captioning: Generates comprehensive, contextually rich descriptions of visual content with fine-grained detail recognition.
- Enhanced OCR Performance: Extracts and recognizes text from images with high accuracy across varied fonts, layouts, and image qualities.
- Chain-of-Thought Reasoning: Provides step-by-step logical reasoning for complex visual analysis tasks, breaking problems into manageable components.
- Superior Visual Understanding: Optimized for precise interpretation of visual elements, spatial relationships, and contextual information within images.
- Instruction Following: Follows detailed instructions for specific image analysis tasks while keeping the reasoning transparent.
- Competitive Vision-Task Performance: Achieves competitive results on visual question answering, image captioning, and OCR benchmarks.
- Efficient 3B-Parameter Model: Delivers strong performance while remaining computationally efficient and broadly accessible.
- Multi-Modal Reasoning: Combines visual perception with logical reasoning chains for comprehensive analysis.
Quick Start with Transformers
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the model with automatic dtype selection and device placement
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Behemoth-3B-070225-post0.1", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("prithivMLmods/Behemoth-3B-070225-post0.1")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Provide a detailed caption for this image and explain your reasoning step by step."},
        ],
    }
]

# Build the chat-formatted prompt and collect the image/video inputs
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)

# Generate, then strip the prompt tokens from the output before decoding
generated_ids = model.generate(**inputs, max_new_tokens=256)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
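The same pipeline covers the model's other focus areas by changing only the message content. A minimal sketch of an OCR-style request, reusing the model and processor loaded above; the file path is a placeholder, and process_vision_info also accepts local paths in addition to URLs:

messages = [
    {
        "role": "user",
        "content": [
            # Placeholder local path; URLs and base64 images work the same way
            {"type": "image", "image": "path/to/scanned_document.png"},
            {"type": "text", "text": "Extract all text visible in this image, preserving the reading order."},
        ],
    }
]

The remaining steps (chat templating, processing, generation, and decoding) are identical to the captioning example.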
Intended Use
This model is intended for:
- Detailed Image Captioning: Generating comprehensive, nuanced descriptions of visual content for accessibility, content creation, and analysis purposes.
- OCR Applications: High-accuracy text extraction from images, documents, signs, and handwritten content.
- Chain-of-Thought Visual Analysis: Providing step-by-step reasoning for complex visual interpretation tasks.
- Educational Content Creation: Generating detailed explanations of visual materials with logical reasoning chains.
- Content Accessibility: Creating detailed alt-text and descriptions for visually impaired users.
- Visual Question Answering: Answering complex questions about images with detailed reasoning processes (a reusable helper for this pattern is sketched after this list).
- Document Analysis: Processing and understanding visual documents with both text extraction and content comprehension.
- Research and Analysis: Supporting academic and professional research requiring detailed visual analysis with transparent reasoning.
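For interactive use cases such as visual question answering, it can help to wrap the Quick Start steps in a small function. A minimal sketch, assuming the model and processor are already loaded as shown earlier; ask_image is a hypothetical helper name, not part of the model's API:

def ask_image(image, question, max_new_tokens=256):
    """Run one image + text query through the model and return the decoded answer."""
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image},  # URL or local path
                {"type": "text", "text": question},
            ],
        }
    ]
    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text], images=image_inputs, videos=video_inputs,
        padding=True, return_tensors="pt",
    ).to(model.device)
    generated_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Drop the prompt tokens before decoding
    trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
    return processor.batch_decode(trimmed, skip_special_tokens=True)[0]

# Example: chain-of-thought visual question answering
print(ask_image(
    "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
    "How many people are in this image? Reason step by step.",
))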
Base Training Details
- Base Model: Qwen2.5-VL-3B-Instruct
- Training Dataset: LLaVA-CoT-o1-Instruct (50k samples)
- Specialized Training Focus: Chain-of-thought reasoning, detailed captioning, and OCR tasks
- Model Size: 3 billion parameters for efficient deployment
Limitations
- Computational Requirements: While more efficient than larger models, it still requires adequate GPU memory for optimal performance (a reduced-precision loading sketch follows this list).
- Image Quality Sensitivity: Performance may degrade on extremely low-quality, heavily occluded, or severely distorted images.
- Processing Speed: Chain-of-thought reasoning may result in longer response times compared to direct answer models.
- Language Coverage: Primarily optimized for English language tasks, with variable performance on other languages.
- Context Length: Limited by the base model's context window for very long reasoning chains.
- Hallucination Risk: May occasionally generate plausible but incorrect details, especially in ambiguous visual scenarios.
- Resource Constraints: Not optimized for real-time applications on edge devices or low-resource environments.
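To ease the memory requirements noted above, the model can be loaded in reduced precision. A minimal sketch using 4-bit quantization, assuming a bitsandbytes installation; this trades some accuracy for a smaller footprint and is an assumption on our part, not an officially validated configuration for this model:

import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor, BitsAndBytesConfig

# 4-bit NF4 quantization to shrink GPU memory use (requires bitsandbytes)
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Behemoth-3B-070225-post0.1",
    quantization_config=quant_config,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("prithivMLmods/Behemoth-3B-070225-post0.1")

The rest of the inference code is unchanged; expect some quality degradation relative to full-precision weights.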