ClipTagger-12b
Model Description
ClipTagger-12b is a 12-billion parameter vision-language model (VLM) designed for video understanding at massive scale. Developed by Inference.net in collaboration with Grass, this model was created to meet the demanding requirements of trillion-scale video frame captioning workloads.
ClipTagger-12b matches or exceeds the performance of GPT-4.1 and Claude 4 Sonnet while costing roughly 15x less per generation.
The model generates structured, schema-consistent JSON outputs for every video frame, making it ideal for building searchable video databases, content moderation systems, and accessibility tools. It maintains temporal consistency across frames while delivering frontier-quality performance at a fraction of the cost of closed-source alternatives.
Key Features
- Frontier-quality performance - Comparable to top closed models in captioning quality
- Production-ready - Battle-tested on trillion-scale video frame captioning workloads
- Schema-consistent JSON - Reliable structured output for every frame
- Cost-efficient - Optimized for high-throughput inference
- Open source - Build and deploy without proprietary API dependencies
Architecture
ClipTagger-12b is based on the Gemma-12B architecture and has been optimized with FP8 quantization for maximum throughput on modern GPUs. The model is specifically tuned for RTX 40-series and H100 GPUs, leveraging native FP8 support for efficient inference.
Technical Specifications
- Parameters: 12 billion
- Base Architecture: Gemma-12B
- Quantization: FP8 (no quality loss vs bf16)
- Input: Single video frame per request
- Output: Structured JSON with fixed schema
- Supported Formats: JPEG, PNG, WebP, GIF
- Max Image Size: 1MB
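Before sending a request, it is worth enforcing these input limits client-side. The sketch below (plain Python, standard library only) is one way to do that: it checks the frame's MIME type and size against the limits listed above and returns a base64 data URL, the form most OpenAI-compatible vision endpoints accept. The helper name `frame_to_data_url` is ours, not part of any official SDK.

```python
import base64
import mimetypes
import os

MAX_IMAGE_BYTES = 1_000_000  # 1 MB limit from the specifications above
ALLOWED_TYPES = {"image/jpeg", "image/png", "image/webp", "image/gif"}

# Older Python versions may not know .webp; register it explicitly.
mimetypes.add_type("image/webp", ".webp")

def frame_to_data_url(path: str) -> str:
    """Validate a single video frame and return it as a base64 data URL."""
    mime, _ = mimetypes.guess_type(path)
    if mime not in ALLOWED_TYPES:
        raise ValueError(f"Unsupported image format: {mime}")
    if os.path.getsize(path) > MAX_IMAGE_BYTES:
        raise ValueError("Frame exceeds the 1 MB size limit")
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```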
Training
The model was trained on 1 million carefully curated single-frame samples from publicly available video data. Training employed knowledge distillation from a high-quality teacher model to ensure consistent, accurate outputs while maintaining the ability to generalize across diverse video content types.
Training Process
- Dataset Size: 1M video frames
- Training Method: Teacher-student distillation
- Data Source: Publicly available video content
- Focus: Single-frame understanding with temporal awareness
Benchmarks
ClipTagger-12b is competitive with the leading closed-source models across all major evaluation metrics. Despite being open-source and significantly more cost-effective, it outperforms Claude 4 Sonnet on every metric and matches GPT-4.1 in overall quality.
Performance metrics on our internal evaluation set:
| Model | Avg Judge Score | ROUGE-1 | ROUGE-2 | ROUGE-L | BLEU |
|---|---|---|---|---|---|
| ClipTagger-12b | 3.53 | 0.674 | 0.404 | 0.520 | 0.267 |
| Claude 4 Sonnet | 3.16 | 0.463 | 0.179 | 0.281 | 0.060 |
| GPT-4.1 | 3.64 | 0.581 | 0.260 | 0.376 | 0.119 |
We used Gemini-2.5-Pro as the judge model; it rates ClipTagger-12b roughly on par with GPT-4.1 and ahead of Claude 4 Sonnet.

FP8 quantization showed no measurable quality degradation compared to bf16 precision.
Cost Comparison
ClipTagger-12b delivers frontier-quality performance at a fraction of the cost of closed-source alternatives. Based on typical usage patterns (700 input tokens and 250 output tokens per generation), here's how the costs compare:

| Model | Input Cost/MTok | Output Cost/MTok | Cost per 1M Generations | Cost per Generation |
|---|---|---|---|---|
| ClipTagger-12b | $0.30 | $0.50 | $335 | $0.000335 |
| GPT-4.1 | $3.00 | $12.00 | $5,100 | $0.0051 |
| Claude 4 Sonnet | $3.00 | $15.00 | $5,850 | $0.00585 |

ClipTagger-12b offers roughly 15x cost savings compared to GPT-4.1 and 17x compared to Claude 4 Sonnet, while maintaining comparable quality metrics.
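The per-generation figures follow directly from the listed token prices and the 700-input / 250-output token profile above; the short Python check below reproduces them.

```python
# Reproduce the cost-per-generation figures from the table above.
PRICES = {  # (input $/MTok, output $/MTok)
    "ClipTagger-12b": (0.30, 0.50),
    "GPT-4.1": (3.00, 12.00),
    "Claude 4 Sonnet": (3.00, 15.00),
}
INPUT_TOKENS, OUTPUT_TOKENS = 700, 250  # typical per-generation token profile

for model, (in_price, out_price) in PRICES.items():
    per_gen = (INPUT_TOKENS * in_price + OUTPUT_TOKENS * out_price) / 1_000_000
    print(f"{model}: ${per_gen:.6f} per generation, ${per_gen * 1_000_000:,.0f} per 1M generations")
```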
Usage
API Access
For production deployments, we recommend using our managed API service which includes advanced features like batch processing, webhooks, and automatic scaling:
Run ClipTagger-12b via Inference.net API →
Required Prompts
The model requires specific system and user prompts for optimal performance. Use these prompts exactly as shown:
System Prompt
You are an image annotation API trained to analyze YouTube video keyframes. You will be given instructions on the output format, what to caption, and how to perform your job. Follow those instructions. For descriptions and summaries, provide them directly and do not lead them with 'This image shows' or 'This keyframe displays...', just get right into the details.
User Prompt
You are an image annotation API trained to analyze YouTube video keyframes. You must respond with a valid JSON object matching the exact structure below.
Your job is to extract detailed **factual elements directly visible** in the image. Do not speculate or interpret artistic intent, camera focus, or composition. Do not include phrases like "this appears to be", "this looks like", or anything about the image itself. Describe what **is physically present in the frame**, and nothing more.
Return JSON in this structure:
{
"description": "A detailed, factual account of what is visibly happening (4 sentences max). Only mention concrete elements or actions that are clearly shown. Do not include anything about how the image is styled, shot, or composed. Do not lead the description with something like 'This image shows' or 'this keyframe is...', just get right into the details.",
"objects": ["object1 with relevant visual details", "object2 with relevant visual details", ...],
"actions": ["action1 with participants and context", "action2 with participants and context", ...],
"environment": "Detailed factual description of the setting and atmosphere based on visible cues (e.g., interior of a classroom with fluorescent lighting, or outdoor forest path with snow-covered trees).",
"content_type": "The type of content it is, e.g. 'real-world footage', 'video game', 'animation', 'cartoon', 'CGI', 'VTuber', etc.",
"specific_style": "Specific genre, aesthetic, or platform style (e.g., anime, 3D animation, mobile gameplay, vlog, tutorial, news broadcast, etc.)",
"production_quality": "Visible production level: e.g., 'professional studio', 'amateur handheld', 'webcam recording', 'TV broadcast', etc.",
"summary": "One clear, comprehensive sentence summarizing the visual content of the frame. Like the description, get right to the point.",
"logos": ["logo1 with visual description", "logo2 with visual description", ...]
}
Rules:
- Be specific and literal. Focus on what is explicitly visible.
- Do NOT include interpretations of emotion, mood, or narrative unless it's visually explicit.
- No artistic or cinematic analysis.
- Always include the language of any text in the image if present as an object, e.g. "English text", "Japanese text", "Russian text", etc.
- Maximum 10 objects and 5 actions.
- Return an empty array for 'logos' if none are present.
- Always output strictly valid JSON with proper escaping.
- Output **only the JSON**, no extra text or explanation.
Inference Parameters
- Temperature: 0.1 (recommended for consistency)
- Max Tokens: 2000
- Response Format:
{"type": "json_object"}
Output Schema
The model outputs a fixed JSON structure with the following fields:
{
"description": "string - Detailed factual description (max 4 sentences)",
"objects": ["array of strings - Up to 10 objects with visual details"],
"actions": ["array of strings - Up to 5 actions with context"],
"environment": "string - Setting and atmosphere description",
"content_type": "string - Type of visual content",
"specific_style": "string - Genre or style classification",
"production_quality": "string - Production level assessment",
"summary": "string - Single sentence summary",
"logos": ["array of strings - Detected logos with descriptions"]
}
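Because the schema is fixed, each response can be validated before it is indexed. The minimal standard-library check below is one way to do it; the field names and the 10-object / 5-action limits come from the schema and prompt rules above, and `validate_annotation` is an illustrative helper, not part of any SDK.

```python
import json

REQUIRED_STRING_FIELDS = {"description", "environment", "content_type",
                          "specific_style", "production_quality", "summary"}
REQUIRED_LIST_FIELDS = {"objects": 10, "actions": 5, "logos": None}  # max lengths

def validate_annotation(raw: str) -> dict:
    """Parse a model response and check it against the fixed output schema."""
    data = json.loads(raw)
    for field in REQUIRED_STRING_FIELDS:
        if not isinstance(data.get(field), str):
            raise ValueError(f"Missing or non-string field: {field}")
    for field, max_len in REQUIRED_LIST_FIELDS.items():
        value = data.get(field)
        if not isinstance(value, list):
            raise ValueError(f"Missing or non-list field: {field}")
        if max_len is not None and len(value) > max_len:
            raise ValueError(f"{field} exceeds the limit of {max_len} items")
    return data
```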
Example Output
Given a nature scene with a wooden boardwalk through grassland:
{
"description": "A wooden boardwalk path extends from the foreground into the distance, cutting through a field of tall, vibrant green grass. The path is flanked on both sides by the dense grass. In the background, a line of trees is visible on the horizon under a blue sky with scattered white clouds.",
"objects": [
"Wooden boardwalk",
"Tall green grass",
"Blue sky",
"White clouds",
"Trees"
],
"actions": [],
"environment": "An outdoor, natural landscape, likely a marsh or wetland, on a clear day. The scene is characterized by a wooden boardwalk, lush green vegetation, and a bright blue sky with wispy clouds.",
"content_type": "real-world footage",
"specific_style": "landscape photography",
"production_quality": "professional photography",
"summary": "A wooden boardwalk path winds through a lush green field under a bright blue sky with scattered clouds.",
"logos": []
}
Use Cases
- Video Search & Discovery - Build searchable databases with structured metadata
- Content Moderation - Automated content analysis and categorization
- Accessibility - Generate consistent alt-text and scene descriptions
- Ad Verification - Track product visibility and brand appearances
- Video Analytics - Extract insights from large video collections
- Content Management - Automatic tagging and organization of video libraries
Interested in training your own model?
Contact us at [email protected] for a free consultation with our research team.
Support
- Documentation: docs.inference.net
- API Access: Get $25 in free credits when you sign up for an account
- Email: [email protected]
License
This model is released under the Apache-2.0 license, allowing for commercial use and modification with proper attribution.