---
language:
- en
license: apache-2.0
tags:
- VLM
- video-understanding
- image-captioning
- gemma
- json-mode
- structured-output
- video-analysis
base_model: google/gemma-12b
pipeline_tag: image-text-to-text
model-index:
- name: ClipTagger-12b
  results:
  - task:
      type: image-to-text
      name: Video Frame Captioning
    metrics:
    - name: Average Judge Score
      type: quality
      value: 3.53
    - name: ROUGE-1
      type: rouge-1
      value: 0.674
    - name: ROUGE-L
      type: rouge-l
      value: 0.520
    - name: BLEU
      type: bleu
      value: 0.267
---

# ClipTagger-12b

![ClipTagger-12b](./assets/grass-x-inference.png)

## Model Description

**ClipTagger-12b** is a 12-billion parameter vision-language model (VLM) designed for video understanding at massive scale. Developed by [Inference.net](https://inference.net) in collaboration with [Grass](https://grass.io), this model was created to meet the demanding requirements of trillion-scale video frame captioning workloads.

**ClipTagger-12b exceeds or matches the performance of GPT-4.1 and Claude 4 Sonnet, while costing 15x less per generation.**

The model generates structured, schema-consistent JSON outputs for every video frame, making it ideal for building searchable video databases, content moderation systems, and accessibility tools. It maintains temporal consistency across frames while delivering frontier-quality performance at a fraction of the cost of closed-source alternatives.

### Key Features

- **Frontier-quality performance** - Comparable to top closed models in captioning quality
- **Production-ready** - Battle-tested on trillion-scale video frame captioning workloads
- **Schema-consistent JSON** - Reliable structured output for every frame
- **Cost-efficient** - Optimized for high-throughput inference
- **Open source** - Build and deploy without proprietary API dependencies

## Architecture

ClipTagger-12b is based on the Gemma-12B architecture and has been optimized with FP8 quantization for maximum throughput on modern GPUs. The model is specifically tuned for RTX 40-series and H100 GPUs, leveraging native FP8 support for efficient inference.

### Technical Specifications
- **Parameters**: 12 billion
- **Base Architecture**: Gemma-12B
- **Quantization**: FP8 (no quality loss vs bf16)
- **Input**: Single video frame per request
- **Output**: Structured JSON with fixed schema
- **Supported Formats**: JPEG, PNG, WebP, GIF
- **Max Image Size**: 1MB

## Training

The model was trained on 1 million carefully curated single-frame samples from publicly available video data. Training employed knowledge distillation from a high-quality teacher model to ensure consistent, accurate outputs while maintaining the ability to generalize across diverse video content types.

### Training Process
- **Dataset Size**: 1M video frames
- **Training Method**: Teacher-student distillation
- **Data Source**: Publicly available video content
- **Focus**: Single-frame understanding with temporal awareness

## Benchmarks

ClipTagger-12b matches or exceeds the leading closed-source models on the major evaluation metrics. Despite being open-source and significantly more cost-effective, our model **outperforms Claude 4 Sonnet across every metric** and achieves **comparable quality to GPT-4.1**, trailing it only slightly on judge score while leading on every ROUGE variant and on BLEU.

Performance metrics on our internal evaluation set:
| Model | Avg Judge Score | ROUGE-1 | ROUGE-2 | ROUGE-L | BLEU |
|-------|-----------------|---------|---------|---------|------|
| cliptagger_12b | **3.53** | **0.674** | **0.404** | **0.520** | **0.267** |
| claude_4_sonnet | 3.16 | 0.463 | 0.179 | 0.281 | 0.060 |
| gpt_4.1 | 3.64 | 0.581 | 0.260 | 0.376 | 0.119 |

We used Gemini-2.5-Pro as the judge model; it rates ClipTagger-12b roughly on par with GPT-4.1 and ahead of Claude 4 Sonnet.

<img src="./assets/judge-score.png" alt="Average Judge Score Comparison" width="100%" />


FP8 quantization showed no measurable quality degradation compared to bf16 precision.

## Cost Comparison

ClipTagger-12b delivers frontier-quality performance at a fraction of the cost of closed-source alternatives. Based on typical usage patterns (700 input tokens and 250 output tokens per generation), here's how the costs compare:

<img src="./assets/cost.png" alt="Cost Comparison Per 1 Million Generations" width="100%" />

ClipTagger-12b offers **15x cost savings** compared to GPT-4.1 and **17x cost savings** compared to Claude 4 Sonnet, while maintaining comparable quality metrics.

| Model           | Input Cost/MTok | Output Cost/MTok | Cost per 1M Generations | Cost per Generation |
| --------------- | --------------- | ---------------- | ----------------------- | ------------------- |
| ClipTagger-12b  | $0.30           | $0.50            | $335                    | $0.000335           |
| GPT-4.1         | $3.00           | $12.00           | $5,100                  | $0.0051             |
| Claude 4 Sonnet | $3.00           | $15.00           | $5,850                  | $0.00585            |
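
The per-generation figures above follow directly from the per-MTok prices and the assumed 700 input / 250 output tokens. The short sketch below simply reproduces that arithmetic; the prices and token counts are taken from this table, not from a live pricing API.

```python
# Reproduce the per-generation costs from the table above, assuming
# 700 input tokens and 250 output tokens per request.
PRICES_PER_MTOK = {  # (input, output) in USD per million tokens
    "ClipTagger-12b": (0.30, 0.50),
    "GPT-4.1": (3.00, 12.00),
    "Claude 4 Sonnet": (3.00, 15.00),
}
INPUT_TOKENS, OUTPUT_TOKENS = 700, 250

for model, (in_price, out_price) in PRICES_PER_MTOK.items():
    per_gen = (INPUT_TOKENS * in_price + OUTPUT_TOKENS * out_price) / 1_000_000
    print(f"{model}: ${per_gen:.6f} per generation, "
          f"${per_gen * 1_000_000:,.0f} per 1M generations")
```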


## Usage

### API Access

For production deployments, we recommend using our managed API service which includes advanced features like batch processing, webhooks, and automatic scaling:

**[Run ClipTagger-12b via Inference.net API →](https://docs.inference.net/use-cases/video-understanding)**

### Required Prompts

The model requires specific system and user prompts for optimal performance. Use these prompts exactly as shown:

#### System Prompt
```
You are an image annotation API trained to analyze YouTube video keyframes. You will be given instructions on the output format, what to caption, and how to perform your job. Follow those instructions. For descriptions and summaries, provide them directly and do not lead them with 'This image shows' or 'This keyframe displays...', just get right into the details.
```

#### User Prompt
```
You are an image annotation API trained to analyze YouTube video keyframes. You must respond with a valid JSON object matching the exact structure below.

Your job is to extract detailed **factual elements directly visible** in the image. Do not speculate or interpret artistic intent, camera focus, or composition. Do not include phrases like "this appears to be", "this looks like", or anything about the image itself. Describe what **is physically present in the frame**, and nothing more.

Return JSON in this structure:

{
    "description": "A detailed, factual account of what is visibly happening (4 sentences max). Only mention concrete elements or actions that are clearly shown. Do not include anything about how the image is styled, shot, or composed. Do not lead the description with something like 'This image shows' or 'this keyframe is...', just get right into the details.",
    "objects": ["object1 with relevant visual details", "object2 with relevant visual details", ...],
    "actions": ["action1 with participants and context", "action2 with participants and context", ...],
    "environment": "Detailed factual description of the setting and atmosphere based on visible cues (e.g., interior of a classroom with fluorescent lighting, or outdoor forest path with snow-covered trees).",
    "content_type": "The type of content it is, e.g. 'real-world footage', 'video game', 'animation', 'cartoon', 'CGI', 'VTuber', etc.",
    "specific_style": "Specific genre, aesthetic, or platform style (e.g., anime, 3D animation, mobile gameplay, vlog, tutorial, news broadcast, etc.)",
    "production_quality": "Visible production level: e.g., 'professional studio', 'amateur handheld', 'webcam recording', 'TV broadcast', etc.",
    "summary": "One clear, comprehensive sentence summarizing the visual content of the frame. Like the description, get right to the point.",
    "logos": ["logo1 with visual description", "logo2 with visual description", ...]
}

Rules:
- Be specific and literal. Focus on what is explicitly visible.
- Do NOT include interpretations of emotion, mood, or narrative unless it's visually explicit.
- No artistic or cinematic analysis.
- Always include the language of any text in the image if present as an object, e.g. "English text", "Japanese text", "Russian text", etc.
- Maximum 10 objects and 5 actions.
- Return an empty array for 'logos' if none are present.
- Always output strictly valid JSON with proper escaping.
- Output **only the JSON**, no extra text or explanation.
```

### Inference Parameters

- **Temperature**: 0.1 (recommended for consistency)
- **Max Tokens**: 2000
- **Response Format**: `{"type": "json_object"}`
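
Putting the prompts and parameters together, the sketch below shows one way to call the model through an OpenAI-compatible chat-completions endpoint. The `base_url` and `model` values are placeholders (assumptions, not confirmed identifiers); replace them with the values from your own deployment or from the Inference.net documentation, and paste in the two prompts above verbatim.

```python
# Minimal sketch: single-frame captioning via an OpenAI-compatible endpoint.
# base_url and model are placeholders -- substitute your own deployment's values.
import base64
import json
from openai import OpenAI

SYSTEM_PROMPT = "..."  # the system prompt shown above, verbatim
USER_PROMPT = "..."    # the user prompt shown above, verbatim

client = OpenAI(
    base_url="https://api.example.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

# Encode a single video frame as a base64 data URL (JPEG/PNG/WebP/GIF, <= 1MB).
with open("frame.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="cliptagger-12b",  # placeholder model identifier
    temperature=0.1,
    max_tokens=2000,
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": USER_PROMPT},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        },
    ],
)

# The model returns the fixed JSON schema described below.
frame_annotation = json.loads(response.choices[0].message.content)
print(frame_annotation["summary"])
```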

### Output Schema

The model outputs a fixed JSON structure with the following fields:

```json
{
  "description": "string - Detailed factual description (max 4 sentences)",
  "objects": ["array of strings - Up to 10 objects with visual details"],
  "actions": ["array of strings - Up to 5 actions with context"],
  "environment": "string - Setting and atmosphere description",
  "content_type": "string - Type of visual content",
  "specific_style": "string - Genre or style classification",
  "production_quality": "string - Production level assessment",
  "summary": "string - Single sentence summary",
  "logos": ["array of strings - Detected logos with descriptions"]
}
```
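
Because the schema is fixed, it is easy to sanity-check each parsed response before storing it. The helper below is an illustrative sketch (not part of the model or any SDK): it checks field types and the documented limits of 10 objects and 5 actions.

```python
# Illustrative schema check for a parsed ClipTagger-12b annotation.
from typing import Any

STRING_FIELDS = (
    "description", "environment", "content_type",
    "specific_style", "production_quality", "summary",
)
LIST_FIELDS = {"objects": 10, "actions": 5, "logos": None}  # field -> max length

def validate_annotation(annotation: dict[str, Any]) -> list[str]:
    """Return a list of schema violations; an empty list means the frame passed."""
    problems = []
    for field in STRING_FIELDS:
        if not isinstance(annotation.get(field), str):
            problems.append(f"{field}: expected a string")
    for field, max_len in LIST_FIELDS.items():
        value = annotation.get(field)
        if not isinstance(value, list) or not all(isinstance(v, str) for v in value):
            problems.append(f"{field}: expected a list of strings")
        elif max_len is not None and len(value) > max_len:
            problems.append(f"{field}: more than {max_len} entries")
    return problems
```

In a pipeline, any frame whose annotation returns a non-empty problem list can be retried or flagged for review before it reaches your index or database.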

## Example Output

Given a nature scene with a wooden boardwalk through grassland:

```json
{
  "description": "A wooden boardwalk path extends from the foreground into the distance, cutting through a field of tall, vibrant green grass. The path is flanked on both sides by the dense grass. In the background, a line of trees is visible on the horizon under a blue sky with scattered white clouds.",
  "objects": [
    "Wooden boardwalk",
    "Tall green grass",
    "Blue sky",
    "White clouds",
    "Trees"
  ],
  "actions": [],
  "environment": "An outdoor, natural landscape, likely a marsh or wetland, on a clear day. The scene is characterized by a wooden boardwalk, lush green vegetation, and a bright blue sky with wispy clouds.",
  "content_type": "real-world footage",
  "specific_style": "landscape photography",
  "production_quality": "professional photography",
  "summary": "A wooden boardwalk path winds through a lush green field under a bright blue sky with scattered clouds.",
  "logos": []
}
```

## Use Cases

- **Video Search & Discovery** - Build searchable databases with structured metadata
- **Content Moderation** - Automated content analysis and categorization
- **Accessibility** - Generate consistent alt-text and scene descriptions
- **Ad Verification** - Track product visibility and brand appearances
- **Video Analytics** - Extract insights from large video collections
- **Content Management** - Automatic tagging and organization of video libraries

## Interested in training your own model?

Contact us at [[email protected]](mailto:[email protected]) for a free consultation with our research team.

## Support

- **Documentation**: [docs.inference.net](https://docs.inference.net/use-cases/video-understanding)
- **API Access**: Get $25 in free credits when you [sign up](https://inference.net/register) for an account
- **Email**: [email protected]

## License

This model is released under the Apache-2.0 license, allowing for commercial use and modification with proper attribution.