# GrassData/cliptagger-12b

## Model Description

**GrassData/cliptagger-12b** is a 12-billion-parameter vision-language model (VLM) designed for video understanding at massive scale. Developed by [Inference.net](https://inference.net) in collaboration with [Grass](https://grass.io), this model was created to meet the demanding requirements of trillion-scale video frame captioning workloads.

The model generates structured, schema-consistent JSON outputs for every video frame, making it ideal for building searchable video databases, content moderation systems, and accessibility tools. It maintains temporal consistency across frames while delivering frontier-quality performance at a fraction of the cost of closed-source alternatives.

### Key Features

- **Frontier-quality performance** - Comparable to top closed models in captioning quality
- **Production-ready** - Battle-tested on trillion-scale video frame captioning workloads
- **Schema-consistent JSON** - Reliable structured output for every frame
- **Cost-efficient** - Optimized for high-throughput inference
- **Temporal consistency** - Maintains semantic coherence across video sequences
- **Open source** - Build and deploy without proprietary API dependencies

## Architecture

GrassData/cliptagger-12b is based on the Gemma-12B architecture and has been optimized with FP8 quantization for maximum throughput on modern GPUs. The model is specifically tuned for RTX 40-series and H100 GPUs, leveraging native FP8 support for efficient inference.

### Technical Specifications
- **Parameters**: 12 billion
- **Base Architecture**: Gemma-12B
- **Quantization**: FP8 (no quality loss vs bf16)
- **Input**: Single video frame per request
- **Output**: Structured JSON with fixed schema
- **Supported Formats**: JPEG, PNG, WebP, GIF
- **Max Image Size**: 1MB
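
The 1MB limit and the supported formats above are worth enforcing on the client side before a frame is sent. Below is a minimal sketch of one way to do that, assuming Pillow is available (it is not a stated dependency of this model): it re-encodes a frame as JPEG, lowers the quality until the payload fits, and returns a base64 data URL.

```python
import base64
import io

from PIL import Image  # assumption: Pillow is installed (pip install pillow)

MAX_BYTES = 1_000_000  # 1MB input limit from the spec above


def encode_frame(path: str, quality: int = 90) -> str:
    """Re-encode a frame as JPEG, shrinking it until it fits under the 1MB limit,
    and return a base64 data URL suitable for an image_url message part."""
    img = Image.open(path).convert("RGB")
    while True:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        data = buf.getvalue()
        if len(data) <= MAX_BYTES or quality <= 30:
            break
        quality -= 10  # lower JPEG quality and try again
    return "data:image/jpeg;base64," + base64.b64encode(data).decode("ascii")
```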

## Training

The model was trained on 1 million carefully curated single-frame samples from publicly available video data. Training employed knowledge distillation from a high-quality teacher model to ensure consistent, accurate outputs while maintaining the ability to generalize across diverse video content types.

### Training Process
- **Dataset Size**: 1M video frames
- **Training Method**: Teacher-student distillation
- **Data Source**: Publicly available video content
- **Focus**: Single-frame understanding with temporal awareness

## Benchmarks

Performance metrics on our internal evaluation set:

| Model Variant | Judge Score | ROUGE-1 | ROUGE-2 | ROUGE-L | BLEU |
|--------------|-------------|---------|---------|---------|------|
| Base Gemma 12B | 3.00 | 0.490 | 0.198 | 0.299 | 0.074 |
| + 100K samples | 3.29 | 0.649 | 0.367 | 0.490 | 0.232 |
| + 1M samples (final) | **3.53** | **0.674** | **0.404** | **0.520** | **0.267** |

FP8 quantization showed no measurable quality degradation compared to bf16 precision.

## Usage

### API Access

For production deployments, we recommend using our managed API service, which includes advanced features like batch processing, webhooks, and automatic scaling:

**[Run GrassData/cliptagger-12b via Inference.net API →](https://inference.net/use-cases/video-understanding)**

### Required Prompts

The model requires specific system and user prompts for optimal performance. Use these prompts exactly as shown:

#### System Prompt
```
You are an image annotation API trained to analyze YouTube video keyframes. You will be given instructions on the output format, what to caption, and how to perform your job. Follow those instructions. For descriptions and summaries, provide them directly and do not lead them with 'This image shows' or 'This keyframe displays...', just get right into the details.
```

#### User Prompt
```
You are an image annotation API trained to analyze YouTube video keyframes. You must respond with a valid JSON object matching the exact structure below.

Your job is to extract detailed **factual elements directly visible** in the image. Do not speculate or interpret artistic intent, camera focus, or composition. Do not include phrases like "this appears to be", "this looks like", or anything about the image itself. Describe what **is physically present in the frame**, and nothing more.

Return JSON in this structure:

{
"description": "A detailed, factual account of what is visibly happening (4 sentences max). Only mention concrete elements or actions that are clearly shown. Do not include anything about how the image is styled, shot, or composed. Do not lead the description with something like 'This image shows' or 'this keyframe is...', just get right into the details.",
"objects": ["object1 with relevant visual details", "object2 with relevant visual details", ...],
"actions": ["action1 with participants and context", "action2 with participants and context", ...],
"environment": "Detailed factual description of the setting and atmosphere based on visible cues (e.g., interior of a classroom with fluorescent lighting, or outdoor forest path with snow-covered trees).",
"content_type": "The type of content it is, e.g. 'real-world footage', 'video game', 'animation', 'cartoon', 'CGI', 'VTuber', etc.",
"specific_style": "Specific genre, aesthetic, or platform style (e.g., anime, 3D animation, mobile gameplay, vlog, tutorial, news broadcast, etc.)",
"production_quality": "Visible production level: e.g., 'professional studio', 'amateur handheld', 'webcam recording', 'TV broadcast', etc.",
"summary": "One clear, comprehensive sentence summarizing the visual content of the frame. Like the description, get right to the point.",
"logos": ["logo1 with visual description", "logo2 with visual description", ...]
}

Rules:
- Be specific and literal. Focus on what is explicitly visible.
- Do NOT include interpretations of emotion, mood, or narrative unless it's visually explicit.
- No artistic or cinematic analysis.
- Always include the language of any text in the image if present as an object, e.g. "English text", "Japanese text", "Russian text", etc.
- Maximum 10 objects and 5 actions.
- Return an empty array for 'logos' if none are present.
- Always output strictly valid JSON with proper escaping.
- Output **only the JSON**, no extra text or explanation.
```

### Inference Parameters

- **Temperature**: 0.1 (recommended for consistency)
- **Max Tokens**: 2000
- **Response Format**: `{"type": "json_object"}`
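
As a sketch of how these pieces fit together, the snippet below sends one frame to an OpenAI-compatible chat completions endpoint with the required prompts and the recommended parameters. The base URL, model identifier, and `INFERENCE_API_KEY` environment variable are illustrative assumptions rather than documented values; substitute the details of your own deployment and paste in the full prompts from the section above.

```python
import json
import os

from openai import OpenAI  # assumption: the serving endpoint is OpenAI-compatible

# Paste the full prompts from the "Required Prompts" section above (abbreviated here).
SYSTEM_PROMPT = "You are an image annotation API trained to analyze YouTube video keyframes. ..."
USER_PROMPT = "You are an image annotation API trained to analyze YouTube video keyframes. ..."

client = OpenAI(
    base_url="https://api.inference.net/v1",  # assumed base URL; use your deployment's value
    api_key=os.environ["INFERENCE_API_KEY"],  # assumed environment variable name
)


def caption_frame(image_data_url: str) -> dict:
    """Annotate a single frame and return the parsed JSON annotation."""
    response = client.chat.completions.create(
        model="grassdata/cliptagger-12b",         # assumed model identifier on the endpoint
        temperature=0.1,                          # recommended for consistency
        max_tokens=2000,
        response_format={"type": "json_object"},  # enforce valid JSON output
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": USER_PROMPT},
                    {"type": "image_url", "image_url": {"url": image_data_url}},
                ],
            },
        ],
    )
    return json.loads(response.choices[0].message.content)
```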

### Output Schema

The model outputs a fixed JSON structure with the following fields:

```json
{
  "description": "string - Detailed factual description (max 4 sentences)",
  "objects": ["array of strings - Up to 10 objects with visual details"],
  "actions": ["array of strings - Up to 5 actions with context"],
  "environment": "string - Setting and atmosphere description",
  "content_type": "string - Type of visual content",
  "specific_style": "string - Genre or style classification",
  "production_quality": "string - Production level assessment",
  "summary": "string - Single sentence summary",
  "logos": ["array of strings - Detected logos with descriptions"]
}
```
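
Downstream pipelines usually index these fields directly, so it can be worth validating each response before storing it. The helper below is a hypothetical check, not part of the model or its tooling; the field names come from the schema above and the limits mirror the prompt's rules (at most 10 objects and 5 actions).

```python
REQUIRED_STRING_FIELDS = {
    "description", "environment", "content_type",
    "specific_style", "production_quality", "summary",
}
# Maps list-valued fields to their documented maximum lengths (None = no stated limit).
REQUIRED_LIST_FIELDS = {"objects": 10, "actions": 5, "logos": None}


def validate_annotation(data: dict) -> list[str]:
    """Return a list of schema problems; an empty list means the annotation is usable."""
    problems = []
    for field in REQUIRED_STRING_FIELDS:
        if not isinstance(data.get(field), str) or not data[field].strip():
            problems.append(f"missing or empty string field: {field}")
    for field, max_len in REQUIRED_LIST_FIELDS.items():
        value = data.get(field)
        if not isinstance(value, list):
            problems.append(f"missing or non-list field: {field}")
        elif max_len is not None and len(value) > max_len:
            problems.append(f"{field} has {len(value)} entries, above the documented limit of {max_len}")
    return problems
```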

## Example Output

Given a nature scene with a wooden boardwalk through grassland:

```json
{
  "description": "A wooden boardwalk path extends from the foreground into the distance, cutting through a field of tall, vibrant green grass. The path is flanked on both sides by the dense grass. In the background, a line of trees is visible on the horizon under a blue sky with scattered white clouds.",
  "objects": [
    "Wooden boardwalk",
    "Tall green grass",
    "Blue sky",
    "White clouds",
    "Trees"
  ],
  "actions": [],
  "environment": "An outdoor, natural landscape, likely a marsh or wetland, on a clear day. The scene is characterized by a wooden boardwalk, lush green vegetation, and a bright blue sky with wispy clouds.",
  "content_type": "real-world footage",
  "specific_style": "landscape photography",
  "production_quality": "professional photography",
  "summary": "A wooden boardwalk path winds through a lush green field under a bright blue sky with scattered clouds.",
  "logos": []
}
```

## Use Cases

- **Video Search & Discovery** - Build searchable databases with structured metadata
- **Content Moderation** - Automated content analysis and categorization
- **Accessibility** - Generate consistent alt-text and scene descriptions
- **Ad Verification** - Track product visibility and brand appearances
- **Video Analytics** - Extract insights from large video collections
- **Content Management** - Automatic tagging and organization of video libraries

## Limitations

- Processes one video frame per request
- English-only descriptions (can identify text in other languages)
- Maximum image size: 1MB
- Requires specific prompts for optimal performance
- Not supported on A100 GPUs (no native FP8)

## Best Practices

1. **Use exact prompts** - The provided system and user prompts are optimized for best results
2. **Set low temperature** - Use temperature=0.1 for consistent outputs
3. **Enable JSON mode** - Always set response_format to ensure valid JSON
4. **Process systematically** - Maintain temporal order when processing video sequences (see the sketch after this list)
5. **Batch similar content** - Group frames from the same video for efficiency
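
A rough sketch of points 4 and 5: sample one keyframe per interval from a video, keep the frames in timestamp order, and annotate them sequentially so the results stay temporally coherent per video. OpenCV (`cv2`) and the `encode_frame` / `caption_frame` helpers from the earlier sketches are assumptions rather than shipped tooling.

```python
import cv2  # assumption: opencv-python is installed


def caption_video(path: str, every_n_seconds: float = 1.0) -> list[dict]:
    """Annotate one keyframe per interval, in temporal order, for a single video."""
    capture = cv2.VideoCapture(path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    step = max(int(fps * every_n_seconds), 1)
    annotations = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            # Re-encode under the 1MB limit and annotate; helpers are sketched above.
            cv2.imwrite("/tmp/frame.jpg", frame)
            annotations.append({
                "timestamp_s": index / fps,
                "annotation": caption_frame(encode_frame("/tmp/frame.jpg")),
            })
        index += 1
    capture.release()
    return annotations
```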

## Support

- **Documentation**: [docs.inference.net](https://docs.inference.net)
- **API Access**: [inference.net/use-cases/video-understanding](https://inference.net/use-cases/video-understanding)
- **Email**: [email protected]
- **Enterprise**: [Schedule a consultation](https://inference.net/sales)

## License

This model is released under the Apache-2.0 license, allowing for commercial use and modification with proper attribution.