ai-forever committed
Commit 94d0b5a · verified · 1 Parent(s): 32e893c

upload videomae-large-camera-motion


A [VideoMAE model](https://huggingface.co/docs/transformers/model_doc/videomae) (`large`) variant fine-tuned for multi-label video classification (a video can belong to multiple classes simultaneously), applied to camera motion classification on an internal dataset.

The model predicts `18` camera motion classes: `'arc_left', 'arc_right', 'dolly_in', 'dolly_out', 'pan_left', 'pan_right', 'pedestal_down', 'pedestal_up', 'roll_left', 'roll_right', 'static', 'tilt_down', 'tilt_up', 'truck_left', 'truck_right', 'undefined', 'zoom_in', 'zoom_out'`, and 3 shot-type classes: `'pov', 'shake', 'track'`.

The model was trained to associate the **entire video** with camera labels, not frame-level motions: [input video] -> one or more labels (the task is multi-label) for the whole video. A camera motion should therefore be predicted only if it persists across all frames of the video; otherwise the model should predict `undefined`.

The model is configured to process `config.num_frames=16` frames per input clip. These frames are sampled uniformly from the input video, regardless of its original duration. For videos longer than **2 seconds**, processing the entire video as a single clip may miss temporal nuances (e.g., varying camera motions), so the recommended workflow for such videos is as follows (see the sketch after the list):

(a) split the video into non-overlapping 2-second segments (or sliding windows with optional overlap).

(b) run inference independently on each segment.

(c) post-process the per-segment results (e.g., aggregate the predicted labels over segments).
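A minimal sketch of this workflow, assuming the checkpoint repo id `ai-forever/videomae-large-camera-motion`, `decord` for frame extraction, and a 0.5 decision threshold (all three are assumptions, not stated in this commit):

```python
import numpy as np
import torch
from decord import VideoReader
from transformers import VideoMAEForVideoClassification, VideoMAEImageProcessor

ckpt = "ai-forever/videomae-large-camera-motion"  # assumed repo id
processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)
model.eval()

vr = VideoReader("input.mp4")
fps = vr.get_avg_fps()
num_frames = model.config.num_frames                   # 16 frames per clip
seg_len = max(int(2 * fps), num_frames)                # (a) 2-second segments

results = []
for start in range(0, len(vr), seg_len):
    end = min(start + seg_len, len(vr))
    if end - start < num_frames:
        break  # skip a trailing segment too short to sample 16 frames from
    # sample num_frames frames uniformly from the segment
    idx = np.linspace(start, end - 1, num_frames).astype(int)
    frames = list(vr.get_batch(idx).asnumpy())         # list of HxWx3 uint8 frames
    inputs = processor(frames, return_tensors="pt")
    with torch.no_grad():                              # (b) inference per segment
        logits = model(**inputs).logits[0]
    probs = torch.sigmoid(logits)                      # multi-label: per-class sigmoid
    labels = [model.config.id2label[i]
              for i, p in enumerate(probs) if p > 0.5]  # assumed threshold
    results.append((start / fps, labels))

for t, labels in results:                              # (c) post-process; here just print
    print(f"{t:.1f}s: {labels}")
```

Overlapping windows can be obtained by shrinking the loop stride below `seg_len`; how to merge per-segment labels (voting, smoothing, etc.) is left to the downstream application.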

Files changed (3)
  1. config.json +77 -0
  2. model.safetensors +3 -0
  3. preprocessor_config.json +26 -0
config.json ADDED
@@ -0,0 +1,77 @@
+ {
+   "_name_or_path": "MCG-NJU/videomae-large",
+   "architectures": [
+     "VideoMAEForVideoClassification"
+   ],
+   "attention_probs_dropout_prob": 0.0,
+   "decoder_hidden_size": 512,
+   "decoder_intermediate_size": 2048,
+   "decoder_num_attention_heads": 8,
+   "decoder_num_hidden_layers": 12,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.0,
+   "hidden_size": 1024,
+   "id2label": {
+     "0": "arc_left",
+     "1": "arc_right",
+     "2": "dolly_in",
+     "3": "dolly_out",
+     "4": "pan_left",
+     "5": "pan_right",
+     "6": "pedestal_down",
+     "7": "pedestal_up",
+     "8": "pov",
+     "9": "roll_left",
+     "10": "roll_right",
+     "11": "shake",
+     "12": "static",
+     "13": "tilt_down",
+     "14": "tilt_up",
+     "15": "track",
+     "16": "truck_left",
+     "17": "truck_right",
+     "18": "undefined",
+     "19": "zoom_in",
+     "20": "zoom_out"
+   },
+   "image_size": 224,
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "label2id": {
+     "arc_left": 0,
+     "arc_right": 1,
+     "dolly_in": 2,
+     "dolly_out": 3,
+     "pan_left": 4,
+     "pan_right": 5,
+     "pedestal_down": 6,
+     "pedestal_up": 7,
+     "pov": 8,
+     "roll_left": 9,
+     "roll_right": 10,
+     "shake": 11,
+     "static": 12,
+     "tilt_down": 13,
+     "tilt_up": 14,
+     "track": 15,
+     "truck_left": 16,
+     "truck_right": 17,
+     "undefined": 18,
+     "zoom_in": 19,
+     "zoom_out": 20
+   },
+   "layer_norm_eps": 1e-12,
+   "model_type": "videomae",
+   "norm_pix_loss": true,
+   "num_attention_heads": 16,
+   "num_channels": 3,
+   "num_frames": 16,
+   "num_hidden_layers": 24,
+   "patch_size": 16,
+   "problem_type": "multi_label_classification",
+   "qkv_bias": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.48.3",
+   "tubelet_size": 2,
+   "use_mean_pooling": false
+ }
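Note that `problem_type` is set to `multi_label_classification`: with this setting, `VideoMAEForVideoClassification` trains its head with `BCEWithLogitsLoss`, so the 21 logits are independent and should be squashed with a per-class sigmoid rather than a softmax. A minimal sketch (the repo id and the 0.5 threshold are assumptions):

```python
import torch
from transformers import VideoMAEConfig

# Assumed repo id; adjust to the actual checkpoint location.
config = VideoMAEConfig.from_pretrained("ai-forever/videomae-large-camera-motion")
assert config.problem_type == "multi_label_classification"

logits = torch.randn(1, len(config.id2label))  # stand-in for model(**inputs).logits
probs = torch.sigmoid(logits)[0]               # independent per-class probabilities
labels = [config.id2label[i] for i, p in enumerate(probs) if p > 0.5]  # assumed threshold
print(labels)
```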
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f76ada1a8fa74170de66e8fc797aca2f448d0ed44ebd8cd9b8106a25803d6ae
+ size 1215574156
preprocessor_config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "crop_size": {
+     "height": 224,
+     "width": 224
+   },
+   "do_center_crop": true,
+   "do_normalize": true,
+   "do_rescale": true,
+   "do_resize": true,
+   "image_mean": [
+     0.485,
+     0.456,
+     0.406
+   ],
+   "image_processor_type": "VideoMAEImageProcessor",
+   "image_std": [
+     0.229,
+     0.224,
+     0.225
+   ],
+   "resample": 2,
+   "rescale_factor": 0.00392156862745098,
+   "size": {
+     "shortest_edge": 224
+   }
+ }
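For reference, `resample: 2` corresponds to PIL bilinear interpolation, and the mean/std values are the standard ImageNet statistics. A minimal sketch of the rescale and normalize steps applied to a single frame that has already been resized and center-cropped to 224×224:

```python
import numpy as np

IMAGE_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGE_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def rescale_and_normalize(frame: np.ndarray) -> np.ndarray:
    """frame: HxWx3 uint8 RGB array, already resized and center-cropped to 224x224."""
    x = frame.astype(np.float32) * 0.00392156862745098  # rescale_factor = 1/255
    return (x - IMAGE_MEAN) / IMAGE_STD                 # per-channel normalization
```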