# Pre-training SIGMA 

## Original Implementation

Our SIGMA implementation supports **multi-node distributed training**. We provide **off-the-shelf** scripts in the [scripts folder](scripts).

- For example, to pre-train SIGMA ViT-Base on **Something-Something V2** with 64 GPUs (8 nodes x 8 GPUs), you can run

  ```bash
  OUTPUT_DIR='YOUR_PATH/ssv2_SIGMA_pretrain_base_patch16_224_frame_16x2_tube_mask_ratio_0.9_e800'
  DATA_PATH='YOUR_PATH/list_ssv2/train.csv'
  
  OMP_NUM_THREADS=1 python -m torch.distributed.launch --nproc_per_node=8 \
          --master_port 12320 --nnodes=8 \
          --node_rank=0 --master_addr=$ip_node_0 \
          run_mae_pretraining.py \
          --data_path ${DATA_PATH} \
          --mask_type tube \
          --mask_ratio 0.9 \
          --model pretrain_videomae_base_patch16_224 \
          --decoder_depth 4 \
          --batch_size 32 \
          --num_frames 16 \
          --sampling_rate 2 \
          --opt adamw \
          --opt_betas 0.9 0.95 \
          --warmup_epochs 40 \
          --save_ckpt_freq 20 \
          --epochs 801 \
          --log_dir ${OUTPUT_DIR} \
          --output_dir ${OUTPUT_DIR}
  ```

  on the first node. On the other nodes, run the same command with `--node_rank 1`, ..., `--node_rank 7`, respectively. `--master_addr` should be set to the IP of node 0.

- For example, to pre-train SIGMA ViT-Base on **Kinetics400** with 64 GPUs (8 nodes x 8 GPUs), you can run

  ```bash
  OUTPUT_DIR='YOUR_PATH/k400_SIGMA_pretrain_base_patch16_224_frame_16x4_tube_mask_ratio_0.9_e800'
  DATA_PATH='YOUR_PATH/list_kinetics-400/train.csv'
  
  OMP_NUM_THREADS=1 python3 -m torch.distributed.launch --nproc_per_node=8 \
          --master_port 12320 --nnodes=8 \
          --node_rank=0 --master_addr=$your_node_0_ip \
          run_mae_pretraining.py \
          --data_path ${DATA_PATH} \
          --mask_type tube \
          --mask_ratio 0.9 \
          --model pretrain_videomae_base_patch16_224 \
          --decoder_depth 4 \
          --batch_size 32 \
          --num_frames 16 \
          --sampling_rate 4 \
          --opt adamw \
          --opt_betas 0.9 0.95 \
          --warmup_epochs 40 \
          --save_ckpt_freq 20 \
          --epochs 801 \
          --log_dir ${OUTPUT_DIR} \
          --output_dir ${OUTPUT_DIR}
  ```

  on the first node. On the other nodes, run the same command with `--node_rank 1`, ..., `--node_rank 7`, respectively. `--master_addr` should be set to the IP of node 0, as shown in the sketch below.
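
  For instance, the same Kinetics400 command launched on the second node (rank 1) looks like the following; only `--node_rank` changes, `$your_node_0_ip` is the same placeholder as above, and `DATA_PATH` / `OUTPUT_DIR` are assumed to resolve to the same data list and (shared) output directory on every node:

  ```bash
  # Node 1 of 8: identical to the node-0 command except for --node_rank.
  OMP_NUM_THREADS=1 python3 -m torch.distributed.launch --nproc_per_node=8 \
          --master_port 12320 --nnodes=8 \
          --node_rank=1 --master_addr=$your_node_0_ip \
          run_mae_pretraining.py \
          --data_path ${DATA_PATH} \
          --mask_type tube \
          --mask_ratio 0.9 \
          --model pretrain_videomae_base_patch16_224 \
          --decoder_depth 4 \
          --batch_size 32 \
          --num_frames 16 \
          --sampling_rate 4 \
          --opt adamw \
          --opt_betas 0.9 0.95 \
          --warmup_epochs 40 \
          --save_ckpt_freq 20 \
          --epochs 801 \
          --log_dir ${OUTPUT_DIR} \
          --output_dir ${OUTPUT_DIR}
  ```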

### Note:

- Here, the effective total batch size is 32 (`batch_size` per GPU) * 8 (nodes) * 8 (GPUs per node) = 2048.
- `lr` here is the base learning rate and is set to `1.5e-4` by default. The actual learning rate is computed by the [linear scaling rule](https://arxiv.org/abs/1706.02677): `actual lr = lr * total batch size / 256` (see the quick calculation after these notes).
- [Fixed]~~We observed an accidental interrupt in the last epoch when conducting the experiment on V100 GPUs (torch 1.6.0). This interrupt was caused by the learning-rate scheduler. We naively set `--epochs 801` to work around the issue :)~~
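
With the default settings above, the scaling rule works out as follows (a quick sanity check; the one-liner is only illustrative and not part of the training code):

```bash
# total batch size = 32 per GPU * 64 GPUs = 2048
# actual lr = 1.5e-4 * 2048 / 256 = 1.2e-3
python -c "print(1.5e-4 * 32 * 64 / 256)"   # -> 0.0012
```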

## Slurm

To help the community reproduce our results on a Slurm cluster, we also provide an **off-the-shelf** script.

For example, to pre-train SIGMA ViT-Base on **Kinetics400** with 64 GPUs (8 nodes x 8 GPUs), you can run

```bash
export MASTER_PORT=$((12000 + $RANDOM % 20000))
export OMP_NUM_THREADS=1

OUTPUT_DIR='YOUR_PATH/k400_SIGMA_pretrain_base_patch16_224_frame_16x4_tube_mask_ratio_0.9_e800'
DATA_PATH='YOUR_PATH/list_kinetics-400/train.csv'

JOB_NAME=$1
PARTITION=${PARTITION:-"video"}
# 8 for 1 node, 16 for 2 nodes, etc.
GPUS=${GPUS:-64}
GPUS_PER_NODE=${GPUS_PER_NODE:-8}
CPUS_PER_TASK=${CPUS_PER_TASK:-8}
SRUN_ARGS=${SRUN_ARGS:-""}
PY_ARGS=${@:2}

# batch_size can be adjusted according to the available GPU memory
srun -p $PARTITION \
        --job-name=${JOB_NAME} \
        --gres=gpu:${GPUS_PER_NODE} \
        --ntasks=${GPUS} \
        --ntasks-per-node=${GPUS_PER_NODE} \
        --cpus-per-task=${CPUS_PER_TASK} \
        --kill-on-bad-exit=1 \
        ${SRUN_ARGS} \
        python -u run_mae_pretraining.py \
        --data_path ${DATA_PATH} \
        --mask_type tube \
        --mask_ratio 0.9 \
        --model pretrain_SIGMA_base_patch16_224 \
        --decoder_depth 4 \
        --batch_size 32 \
        --num_frames 16 \
        --sampling_rate 4 \
        --opt adamw \
        --opt_betas 0.9 0.95 \
        --warmup_epochs 40 \
        --save_ckpt_freq 20 \
        --epochs 801 \
        --log_dir ${OUTPUT_DIR} \
        --output_dir ${OUTPUT_DIR} \
        ${PY_ARGS}
```
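
Assuming the snippet above is saved as a standalone script that takes the job name as its first argument (the exact filename shipped in the [scripts folder](scripts) may differ), a typical submission from a login node could look like:

```bash
# Hypothetical invocation; adjust the script path to the one in scripts/.
# The first argument becomes JOB_NAME; any further arguments would be
# forwarded to run_mae_pretraining.py verbatim via PY_ARGS.
GPUS=64 GPUS_PER_NODE=8 PARTITION=video \
        bash scripts/pretrain_sigma_k400_slurm.sh sigma_k400_pretrain
```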