Update README.md
README.md CHANGED
@@ -13,3 +13,23 @@ huggingface-cli download 1x-technologies/worldmodel --repo-type dataset --local-
Current version: v1.0

- **magvit2.ckpt** - weights for the [Magvit2](https://github.com/TencentARC/Open-MAGVIT2) image tokenizer we used. We provide both the encoder (tokenizer) and decoder (de-tokenizer) weights.
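If you want a quick look at what is actually inside the checkpoint before wiring up the Open-MAGVIT2 code linked above, the snippet below is a minimal sketch; it assumes `magvit2.ckpt` is an ordinary PyTorch checkpoint sitting in your working directory (an assumption, not something stated here) and only prints the parameter layout.

```python
import torch

# Assumption: magvit2.ckpt is a standard PyTorch/Lightning-style checkpoint.
# The path is hypothetical - point it at your local copy of the download.
# (On newer PyTorch you may need torch.load(..., weights_only=False).)
ckpt = torch.load("magvit2.ckpt", map_location="cpu")

# Lightning-style checkpoints usually nest weights under "state_dict";
# a bare state dict is already the parameter-name -> tensor mapping.
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

# Print a few parameter names and shapes to see how the encoder (tokenizer)
# and decoder (de-tokenizer) weights are grouped.
for name in list(state_dict)[:20]:
    value = state_dict[name]
    print(name, getattr(value, "shape", type(value)))
```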

Contents of train/val_v1.0 (see the loading sketch after this list):
- **video.bin** - 16x16 image patches at 30 Hz; each patch is vector-quantized into 2^18 possible integer values. These can be decoded into 256x256 RGB images using the provided `magvit2.ckpt` weights.
- **segment_ids.bin** - for each frame `i`, `segment_ids[i]` uniquely points to the log index that frame `i` came from. You may want to use this to separate non-contiguous frames from different videos (transitions).
- **actions/** - a folder of action arrays stored in `np.float32` format. For frame `i`, the corresponding action is given by `driving_command[i]`, `joint_pos[i]`, `l_hand_closure[i]`, and so on. The shapes of the arrays are as follows (N is the number of frames):
```
{
    joint_pos: (N, 21),
    driving_command: (N, 2),
    neck_desired: (N, 1),
    l_hand_closure: (N, 1),
    r_hand_closure: (N, 1),
}
```
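The exact on-disk dtypes and file names are not spelled out above, so the loader below is only a sketch: it assumes the tokens in `video.bin` are stored as 32-bit integers (2^18 possible values do not fit in 16 bits), that `segment_ids.bin` holds one integer per frame, that the arrays under `actions/` live in files named like `joint_pos.bin`, and that every file is a raw buffer reshapeable to the shapes listed above. Adjust dtypes, names, and the path to match your download.

```python
import numpy as np

# Hypothetical local path to one split of the download.
root = "worldmodel/train_v1.0"

# video.bin: one 16x16 grid of vector-quantized patch tokens per frame.
# The dtype is an assumption - 2^18 possible values need at least 32-bit ints.
tokens = np.memmap(f"{root}/video.bin", dtype=np.uint32, mode="r").reshape(-1, 16, 16)

# segment_ids.bin: the source-log index of every frame (dtype assumed).
segment_ids = np.memmap(f"{root}/segment_ids.bin", dtype=np.int32, mode="r")

# actions/: raw np.float32 buffers, reshaped to the documented (N, D) shapes.
# The file names are assumptions; use whatever names appear in your download.
action_dims = {
    "joint_pos": 21,
    "driving_command": 2,
    "neck_desired": 1,
    "l_hand_closure": 1,
    "r_hand_closure": 1,
}
actions = {
    name: np.memmap(f"{root}/actions/{name}.bin", dtype=np.float32, mode="r").reshape(-1, dim)
    for name, dim in action_dims.items()
}

# Frame i, its source log, and its action vectors are all aligned by index.
# If a reshape fails, the assumed dtype or layout is wrong - check the file
# size in bytes against the shapes documented above.
i = 0
print(tokens[i].shape, int(segment_ids[i]), actions["driving_command"][i])
```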

We also provide a small `val_v1.0` data split containing held-out examples not seen during training, in case you want to evaluate your model on unseen frames.
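Because consecutive frames can come from different source videos, a context window that spans a transition mixes unrelated footage. Grouping frame indices by `segment_ids` avoids this, both when training and when evaluating on `val_v1.0`; the sketch below shows one way to do that grouping (the window length is a hypothetical choice, and the toy array just stands in for the real `segment_ids`).

```python
import numpy as np

def contiguous_segments(segment_ids: np.ndarray):
    """Yield (start, end) index ranges whose frames all share one segment id."""
    # Positions where the segment id changes mark transitions between videos.
    change_points = np.flatnonzero(np.diff(segment_ids)) + 1
    starts = np.concatenate(([0], change_points))
    ends = np.concatenate((change_points, [len(segment_ids)]))
    for start, end in zip(starts, ends):
        yield int(start), int(end)

# Toy example; in practice pass the segment_ids array loaded from segment_ids.bin.
toy_ids = np.array([7, 7, 7, 9, 9, 4, 4, 4])
print(list(contiguous_segments(toy_ids)))  # [(0, 3), (3, 5), (5, 8)]

# Keep only context windows that never cross a transition.
window = 3  # hypothetical context length
valid_starts = [
    i
    for start, end in contiguous_segments(toy_ids)
    for i in range(start, end - window + 1)
]
print(valid_starts)  # [0, 5]
```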