update readme
README.md CHANGED
@@ -159,9 +159,9 @@ Raw evaluation outputs are available [here](https://huggingface.co/nkkbr/ViCA/tr
 
 While the full **ViCA-322K** dataset was curated by us, the underlying videos and associated metadata are sourced from three distinct indoor video datasets:
 
-* **ARKitScenes**
-* **ScanNet**
-* **ScanNet++**
+* **[ARKitScenes](https://machinelearning.apple.com/research/arkitscenes)**
+* **[ScanNet](http://www.scan-net.org)**
+* **[ScanNet++](https://kaldir.vc.in.tum.de/scannetpp/)**
 
 To better understand how each source contributes to model performance, we fine-tuned ViCA-7B on subsets of ViCA-322K that exclusively use data from each source. For each subset, we provide checkpoints trained with **10% increments** of the available data, from 10% to 100%.
 
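For readers who want to pull one of these per-source checkpoints locally, here is a minimal sketch using `huggingface_hub`. The per-checkpoint repository naming scheme (`nkkbr/ViCA-7B-{source}-{fraction}pct`) and the source labels are assumptions for illustration only; consult the ViCA model card on the Hugging Face Hub for the actual checkpoint identifiers.

```python
# Minimal sketch: download one of the per-source, partial-data ViCA-7B checkpoints.
# NOTE: the repo_id naming scheme below is hypothetical; check the ViCA model card
# for the real checkpoint names before running this.
from huggingface_hub import snapshot_download

source = "scannet"   # assumed labels: "arkitscenes", "scannet", "scannetpp"
fraction = 50        # percentage of that source's data: 10, 20, ..., 100

local_dir = snapshot_download(
    repo_id=f"nkkbr/ViCA-7B-{source}-{fraction}pct",  # hypothetical repo name
)
print(f"Checkpoint files downloaded to: {local_dir}")
```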