download_size: 20108431280
dataset_size: 20575644792.0
---
# VIDIT Dataset
This is a version of the [VIDIT dataset](https://github.com/majedelhelou/VIDIT) prepared for training ControlNet with depth-map conditioning.
VIDIT includes 390 different Unreal Engine scenes, each captured with 40 illumination settings, resulting in 15,600 images. The illumination settings are all combinations of 5 color temperatures (2500K, 3500K, 4500K, 5500K, and 6500K) and 8 light directions (N, NE, E, SE, S, SW, W, NW). The original image resolution is 1024×1024.
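As a quick sanity check on the numbers above, the illumination settings can be enumerated as the Cartesian product of the 8 directions and 5 color temperatures (plain Python, no dataset access required):

```python
from itertools import product

# The 8 light directions and 5 color temperatures described above.
DIRECTIONS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
TEMPERATURES_K = [2500, 3500, 4500, 5500, 6500]

# Each illumination setting is one (direction, temperature) pair.
settings = list(product(DIRECTIONS, TEMPERATURES_K))

print(len(settings))        # 8 * 5 = 40 settings per scene
print(390 * len(settings))  # 15,600 images in the full dataset
print(300 * len(settings))  # 12,000 images in this training split
```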
This version includes only the training split, which contains 300 scenes.
Captions were generated using the [BLIP-2, Flan T5-xxl](https://huggingface.co/Salesforce/blip2-flan-t5-xxl) model.
Depth maps were generated using the [GLPN model fine-tuned on NYUv2](https://huggingface.co/vinvino02/glpn-nyu).
## Examples with varying direction
## Examples with varying color temperature
## Disclaimer
I do not own any of this data.