Update README.md
README.md
CHANGED
@@ -17,9 +17,9 @@ The abstract from the paper is the following:
This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360-degree-view experiences using TouchDesigner. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design. Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences.

![LDM3D overview](model_overview.png)
+ <font size="2">LDM3D overview taken from [the original paper](https://arxiv.org/abs/2305.10853)</font>

## Intended uses

You can use this model to generate an RGB image and a depth map given a text prompt.
A short video summarizing the approach can be found at [this url](https://t.ly/tdi2) and a VR demo can be found [here](https://www.youtube.com/watch?v=3hbUo-hwAs0).
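The README's full usage snippet is not visible in this diff (only its closing lines appear in the hunk below), so here is a minimal sketch of the intended usage. It assumes the `StableDiffusionLDM3DPipeline` class from `diffusers` and the `Intel/ldm3d` checkpoint id; the prompt and output file names are illustrative.

```python
import torch
from diffusers import StableDiffusionLDM3DPipeline

# Assumed checkpoint id and pipeline class for the diffusers integration of LDM3D.
pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

prompt = "A picture of some lemons on a table"  # illustrative prompt
name = "lemons"

output = pipe(prompt)
rgb_image, depth_image = output.rgb, output.depth  # lists of PIL images
rgb_image[0].save(name + "_ldm3d_rgb.jpg")
depth_image[0].save(name + "_ldm3d_depth.png")     # matches the depth-map save shown in the hunk below
```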
@@ -46,21 +46,22 @@ depth_image[0].save(name+"_ldm3d_depth.png")

```
### Limitations and bias

+ Limitations and bias are the same as the ones from [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion-v1-4#limitations).
## Training data

The LDM3D model was fine-tuned on a dataset constructed from a subset of the LAION-400M dataset, a large-scale image-caption dataset that contains over 400 million image-caption pairs.

- ### Preprocessing

### Finetuning

The fine-tuning process comprises two stages. In the first stage, we train an autoencoder to generate a lower-dimensional, perceptually equivalent data representation. Subsequently, we fine-tune the diffusion model using the frozen autoencoder.
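To make the two-stage recipe above concrete, here is a self-contained toy sketch in PyTorch: stage one trains an autoencoder on (dummy) RGBD inputs, stage two freezes it and trains a noise-prediction model on its latents. The module definitions, the 4-channel RGBD layout, and the simplistic noise schedule are illustrative stand-ins, not the actual LDM3D training code.

```python
import torch
import torch.nn as nn


# Illustrative stand-ins: the real LDM3D uses a KL autoencoder over 4-channel RGBD
# inputs and a U-Net denoiser, trained on LAION-derived image/depth/caption tuples.
class TinyAutoencoder(nn.Module):
    def __init__(self, in_ch=4, latent_ch=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, latent_ch, 3, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 32, 4, stride=2, padding=1), nn.SiLU(),
            nn.ConvTranspose2d(32, in_ch, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


class TinyDenoiser(nn.Module):
    """Predicts the noise that was added to a latent at a given timestep."""

    def __init__(self, latent_ch=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(latent_ch + 1, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, latent_ch, 3, padding=1),
        )

    def forward(self, z_noisy, t):
        # Broadcast the normalised timestep as an extra feature channel.
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *z_noisy.shape[2:])
        return self.net(torch.cat([z_noisy, t_map], dim=1))


rgbd = torch.rand(2, 4, 64, 64)  # dummy RGBD batch (3 colour channels + 1 depth channel)

# Stage 1: train the autoencoder to reconstruct RGBD inputs.
ae = TinyAutoencoder()
opt_ae = torch.optim.Adam(ae.parameters(), lr=1e-4)
for _ in range(10):
    recon, _ = ae(rgbd)
    loss = nn.functional.mse_loss(recon, rgbd)
    opt_ae.zero_grad()
    loss.backward()
    opt_ae.step()

# Stage 2: freeze the autoencoder and fine-tune the diffusion model on its latents.
ae.requires_grad_(False)
denoiser = TinyDenoiser()
opt_dm = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
T = 1000
for _ in range(10):
    with torch.no_grad():
        z = ae.encoder(rgbd)  # frozen encoder produces the latents
    t = torch.randint(0, T, (z.shape[0],))
    alpha = (1.0 - t.float() / T).view(-1, 1, 1, 1)  # toy noise schedule
    noise = torch.randn_like(z)
    z_noisy = alpha.sqrt() * z + (1 - alpha).sqrt() * noise
    loss = nn.functional.mse_loss(denoiser(z_noisy, t.float() / T), noise)
    opt_dm.zero_grad()
    loss.backward()
    opt_dm.step()
```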
## Evaluation results

+ Please refer to Table 1 and Table 2 from the [paper](https://arxiv.org/abs/2305.10853) for quantitative results.
+ The figure below shows some qualitative results comparing our method with [Stable Diffusion v1.4](https://arxiv.org/pdf/2112.10752.pdf) and with [DPT-Large](https://arxiv.org/pdf/2103.13413.pdf) for the depth maps.
+ ![qualitative_results](qualitative_results.png)

### BibTeX entry and citation info