
Model Card for OpenLRM V1.1

Overview

  • This model card is for the OpenLRM project, an open-source implementation of the paper LRM: Large Reconstruction Model for Single Image to 3D.
  • Information contained in this model card corresponds to Version 1.1.

Model Details

Notable Differences from the Original Paper

  • We do not use the deferred back-propagation technique described in the original paper.
  • We use random background colors during training.
  • The image encoder is based on the DINOv2 model with register tokens.
  • The triplane decoder contains 4 layers in our implementation.
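The random-background training detail above can be sketched as compositing an RGBA rendering over a random solid color each iteration. This is our own illustration of the idea, not OpenLRM's actual code; the function name, array shapes, and value ranges are assumptions.

```python
import numpy as np

def composite_random_background(rgba, rng=None):
    """Composite an RGBA image over a random solid background color.

    rgba: float array of shape (H, W, 4) with values in [0, 1];
          the last channel is alpha. (Illustrative convention, not
          necessarily the one used by OpenLRM.)
    """
    rng = np.random.default_rng() if rng is None else rng
    rgb, alpha = rgba[..., :3], rgba[..., 3:4]
    bg = rng.random(3)  # random background color, one per sample
    # standard alpha-over compositing: foreground where opaque, bg where transparent
    return rgb * alpha + bg * (1.0 - alpha)
```

Drawing a fresh background color per training sample discourages the model from baking a fixed background into its reconstructions.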

License

Disclaimer

This model is an open-source implementation and is NOT the official release of the original research paper. While it aims to reproduce the original results as faithfully as possible, there may be variations due to model implementation, training data, and other factors.

Ethical Considerations

  • This model should be used responsibly and ethically, and should not be used for malicious purposes.
  • Users should be aware of potential biases in the training data.
  • The model should not be used in circumstances that could lead to harm or unfair treatment of individuals or groups.

Usage Considerations

  • The model is provided "as is" without warranty of any kind.
  • Users are responsible for ensuring that their use complies with all relevant laws and regulations.
  • The developers and contributors of this model are not liable for any damages or losses arising from the use of this model.

This model card is subject to updates and modifications. Users are advised to check for the latest version regularly.

Model size: 452M parameters (Safetensors, F32 tensors)
