Distill-Any-Depth-Small (ONNX) – For VisionDepth3D

Model Origin: This model is based on Distill-Any-Depth, originally developed by Westlake-AGI-Lab.
I did not train this model. I converted it to ONNX format for fast, GPU-accelerated inference within tools such as VisionDepth3D.

🧠 About This Model

This is a direct conversion of the Distill-Any-Depth PyTorch model to ONNX, intended for lightweight, real-time depth estimation from single RGB images.

βœ”οΈ Key Features:

  • ONNX format (exported from PyTorch)
  • Compatible with ONNX Runtime and TensorRT (see the inference sketch below)
  • Excellent for 2D to 3D depth workflows
  • Works seamlessly with VisionDepth3D

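As a starting point, here is a minimal sketch of running the model with ONNX Runtime in Python. The model path, the 518x518 input resolution, and the ImageNet normalization are assumptions for illustration; verify the expected input shape against the exported model (e.g. via `session.get_inputs()`) before relying on them.

```python
import numpy as np
import onnxruntime as ort
from PIL import Image

# Hypothetical path; adjust to wherever model.onnx lives on your system.
MODEL_PATH = "weights/Distill Any Depth Small/model.onnx"

session = ort.InferenceSession(
    MODEL_PATH,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

def estimate_depth(image_path: str) -> np.ndarray:
    # 518x518 input and ImageNet mean/std normalization are assumptions;
    # check the model's actual input shape with session.get_inputs().
    img = Image.open(image_path).convert("RGB").resize((518, 518))
    x = np.asarray(img, dtype=np.float32) / 255.0
    x = (x - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]
    x = x.transpose(2, 0, 1)[None].astype(np.float32)  # HWC -> NCHW
    input_name = session.get_inputs()[0].name
    output = session.run(None, {input_name: x})[0]
    return np.squeeze(output)  # 2D relative depth / disparity map

depth = estimate_depth("frame.png")
print(depth.shape, float(depth.min()), float(depth.max()))
```

Listing `CUDAExecutionProvider` first lets ONNX Runtime use the GPU when available and fall back to CPU otherwise.
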
📌 Intended Use

  • Real-time or batch depth map generation
  • 2D to 3D conversion pipelines (e.g., SBS 3D video)
  • Works on Windows and Linux (CUDA supported)

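For 2D-to-3D pipelines, the raw model output usually needs to be normalized before it can be used as a depth map. Below is a minimal post-processing sketch; the 16-bit PNG format and the min-max normalization are illustrative choices, not requirements of VisionDepth3D.

```python
import numpy as np
from PIL import Image

def save_depth_png(depth: np.ndarray, out_path: str) -> None:
    # Normalize the raw output to 0..65535 and save as a 16-bit grayscale PNG.
    # Whether near objects end up bright or dark depends on the model's output
    # convention (depth vs. inverse depth); invert with 65535 - d16 if needed.
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)
    d16 = (d * 65535.0).astype(np.uint16)
    Image.fromarray(d16, mode="I;16").save(out_path)

# Example: feed the result of estimate_depth() from the sketch above.
# save_depth_png(depth, "frame_depth.png")
```
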
📜 License and Attribution

Citation

@article{he2025distill,
  title   = {Distill Any Depth: Distillation Creates a Stronger Monocular Depth Estimator},
  author  = {Xiankang He and Dongyan Guo and Hongji Li and Ruibo Li and Ying Cui and Chi Zhang},
  year    = {2025},
  journal = {arXiv preprint arXiv:2502.19204}
}

If you use this model, please credit the original authors: Westlake-AGI-Lab.

💻 How to Use in VisionDepth3D

Place the folder containing the ONNX model into the weights folder of your VisionDepth3D installation:

VisionDepth3D/
└── weights/
    └── Distill Any Depth Small/
        └── model.onnx