---
license: apple-ascl
base_model:
- apple/DepthPro
library_name: transformers.js
pipeline_tag: depth-estimation
---
https://huggingface.co/apple/DepthPro with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the Transformers.js JavaScript library from NPM using:
```bash
npm i @huggingface/transformers
```
```js
import { AutoProcessor, AutoModelForDepthEstimation, RawImage } from "@huggingface/transformers";
// Load model and processor
const depth = await AutoModelForDepthEstimation.from_pretrained("onnx-community/DepthPro-ONNX", { dtype: "q4" });
const processor = await AutoProcessor.from_pretrained("onnx-community/DepthPro-ONNX");
// Read and prepare image
const image = await RawImage.read("https://raw.githubusercontent.com/huggingface/transformers.js-examples/main/depth-pro-node/assets/image.jpg");
const inputs = await processor(image);
// Run depth estimation model
const { predicted_depth, focallength_px } = await depth(inputs);
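// `predicted_depth` is a Tensor holding the model's per-pixel depth estimates and
// `focallength_px` is its estimate of the focal length in pixels.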
// Normalize the depth map to [0, 1]
const depth_map_data = predicted_depth.data;
let minDepth = Infinity;
let maxDepth = -Infinity;
for (let i = 0; i < depth_map_data.length; ++i) {
  minDepth = Math.min(minDepth, depth_map_data[i]);
  maxDepth = Math.max(maxDepth, depth_map_data[i]);
}
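// Map each value x to 1 - (x - minDepth) / (maxDepth - minDepth) so that nearer
// pixels end up brighter, then scale to [0, 255] for an 8-bit grayscale image.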
const depth_tensor = predicted_depth
  .sub_(minDepth)
  .div_(-(maxDepth - minDepth)) // Flip for visualization purposes
  .add_(1)
  .clamp_(0, 1)
  .mul_(255)
  .round_()
  .to("uint8");
// Save the depth map
const depth_image = RawImage.fromTensor(depth_tensor);
depth_image.save("depth.png");
```
The following images illustrate the input image and its corresponding depth map generated by the model:
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
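As a rough sketch of that workflow (the model id and output directory below are placeholders, and this assumes the architecture is supported by Optimum's ONNX exporter):

```bash
# Export a Transformers checkpoint to ONNX (placeholder model id and output path)
optimum-cli export onnx --model your-username/your-model ./your-model-onnx

# Place the exported *.onnx files in an `onnx/` subfolder of your model repo,
# keeping the config and processor files at the repo root, as in this repository.
```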