Depth Anything V2 - Ultra Fast
State-of-the-Art Depth Estimation Meets Mobile Efficiency
Optimized for Snapdragon NPUs with 2X Speed Boost
Benchmark Comparison
- Input resolution: 518x518
Snapdragon 8 Gen2 (NPU Acceleration)
| Metric  | depth_anything_v2.onnx | depth_anything_v2_mha.onnx | Improvement |
|---------|------------------------|----------------------------|-------------|
| Latency | 152 ms                 | 73 ms                      | 2.08×       |
| Memory  | 102 MB                 | 45 MB                      | 2.26×       |
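At this resolution the model expects a 1x3x518x518 float32 tensor in NCHW layout. A minimal preprocessing sketch in NumPy, assuming nearest-neighbor resizing and ImageNet mean/std normalization (verify the normalization your exported ONNX file actually uses):

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 518) -> np.ndarray:
    """Resize (nearest-neighbor) and normalize an HxWx3 uint8 image into
    the 1x3xSxS float32 NCHW tensor the 518x518 ONNX model expects.
    ImageNet mean/std values are an assumption, not taken from this model card."""
    h, w, _ = image.shape
    rows = np.arange(size) * h // size  # nearest-neighbor source row indices
    cols = np.arange(size) * w // size  # nearest-neighbor source column indices
    resized = image[rows][:, cols].astype(np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    normalized = (resized - mean) / std
    return normalized.transpose(2, 0, 1)[np.newaxis]  # HWC -> 1xCxHxW
```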
Benchmark method

Please install qai_hub first (`pip install qai-hub`).
```python
import qai_hub as hub

# Compile the ONNX model for the target device on Qualcomm AI Hub.
compile_job = hub.submit_compile_job(
    model="depth_anything_v2.onnx",
    device=hub.Device("Samsung Galaxy S23 (Family)"),
)
assert isinstance(compile_job, hub.CompileJob)

# Profile the compiled model on a real device to measure latency and memory.
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=hub.Device("Samsung Galaxy S23 (Family)"),
)
assert isinstance(profile_job, hub.ProfileJob)
```
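The numbers in the table above come from on-device profiling, but a quick local sanity check of relative model speed can use a generic timing helper. `median_latency_ms` below is a hypothetical helper, not part of qai_hub; it reports the median wall-clock latency of any callable:

```python
import time
import statistics

def median_latency_ms(fn, warmup: int = 3, runs: int = 20) -> float:
    """Return the median wall-clock latency of fn() in milliseconds.
    A rough local analogue of on-device profiling; hypothetical helper,
    not part of qai_hub. Median is used to reduce outlier sensitivity."""
    for _ in range(warmup):
        fn()  # warm caches / JITs before measuring
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(samples)
```

For example, wrapping an ONNX Runtime `session.run(...)` call for each of the two model files in a lambda and passing it to this helper gives a host-side latency comparison.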