Spaces: Running on L4
Apply for community grant: Academic project (gpu)
EVP is a SOTA deep learning model for metric depth estimation from a single image, as well as for referring segmentation.
This demo would benefit greatly from GPU hardware for better interactivity and user experience.
It would also be great to have a GPU with 24 GB or more of VRAM, so that both models (depth and referring segmentation) can be deployed at the same time.
Please refer to our project page (https://lavreniuk.github.io/EVP/) for more details.
Hi @MykolaL, we have assigned a GPU to this Space. Note that GPU Grants are provided temporarily and might be removed after some time if the usage is very low.
To learn more about GPUs in Spaces, please check out https://huggingface.co/docs/hub/spaces-gpus
Hi @MykolaL
As the Space log was empty, I've factory rebuilt the Space. Now it shows this error: Error while cloning repository
This error is usually shown when the Space repo contains large files. Looks like there are some binary files in this repo:
It used to be possible to add large files to a Space repo, but this changed about a year ago. Could you try creating model repositories for them and downloading them at startup?
(FWIW, the reason for the change is that adding large files to a Space repo increases the size of its Docker image, which significantly slows down the build process and image pulling. On the other hand, downloading from a model repo is parallelized, so it's actually faster.)
You can use the huggingface-hub library to download models. Docs: https://huggingface.co/docs/huggingface_hub/main/en/guides/download
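For instance, a minimal sketch of downloading checkpoints at startup with huggingface-hub could look like the following (the repo IDs and filenames here are placeholders, not the actual EVP repos):

```python
# Minimal sketch: fetch model weights from (hypothetical) model repos at Space startup,
# instead of keeping the large binary files inside the Space repo itself.
from huggingface_hub import hf_hub_download

# Placeholder repo IDs / filenames -- replace with the real model repos.
depth_ckpt = hf_hub_download(
    repo_id="MykolaL/evp-depth",      # hypothetical model repo for depth estimation
    filename="evp_depth.pth",         # hypothetical checkpoint filename
)
seg_ckpt = hf_hub_download(
    repo_id="MykolaL/evp-refseg",     # hypothetical model repo for referring segmentation
    filename="evp_refseg.pth",
)

# The returned paths point into the local Hugging Face cache; load them as usual, e.g.:
# model.load_state_dict(torch.load(depth_ckpt, map_location="cpu"))
```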
Also, FYI, we recommend creating a separate repo for each model, rather than placing multiple models in a single repo.
Hope this helps.
BTW, would it be possible to use ZeroGPU instead of a normal GPU?
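If it helps, switching to ZeroGPU mostly means decorating the GPU-bound inference function with `spaces.GPU` so a GPU is attached only while that call runs. A rough sketch, with a placeholder `predict` function standing in for the actual EVP inference code:

```python
# Sketch of a ZeroGPU-style Gradio app: the `spaces` package allocates a GPU
# only for the duration of the decorated function call.
import gradio as gr
import spaces

@spaces.GPU  # GPU is requested when this function is invoked on ZeroGPU hardware
def predict(image):
    # ... run the EVP depth / referring-segmentation model on `image` here ...
    return image

demo = gr.Interface(fn=predict, inputs=gr.Image(), outputs=gr.Image())
demo.launch()
```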