Spaces: Running on Zero
Apply for community grant: Academic project (gpu)
Effective environmental monitoring needs smarter ways to find ecological targets, especially when clear, high-resolution imagery isn't always available. Our project, Search-TTA, uses inputs of various modalities (e.g. satellite imagery, ground-level images, text, and sound) to help robots such as drones actively search and continuously refine their predictions. These robots use a Vision Language Model (VLM) that learns and updates its beliefs about where specific species might be found as it gathers new data, which is especially helpful when the model hallucinates. We're hoping to make this tool freely available online through Hugging Face Spaces so anyone—from researchers to community members—can easily try out predicting where different wildlife could be, supporting broader involvement in conservation.
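The core belief-update idea can be sketched as a simple Bayesian update over a search-grid heatmap: the VLM's prediction serves as a prior, and each new observation reweights it. This is a minimal illustrative sketch, not the actual Search-TTA update rule; the grid values and function names are hypothetical.

```python
import numpy as np

def update_belief(prior, likelihood):
    """One Bayesian update over a search grid.

    prior: (H, W) probabilities summing to 1 (belief of where the target is).
    likelihood: (H, W) probability of the new observation given presence
    in each cell. Returns the normalized posterior heatmap.
    """
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Hypothetical example: a 2x2 grid with a uniform prior, and an observation
# that makes the top-left cell twice as likely as the others.
prior = np.full((2, 2), 0.25)
likelihood = np.array([[0.8, 0.4], [0.4, 0.4]])
posterior = update_belief(prior, likelihood)
# Probability mass shifts toward the top-left cell.
```

Repeating this as the robot moves is what lets the model recover from a hallucinated prior: cells the VLM overrated but observations contradict lose mass over time.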
Here are some resources to help you better understand our project. While we already have a minimal Hugging Face Spaces demo, we are still working to add more functionality. Thank you!
# References:
https://search-tta.github.io/
https://huggingface.co/spaces/derektan95/search-tta-demo
Hi @derektan95 , we've assigned ZeroGPU to this Space. Please check the compatibility and usage sections of this page so your Space can run on ZeroGPU.
Hi @hysts ,
Thanks! I tried switching my Space to run on ZeroGPU, but this message popped up. May I know why that is?
Well, yeah, that's how this grant works. Sorry for the inconvenience. I just reassigned ZeroGPU. BTW, if you want to test it on CPU, you can simply duplicate your Space.
Hi @hysts ,
Thank you so much for the allocation. I managed to get the HF Space up and running. Is there a specific date when the community grant will expire?
Nice!
Is there a specific date when the community grant would expire?
There's no expiration date for now. It might change in the future, though.
Awesome, thanks again!
Hi @hysts ,
I recently encountered this issue when my code tries to run on the GPU. Do you know why this AssertionError is happening?
My ZeroGPU inference code is invoked here.
/home/user/app/env.py:804: UserWarning: color is redundantly defined by the 'color' keyword argument and the fmt string "co" (-> color='c'). The keyword argument will take precedence.
ax.plot(xPoints[0], yPoints[0], 'co', c=robot_marker_color, markersize=8, zorder=5)
/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py:86: UserWarning: Using a ZeroGPU function outside of Gradio caching or request might block the app
warnings.warn("Using a ZeroGPU function outside of Gradio caching or request might block the app")
/usr/local/lib/python3.10/site-packages/spaces/zero/client.py:256: UserWarning: ZeroGPU API /release warning: 404 Not Found
warnings.warn("ZeroGPU API /release warning: 404 Not Found")
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 137, in worker_init
client.allow(allow_token)
File "/usr/local/lib/python3.10/site-packages/spaces/zero/client.py", line 232, in allow
assert api_client().allow(allow_token=allow_token, pid=pid) is httpx.codes.OK
AssertionError
Traceback (most recent call last):
File "/home/user/app/app.py", line 242, in _planner_thread
planner.run_episode(0)
File "/home/user/app/test_multi_robot_worker.py", line 273, in run_episode
self.poisson_tta_update(robot, self.global_step, step)
File "/home/user/app/test_multi_robot_worker.py", line 670, in poisson_tta_update
heatmap = self.clip_seg_tta.execute_tta(
File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 214, in gradio_handler
raise error("ZeroGPU worker error", res.error_cls)
gradio.exceptions.Error: 'AssertionError'
Hi @derektan95 , hmm, that's a very weird error. I've never seen it before. But it looks like there's no error in the current Space log. I tried running the Space and it seemed to be working, so maybe it was just a temporary glitch? Let us know if it happens again.
Sounds good! I extended the maximum ZeroGPU inference duration, and it works well now. Thanks!
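For anyone hitting the same warnings: extending the duration amounts to raising the `duration` argument on the `@spaces.GPU` decorator so a long-running inference call isn't released mid-way, and the "outside of Gradio caching or request" warning suggests the decorated function should be called from a Gradio event handler rather than a background thread. A minimal sketch, with a stub fallback only so the snippet also runs outside a Space; `execute_tta`, its body, and the duration value are illustrative:

```python
try:
    import spaces  # available inside a Hugging Face Space
except ImportError:
    # Stub so the sketch also runs off-Space (e.g. in a duplicated CPU Space).
    class spaces:
        @staticmethod
        def GPU(duration=60):
            def decorator(fn):
                return fn
            return decorator

@spaces.GPU(duration=120)  # seconds of GPU time per call; raise for long TTA loops
def execute_tta(image):
    # ... run the test-time-adaptation inference on GPU here (placeholder) ...
    return {"heatmap": image}

# Call this from a Gradio event handler (e.g. a Button.click callback),
# not a separate worker thread, so ZeroGPU can track the request.
result = execute_tta([[0.0, 1.0]])
```
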