Noa Roggendorff

Only 11 likes after 24 hours. I really fell off
cpu upgrade
I realize the scale of the community is just a tiny bit different, and that having this for a public org (one where anyone can join) isn't super fiscally responsible, but we'll be good. I promise we will! Right, guys?

I didn't get banned, and that isn't why I got kicked.

Ah, we can work with that. Then the issue is that the Space is incomplete/misconfigured (I would recommend amending your original post to avoid confusion).
I just read your blog post:
https://huggingface.co/blog/nroggendorff/train-with-llama-architecture
It provides some useful context, thanks.
From reading the Dockerfile and image file, it appears that CUDA was never included in the image.
You may find the following resources helpful for using Docker with Spaces:
https://huggingface.co/docs/hub/en/spaces-sdks-docker
If you are using CUDA, this may also help inform how to set up CUDA, and how to test whether CUDA works (with Docker):
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
References:
https://huggingface.co/spaces/nroggendorff/train-llama/blob/main/Dockerfile
Hope you find this helpful. If you have any more questions, let me know here or by email.

The base image for that Dockerfile has CUDA installed and configured.
You are welcome to open a PR with your proposed fix on https://github.com/nroggendorff/train-llama.
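As a side note, a quick way to sanity-check GPU visibility from inside a container is something like the following (a minimal sketch, not part of the Space's actual code; it only checks that `nvidia-smi` is present and runs, which does not by itself prove the CUDA libraries are usable):

```python
# Hedged sketch: check whether the NVIDIA driver utilities are visible
# inside the container. A False result matches the symptom described
# above, where CUDA was never included in the image.
import shutil
import subprocess

def cuda_visible() -> bool:
    """Return True if nvidia-smi exists on PATH and exits successfully."""
    exe = shutil.which("nvidia-smi")
    if exe is None:
        return False
    return subprocess.run([exe], capture_output=True).returncode == 0

print("GPU tooling visible:", cuda_visible())
```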

I am not sure that makes sense; I am under the impression that, if the Space is not running (not started), no models can be actively loaded in it.
Can you share the relevant workflow (docker-compose, app code, etc.) so I can see more clearly what's happening?
I might be able to help with a solution; it's possible that there is an issue in the workflow itself.
EDIT: I looked at the Spaces. Do you mean this Space as an example? https://huggingface.co/spaces/nroggendorff/train-llama
That Space shows a missing "CUDA_HOME" env var, and most of your other Spaces either throw errors about missing CUDA drivers or are paused. These are configuration errors. Could you tell me the Space and the error message?
I might be able to help you fix it.
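For what it's worth, a tiny sketch of how one might inspect those variables from inside a running Space (the variable names are just the ones mentioned above; nothing here is specific to this Space's code):

```python
import os

def cuda_env_report() -> dict:
    """Collect the CUDA-related environment variables mentioned above."""
    return {var: os.environ.get(var) for var in ("CUDA_HOME", "CUDA_VISIBLE_DEVICES")}

# An unset variable maps to None, matching the missing "CUDA_HOME" symptom.
print(cuda_env_report())
```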
That's the one.

what the~

It's pretty specific to my workflow, but Spaces now don't get CUDA until after they start, so you can't load models or anything until the app is running.
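One pattern that fits this constraint (a minimal sketch, not the actual workflow; `get_model` and `handle_request` are hypothetical names, and the dict stands in for a real model object) is to defer the model load until the first request, so the app process is already up before any CUDA work happens:

```python
# Hedged sketch: lazy model loading, assuming CUDA only becomes
# available after the app has started serving.
from functools import lru_cache

@lru_cache(maxsize=1)
def get_model():
    # In a real app this would be the heavy, CUDA-dependent step,
    # e.g. loading weights and moving them to the GPU. It runs only
    # on the first call, i.e. after the server is already running.
    print("loading model...")
    return {"name": "demo-model"}  # placeholder for a real model object

def handle_request(prompt: str) -> str:
    model = get_model()  # first request triggers the load; later ones reuse it
    return f"{model['name']} echoes: {prompt}"
```

The `lru_cache(maxsize=1)` wrapper makes `get_model` effectively a memoized singleton, so the expensive load happens exactly once no matter how many requests arrive.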