LoRA support

#4
by GeeveGeorge - opened

@linoyts can you add a LoRA URL textbox, so that we can paste a Hunyuan LoRA URL pointing to a safetensors LoRA adapter file? The space would download and load the adapter so FramePack can run with it. That would be a great addition to this Hugging Face Space.
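A minimal sketch of one way such a textbox could work, assuming the Space downloads the file via `huggingface_hub` and the pipeline exposes a diffusers-style `load_lora_weights`; the helper name and the wiring at the bottom are hypothetical, not the Space's actual code:

```python
from urllib.parse import urlparse

def parse_hf_lora_url(url: str) -> tuple[str, str]:
    """Split a huggingface.co file URL into (repo_id, filename).

    Accepts URLs like
    https://huggingface.co/<user>/<repo>/blob/main/<file>.safetensors
    as well as the /resolve/ variant used for direct downloads.
    """
    parts = urlparse(url).path.strip("/").split("/")
    # Expected path layout: user / repo / (blob|resolve) / revision / file...
    if len(parts) < 5 or parts[2] not in ("blob", "resolve"):
        raise ValueError(f"Not a recognised Hugging Face file URL: {url}")
    repo_id = "/".join(parts[:2])
    filename = "/".join(parts[4:])
    return repo_id, filename

# Hypothetical wiring (depends on the pipeline the Space actually uses):
#
#   from huggingface_hub import hf_hub_download
#   repo_id, filename = parse_hf_lora_url(lora_url_from_textbox)
#   lora_path = hf_hub_download(repo_id=repo_id, filename=filename)
#   pipe.load_lora_weights(lora_path)
```

The textbox value would be passed into the generation function and resolved to a local file before inference starts.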

You can duplicate this space. It works perfectly on ZeroGPU; I tested it when it was still available.

@Fabrice-TIERCELIN it's giving a runtime error:
Exit code: 1. Reason: Set HF_HOME env to /home/user/app/hf_download
Currently enabled native sdp backends: ['flash', 'math', 'mem_efficient', 'cudnn']
Xformers is not installed!
Flash Attn is not installed!
Sage Attn is not installed!
Traceback (most recent call last):
  File "/home/user/app/app_hf_zerogpu.py", line 44, in <module>
    from diffusers_helper.memory import (
  File "/home/user/app/diffusers_helper/memory.py", line 8, in <module>
    gpu = torch.device(f'cuda:{torch.cuda.current_device()}')
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 1026, in current_device
    _lazy_init()
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 372, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
Container logs:

Failed to retrieve error logs: SSE is not enabled

With that error message, it seems your space doesn't have a GPU. Which device does your space run on: CPU, GPU, or ZeroGPU?

Can you make your space public?

Or can you share your input data?
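For what it's worth, the traceback shows `diffusers_helper/memory.py` calling `torch.cuda.current_device()` at import time, which raises exactly this error whenever no CUDA driver is visible (on ZeroGPU, the GPU only exists inside the GPU-decorated call). A hedged sketch of a defensive alternative, assuming the rest of the module only needs a `gpu` device object:

```python
import torch

def pick_device() -> torch.device:
    """Return the current CUDA device if a driver is present,
    otherwise fall back to CPU instead of crashing at import time."""
    if torch.cuda.is_available():
        return torch.device(f"cuda:{torch.cuda.current_device()}")
    return torch.device("cpu")

# Safe to evaluate at import time on CPU-only or ZeroGPU hosts.
gpu = pick_device()
```

Whether this is the right fix depends on how the Space schedules GPU work, so treat it as a diagnostic sketch rather than the actual patch.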
