Fix inference API
I'm surprised it's taking Hugging Face so long to fix the Inference API so that Yntec's models and yours work again on Printing Press and Blitz Diffusion. I really dislike Zero GPU, and I'm not getting a Pro account because it's a waste of money. What's the status of the problem? Just curious. Thanks.
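For context, a text-to-image call against the serverless Inference API is a single HTTP POST to the model's endpoint. This is a minimal sketch; the endpoint shape is the documented one, but the model id and token below are placeholders, and whether a given model is actually deployed behind that endpoint is exactly the open question in this thread:

```python
import requests

API_ROOT = "https://api-inference.huggingface.co/models/"

def model_url(repo_id: str) -> str:
    """Build the serverless Inference API URL for a model repo."""
    return API_ROOT + repo_id

def text_to_image(repo_id: str, prompt: str, token: str) -> bytes:
    """POST a prompt; on success the response body is raw image bytes."""
    resp = requests.post(
        model_url(repo_id),
        headers={"Authorization": f"Bearer {token}"},
        json={"inputs": prompt},
        timeout=120,
    )
    # A 404 or 503 here is the "stopped working" symptom discussed in the thread.
    resp.raise_for_status()
    return resp.content

if __name__ == "__main__":
    # Placeholder repo id and token -- substitute real values to test.
    png = text_to_image("Yntec/example-model", "a lighthouse at dusk", "hf_xxx")
    with open("out.png", "wb") as f:
        f.write(png)
```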
If a usable mechanism becomes available, I plan to develop a simple program to work with it, but there has been no particular progress. At least, none that I am aware of...
https://discuss.huggingface.co/t/inference-api-stopped-working/150492
With the new high-speed Zero GPU, SDXL and SD1.5 can generate an image in about 10 seconds without complex optimization, so I am currently experimenting with it a little. However, I don't have much time due to the season...
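As a rough illustration of the pattern above: on a ZeroGPU Space the GPU is attached only while a function decorated with `spaces.GPU` runs, so the pipeline is loaded lazily inside that function rather than at startup. The `spaces` package is the one Hugging Face provides inside ZeroGPU Spaces; the model id and step count here are placeholders, and a no-op fallback decorator is included so the sketch also runs locally without a GPU:

```python
try:
    import spaces                      # present inside a ZeroGPU Space
    gpu = spaces.GPU                   # requests a GPU for the decorated call
except ImportError:                    # local run: use a no-op stand-in
    def gpu(duration=60):
        def wrap(fn):
            return fn
        return wrap

_pipe = None  # loaded lazily so CPU-only startup stays fast

@gpu(duration=60)
def generate(prompt: str):
    """Generate one image; the GPU is held only for this call."""
    global _pipe
    import torch
    from diffusers import StableDiffusionPipeline
    if _pipe is None:
        # Placeholder model id -- any SD1.5/SDXL checkpoint works the same way.
        _pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")
    return _pipe(prompt, num_inference_steps=25).images[0]
```

In a real Space, `generate` would be wired up as a Gradio event handler; the decorator is what makes the Space count as "Running on Zero".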
Also, the look and feel is quite different from API-based systems anyway.
https://huggingface.co/spaces/huggingface/README/discussions/22#6843262f4e0835e7814d452a
Colab may also be a potential option for free inference. Hugging Face appears to be collaborating with Colab now.
However, I am not very familiar with Colab (IPython + Jupyter). I'm sure there are many parts I won't be able to get right without trying it out first.
https://github.com/R3gm/SD_diffusers_interactive