Not working

#1
by Yntec - opened

Hey @John6666, I was wondering if you could implement Inference Providers on this Space, as described here: https://discuss.huggingface.co/t/constant-503-error-for-several-days-when-running-llama-3-1/105144/5. Currently, requesting an image never returns one. If this Space could be made to work, all my Spaces could be made to work!

Hey. 😀
Adding an Inference Provider is easy. I'm thinking of adapting the implementation to the current situation, but I'm not sure what that situation actually is!
The problem is that even well-known models (Llama and Qwen, on the LLM side) still haven't been redeployed after the large-scale outage, so even if I implement it, it won't work; I just get 404 or 503 errors.
This is a pointless state for HF to be in, so I assume it's temporary, but as usual there's been no announcement... 🥶
https://discuss.huggingface.co/t/inference-api-stopped-working/150492/30
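For reference, here is a minimal sketch of what routing a text-to-image request through an Inference Provider could look like with huggingface_hub's InferenceClient. The provider name, model id, and token handling below are placeholders, not this Space's actual setup:

```python
import os
from huggingface_hub import InferenceClient

# Sketch only: assumes a recent huggingface_hub release with Inference Providers support.
client = InferenceClient(
    provider="hf-inference",         # placeholder; any supported provider name
    api_key=os.environ["HF_TOKEN"],  # token with inference permissions
)

# text_to_image returns a PIL.Image.Image on success;
# a 404/503 from the provider surfaces as an HTTP error.
image = client.text_to_image(
    "an astronaut riding a horse",
    model="stabilityai/stable-diffusion-xl-base-1.0",  # placeholder model id
)
image.save("output.png")
```

If the target model isn't deployed by any provider, this call fails with exactly the kind of 404/503 mentioned above, which is why wiring it into the Space won't help until the models are redeployed.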
