Problem with submissions
Hello,
I am trying to run the evaluation on the random model (without any change to the code) on both the text and image tasks, but the evaluation takes forever and hasn't produced any results after 2 hours. Can you please check if there is a problem with the template? Thank you in advance.
Best regards
Hi,
I have the same issue; the queue seems to be lasting "forever" lol, like it crashed on a previous evaluation and got stuck.
It works fine when duplicating the space (it finishes in a few minutes, although when it failed it required a factory rebuild), but it is not possible to submit that way as we don't have the tokens.
I've been trying to submit on /image for 3 days now...
Any suggestions? Is it possible to restart the submission portal space, or to get a token?
Thanks anyway for the organization; it's always great to be challenged on such topics.
Hi, I had the same issue, but I solved it by going to the settings of my template space, then "Variables and secrets", and creating a secret called HF_TOKEN with my personal read token as its value (created beforehand in my personal account settings).
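For reference, this is roughly how the space code picks that secret up at runtime (spaces expose secrets as environment variables; the snippet is just a sketch, not the template's exact code):

```python
import os
from huggingface_hub import HfApi

token = os.environ.get("HF_TOKEN")  # the secret created in the space settings
api = HfApi(token=token)            # authenticated client used for uploads
```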
Hi @aweber,
Thanks a lot for your help. For me it didn't work with the "read" token, and with the "write" token I had to pass create_pr=1 as an api.upload_file argument; the upload then reached the dataset but was left waiting for pull-request approval (which I closed today).
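For reference, a rough sketch of the upload call in question; the repo ID and file paths are placeholders, not the challenge's actual ones:

```python
from huggingface_hub import HfApi

api = HfApi(token="hf_...")  # the "write" token stored as the HF_TOKEN secret

api.upload_file(
    path_or_fileobj="results.json",           # local file with the results
    path_in_repo="submissions/results.json",  # destination path in the dataset
    repo_id="some-org/some-results-dataset",  # placeholder dataset repo
    repo_type="dataset",
    create_pr=True,  # with a write token this opens a pull request instead of
                     # pushing directly, so the PR then waits for approval
)
```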
Anyway, it works today with the original space (no queueing), and I eventually managed to submit.
Thanks a lot
Thank you all for your answers. However, I believe I've identified the root cause of my problem: the submission system works intermittently, typically functioning early in the morning but failing later in the day. I suspect this occurs when someone submits code containing an infinite loop, or code that produces an error and does not return the expected results, which then blocks the entire submission queue as the system waits indefinitely for that code to complete execution. To resolve this, I recommend implementing a timeout mechanism in the submission API to prevent any single submission from blocking the entire pipeline.
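For what it's worth, a minimal sketch of what such a guard could look like, assuming the portal calls each submission's API over HTTP with requests (the URL, payload, and 600-second budget below are made up for illustration):

```python
import requests

SUBMISSION_TIMEOUT = 600  # seconds; hypothetical per-submission budget

def evaluate_submission(api_url: str, payload: dict):
    """Call one submission's API, giving up instead of blocking the queue."""
    try:
        response = requests.post(api_url, json=payload, timeout=SUBMISSION_TIMEOUT)
        response.raise_for_status()
        return response.json()
    except (requests.Timeout, requests.HTTPError):
        # Mark this submission as failed and move on to the next one,
        # rather than letting it stall the whole pipeline.
        return None
```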
Hello,
Thanks for pointing it out. If it fails it's OK; in the worst case you can just use the evaluation functions or run the portal locally to evaluate, and then send your submission via the submission form here :)
https://framaforms.org/2025-frugal-ai-challenge-submission-form-1736883260-0
The public leaderboards are there for information; they are not what we use to rank the final submissions.
Having said that, you could still encounter two errors. The submission portal actually calls the API of your submission, so:
- It can time out on our side, in the portal. I am looking at tightening the timeouts and queues to avoid that; otherwise we need to restart the space manually.
- It can time out on your side, in your submission API (copied from the submission-template), so you can also check the logs on your side to debug it (a quick local check is sketched below).
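For the second case, one quick way to poke your own API locally while watching the logs; the route and body below are guesses, so check your copy of the submission-template for the actual endpoint and request schema:

```python
import requests

# 7860 is the default port a space listens on; adjust if yours differs.
resp = requests.post(
    "http://localhost:7860/image",  # hypothetical route from the template
    json={},                        # fill in whatever body your endpoint expects
    timeout=120,                    # fail fast instead of hanging like the portal
)
print(resp.status_code, resp.text)
```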
Hello,
I have the same problem; the evaluation takes forever. I am also wondering how we can evaluate locally, given that the hardware is different and will give different results for the emissions tracker. And how can we submit to the final evaluation if the submission portal does not work?
Thanks
Hello @ypesk, the submission portal itself is just informative: it lets us gather data on the public test set during the competition phase, and it lets participants compare approaches and see where they stand on the public test set. So you can submit with the portal or not, or just use the evaluation function locally to compare your performance to the public results disclosed on the portal.
Then we will evaluate all the results on our side, using the same hardware, by rerunning your evaluation submission on the private test set that is not available to play with :)
So what counts is actually to "just send us your final model" via the form https://framaforms.org/2025-frugal-ai-challenge-submission-form-1736883260-0 or by email, and we will do the final evaluation on our side.
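If you want to sanity-check the energy side of a local run, here is a minimal sketch, assuming the template's tracker is codecarbon (check your copy of the submission-template for the exact setup):

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... run your model's inference over the public test set here ...
emissions = tracker.stop()  # kg CO2-equivalent for the tracked block
print(f"Estimated emissions: {emissions:.6f} kg CO2eq")
```

As noted above, the absolute numbers will differ from ours since the hardware differs; they are mainly useful for comparing your own variants against each other.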
Thanks for the reply, that makes a lot of sense.
In this form, is the "project name" where we input the Hugging Face space we want to use for the final submission? If I understand correctly, we can only submit one?
Thanks
Yes, exactly :) You can submit several models with several entries to the form, but reserve that for different approaches, not just small variations of your model.
Thanks so much for participating!
When you submit using the same email / space, you get the following message: "... is already present in the Contact email address field. You may have already submitted this form, or you must use a different value."
Hi! Curious about the models being deployed.
I have the same issue with pushing to the leaderboard (4 tasks ahead in the queue for the full day), but I have submitted my data through the form =)