Aymane El Firdoussi

AymaneElfirdo

AI & ML interests

None yet

Recent Activity

updated a Space about 23 hours ago
atlasia/darija-chatbot-arena
liked a Space 1 day ago
Mistral-AI-Game-Jam/NeuralJam
liked a Space 6 days ago
atlasia/darija-chatbot-arena

Organizations

AtlasIA

AymaneElfirdo's activity

reacted to grimjim's post with 👍 5 months ago
I found this paper to be thought-provoking: "Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling" by Bansal, Hosseini, Agarwal, Tran, and Kazemi.
https://arxiv.org/abs/2408.16737
The direct implication is that smaller models could be used to create cost-effective synthetic datasets. On that note, the Gemma terms of use state that Google claims no rights over outputs generated by those models, so one is free to generate synthetic data from the Gemma line. Meta's Llama 3 license forbids using generated outputs to improve other models. The relevant Mistral, Qwen, and Yi models under the Apache 2.0 license are unrestricted for this purpose.
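The cost argument behind compute-optimal sampling can be sketched with simple arithmetic: at a fixed compute budget, a weaker-but-cheaper model yields many more synthetic samples than a stronger one. The numbers below are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of the compute-optimal sampling trade-off:
# at a fixed FLOP budget, a cheaper model generates far more samples.
# All costs below are made-up illustrative values.

def samples_within_budget(budget_flops: float, cost_per_sample_flops: float) -> int:
    """Number of synthetic samples a model can generate within a compute budget."""
    return int(budget_flops // cost_per_sample_flops)

budget = 1e18                # total FLOPs available (hypothetical)
strong_model_cost = 1e15     # assumed cost per sample for a large model
weak_model_cost = 1e14       # assumed ~10x cheaper per sample for a small model

n_strong = samples_within_budget(budget, strong_model_cost)
n_weak = samples_within_budget(budget, weak_model_cost)

print(n_strong, n_weak)  # the weak model produces 10x the samples
```

The paper's finding is that the larger (and partly noisier) sample pool from the weaker model can train better reasoners than the smaller pool from the stronger model, despite the per-sample quality gap.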