Mohamed Rashad PRO

MohamedRashad

AI & ML interests

Computer Vision, Robotics, Natural Language Processing

Recent Activity

Organizations

Navid AI

MohamedRashad's activity

reacted to their post with πŸ‘€ 2 days ago
posted an update 2 days ago
reacted to their post with ❀️ 19 days ago
I think we have released the best Arabic model under 25B, at least according to the inceptionai/AraGen-Leaderboard.

Yehia = ALLaM-AI/ALLaM-7B-Instruct-preview + GRPO

and it is ranked the number-one model under the 25B-parameter mark.

Now, I said "I think" rather than "I am sure" because this model used the same evaluation metric the AraGen developers use (3C3H) as a reward model to improve its responses, and that raises a question: is this good for users, or is it another kind of overfitting we don't want?

I don't know whether this is a good thing or a bad thing, but what I do know is that you can try it here:
Navid-AI/Yehia-7B-preview

or download it for your own experiments here:
Navid-AI/Yehia-7B-preview

Ramadan Kareem 🌙
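For readers curious what "using a metric as the reward" means in GRPO, here is a minimal, hypothetical Python sketch (not the actual Yehia training code — the dimension names and numbers are illustrative assumptions): it collapses 3C3H-style judge scores into a scalar reward and computes GRPO's group-relative advantages, the core idea of the algorithm.

```python
from statistics import mean, pstdev

# The six 3C3H dimensions scored by a judge (assumed names, each in [0, 1]).
DIMENSIONS = ["correct", "complete", "concise", "helpful", "honest", "harmless"]

def reward_3c3h(scores: dict) -> float:
    """Collapse per-dimension judge scores into one scalar reward."""
    return mean(scores[d] for d in DIMENSIONS)

def grpo_advantages(rewards: list) -> list:
    """Group-relative advantages: normalize each sampled completion's reward
    against the mean/std of its own sampling group (the 'G' in GRPO)."""
    mu, sigma = mean(rewards), pstdev(rewards)
    if sigma == 0:
        return [0.0 for _ in rewards]
    return [(r - mu) / sigma for r in rewards]
```

Completions that score above their group's average get a positive advantage and are reinforced; the risk the post worries about is that the policy then optimizes the judge's metric itself rather than genuine quality.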
posted an update 19 days ago
reacted to their post with πŸ”₯ about 1 month ago
posted an update about 1 month ago
reacted to lewtun's post with ❀️ about 1 month ago
Introducing OpenR1-Math-220k!

open-r1/OpenR1-Math-220k

The community has been busy distilling DeepSeek-R1 from inference providers, but we decided to have a go at doing it ourselves from scratch πŸ’ͺ

What’s new compared to existing reasoning datasets?

β™Ύ Based on AI-MO/NuminaMath-1.5: we focus on math reasoning traces and generate answers for problems in NuminaMath 1.5, an improved version of the popular NuminaMath-CoT dataset.

🐳 800k R1 reasoning traces: We generate two answers for 400k problems using DeepSeek R1. The filtered dataset contains 220k problems with correct reasoning traces.

πŸ“€ 512 H100s running locally: Instead of relying on an API, we leverage vLLM and SGLang to run generations locally on our science cluster, generating 180k reasoning traces per day.

⏳ Automated filtering: We apply Math Verify to retain only problems with at least one correct answer. We also leverage Llama3.3-70B-Instruct as a judge to recover more correct examples (e.g. for cases with malformed answers that can't be verified with a rules-based parser).

πŸ“Š We match the performance of DeepSeek-Distill-Qwen-7B by finetuning Qwen-7B-Math-Instruct on our dataset.

πŸ”Ž Read our blog post for all the nitty gritty details: https://huggingface.co/blog/open-r1/update-2
replied to Keltezaa's post about 2 months ago

I am considering canceling my Pro subscription because I just discovered that I am limited to only 10 ZeroGPU Spaces on my account. This number should be much higher.

reacted to their post with πŸš€ 2 months ago
The winners of the Best Paper Award at NeurIPS 2024 (FoundationVision), Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction (2404.02905), have just released a new paper called Infinity:
Infinity: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis (2412.04431)

And I managed to build a Space for it so anyone can try it out: MohamedRashad/Infinity

The idea of a text-to-image model using an autoregressive architecture is quite interesting, in my opinion.
reacted to alielfilali01's post with πŸ‘ 2 months ago
The 3C3H AraGen Leaderboard today welcomes deepseek-ai/DeepSeek-V3 and 12 other models (including the late gpt-3.5 💀) to the ranking of the best LLMs in Arabic!


Observations:
- DeepSeek-V3 ranked 3rd and is the only open model among the top 5!

- A 14B open model (Qwen/Qwen2.5-14B-Instruct) outperforms gpt-3.5-turbo-0125 (from last year). This shows how far we have come in advancing and supporting the Arabic presence within the LLM ecosystem!

- Contrary to what is observed in likelihood-accuracy leaderboards (like OALL/Open-Arabic-LLM-Leaderboard), further fine-tuned models like maldv/Qwentile2.5-32B-Instruct actually decreased performance compared to the original model Qwen/Qwen2.5-32B-Instruct.
It's worth noting that the decrease is statistically insignificant, which implies that, at best, out-of-domain fine-tuning does not really hurt the capabilities the model acquired during pretraining.
Previous work has addressed this (fine-tuning vs. pretraining), but more investigation is required (any PhDs here? This could be your question...)


Check out the latest rankings: inceptionai/AraGen-Leaderboard
reacted to their post with ❀️ 2 months ago
posted an update 2 months ago
reacted to alielfilali01's post with πŸ€— 3 months ago
Unpopular opinion: open source takes courage!

Not everyone is brave enough to release what they have done (the way they've done it) into the wild to be judged!
It really requires a high level of knowing what the heck you are doing! It's kind of a superpower!

Cheers to the heroes here who see this!
Β·
reacted to their post with πŸ”₯ 3 months ago
posted an update 3 months ago
reacted to their post with πŸš€ 4 months ago
posted an update 4 months ago
posted an update 4 months ago
reacted to their post with πŸ‘€ 6 months ago
posted an update 6 months ago