AI & ML interests

Exploring Extreme Quantization techniques!

Recent Activity

Parveshiiii posted an update 3 days ago

🧠 Glimpses of AGI — A Vision for All Humanity
What if AGI wasn't just a distant dream, but a blueprint already unfolding?

I’ve just published a deep dive called Glimpses of AGI, exploring how scalable intelligence, synthetic reasoning, and alignment strategies are paving a new path forward. This isn’t your average tech commentary—it’s a bold vision for conscious AI systems that reason, align, and adapt beyond narrow tasks.

🔍 Read it, upvote it if it sparks something, and let’s ignite a collective conversation about the future of AGI.

https://huggingface.co/blog/Parveshiiii/glimpses-of-agi


Parveshiiii posted an update 6 days ago

🧠 MathX-5M by XenArcAI — Scalable Math Reasoning for Smarter LLMs

Introducing MathX-5M, a high-quality, instruction-tuned dataset built to supercharge mathematical reasoning in large language models. With 5 million rigorously filtered examples, it spans everything from basic arithmetic to advanced calculus—curated from public sources and enhanced with synthetic data.

🔍 Key Highlights:
- Step-by-step reasoning with verified answers
- Covers algebra, geometry, calculus, logic, and more
- RL-validated correctness and multi-stage filtering
- Ideal for fine-tuning, benchmarking, and educational AI

📂 Dataset: XenArcAI/MathX-5M
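
To pull it into your own pipeline, here's a minimal sketch using the Hugging Face datasets library (streaming, since 5M examples is a lot to download up front; the exact column names are whatever the dataset card defines, so inspect an example first):

from datasets import load_dataset

# Stream MathX-5M instead of materializing all 5M examples locally.
ds = load_dataset("XenArcAI/MathX-5M", split="train", streaming=True)

# Inspect one example to see the actual schema before fine-tuning.
for example in ds:
    print(example)
    break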


Abhaykoul posted an update 9 days ago

🎉 Dhanishtha 2.0 Preview is Now Open Source!

The world's first Intermediate Thinking Model is now available to everyone!

Dhanishtha 2.0 Preview brings revolutionary intermediate thinking capabilities to the open-source community. Unlike traditional reasoning models that think once, Dhanishtha can think, answer, rethink, answer again, and continue rethinking as needed, using multiple <think> blocks between responses.

🚀 Key Features
- Intermediate thinking: Think → Answer → Rethink → Answer → Rethink if needed...
- Token efficient: Uses up to 79% fewer tokens than DeepSeek R1 on similar queries
- Transparent thinking: See the model's reasoning process in real-time
- Open source: Freely available for research and development


Model: HelpingAI/Dhanishtha-2.0-preview
Try it: https://helpingai.co/chat
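
If you'd rather poke at it locally, here's a minimal sketch assuming the checkpoint works with the standard transformers chat API (check the model card for the exact chat template and requirements):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HelpingAI/Dhanishtha-2.0-preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Intermediate thinking appears as <think> blocks interleaved with
# answer text, so don't skip special tokens if you want to see them.
output = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=False))
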
Abhaykoul posted an update 15 days ago

Introducing Dhanishtha 2.0: World's first Intermediate Thinking Model

Dhanishtha 2.0 is the world's first LLM designed to think between responses, unlike other reasoning LLMs, which think just once.

Dhanishtha can think, rethink, self-evaluate, and refine in between responses using multiple <think> blocks.
This technique makes it highly token-efficient: it uses up to 79% fewer tokens than DeepSeek R1.
---

You can try our model at: https://helpingai.co/chat
Also, we're gonna open-source Dhanishtha on July 1st.

---
For Devs:
🔑 Get your API key at https://helpingai.co/dashboard
from HelpingAI import HAI  # pip install HelpingAI==1.1.1
from rich import print

# Use your own key from the dashboard (masked here as in the original).
hai = HAI(api_key="hl-***********************")

response = hai.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What is the value of ∫_0^∞ x^3/(x-1) dx ?"}],
    stream=True,
    hide_think=False,  # Hide or show the model's <think> blocks
)

# Stream the response chunk by chunk as it is generated.
for chunk in response:
    print(chunk.choices[0].delta.content, end="", flush=True)

alielfilali01 posted an update 5 months ago

🚨 Arabic LLM Evaluation 🚨

A few models join the ranking of https://huggingface.co/spaces/inceptionai/AraGen-Leaderboard today.

The new MistralAI model, Saba, is quite impressive: Top 10! Well done @arthurmensch and team.

Sadly, Mistral did not follow its usual public-weights strategy this time; we hope this changes soon and we get the model with a permissive license.

We added other Mistral models, and apparently we have been sleeping on mistralai/Mistral-Large-Instruct-2411!

Another impressive model that joined the ranking today is ALLaM-AI/ALLaM-7B-Instruct-preview. After a long wait, ALLaM is finally here, and it is IMPRESSIVE given its size!

ALLaM is ranked on OALL/Open-Arabic-LLM-Leaderboard as well.

Abhaykoul posted an update 5 months ago

🔥 THE WAIT IS OVER... HAI-SER IS HERE! 🔥

Yo fam, this ain't just another AI drop — this is the FUTURE of emotional intelligence! 🚀

Introducing HAI-SER, powered by Structured Emotional Reasoning (SER), the next-level AI that doesn’t just understand your words—it feels you, analyzes your emotions, and helps you navigate life’s toughest moments. 💡

💥 What makes HAI-SER a game-changer?
🔹 Emotional Vibe Check – Gets the mood, energy, and what’s really going on 🎭
🔹 Mind-State Analysis – Breaks down your thoughts, beliefs, and patterns 🤯
🔹 Root Cause Deep-Dive – Unpacks the WHY behind your emotions 💡
🔹 Impact Check – Sees how it’s affecting your life and mental health 💔
🔹 Safety Check – Prioritizes your well-being and crisis management 🚨
🔹 Healing Game Plan – Custom strategies to help you bounce back 💪
🔹 Growth Potential – Turns struggles into opportunities for self-improvement 📈
🔹 How to Approach – Teaches you and others how to communicate and heal 🤝
🔹 Personalized Response – Not just generic advice—real talk, tailored to YOU 💯

No more robotic AI responses. No more surface-level advice. HAI-SER gets deep, analyzing emotions with precision and giving real, actionable support.

This ain’t just AI—this is your digital therapist, life coach, and hype squad all in one. Whether it’s mental health, career struggles, relationships, or personal growth, HAI-SER has your back.

🚀 The future of emotionally intelligent AI is HERE.
Are you ready? 🔥💯

HelpingAI/HAI-SER

alielfilali01 posted an update 6 months ago

The 3C3H AraGen Leaderboard welcomes deepseek-ai/DeepSeek-V3 and 12 other models (including the late gpt-3.5 💀) to the ranking of the best LLMs in Arabic today!


Observations:
- DeepSeek-V3 ranked 3rd and is the only open model among the top 5!

- A 14B open model (Qwen/Qwen2.5-14B-Instruct) outperforms gpt-3.5-turbo-0125 (from last year). This shows how far we have come in advancing and supporting the Arabic presence within the LLM ecosystem!

- Contrary to what is observed on likelihood-accuracy leaderboards (like OALL/Open-Arabic-LLM-Leaderboard), further fine-tuned models like maldv/Qwentile2.5-32B-Instruct actually decreased performance compared to the original model Qwen/Qwen2.5-32B-Instruct.
It's worth noting that the decrease is statistically insignificant, which implies that, at best, out-of-domain fine-tuning does not really hurt the capabilities the model acquired during pretraining.
Previous work has addressed this (fine-tuning vs. pretraining), but more investigation is required (any PhDs here? This could be your question ...)


Check out the latest rankings: https://huggingface.co/spaces/inceptionai/AraGen-Leaderboard

alielfilali01 posted an update 6 months ago

~75% on the challenging GPQA with only 40M parameters 🔥🥳

GREAT ACHIEVEMENT! Or is it?

This new work, "Data Laundering: Artificially Boosting Benchmark Results through Knowledge Distillation", takes the mystery out of many models whose results I personally suspected, especially on leaderboards other than the English one, like the Open Arabic LLM Leaderboard OALL/Open-Arabic-LLM-Leaderboard.

The authors of this work, first started by training a model on the GPQA data, which, unsurprisingly, led to the model achieving 100% performance.

Afterward, they trained what they referred to as a 'legitimate' model on legitimate data (MedMCQA). However, they introduced a distillation loss from the earlier, 'cheated' model.

What they discovered was fascinating: the knowledge of GPQA leaked through this distillation loss, even though the legitimate model was never explicitly trained on GPQA during this stage.

This raises important questions about the careful use of distillation in model training, especially when the training data is opaque. As they demonstrated, it’s apparently possible to (intentionally or unintentionally) leak test data through this method.
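
For intuition, here is a minimal sketch of a standard knowledge-distillation loss of the kind the paper studies (my illustrative code, not the authors' implementation). The soft KL term against the teacher's logits is the channel through which memorized benchmark data can leak:

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets from the teacher: if the teacher memorized GPQA,
    # its output distribution carries that knowledge to the student.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard loss on the "legitimate" data (e.g. MedMCQA labels).
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: batch of 4 examples, 10 classes.
student_logits = torch.randn(4, 10)
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(distillation_loss(student_logits, teacher_logits, labels))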

Find out more: Data Laundering: Artificially Boosting Benchmark Results through Knowledge Distillation (2412.15255)

Abhaykoul posted an update 7 months ago

🔥 BIG ANNOUNCEMENT: THE HELPINGAI API IS LIVE! 🔥

Yo, the moment you’ve all been waiting for is here! 🚀 The HelpingAI API is now LIVE and ready to level up your projects! 🔥 We’re bringing that next-level AI goodness straight to your fingertips. 💯

No more waiting — it's time to build something epic! 🙌

From now on, you can integrate our cutting-edge AI models into your own applications, workflows, and everything in between. Whether you’re a developer, a creator, or just someone looking to make some serious moves, this is your chance to unlock the full potential of emotional intelligence and adaptive AI.

Check out the docs 🔥 and let’s get to work! 🚀

👉 Check out the docs and start building (https://helpingai.co/docs)
👉 Visit the HelpingAI website (https://helpingai.co/)

alielfilali01 posted an update 7 months ago

Unpopular opinion: Open Source takes courage to do!

Not everyone is brave enough to release what they have done (the way they've done it) into the wild to be judged!
It really requires a high level of "knowing wth you are doing"! It's kind of a superpower!

Cheers to the heroes here who see this!

alielfilali01 posted an update 7 months ago

Apparently I forgot to put this here!

Well, this is a bit late, but consider giving our recent blog a read if you are interested in evaluation.

You don't have to be into Arabic NLP to read it; the main contribution we are introducing is a new evaluation measure for NLG. We made the first application of this measure on Arabic for now, and we will be working with colleagues from the community to expand it to other languages.

Blog:
Rethinking LLM Evaluation with 3C3H: AraGen Benchmark and Leaderboard
https://huggingface.co/blog/leaderboard-3c3h-aragen

Space:
https://huggingface.co/spaces/inceptionai/AraGen-Leaderboard

Give it a read and let me know your thoughts 🤗

erikkaum posted an update 8 months ago

A while ago I started experimenting with compiling the Python interpreter to WASM, to build a secure, fast, and lightweight sandbox for code execution — ideal for running LLM-generated Python code.

- Send code as a simple POST request
- 1-2 ms startup times

Hack away:
https://github.com/ErikKaum/runner
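
For example, a client call might look like this (the endpoint path and payload field are my assumptions, not the repo's documented interface; check the README for the real one):

import requests

# Hypothetical client: POST LLM-generated code to a locally running sandbox.
resp = requests.post(
    "http://localhost:8000/run",                 # assumed endpoint
    json={"code": "print(sum(range(10)))"},      # assumed payload shape
    timeout=5,
)
print(resp.status_code, resp.text)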

alielfilali01 posted an update 8 months ago

Unpopular opinion: o1-preview is more stupid than 4o, and Qwen2.5-72B-Instruct is extremely underrated!