Joseph [open/acc] Pollack
Tonic
AI & ML interests
🤖 Making robots to help people learn things quicker 👩🏻‍🚀🚀
Recent Activity
liked a model 8 minutes ago: google/siglip2-base-patch32-256
upvoted a collection 9 minutes ago: SigLIP2
upvoted a collection 38 minutes ago: MassiveDS
Tonic's activity

reacted to smirki's post with 👍 1 day ago

reacted to tianchez's post with 👍❤️❤️🔥🚀🚀 1 day ago
Introducing VLM-R1!
GRPO has helped DeepSeek R1 to learn reasoning. Can it also help VLMs perform stronger for general computer vision tasks?
The answer is YES, and it generalizes better than SFT. We trained Qwen 2.5 VL 3B on RefCOCO (a visual grounding task) and evaluated on RefCOCO Val and RefGTA (an OOD task).
https://github.com/om-ai-lab/VLM-R1
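The core idea the post credits, GRPO's group-relative advantage, can be sketched in a few lines. This is a hypothetical illustration of the technique, not code from the VLM-R1 repo: sample a group of completions per prompt, score each with a verifiable reward (e.g. box IoU for grounding), and normalize against the group instead of a learned value model.

```python
def group_relative_advantages(rewards):
    """Normalize each reward against its group's mean and std.

    GRPO scores every sampled completion relative to its group, which
    removes the need for a separate critic/value model.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    if std == 0:                      # all completions scored equally
        return [0.0] * n
    return [(r - mean) / std for r in rewards]

# Example: 4 sampled answers to one prompt, rewarded 1 if the predicted
# box matches the ground truth, 0 otherwise (an assumed reward scheme).
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

Completions that beat their group's mean get positive advantage and are reinforced; the rest are pushed down.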

reacted to louisbrulenaudet's post with 🤗👍 2 days ago
I am pleased to introduce my first project built upon Hugging Face’s smolagents framework, integrated with Alpaca for financial market analysis automation 🦙🤗
The project implements technical indicators such as the Relative Strength Index (RSI) and Bollinger Bands to provide momentum and volatility analysis. Market data is retrieved through the Alpaca API, enabling access to historical price information across various timeframes.
AI-powered insights are generated using Hugging Face’s inference API, facilitating the analysis of market trends through natural language processing with DuckDuckGo search integration for real-time sentiment analysis based on financial news 🦆
Link to the GitHub project: https://github.com/louisbrulenaudet/agentic-market-tool
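The two indicators named above are simple enough to sketch in plain Python. This is a rough stand-in under assumed conventions (a simple, non-smoothed RSI; 2-sigma Bollinger Bands), not the agentic-market-tool code, and `closes` is any list of closing prices such as those returned by the Alpaca API:

```python
def rsi(closes, period=14):
    """Simple (non-smoothed) RSI over the last `period` price changes."""
    changes = [b - a for a, b in zip(closes, closes[1:])][-period:]
    gains = sum(c for c in changes if c > 0)
    losses = sum(-c for c in changes if c < 0)
    if losses == 0:              # no down moves in the window
        return 100.0
    rs = gains / losses          # relative strength
    return 100.0 - 100.0 / (1.0 + rs)

def bollinger(closes, period=20, k=2.0):
    """Middle band (SMA) plus bands at +/- k standard deviations."""
    window = closes[-period:]
    mid = sum(window) / len(window)
    std = (sum((c - mid) ** 2 for c in window) / len(window)) ** 0.5
    return mid - k * std, mid, mid + k * std
```

RSI near 100 signals overbought momentum, near 0 oversold; prices touching the outer Bollinger Bands flag unusually high volatility relative to the recent mean.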

reacted to ZennyKenny's post with 👍 2 days ago
Okay this is pretty crazy. Snowflake has CortexAI and Uber is already teasing QueryGPT, both of which prominently feature plain text to SQL features to query your database.
I decided to see how hard it would be to put together something similar using 🤗 smolagents. Turns out, it was pretty straightforward. I managed to get it done in London Luton airport this afternoon.
ZennyKenny/sqlAgent
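The plain-text-to-SQL loop behind such a tool fits in a few lines. This is a minimal sketch, not the sqlAgent code: `generate_sql` stands in for the smolagents/LLM call (a hypothetical callable), and the schema is fed to the model so it knows what to query.

```python
import sqlite3

def get_schema(conn):
    """Collect CREATE TABLE statements so the model can see the schema."""
    rows = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    return "\n".join(r[0] for r in rows)

def answer(conn, question, generate_sql):
    """generate_sql(schema, question) -> SQL string, e.g. an LLM call."""
    sql = generate_sql(get_schema(conn), question)
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("only read-only queries are allowed")
    return conn.execute(sql).fetchall()

# Toy usage with a hard-coded "model" in place of the real LLM:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE receipts (name TEXT, price REAL)")
conn.executemany("INSERT INTO receipts VALUES (?, ?)",
                 [("Alice", 12.5), ("Bob", 20.0)])
rows = answer(conn, "total spend",
              lambda schema, q: "SELECT SUM(price) FROM receipts")
```

The SELECT-only guard is the one piece worth keeping even in a prototype: never execute model-generated writes against a real database.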

reacted to m-ric's post with 🚀👍🔥 2 days ago
Less is More for Reasoning (LIMO): a 32B model fine-tuned with 817 examples can beat o1-preview on math reasoning! 🤯
Do we really need o1's huge RL procedure to see reasoning emerge? It seems not.
Researchers from Shanghai Jiao Tong University just demonstrated that carefully selected examples can boost math performance in large language models using SFT, with no huge datasets or RL procedures needed.
Their procedure allows Qwen2.5-32B-Instruct to jump from 6.5% to 57% on AIME and from 59% to 95% on MATH, while using only 1% of the data in previous approaches.
⚡ The Less-is-More Reasoning Hypothesis:
‣ Minimal but precise examples that showcase optimal reasoning patterns matter more than sheer quantity
‣ Pre-training knowledge plus sufficient inference-time compute levels up math skills
➡️ Core techniques:
‣ High-quality reasoning chains with self-verification steps
‣ 817 handpicked problems that encourage deeper reasoning
‣ Enough inference-time computation to allow extended reasoning
💪 Efficiency gains:
‣ Only 817 examples instead of 100k+
‣ 40.5% absolute improvement across 10 diverse benchmarks, outperforming models trained on 100x more data
This really challenges the notion that SFT leads to memorization rather than generalization! And opens up reasoning to GPU-poor researchers 🚀
Read the full paper here 👉 LIMO: Less is More for Reasoning (2502.03387)

reacted to as-cle-bert's post with ❤️ 2 days ago
I built an AI agent app in less than 8 hours🤯
And, believe me, this is 𝗻𝗼𝘁 clickbait❌
GitHub 👉 https://github.com/AstraBert/PapersChat
Demo 👉 as-cle-bert/PapersChat
The app is called 𝐏𝐚𝐩𝐞𝐫𝐬𝐂𝐡𝐚𝐭, and it is aimed at 𝗺𝗮𝗸𝗶𝗻𝗴 𝗰𝗵𝗮𝘁𝘁𝗶𝗻𝗴 𝘄𝗶𝘁𝗵 𝘀𝗰𝗶𝗲𝗻𝘁𝗶𝗳𝗶𝗰 𝗽𝗮𝗽𝗲𝗿𝘀 𝗲𝗮𝘀𝗶𝗲𝗿.
𝐇𝐞𝐫𝐞 𝐢𝐬 𝐰𝐡𝐚𝐭 𝐭𝐡𝐞 𝐚𝐩𝐩 𝐝𝐨𝐞𝐬:
📄 Parses the papers that you upload thanks to LlamaIndex🦙 (either with LlamaParse or with simpler, local methods)
📄 Embeds documents both with a sparse and with a dense encoder to enable hybrid search
📄 Uploads the embeddings to Qdrant
⚙️ Activates an Agent based on mistralai/Mistral-Small-24B-Instruct-2501 that will reply to your prompt
🧠 Retrieves information relevant to your question from the documents
🧠 If no relevant information is found, it searches PubMed and arXiv databases
🧠 Returns a grounded answer to your prompt
𝐇𝐨𝐰 𝐝𝐢𝐝 𝐈 𝐦𝐚𝐧𝐚𝐠𝐞 𝐭𝐨 𝐦𝐚𝐤𝐞 𝐭𝐡𝐢𝐬 𝐚𝐩𝐩𝐥𝐢𝐜𝐚𝐭𝐢𝐨𝐧 𝐢𝐧 𝟖 𝐡𝐨𝐮𝐫𝐬?
Three key points:
- LlamaIndex🦙 provides countless integrations with LLM providers, text embedding models and vectorstore services, and takes care of the internal architecture of the Agent. You just plug it in, and it works!🔌⚡
- Qdrant is a vector database service that is extremely easy to set up and use: you just need a one-line Docker command😉
- Gradio makes frontend development painless and fast, while still providing modern and responsive interfaces🏗️
And a bonus point:
- Deploying the demo app couldn't be easier if you use Gradio-based Hugging Face Spaces🤗
So, no more excuses: build your own AI agent today and do it fast, (almost) for free and effortlessly🚀
And if you need a starting point, the code for PapersChat is open and fully reproducible on GitHub 👉 https://github.com/AstraBert/PapersChat
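The hybrid (sparse + dense) search step the post describes can be sketched without any LlamaIndex or Qdrant internals. This is an assumed, toy version: a bag-of-words overlap stands in for a sparse encoder like BM25, and the two retrievers' rankings are merged with reciprocal rank fusion, a common fusion choice (the real stack may fuse differently):

```python
def sparse_score(query, doc):
    """Bag-of-words overlap as a stand-in for a sparse encoder (e.g. BM25)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

def rrf(rankings, k=60):
    """Reciprocal rank fusion: merge several rankings of doc ids.

    Each retriever contributes 1 / (k + rank + 1) per document; documents
    ranked highly by both the sparse and the dense retriever win.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)
```

Fusing ranks rather than raw scores sidesteps the problem that sparse and dense scores live on incomparable scales.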

reacted to Reality123b's post with 👀 2 days ago

reacted to clem's post with 👍🔥 2 days ago
What are the best organizations to follow on @huggingface?
Off the top of my head:
- Deepseek (35,000 followers): https://huggingface.co/deepseek-ai
- Meta Llama (27,000 followers): https://huggingface.co/meta-llama
- Black Forest Labs (11,000 followers): https://huggingface.co/black-forest-labs
- OpenAI (5,000 followers): https://huggingface.co/openai
- Nvidia (16,000 followers): https://huggingface.co/nvidia
- Microsoft (9,000 followers): https://huggingface.co/microsoft
- AllenAI (2,000 followers): https://huggingface.co/allenai
- Mistral (5,000 followers): https://huggingface.co/mistralai
- XAI (600 followers): https://huggingface.co/xai-org
- Stability AI (16,000 followers): https://huggingface.co/stabilityai
- Qwen (16,000 followers): https://huggingface.co/Qwen
- GoogleAI (8,000 followers): https://huggingface.co/google
- Unsloth (3,000 followers): https://huggingface.co/unsloth
- Bria AI (4,000 followers): https://huggingface.co/briaai
- NousResearch (1,300 followers): https://huggingface.co/NousResearch
Bonus, the agent course org with 17,000 followers: https://huggingface.co/agents-course

reacted to AdinaY's post with ❤️👍 2 days ago
🚀 StepFun阶跃星辰 is making BIG open moves!
Last year, their GOT-OCR 2.0 took the community by storm 🔥 but many didn’t know they were also building some amazing models. Now, they’ve just dropped something huge on the Hub!
📺 Step-Video-T2V: a 30B bilingual open video model that generates 204 frames (8-10s) at 540P resolution with high information density & consistency.
stepfun-ai/stepvideo-t2v
🔊 Step-Audio-TTS-3B: a TTS model trained with the LLM-Chat paradigm on a large synthetic dataset, capable of generating rap and humming
stepfun-ai/step-audio-67b33accf45735bb21131b0b

reacted to dreamerdeo's post with 🚀 2 days ago
🚀 Excited to share our technical report on the Southeast Asian multilingual model Sailor2 and its latest updates!
Our 49-page report details Sailor2's development journey, including multilingual data cleaning, small model data mixture simulations, multi-stage continual pre-training, multi-stage post-training, and multi-cultural multi-lingual evaluations. Sailor2 aims to streamline the multilingual model pre-training process efficiently for the community.
🧭 We highlight Sailor2's impressive performance in low-resource language translation scenarios and its cultural understanding advantages in Southeast Asia, promoting practical applications for regional languages.
Model updates include:
💡 More precise outputs: Reduced redundancy in model outputs through refined post-training data and optimization techniques.
🌈 Handling longer texts: Expanded to handle up to 128K context length in Southeast Asian languages through long-text training.
⚡️ Faster inference: Achieved 2.5x faster inference speed with speculative decoding.
🌪️ More model sizes: Introduced new sizes of 3B and 14B through model pruning.
🌟 All models are Apache-licensed for commercial use; development tools (code, resources) are open-source.
📚 Technical report: Sailor2: Sailing in South-East Asia with Inclusive Multilingual LLMs (2502.12982)
🤖️ Models: sail/sailor2-language-models-674d7c9e6b4dbbd9a869906b
💬 Demo: sail/Sailor2-20B-Chat
📣 Sailor2 community: https://huggingface.co/sailor2
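The speculative decoding behind the claimed 2.5x speedup can be illustrated with a toy loop. This sketch only shows the control flow under strong simplifications (both "models" are greedy next-token functions, and acceptance is exact-match rather than the probabilistic accept/reject used in practice; real implementations also verify all draft tokens in a single target forward pass, which is where the speedup comes from):

```python
def speculative_step(prefix, draft_next, target_next, k=4):
    """Draft k tokens cheaply, keep the longest prefix the target agrees with."""
    # 1. The small draft model proposes k tokens autoregressively.
    draft = []
    ctx = list(prefix)
    for _ in range(k):
        tok = draft_next(ctx)
        draft.append(tok)
        ctx.append(tok)
    # 2. The large target model checks each proposal in order.
    accepted = []
    ctx = list(prefix)
    for tok in draft:
        if target_next(ctx) == tok:            # target would emit it too
            accepted.append(tok)
            ctx.append(tok)
        else:                                  # first disagreement:
            accepted.append(target_next(ctx))  # keep the target's token
            break
    return accepted
```

When the draft model agrees with the target most of the time, each verification pass yields several tokens instead of one, which is the source of the speedup.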