✨ 36B - Base & Instruct
✨ Apache 2.0
✨ Native 512K long context
✨ Strong reasoning & agentic intelligence
✨ 2 Base versions: with & without synthetic data
Want to learn to build an AI Agent? I put together a cookbook for creating your own news research agent with OpenAI GPT-OSS:
- Searches headlines & specific sites
- Pulls full articles when you need depth
- Summarizes with clickable sources
- Runs in a simple Gradio chat UI
- No GPU, no local setup: just open-weight GPT-OSS models via Hugging Face
If you’ve been wanting to try agents but weren’t sure where to start, this is an end-to-end example you can fork, run, and adapt.
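If you want a feel for the moving parts before opening the cookbook, here is a minimal sketch (not the cookbook's actual code) of the core loop: a Gradio chat UI that forwards each question to an open-weight GPT-OSS model through huggingface_hub's InferenceClient. The model ID, system prompt, and token limit are illustrative choices.

```python
# Minimal sketch: a Gradio chat UI backed by a GPT-OSS model on Hugging Face inference.
# Assumes `gradio` and `huggingface_hub` are installed and an HF token is configured.
import gradio as gr
from huggingface_hub import InferenceClient

client = InferenceClient(model="openai/gpt-oss-20b")  # swap for openai/gpt-oss-120b if you prefer

SYSTEM_PROMPT = "You are a news research assistant. Answer concisely and cite your sources."

def respond(message, history):
    # Rebuild the conversation as OpenAI-style messages for the chat completion call.
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += [{"role": m["role"], "content": m["content"]} for m in history]
    messages.append({"role": "user", "content": message})
    out = client.chat_completion(messages=messages, max_tokens=512)
    return out.choices[0].message.content

gr.ChatInterface(respond, type="messages").launch()
```

The actual agent layers tools for headline search and full-article fetching on top of this loop, which is what the cookbook walks through.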
What can OpenAI’s new open models do with the news? I built a News Agent to find out.
It can answer questions about the news in real time, and every answer comes with original source links so you can dive deeper.
Ask it things like:
- "What are the top news stories today?"
- "What's the latest on artificial intelligence?"
- Follow-up questions on specific stories
It runs with Hugging Face inference providers, letting you compare results from the OpenAI 20B and 120B models.
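A rough sketch of what that comparison looks like in code, assuming the same InferenceClient setup as above; the question and token limit are placeholders:

```python
# Sketch: send the same question to both GPT-OSS sizes via Hugging Face inference
# providers and print the answers side by side for comparison.
from huggingface_hub import InferenceClient

question = "What's the latest on artificial intelligence?"
for model_id in ("openai/gpt-oss-20b", "openai/gpt-oss-120b"):
    client = InferenceClient(model=model_id)
    out = client.chat_completion(
        messages=[{"role": "user", "content": question}],
        max_tokens=400,
    )
    print(f"=== {model_id} ===\n{out.choices[0].message.content}\n")
```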
So far, I’m quite impressed by the capabilities of even the smaller 20B model. Definitely not a perfect project, but curious to hear your thoughts!
OpenAI’s GPT-OSS has sparked ~400 new models on Hugging Face and racked up 5M downloads in less than a week, already outpacing DeepSeek R1’s first-week numbers.
For comparison: when R1 launched, I tracked 550 derivatives (across 8 base models) in a week, with ~3M downloads. GPT-OSS is ahead on adoption and engagement.
It’s also the most-liked release of any major LLM this summer. The 20B and 120B versions quickly shot past Kimi K2, GLM 4.5, and others in likes.
Most-downloaded GPT-OSS models include LM Studio and Unsloth AI versions:
1️⃣ openai/gpt-oss-20b - 2.0M
2️⃣ lmstudio-community/gpt-oss-20b-MLX-8bit - 750K
3️⃣ openai/gpt-oss-120b - 430K
4️⃣ unsloth/gpt-oss-20b-GGUF - 380K
5️⃣ lmstudio-community/gpt-oss-20b-GGUF - 330K
The 20B version is clearly finding its audience, showing the power of smaller, faster, more memory- and energy-efficient models. (These numbers don't include calls to the models via inference providers, so real usage is likely even higher, especially for the 120B version.)
Open-weight models let anyone build on top. Empower the builders, and innovation takes off. 🚀
🚀 smolagents v1.21.0 is here!
Now with improved safety in the local Python executor: dunder calls are blocked!
⚠️ Still not fully isolated: for untrusted code, use a remote executor instead (Docker, E2B, Wasm).
✨ Many bug fixes for more reliable code.
👉 https://github.com/huggingface/smolagents/releases/tag/v1.21.0
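Switching to a sandboxed executor looks roughly like this; a minimal sketch assuming the `executor_type` option and `InferenceClientModel` from recent smolagents releases (check the release notes for exact names and prerequisites):

```python
# Sketch: run agent-generated code in a Docker sandbox instead of the local executor.
# Requires Docker to be running; "e2b" and "wasm" are the other sandboxed options
# mentioned in the release (parameter values assumed, not verified here).
from smolagents import CodeAgent, InferenceClientModel

agent = CodeAgent(
    tools=[],
    model=InferenceClientModel(),   # hosted model via Hugging Face inference
    executor_type="docker",         # sandboxed execution for untrusted code
)
agent.run("Compute the 20th Fibonacci number and explain the steps.")
```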
New interactive viz from AI World showing OpenAI's new open model gpt-oss-120b breaking into the top 50 most liked models of all time on the Hub in under a day! ☄️☄️☄️