Your Language Model needs better (open) environments to learn 👇
👉 https://huggingface.co/blog/anakin87/environments-hub
RL environments help LLMs practice, reason, and improve.
I explored the Environments Hub and wrote a walkthrough showing how to train and evaluate models using these open environments.
1️⃣ 𝗪𝗵𝘆 𝗥𝗟 𝗺𝗮𝘁𝘁𝗲𝗿𝘀 𝗳𝗼𝗿 𝗟𝗟𝗠𝘀
DeepSeek-R1 made clear that Reinforcement Learning can be used to incentivize reasoning in LLMs.
In GRPO, the model generates multiple answers to the same prompt and, guided by rewards, learns to prefer the better ones.
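Here is a minimal sketch of the group-relative idea at the heart of GRPO (function names are mine, for illustration only, not any library's API):

```python
# GRPO's core trick: score a group of completions for the same prompt,
# then normalize rewards within the group, so above-average answers
# get positive advantage and below-average ones get negative advantage.
import statistics

def group_advantages(rewards: list[float]) -> list[float]:
    """Rewards for G completions of one prompt -> group-relative advantages."""
    mean = statistics.mean(rewards)
    std = statistics.stdev(rewards) if len(rewards) > 1 else 1.0
    return [(r - mean) / (std + 1e-6) for r in rewards]

# Example: 4 sampled answers with binary correctness rewards
print(group_advantages([1.0, 0.0, 0.0, 1.0]))  # correct answers get positive advantage
```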
2️⃣ 𝗪𝗵𝗮𝘁 𝗲𝗻𝘃𝗶𝗿𝗼𝗻𝗺𝗲𝗻𝘁𝘀 𝗮𝗿𝗲
In classic RL, the environment is the world where the Agent lives, interacts, and gets rewards to learn from.
We can also think of environments as software packages containing data, a harness, and scoring rules, for the model to learn from and be evaluated on.
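A toy sketch of what such a package boils down to (all names here are hypothetical, not a real library API):

```python
# An RL environment as a package: data (prompts + targets),
# a harness (how the model is queried), and scoring rules.

DATASET = [{"prompt": "Sort: pear, apple, kiwi", "answer": "apple, kiwi, pear"}]

def rollout(model_fn, example: dict) -> str:
    """Harness: query the model with the prompt, return its completion."""
    return model_fn(example["prompt"])

def reward(completion: str, example: dict) -> float:
    """Scoring rule: 1.0 for an exact match with the target, else 0.0."""
    return float(completion.strip() == example["answer"])
```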
Nowadays, the Agent is not just the LLM. It can use tools, from a weather API to a terminal.
This makes environments for training and evaluation more complex and critical.
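A toy illustration of that extra complexity: with tools, a rollout is no longer a single completion but a multi-turn loop the environment must run and score (the tool-call convention below is invented):

```python
def weather_tool(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real API call

def agent_rollout(model_fn, prompt: str, max_turns: int = 4) -> str:
    """Multi-turn rollout: the model may call a tool before answering."""
    history = prompt
    out = ""
    for _ in range(max_turns):
        out = model_fn(history)
        if out.startswith("TOOL:weather:"):   # toy tool-call convention
            history += "\n" + weather_tool(out.split(":", 2)[2])
        else:
            break                              # final answer, ready to score
    return out
```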
3️⃣ 𝐓𝐡𝐞 𝐨𝐩𝐞𝐧 𝐜𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐞
Big labs are advancing, but open models and the community still face a fragmented ecosystem.
We risk becoming users of systems built with tools we can't access or fully understand.
4️⃣ 𝐄𝐧𝐯𝐢𝐫𝐨𝐧𝐦𝐞𝐧𝐭𝐬 𝐇𝐮𝐛
That's why I was excited when Prime Intellect released the Environments Hub.
It's a place where people share RL environments: tasks you can use to train LLMs with RL (GRPO-style) or evaluate Agents.
Plus, the Verifiers library (@willcb) standardizes the creation of RL environments and evaluations.
Together, they can help keep science and experimentation open. 🔬
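For a taste, this is roughly how evaluating a model against a Hub environment looks with Verifiers (a sketch of the pattern from the docs, written from memory; check the library for exact signatures):

```python
from openai import OpenAI
import verifiers as vf

# Load an environment previously installed from the Environments Hub
env = vf.load_environment("gsm8k")

# Run an evaluation against any OpenAI-compatible endpoint
results = env.evaluate(
    client=OpenAI(),
    model="gpt-4.1-mini",
    num_examples=10,
)
```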
I explored the Hub and wrote a hands-on walkthrough 👇
- RL + LLMs basics
- Environments Hub navigation
- Evaluating models/Agents
- GRPO training of a tiny model on an alphabetical-sort task (see the reward sketch below)
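As a taste of that last item, here is a hedged sketch of a reward function for the sorting task (illustrative only; the blog's actual environment may parse and score differently):

```python
def sort_reward(completion: str, words: list[str]) -> float:
    """1.0 if the completion lists the words in alphabetical order, else 0.0."""
    predicted = [w.strip() for w in completion.split(",")]
    return float(predicted == sorted(words))

print(sort_reward("apple, kiwi, pear", ["pear", "apple", "kiwi"]))  # -> 1.0
```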
Take a look!
👉 https://huggingface.co/blog/anakin87/environments-hub