Ksenia Se

Kseniase

AI & ML interests

None yet

Recent Activity

replied to their post 3 days ago
8 Emerging Trends in Reinforcement Learning

Reinforcement learning is having a moment - and not just this week. Some of its directions are already showing huge promise, while others are still early but exciting. Here's a look at what's happening right now in RL:

1. Reinforcement Pre-Training (RPT) → https://huggingface.co/papers/2506.08007
Reframes next-token pretraining as RL with verifiable rewards, yielding scalable reasoning gains.

2. Reinforcement Learning from Human Feedback (RLHF) → https://huggingface.co/papers/1706.03741
The top approach. It trains a model using human preference feedback, building a reward model and then optimizing the policy to generate outputs people prefer.

3. Reinforcement Learning with Verifiable Rewards (RLVR) → https://huggingface.co/papers/2506.14245
Moves from subjective (human-labeled) rewards to objective ones that can be automatically verified, as in math, code, or rubrics-as-rewards, for example → https://huggingface.co/papers/2508.12790, https://huggingface.co/papers/2507.17746

4. Multi-objective RL → https://huggingface.co/papers/2508.07768
Trains LMs to balance multiple goals at once, such as being helpful but also concise or creative, ensuring that improving one goal doesn't degrade another.

5. Parallel thinking RL → https://huggingface.co/papers/2509.07980
Trains parallel chains of thought, boosting math accuracy and raising final ceilings. It first teaches the model the "parallel thinking" skill on easier problems, then uses RL to refine it on harder ones.

Read further below ⬇️

And if you like this, subscribe to the Turing Post: https://www.turingpost.com/subscribe

Also, check out our recent guide about the past, present, and future of RL: https://www.turingpost.com/p/rlguide
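The core idea behind RLVR (item 3) - swapping a learned preference model for a reward that can be checked automatically - can be illustrated with a toy sketch. This is an illustrative example only, not code from any of the linked papers; the answer format ("Answer: ...") and exact-match check are assumptions, and real setups use richer verifiers (unit tests for code, symbolic equality for math, rubric graders):

```python
def verifiable_reward(completion: str, expected_answer: str) -> float:
    """Toy verifiable reward for a math-style task: 1.0 if the model's
    final answer line exactly matches the known-correct answer, else 0.0.
    No human labels or learned reward model are needed."""
    # Assume the model ends its output with a line like "Answer: 42"
    for line in reversed(completion.strip().splitlines()):
        if line.lower().startswith("answer:"):
            predicted = line.split(":", 1)[1].strip()
            return 1.0 if predicted == expected_answer else 0.0
    return 0.0  # no parseable answer -> zero reward

# A correct completion earns full reward; a wrong or unparseable one earns none
print(verifiable_reward("Let me think...\nAnswer: 42", "42"))  # 1.0
print(verifiable_reward("Answer: 41", "42"))                   # 0.0
```

In an RLVR training loop, a score like this replaces the human-preference reward model: the policy is still optimized with standard RL, but the signal is objective and cheap to compute at scale.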

Organizations

Turing Post · Journalists on Hugging Face · Social Post Explorers · Hugging Face Discord Community · Sandbox