arxiv:2509.25123

From f(x) and g(x) to f(g(x)): LLMs Learn New Skills in RL by Composing Old Ones

Published on Sep 29 · Submitted by weize on Sep 30
Abstract

Reinforcement learning enables large language models to acquire new compositional skills by combining existing ones; these skills transfer to different tasks and improve reasoning behaviors.

AI-generated summary

Does RL teach LLMs genuinely new skills, or does it merely activate existing ones? This question lies at the core of ongoing debates about the role of RL in LLM post-training. On one side, strong empirical results can be achieved with RL even without preceding supervised finetuning; on the other, critics argue that RL contributes little beyond reweighting existing reasoning strategies. This work provides concrete evidence that LLMs can acquire genuinely new skills during RL by composing existing ones, mirroring one of the central mechanisms by which humans acquire new cognitive skills. To mitigate data contamination and other confounding factors, and to allow precise control over task complexity, we develop a synthetic framework for our investigation. Specifically, we define a skill as the ability to infer the output of a string transformation function f(x) given x. When an LLM has already learned f and g prior to RL, our experiments reveal that RL enables it to learn unseen compositions of them h(x)=g(f(x)). Further, this compositional ability generalizes to more difficult problems such as compositions of >2 functions unseen during RL training. Surprisingly, our experiments show that compositional skill acquired on a source task transfers to a different target task. This transfer happens even without compositional training on the target, requiring only prior knowledge of the target's atomic skills. Our qualitative analysis shows that RL fundamentally changes the reasoning behaviors of the models. In contrast, next-token training with the same data yields none of these findings. Our systematic experiments provide fresh insights into LLM learning, suggesting the value of first building base models with basic skills, then using RL to incentivize advanced, generalizable skills for complex problems.
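To make the setup concrete: the abstract defines a skill as inferring f(x) from x for a string transformation f, and a composite skill as an unseen composition such as h(x) = g(f(x)). The sketch below illustrates this framing with hypothetical atomic transformations (`reverse`, `swap_case`, `drop_first`, `compose` are placeholders; the paper's actual function library and task format are not specified on this page).

```python
# Minimal, illustrative sketch of the synthetic framework described in the abstract.
# The concrete string transformations here are assumptions, not the paper's own set.
from typing import Callable

# Hypothetical atomic skills, each a string transformation f: str -> str.
def reverse(s: str) -> str:
    return s[::-1]

def swap_case(s: str) -> str:
    return s.swapcase()

def drop_first(s: str) -> str:
    return s[1:]

ATOMIC_SKILLS: dict[str, Callable[[str], str]] = {
    "reverse": reverse,
    "swap_case": swap_case,
    "drop_first": drop_first,
}

def compose(*names: str) -> Callable[[str], str]:
    """Build h(x) = f_k(...f_1(x)...) from atomic skill names, applied left to right."""
    def h(x: str) -> str:
        for name in names:
            x = ATOMIC_SKILLS[name](x)
        return x
    return h

# A depth-2 composition h(x) = g(f(x)), with f = reverse and g = swap_case.
h = compose("reverse", "swap_case")
print(h("Hello"))  # "Hello" -> "olleH" -> "OLLEh"
```

In this framing, the model is assumed to have learned each atomic mapping before RL; the question the paper studies is whether RL then lets it solve unseen compositions, including deeper ones (more than two functions) never shown during training.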

Community

Paper submitter

Can RL teach LLMs new skills? We find the key is composition. Our work shows that once a model has the necessary atomic skills, properly incentivized RL enables it to learn a generalizable and transferable meta-skill for composing those abilities, while SFT cannot. We also clarify the "RL only reranks" debate: a fine-grained pass@k analysis reveals that while the performance gap may shrink on easy problems, it widens dramatically on difficult ones, proving genuine skill acquisition.

Code: https://github.com/PRIME-RL/RL-Compositionality
Tweet: https://x.com/lifan__yuan/status/1963662222602723673
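For readers unfamiliar with the pass@k metric mentioned in the comment above, analyses of this kind typically use the standard unbiased estimator from Chen et al. (2021). The sketch below shows that estimator; it is illustrative and not taken from this paper's released code.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    Estimates the probability that at least one of k samples drawn from
    n independent generations (c of which are correct) solves the problem.
    """
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 16 samples per problem, 3 correct, evaluated at k = 4.
print(pass_at_k(n=16, c=3, k=4))
```

Comparing pass@k curves at small versus large k is one common way to distinguish "reranking existing solutions" from genuinely expanded capability, which is the distinction the comment draws between easy and difficult problems.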


