Tool-Star: Empowering LLM-Brained Multi-Tool Reasoner via Reinforcement Learning
Abstract
Tool-Star, an RL-based framework, enables LLMs to autonomously use multiple tools for stepwise reasoning, leveraging data synthesis and hierarchical reward design.
Recently, large language models (LLMs) have shown remarkable reasoning capabilities via large-scale reinforcement learning (RL). However, leveraging the RL algorithm to empower effective multi-tool collaborative reasoning in LLMs remains an open challenge. In this paper, we introduce Tool-Star, an RL-based framework designed to empower LLMs to autonomously invoke multiple external tools during stepwise reasoning. Tool-Star integrates six types of tools and incorporates systematic designs in both data synthesis and training. To address the scarcity of tool-use data, we propose a general tool-integrated reasoning data synthesis pipeline, which combines tool-integrated prompting with hint-based sampling to automatically and scalably generate tool-use trajectories. A subsequent quality normalization and difficulty-aware classification process filters out low-quality samples and organizes the dataset from easy to hard. Furthermore, we propose a two-stage training framework to enhance multi-tool collaborative reasoning by: (1) cold-start fine-tuning, which guides LLMs to explore reasoning patterns via tool-invocation feedback; and (2) a multi-tool self-critic RL algorithm with hierarchical reward design, which reinforces reward understanding and promotes effective tool collaboration. Experimental analyses on over 10 challenging reasoning benchmarks highlight the effectiveness and efficiency of Tool-Star. The code is available at https://github.com/dongguanting/Tool-Star.
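The abstract's "hierarchical reward design" can be pictured as a layered scoring rule: malformed trajectories are penalized first, answer correctness gives the base reward, and effective tool collaboration earns an extra bonus. The sketch below is illustrative only; the exact reward values, conditions, and weighting in Tool-Star are assumptions, not taken from the paper.

```python
def hierarchical_reward(format_ok: bool, answer_correct: bool, used_tools: bool) -> float:
    """Illustrative hierarchical reward: format gate -> correctness -> tool bonus.

    All numeric values here are hypothetical, chosen only to show the layering.
    """
    if not format_ok:
        # Layer 1: trajectories with broken tool-call formatting get a penalty.
        return -1.0
    # Layer 2: base reward for a correct final answer.
    reward = 1.0 if answer_correct else 0.0
    # Layer 3: bonus for correct answers reached via multi-tool collaboration.
    if answer_correct and used_tools:
        reward += 0.5
    return reward
```

Structuring the reward hierarchically lets the RL signal distinguish "correct without tools" from "correct with effective tool use," which is what the self-critic RL stage reinforces.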
Community
🧠✨ All the datasets and model checkpoints of Tool-Star are fully open-sourced:
- Github: https://github.com/dongguanting/Tool-Star
- SFT Dataset: https://huggingface.co/datasets/dongguanting/Tool-Star-SFT-54K
- RL Dataset: https://github.com/dongguanting/Tool-Star/tree/main/Tool_Star_RL/mix_grpo
💡 Overview
Tool-Star is a reinforcement learning-based framework designed to empower LLMs to autonomously invoke multiple external tools during stepwise reasoning. Specifically, Tool-Star integrates six types of tools into the reasoning process (three for training and three for inference-time optimization) and incorporates systematic designs in both data synthesis and training algorithms.
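Autonomous tool invocation during stepwise reasoning typically means the model emits inline tool-call markup, which a controller intercepts, executes, and feeds back before generation continues. The sketch below illustrates that dispatch step; the tag format, tool names, and registry are hypothetical stand-ins, not Tool-Star's actual interface.

```python
import re

# Hypothetical tool registry; real deployments would wire in a search API,
# a sandboxed code interpreter, etc.
TOOLS = {
    "search": lambda q: f"search results for {q!r}",
    "python": lambda code: str(eval(code)),  # toy interpreter for illustration only
}

# Matches inline calls like <python>2+3</python> emitted mid-reasoning.
TOOL_CALL = re.compile(r"<(search|python)>(.*?)</\1>", re.DOTALL)

def run_tool_calls(model_output: str) -> str:
    """Replace each inline tool call with its execution result, so the model
    can condition its next reasoning step on tool feedback."""
    def dispatch(match: re.Match) -> str:
        name, arg = match.group(1), match.group(2).strip()
        return f"<result>{TOOLS[name](arg)}</result>"
    return TOOL_CALL.sub(dispatch, model_output)
```

In a full reasoning loop this substitution would run after every generation step, with the `<result>` text appended to the context before decoding resumes.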
📊 Overall Performance
As shown below, Tool-Star demonstrates strong overall reasoning performance across more than 10 challenging computational reasoning tasks (e.g., AIME24 and MATH500) and knowledge-intensive reasoning tasks (e.g., WebWalker and HotpotQA), while ensuring both efficiency and reliability in tool usage.