STEVE: A Step Verification Pipeline for Computer-use Agent Training
Abstract
Developing AI agents that autonomously manipulate graphical user interfaces is a long-standing and challenging task. Recent advances in data scaling laws inspire us to train computer-use agents on a scaled instruction set, yet training agents with behavior cloning still demands an immense amount of high-quality trajectories. To meet this scalability need, we design STEVE, a step verification pipeline for computer-use agent training. First, we establish a large instruction set for computer-use agents and collect trajectory data with some suboptimal agents. GPT-4o is then used to verify the correctness of each step in the trajectories based on the screens before and after the action execution, assigning each step a binary label. Finally, we adopt Kahneman-Tversky Optimization (KTO) to optimize the agent from the binary stepwise labels. Extensive experiments show that our agent outperforms supervised fine-tuning by leveraging both positive and negative actions within a trajectory. Moreover, STEVE enables us to train a 7B vision-language model as a computer-use agent, achieving leading performance in the challenging live desktop environment WinAgentArena with high efficiency at reduced cost. Code and data: https://github.com/FanbinLu/STEVE.
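The core of the pipeline is the GPT-4o step verifier. Below is a minimal sketch, not the authors' exact prompt or output schema, of how such a verifier could be queried: the task, the executed action, and the screenshots taken before and after execution are sent to GPT-4o, which returns a binary judgment. The prompt wording, the image-encoding helper, and the YES/NO parsing are illustrative assumptions.

```python
# Hedged sketch of a GPT-4o step verifier: judge one executed action from
# before/after screenshots. Prompt and parsing are assumptions, not the
# paper's exact protocol.
import base64
from openai import OpenAI

client = OpenAI()

def encode_image(path: str) -> str:
    """Read a screenshot from disk and base64-encode it for the API."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

def verify_step(task: str, action: str, screen_before: str, screen_after: str) -> bool:
    """Return True if GPT-4o judges the action a correct step toward the task."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Task: {task}\nExecuted action: {action}\n"
                         "Given the screenshots before and after the action, "
                         "answer YES if the action makes correct progress "
                         "toward the task, otherwise answer NO."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{encode_image(screen_before)}"}},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{encode_image(screen_after)}"}},
            ],
        }],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")
```

The binary labels then drive the KTO objective, which, unlike pairwise preference losses, consumes unpaired desirable and undesirable examples and thus fits stepwise labels naturally. Below is a minimal sketch of a KTO-style stepwise loss, assuming per-step action log-probabilities under the trained policy and a frozen reference model; the batch-level reference point `z0` and the weighting hyperparameters are simplifications of the published KTO formulation, and the paper's exact setup may differ.

```python
# Hedged sketch of a KTO-style loss on stepwise binary labels.
# policy_logps / ref_logps: per-step action log-probabilities under the
# trained policy and a frozen reference model (assumed inputs).
import torch

def kto_step_loss(policy_logps: torch.Tensor,
                  ref_logps: torch.Tensor,
                  labels: torch.Tensor,
                  beta: float = 0.1,
                  lambda_pos: float = 1.0,
                  lambda_neg: float = 1.0) -> torch.Tensor:
    """labels: 1 for steps verified as correct, 0 for incorrect ones."""
    log_ratio = policy_logps - ref_logps          # implicit per-step reward
    z0 = log_ratio.mean().clamp(min=0).detach()   # batch-level reference point
    desirable = labels.bool()
    # Desirable steps are pushed above the reference point,
    # undesirable steps below it.
    losses = torch.where(
        desirable,
        lambda_pos * (1.0 - torch.sigmoid(beta * (log_ratio - z0))),
        lambda_neg * (1.0 - torch.sigmoid(beta * (z0 - log_ratio))),
    )
    return losses.mean()
```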
Community
A good paper on computer-use agents; the agent can handle long-reasoning tasks (see the demo on the GitHub pages).
Code, model, and data: https://github.com/FanbinLu/STEVE
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- STeCa: Step-level Trajectory Calibration for LLM Agent Learning (2025)
- AI Agents for Computer Use: A Review of Instruction-based Computer Control, GUI Automation, and Operator Assistants (2025)
- Optimus-2: Multimodal Minecraft Agent with Goal-Observation-Action Conditioned Policy (2025)
- Agent-R: Training Language Model Agents to Reflect via Iterative Self-Training (2025)
- Scaling Autonomous Agents via Automatic Reward Modeling And Planning (2025)
- Process Reward Models for LLM Agents: Practical Framework and Directions (2025)
- EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents (2025)