---
language:
- en
license: apache-2.0
task_categories:
- text-generation
- image-to-text
dataset_info:
  features:
  - name: image
    dtype: image
  - name: instruction
    dtype: string
  - name: bbox
    sequence: float64
  - name: bucket
    dtype: string
  splits:
  - name: test
    num_bytes: 334903619
    num_examples: 1639
  download_size: 334903619
  dataset_size: 334903619
configs:
- config_name: default
  data_files:
  - split: test
    path: test*
---

# WebClick: A Multimodal Localization Benchmark for Web-Navigation Models

We introduce WebClick, a high-quality benchmark dataset for evaluating the navigation and localization capabilities of multimodal models and agents in web environments. WebClick features 1,639 English-language web screenshots from over 100 websites, paired with precisely annotated natural-language instructions and pixel-level click targets, in the same format as the widely used Screenspot benchmark.

## Design Goals and Use Case

WebClick is designed to measure and advance the ability of AI systems to understand web interfaces, interpret user instructions, and take accurate actions within digital environments. The dataset contains three distinct groups of web screenshots that capture a range of real-world navigation scenarios, from agent-based web retrieval to human tasks such as online shopping and calendar management.

On a more technical level, this benchmark is intended for assessing multimodal models on their ability to navigate web interfaces, evaluating AI agents' understanding of UI elements and their functions, and testing models' ability to ground natural-language instructions to specific interactive elements.

## Dataset Structure

The dataset contains 1,639 samples divided into three key groups:

1. **`agentbrowse` (36%)**: Pages encountered by the Surfer-H agent while solving web retrieval tasks from [WebVoyager](https://arxiv.org/abs/2401.13919)
2. **`humanbrowse` (31.8%)**: Pages and elements interacted with by humans performing everyday tasks (e-shopping, trip planning, personal organization)
3. **`calendars` (32.2%)**: A specialized subset focusing on calendar interfaces, a known challenge for UI understanding models

Each sample consists of the following fields (see the loading sketch at the end of this section):

- **`image`**: A screenshot of a web page
- **`instruction`**: A natural-language instruction describing the desired action
- **`bbox`**: Coordinates of the bounding box (relative to the image dimensions) that identify the correct click target, such as an input field or a button
- **`bucket`**: The group this sample belongs to, one of `agentbrowse`, `humanbrowse`, or `calendars`

The dataset includes several challenging scenarios:

- Disambiguation between similar elements (e.g., "the login button in the middle" vs. "the login button in the top-right")
- Cases where OCR is insufficient because the visible text is not the interactive element
- Navigation requiring understanding of relative spatial relationships between information and interaction points
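The fields above follow the `datasets` feature schema declared at the top of this card. Below is a minimal sketch for loading the benchmark and inspecting one record; the repository id `Hcompany/WebClick` is an assumption and may need to be replaced with the dataset's actual Hub path.

```python
# Minimal sketch: load WebClick and inspect one record.
# NOTE: the repository id "Hcompany/WebClick" is an assumption; replace it
# with the dataset's actual Hub path if it differs.
from datasets import load_dataset

ds = load_dataset("Hcompany/WebClick", split="test")

sample = ds[0]
print(sample["instruction"])  # natural-language instruction, e.g. "Choose 18:45"
print(sample["bucket"])       # "agentbrowse", "humanbrowse", or "calendars"
print(sample["bbox"])         # bounding box relative to the image dimensions
sample["image"].save("example.png")  # the screenshot is decoded as a PIL image
```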
## Dataset Creation: High-Quality Annotations and Natural-Language Instructions

A key strength of this benchmark is its meticulous annotation: all bounding boxes correspond precisely to HTML element boundaries, ensuring rigorous evaluation of model performance. Each screenshot is paired with natural-language instructions that simulate realistic navigation requests, requiring models not only to understand UI elements but also to interpret contextual relationships between visual elements.

### Curation Rationale

WebClick focuses on realism by capturing authentic interactions: actions taken by humans and agents. The records of WebClick are English-language, desktop-size screenshots of more than 100 websites. Each record points to an element outlined by a rectangular bounding box, together with the intent corresponding to it. In particular, the dataset focuses on providing bounding boxes and intents that are not ambiguous, which increases the trustworthiness of evaluating a VLM on this data.

### Challenging Examples for UI Element Selection

With this new benchmark, H Company aims to unlock new capabilities in VLMs and to stimulate the progress of web agents.

Our dataset includes examples that go beyond standard object detection or OCR, requiring genuine **UI understanding** and **instruction-based visual reasoning**. These examples highlight failure points in current models and test capabilities critical for real-world interaction with user interfaces, demonstrating H Company's commitment to creating targeted benchmarks around challenging areas.

### Key Challenges Captured in the Benchmark

- **UI Understanding**: Tasks require comprehension of common UI conventions (e.g., icons, labels, layout). For instance, identifying the correct user settings button may involve recognizing a gear icon, and adding a specific product to a cart may require interpreting both imagery and adjacent labels. State-of-the-art models often fail at such tasks due to a lack of contextual or semantic UI awareness.
- **Instruction-Based Disambiguation**: Some instructions describe targets by spatial position, appearance, or intent (e.g., "middle of the page", "green button"). Solving these tasks requires combining the textual instruction with visual reasoning, a combination most models do not yet handle robustly.
- **Calendar Navigation**: Even frontier models struggle to interact with calendar widgets. Understanding which dates are available (e.g., not grayed out or marked unavailable) is a frequent failure case, demonstrating gaps in dynamic UI interpretation. The `calendars` bucket isolates this challenge; see the sketch after this list.
- **Format and Location Sensitivity**: Instructions that rely on regional formats, such as time ("18:45") or date representations, test the model's resilience to location-specific variations. Models trained on culturally homogeneous data often perform poorly here.
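Because calendar interfaces are a frequent failure case, it can be useful to evaluate that bucket in isolation. A minimal sketch, assuming `ds` is the `test` split loaded with the `datasets` snippet in the Dataset Structure section:

```python
# Minimal sketch: isolate the calendar screenshots for a targeted evaluation.
# Assumes `ds` is the "test" split loaded earlier with `load_dataset`.
calendars = ds.filter(lambda example: example["bucket"] == "calendars")
print(len(calendars))  # roughly 32% of the 1,639 samples
```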
### Example Tasks

| **Category** | **Instruction** | **Image** |
|------------------------|------------------------------------------------|-----------|
| UI Understanding | Access user account settings | ![Access user account settings](./examples/Access%20user%20account%20settings.png) |
| UI Understanding | Add Insignia cable to cart | ![Add Insignia cable to cart](./examples/Add%20Insignia%20cable%20to%20cart.png) |
| UI Understanding | Pick the first available date | ![Pick the first available date](./examples/Pick%20the%20first%20available%20date.png) |
| Format Understanding | Choose 18:45 | ![Choose 18:45](./examples/Choose%2018_45.png) |
| UI Disambiguation | Green button to create a travel alert | ![Green Button to create a travel alert](./examples/Green%20Button%20to%20create%20a%20travel%20alert.png) |
| UI Disambiguation | Log in button (middle of the page) | ![log in button (middle of the page)](./examples/log%20in%20button%20(middle%20of%20the%20page).png) |
| UI Disambiguation | Select fifth image in gallery | ![Select fifth image in gallery](./examples/Select%20fifth%20image%20in%20gallery.png) |
| Calendar Understanding | Select Aug 7th | ![Select aug 7th](./examples/Select%20aug%207th.png) |

## Results of Popular Models

To put our benchmark into context, we evaluate a set of popular pre-trained models on WebClick alongside the widely used Screenspot [1] and Screenspot V2 [2] benchmarks. The table shows that the models mostly underperform on WebClick compared to both Screenspot benchmarks, indicating that WebClick is the more challenging benchmark. We also find that WebClick provides a better signal for downstream performance in agentic applications of the models (a scoring sketch follows the table).

| **Model** | **WebClick (ours)** | **Screenspot** | **Screenspot V2** |
|-----------------------------------|---------------------|----------------|-------------------|
| osunlp/UGround-V1-2B [3] | 71.69% | 77.12% | 79.31% |
| osunlp/UGround-V1-7B [3] | 82.37% | 85.69% | 84.26% |
| Qwen/Qwen2.5-VL-3B-Instruct [4] | 71.15% | 82.78% | 84.34% |
| Qwen/Qwen2.5-VL-7B-Instruct [4] | 74.37% | 85.53% | 88.04% |
| ByteDance-Seed/UI-TARS-2B-SFT [5] | 64.23% | 66.82% | 69.39% |
| ByteDance-Seed/UI-TARS-7B-DPO [5] | 80.67% | 84.20% | 86.70% |
| Holo1-3B | 81.50% | 86.01% | 87.33% |
| Holo1-7B | 84.03% | 87.42% | 89.85% |
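The numbers above are grounding accuracies. Below is a minimal scoring sketch, assuming the Screenspot-style criterion (a prediction counts as correct when the predicted click point falls inside the ground-truth bounding box), that `bbox` is ordered `[x_min, y_min, x_max, y_max]` in coordinates relative to the image dimensions, and that `model_predict` is a hypothetical function returning a relative `(x, y)` click point; the exact evaluation protocol is described in the paper.

```python
# Minimal scoring sketch under the assumed Screenspot-style criterion:
# a prediction is correct if the predicted click point lies inside the
# ground-truth bounding box. Assumptions (not specified in this card):
# `bbox` is [x_min, y_min, x_max, y_max] relative to the image dimensions,
# and `model_predict` is a hypothetical function returning a relative (x, y).
from collections import defaultdict

def point_in_bbox(x, y, bbox):
    x_min, y_min, x_max, y_max = bbox
    return x_min <= x <= x_max and y_min <= y <= y_max

correct, total = defaultdict(int), defaultdict(int)
for sample in ds:
    x, y = model_predict(sample["image"], sample["instruction"])
    total[sample["bucket"]] += 1
    correct[sample["bucket"]] += point_in_bbox(x, y, sample["bbox"])

for bucket in sorted(total):
    print(f"{bucket}: {correct[bucket] / total[bucket]:.2%}")
print(f"overall: {sum(correct.values()) / sum(total.values()):.2%}")
```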
### Annotations

Annotations were created by UI experts with specialized knowledge of web interfaces. Each screenshot was paired with a natural-language instruction describing an intended action and a bounding box precisely matching HTML element boundaries. All labels were hand-written or hand-reviewed, and instructions were rewritten where needed so that they contain only unambiguous intents rather than visual descriptions. Screenshots were manually reviewed to avoid any personal information, with any identifiable data removed or anonymized.

### License

- **Curated by:** H Company
- **Language:** English
- **License:** Apache 2.0

### Dataset Sources

- **Paper:** https://arxiv.org/abs/2506.02865

## Citation

[1] SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents. Kanzhi Cheng, Qiushi Sun, Yougang Chu, Fangzhi Xu, Yantao Li, Jianbing Zhang, Zhiyong Wu. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Aug. 2024.

[2] OS-ATLAS: A Foundation Action Model for Generalist GUI Agents. Zhiyong Wu, Zhenyu Wu, Fangzhi Xu, Yian Wang, Qiushi Sun, Chengyou Jia, Kanzhi Cheng, Zichen Ding, Liheng Chen, Paul Pu Liang, Yu Qiao. arXiv preprint arXiv:2410.23218 (2024).

[3] Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents. Boyu Gou, Ruohan Wang, Boyuan Zheng, Yanan Xie, Cheng Chang, Yiheng Shu, Huan Sun, Yu Su. The Thirteenth International Conference on Learning Representations (2025).

[4] Qwen2.5-VL Technical Report. Qwen Team. arXiv preprint arXiv:2502.13923 (2025).

[5] UI-TARS: Pioneering Automated GUI Interaction with Native Agents. Yujia Qin, Yining Ye, Junjie Fang, Haoming Wang, Shihao Liang, Shizuo Tian, Junda Zhang, Jiahao Li, Yunxin Li, Shijue Huang, Wanjun Zhong, Kuanye Li, Jiale Yang, Yu Miao, Woyu Lin, Longxiang Liu, Xu Jiang, Qianli Ma, Jingyu Li, Xiaojun Xiao, Kai Cai, Chuang Li, Yaowei Zheng, Chaolin Jin, Chen Li, Xiao Zhou, Minchao Wang, Haoli Chen, Zhaojian Li, Haihua Yang, Haifeng Liu, Feng Lin, Tao Peng, Xin Liu, Guang Shi. arXiv preprint arXiv:2501.12326 (2025).

**BibTeX:**

```
@dataset{hcompany2025uinavigate,
  author    = {H Company Research Team},
  title     = {WebClick: A Multimodal Localization Benchmark for Web-Navigation Models},
  year      = {2025},
  publisher = {H Company},
}

@misc{andreux2025surferhmeetsholo1costefficient,
  title         = {Surfer-H Meets Holo1: Cost-Efficient Web Agent Powered by Open Weights},
  author        = {Mathieu Andreux and Breno Baldas Skuk and Hamza Benchekroun and Emilien Biré and Antoine Bonnet and Riaz Bordie and Matthias Brunel and Pierre-Louis Cedoz and Antoine Chassang and Mickaël Chen and Alexandra D. Constantinou and Antoine d'Andigné and Hubert de La Jonquière and Aurélien Delfosse and Ludovic Denoyer and Alexis Deprez and Augustin Derupti and Michael Eickenberg and Mathïs Federico and Charles Kantor and Xavier Koegler and Yann Labbé and Matthew C. H. Lee and Erwan Le Jumeau de Kergaradec and Amir Mahla and Avshalom Manevich and Adrien Maret and Charles Masson and Rafaël Maurin and Arturo Mena and Philippe Modard and Axel Moyal and Axel Nguyen Kerbel and Julien Revelle and Mats L. Richter and María Santos and Laurent Sifre and Maxime Theillard and Marc Thibault and Louis Thiry and Léo Tronchon and Nicolas Usunier and Tony Wu},
  year          = {2025},
  eprint        = {2506.02865},
  archivePrefix = {arXiv},
  primaryClass  = {cs.AI},
  url           = {https://arxiv.org/abs/2506.02865},
}
```

## Dataset Card Contact

research@hcompany.ai