Upload data.json with huggingface_hub
data.json
CHANGED
@@ -109781,5 +109781,97 @@
  ],
  "github": "",
  "abstract": "Human processes video reasoning in a sequential spatio-temporal reasoning logic, we first identify the relevant frames (\"when\") and then analyse the spatial relationships (\"where\") between key objects, and finally leverage these relationships to draw inferences (\"what\"). However, can Video Large Language Models (Video-LLMs) also \"reason through a sequential spatio-temporal logic\" in videos? Existing Video-LLM benchmarks primarily focus on assessing object presence, neglecting relational reasoning. Consequently, it is difficult to measure whether a model truly comprehends object interactions (actions/events) in videos or merely relies on pre-trained \"memory\" of co-occurrences as biases in generating answers. In this work, we introduce a Video Spatio-Temporal Reasoning (V-STaR) benchmark to address these shortcomings. The key idea is to decompose video understanding into a Reverse Spatio-Temporal Reasoning (RSTR) task that simultaneously evaluates what objects are present, when events occur, and where they are located while capturing the underlying Chain-of-thought (CoT) logic. To support this evaluation, we construct a dataset to elicit the spatial-temporal reasoning process of Video-LLMs. It contains coarse-to-fine CoT questions generated by a semi-automated GPT-4-powered pipeline, embedding explicit reasoning chains to mimic human cognition. Experiments from 14 Video-LLMs on our V-STaR reveal significant gaps between current Video-LLMs and the needs for robust and consistent spatio-temporal reasoning."
+ },
+ {
+ "date": "2025-03-18",
+ "arxiv_id": "2503.13327",
+ "title": "Edit Transfer: Learning Image Editing via Vision In-Context Relations",
+ "authors": [
+ "Lan Chen",
+ "Qi Mao",
+ "Yuchao Gu",
+ "Mike Zheng Shou"
+ ],
+ "github": "",
+ "abstract": "We introduce a new setting, Edit Transfer, where a model learns a transformation from just a single source-target example and applies it to a new query image. While text-based methods excel at semantic manipulations through textual prompts, they often struggle with precise geometric details (e.g., poses and viewpoint changes). Reference-based editing, on the other hand, typically focuses on style or appearance and fails at non-rigid transformations. By explicitly learning the editing transformation from a source-target pair, Edit Transfer mitigates the limitations of both text-only and appearance-centric references. Drawing inspiration from in-context learning in large language models, we propose a visual relation in-context learning paradigm, building upon a DiT-based text-to-image model. We arrange the edited example and the query image into a unified four-panel composite, then apply lightweight LoRA fine-tuning to capture complex spatial transformations from minimal examples. Despite using only 42 training samples, Edit Transfer substantially outperforms state-of-the-art TIE and RIE methods on diverse non-rigid scenarios, demonstrating the effectiveness of few-shot visual relation learning."
+ },
+ {
+ "date": "2025-03-18",
+ "arxiv_id": "2503.12885",
+ "title": "DreamRenderer: Taming Multi-Instance Attribute Control in Large-Scale Text-to-Image Models",
+ "authors": [
+ "Dewei Zhou",
+ "Mingwei Li",
+ "Zongxin Yang",
+ "Yi Yang"
+ ],
+ "github": "",
+ "abstract": "Image-conditioned generation methods, such as depth- and canny-conditioned approaches, have demonstrated remarkable abilities for precise image synthesis. However, existing models still struggle to accurately control the content of multiple instances (or regions). Even state-of-the-art models like FLUX and 3DIS face challenges, such as attribute leakage between instances, which limits user control. To address these issues, we introduce DreamRenderer, a training-free approach built upon the FLUX model. DreamRenderer enables users to control the content of each instance via bounding boxes or masks, while ensuring overall visual harmony. We propose two key innovations: 1) Bridge Image Tokens for Hard Text Attribute Binding, which uses replicated image tokens as bridge tokens to ensure that T5 text embeddings, pre-trained solely on text data, bind the correct visual attributes for each instance during Joint Attention; 2) Hard Image Attribute Binding applied only to vital layers. Through our analysis of FLUX, we identify the critical layers responsible for instance attribute rendering and apply Hard Image Attribute Binding only in these layers, using soft binding in the others. This approach ensures precise control while preserving image quality. Evaluations on the COCO-POS and COCO-MIG benchmarks demonstrate that DreamRenderer improves the Image Success Ratio by 17.7% over FLUX and enhances the performance of layout-to-image models like GLIGEN and 3DIS by up to 26.8%. Project Page: https://limuloo.github.io/DreamRenderer/."
+ },
+ {
+ "date": "2025-03-18",
+ "arxiv_id": "2503.12533",
+ "title": "Being-0: A Humanoid Robotic Agent with Vision-Language Models and Modular Skills",
+ "authors": [
+ "Haoqi Yuan",
+ "Yu Bai",
+ "Yuhui Fu",
+ "Bohan Zhou",
+ "Yicheng Feng",
+ "Xinrun Xu",
+ "Yi Zhan",
+ "B\u00f6rje F. Karlsson",
+ "Zongqing Lu"
+ ],
+ "github": "",
+ "abstract": "Building autonomous robotic agents capable of achieving human-level performance in real-world embodied tasks is an ultimate goal in humanoid robot research. Recent advances have made significant progress in high-level cognition with Foundation Models (FMs) and low-level skill development for humanoid robots. However, directly combining these components often results in poor robustness and efficiency due to compounding errors in long-horizon tasks and the varied latency of different modules. We introduce Being-0, a hierarchical agent framework that integrates an FM with a modular skill library. The FM handles high-level cognitive tasks such as instruction understanding, task planning, and reasoning, while the skill library provides stable locomotion and dexterous manipulation for low-level control. To bridge the gap between these levels, we propose a novel Connector module, powered by a lightweight vision-language model (VLM). The Connector enhances the FM's embodied capabilities by translating language-based plans into actionable skill commands and dynamically coordinating locomotion and manipulation to improve task success. With all components, except the FM, deployable on low-cost onboard computation devices, Being-0 achieves efficient, real-time performance on a full-sized humanoid robot equipped with dexterous hands and active vision. Extensive experiments in large indoor environments demonstrate Being-0's effectiveness in solving complex, long-horizon tasks that require challenging navigation and manipulation subtasks. For further details and videos, visit https://beingbeyond.github.io/being-0."
+ },
+ {
+ "date": "2025-03-18",
+ "arxiv_id": "2503.11751",
+ "title": "reWordBench: Benchmarking and Improving the Robustness of Reward Models with Transformed Inputs",
+ "authors": [
+ "Zhaofeng Wu",
+ "Michihiro Yasunaga",
+ "Andrew Cohen",
+ "Yoon Kim",
+ "Asli Celikyilmaz",
+ "Marjan Ghazvininejad"
+ ],
+ "github": "",
+ "abstract": "Reward models have become a staple in modern NLP, serving as not only a scalable text evaluator, but also an indispensable component in many alignment recipes and inference-time algorithms. However, while recent reward models increase performance on standard benchmarks, this may partly be due to overfitting effects, which would confound an understanding of their true capability. In this work, we scrutinize the robustness of reward models and the extent of such overfitting. We build **reWordBench**, which systematically transforms reward model inputs in meaning- or ranking-preserving ways. We show that state-of-the-art reward models suffer from substantial performance degradation even with minor input transformations, sometimes dropping to significantly below-random accuracy, suggesting brittleness. To improve reward model robustness, we propose to explicitly train them to assign similar scores to paraphrases, and find that this approach also improves robustness to other distinct kinds of transformations. For example, our robust reward model reduces such degradation by roughly half for the Chat Hard subset in RewardBench. Furthermore, when used in alignment, our robust reward models demonstrate better utility and lead to higher-quality outputs, winning in up to 59% of instances against a standardly trained RM."
+ },
+ {
+ "date": "2025-03-18",
+ "arxiv_id": "2503.13435",
+ "title": "WideRange4D: Enabling High-Quality 4D Reconstruction with Wide-Range Movements and Scenes",
+ "authors": [
+ "Ling Yang",
+ "Kaixin Zhu",
+ "Juanxi Tian",
+ "Bohan Zeng",
+ "Mingbao Lin",
+ "Hongjuan Pei",
+ "Wentao Zhang",
+ "Shuicheng Yan"
+ ],
+ "github": "https://github.com/Gen-Verse/WideRange4D",
+ "abstract": "With the rapid development of 3D reconstruction technology, research in 4D reconstruction is also advancing, existing 4D reconstruction methods can generate high-quality 4D scenes. However, due to the challenges in acquiring multi-view video data, the current 4D reconstruction benchmarks mainly display actions performed in place, such as dancing, within limited scenarios. In practical scenarios, many scenes involve wide-range spatial movements, highlighting the limitations of existing 4D reconstruction datasets. Additionally, existing 4D reconstruction methods rely on deformation fields to estimate the dynamics of 3D objects, but deformation fields struggle with wide-range spatial movements, which limits the ability to achieve high-quality 4D scene reconstruction with wide-range spatial movements. In this paper, we focus on 4D scene reconstruction with significant object spatial movements and propose a novel 4D reconstruction benchmark, WideRange4D. This benchmark includes rich 4D scene data with large spatial variations, allowing for a more comprehensive evaluation of the generation capabilities of 4D generation methods. Furthermore, we introduce a new 4D reconstruction method, Progress4D, which generates stable and high-quality 4D results across various complex 4D scene reconstruction tasks. We conduct both quantitative and qualitative comparison experiments on WideRange4D, showing that our Progress4D outperforms existing state-of-the-art 4D reconstruction methods. Project: https://github.com/Gen-Verse/WideRange4D"
+ },
+ {
+ "date": "2025-03-18",
+ "arxiv_id": "2503.12937",
+ "title": "R1-VL: Learning to Reason with Multimodal Large Language Models via Step-wise Group Relative Policy Optimization",
+ "authors": [
+ "Jingyi Zhang",
+ "Jiaxing Huang",
+ "Huanjin Yao",
+ "Shunyu Liu",
+ "Xikun Zhang",
+ "Shijian Lu",
+ "Dacheng Tao"
+ ],
+ "github": "",
+ "abstract": "Recent studies generally enhance MLLMs' reasoning capabilities via supervised fine-tuning on high-quality chain-of-thought reasoning data, which often leads models to merely imitate successful reasoning paths without understanding what the wrong reasoning paths are. In this work, we aim to enhance the MLLMs' reasoning ability beyond passively imitating positive reasoning paths. To this end, we design Step-wise Group Relative Policy Optimization (StepGRPO), a new online reinforcement learning framework that enables MLLMs to self-improve reasoning ability via simple, effective and dense step-wise rewarding. Specifically, StepGRPO introduces two novel rule-based reasoning rewards: Step-wise Reasoning Accuracy Reward (StepRAR) and Step-wise Reasoning Validity Reward (StepRVR). StepRAR rewards the reasoning paths that contain necessary intermediate reasoning steps via a soft key-step matching technique, while StepRAR rewards reasoning paths that follow a well-structured and logically consistent reasoning process through a reasoning completeness and logic evaluation strategy. With the proposed StepGRPO, we introduce R1-VL, a series of MLLMs with outstanding capabilities in step-by-step reasoning. Extensive experiments over 8 benchmarks demonstrate the superiority of our methods."
  }
  ]
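For reference, a commit with this message can be produced programmatically. Below is a minimal sketch using huggingface_hub's HfApi.upload_file; the repo id is a placeholder for illustration, not the actual repository behind this page.

from huggingface_hub import HfApi

# Minimal sketch: push an updated data.json to a dataset repo on the Hub.
# "your-username/daily-papers" is a placeholder repo id, not the real repository.
api = HfApi()
api.upload_file(
    path_or_fileobj="data.json",            # local file to upload
    path_in_repo="data.json",               # destination path inside the repo
    repo_id="your-username/daily-papers",   # placeholder dataset id (assumed)
    repo_type="dataset",
    commit_message="Upload data.json with huggingface_hub",
)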