HumanSense: From Multimodal Perception to Empathetic Context-Aware Responses through Reasoning MLLMs
Abstract
HumanSense is a benchmark for evaluating human-centered perception and interaction in Multimodal Large Language Models, focusing on understanding extended multimodal contexts and formulating rational, empathetic feedback; reasoning abilities strengthened via reinforcement learning yield substantial gains on it.
While Multimodal Large Language Models (MLLMs) show immense promise for achieving truly human-like interactions, progress is hindered by the lack of fine-grained evaluation frameworks for human-centered scenarios, which encompass both the understanding of complex human intentions and the provision of empathetic, context-aware responses. Here we introduce HumanSense, a comprehensive benchmark designed to evaluate the human-centered perception and interaction capabilities of MLLMs, with a particular focus on deep understanding of extended multimodal contexts and the formulation of rational feedback. Our evaluation reveals that leading MLLMs still have considerable room for improvement, particularly on advanced interaction-oriented tasks. Supplementing visual input with audio and text information yields substantial improvements, and Omni-modal models show clear advantages on these tasks. Furthermore, we argue that appropriate feedback stems from a contextual analysis of the interlocutor's needs and emotions, and that reasoning ability is the key to unlocking it. Accordingly, we employ a multi-stage, modality-progressive reinforcement learning approach to enhance the reasoning abilities of an Omni model, achieving substantial gains on the benchmark. Additionally, we observe that successful reasoning processes exhibit highly consistent thought patterns; by designing corresponding prompts, we also enhance the performance of non-reasoning models in a training-free manner. Project page: https://digital-avatar.github.io/ai/HumanSense/
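The training-free prompting mentioned above can be pictured with a minimal sketch. The block below is only an illustration under assumptions: the staged scaffold (context → emotion → needs → response) is inferred from the abstract's claim that appropriate feedback follows from a contextual analysis of the interlocutor's needs and emotions, and the helper names are hypothetical; the paper's actual prompts and observed thought patterns are not reproduced here.

```python
# Illustrative sketch of a training-free "reasoning scaffold" prompt for a
# non-reasoning MLLM. The stage names are an assumption inferred from the
# abstract, not the paper's verbatim prompts.

REASONING_SCAFFOLD = """You are assisting in a human-centered conversation.
Before answering, reason step by step:
1. Context: summarize what has happened so far across video, audio, and text.
2. Emotion: infer the interlocutor's current emotional state and its cause.
3. Needs: state what the interlocutor most likely wants or needs right now.
4. Response: give a concise, empathetic reply consistent with steps 1-3.
Label each step explicitly before giving the final response."""


def build_messages(dialogue_context: str, user_utterance: str) -> list[dict]:
    """Wrap a user turn with the reasoning scaffold (system prompt) so a
    non-reasoning model is nudged into a consistent thought pattern."""
    return [
        {"role": "system", "content": REASONING_SCAFFOLD},
        {
            "role": "user",
            "content": f"Context so far:\n{dialogue_context}\n\n"
                       f"Latest utterance:\n{user_utterance}",
        },
    ]


if __name__ == "__main__":
    # Hypothetical example turn; the message list can be passed to any
    # chat-style MLLM endpoint.
    msgs = build_messages(
        dialogue_context="The speaker sighs repeatedly and mentions a missed deadline.",
        user_utterance="I don't even know where to start fixing this.",
    )
    for m in msgs:
        print(f"[{m['role']}]\n{m['content']}\n")
```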
Community
We introduce the HumanSense benchmark to explore MLLMs' capabilities in complex human-centered perception and interaction scenarios. We propose that omni-modal reasoning can enhance MLLMs' performance on such tasks. We aim to inspire the community to recognize the potential of MLLMs in advancing AI interaction experiences.
Project page: https://digital-avatar.github.io/ai/HumanSense/
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- MME-Emotion: A Holistic Evaluation Benchmark for Emotional Intelligence in Multimodal Large Language Models (2025)
- HumanOmniV2: From Understanding to Omni-Modal Reasoning with Context (2025)
- MOMENTS: A Comprehensive Multimodal Benchmark for Theory of Mind (2025)
- MultiVox: Benchmarking Voice Assistants for Multimodal Interactions (2025)
- OmniEval: A Benchmark for Evaluating Omni-modal Models with Visual, Auditory, and Textual Inputs (2025)
- OSUM-EChat: Enhancing End-to-End Empathetic Spoken Chatbot via Understanding-Driven Spoken Dialogue (2025)
- ThinkSound: Chain-of-Thought Reasoning in Multimodal Large Language Models for Audio Generation and Editing (2025)