arxiv:2508.10576

HumanSense: From Multimodal Perception to Empathetic Context-Aware Responses through Reasoning MLLMs

Published on Aug 14 · Submitted by RobinRoaR on Aug 15
Abstract

AI-generated summary: HumanSense is a benchmark for evaluating human-centered perception and interaction in Multimodal Large Language Models, focusing on multimodal context understanding and rational feedback through reinforcement learning.

While Multimodal Large Language Models (MLLMs) show immense promise for achieving truly human-like interactions, progress is hindered by the lack of fine-grained evaluation frameworks for human-centered scenarios, encompassing both the understanding of complex human intentions and the provision of empathetic, context-aware responses. Here we introduce HumanSense, a comprehensive benchmark designed to evaluate the human-centered perception and interaction capabilities of MLLMs, with a particular focus on deep understanding of extended multimodal contexts and the formulation of rational feedback. Our evaluation reveals that leading MLLMs still have considerable room for improvement, particularly on advanced interaction-oriented tasks. Supplementing visual input with audio and text information yields substantial improvements, and Omni-modal models show advantages on these tasks. Furthermore, we argue that appropriate feedback stems from a contextual analysis of the interlocutor's needs and emotions, with reasoning ability serving as the key to unlocking it. Accordingly, we employ a multi-stage, modality-progressive reinforcement learning strategy to enhance the reasoning abilities of an Omni model, achieving substantial gains in evaluation results. Additionally, we observe that successful reasoning processes exhibit highly consistent thought patterns. By designing corresponding prompts, we also enhance the performance of non-reasoning models in a training-free manner. Project page: https://digital-avatar.github.io/ai/HumanSense/
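As a rough illustration of the training-free idea mentioned above (eliciting a consistent "analyze the context, infer needs and emotions, then respond" thought pattern through prompting), the sketch below wraps a generic chat model in such a scaffold. The prompt wording, the model name, and the use of an OpenAI-compatible client are assumptions made purely for illustration; the paper's actual prompt templates, models, and evaluation protocol are described in the paper and on the project page.

```python
# Hypothetical sketch of a training-free "reasoning scaffold" prompt for a
# non-reasoning chat model, in the spirit of the prompt-based gains reported
# in the abstract. The exact HumanSense prompts and thought patterns are NOT
# reproduced here; the template, model name, and API choice are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

REASONING_SCAFFOLD = (
    "Before answering, think through the following steps:\n"
    "1. Summarize what is happening in the conversation context.\n"
    "2. Infer the interlocutor's emotional state and underlying needs.\n"
    "3. Decide what kind of feedback is appropriate (comfort, advice, "
    "clarification, etc.).\n"
    "Then give a single empathetic, context-aware response."
)

def empathetic_reply(context_transcript: str, model: str = "gpt-4o-mini") -> str:
    """Ask a chat model for a context-aware reply using the reasoning scaffold."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": REASONING_SCAFFOLD},
            {"role": "user", "content": context_transcript},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    transcript = (
        "Speaker A (tired voice, long pause): I stayed up all night fixing the "
        "demo and it still crashed in front of the client."
    )
    print(empathetic_reply(transcript))
```

In the paper's setting the inputs are extended multimodal contexts (video, audio, and text); the plain-text transcript here only stands in for that richer input, which the abstract reports is where Omni-modal models show their advantage.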

Community

Paper submitter

We introduce the HumanSense benchmark to explore MLLMs' capabilities in complex human-centered perception and interaction scenarios. We propose that omni-modal reasoning can enhance MLLMs' performance on such tasks. We aim to inspire the community to recognize the potential of MLLMs in advancing AI interaction experiences.

Project page: https://digital-avatar.github.io/ai/HumanSense/



Models citing this paper 0

No model linking this paper


Datasets citing this paper 0

No dataset linking this paper


Spaces citing this paper 0

No Space linking this paper


Collections including this paper 1