---
license: mit
language:
  - en
task_categories:
  - question-answering
tags:
  - Multimodal
  - Theory_of_Mind
size_categories:
  - n<1K
---

MMToM-QA is the first multimodal benchmark for evaluating machine Theory of Mind (ToM), the ability to understand people's minds. It was introduced in the paper "MMToM-QA: Multimodal Theory of Mind Question Answering", which received the Outstanding Paper Award at ACL 2024.

MMToM-QA systematically evaluates the cognitive ability to understand people's minds on both multimodal and unimodal inputs. The benchmark consists of 600 questions grouped into seven types, covering belief inference and goal inference in rich and diverse situations: three belief-inference types with 100 questions each (300 belief questions in total) and four goal-inference types with 75 questions each (300 goal questions in total).

Currently, only the text-only version of MMToM-QA is available on Hugging Face. For the multimodal and video-only versions, please visit the GitHub repository: https://github.com/chuanyangjin/MMToM-QA
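
The text-only questions can be loaded with the Hugging Face `datasets` library. The snippet below is a minimal sketch: the repository id, split name, and field layout are assumptions and should be checked against the dataset page before use.

```python
from datasets import load_dataset

# Assumed repository id -- verify it matches this dataset's page on the Hub.
dataset = load_dataset("chuanyangjin/MMToM-QA")

# Assumed split name; print the available splits to confirm.
print(dataset)

# Each item is expected to pair a textual scene/action description with a
# Theory of Mind question and its answer choices (field names may differ).
example = dataset[list(dataset.keys())[0]][0]
print(example)
```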