---
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: metadata
    dtype: string
  - name: task
    dtype: string
  - name: dataset
    dtype: string
  splits:
  - name: train
    num_bytes: 432885480
    num_examples: 30647
  download_size: 132698519
  dataset_size: 432885480
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Dataset Card for MMTU

## Dataset Summary

|[**🛠️GitHub**](https://github.com/MMTU-Benchmark/MMTU/tree/main) |[**🏆Leaderboard**](#leaderboard) |[**📖 Paper**](https://arxiv.org/abs/2506.05587) |

**MMTU: A Massive Multi-Task Table Understanding and Reasoning Benchmark**, by [Junjie Xing](https://www.microsoft.com/en-us/research/people/junjiexing/), [Yeye He](https://www.microsoft.com/en-us/research/people/yeyehe/), [Mengyu Zhou](https://www.microsoft.com/en-us/research/people/mezho/), [Haoyu Dong](https://www.microsoft.com/en-us/research/people/hadong/), [Shi Han](https://www.microsoft.com/en-us/research/people/shihan/), [Lingjiao Chen](https://www.microsoft.com/en-us/research/people/lingjiaochen/), [Dongmei Zhang](https://www.microsoft.com/en-us/research/people/dongmeiz/), [Surajit Chaudhuri](https://www.microsoft.com/en-us/research/people/surajitc/), and [H. V. Jagadish](https://web.eecs.umich.edu/~jag/).

Tables and table-based use cases play a crucial role in many real-world applications, such as spreadsheets, databases, and computational notebooks, which traditionally require expert-level users like data engineers, analysts, and database administrators to operate. Although LLMs have shown remarkable progress in working with tables, comprehensive benchmarking of such capabilities remains limited, often narrowly focusing on tasks like NL-to-SQL and Table-QA while overlooking the broader spectrum of real-world tasks that professional users face today.

We introduce **MMTU**, a large-scale benchmark with over **30K questions** across **25 real-world table tasks**, designed to comprehensively evaluate models' ability to understand, reason over, and manipulate real tables at an expert level. These tasks are drawn from decades' worth of computer science research on tabular data, with a focus on complex table tasks faced by professional users. We show that MMTU requires a combination of skills -- including table understanding, reasoning, and coding -- that remains challenging for today's frontier models: even frontier reasoning models like OpenAI o4-mini and DeepSeek R1 score only around 60%, suggesting significant room for improvement. Our evaluation code is available on [GitHub](https://github.com/MMTU-Benchmark/MMTU/tree/main).

## Dataset Creation

MMTU was developed through the meticulous curation of 52 datasets across 25 task categories, each carefully labeled by computer science researchers, drawing on decades' worth of research on tabular data from communities such as data management (SIGMOD/VLDB), programming languages (PLDI/POPL), and web data (WWW/WSDM). The benchmark emphasizes real-world, complex table tasks encountered by professional users: tasks that demand advanced skills in table understanding, coding, and reasoning. Please see the table below for key statistics of the benchmark.
A complete list of tasks: `table-transform-by-relationalization`, `table-transform-by-output-schema`, `table-transform-by-output-table`, `Entity matching`, `Schema matching`, `Head value matching`, `data-imputation`, `error-detection`, `list-to-table`, `semantic-join`, `equi-join-detect`, `program-transform-by-example`, `formula-by-context`, `semantic-transform-by-example`, `arithmetic-relationship`, `functional-relationship`, `string-relationship`, `Needle-in-a-haystack-table`, `Needle-in-a-haystack-index`, `NL-2-SQL`, `Table Question Answering`, `Fact Verification`, `Column type annotation`, `Column property annotation`, `Cell entity annotation`.

## Leaderboard

| **Model Type** | **Model**                | **MMTU Score**   |
|----------------|--------------------------|------------------|
| Reasoning      | o4-mini (2024-11-20)     | **0.639 ± 0.01** |
| Reasoning      | DeepSeek-R1              | 0.596 ± 0.01     |
| Chat           | DeepSeek-V3              | 0.517 ± 0.01     |
| Chat           | GPT-4o (2024-11-20)      | 0.491 ± 0.01     |
| Chat           | Llama-3.3-70B            | 0.438 ± 0.01     |
| Chat           | Mistral-Large-2411       | 0.430 ± 0.01     |
| Chat           | Mistral-Small-2503       | 0.402 ± 0.01     |
| Chat           | GPT-4o-mini (2024-07-18) | 0.386 ± 0.01     |
| Chat           | Llama-3.1-8B             | 0.259 ± 0.01     |

## Language

English

## Data Structure

### Data Fields

- `prompt`: The prompt presented in the MMTU instance.
- `metadata`: Supplementary information associated with the MMTU instance, typically used for evaluation purposes.
- `task`: The specific subtask category within the MMTU framework to which the instance belongs.
- `dataset`: The original source dataset from which the MMTU instance is derived.
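For reference, the snippet below is a minimal sketch of loading the benchmark and inspecting these fields with the Hugging Face `datasets` library. The repository ID `MMTU-Benchmark/MMTU` is an assumption (substitute this card's actual repository ID), and it relies only on the single `train` split declared in the card's metadata.

```python
# Minimal sketch: load MMTU and inspect one instance.
# NOTE: the dataset ID "MMTU-Benchmark/MMTU" is an assumption; replace it
# with this card's actual repository ID if it differs.
from datasets import load_dataset

ds = load_dataset("MMTU-Benchmark/MMTU", split="train")  # single train split
print(len(ds))  # ~30,647 examples per the card metadata

example = ds[0]
print(example["task"])            # subtask category, e.g. "NL-2-SQL"
print(example["dataset"])         # original source dataset
print(example["prompt"][:300])    # prompt shown to the model (truncated)
print(example["metadata"][:300])  # evaluation metadata, stored as a string
```

Note that `metadata` is stored as a plain string; how it should be decoded for scoring is defined by the evaluation code in the GitHub repository linked above.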