arxiv:2507.01544

MARVIS: Modality Adaptive Reasoning over VISualizations

Published on Jul 2
· Submitted by penfever on Jul 3
Abstract

Scientific applications of machine learning often rely on small, specialized models tuned to particular domains. Such models often achieve excellent performance, but lack flexibility. Foundation models offer versatility, but typically underperform specialized approaches, especially on non-traditional modalities and long-tail domains. We propose MARVIS (Modality Adaptive Reasoning over VISualizations), a training-free method that enables even small vision-language models to predict any data modality with high accuracy. MARVIS transforms latent embedding spaces into visual representations and then leverages the spatial and fine-grained reasoning skills of VLMs to successfully interpret and utilize them. MARVIS achieves competitive performance on vision, audio, biological, and tabular domains using a single 3B-parameter model, with results that beat Gemini by 16% on average and approach specialized methods, without exposing personally identifiable information (P.I.I.) or requiring any domain-specific training. We open-source our code and datasets at https://github.com/penfever/marvis.

Community

Paper submitter

Did you know your VLM has a secret identity? 🕵️

Despite their power, SOTA VLMs still struggle to interpret complex modalities such as tabular data; other key modalities, such as genomic data, are not supported at all. In this work, we convert a small VLM into a superstar "everything classifier": long-tailed visual data, scientific imagery, tabular classification and regression, even audio!

How do we do it? Our key insight: Vision is a skeleton key! 🗝️ Instead of forcing non-visual data into text, we transform ANY data into visualizations that VLMs can naturally understand and reason about. We call it MARVIS: Modality Adaptive Reasoning over VISualizations. Here's how it works (see the code sketch after the steps below):

  1. Feed your data into a specialized embedding model (think DINOv2 for images or TabPFNv2 for tables) 🔄
  2. Visualize the embedding space using one or more standard approaches (KNN, t-SNE, ...) 📊
  3. Prompt the VLM to predict based on the provided context 🤖
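
To make the three steps concrete, here is a minimal sketch of the idea, not the authors' exact code: it assumes you already have embeddings from some encoder (e.g. DINOv2 for images or TabPFNv2 for tables), projects them with t-SNE, renders a labeled scatter plot with the query point starred, and hands the image to a VLM. The function names `visualize_embeddings` and `query_vlm` and the prompt wording are illustrative; one possible `query_vlm` implementation appears further down.

```python
# Sketch of a MARVIS-style pipeline (illustrative, not the authors' code).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def visualize_embeddings(train_emb, train_labels, query_emb, out_path="context.png"):
    """Project labeled training embeddings plus one query embedding to 2-D
    with t-SNE and render a scatter plot the VLM can reason over."""
    train_labels = np.asarray(train_labels)
    all_emb = np.vstack([train_emb, query_emb[None, :]])
    # Perplexity must stay below the number of points.
    tsne = TSNE(n_components=2, perplexity=min(30, len(all_emb) - 1), random_state=0)
    coords = tsne.fit_transform(all_emb)
    train_xy, query_xy = coords[:-1], coords[-1]

    fig, ax = plt.subplots(figsize=(6, 6))
    for label in np.unique(train_labels):
        pts = train_xy[train_labels == label]
        ax.scatter(pts[:, 0], pts[:, 1], s=10, label=f"class {label}")
    ax.scatter(*query_xy, marker="*", s=200, c="black", label="query")
    ax.legend()
    fig.savefig(out_path, dpi=150)
    plt.close(fig)
    return out_path

# Usage (train_emb, train_labels, query_emb come from your embedding model):
# image_path = visualize_embeddings(train_emb, train_labels, query_emb)
# prompt = ("The starred point is an unlabeled sample plotted among labeled "
#           "training points in t-SNE space. Which class does it most likely "
#           "belong to? Answer with the class name only.")
# prediction = query_vlm(image_path, prompt)  # hypothetical VLM wrapper
```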

With this simple approach, our MARVIS-3B model:

  • Beats Gemini by 16% on average across 100s of vision and tabular tasks 🏆
  • Gets within 2.5% of the best specialized model across 4 modalities ... 🎯
  • Using just one 3B model ... 💪
  • ... without exposing any P.I.I. (personally identifiable information) to the VLM ... 🔐
  • And without requiring any model training! ⚡

MARVIS works out of the box with your favorite VLM, including thinking-enabled API models such as GPT-4V; try it via our GitHub. 🚀
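
As one way to plug in an API model, here is a hedged sketch of the hypothetical `query_vlm` helper used above, written against the OpenAI Python client. The model name, prompt handling, and helper name are assumptions for illustration; MARVIS itself is model-agnostic.

```python
# Sketch: send the rendered plot plus a prompt to an API VLM.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment;
# the model name "gpt-4o" is an illustrative choice, not from the paper.
import base64
from openai import OpenAI

def query_vlm(image_path: str, prompt: str, model: str = "gpt-4o") -> str:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```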

Our GitHub: https://github.com/penfever/marvis 💻
Our Paper: https://arxiv.org/abs/2507.01544 📄
Research Supported By: https://oumi.ai

