MARVIS: Modality Adaptive Reasoning over VISualizations
Abstract
Scientific applications of machine learning often rely on small, specialized models tuned to particular domains. Such models frequently achieve excellent performance but lack flexibility. Foundation models offer versatility, but typically underperform specialized approaches, especially on non-traditional modalities and long-tail domains. We propose MARVIS (Modality Adaptive Reasoning over VISualizations), a training-free method that enables even small vision-language models to predict any data modality with high accuracy. MARVIS transforms latent embedding spaces into visual representations and then leverages the spatial and fine-grained reasoning skills of VLMs to interpret and utilize them. Using a single 3B-parameter model, MARVIS achieves competitive performance on vision, audio, biological, and tabular domains, beating Gemini by 16% on average and approaching specialized methods, without exposing personally identifiable information (PII) or requiring any domain-specific training. We open-source our code and datasets at https://github.com/penfever/marvis.
Community
Did you know your VLM has a secret identity?
Despite their power, SOTA VLMs still struggle to interpret complex modalities such as tabular data; other key modalities, such as genomic data, are not supported at all. In this work, we convert a small VLM into a superstar "everything classifier": long-tailed visual data, scientific imagery, tabular classification and regression, even audio!
How do we do it? Our key insight: vision is a skeleton key! Instead of forcing non-visual data into text, we transform ANY data into visualizations that VLMs can naturally understand and reason about. We call it MARVIS: Modality Adaptive Reasoning over VISualizations. Here's how it works (see the sketch after this list):
- Feed your data into a specialized embedding model (think DINOv2 for images or TabPFNv2 for tables)
- Visualize the embedding space using one or more standard approaches (KNN, t-SNE, ...)
- Prompt the VLM to predict based on the provided context
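To make the recipe concrete, here is a minimal sketch of the pipeline, not the official MARVIS implementation. It assumes you already have a labeled support set of embeddings plus one query embedding (e.g., from DINOv2 or TabPFNv2); the t-SNE/plotting choices, the prompt text, and the `your_vlm` call are illustrative placeholders.

```python
# Minimal sketch of the MARVIS-style recipe (assumptions noted; not the official code).
import io
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def render_embedding_plot(support_emb, support_labels, query_emb):
    """Project support + query embeddings to 2-D with t-SNE and render a scatter plot.

    support_emb:    (N, D) NumPy array of embeddings for labeled examples
    support_labels: (N,)   NumPy array of class labels
    query_emb:      (D,)   NumPy array for the point we want the VLM to classify
    Returns PNG bytes suitable for attaching to a VLM prompt.
    """
    all_emb = np.vstack([support_emb, query_emb[None, :]])
    # perplexity must be smaller than the number of points; 30 is a common default
    coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(all_emb)
    support_2d, query_2d = coords[:-1], coords[-1]

    fig, ax = plt.subplots(figsize=(5, 5))
    for label in np.unique(support_labels):
        mask = support_labels == label
        ax.scatter(support_2d[mask, 0], support_2d[mask, 1], s=10, label=f"class {label}")
    ax.scatter(query_2d[0], query_2d[1], marker="*", s=200, c="black", label="query")
    ax.legend(loc="best", fontsize=8)
    ax.set_title("t-SNE of embedding space")

    buf = io.BytesIO()
    fig.savefig(buf, format="png", dpi=150)
    plt.close(fig)
    return buf.getvalue()

PROMPT = (
    "The image shows a t-SNE projection of an embedding space. "
    "Colored points are labeled examples; the black star is the query. "
    "Based on its position relative to the clusters, which class does the query belong to?"
)

# png_bytes = render_embedding_plot(support_emb, support_labels, query_emb)
# answer = your_vlm(images=[png_bytes], text=PROMPT)  # hypothetical VLM call
```

The only modality-specific piece is the embedding model; once the data is rendered as an image, any off-the-shelf VLM can do the final prediction step.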
With this simple approach, our MARVIS-3B model:
- Beats Gemini by 16% on average across hundreds of vision and tabular tasks
- Gets within 2.5% of the best specialized model across 4 modalities ...
- Using just one 3B model ...
- ... without exposing any personally identifiable information (PII) to the VLM ...
- And without requiring any model training!
MARVIS works out of the box with your favorite VLM, including API models with thinking modes like GPT-4V; try it on our GitHub.
Our GitHub: https://github.com/penfever/marvis
Our Paper: https://arxiv.org/abs/2507.01544
Research Supported By: https://oumi.ai