
PRIM: Towards Practical In-Image Multilingual Machine Translation (EMNLP 2025 Main)

📄 Paper: arXiv | 💻 Code: GitHub | 🤗 Training set: HuggingFace | 🤗 Model: HuggingFace

Introduction

This repository provides the PRIM benchmark, which is introduced in our paper PRIM: Towards Practical In-Image Multilingual Machine Translation.

PRIM (Practical In-Image Multilingual Machine Translation) is the first publicly available benchmark for in-image machine translation built from real-world images.

The source images are collected from [1] and [2]. We sincerely thank the authors of these datasets for making their data available.
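Once you have accepted the access conditions on the dataset page and logged in with your Hugging Face account, the benchmark can be loaded with the `datasets` library. A minimal sketch (the split name `"test"` is an assumption; check the dataset viewer for the actual configuration and split names):

```python
# Minimal sketch for loading the PRIM benchmark via the Hugging Face Hub.
# Requires: pip install datasets, plus prior authentication
# (e.g. `huggingface-cli login`), since access to this dataset is gated.
REPO_ID = "yztian/PRIM"

def load_prim(split="test"):
    """Load a split of the PRIM benchmark.

    The split name is an assumption for illustration; consult the
    dataset card for the real configuration and split names.
    """
    from datasets import load_dataset
    return load_dataset(REPO_ID, split=split)

if __name__ == "__main__":
    ds = load_prim()
    print(ds)  # inspect features (images, source text, translations, ...)
```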

Citation

If you find our work helpful, we would greatly appreciate a citation:

@misc{tian2025primpracticalinimagemultilingual,
      title={PRIM: Towards Practical In-Image Multilingual Machine Translation}, 
      author={Yanzhi Tian and Zeming Liu and Zhengyang Liu and Chong Feng and Xin Li and Heyan Huang and Yuhang Guo},
      year={2025},
      eprint={2509.05146},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.05146}, 
}

[1] Modal Contrastive Learning Based End-to-End Text Image Machine Translation

[2] MIT-10M: A Large Scale Parallel Corpus of Multilingual Image Translation
