JarvisArt Icon

JarvisArt: Liberating Human Artistic Creativity via an Intelligent Photo Retouching Agent

Paper Huggingface Daily Papers Project Page YouTube BiliBili Twitter Follow GitHub Stars

1Xiamen University, 2The Hong Kong University of Science and Technology (Guangzhou), 3 The Chinese University of Hong Kong, 4Bytedance, 5National University of Singapore, 6Tsinghua University


⚠️ Security Warning

IMPORTANT: This is the ONLY official JarvisArt repository!

We have identified fake repositories claiming to be JarvisArt that may contain malware, viruses, or malicious code. Please be extremely cautious and only use this official repository.

Known fake/malicious repositories:

  • ❌ https://github.com/joelp0/JarvisArt - FAKE & POTENTIALLY DANGEROUS
  • ❌ Any other repositories not from our official organization

πŸ“ Overview

JarvisArt Teaser
JarvisArt workflow and results showcase

JarvisArt is a multi-modal large language model (MLLM)-driven agent for intelligent photo retouching. It is designed to liberate human creativity by understanding user intent, mimicking the reasoning of professional artists, and coordinating over 200 tools in Adobe Lightroom. JarvisArt utilizes a novel two-stage training framework, starting with Chain-of-Thought supervised fine-tuning for foundational reasoning, followed by Group Relative Policy Optimization for Retouching (GRPO-R) to enhance its decision-making and tool proficiency. Supported by the newly created MMArt dataset (55K samples) and MMArt-Bench, JarvisArt demonstrates superior performance, outperforming GPT-4o with a 60% improvement in pixel-level metrics for content fidelity while maintaining comparable instruction-following capabilities.


🎬 Demo Videos

Global Retouching Case

JarvisArt Demo

Local Retouching Case

JarvisArt Demo

JarvisArt supports multi-granularity retouching goals, ranging from scene-level adjustments to region-specific refinements. Users can perform intuitive, free-form edits through natural inputs such as text prompts and bounding boxes.

📚 Citation

If you find JarvisArt useful in your research, please consider citing:

@article{jarvisart2025,
  title={JarvisArt: Liberating Human Artistic Creativity via an Intelligent Photo Retouching Agent},
  author={Yunlong Lin and Zixu Lin and Kunjie Lin and Jinbin Bai and Panwang Pan and Chenxin Li and Haoyu Chen and Zhongdao Wang and Xinghao Ding and Wenbo Li and Shuicheng Yan},
  journal={arXiv preprint arXiv:2506.17612},
  year={2025}
}

📧 Contact

For any questions or inquiries, please reach out to us.


πŸ™ Acknowledgements

We would like to express our gratitude to LLaMA-Factory and gradio_image_annotator for their valuable open-source contributions which have provided important technical references for our work.

Model details for JarvisArt/JarvisArt-Preview: 8.29B parameters, BF16 tensors, distributed in the Safetensors format.