ThermalGen: Style-Disentangled Flow-Based Generative Models for RGB-to-Thermal Image Translation
Abstract
ThermalGen, an adaptive flow-based generative model with RGB image conditioning and a style-disentanglement mechanism, synthesizes thermal images from RGB inputs and achieves comparable or superior performance across RGB-T benchmarks.
Paired RGB-thermal data is crucial for visual-thermal sensor fusion and cross-modality tasks, including important applications such as multi-modal image alignment and retrieval. However, the scarcity of synchronized and calibrated RGB-thermal image pairs presents a major obstacle to progress in these areas. To overcome this challenge, RGB-to-Thermal (RGB-T) image translation has emerged as a promising solution, enabling the synthesis of thermal images from abundant RGB datasets for training purposes. In this study, we propose ThermalGen, an adaptive flow-based generative model for RGB-T image translation that incorporates an RGB image conditioning architecture and a style-disentanglement mechanism. To support large-scale training, we curated eight public satellite-aerial, aerial, and ground paired RGB-T datasets, and introduced three new large-scale satellite-aerial RGB-T datasets (DJI-day, Bosonplus-day, and Bosonplus-night) captured across diverse times, sensor types, and geographic regions. Extensive evaluations across multiple RGB-T benchmarks demonstrate that ThermalGen achieves translation performance comparable or superior to existing GAN-based and diffusion-based methods. To our knowledge, ThermalGen is the first RGB-T image translation model capable of synthesizing thermal images that reflect significant variations in viewpoint, sensor characteristics, and environmental conditions. Project page: http://xjh19971.github.io/ThermalGen
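The abstract does not spell out the training objective, but a common instantiation of conditional flow-based image generation is conditional flow matching. As a rough, generic sketch under that assumption (not necessarily ThermalGen's exact formulation):

$$
\mathcal{L}(\theta) = \mathbb{E}_{t \sim \mathcal{U}[0,1],\; x_0 \sim \mathcal{N}(0, I),\; (c,\, x_1) \sim \mathcal{D}} \big\| v_\theta(x_t, t, c, s) - (x_1 - x_0) \big\|^2, \qquad x_t = (1 - t)\, x_0 + t\, x_1,
$$

where $c$ is the conditioning RGB image, $x_1$ the paired thermal target, and $s$ a style code extracted from $x_1$ during training and chosen freely at inference to control sensor- or condition-specific appearance.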
Community
RGB-to-Thermal (RGB-T) image translation is a powerful tool for advancing cross-modality perception, especially in scenarios where synchronized RGB-thermal data is scarce. With ThermalGen, we introduce:
🔹 A flow-based generative model with RGB conditioning and style disentanglement for robust RGB-T translation (see the training sketch after this list).
🔹 Three new large-scale RGB-T datasets (DJI-day, Bosonplus-day, Bosonplus-night) covering diverse times, sensors, and environments.
🔹 Comprehensive benchmarks showing comparable or superior performance to state-of-the-art GAN- and diffusion-based methods.
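For a concrete picture of how RGB conditioning and a disentangled style code can be combined in a flow-based model, here is a minimal PyTorch training sketch. All module and function names (`StyleEncoder`, `VelocityNet`, `training_step`) are hypothetical illustrations of the general technique, not the ThermalGen implementation:

```python
# Conceptual sketch: conditional flow matching for RGB-to-thermal translation
# with a disentangled style code. Hypothetical names, NOT the ThermalGen code.
import torch
import torch.nn as nn

class StyleEncoder(nn.Module):
    """Compresses a thermal image into a low-dimensional style code, intended
    to capture sensor/time-of-day appearance rather than scene content."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, style_dim),
        )

    def forward(self, thermal):
        return self.net(thermal)

class VelocityNet(nn.Module):
    """Predicts the flow velocity field, conditioned on the RGB image
    (channel-concatenated) and the style code (broadcast spatially)."""
    def __init__(self, style_dim=8):
        super().__init__()
        in_ch = 1 + 3 + 1 + style_dim  # noisy thermal + RGB + time + style
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x_t, rgb, t, style):
        b, _, h, w = x_t.shape
        t_map = t.view(b, 1, 1, 1).expand(b, 1, h, w)
        s_map = style.view(b, -1, 1, 1).expand(b, style.shape[1], h, w)
        return self.net(torch.cat([x_t, rgb, t_map, s_map], dim=1))

def training_step(vnet, senc, rgb, thermal):
    """One flow-matching step: regress the straight-line velocity from
    Gaussian noise to the paired thermal image."""
    b = thermal.shape[0]
    x0 = torch.randn_like(thermal)            # noise endpoint
    t = torch.rand(b, device=thermal.device)  # uniform time in [0, 1]
    x_t = (1 - t.view(b, 1, 1, 1)) * x0 + t.view(b, 1, 1, 1) * thermal
    style = senc(thermal)                     # style from the target image
    v_pred = vnet(x_t, rgb, t, style)
    return ((v_pred - (thermal - x0)) ** 2).mean()
```

At inference, one would integrate the learned velocity field from Gaussian noise to a thermal image (e.g., with a few Euler steps), supplying the RGB image as the condition and a style code chosen to match the desired sensor type or time of day.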
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- Spectral Compressive Imaging via Chromaticity-Intensity Decomposition (2025)
- Collaborative Multi-Modal Coding for High-Quality 3D Generation (2025)
- HyPSAM: Hybrid Prompt-driven Segment Anything Model for RGB-Thermal Salient Object Detection (2025)
- IDCNet: Guided Video Diffusion for Metric-Consistent RGBD Scene Generation with Precise Camera Control (2025)
- RapidMV: Leveraging Spatio-Angular Representations for Efficient and Consistent Text-to-Multi-View Synthesis (2025)
- DepthVision: Robust Vision-Language Understanding through GAN-Based LiDAR-to-RGB Synthesis (2025)
- Collaborating Vision, Depth, and Thermal Signals for Multi-Modal Tracking: Dataset and Algorithm (2025)