[ACMMM 2025] DenseSR: Image Shadow Removal as Dense Prediction

Yu-Fan Lin1, Chia-Ming Lee1, Chih-Chung Hsu2

1National Cheng Kung University  2National Yang Ming Chiao Tung University

arXiv: https://arxiv.org/abs/2507.16472

Abstract

Shadows are a common factor degrading image quality. Single-image shadow removal (SR), particularly under challenging indirect illumination, is hampered by non-uniform content degradation and inherent ambiguity. Consequently, traditional methods often fail to simultaneously recover intra-shadow details and maintain sharp boundaries, resulting in inconsistent restoration and blurring that negatively affect both downstream applications and the overall viewing experience. To overcome these limitations, we propose DenseSR, which approaches the problem from a dense-prediction perspective to emphasize restoration quality. The framework synergizes two key strategies: (1) deep scene understanding guided by geometric-semantic priors to resolve ambiguity and implicitly localize shadows, and (2) high-fidelity restoration via a novel Dense Fusion Block (DFB) in the decoder. The DFB employs adaptive component processing, using an Adaptive Content Smoothing Module (ACSM) for consistent appearance and a Texture-Boundary Recuperation Module (TBRM) for fine textures and sharp boundaries, thereby directly tackling the inconsistent-restoration and blurring issues. These purposefully processed components are then fused into an optimized feature representation that preserves both consistency and fidelity. Extensive experimental results demonstrate the merits of our approach over existing methods.
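For intuition only, here is a conceptual PyTorch sketch of the dual-branch idea described above: a smoothing branch (ACSM-like) for consistent appearance, a detail branch (TBRM-like) for textures and boundaries, and a fusion step. The layer choices are illustrative assumptions and do not reflect the released DFB architecture.

import torch
import torch.nn as nn

class DenseFusionBlockSketch(nn.Module):
    """Conceptual sketch of the dual-branch fusion idea; NOT the paper's DFB.
    All layer choices below are illustrative assumptions."""
    def __init__(self, channels: int):
        super().__init__()
        # Smoothing branch (ACSM-like): low-frequency, consistent appearance.
        self.smooth = nn.Sequential(
            nn.AvgPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(channels, channels, kernel_size=1),
        )
        # Detail branch (TBRM-like): fine textures and sharp boundaries.
        self.detail = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # Fuse the two purposefully processed components back to one feature map.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat((self.smooth(x), self.detail(x)), dim=1))

x = torch.randn(1, 64, 32, 32)
y = DenseFusionBlockSketch(64)(x)  # same shape as x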

⭐ Citation

If you find this project useful, please consider citing us and giving us a star.

@misc{lin2025densesrimageshadowremoval,
      title={DenseSR: Image Shadow Removal as Dense Prediction}, 
      author={Yu-Fan Lin and Chia-Ming Lee and Chih-Chung Hsu},
      year={2025},
      eprint={2507.16472},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2507.16472}, 
}

🌱 Environments

conda create -n ntire_shadow python=3.9 -y

conda activate ntire_shadow

pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118

pip install -r requirements.txt
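A quick sanity check that the pinned PyTorch build can see your GPU (this assumes the cu118 wheels above, i.e. a CUDA 11.8-capable driver):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"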

πŸ“‚ Folder Structure

You can download the WSRD dataset from here.

test_dir
β”œβ”€β”€ origin          <- Put the shadow-affected images in this folder
β”‚   β”œβ”€β”€ 0000.png
β”‚   β”œβ”€β”€ 0001.png
β”‚   β”œβ”€β”€ ...
β”œβ”€β”€ depth
β”œβ”€β”€ normal


output_dir
β”œβ”€β”€ 0000.png
β”œβ”€β”€ 0001.png
β”œβ”€β”€ ...
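If you are assembling this layout by hand, a minimal Python sketch that creates the empty folders (names taken from the trees above; adjust paths to your setup):

import os

# Create the input subfolders and the output folder shown above.
for sub in ("origin", "depth", "normal"):
    os.makedirs(os.path.join("test_dir", sub), exist_ok=True)
os.makedirs("output_dir", exist_ok=True)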

✨ How to test?

  1. Clone Depth Anything V2:
git clone https://github.com/DepthAnything/Depth-Anything-V2.git
  2. Download the pretrained model of Depth Anything V2.

  3. Run get_depth_normap.py to create the depth and normal maps (a hypothetical sketch of this step follows the command):

python get_depth_normap.py
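Below is a conceptual sketch of the depth-to-normal conversion this step performs. It is an assumption about the script's internals (the actual get_depth_normap.py is not shown here): normals are commonly estimated from depth gradients, and the file paths follow the folder layout above.

# Hypothetical sketch of depth -> normal-map conversion; not the actual
# contents of get_depth_normap.py.
import numpy as np

def depth_to_normals(depth: np.ndarray) -> np.ndarray:
    # Finite-difference gradients of the depth map along y and x.
    dz_dy, dz_dx = np.gradient(depth.astype(np.float32))
    # Surface normal direction (-dz/dx, -dz/dy, 1), normalized to unit length.
    n = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth, dtype=np.float32)))
    return n / np.clip(np.linalg.norm(n, axis=2, keepdims=True), 1e-6, None)

depth = np.load("test_dir/depth/0000.npy")   # depth map from Depth Anything V2
np.save("test_dir/normal/0000.npy", depth_to_normals(depth))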

After this step, the folder structure will be:

test_dir
β”œβ”€β”€ origin
β”‚   β”œβ”€β”€ 0000.png
β”‚   β”œβ”€β”€ 0001.png
β”‚   β”œβ”€β”€ ...
β”œβ”€β”€ depth
β”‚   β”œβ”€β”€ 0000.npy
β”‚   β”œβ”€β”€ 0001.npy
β”‚   β”œβ”€β”€ ...
β”œβ”€β”€ normal
β”‚   β”œβ”€β”€ 0000.npy
β”‚   β”œβ”€β”€ 0001.npy
β”‚   β”œβ”€β”€ ...

output_dir
β”œβ”€β”€ 0000.png
β”œβ”€β”€ 0001.png
β”œβ”€β”€ ...
  4. Clone DINOv2:
git clone https://github.com/facebookresearch/dinov2.git
  5. Download the shadow-removal weights:
gdown 1of3KLSVhaXlsX3jasuwdPKBwb4O4hGZD
  6. Run run_test.sh to get the inference results (a small sanity-check sketch follows below):
bash run_test.sh
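Optionally, a small sanity check after inference (assumption: run_test.sh writes one output per input, with matching file names, into output_dir as the trees above suggest):

import os

# Compare input and output file names; any difference means a missed image.
missing = sorted(set(os.listdir("test_dir/origin")) - set(os.listdir("output_dir")))
print("all inputs processed" if not missing else f"missing outputs: {missing}")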

πŸ“° News

βœ” 2025/08/11 Released the WSRD pretrained model

βœ” 2025/08/11 Released the inference code

βœ” 2025/07/05 Paper accepted by ACMMM 2025

πŸ› οΈ TODO

β—» Release training code

β—» Release other pretrained models

πŸ“œ License

This code repository is released under the MIT License.
