Image-Space Adaptive Sampling for Fast Inverse Rendering
Kai Yan1, Cheng Zhang2, Sébastien Speierer2, Guangyan Cai1, Yufeng Zhu2, Zhao Dong2, and Shuang Zhao1
1University of California, Irvine          2Meta Reality Labs
ACM SIGGRAPH 2025 (Conference Track Full Paper)
[Teaser figure]
Abstract

Inverse rendering is crucial for many scientific and engineering disciplines. Recent progress in differentiable rendering has led to efficient differentiation of the full image formation process with respect to scene parameters, enabling gradient-based optimization.

However, differentiable rendering remains computationally demanding, particularly when all pixels must be rendered in every iteration of inverse rendering from high-resolution or multi-view images, making each iteration slow. On the other hand, naively reducing the sampling budget by rendering a uniformly sampled subset of pixels in each iteration can produce high gradient variance and ultimately degrade overall performance.

Our goal is to accelerate inverse rendering by reducing the sampling budget without sacrificing overall performance. In this paper, we introduce a novel image-space adaptive sampling framework that dynamically adjusts per-pixel sampling probabilities based on each pixel's gradient variance and contribution to the loss function. Our approach efficiently handles high-resolution images and complex scenes, converging faster and achieving better results than uniform sampling, making it a robust solution for efficient inverse rendering.
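
To illustrate the idea, the following is a minimal sketch (not the paper's actual implementation) of image-space adaptive pixel sampling: per-pixel sampling probabilities are formed from a loss map and a gradient-variance estimate, a pixel subset is drawn from that distribution, and 1/probability weights keep the stochastic gradient unbiased. The names loss_map, grad_var, and the mixing weight alpha are illustrative assumptions, not taken from the paper.

    import numpy as np

    def pixel_sampling_probabilities(loss_map, grad_var, alpha=0.5, eps=1e-8):
        """Combine per-pixel loss contribution and gradient-variance estimates
        into a normalized sampling distribution over pixels (illustrative only)."""
        score = alpha * loss_map / (loss_map.sum() + eps) \
              + (1.0 - alpha) * grad_var / (grad_var.sum() + eps)
        return score / (score.sum() + eps)

    def sample_pixels(prob, budget, rng=np.random.default_rng()):
        """Draw `budget` pixels according to `prob` and return their 2D indices
        plus 1/probability weights for an unbiased gradient estimate."""
        flat = prob.ravel()
        idx = rng.choice(flat.size, size=budget, replace=True, p=flat)
        weights = 1.0 / (flat[idx] * budget)   # importance-sampling reweighting
        return np.unravel_index(idx, prob.shape), weights

In such a scheme, only the sampled pixels would be rendered and differentiated in a given iteration, with their gradients scaled by the returned weights before the optimizer step.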

Downloads
  • Paper: pdf (30 MB)
  • Source code: coming soon.
Bibtex citation
@inproceedings{Yan:2025:Batching,
    author = {Yan, K. and Zhang, C. and Speierer, S. and Cai, G. and Zhu, Y. and Dong, Z. and Zhao, S.},
    title = {Image-space Adaptive Sampling for Fast Inverse Rendering},
    year = {2025},
    booktitle = {Proceedings of the SIGGRAPH Conference Papers},
    pages = {66:1--66:11},
    series = {SIGGRAPH Conference Papers '25}
}
Acknowledgments

We would like to thank Qianhui Wu for artistic support and Ning Zhou for providing the voice-over for the video. The environment maps were provided by PolyHaven. The Mars and Earth textures were provided by Solar System Scope. The Lego model is from Blend Swap by Heinzelnisse. The Bowl model is from the DTC dataset [Dong et al. 2025]. All other assets are inspired creations by Yan [2024]. The assets are not products of Meta, nor are they endorsed by Meta. This work started when Kai Yan was an intern at Meta. This project was partially funded by NSF grant 2239627.