PSDR-Room: Single Photo to Scene using Differentiable Rendering
Kai Yan¹, Fujun Luan², Miloš Hašan², Thibault Groueix², Valentin Deschaintre², and Shuang Zhao¹
¹University of California, Irvine          ²Adobe Research
ACM SIGGRAPH Asia 2023 (Conference Track Full Paper)
Abstract

A 3D digital scene contains many components: lights, materials, and geometries, all interacting to produce the desired appearance. Staging such a scene is time-consuming and requires both artistic and technical skills. In this work, we propose PSDR-Room, a system that optimizes lighting as well as the pose and materials of individual objects to match a target image of a room scene, with minimal user input. To this end, we leverage a recent path-space differentiable rendering approach that provides unbiased gradients of the rendering with respect to geometry, lighting, and procedural materials, allowing us to optimize all of these components via gradient descent to visually match the input photo's appearance. We use recent single-image scene-understanding methods to initialize the optimization and to search for appropriate 3D models and materials. We evaluate our method on real photographs of indoor scenes and demonstrate the editability of the resulting scene components.
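To make the optimization concrete, below is a minimal, hypothetical PyTorch sketch of the kind of analysis-by-synthesis loop the abstract describes. It is not the authors' implementation: render_scene is a stub standing in for the path-space differentiable renderer, and the parameters are random placeholders rather than initializations from scene understanding; only the overall structure (differentiable forward rendering, an image-space loss, gradient descent over pose, material, and lighting parameters) mirrors the approach.

import torch

# Stub renderer: in PSDR-Room this role is played by a path-space
# differentiable renderer; here it is a trivial differentiable stand-in
# so that the loop runs end to end.
def render_scene(poses, materials, lighting):
    value = poses.mean() + materials.mean() + lighting.mean()
    return value * torch.ones(256, 256, 3)   # image-shaped output

target = torch.rand(256, 256, 3)   # stand-in for the target photograph

# Scene parameters; the paper initializes these from single-image scene
# understanding, here they are random placeholders.
poses     = torch.randn(8, 6,  requires_grad=True)   # per-object translation + rotation
materials = torch.randn(8, 16, requires_grad=True)   # procedural-material parameters
lighting  = torch.randn(4,     requires_grad=True)   # emitter / environment parameters

optimizer = torch.optim.Adam([poses, materials, lighting], lr=1e-2)
for step in range(200):
    optimizer.zero_grad()
    rendered = render_scene(poses, materials, lighting)     # differentiable rendering
    loss = torch.nn.functional.l1_loss(rendered, target)    # image-space appearance loss
    loss.backward()    # gradients w.r.t. pose, material, and lighting parameters
    optimizer.step()

In the actual system, the renderer also differentiates through geometry and visibility, and the loss and parameterizations are more involved; see the paper for details.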

Downloads
  • Paper: pdf (21 MB)
  • Supplemental material: html, zip (107 MB)
  • Source code: GitHub
Selected results
[Image gallery: side-by-side comparisons of the target photograph ("Target") and our reconstruction ("Ours"), followed by an animated version of our result ("Ours, animated").]
BibTeX citation
@inproceedings{Yan:2023:PSDR-Room,
  title     = {PSDR-Room: Single Photo to Scene using Differentiable Rendering},
  author    = {Yan, K. and Luan, F. and Ha\v{s}an, M. and Groueix, T. and Deschaintre, V. and Zhao, S.},
  booktitle = {ACM SIGGRAPH Asia 2023 Conference Proceedings},
  year      = {2023},
}
Acknowledgments

We thank the anonymous reviewers for their constructive suggestions. We also thank Liang Shi and Beichen Li for their insightful suggestions, as well as for providing the implementation and dataset of DiffMatV2. This work started when Kai Yan was an intern at Adobe Research. Kai's contributions while at the University of California, Irvine were partially supported by NSF grant 1900927.