Volumetric micro-appearance models have provided remarkably high-quality renderings, but are highly data-intensive and usually require tens of gigabytes of storage. When an object is viewed from a distance, the highest level of detail offered by these models is usually unnecessary, but traditional linear downsampling weakens the object's intrinsic shadowing structures and can yield poor accuracy. We introduce a joint optimization of single-scattering albedos and phase functions to accurately downsample heterogeneous and anisotropic media. Our method is built upon scaled phase functions, a new representation combining albedos and (standard) phase functions. We also show that modularity can be exploited to greatly reduce the amortized optimization overhead by allowing multiple synthesized models to share one set of downsampled parameters. Our optimized parameters generalize well to novel lighting and viewing configurations, and the resulting data sets offer several orders of magnitude in storage savings.
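To make the scaled-phase-function representation concrete, here is a minimal sketch: the single-scattering albedo is folded into the phase function so that the two are treated as one quantity. The Henyey-Greenstein phase function is used purely for illustration (the paper targets general anisotropic media, so this specific choice, and the function names `hg_phase` and `scaled_phase`, are assumptions, not the paper's implementation).

```python
import numpy as np

def hg_phase(cos_theta, g):
    # Henyey-Greenstein phase function, normalized to integrate
    # to 1 over the unit sphere of directions.
    return (1.0 - g * g) / (4.0 * np.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

def scaled_phase(cos_theta, albedo, g):
    # "Scaled" phase function: the single-scattering albedo is folded
    # into the phase function, so the pair is optimized as one quantity.
    return albedo * hg_phase(cos_theta, g)

# Sanity check: integrating the scaled phase function over the sphere
# recovers the albedo (simple trapezoidal rule in cos_theta).
mu = np.linspace(-1.0, 1.0, 100001)
integral = 2.0 * np.pi * np.trapz(scaled_phase(mu, albedo=0.8, g=0.3), mu)
print(round(integral, 3))  # close to 0.8
```

This folding matters because downsampling the albedo and phase function separately discards their correlation, which is what the joint optimization described above preserves.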
@article{Zhao:2016:downsample, title={Downsampling Scattering Parameters for Rendering Anisotropic Media}, author={Zhao, Shuang and Wu, Lifan and Durand, Fr\'edo and Ramamoorthi, Ravi}, journal={ACM Trans. Graph.}, volume={35}, number={6}, year={2016}, }
We thank the anonymous reviewers for their constructive comments and suggestions. This work was supported by the National Science Foundation (IIS 1451828), the UC San Diego Center for Visual Computing, and AWS Cloud Credits for Research.