A Differential Theory of Radiative Transfer
Cheng Zhang, Lifan Wu, Changxi Zheng, Ioannis Gkioulekas, Ravi Ramamoorthi, and Shuang Zhao
ACM Transactions on Graphics (SIGGRAPH Asia 2019), 38(6), November 2019
Physics-based differentiable rendering is the task of estimating the derivatives of radiometric measures with respect to scene parameters. The ability to compute these derivatives is necessary for enabling gradient-based optimization in a diverse array of applications: from solving analysis-by-synthesis problems to training machine learning pipelines incorporating forward rendering processes. Unfortunately, physics-based differentiable rendering remains challenging, due to the complex and typically nonlinear relation between pixel intensities and scene parameters.
We introduce a differential theory of radiative transfer, which shows how individual components of the radiative transfer equation (RTE) can be differentiated with respect to arbitrary differentiable changes of a scene. Our theory encompasses the same generality as the standard RTE, allowing differentiation while accurately handling a large range of light transport phenomena such as volumetric absorption and scattering, anisotropic phase functions, and heterogeneity. To numerically estimate the derivatives given by our theory, we introduce an unbiased Monte Carlo estimator supporting arbitrary surface and volumetric configurations. Our technique differentiates path contributions symbolically and uses additional boundary integrals to capture geometric discontinuities such as visibility changes.
We validate our method by comparing our derivative estimations to those generated using the finite-difference method. Furthermore, we use a few synthetic examples inspired by real-world applications in inverse rendering, non-line-of-sight (NLOS) and biomedical imaging, and design, to demonstrate the practical usefulness of our technique.
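The core idea of differentiating path contributions symbolically inside a Monte Carlo estimator can be illustrated with a toy one-dimensional integral. The integrand and parameter below are purely illustrative (not from the paper); because this integrand is smooth in the parameter, no boundary term is needed, whereas the paper's boundary integrals handle geometric discontinuities such as visibility changes.

```python
import random, math

def estimate_I_and_dIda(a, n=200000, seed=1):
    """Monte Carlo estimate of I(a) = integral of exp(-a x) over [0, 1]
    and its derivative dI/da, using the symbolic derivative of the
    integrand, d/da exp(-a x) = -x exp(-a x), on the same samples."""
    rng = random.Random(seed)
    I = dI = 0.0
    for _ in range(n):
        x = rng.random()           # uniform sample on [0, 1]
        f = math.exp(-a * x)
        I += f
        dI += -x * f               # symbolic derivative of the contribution
    return I / n, dI / n

I_hat, dI_hat = estimate_I_and_dIda(2.0)
# Analytic values: I(a) = (1 - e^(-a)) / a,  dI/da = (e^(-a)(a + 1) - 1) / a^2
```

Both quantities are unbiased because the samples are reused: differentiation and integration commute for this smooth integrand.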
Mechanics-Aware Modeling of Cloth Appearance
Zahra Montazeri, Chang Xiao, Yun (Raymond) Fei, Changxi Zheng, and Shuang Zhao
IEEE Transactions on Visualization and Computer Graphics, in press, 2019
Micro-appearance models have brought unprecedented fidelity and details to cloth rendering. Yet, these models neglect fabric mechanics: when a piece of cloth interacts with the environment, its yarn and fiber arrangement usually changes in response to external contact and tension forces. Since subtle changes of a fabric's microstructures can greatly affect its macroscopic appearance, mechanics-driven appearance variation of fabrics has been a phenomenon that remains to be captured. We introduce a mechanics-aware model that adapts the microstructures of cloth yarns in a physics-based manner. Our technique works on two distinct physical scales: using physics-based simulations of individual yarns, we capture the rearrangement of yarn-level structures in response to external forces. These yarn structures are further enriched to obtain appearance-driving fiber-level details. The cross-scale enrichment is made practical through a new parameter fitting algorithm for simulation and an augmented procedural yarn model coupled with a custom-designed regression neural network. We train the network using a dataset generated by joint simulations at both the yarn and the fiber levels. Through several examples, we demonstrate that our model is capable of synthesizing photorealistic cloth appearance in a mechanically plausible way.
Accurate Appearance Preserving Prefiltering for Rendering Displacement-Mapped Surfaces
Lifan Wu, Shuang Zhao, Ling-Qi Yan, and Ravi Ramamoorthi
ACM Transactions on Graphics (SIGGRAPH 2019), 38(4), July 2019
Prefiltering the reflectance of a displacement-mapped surface while preserving its overall appearance is challenging, as smoothing a displacement map causes complex changes of illumination effects such as shadowing-masking and interreflection. In this paper, we introduce a new method that prefilters displacement maps and BRDFs jointly and constructs SVBRDFs at reduced resolutions. These SVBRDFs preserve the appearance of the input models by capturing both shadowing-masking and interreflection effects. To express our appearance-preserving SVBRDFs efficiently, we leverage a new representation that involves spatially varying NDFs and a novel scaling function that accurately captures micro-scale changes of shadowing, masking, and interreflection effects. Further, we show that the 6D scaling function can be factorized into a 2D function of surface location and a 4D function of direction. By exploiting the smoothness of these functions, we develop a simple and efficient factorization method that does not require computing the full scaling function. The resulting functions can be represented at low resolutions (e.g., 4^2 for the spatial function and 15^4 for the angular function), leading to minimal additional storage. Our method generalizes well to different types of geometries beyond Gaussian surfaces. Models prefiltered using our approach at different scales can be combined to form mipmaps, allowing accurate and anti-aliased level-of-detail (LoD) rendering.
Position-Free Monte Carlo Simulation for Arbitrary Layered BSDFs
Yu Guo, Miloš Hašan, and Shuang Zhao
ACM Transactions on Graphics (SIGGRAPH Asia 2018), 37(6), November 2018
Real-world materials are often layered: metallic paints, biological tissues, and many more. Variation in the interface and volumetric scattering properties of the layers leads to a rich diversity of material appearances from anisotropic highlights to complex textures and relief patterns. However, simulating light-layer interactions is a challenging problem. Past analytical or numerical solutions either introduce several approximations and limitations, or rely on expensive operations on discretized BSDFs, preventing the ability to freely vary the layer properties spatially. We introduce a new unbiased layered BSDF model based on Monte Carlo simulation, whose only assumption is the layer assumption itself. Our novel position-free path formulation is fundamentally more powerful at constructing light transport paths than generic light transport algorithms applied to the special case of flat layers, since it is based on a product of solid angle instead of area measures, so does not contain the high-variance geometry terms needed in the standard formulation. We introduce two techniques for sampling the position-free path integral, a forward path tracer with next-event estimation and a full bidirectional estimator. We show a number of examples, featuring multiple layers with surface and volumetric scattering, surface and phase function anisotropy, and spatial variation in all parameters.
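The position-free idea can be sketched, under strong simplifying assumptions (a single homogeneous layer, isotropic scattering, no refractive interfaces, all names below illustrative), as a random walk that tracks only the depth coordinate and direction cosine; the lateral position never matters for the statistics of a flat layer.

```python
import random, math

def slab_walk(sigma_t=1.0, albedo=0.8, depth=1.0, n=100000, seed=7):
    """Position-free random walk in a flat scattering slab: estimate the
    fractions of incident flux that are reflected and transmitted.
    Isotropic scattering is assumed for brevity; the paper supports
    anisotropic phase functions and surface interfaces."""
    rng = random.Random(seed)
    reflected = transmitted = 0.0
    for _ in range(n):
        z, mu, w = 0.0, 1.0, 1.0           # enter from the top, going down
        while True:
            t = -math.log(1.0 - rng.random()) / sigma_t   # free-flight distance
            z += mu * t
            if z < 0.0:
                reflected += w
                break
            if z > depth:
                transmitted += w
                break
            w *= albedo                     # absorb via single-scattering albedo
            mu = 2.0 * rng.random() - 1.0   # isotropic scatter: new cosine
    return reflected / n, transmitted / n
```

Because no positions or geometry terms appear, the estimator avoids the high-variance geometry factors of the standard area-measure path formulation.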
Inverse Transport Networks
Chengqian Che, Fujun Luan, Shuang Zhao, Kavita Bala, and Ioannis Gkioulekas
Technical Report (arXiv:1809.10820), September 2018
We introduce inverse transport networks as a learning architecture for inverse rendering problems where, given input image measurements, we seek to infer physical scene parameters such as shape, material, and illumination. During training, these networks are evaluated not only in terms of how close they can predict ground-truth parameters, but also in terms of whether the parameters they produce can be used, together with physically-accurate graphics renderers, to reproduce the input image measurements. To enable training of inverse transport networks using stochastic gradient descent, we additionally create a general-purpose, physically-accurate differentiable renderer, which can be used to estimate derivatives of images with respect to arbitrary physical scene parameters. Our experiments demonstrate that inverse transport networks can be trained efficiently using differentiable rendering, and that they generalize to scenes with completely unseen geometry and illumination better than networks trained without appearance-matching regularization.
Inverse Diffusion Curves using Shape Optimization
Shuang Zhao, Frédo Durand, and Changxi Zheng
IEEE Transactions on Visualization and Computer Graphics, 24(7), July 2018
The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.
Does Geometric Sharpness Affect Perception of Translucent Material?
Bei Xiao, Wenyan Bi, Shuang Zhao, Ioannis Gkioulekas, and Kavita Bala
Vision Science Society Annual Meeting, May 2018
When judging material properties of a translucent object, we often look at sharp geometric features such as edges. Image analysis shows edges of translucent objects exhibit distinctive light scattering profiles. Around the edges of translucent objects, there is often a rapid change of material thickness, which provides valuable information for recovering material properties. Prior work has found that the perception of 3D shape differs between opaque and translucent objects. Here, we examine whether geometric sharpness affects the perception of translucent materials.
The images used in the experiment are computer-generated using the Mitsuba physically based renderer. The shape of each object is described as a 2D height field (in which each pixel contains the amount of extrusion from the object surface to the base plane). We varied both the material properties and the 3D shapes of the stimuli: for the former, we used materials with varying optical densities (as used by the radiative transfer model) so that the objects would have different levels of ground-truth translucency; for the latter, we applied different amounts of Gaussian blur to the underlying height fields. Seven observers completed a paired-comparison experiment in which they viewed a pair of images that had different ground-truth translucency and blur levels. They were asked to judge which object appeared more translucent. We also included control conditions in which the objects in both images had the same blur level.
We found that when there was no difference in the level of blurring between the images, observers could discriminate the material properties of the two objects well (mean accuracy = 81%). However, when the two objects differed in blur level, all observers made more mistakes (mean accuracy = 71%). We conclude that observers' sensitivity to translucent appearance is affected by the sharpness of an object's 3D geometry, suggesting that 3D shape affects material perception for translucency.
Fiber-Level On-the-Fly Procedural Textiles
Fujun Luan, Shuang Zhao, and Kavita Bala
Computer Graphics Forum (Eurographics Symposium on Rendering), 36(4), July 2017
Procedural textile models are compact, easy to edit, and can achieve state-of-the-art realism with fiber-level details. However, these complex models generally need to be fully instantiated (aka. realized) into 3D volumes or fiber meshes and stored in memory. We introduce a novel realization-minimizing technique that enables physically based rendering of procedural textiles, without the need of full model realizations. The key ingredients of our technique are new data structures and search algorithms that look up regular and flyaway fibers on the fly, efficiently and consistently. Our technique works with compact fiber-level procedural yarn models in their exact form with no approximation imposed. In practice, our method can render very large models that are practically unrenderable using existing methods, while using considerably less memory (60–200X less) and achieving good performance.
Real-Time Linear BRDF MIP-Mapping
Chao Xu, Rui Wang, Shuang Zhao, and Hujun Bao
Computer Graphics Forum (Eurographics Symposium on Rendering), 36(4), July 2017
We present a new technique to jointly MIP-map BRDF and normal maps. Starting with generating an instant BRDF map, our technique builds its MIP-mapped versions based on a highly efficient algorithm that interpolates von Mises-Fisher (vMF) distributions. In our BRDF MIP-maps, each pixel stores a vMF mixture approximating the average of all BRDF lobes from the finest level. Our method is capable of jointly MIP-mapping BRDF and normal maps, even with high-frequency variations, at real-time while preserving high-quality reflectance details. Further, it is very fast, easy to implement, and requires no precomputation.
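One basic ingredient, fitting a single vMF lobe to a set of averaged normals, admits a well-known closed form (the kappa approximation of Banerjee et al. for 3D data). The sketch below shows only this ingredient; the paper interpolates full vMF mixtures, and the function name is illustrative.

```python
import math

def fit_vmf(normals):
    """Fit a single von Mises-Fisher lobe to a list of unit normals by
    maximum likelihood: the mean direction is the normalized average, and
    the concentration uses the approximation kappa ~ r(3 - r^2)/(1 - r^2),
    where r is the mean resultant length (assumed to satisfy 0 < r < 1)."""
    sx = sum(n[0] for n in normals)
    sy = sum(n[1] for n in normals)
    sz = sum(n[2] for n in normals)
    m = len(normals)
    r = math.sqrt(sx * sx + sy * sy + sz * sz) / m   # mean resultant length
    mu = (sx / (r * m), sy / (r * m), sz / (r * m))  # mean direction
    kappa = r * (3.0 - r * r) / (1.0 - r * r)        # concentration
    return mu, kappa
```

Tighter normal clusters yield r closer to 1 and hence larger kappa, i.e. a sharper lobe at the coarser MIP level.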
Downsampling Scattering Parameters for Rendering Anisotropic Media
Shuang Zhao*, Lifan Wu*, Frédo Durand, and Ravi Ramamoorthi
(* Joint first authors)
ACM Transactions on Graphics (SIGGRAPH Asia 2016), 35(6), November 2016
Volumetric micro-appearance models have provided remarkably high-quality renderings, but are highly data-intensive and usually require tens of gigabytes in storage. When an object is viewed from a distance, the highest level of detail offered by these models is usually unnecessary, but traditional linear downsampling weakens the object's intrinsic shadowing structures and can yield poor accuracy. We introduce a joint optimization of single-scattering albedos and phase functions to accurately downsample heterogeneous and anisotropic media. Our method is built upon scaled phase functions, a new representation combining albedos and (standard) phase functions. We also show that modularity can be exploited to greatly reduce the amortized optimization overhead by allowing multiple synthesized models to share one set of downsampled parameters. Our optimized parameters generalize well to novel lighting and viewing configurations, and the resulting data sets offer several orders of magnitude storage savings.
Towards Real-Time Monte Carlo for Biomedicine
Shuang Zhao, Rong Kong, and Jerome Spanier
International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, August 2016
Monte Carlo methods provide the "gold standard" computational technique for solving biomedical problems but their use is hindered by the slow convergence of the sample means. An exponential increase in the convergence rate can be obtained by adaptively modifying the sampling and weighting strategy employed. However, if the radiance is represented globally by a truncated expansion of basis functions, or locally by a region-wise constant or low degree polynomial, a bias is introduced by the truncation and/or the number of subregions. The sheer number of expansion coefficients or geometric subdivisions created by the biased representation then partly or entirely offsets the geometric acceleration of the convergence rate. As well, the (unknown amount of) bias is unacceptable for a gold standard numerical method. We introduce a new unbiased estimator of the solution of radiative transfer equation (RTE) that constrains the radiance to obey the transport equation. We provide numerical evidence of the superiority of this Transport-Constrained Unbiased Radiance Estimator (T-CURE) in various transport problems and indicate its promise for general heterogeneous problems.
Fitting Procedural Yarn Models for Realistic Cloth Rendering
Shuang Zhao, Fujun Luan, and Kavita Bala
ACM Transactions on Graphics (SIGGRAPH 2016), 35(4), July 2016
Fabrics play a significant role in many applications in design, prototyping, and entertainment. Recent fiber-based models capture the rich visual appearance of fabrics, but are too onerous to design and edit. Yarn-based procedural models are powerful and convenient, but too regular and not realistic enough in appearance. In this paper, we introduce an automatic fitting approach to create high-quality procedural yarn models of fabrics with fiber-level details. We fit CT data to procedural models to automatically recover a full range of parameters, and augment the models with a measurement-based model of flyaway fibers. We validate our fabric models against CT measurements and photographs, and demonstrate the utility of this approach for fabric modeling and editing.
Matching Real Fabrics with Micro-Appearance Models
Pramook Khungurn, Daniel Schroeder, Shuang Zhao, Kavita Bala, and Steve Marschner
ACM Transactions on Graphics, 35(1), December 2015
Micro-appearance models explicitly model the interaction of light with microgeometry at the fiber scale to produce realistic appearance. To effectively match them to real fabrics, we introduce a new appearance matching framework to determine their parameters. Given a micro-appearance model and photographs of the fabric under many different lighting conditions, we optimize for parameters that best match the photographs using a method based on calculating derivatives during rendering. This highly applicable framework, we believe, is a useful research tool because it simplifies development and testing of new models.
Using the framework, we systematically compare several types of micro-appearance models. We acquired computed microtomography (micro CT) scans of several fabrics, photographed them under many viewing/illumination conditions, and matched several appearance models to this data. We compare a new fiber-based light scattering model to the previously used microflake model. We also compare representing cloth microgeometry using volumes derived directly from the micro CT data to using explicit fibers reconstructed from the volumes. From our comparisons we make the following conclusions: (1) given a fiber-based scattering model, volume- and fiber-based microgeometry representations are capable of very similar quality, and (2) using a fiber-specific scattering model is crucial to good results as it achieves considerably higher accuracy than prior work.
Modeling and Rendering Fabrics at Micron-Resolution
Ph.D. Thesis, Department of Computer Science, Cornell University, August 2014
Building Volumetric Appearance Models of Fabric using Micro CT Imaging
Shuang Zhao, Wenzel Jakob, Steve Marschner, and Kavita Bala
Communications of the ACM (Research Highlights), 57(11), November 2014
Cloth is essential to our everyday lives; consequently, visualizing and rendering cloth has been an important area of research in graphics for decades. One important aspect contributing to the rich appearance of cloth is its complex 3D structure. Volumetric algorithms that model this 3D structure can correctly simulate the interaction of light with cloth to produce highly realistic images of cloth. But creating volumetric models of cloth is difficult: writing specialized procedures for each type of material is onerous, and requires significant programmer effort and intuition. Further, the resulting models look unrealistically “perfect” because they lack visually important features like naturally occurring irregularities.
This paper proposes a new approach to acquiring volume models, based on density data from X-ray computed tomography (CT) scans and appearance data from photographs under uncontrolled illumination. To model a material, a CT scan is made, yielding a scalar density volume. This 3D data has micron resolution details about the structure of cloth but lacks all optical information. So we combine this density data with a reference photograph of the cloth sample to infer its optical properties. We show that this approach can easily produce volume appearance models with extreme detail, and at larger scales the distinctive textures and highlights of a range of very different fabrics such as satin and velvet emerge automatically—all based simply on having accurate mesoscale geometry.
High-Order Similarity Relations in Radiative Transfer
Shuang Zhao, Ravi Ramamoorthi, and Kavita Bala
ACM Transactions on Graphics (SIGGRAPH 2014), 33(4), July 2014
Radiative transfer equations (RTEs) with different scattering parameters can lead to identical solution radiance fields. Similarity theory studies this effect by introducing a hierarchy of equivalence relations called "similarity relations". Unfortunately, given a set of scattering parameters, it remains unclear how to find altered ones satisfying these relations, significantly limiting the theory's practical value. This paper presents a complete exposition of similarity theory, which provides fundamental insights into the structure of the RTE's parameter space. To utilize the theory in its general high-order form, we introduce a new approach to solve for the altered parameters including the absorption and scattering coefficients as well as a fully tabulated phase function. We demonstrate the practical utility of our work using two applications: forward and inverse rendering of translucent media. Forward rendering is our main application, and we develop an algorithm exploiting similarity relations to offer "free" speedups for Monte Carlo rendering of optically dense and forward-scattering materials. For inverse rendering, we propose a proof-of-concept approach which warps the parameter space and greatly improves the efficiency of gradient descent algorithms. We believe similarity theory is important for simulating and acquiring volume-based appearance, and our approach has the potential to benefit a wide range of future applications in this area.
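The classical order-1 similarity relation, which the paper generalizes to high order, can be stated in a few lines. Parameter names are conventional (absorption coefficient sigma_a, scattering coefficient sigma_s, mean scattering cosine g); the function itself is an illustrative sketch, not the paper's high-order solver.

```python
def similar_parameters(sigma_a, sigma_s, g, g_star):
    """Order-1 similarity relation: the medium (sigma_a, sigma_s, g) and
    the altered medium (sigma_a, sigma_s*, g*) yield nearly identical
    multiple-scattered radiance when the reduced scattering coefficient
    is preserved:  sigma_s* (1 - g*) = sigma_s (1 - g).
    Choosing g* < g lowers sigma_s*, so fewer scattering events need to
    be simulated, which is the source of the 'free' speedup for dense,
    forward-scattering media."""
    sigma_s_star = sigma_s * (1.0 - g) / (1.0 - g_star)
    return sigma_a, sigma_s_star
```

For example, replacing g = 0.9 with an isotropic phase function (g* = 0) divides the scattering coefficient, and hence the expected collision count, by ten.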
Inverse Volume Rendering with Material Dictionaries
Ioannis Gkioulekas, Shuang Zhao, Kavita Bala, Todd Zickler, and Anat Levin
ACM Transactions on Graphics (SIGGRAPH Asia 2013), 32(6), November 2013
Translucent materials are ubiquitous, and simulating their appearance requires accurate physical parameters. However, physically-accurate parameters for scattering materials are difficult to acquire. We introduce an optimization framework for measuring bulk scattering properties of homogeneous materials (phase function, scattering coefficient, and absorption coefficient) that is more accurate and applicable to a broader range of materials. The optimization combines stochastic gradient descent with Monte Carlo rendering and a material dictionary to invert the radiative transfer equation. It offers several advantages: (1) it does not require isolating single-scattering events; (2) it allows measuring solids and liquids that are hard to dilute; (3) it returns parameters in physically-meaningful units; and (4) it does not restrict the shape of the phase function using Henyey-Greenstein or any other low-parameter model. We evaluate our approach by creating an acquisition setup that collects images of a material slab under narrow-beam RGB illumination. We validate results by measuring prescribed nano-dispersions and showing that recovered parameters match those predicted by Lorenz-Mie theory. We also provide a table of RGB scattering parameters for some common liquids and solids, which are validated by simulating color images in novel geometric configurations that match the corresponding photographs with less than 5% error.
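The flavor of a dictionary-based inversion can be conveyed with a toy problem: fitting nonnegative, normalized weights over a few Henyey-Greenstein lobes to a tabulated target phase function by projected gradient descent. The lobe set, tabulation resolution, and step size below are illustrative only; the actual method fits measured images through a full Monte Carlo renderer.

```python
import math

def hg(cos_theta, g):
    """Henyey-Greenstein phase function, normalized over the sphere."""
    return (1.0 - g * g) / (4.0 * math.pi *
                            (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

def fit_dictionary_weights(target_g=0.5, gs=(0.0, 0.3, 0.7),
                           iters=2000, lr=0.5):
    """Express a target phase function as a convex combination of HG
    dictionary lobes via projected gradient descent on a tabulated
    least-squares loss (project onto w >= 0, then renormalize)."""
    cos = [-1.0 + 2.0 * (i + 0.5) / 64 for i in range(64)]
    target = [hg(c, target_g) for c in cos]
    D = [[hg(c, g) for c in cos] for g in gs]        # dictionary lobes
    w = [1.0 / len(gs)] * len(gs)
    for _ in range(iters):
        # residual of the current mixture against the target
        r = [sum(w[j] * D[j][i] for j in range(len(gs))) - target[i]
             for i in range(64)]
        for j in range(len(gs)):
            grad = 2.0 * sum(r[i] * D[j][i] for i in range(64)) / 64
            w[j] = max(0.0, w[j] - lr * grad)        # project onto w >= 0
        s = sum(w)
        if s > 0:
            w = [x / s for x in w]                   # renormalize to sum 1
    return w
```

The dictionary constraint keeps the recovered phase function physically plausible without committing to a single low-parameter model.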
Understanding the Role of Phase Function in Translucent Appearance
Ioannis Gkioulekas, Bei Xiao, Shuang Zhao, Edward Adelson, Todd Zickler, and Kavita Bala
ACM Transactions on Graphics, 32(5), September 2013
Multiple scattering contributes critically to the characteristic translucent appearance of food, liquids, skin, and crystals; but little is known about how it is perceived by human observers. This paper explores the perception of translucency by studying the image effects of variations in one factor of multiple scattering: the phase function. We consider an expanded space of phase functions created by linear combinations of Henyey-Greenstein and von Mises-Fisher lobes, and we study this physical parameter space using computational data analysis and psychophysics.
Our study identifies a two-dimensional embedding of the physical scattering parameters in a perceptually-meaningful appearance space. Through our analysis of this space, we find uniform parameterizations of its two axes by analytical expressions of moments of the phase function, and provide an intuitive characterization of the visual effects that can be achieved at different parts of it. We show that our expansion of the space of phase functions enlarges the range of achievable translucent appearance compared to traditional single-parameter phase function models. Our findings highlight the important role phase function can have in controlling translucent appearance, and provide tools for manipulating its effect in material design applications.
Modular Flux Transfer: Efficient Rendering of High-Resolution Volumes with Repeated Structures
Shuang Zhao, Miloš Hašan, Ravi Ramamoorthi, and Kavita Bala
ACM Transactions on Graphics (SIGGRAPH 2013), 32(4), July 2013
The highest fidelity images to date of complex materials like cloth use extremely high-resolution volumetric models. However, rendering such complex volumetric media is expensive, with brute-force path tracing often the only viable solution. Fortunately, common volumetric materials (fabrics, finished wood, synthesized solid textures) are structured, with repeated patterns approximated by tiling a small number of exemplar blocks. In this paper, we introduce a precomputation-based rendering approach for such volumetric media with repeated structures based on a modular transfer formulation. We model each exemplar block as a voxel grid and precompute voxel-to-voxel, patch-to-patch, and patch-to-voxel flux transfer matrices. At render time, when blocks are tiled to produce a high-resolution volume, we accurately compute low-order scattering, with modular flux transfer used to approximate higher-order scattering. We achieve speedups of up to 12X over path tracing on extremely complex volumes, with minimal loss of quality. In addition, we demonstrate that our approach outperforms photon mapping on these materials.
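In a toy setting, the flux-transfer idea reduces to a Neumann-series solve with a precomputed transfer matrix. The matrix and sizes below are illustrative; the paper precomputes voxel-to-voxel, patch-to-patch, and patch-to-voxel matrices per exemplar block and stitches them at render time.

```python
def multiple_scattering(T, direct, orders=60):
    """Toy flux-transfer solve. 'direct' holds directly computed low-order
    flux per voxel; T[i][j] is the precomputed fraction of flux at voxel j
    transported to voxel i in one scattering step. Higher-order scattering
    is the Neumann series sum over k of T^k applied to 'direct',
    accumulated iteratively (converges when T's spectral radius < 1)."""
    n = len(direct)
    total = list(direct)
    bounce = list(direct)
    for _ in range(orders):
        bounce = [sum(T[i][j] * bounce[j] for j in range(n))
                  for i in range(n)]          # one more scattering step
        for i in range(n):
            total[i] += bounce[i]
    return total
```

For a convergent T this matches the closed-form solution (I - T)^(-1) applied to the direct flux, which is why precomputed transfer matrices can stand in for brute-force higher-order path tracing.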
Structure-Aware Synthesis for Predictive Woven Fabric Appearance
Shuang Zhao, Wenzel Jakob, Steve Marschner, and Kavita Bala
ACM Transactions on Graphics (SIGGRAPH 2012), 31(4), July 2012
Paper figure selected as the front cover of the SIGGRAPH 2012 proceedings
Woven fabrics have a wide range of appearance determined by their small-scale 3D structure. Accurately modeling this structural detail can produce highly realistic renderings of fabrics and is critical for predictive rendering of fabric appearance. But building these yarn-level volumetric models is challenging. Procedural techniques are manually intensive, and fail to capture the naturally arising irregularities which contribute significantly to the overall appearance of cloth. Techniques that acquire the detailed 3D structure of real fabric samples are constrained only to model the scanned samples and cannot represent different fabric designs.
This paper presents a new approach to creating volumetric models of woven cloth, which starts with user-specified fabric designs and produces models that correctly capture the yarn-level structural details of cloth. We create a small database of volumetric exemplars by scanning fabric samples with simple weave structures. To build an output model, our method synthesizes a new volume by copying data from the exemplars at each yarn crossing to match a weave pattern that specifies the desired output structure. Our results demonstrate that our approach generalizes well to complex designs and can produce highly realistic results at both large and small scales.
Effects of Shape and Color on the Perception of Translucency
Bei Xiao, Ioannis Gkioulekas, Asher Dunn, Shuang Zhao, Todd Zickler, Edward Adelson, and Kavita Bala
Vision Science Society Annual Meeting, May 2012
Single View Reflectance Capture using Multiplexed Scattering and Time-of-Flight Imaging
Nikhil Naik, Shuang Zhao, Andreas Velten, Ramesh Raskar, and Kavita Bala
ACM Transactions on Graphics (SIGGRAPH Asia 2011), 30(5), December 2011
This paper introduces the concept of time-of-flight reflectance estimation, and demonstrates a new technique that allows a camera to rapidly acquire reflectance properties of objects from a single viewpoint, over relatively long distances and without encircling equipment. We measure material properties by indirectly illuminating an object with a laser source, and observing its reflected light indirectly using a time-of-flight camera. The configuration collectively acquires dense angular, but low spatial, sampling within a limited solid angle range -- all from a single viewpoint. Our ultra-fast imaging approach captures space-time "streak images" that can separate out different bounces of light based on path length. Entanglements arise in the streak images, mixing signals from multiple paths that have the same total path length. We show how reflectances can be recovered by solving a linear system of equations and assuming parametric material models; fitting to lower-dimensional reflectance models enables us to disentangle the measurements.
We demonstrate proof-of-concept results of parametric reflectance models for homogeneous and discretized heterogeneous patches, both using simulation and experimental hardware. As compared to lengthy or highly calibrated BRDF acquisition techniques, we demonstrate a device that can rapidly, on the order of seconds, capture meaningful reflectance information. We expect hardware advances to improve the portability and speed of this device.
Building Volumetric Appearance Models of Fabric using Micro CT Imaging
Shuang Zhao, Wenzel Jakob, Steve Marschner, and Kavita Bala
ACM Transactions on Graphics (SIGGRAPH 2011), 30(4), July 2011
Featured as a "Research Highlight" in Communications of the ACM (CACM)
The appearance of complex, thick materials like textiles is determined by their 3D structure, and they are incompletely described by surface reflection models alone. While volume scattering can produce highly realistic images of such materials, creating the required volume density models is difficult. Procedural approaches require significant programmer effort and intuition to design special-purpose algorithms for each material. Further, the resulting models lack the visual complexity of real materials with their naturally arising irregularities.
This paper proposes a new approach to acquiring volume models, based on density data from X-ray computed tomography (CT) scans and appearance data from photographs under uncontrolled illumination. To model a material, a CT scan is made, resulting in a scalar density volume. This 3D data is processed to extract orientation information and remove noise. The resulting density and orientation fields are used in an appearance matching procedure to define scattering properties in the volume that, when rendered, produce images with texture statistics that match the photographs. As our results show, this approach can easily produce volume appearance models with extreme detail, and at larger scales the distinctive textures and highlights of a range of very different fabrics like satin and velvet emerge automatically -- all based simply on having accurate mesoscale geometry.
Automatic Bounding of Programmable Shaders for Efficient Global Illumination
Edgar Velázquez-Armendáriz, Shuang Zhao, Miloš Hašan, Bruce Walter, and Kavita Bala
ACM Transactions on Graphics (SIGGRAPH Asia 2009), 28(5), December 2009
This paper describes a technique to automatically adapt programmable shaders for use in physically-based rendering algorithms. Programmable shading provides great flexibility and power for creating rich local material detail, but only allows the material to be queried in one limited way: point sampling. Physically-based rendering algorithms simulate the complex global flow of light through an environment but rely on higher level information about the material properties, such as importance sampling and bounding, to intelligently solve high dimensional rendering integrals.
We propose using a compiler to automatically generate interval versions of programmable shaders that can be used to provide the higher level query functions needed by physically-based rendering without the need for user intervention or expertise. We demonstrate the use of programmable shaders in two such algorithms, multidimensional lightcuts and photon mapping, for a wide range of scenes including complex geometry, materials and lighting.
Single Scattering in Refractive Media with Triangle Mesh Boundaries
Bruce Walter, Shuang Zhao, Nicolas Holzschuch, and Kavita Bala
ACM Transactions on Graphics (SIGGRAPH 2009), 28(3), August 2009
Light scattering in refractive media is an important optical phenomenon for computer graphics. While recent research has focused on multiple scattering, there has been less work on accurate solutions for single or low-order scattering. Refraction through a complex boundary allows a single external source to be visible in multiple directions internally with different strengths; these are hard to find with existing techniques. This paper presents techniques to quickly find paths that connect points inside and outside a medium while obeying the laws of refraction. We introduce: a half-vector based formulation to support the most common geometric representation, triangles with interpolated normals; hierarchical pruning to scale to triangular meshes; and both a solver with strong accuracy guarantees and a faster method that is empirically accurate. A GPU version achieves interactive frame rates in several examples.
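The refraction constraint at the heart of such path finding is Snell's law, which in the half-vector view says that the (generalized) half vector of a valid refracted pair of directions must align with the shading normal. A minimal sketch of the standard refraction formula, with a hypothetical `refract` helper (not the paper's solver), makes this concrete:

```python
import numpy as np

def refract(wi, n, eta):
    """Refract unit direction wi (pointing away from the surface) about unit
    normal n, with relative IOR eta = n_incident / n_transmitted.
    Returns None on total internal reflection. (Textbook formula, not the
    paper's half-vector solver.)"""
    cos_i = np.dot(wi, n)
    sin2_t = eta * eta * max(0.0, 1.0 - cos_i * cos_i)   # Snell's law
    if sin2_t > 1.0:
        return None                                       # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    # Transmitted direction: flip and scale the tangential component,
    # then set the normal component to -cos_t.
    return -eta * wi + (eta * cos_i - cos_t) * n
```

For any valid pair, the generalized half vector `-(n_i * wi + n_t * wt)` is parallel to the normal; the paper exploits exactly this kind of constraint to solve for connecting paths across triangle meshes with interpolated normals.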
Modeling Anisotropic Surface Reflectance with Example-Based Microfacet Synthesis
Jiaping Wang, Shuang Zhao, Xin Tong, John Snyder, and Baining Guo
ACM Transactions on Graphics (SIGGRAPH 2008), 27(3), August 2008
We present a new technique for the visual modeling of spatially varying anisotropic reflectance using data captured from a single view. Reflectance is represented using a microfacet-based BRDF which tabulates the facets' normal distribution (NDF) as a function of surface location. Data from a single view provides a 2D slice of the 4D BRDF at each surface point from which we fit a partial NDF. The fitted NDF is partial because the single view direction coupled with the set of light directions covers only a portion of the "half-angle" hemisphere. We complete the NDF at each point by applying a novel variant of texture synthesis using similar, overlapping partial NDFs from other points. Our similarity measure allows azimuthal rotation of partial NDFs, under the assumption that reflectance is spatially redundant but the local frame may be arbitrarily oriented. Our system includes a simple acquisition device that collects images over a 2D set of light directions by scanning a linear array of LEDs over a flat sample. Results demonstrate that our approach preserves spatial and directional BRDF details and generates a visually compelling match to measured materials.
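The "2D slice of the 4D BRDF" observation can be made concrete with a small sketch: with the view fixed, each light direction contributes one half vector, and binning measurements over half-vector directions yields a tabulated NDF that covers only part of the half-angle hemisphere. The function `partial_ndf` and its parameterization are hypothetical, not the paper's fitting procedure:

```python
import numpy as np

def partial_ndf(wo, light_dirs, intensities, n_theta=16, n_phi=32):
    """Bin single-view measurements into a tabulated partial NDF.
    Each (light, intensity) sample constrains the facet-normal distribution
    at the half vector h = normalize(wi + wo); one view direction wo covers
    only part of the half-angle hemisphere. (Illustrative sketch only.)"""
    hist = np.zeros((n_theta, n_phi))
    count = np.zeros((n_theta, n_phi))
    for wi, inten in zip(light_dirs, intensities):
        h = wi + wo
        h = h / np.linalg.norm(h)
        theta = np.arccos(np.clip(h[2], -1.0, 1.0))       # angle from normal (z)
        phi = np.arctan2(h[1], h[0]) % (2.0 * np.pi)
        t = min(int(theta / (np.pi / 2) * n_theta), n_theta - 1)
        p = min(int(phi / (2.0 * np.pi) * n_phi), n_phi - 1)
        hist[t, p] += inten
        count[t, p] += 1
    filled = count > 0
    hist[filled] /= count[filled]
    return hist, filled    # `filled` marks the covered (partial) region
```

The `filled` mask is what the paper's synthesis step must complete: the uncovered bins are filled in by matching and azimuthally rotating similar partial NDFs from other surface points.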
Modeling and Rendering Heterogeneous Translucent Materials using Diffusion Equation
Jiaping Wang, Shuang Zhao, Xin Tong, Stephen Lin, Zhouchen Lin, Yue Dong, Baining Guo, and Heung-Yeung Shum
ACM Transactions on Graphics, 27(1), March 2008
In this paper, we propose techniques for modeling and rendering of heterogeneous translucent materials that enable acquisition from measured samples, interactive editing of material attributes, and real-time rendering. The materials are assumed to be optically dense such that multiple scattering can be approximated by a diffusion process described by the diffusion equation. For modeling heterogeneous materials, we present an algorithm for acquiring material properties from appearance measurements by solving an inverse diffusion problem. Our modeling algorithm incorporates a regularizer to handle the ill-conditioned inverse problem, an adjoint method to dramatically reduce the computational cost, and a hierarchical GPU implementation for further speedup. To display an object with known material properties, we present an algorithm that performs rendering by solving the diffusion equation with the boundary condition defined by the given illumination environment. This algorithm is centered around object representation by a polygrid, a grid with regular connectivity and an irregular shape, which facilitates the solution of the diffusion equation in arbitrary volumes. Because of the regular connectivity, our rendering algorithm can be implemented on the GPU for real-time performance. We demonstrate our techniques by capturing materials from physical samples and performing real-time rendering and editing with these materials.
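The diffusion approximation underlying both the inverse and forward steps can be sketched with a minimal finite-difference relaxation solver. This is a homogeneous 2D toy on a regular grid with zero Dirichlet boundaries, not the paper's polygrid GPU solver; in the paper's setting the coefficients vary per voxel and the boundary condition comes from the illumination environment.

```python
import numpy as np

def solve_diffusion(kappa, sigma_a, source, n_iter=2000):
    """Jacobi iterations for the steady-state diffusion equation
        kappa * laplacian(phi) - sigma_a * phi + source = 0
    on a 2D grid with unit spacing and phi = 0 on the boundary.
    (Illustrative homogeneous sketch; the paper solves the heterogeneous
    problem on a polygrid.)"""
    phi = np.zeros_like(source)
    for _ in range(n_iter):
        # Sum of the four axis neighbors of every interior cell.
        nb = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
              np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        # Solve the discretized equation for phi at each cell:
        # kappa*(nb - 4*phi) - sigma_a*phi + source = 0.
        phi = (kappa * nb + source) / (4.0 * kappa + sigma_a)
        # Zero Dirichlet boundary condition.
        phi[0, :] = phi[-1, :] = phi[:, 0] = phi[:, -1] = 0.0
    return phi
```

A point source produces the expected smooth, monotonically decaying fluence around it; the inverse problem in the paper runs this machinery in reverse, recovering the spatially varying coefficients from observed boundary fluence.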