Real-Time Realistic Skin Translucency

http://www.iryoku.com/translucency/downloads/Real-Time-Realistic-Skin-Translucency.pdf

Many materials possess a degree of translucency. Light scatters within translucent objects (such as tree leaves, paper, or candles) before leaving the object at a certain distance from the incident point. This process is called subsurface scattering (SSS). Simulating SSS in computer graphics is challenging: the rendering process must correctly simulate the light transport beneath an object's surface to accurately capture its appearance (see Figure 1).
Figure 1. A comparison between (a) ignoring subsurface scattering and (b) accounting for it. The skin's reflectance component is softened by scattering within the skin. In addition, the figure compares (c) raw screen-space diffusion and (d) screen-space diffusion with transmittance simulation, calculated using the algorithm proposed in this article. Light travels through thin parts of the skin, which the transmittance component accounts for.

Figure 2 shows how SSS affects real-world objects.

Figure 2. Several objects showing varying degrees of translucency. As the middle and right images show, light transmitted through an object can greatly affect its final appearance.

Human skin is particularly interesting because it consists of multiple translucent layers that scatter light according to their specific composition. This gives skin the characteristic reddish look to which our visual system seems to be particularly well tuned; slight simulation errors are more noticeable in skin than in, say, a wax candle. Correctly depicting human skin is important in fields such as cinematography and games. Whereas the former can count on the luxury of offline rendering, the latter imposes real-time constraints that make the problem much harder. The main challenge is to compute a real-time, perceptually plausible approximation of the complex SSS effects that is also easy to implement, so that it integrates well with existing pipelines.

Several real-time algorithms for simulating skin exist (for more information, see the "Related Work in Subsurface-Scattering Simulation" sidebar). Their common key insight is that SSS mainly amounts to a blurring of high-frequency details, which these algorithms perform in texture space. Although the results can be realistic, the algorithms don't scale well: more objects mean more textures to process, so performance quickly decays. This is especially problematic in computer games, in which many characters can appear on screen simultaneously and real-time performance is required. We believe this is a main issue keeping game programmers from rendering truly realistic human skin; the commonly adopted alternative is to simply ignore SSS, thus decreasing the skin's realism. Additionally, real-time rendering in a computer-game context can become much more difficult, with issues such as background geometry, depth-of-field simulation, or motion blur imposing additional time penalties. In this field, great effort is spent on obtaining further performance boosts (either in processing or in memory usage), which lets developers spend the saved resources on other effects, such as higher-resolution textures and increased geometric complexity.

To develop a practical skin-rendering model and thus solve the scalability issues that arise in multicharacter scenarios, we proposed an algorithm that translates the simulation of scattering effects from texture space to screen space (see Figure 3). This reduces the simulation of translucency to a postprocess, with the added advantage of easy integration into any graphics engine. The main consequence is that we have less information to work with in screen space than algorithms that work in 3D or texture space: because only visible pixels are rendered, we lose the irradiance at all points of the surface not seen from the camera. So, we can no longer directly calculate the transmittance of light through thin parts of an object.

Eugene d’Eon and his colleagues proposed an algorithm4 based on translucent shadow maps7 with good results. However, their solution takes up to 30 percent of the total computation time (inferred from the performance analysis in their paper) and requires irradiance maps, which aren’t available when simulating diffusion in screen space. We aim to simulate forward scattering through thin geometry with much lower computational costs, similar to how we’ve made reflectance practical.1 From our observations of the transmittance phenomenon, we derived several assumptions on which we built a heuristic that let us approximately reconstruct the irradiance on the back of an object. This in turn let us approximately calculate transmittance based on the multipole theory.8 The results show that we can produce images whose quality is on a par with photon mapping and other diffusion-based techniques (for a high-level overview of the diffusion approximation on which we based our algorithm, see the “Diffusion Profiles and Convolutions” sidebar). Our technique also requires minimal to no additional processing or memory resources.

Real-Time Transmittance Approaches
Our algorithm builds on two approaches. The first is Simon Green's approach, which relies on depth maps to estimate the distance a light ray travels inside an object.9 The scene is rendered from the light's point of view to create a depth map that stores the distance to the objects nearest the light (see Figure 4).

Figure 4. A comparison of Simon Green's approach9 (red lines) to that of Eugene d'Eon and his colleagues4,10 (blue lines). The former stores only depth information (z), whereas the latter stores z and the (u, v) coordinates of the points of the surface nearest the light. zout represents the depth of the point where shading is being calculated, while zin is the corresponding depth of the nearest point to the light source; s is the distance between zin and zout. For example, while rendering zout1, this technique accesses the depth map to obtain the depth of zin1, the point nearest the light. It uses an operation similar to the one used in shadow mapping. However, instead of evaluating a comparison to determine whether a pixel is shadowed, it simply subtracts zin1 from zout1 for the pixel being shaded, obtaining s1, the actual distance the light traveled inside the object.
After calculating this distance, Green’s approach offers two ways to calculate the attenuation as a function of s:

1. using an artist-created texture that maps distance to attenuation, or
2. attenuating light according to Beer's law, $T(s) = e^{-\sigma_t s}$, where $\sigma_t$ is the extinction coefficient of the material being rendered and $T(s)$ is the transmission coefficient that relates the incoming and outgoing lighting (see the sketch after this list).
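
To make this concrete, here's a minimal C++ sketch of Green's technique under option 2. The sampleShadowDepth stub and its constant return value are our own placeholders for a shadow-map texture fetch that, in a real renderer, would happen inside a pixel shader:

```cpp
#include <algorithm>
#include <cmath>

// Stub standing in for a shadow-map texture fetch: returns the light-space
// depth of the surface nearest the light at coordinates (u, v).
static float sampleShadowDepth(float /*u*/, float /*v*/)
{
    return 0.2f; // hypothetical depth value
}

// Green's shadow-map transmittance: estimate the distance s that light
// traveled inside the object, then attenuate it with Beer's law (option 2).
static float transmittance(float u, float v,
                           float zOut,   // light-space depth of the shaded point
                           float sigmaT) // extinction coefficient of the material
{
    float zIn = sampleShadowDepth(u, v);    // depth of the point nearest the light
    float s   = std::max(zOut - zIn, 0.0f); // distance traveled inside the object
    return std::exp(-sigmaT * s);           // T(s) = e^(-sigma_t * s)
}
```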

An inherent problem with this transmittance approach (which also hinders most approaches based on shadow mapping) is that, in theory, it works only for convex objects. In practice, however, it approximates the solution well enough for arbitrary geometries.

The second approach is d'Eon and his colleagues' texture-space approach,4,10 which extends the idea behind translucent shadow maps to leverage the fact that the irradiance is calculated at each point of the surface being rendered. Texture-space diffusion per se doesn't account for scattering in areas that are close in 3D space but far apart in texture space, so simulating this effect requires special measures. Translucent shadow maps store the depth z, the irradiance, and the normal of each point on the surface nearest the light, whereas the proposed modified translucent shadow maps store z and these points' (u, v) coordinates (see Figure 4). Then, while rendering, for example, zout2, you can access the shadow map to obtain the (uin2, vin2) coordinates, which you can then use to obtain the irradiance at the back of the object. As in Green's approach, you can calculate the distance traveled through the object using the depth information stored in the shadow map.
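
A minimal sketch of how the modified translucent shadow map could be queried, with sampleTsm and sampleIrradiance as hypothetical stand-ins for the actual texture fetches:

```cpp
#include <algorithm>

// One texel of the modified translucent shadow map: depth plus the (u, v)
// texture coordinates of the surface point nearest the light.
struct TsmTexel { float z, u, v; };

// Stubs standing in for texture fetches in a shader.
static TsmTexel sampleTsm(float lu, float lv)      { return { 0.2f, lu, lv }; }
static float    sampleIrradiance(float u, float v) { return 1.0f; }

// While shading a point at light-space coordinates (lu, lv) and depth zOut,
// fetch the stored (u, v) to read the irradiance on the back of the object,
// and recover the traveled distance s as in Green's approach.
static float backIrradiance(float lu, float lv, float zOut, float& s)
{
    TsmTexel nearest = sampleTsm(lu, lv);
    s = std::max(zOut - nearest.z, 0.0f);
    return sampleIrradiance(nearest.u, nearest.v);
}
```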

As Figure 5 shows, the approach can approximate the radiant exitance at point C by the radiant exitance M(x, y) at point B, where it is faster to calculate, using the irradiance information E(x, y) around point A on the back of the object:

$$M(x, y) = \int\!\!\int E(x', y')\, R\left(\sqrt{r^2 + d^2}\right) dx'\, dy', \qquad r = \sqrt{(x - x')^2 + (y - y')^2}$$

Figure 5. In d'Eon and his colleagues' approach, the radiant exitance at point C is approximated by the radiant exitance at point B, where it is faster to calculate, using the irradiance information E around point A.4,10 L represents the light vector, N is the surface normal, d is the distance between A and B, s is the distance between A and C, and r is the distance from A to sampled points around it.

As we saw, d'Eon and his colleagues calculate $R\left(\sqrt{r^2 + d^2}\right)$ using the Gaussian-sum approximation (see Equation D in the "Diffusion Profiles and Convolutions" sidebar):4

$$R\left(\sqrt{r^2 + d^2}\right) = \sum_{i=1}^{k} w_i\, G\left(v_i, \sqrt{r^2 + d^2}\right) = \sum_{i=1}^{k} w_i\, e^{-d^2/(2 v_i)}\, G(v_i, r)$$
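
The second equality is just the separability of the Gaussian in the tangent and depth directions. Assuming the usual two-dimensional normalization $G(v, r) = e^{-r^2/(2v)} / (2\pi v)$ (any normalization with the exponent $-r^2/(2v)$ factors the same way):

$$G\left(v, \sqrt{r^2 + d^2}\right) = \frac{1}{2\pi v}\, e^{-\frac{r^2 + d^2}{2v}} = e^{-\frac{d^2}{2v}} \cdot \frac{1}{2\pi v}\, e^{-\frac{r^2}{2v}} = e^{-\frac{d^2}{2v}}\, G(v, r)$$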

This lets them reuse the irradiance maps convolved with each G(vi, r), already computed for the reflectance calculation, for transmittance computations.
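
As a concrete illustration of this reuse, here's a small C++ sketch. The three weights and variances are made-up placeholders rather than a real skin profile, and blurredIrradiance[i] stands for the irradiance map already convolved with G(vi, r), sampled at the current texel:

```cpp
#include <array>
#include <cmath>
#include <cstddef>

// Hypothetical 3-Gaussian profile (w_i, v_i); a real fit, such as a
// six-Gaussian skin profile, would be used in practice.
static const std::array<float, 3> w = { 0.25f, 0.45f, 0.30f };
static const std::array<float, 3> v = { 0.05f, 0.20f, 0.60f };

// Transmitted radiant exitance at a texel. Each irradiance map is already
// convolved with G(v_i, r) for reflectance, so transmittance only needs
// the extra per-Gaussian factor exp(-d^2 / (2 v_i)).
static float transmitted(const std::array<float, 3>& blurredIrradiance,
                         float d) // distance traveled through the object
{
    float m = 0.0f;
    for (std::size_t i = 0; i < w.size(); ++i)
        m += w[i] * std::exp(-d * d / (2.0f * v[i])) * blurredIrradiance[i];
    return m;
}
```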

With shadow-map-based transmittance, high-frequency features in the depth of the shadow map might turn into high-frequency shading features. This is generally a problem when rendering translucent objects, for which a softer appearance is expected. Green recommends sampling multiple points from the shadow map to soften these high-frequency depth changes. In d'Eon's texture-space approach, the distance traveled by the light inside the object is stored in the irradiance map's alpha channel and blurred together with the irradiance information. The downside is that there's no obvious way to extend this to multiple lights, because the alpha channel can store the distance of only one light.
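
A minimal sketch of Green's multiple-sample softening, reusing the stubbed sampleShadowDepth fetch from earlier and a four-tap pattern of our own choosing:

```cpp
#include <algorithm>

// Stub standing in for a shadow-map texture fetch, as before.
static float sampleShadowDepth(float /*u*/, float /*v*/) { return 0.2f; }

// Average the traveled distance over several shadow-map taps around (u, v)
// to soften high-frequency depth changes before applying T(s).
static float softenedDistance(float u, float v, float zOut, float radius)
{
    const float offsets[4][2] = {
        { -radius, -radius }, { radius, -radius },
        { -radius,  radius }, { radius,  radius },
    };
    float s = 0.0f;
    for (const auto& o : offsets)
        s += std::max(zOut - sampleShadowDepth(u + o[0], v + o[1]), 0.0f);
    return s * 0.25f; // averaged distance yields a smoother transmittance
}
```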

Although Green’s approach is physically based, if we use Beer’s law instead of artist-controlled attenuation textures, it doesn’t account for the attenuation of the light in multilayer materials. On the other side, d’Eon and his colleagues’ approach requires texture-space diffusion, because in screen space there are no irradiance maps or irradiance information in the back of the object. Furthermore, the approach requires storing three floats in each shadow map (depth and texture coordinates), whereas regular shadow mapping requires storing only depth. This implies 3× memory usage and 3× bandwidth consumption for each shadow map.

Our Algorithm
Building on these ideas, we present a simple yet physically based transmittance shader. For this, we need a physically based function T(s) that relates the attenuation of light to the distance traveled inside an object. First, we make four observations: