Rendering and shading in ADAM: Episode 3

Have you seen ADAM: The Mirror and ADAM: Episode 3 yet? These two short films have captivated millions of viewers, and many are eager to know how Neill Blomkamp and his team did such cool effects in real-time. Read our in-depth, behind-the-scenes posts on lighting, Alembic support, clothing simulation, Timeline, shaders, real-time rendering and more.

John Parsaie is a Software Engineer on the Made with Unity team. On the recent ADAM films, John delivered features like subsurface scattering, transparent post-processing effects, Alembic graphics integrations, and more. Prior to his time on the team, John worked as an engineering intern at Unity and Warner Bros. Interactive, while studying in Vermont and Montreal.

Yibing Jiang is the Technical Art Supervisor on the Made with Unity team. Before joining Unity, she worked at top studios on both animated feature films and AAA games, including character shading for Uncharted 4 (Naughty Dog), sets shading for Monsters University and Cars 2 (Pixar), and look development for Wreck-It Ralph (Disney).

Setting the stage: Frame breakdown

In the striking, real-time ADAM films, a number of components come together in Unity to deliver the effects that have gained so much attention. In this post, we focus on just two aspects – albeit very important aspects – of how Oats Studios achieved such memorable effects. So if you’d like to know more about the custom shaders that these artists used, and real-time rendering of just one frame with Unity 2017.1, read on!

The frame in ADAM: Episode 3 to be analyzed

In this post we will be cracking open the frame above from ADAM: Episode 3 using RenderDoc – an extremely useful tool for frame analysis and graphics debugging – to give you an insider’s understanding of some of the visuals Oats accomplished. Conveniently, RenderDoc already has built-in editor integration with Unity, making it the logical next step from our own Frame Debugger, in case you want to do similar analysis on one of your own projects. Read more about RenderDoc and Unity here.

Rendering the G-Buffer

Both ADAM films were rendered on Unity 2017.1’s deferred render path. This means that all opaque objects* are rendered into a set of buffers collectively referred to as the G-Buffer, or the Geometry Buffer. The G-Buffer contains all the data necessary to perform the actual lighting calculations further down the render pipe.

The G-Buffer (Geometry Buffer), alpha channels in left corners

By setting up multiple render targets (MRT), the following data can be written to all four constituents of the G-Buffer within the same draw call, for each opaque object as shown.

1. Diffuse Color RGB / Occlusion A (ARGB32)

The “intrinsic color” and baked ambient occlusion of geometry.

2. Specular Color RGB / Smoothness A (ARGB32)

Unity supports both specular and metallic PBR workflows. Internally, however, both workflow inputs boil down to the same information written in this buffer. This is done to unify PBR inputs under the same shading model, which is used later.

3. World Normal RGB / Unused A (ARGB2101010)

A higher precision buffer is used to store the world space normals, or the facing direction of a surface, at each pixel. This information is crucial for calculating illuminance with respect to a light source.

4. Emission, Ambient RGB / Unused A (ARGBHalf)

Emission and ambient GI are rendered into this buffer. Later down the pipe, this buffer is also used to gather reflections and to accumulate lighting. This buffer is set to an LDR or HDR format, depending on the user setting. ADAM is rendered with HDR lighting.

*Transparents and opaque items with custom shading models are handled differently, further down the pipe.
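
To make that layout concrete, here is a minimal sketch in Python/NumPy of the kind of data one opaque fragment writes to the four MRT targets in a single draw call. The container and function names are our own shorthand for illustration, not Unity's internal structures.

```python
import numpy as np

def allocate_gbuffer(width, height):
    """The four MRT targets described above (conceptual layout only)."""
    return {
        # RT0: diffuse color RGB + baked occlusion in A (8 bits per channel)
        "diffuse_occlusion": np.zeros((height, width, 4), dtype=np.uint8),
        # RT1: specular color RGB + smoothness in A (8 bits per channel)
        "specular_smoothness": np.zeros((height, width, 4), dtype=np.uint8),
        # RT2: world-space normal in higher-precision channels, A unused
        # (stored here as floats remapped from [-1, 1] to [0, 1])
        "world_normal": np.zeros((height, width, 4), dtype=np.float32),
        # RT3: emission + ambient GI, HDR, so a half-float format
        "emission_ambient": np.zeros((height, width, 4), dtype=np.float16),
    }

def write_fragment(gbuffer, x, y, albedo, occlusion, specular, smoothness, normal, emission):
    """What a single opaque fragment conceptually outputs in one MRT draw call."""
    gbuffer["diffuse_occlusion"][y, x] = np.round(np.append(albedo, occlusion) * 255)
    gbuffer["specular_smoothness"][y, x] = np.round(np.append(specular, smoothness) * 255)
    n = normal / np.linalg.norm(normal)                            # ensure unit length
    gbuffer["world_normal"][y, x] = np.append(n * 0.5 + 0.5, 0.0)  # remap to [0, 1]
    gbuffer["emission_ambient"][y, x] = np.append(emission, 0.0)   # HDR values allowed

# Example: one grey, fairly rough, upward-facing pixel with no emission.
gb = allocate_gbuffer(4, 4)
write_fragment(gb, 1, 1,
               albedo=np.array([0.8, 0.8, 0.8]), occlusion=1.0,
               specular=np.array([0.04, 0.04, 0.04]), smoothness=0.3,
               normal=np.array([0.0, 1.0, 0.0]), emission=np.array([0.0, 0.0, 0.0]))
```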

The depth-stencil buffer

As the G-Buffer is rendered, so is the scene’s depth into its own special buffer. Storing depth is critical in real-time graphics for holding onto our sense of the third dimension, both during and after the projection of the scene onto a 2D space. It is also essential for reconstructing the world position of a pixel, needed later in deferred shading. Moreover, this is the “bread and butter” for the advanced post-processing effects we all know and love in real-time.

The depth and stencil buffers

The stencil buffer (right) shares the same resource as the depth buffer. It is particularly useful for categorizing pixels based on what was rendered to them. We can use that information later to discriminate between pixels and choose what kind of work is done on them. In Unity’s case, the stencil buffer is used for light culling masks. Specifically for ADAM, it is also used to mark objects that exhibit subsurface scattering (SSS).

Subsurface profile buffer

Regarding subsurface scattering, the renderer was extended to also write indices into an extra buffer (during G-Buffer generation) that is later used for lookups into buckets containing the important data sent from the subsurface profiles. This buffer also stores a scalar representing how much scattering should occur.

Subsurface profile buffer: (R) profile index, (G) scatter radius

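As a rough sketch of the idea (with hypothetical profile data and buffer names, not Unity's actual implementation): the extra target stores a profile index and a scatter scale per pixel, and the scattering pass later uses the index to look the profile's parameters back up.

```python
import numpy as np

# Hypothetical "buckets" of per-profile data uploaded from the editor-side
# subsurface profiles (the names and values here are made up for illustration).
profiles = [
    {"name": "skin",   "scatter_color": np.array([0.9, 0.3, 0.2]), "radius_mm": 2.5},
    {"name": "marble", "scatter_color": np.array([0.6, 0.6, 0.5]), "radius_mm": 6.0},
]

H, W = 4, 4
# Extra render target written alongside the G-Buffer:
#   R = profile index (0 meaning "no scattering"), G = per-pixel scatter scale.
profile_buffer = np.zeros((H, W, 2), dtype=np.float32)

# During G-Buffer generation, a skin fragment might write index 1 and a scale of 0.8.
profile_buffer[2, 2] = [1, 0.8]

# Further down the pipe, the scattering pass looks the data back up per pixel.
def fetch_scatter_params(x, y):
    index, scale = profile_buffer[y, x]
    if index == 0:
        return None                      # this pixel takes no part in scattering
    p = profiles[int(index) - 1]
    return p["scatter_color"], p["radius_mm"] * scale

print(fetch_scatter_params(2, 2))
```
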
As mentioned, that important data comes from subsurface diffusion profiles, which a user creates on the Editor side. These user-defined profiles define how diffuse light scatters within highly translucent media.

A subsurface scattering profile

We are also able to control forward-scattering, or transmittance, through these profiles. Examples of this are shots where light transmits through the thin flesh of the ears and nostrils. All of this information is sent to the GPU to be read later.

Next steps

With the G-Buffer rendered, we have effectively deferred all complexity of the scene geometry onto a handful of buffers. Doing this makes nearly all of our future calculations a reasonably fixed, predictable cost; this is because lighting calculations are now completely agnostic to the actual geometric complexity of the scene. Prior to main lighting, however, a few key preliminary passes remain, which are explored below.

Environment reflections

Using data from the recently created G-Buffer, a calculation is run against the Skybox cubemap to obtain environment reflections. This calculation takes into account roughness, normals, view direction, specular color, and more, pushing them through a series of equations to produce a physically correct specular response from the environment. These reflections are additively blended into the emissive HDR buffer.

The environment reflections

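As a rough illustration of what such a lookup involves, the sketch below reflects the view direction about the surface normal and picks a blurrier mip of a pre-convolved cubemap as roughness increases. The helper names and the roughness-to-mip mapping are assumptions made for illustration, not Unity's shader code.

```python
import numpy as np

def reflect(view_dir, normal):
    """Mirror the view direction about the surface normal (both unit vectors)."""
    return view_dir - 2.0 * np.dot(view_dir, normal) * normal

def roughness_to_mip(smoothness, mip_count=7):
    """Rougher surfaces read blurrier (higher) mips of the pre-convolved cubemap."""
    roughness = 1.0 - smoothness
    return roughness * (mip_count - 1)

view_dir = np.array([0.0, 0.0, -1.0])          # ray from the camera into the screen
normal = np.array([0.0, 0.3, 0.954])           # surface tilted toward the camera
normal /= np.linalg.norm(normal)

r = reflect(view_dir, normal)
mip = roughness_to_mip(smoothness=0.75)
# env_color = sample_cubemap(skybox, direction=r, mip=mip)   # GPU-side lookup (hypothetical)
# emission_buffer[y, x] += env_color * specular_term         # additive blend into the HDR buffer
print(r, mip)
```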

Shadows

Nearly all preliminary work required by the renderer is now complete. With that, the renderer enters the deferred lighting phase, which begins with shadows.

Unity uses a well-known technique called cascaded shadow mapping (CSM) on its directional lights. The idea is simple: our eyes can’t make out much detail the further away we look, so why should so much effort be put into calculating the faraway details in computer graphics? CSM works with this fact, and actually distributes, or cascades, the resolution of the shadow map based on distance from the camera.

 (L) Cascaded Shadow Map (CSM), (R) Spot Light Shadow Map 

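The sketch below shows one common way to place the cascade split distances, blending logarithmic and uniform spacing. Unity exposes its own cascade split ratios, so treat this purely as an illustration of the idea.

```python
def cascade_splits(near, far, cascades=4, blend=0.5):
    """Blend of logarithmic and uniform split distances, a common CSM scheme.
    (Illustrative only; Unity exposes its own cascade split ratios.)"""
    splits = []
    for i in range(1, cascades + 1):
        p = i / cascades
        log_split = near * (far / near) ** p
        uniform_split = near + (far - near) * p
        splits.append(blend * log_split + (1.0 - blend) * uniform_split)
    return splits

# Near cascades cover little distance, so they get far more shadow-map texels per meter.
print(cascade_splits(near=0.3, far=150.0))
```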

In this particular shot, the directional light CSM is actually only used on the environment geometry, leaving the two characters to be handled by a set of spotlights! This was done in some shots, including this one, because it gave the lighters at Oats Studios better flexibility in accentuating the key visuals of a shot.

Screen-Space Shadows

We also deployed a technique called “screen-space shadows”, sometimes known as “contact shadows”, which grants us some highly detailed shadows by raymarching in the depth buffer. This technique was especially important because it was able to capture the granular shadow details in Oats’ photogrammetry environment, which even CSM was not strong enough to capture at times. Screen-space shadows work together with Unity’s shadow-mapping techniques to “fill in” light leaks.

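Conceptually, a contact-shadow pass marches a short ray from each shaded point toward the light and checks whether anything in the depth buffer blocks it. The toy sketch below uses a fake orthographic projection and made-up scene values to show only the core loop.

```python
import numpy as np

def contact_shadow(depth, world_pos, to_light, project, steps=8, max_dist=0.25, bias=0.02):
    """March a short ray from the surface toward the light; if any depth-buffer
    sample sits closer to the camera than the ray, something is blocking the light."""
    for i in range(1, steps + 1):
        p = world_pos + to_light * (max_dist * i / steps)    # point along the ray
        x, y, ray_depth = project(p)                         # screen coords + view depth
        if not (0 <= x < depth.shape[1] and 0 <= y < depth.shape[0]):
            break                                            # ray left the screen
        if depth[y, x] < ray_depth - bias:                   # occluder in front of the ray
            return 0.0                                       # fully shadowed
    return 1.0                                               # unoccluded

# Toy orthographic "projection": x/y map straight to pixels, view depth is -z.
def project(p):
    return int(p[0] * 10), int(p[1] * 10), -p[2]

depth = np.full((16, 16), 100.0)        # far background
depth[5:8, 5:8] = 1.0                   # a small occluder near the camera
shaded_point = np.array([0.55, 0.55, -2.0])
to_light = np.array([0.0, 0.0, 1.0])    # light lies beyond the occluder
print(contact_shadow(depth, shaded_point, to_light, project))   # -> 0.0 (shadowed)
```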

Deferred shading

With all of the pieces in place, we’re now equipped with enough information to completely reconstruct the lighting scenario at each pixel.

All inputs in one of the deferred lighting passes

The deferred lighting stage performs a pass for each light in view, accumulating light into the HDR buffer on every pass. The contents of the G-Buffer are computed against the current light’s information, including its shadow map.

As a first step, the world space position of the pixel is reconstructed from the depth buffer, which is then used to calculate the direction from the surface point to the eye. This is crucial in determining the view-dependent specular reflections later. Shadow and other light information (cookie, etc.) is also gathered into a single scalar term to attenuate the final result. Next, all surface data is fetched from the G-Buffer. Finally, everything gets submitted to our shading model, a physically based, microfacet bidirectional reflectance distribution function (BRDF) for final shading.

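Here is a heavily simplified sketch of that per-pixel sequence: reconstruct the world position through the inverse view-projection matrix, build the view vector, take the surface data, evaluate a microfacet BRDF, and attenuate by the shadow term. The BRDF below is a textbook GGX/Smith/Schlick form standing in for Unity's actual shading code, and identity matrices stand in for a real camera.

```python
import numpy as np

def reconstruct_world_pos(uv, depth01, inv_view_proj):
    """Screen UV + non-linear depth -> NDC -> back through the inverse
    view-projection matrix to world space (the usual deferred-shading trick)."""
    ndc = np.array([uv[0] * 2 - 1, uv[1] * 2 - 1, depth01 * 2 - 1, 1.0])
    p = inv_view_proj @ ndc
    return p[:3] / p[3]

def microfacet_brdf(n, v, l, albedo, f0, roughness):
    """Lambert diffuse + GGX/Smith/Schlick specular, multiplied by N.L."""
    h = v + l
    h /= np.linalg.norm(h)
    nl, nv = max(np.dot(n, l), 1e-4), max(np.dot(n, v), 1e-4)
    nh, vh = max(np.dot(n, h), 0.0), max(np.dot(v, h), 0.0)
    a2 = roughness ** 4                                           # alpha = roughness^2
    d = a2 / (np.pi * (nh * nh * (a2 - 1) + 1) ** 2)              # GGX distribution
    k = (roughness + 1) ** 2 / 8
    g = (nl / (nl * (1 - k) + k)) * (nv / (nv * (1 - k) + k))     # Smith geometry term
    f = f0 + (1 - f0) * (1 - vh) ** 5                             # Schlick Fresnel
    specular = d * g * f / (4 * nl * nv)
    return (albedo / np.pi + specular) * nl

# One pixel: reconstruct position, build vectors, fetch G-Buffer values, shade, attenuate.
inv_vp = np.linalg.inv(np.eye(4))               # identity stands in for a real camera
world_pos = reconstruct_world_pos(uv=(0.5, 0.5), depth01=0.6, inv_view_proj=inv_vp)
eye_pos = np.array([0.0, 0.0, 2.0])
v = eye_pos - world_pos
v /= np.linalg.norm(v)
n = np.array([0.0, 0.0, 1.0])                   # from the normal buffer
l = np.array([0.3, 0.6, 0.75])
l /= np.linalg.norm(l)
shadow = 0.85                                   # scalar from shadow map, cookie, etc.
radiance = microfacet_brdf(n, v, l, albedo=np.array([0.5, 0.4, 0.35]),
                           f0=np.array([0.04, 0.04, 0.04]), roughness=0.5)
hdr_pixel = radiance * np.array([1.0, 0.95, 0.9]) * shadow   # light color * shadow, accumulated
```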

All final lighting is accumulated for opaque objects, except subsurface objects

At this point we nearly have a fully shaded scene, but what’s up with the white outlines? Well, if you remember, those were the pixels that we marked in the stencil buffer for subsurface scattering, and we’re not quite done shading them.

Subsurface scattering

As briefly mentioned earlier, subsurface scattering is the scattering and re-emergence of diffuse light, most easily seen in translucent media (in fact, subsurface scattering occurs in all non-metals to some degree; you just don’t notice it most of the time). One of the most classic examples is skin.

But what does subsurface scattering really mean in the context of real-time computer graphics?

(L) Scattering distance is smaller than the pixel, (R) Scattering distance is larger than the pixel

(For more information, see Naty Hoffman’s SIGGRAPH 2013 course notes on mathematics in shading.)

The answer actually poses a big problem. Both diagrams above contain a green circle, which represents a pixel, with incident light arriving at its center. The blue arrows represent diffuse light, and the orange arrows specular. The left diagram shows that all of the diffuse light scatters in the material and re-emerges within the bounds of the same pixel. This is the case for nearly all materials one could hope to render, allowing the safe assumption that outgoing diffuse light emits from the entry point.

The problem arises when rendering a material that scatters diffuse light so much that it re-emerges beyond the bounds of the pixel, as shown on the right. The previous assumption is of no help in a situation like this, and more advanced techniques must be explored to solve it.

(L) Diffuse, (R) Specular

Following current state-of-the-art, real-time subsurface scattering techniques, a special screen-space blur is deployed on the lighting. Before we can do that, though, we must first separate the diffuse and specular lighting into their own buffers. Why bother doing this? Looking back at the diagrams, you will see that specular light immediately reflects off the surface, taking no part in the subsurface phenomenon. It makes sense to keep it separated from the diffuse lighting, at least until after performing the subsurface approximation.

Below you can see a closer capture of the split lighting for clarity. Note that all specular light is completely separated from the diffuse, allowing for the work needed on the irradiance/diffuse buffer on the left to be done without any concern for damaging the integrity of the high-frequency detail in the specular.

(L) Diffuse buffer, (R) Specular buffer

Armed with the diffusion kernels created and sent from user-created subsurface profiles, a screen-space blur approximates this subsurface scattering phenomenon.

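To illustrate the shape of that pass, the sketch below runs a toy separable blur over the diffuse buffer only, restricted to the stencil-marked pixels, and then adds the untouched specular back on top. The simple Gaussian kernel is a stand-in for the actual diffusion kernel derived from the profile.

```python
import numpy as np

def separable_blur(image, kernel):
    """Apply a 1D kernel horizontally, then vertically (a toy stand-in for the
    actual diffusion kernel sent from the subsurface profile)."""
    pad = len(kernel) // 2
    out = np.zeros_like(image)
    padded = np.pad(image, ((0, 0), (pad, pad), (0, 0)), mode="edge")
    for i, w in enumerate(kernel):                       # horizontal pass
        out += w * padded[:, i:i + image.shape[1], :]
    out2 = np.zeros_like(image)
    padded = np.pad(out, ((pad, pad), (0, 0), (0, 0)), mode="edge")
    for i, w in enumerate(kernel):                       # vertical pass
        out2 += w * padded[i:i + image.shape[0], :, :]
    return out2

H, W = 32, 32
diffuse = np.zeros((H, W, 3))
diffuse[16, 16] = [4.0, 1.0, 0.5]                        # a brightly lit skin pixel (HDR)
specular = np.random.rand(H, W, 3) * 0.1                 # left completely untouched
sss_mask = np.zeros((H, W, 1))
sss_mask[12:20, 12:20] = 1.0                             # stencil-marked subsurface pixels

kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
kernel /= kernel.sum()
scattered = separable_blur(diffuse, kernel)

# Scatter only where the stencil marked subsurface materials, then recombine.
final = np.where(sss_mask > 0, scattered, diffuse) + specular
```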

Multiple subsurface profiles can be used for different materials

By recombining with the specular at the end of the calculation, we have addressed our original problem! A technique like this is extremely effective at approximating the scattering that should occur outside the reach of a pixel. At this point, all of the opaque objects in the scene are now shaded.

All opaque objects are fully shaded

What comes next is rendering of screen-space reflections (SSR), skybox, screen-space ambient occlusion (SSAO), and transparents. Below you can observe the stepping through of these passes.

Rendering of SSR, skybox, SSAO, and transparents

The importance of motion blur

Motion blur did play a key role in the films. Offering yet another axis of subtle cinematic quality, motion blur was instrumental in making or breaking some shots.

(L) The motion vector texture, (R) Motion blur calculated with motion vectors

Of course, to render motion blur requires the renderer to have knowledge of motion itself. This information is acquired by first rendering a preliminary motion vector texture (left). This buffer is produced by calculating the delta between the current and previous vertex positions in screen space, yielding velocities to use for calculating motion blur.

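A minimal sketch of that calculation: project the current and previous positions with their respective camera matrices and take the difference in screen UV space. Identity matrices stand in for real view-projection matrices here.

```python
import numpy as np

def ndc_to_uv(clip_pos):
    """Perspective divide, then remap NDC xy from [-1, 1] to [0, 1] screen UV."""
    ndc = clip_pos[:2] / clip_pos[3]
    return ndc * 0.5 + 0.5

def motion_vector(world_pos_now, world_pos_prev, view_proj_now, view_proj_prev):
    """Screen-space delta between this frame's and last frame's projected position."""
    p_now = view_proj_now @ np.append(world_pos_now, 1.0)
    p_prev = view_proj_prev @ np.append(world_pos_prev, 1.0)
    return ndc_to_uv(p_now) - ndc_to_uv(p_prev)

# Identity matrices stand in for real camera matrices; the vertex moved +0.2 in x.
vp = np.eye(4)
mv = motion_vector(np.array([0.2, 0.0, 0.5]), np.array([0.0, 0.0, 0.5]), vp, vp)
print(mv)   # -> [0.1, 0.0]: the blur direction and magnitude in UV space
```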

Some extra work was done to properly obtain motion vectors from the Alembic streams. For details, see my colleague Sean’s recent post on that and other Alembic topics.

Post-FX

Before/after applying final post-processing

We finally arrive at the uber-shaded post-processing pass. Here, final color grading, ACES tonemapping, vignette, bloom, and depth of field are composited, providing a near-complete image. One thing is off, though: where is Marian’s visor?

Marian’s visor

Dealing with transparency in real-time graphics is a well-known issue in the industries that use it. Effects like depth of field, screen-space reflections, motion blur, and ambient occlusion all require some spatial awareness, reconstructed from the scene depth – but how could that work for a pixel covered by a transparent object? You would need two or more depth values!

What is done instead is to first render all opaque objects to the scene, followed by a special back-to-front forward pass of the transparent object list, blending each pass to the frame without writing depth. This effectively ignores the issue as best it can, getting the job done just fine for most things, like character eyebrows or the incense smoke.

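In sketch form, that forward pass amounts to sorting the transparent draws by distance and applying standard “over” blending onto the already-shaded opaque frame, reading depth for the z-test but never writing it. The scene values below are invented for illustration.

```python
import numpy as np

def over_blend(dst, src_rgb, src_alpha):
    """Standard 'over' alpha blending; depth is read for the z-test but never written."""
    return src_rgb * src_alpha + dst * (1.0 - src_alpha)

# Each transparent draw: (distance from camera, color, alpha). Values are invented.
transparents = [
    (3.0, np.array([0.9, 0.9, 1.0]), 0.30),   # e.g. the incense smoke
    (1.5, np.array([0.1, 0.1, 0.1]), 0.60),   # e.g. strands of eyebrow hair
]

pixel = np.array([0.2, 0.25, 0.3])            # already-shaded opaque frame
for _, color, alpha in sorted(transparents, key=lambda t: -t[0]):   # back to front
    pixel = over_blend(pixel, color, alpha)
print(pixel)
```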

Marian’s visor shown receiving unrealistic reflections, no AO, incorrect DoF, no motion blur

However, as seen in the examples above, ignoring the issue will not fly for Marian’s transparent visor, which took up nearly half of the screen time of what is intended to be a cinematic short film. We need some sort of alternative for this specific corner case, and quickly.

The solution during production was to defer transparency to a composite pass between two fully shaded frames. As you have already seen so far in this post, the first frame contains everything but the visor. After rendering the first frame, its G-Buffer and depth are carried over into a second render pass for the second frame, in which the visor is rendered as an opaque object.

Visor transparency was deferred to a composite pass between two fully shaded frames

Running a second post-process pass on the second frame and armed with the contents of the original frame’s G-Buffer and depth, we can successfully obtain proper SSR, SSAO, motion blur, and depth of field for the visor. All it takes to get the mask back into the original frame is to composite by the visor’s alpha, which will get blurred by motion blur or depth of field.

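The final composite then reduces to a per-pixel blend by the visor’s alpha. A toy sketch with made-up frame values:

```python
import numpy as np

def composite_visor(frame_without_visor, frame_with_opaque_visor, visor_alpha):
    """Blend the second fully post-processed frame (visor rendered as an opaque)
    back over the first, using the visor's alpha as the mask. The mask has already
    been softened by motion blur / depth of field, so the edges stay plausible."""
    a = visor_alpha[..., None]                     # (H, W) -> (H, W, 1) for broadcasting
    return frame_with_opaque_visor * a + frame_without_visor * (1.0 - a)

H, W = 4, 4
frame_a = np.full((H, W, 3), 0.4)                  # everything except the visor
frame_b = np.full((H, W, 3), 0.7)                  # visor shaded opaque, with SSR/AO/DoF intact
alpha = np.zeros((H, W))
alpha[1:3, 1:3] = 0.85                             # where, and how strongly, the visor covers
final = composite_visor(frame_a, frame_b, alpha)
```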

With and without the technique. Notice the expected occlusion on the right.

By taking this necessary step for Marian’s visor we integrated it nicely back into the picture, as shown in the above comparison. You will notice the proper SSR and AO taking effect on the right. While by no means an all-encompassing solution for the transparency problem, this technique addressed the original corner case and offered full post-processing compliance for a transparent object.

The finishing touch: Flares

Putting their in-house flare system to good use, Oats Studios completely elevated their picture’s cinematic quality with this great finishing touch. Animated and sequenced in Timeline, these lens flares are additively blended to the frame, producing our final picture.

The final shaded frame, and the lens flares to be added on top

Final result

Here you see how everything we’ve discussed is rendered at runtime.

Marian approaches her hostage brother with a rock

In summary, frame breakdowns are a great way to understand the interesting choices graphics teams make to suit the needs of a production, as well as being a trove of useful information to learn from and use in your own projects. If you’d like to know more about this type of analysis, check out Adrian Courrèges’ excellent graphics study series, where he deconstructs frames from various AAA titles.

Looking ahead

Unity has big plans to deliver enhanced graphics features (like the subsurface scattering used in this film) to every user in 2018. With what we call the Scriptable Render Pipeline (SRP), a new set of APIs now available in the 2018 beta, users can define the renderer for themselves. We will also be shipping a template implementation of SRP called the High Definition Render Pipeline (HDRP), a modern renderer that includes subsurface scattering and other awesome new features. The subsurface scattering used in ADAM was a direct port from the HDRP to the 2017.1 stock renderer.

If you are curious and want to know more about SRP and what Unity has in store for graphics this year, be sure to check out Tim Cooper’s 2018 and Graphics post.

Learn more about Unity

Find out how Unity 2017 and its features like Timeline, Cinemachine, Post-Processing Stack, and real-time rendering at 30 FPS help teams like Oats Studios change the future of filmmaking.

Source: https://blogs.unity3d.com/2018/01/24/rendering-and-shading-in-adam-episode-3/