Unity Rendering, Part 2: Shader Fundamentals (study notes).

Can we make any sense of that fourth coordinate? Does it represent anything useful? We know that we give it the value 1 to enable repositioning of points. If its value were 0, the offset would be ignored, but scaling and rotation would still happen.

Something that can be scaled and rotated, but not moved. That is not a point, that is a vector. A direction.

So [x, y, z, 1] represents a point, while [x, y, z, 0] represents a vector. This is useful, because it means that we can use the same matrix to transform positions, normals, and tangents.
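
As a rough sketch in shader code (the names localPosition and localNormal are assumptions here, while unity_ObjectToWorld is Unity's built-in object-to-world matrix), the same multiplication handles both cases; only the fourth component differs:

    // Transforming a point: w = 1, so the matrix's translation is applied.
    float4 worldPosition = mul(unity_ObjectToWorld, float4(localPosition, 1));

    // Transforming a direction: w = 0, so the translation is ignored,
    // while rotation and scaling still apply.
    float3 worldNormal = mul(unity_ObjectToWorld, float4(localNormal, 0)).xyz;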

 

The background color is defined per camera. It renders the skybox by default, but it too falls back to a solid color.

 

 If the object ends up in the camera's view, it is scheduled for rendering.

(Figure: Who controls what.)

 

Change our sphere object so it uses our own material, instead of the default material. The sphere will become magenta. This happens because Unity will switch to an error shader, which uses this color to draw your attention to the problem.


The shader error mentioned sub-shaders. You can use these to group multiple shader variants together. This allows you to provide different sub-shaders for different build platforms or levels of detail. For example, you could have one sub-shader for desktops and another for mobiles.

The sub-shader has to contain at least one pass. A shader pass is where an object actually gets rendered. Having more than one pass means that the object gets rendered multiple times, which is required for a lot of effects.
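
As a minimal sketch (the shader name and menu path here are assumptions), the structure looks like this, with one sub-shader containing a single, still empty, pass:

    Shader "Custom/My First Shader" {
        SubShader {
            Pass {
            }
        }
    }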

Shaders consist of two programs each. The vertex program is responsible for processing the vertex data of a mesh. This includes the conversion from object space to display space, just like we did in part 1, Matrices. The fragment program is responsible for coloring individual pixels that lie inside the mesh's triangles.
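
A hedged sketch of how the two programs are hooked up inside a pass; the program names are assumptions, and the bodies are still empty at this stage:

    Pass {
        CGPROGRAM

        // Tell the compiler which functions act as the vertex and fragment programs.
        #pragma vertex MyVertexProgram
        #pragma fragment MyFragmentProgram

        void MyVertexProgram () {
        }

        void MyFragmentProgram () {
        }

        ENDCG
    }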

 

 

Unity's shader compiler takes our code and transforms it into a different program, depending on the target platform. Different platforms require different solutions. For example, Direct3D for Windows, OpenGL for Macs, OpenGL ES for mobiles, and so on. We're not dealing with a single compiler here, but multiple.

Which compiler you end up using depends on what you're targeting. And as these compilers are not identical, you can end up with different results per platform. For example, our empty programs work fine with OpenGL and Direct3D 11, but fail when targeting Direct3D 9.

You can manually compile for other platforms as well, either your current build platform, all platforms you have licenses for, or a custom selection. This enables you to quickly make sure that your shader compiles on multiple platforms, without having to make complete builds.

 

UnityCG.cginc is one of the shader include files that are bundled with Unity. It includes a few other essential files, and contains some generic functionality.
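
Pulling it in is a single directive inside the CGPROGRAM block:

    #include "UnityCG.cginc"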

(Figure: Include file hierarchy, starting at UnityCG.)

UnityShaderVariables.cginc defines a whole bunch of shader variables that are necessary for rendering, like transformation, camera, and light data. These are all set by Unity when needed.

HLSLSupport.cginc sets things up so you can use the same code no matter which platform you're targeting. So you don't need to worry about using platform-specific data types and such.

UnityInstancing.cginc is specifically for instancing support, which is a specific rendering technique to reduce draw calls. Although it doesn't include the file directly, it depends on UnityShaderVariables.

 

The compiler sees that we're returning a collection of four floats, but it doesn't know what that data represents. So it doesn't know what the GPU should do with it. We have to be very specific about the output of our program.

In this case, we're trying to output the position of the vertex. We have to indicate this by attaching the SV_POSITION semantic to our method. SV stands for system value, and POSITION for the final vertex position.
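
For example, a vertex program that outputs the transformed vertex position could look like this; the POSITION input semantic supplies the object-space vertex position, and UnityObjectToClipPos is a helper from UnityCG.cginc:

    float4 MyVertexProgram (float4 position : POSITION) : SV_POSITION {
        // Convert from object space to clip (display) space.
        return UnityObjectToClipPos(position);
    }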

 

The fragment program requires semantics as well. In this case, we have to indicate where the final color should be written to. We use SV_TARGET, which is the default shader target. This is the frame buffer, which contains the image that we are generating.

But wait, the output of the vertex program is used as input for the fragment program. This suggests that the fragment program should get a parameter that matches the vertex program's output.
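
So a matching fragment program might take the interpolated position as input and, for now, simply write a constant color to the render target:

    float4 MyFragmentProgram (float4 position : SV_POSITION) : SV_TARGET {
        // Solid black for every covered pixel, just to satisfy the semantics.
        return 0;
    }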

 

The GPU creates images by rasterizing triangles. It takes three processed vertices and interpolates between them. For every pixel covered by the triangle, it invokes the fragment program, passing along the interpolated data.

(Figure: Interpolating vertex data.)

 

We're not working with texture coordinates, so why TEXCOORD0?

There are no generic semantics for interpolated data. Everyone just uses the texture coordinate semantics for everything that's interpolated and is not the vertex position: TEXCOORD0, TEXCOORD1, TEXCOORD2, and so on. It's done for compatibility reasons.

There are also special color semantics, but those are rarely used and they're not available on all platforms.

 

The parameter names of the vertex and fragment functions do not need to match. It's all about the semantics.
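
For instance, in this hypothetical sketch the vertex program writes the object-space position to a TEXCOORD0 output, and the fragment program reads it back under a completely different name; only the shared semantic links them:

    float4 MyVertexProgram (
        float4 position : POSITION,
        out float3 localPosition : TEXCOORD0
    ) : SV_POSITION {
        localPosition = position.xyz;
        return UnityObjectToClipPos(position);
    }

    float4 MyFragmentProgram (
        float4 screenPosition : SV_POSITION,
        float3 interpolated : TEXCOORD0
    ) : SV_TARGET {
        // Visualize the interpolated data as a color.
        return float4(interpolated, 1);
    }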

 

Do you think that the parameter lists of our programs look messy? It will only get worse as we pass more and more data between them. As the vertex output should match the fragment input, it would be convenient if we could define the parameter list in one place. Fortunately, we can do so.
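
One way to do that is with a struct that both programs share; the names Interpolators and localPosition below are assumptions, the semantics are what matter:

    struct Interpolators {
        float4 position : SV_POSITION;
        float3 localPosition : TEXCOORD0;
    };

    Interpolators MyVertexProgram (float4 position : POSITION) {
        Interpolators i;
        i.localPosition = position.xyz;
        i.position = UnityObjectToClipPos(position);
        return i;
    }

    float4 MyFragmentProgram (Interpolators i) : SV_TARGET {
        return float4(i.localPosition, 1);
    }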

 

The default value is a string referring to one of Unity's default textures: white, black, or gray.

The convention is to name the main texture _MainTex, so we'll use that. This also enables you to use the convenient Material.mainTexture property to access it via a script, in case you need to.
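
A sketch of how this might look: the property goes in the shader's Properties block, and a sampler2D variable with the same name gives the programs access to it, with the interpolator struct now assumed to carry the mesh's UV coordinates instead of the local position:

    Properties {
        _MainTex ("Texture", 2D) = "white" {}
    }

    // ...and inside the CGPROGRAM block:

    struct Interpolators {
        float4 position : SV_POSITION;
        float2 uv : TEXCOORD0;
    };

    sampler2D _MainTex;

    float4 MyFragmentProgram (Interpolators i) : SV_TARGET {
        // Sample the texture with the interpolated UV coordinates.
        return tex2D(_MainTex, i.uv);
    }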

 

After we added a texture property to our shader, the material inspector didn't just add a texture field. It also added tiling and offset controls. However, changing these 2D vectors currently has no effect.

The tiling vector is used to scale the texture, so it is (1, 1) by default. It is stored in the XY portion of the variable. To use it, simply multiply it with the UV coordinates. This can be done either in the vertex shader or the fragment shader. It makes sense to do it in the vertex shader, so the multiplication happens once per vertex instead of once per fragment.
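
Unity exposes the tiling and offset to the shader as a float4 named after the texture with an _ST suffix, so for _MainTex that is _MainTex_ST, with tiling in xy and offset in zw. A sketch of applying it in the vertex program, reusing the Interpolators struct from the previous sketch (the VertexData struct is an assumption; TRANSFORM_TEX is UnityCG's macro for the same operation):

    struct VertexData {
        float4 position : POSITION;
        float2 uv : TEXCOORD0;
    };

    float4 _MainTex_ST; // xy = tiling, zw = offset, filled in by Unity

    Interpolators MyVertexProgram (VertexData v) {
        Interpolators i;
        i.position = UnityObjectToClipPos(v.position);
        i.uv = v.uv * _MainTex_ST.xy + _MainTex_ST.zw;
        // Equivalent shorthand: i.uv = TRANSFORM_TEX(v.uv, _MainTex);
        return i;
    }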

 

Does the wrap mode matter when staying within the 0–1 range?

It matters when you have UV coordinates that touch the 0 and 1 boundaries. When using bilinear or trilinear filtering, adjacent pixels are interpolated while sampling the texture. This is fine for pixels in the middle of the texture. But what are the neighboring pixels of those that lie on the edge? The answer depends on the wrap mode.

When clamped, pixels on the edge are blended with themselves. This produces a tiny region where the pixels don't blend, which is not noticeable.

When repeated, pixels on the edge are blended with the other side of the texture. If the sides are not similar, you'll notice a bit of the opposite side bleeding through the edge. Zoom in on a corner of a quad with the test texture on it to see the difference.

(Figure: Tiling on the edge.)

 

The smaller the texture appears on the display, the smaller a version of it should be used.

 

Which mipmap level gets selected is based on the worst of the two dimensions.

Anisotropic filtering mitigates this by decoupling the dimensions. Besides uniformly scaling down the texture, it also provides versions that are scaled by different amounts in each dimension. So you don't just have a mipmap for 256x256, but also for 256x128, 256x64, and so on.

(Figure: Without and with anisotropic filtering.)

Reposted from: https://my.oschina.net/u/918889/blog/1825319