Unity Standard Shader Series 1 -- ForwardBase Source Code Analysis

A walkthrough of the source code of the Unity Standard Shader's ForwardBase pass.


Vertex Processing

First, let's look at the data structures used by the vertex function, as well as its key helper functions.

Input structure - VertexInput

struct VertexInput
{
    float4 vertex   : POSITION;
    half3 normal    : NORMAL;
    float2 uv0      : TEXCOORD0;
    float2 uv1      : TEXCOORD1;
    float2 uv2      : TEXCOORD2;
    half4 tangent   : TANGENT;
};

Output structure - VertexOutputForwardBase

struct VertexOutputForwardBase
{
    UNITY_POSITION(pos); // float4 pos : SV_POSITION. The SV_POSITION and VPOS semantics carry fragment
                         // (screen-space) coordinates; the GPU uses the value in SV_POSITION to derive the
                         // screen pixel position from the clip-space position. If we used the plain POSITION
                         // semantic instead, we would have to perform the perspective divide
                         // (dividing by the w component) manually.
    float4 tex                            : TEXCOORD0;
    float4 eyeVec                         : TEXCOORD1;    // eyeVec.xyz | fogCoord
    float4 tangentToWorldAndPackedData[3] : TEXCOORD2;    // [3x3:tangentToWorld | 1x3:viewDirForParallax or worldPos]
    half4 ambientOrLightmapUV             : TEXCOORD5;    // SH or Lightmap UV
    // next ones would not fit into SM2.0 limits, but they are always for SM3.0+
    float3 posWorld                       : TEXCOORD8;
};

Vertex function - vertForwardBase

VertexOutputForwardBase vertForwardBase (VertexInput v)
{
    UNITY_SETUP_INSTANCE_ID(v);                 // set the instance ID
    VertexOutputForwardBase o;
    UNITY_INITIALIZE_OUTPUT(VertexOutputForwardBase, o);    // zero-initialize the output data
    UNITY_TRANSFER_INSTANCE_ID(v, o);           // copy the instance ID into o
    UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);   // stereo rendering; used for VR, games can ignore it

    float4 posWorld = mul(unity_ObjectToWorld, v.vertex);
    // Should the world position be packed into the tangent-to-world matrix?
    #if UNITY_REQUIRE_FRAG_WORLDPOS
        #if UNITY_PACK_WORLDPOS_WITH_TANGENT
            o.tangentToWorldAndPackedData[0].w = posWorld.x;
            o.tangentToWorldAndPackedData[1].w = posWorld.y;
            o.tangentToWorldAndPackedData[2].w = posWorld.z;
        #else
            o.posWorld = posWorld.xyz;
        #endif
    #endif
    o.pos = UnityObjectToClipPos(v.vertex);

    // Initialize the UV data: xy is always the first UV set; zw holds either the first or the
    // second UV set depending on the material settings -- UnityStandardInput.cginc
    o.tex = TexCoords(v);
    // In this cginc: normalized here for SM2.0; for SM3.0+ the normalization is deferred to the fragment shader
    o.eyeVec.xyz = NormalizePerVertexNormal(posWorld.xyz - _WorldSpaceCameraPos);
    float3 normalWorld = UnityObjectToWorldNormal(v.normal);
    // _TANGENT_TO_WORLD: do we need the tangent-space conversion? Per UnityStandardInput.cginc
    // it is required when there is a normal map, a directional lightmap, or a parallax map.
    #ifdef _TANGENT_TO_WORLD
        float4 tangentWorld = float4(UnityObjectToWorldDir(v.tangent.xyz), v.tangent.w);
        // UnityStandardUtils.cginc: build the tangent-space-to-world-space matrix
        float3x3 tangentToWorld = CreateTangentToWorldPerVertex(normalWorld, tangentWorld.xyz, tangentWorld.w);
        o.tangentToWorldAndPackedData[0].xyz = tangentToWorld[0];
        o.tangentToWorldAndPackedData[1].xyz = tangentToWorld[1];
        o.tangentToWorldAndPackedData[2].xyz = tangentToWorld[2];
    #else
        // Without _TANGENT_TO_WORLD, only the world-space normal is stored.
        o.tangentToWorldAndPackedData[0].xyz = 0;
        o.tangentToWorldAndPackedData[1].xyz = 0;
        o.tangentToWorldAndPackedData[2].xyz = normalWorld;
    #endif

    // We need this for shadow receiving
    UNITY_TRANSFER_LIGHTING(o, v.uv1);

    // In this cginc: computes the ambient term. If neither static nor dynamic lightmaps are
    // enabled, a spherical-harmonics evaluation returns a color; otherwise UVs are returned:
    // xy -> static lightmap UV, zw -> dynamic lightmap UV
    o.ambientOrLightmapUV = VertexGIForward(v, posWorld, normalWorld);

    #ifdef _PARALLAXMAP
        // "rotation" is the object-space-to-tangent-space transform, provided by the
        // TANGENT_SPACE_ROTATION macro defined in UnityCG.cginc
        TANGENT_SPACE_ROTATION;
        // Transform the view direction into tangent space and store it
        half3 viewDirForParallax = mul (rotation, ObjSpaceViewDir(v.vertex));
        o.tangentToWorldAndPackedData[0].w = viewDirForParallax.x;
        o.tangentToWorldAndPackedData[1].w = viewDirForParallax.y;
        o.tangentToWorldAndPackedData[2].w = viewDirForParallax.z;
    #endif

    // Fog handling
    UNITY_TRANSFER_FOG_COMBINED_WITH_EYE_VEC(o, o.pos);
    return o;
}

Lines that are only comments, or that are straightforward, are skipped below; let's go through the interesting calls one by one.

  1. o.tex = TexCoords(v);

    float4 TexCoords(VertexInput v)
    {
        float4 texcoord;
        texcoord.xy = TRANSFORM_TEX(v.uv0, _MainTex); // Always source from uv0
        texcoord.zw = TRANSFORM_TEX(((_UVSec == 0) ? v.uv0 : v.uv1), _DetailAlbedoMap);
        return texcoord;
    }

Initializes the UV data. _UVSec is declared in UnityStandardInput.cginc and is assigned from the material panel.
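The TRANSFORM_TEX macro applies the texture's Tiling and Offset values from the material panel: it expands to `uv * name_ST.xy + name_ST.zw`. A minimal CPU-side sketch in Python (the function name is ours, not Unity's):

```python
# Minimal sketch of what TRANSFORM_TEX(uv, _MainTex) expands to:
# uv * _MainTex_ST.xy + _MainTex_ST.zw (tiling in .xy, offset in .zw).
def transform_tex(uv, st):
    """Apply a texture's tiling (st[0], st[1]) and offset (st[2], st[3]) to a UV pair."""
    return (uv[0] * st[0] + st[2], uv[1] * st[1] + st[3])

# With tiling (2, 2) and offset (0.5, 0.0), uv (0.25, 0.5) maps to (1.0, 1.0).
print(transform_tex((0.25, 0.5), (2.0, 2.0, 0.5, 0.0)))  # -> (1.0, 1.0)
```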


  2. o.ambientOrLightmapUV = VertexGIForward(v, posWorld, normalWorld);

    inline half4 VertexGIForward(VertexInput v, float3 posWorld, half3 normalWorld)
    {
        half4 ambientOrLightmapUV = 0;
        // Static lightmaps
        // LIGHTMAP_ON ==> use the static lightmap
        #ifdef LIGHTMAP_ON
            ambientOrLightmapUV.xy = v.uv1.xy * unity_LightmapST.xy + unity_LightmapST.zw;
            ambientOrLightmapUV.zw = 0;
        // UNITY_SHOULD_SAMPLE_SH: the dynamic model being rendered uses SH for its indirect
        // diffuse term. It is defined roughly as
        //   #define UNITY_SHOULD_SAMPLE_SH (defined(LIGHTMAP_OFF) && defined(DYNAMICLIGHTMAP_OFF))
        // i.e. the SH diffuse term is only fetched when neither static nor dynamic lightmaps are in use.
        #elif UNITY_SHOULD_SAMPLE_SH
            #ifdef VERTEXLIGHT_ON
                // VERTEXLIGHT_ON: accumulate the non-important point lights (vertex lights; in the
                // Light settings, Important vs Not Important selects pixel vs vertex lighting)
                ambientOrLightmapUV.rgb = Shade4PointLights (
                    unity_4LightPosX0, unity_4LightPosY0, unity_4LightPosZ0,
                    unity_LightColor[0].rgb, unity_LightColor[1].rgb, unity_LightColor[2].rgb, unity_LightColor[3].rgb,
                    unity_4LightAtten0, posWorld, normalWorld);
            #endif
            // Spherical-harmonics evaluation --- UnityStandardUtils.cginc
            ambientOrLightmapUV.rgb = ShadeSHPerVertex (normalWorld, ambientOrLightmapUV.rgb);
        #endif
        #ifdef DYNAMICLIGHTMAP_ON
            ambientOrLightmapUV.zw = v.uv2.xy * unity_DynamicLightmapST.xy + unity_DynamicLightmapST.zw;
        #endif
        return ambientOrLightmapUV;
    }
    // Per-vertex spherical-harmonics function
    half3 ShadeSHPerVertex (half3 normal, half3 ambient)
    {
        #if UNITY_SAMPLE_FULL_SH_PER_PIXEL
            // Completely per-pixel
            // nothing to do here
        #elif (SHADER_TARGET < 30) || UNITY_STANDARD_SIMPLE
            // Completely per-vertex -- the path taken on SM2.0 (low-end) hardware
            ambient += max(half3(0,0,0), ShadeSH9 (half4(normal, 1.0)));
        #else
            // L2 per-vertex, L0..L1 & gamma-correction per-pixel
            // NOTE: SH data is always in Linear AND calculation is split between vertex & pixel
            // Convert ambient to Linear and do final gamma-correction at the end (per-pixel)
            ambient = GammaToLinearSpace (ambient);
            ambient += SHEvalLinearL2 (half4(normal, 1.0));     // no max since this is only L2 contribution
        #endif
        return ambient;
    }

First, we need to understand how Unity classifies its lights: how they are divided into pixel lights, vertex lights, and SH lights (the manual covers this). From the definition of UNITY_SHOULD_SAMPLE_SH above, we can see the key rule: SH evaluation only runs when neither static nor dynamic lightmaps are enabled; if either is on, no SH is computed. In other words, when lightmaps are off, ambientOrLightmapUV.rgb stores a color; when they are on, xy stores the static lightmap UV (from the vertex input uv1) and zw stores the dynamic lightmap UV (from uv2).
So what is the spherical-harmonics function for? It approximates ambient and indirect lighting; the SH coefficients carry contributions from light probes and ambient SH lighting, and you can trace their assignment if you are interested. (You will also find the SH function used once more in the fragment shader. Why? It depends on the shader target: on 2.0 the SH term is evaluated per vertex, on 3.0 it is evaluated per fragment -- 3.0 can afford the extra cost.)
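To make the vertex-light path concrete, here is a hedged Python sketch of the kind of accumulation Shade4PointLights performs: for each non-important light, color * saturate(N.L) * distance attenuation. The helper name and the exact attenuation shape are illustrative, not copied from UnityCG.cginc:

```python
import math

# Hedged per-vertex point-light sketch in the spirit of Shade4PointLights:
# accumulate color * saturate(N.L) * attenuation per light, with attenuation
# falling off as 1 / (1 + atten_factor * distanceSquared).
def shade_point_lights(world_pos, normal, lights):
    """lights: list of (light_pos, color_rgb, atten_factor). normal is assumed unit length."""
    result = [0.0, 0.0, 0.0]
    for light_pos, color, atten_factor in lights:
        to_light = [l - p for l, p in zip(light_pos, world_pos)]
        length_sq = sum(c * c for c in to_light)
        inv_len = 1.0 / math.sqrt(max(length_sq, 1e-6))  # avoid division by zero
        ndotl = max(0.0, sum(n * t * inv_len for n, t in zip(normal, to_light)))
        atten = 1.0 / (1.0 + length_sq * atten_factor)
        result = [r + c * ndotl * atten for r, c in zip(result, color)]
    return result

# A white light directly above an upward-facing surface, 1 unit away, no falloff:
# it contributes exactly N.L = 1.
print(shade_point_lights([0, 0, 0], [0, 1, 0], [([0, 1, 0], [1.0, 1.0, 1.0], 0.0)]))  # -> [1.0, 1.0, 1.0]
```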


  1. tangentToWorldAndPackedData is a 3x4 matrix. The first three rows and columns hold the tangent-space-to-world-space transform. When UNITY_REQUIRE_FRAG_WORLDPOS and UNITY_PACK_WORLDPOS_WITH_TANGENT are both defined, the world position is packed into the fourth column of each row; otherwise it goes directly into posWorld. When _PARALLAXMAP is defined, the tangent-space view direction for the height map is also stored in the fourth column. So what if both are defined -- wouldn't data be lost? It cannot happen: the two cases are mutually exclusive. If you don't believe it, note the comment below: on mobile platforms worldPos is not packed into this structure, because mobile interpolators use mediump precision. Presumably keeping it in posWorld lets the fragment shader consume it at the intended precision, whereas packing it into the matrix would prevent that precision conversion.

    // Should we pack worldPos along tangent (saving an interpolator)
    // We want to skip this on mobile platforms, because worldpos gets packed into mediump
  2. So what has the vertex stage actually done? To sum up, it prepared the data: position, UVs, and the tangent-space transform. When static/dynamic lightmaps are enabled, their UVs are set up; when they are not, the ambient color is computed (SH at the vertex for target 2.0, SH at the fragment for 3.0). In practice, the default Standard Shader handles far more cases than most mobile games will ever use, so a slimmed-down standard shader of your own is well worth having.

Fragment Processing

First, the data structures. The input is the vertex shader's output, VertexOutputForwardBase, but a few other structures need introducing first:

Intermediate data - FragmentCommonData

struct FragmentCommonData
{
    half3 diffColor, specColor;
    // Note: smoothness & oneMinusReflectivity for optimization purposes, mostly for DX9 SM2.0 level.
    // Most of the math is being done on these (1-x) values, and that saves a few precious ALU slots.
    half oneMinusReflectivity, smoothness;
    float3 normalWorld;
    float3 eyeVec;
    half alpha;
    float3 posWorld;
#if UNITY_STANDARD_SIMPLE
    half3 reflUVW;              // the reflection vector
#endif
#if UNITY_STANDARD_SIMPLE
    half3 tangentSpaceNormal;
#endif
};
// UNITY_STANDARD_SIMPLE is derived from UNITY_NO_FULL_STANDARD_SHADER, which can be enabled per
// shader as needed. We won't cover the simple variant here: for efficiency it moves several
// calculations to the vertex stage and uses approximations, which is why the two extra fields
// above only exist in that path. (Also recall that in the VertexOutputForwardBase structure,
// UNITY_REQUIRE_FRAG_WORLDPOS decides whether the world position is needed per fragment.)
// We only need to know where these variables live in the alternative paths, which will be pointed
// out as we go. Next, let's invite the GI family of structures on stage.

GI data structures – UnityGI, UnityIndirect, UnityLight, UnityGIInput

struct UnityGI
{
    UnityLight light;
    UnityIndirect indirect;
};
struct UnityIndirect
{
    half3 diffuse;
    half3 specular;
};
struct UnityLight
{
    half3 color;
    half3 dir;
    half  ndotl; // Deprecated; kept for compatibility, it is no longer used
};
// So UnityGI stores one light's color and direction, plus the indirect diffuse and specular colors. That's all.

struct UnityGIInput
{
    UnityLight light; // the per-pixel light, set by the engine. (If unsure what that is, see the
                      // pixel / vertex / SH light classification discussed above.)

    float3 worldPos;
    half3 worldViewDir;
    half atten;
    half3 ambient;
    float4 lightmapUV; // .xy = static lightmap UV, .zw = dynamic lightmap UV

    // UNITY_SPECCUBE_BLENDING: reflection-probe blending. When enabled, a surface influenced by
    // several reflection probes transitions smoothly between them instead of popping
    // (MeshRenderer -> Lighting -> Reflection Probes settings).
    // UNITY_SPECCUBE_BOX_PROJECTION: box projection. When enabled, the reflection sample takes the
    // probe box and the surface position/normal into account, so the reflection shifts as the
    // object moves (ReflectionProbe -> Runtime settings -> Box Projection).
    // UNITY_ENABLE_REFLECTION_BUFFERS: literally, whether reflection buffers are enabled
    // (used with deferred rendering).
    float4 boxMin[2];
    // With box projection on, we need the box size and position
    float4 boxMax[2];
    float4 probePosition[2];
    // HDR cubemap properties, use to decompress HDR texture
    float4 probeHDR[2];
};
// This is the input data from which the UnityGI we just saw is computed.

That covers the fragment-stage data structures. Since this is a source-code walkthrough we won't go into the theory here; the principles of PBR deserve their own article later. All right, here's the code:

Fragment function - fragForwardBaseInternal

half4 fragForwardBaseInternal (VertexOutputForwardBase i)
{
    // LOD cross-fade -- defined in UnityCG.cginc.
    // UNITY_APPLY_DITHER_CROSSFADE takes effect when the object has a LOD Group whose Fade Mode is
    // Cross Fade. It computes the object's screen-space coordinates, samples the _DitherMaskLOD2D
    // texture, and clips via alpha test, so the object gradually dissolves with distance from the camera.
    UNITY_APPLY_DITHER_CROSSFADE(i.pos.xy);
    // Initialize the interpolated data into a FragmentCommonData -- defined in this cginc
    FRAGMENT_SETUP(s)
    // GPU instancing setup
    UNITY_SETUP_INSTANCE_ID(i);
    // Stereo eye index -- defined in UnityInstancing.cginc
    UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i);
    // Main light -- fetches the light's direction and color from _WorldSpaceLightPos0 and _LightColor0
    UnityLight mainLight = MainLight ();
    // Light attenuation -- used for shadow calculation, in AutoLight.cginc
    UNITY_LIGHT_ATTENUATION(atten, i, s.posWorld);
    // Fetch the occlusion mask
    half occlusion = Occlusion(i.tex.xy);
    // Fetch GI -- remember the ambientOrLightmapUV computed above? The SH-based GI is resolved here
    UnityGI gi = FragmentGI (s, occlusion, i.ambientOrLightmapUV, atten, mainLight);
    half4 c = UNITY_BRDF_PBS (s.diffColor, s.specColor, s.oneMinusReflectivity, s.smoothness, s.normalWorld, -s.eyeVec, gi.light, gi.indirect);
    c.rgb += Emission(i.tex.xy);
    UNITY_EXTRACT_FOG_FROM_EYE_VEC(i);
    UNITY_APPLY_FOG(_unity_fogCoord, c.rgb);
    return OutputForward (c, s.alpha);
}


#define UNITY_APPLY_DITHER_CROSSFADE(vpos)  UnityApplyDitherCrossFade(vpos)
sampler2D _DitherMaskLOD2D;
void UnityApplyDitherCrossFade(float2 vpos)
{
    vpos /= 4; // the dither mask texture is 4x4
    vpos.y = frac(vpos.y) * 0.0625 /* 1/16 */ + unity_LODFade.y; // quantized lod fade by 16 levels
    clip(tex2D(_DitherMaskLOD2D, vpos).a - 0.5);
}

When LOD cross-fading is enabled (LOD Group -- Fade Mode -- Cross Fade), UNITY_APPLY_DITHER_CROSSFADE is defined as the function above, whose parameter vpos is the screen-space position (note the difference between the SV_POSITION and POSITION semantics). By sampling _DitherMaskLOD2D it implements a dithered LOD transition: the clip removes the pixels that should be faded out at the current LOD fade level.
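To see why the clip produces a smooth dissolve, here is a CPU-side Python sketch of the same idea. The 4x4 Bayer-style threshold matrix stands in for Unity's _DitherMaskLOD2D texture, whose actual contents are not shown in this article:

```python
# An ordered 4x4 dither pattern (a stand-in for _DitherMaskLOD2D) decides per
# pixel whether the fading LOD survives the clip() at a given fade level in [0, 1].
BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither_survives(x, y, fade):
    """True if the pixel at screen position (x, y) passes the alpha clip at this fade level."""
    threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0
    return fade > threshold

# At fade 1 every pixel survives; at fade 0 none do; in between, a proportional
# scattering of pixels survives, which reads as a gradual dissolve.
full = sum(dither_survives(x, y, 1.0) for y in range(4) for x in range(4))
none = sum(dither_survives(x, y, 0.0) for y in range(4) for x in range(4))
print(full, none)  # -> 16 0
```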

Fragment data initialization - FRAGMENT_SETUP

#define FRAGMENT_SETUP(x) FragmentCommonData x = \
    FragmentSetup(i.tex, i.eyeVec.xyz, IN_VIEWDIR4PARALLAX(i), i.tangentToWorldAndPackedData, IN_WORLDPOS(i));
// Analysis: i is the VertexOutputForwardBase data.
// What is IN_VIEWDIR4PARALLAX for? Remember where the vertex function stored the world position
// and the tangent-space view direction? This is where they are read back:
// #define IN_VIEWDIR4PARALLAX(i) NormalizePerPixelNormal(half3(i.tangentToWorldAndPackedData[0].w,i.tangentToWorldAndPackedData[1].w,i.tangentToWorldAndPackedData[2].w))
// Implementation
inline FragmentCommonData FragmentSetup (inout float4 i_tex, float3 i_eyeVec, half3 i_viewDirForParallax, float4 tangentToWorld[3], float3 i_posWorld)
{
    // In UnityStandardInput.cginc -- apply the height (parallax) map, offsetting the UVs
    i_tex = Parallax(i_tex, i_viewDirForParallax);
    // In UnityStandardInput.cginc -- fetch the alpha value
    half alpha = Alpha(i_tex.xy);
    #if defined(_ALPHATEST_ON)
        clip (alpha - _Cutoff);
    #endif
    // Initialize the FragmentCommonData values -- in this cginc
    FragmentCommonData o = UNITY_SETUP_BRDF_INPUT (i_tex);
    // World-space normal
    o.normalWorld = PerPixelWorldNormal(i_tex, tangentToWorld);
    // View direction -- normalized here for target 3.0; for 2.0 it was normalized at the vertex
    o.eyeVec = NormalizePerPixelNormal(i_eyeVec);
    o.posWorld = i_posWorld;

    // NOTE: shader relies on pre-multiply alpha-blend (_SrcBlend = One, _DstBlend = OneMinusSrcAlpha)
    // In UnityStandardUtils.cginc -- compute the effect of alpha on the diffuse color
    o.diffColor = PreMultiplyAlpha (o.diffColor, alpha, o.oneMinusReflectivity, /*out*/ o.alpha);
    return o;
}

This fills in the FragmentCommonData structure we just introduced. Next we analyze it step by step, because each of the following steps is where it gets interesting.

  1. First, the height map. The Height Map slot appears in the Standard material panel (in code: _ParallaxMap); once assigned, a strength slider appears (in code: _Parallax). The map is used to fake depth: the sampled UVs are offset to simulate a layered, parallax effect.

    // Defined in UnityStandardInput.cginc. Note that this viewDir is the tangent-space view direction.
    float4 Parallax (float4 texcoords, half3 viewDir)
    {
    // With no height map, or a shader target below 3.0, no calculation is performed
    #if !defined(_PARALLAXMAP) || (SHADER_TARGET < 30)
        // Disable parallax on pre-SM3.0 shader target models
        return texcoords;
    #else
        // Sample the height map; only the g channel is used, so if a project needs to pack the
        // height map together with other data, this is the place to do it
        half h = tex2D (_ParallaxMap, texcoords.xy).g;
        float2 offset = ParallaxOffset1Step (h, _Parallax, viewDir);
        // Offset the original UVs
        return float4(texcoords.xy + offset, texcoords.zw + offset);
    #endif
    }

    // ParallaxOffset; note that viewDir is in tangent space, built from [tangent.xyz, binormal, normal].
    // The height (_Parallax) range is 0-0.08.
    half2 ParallaxOffset (half h, half height, half3 viewDir)
    {
        // h = (h - 0.5) * height: recenter the height sample around zero
        h = h * height - height/2.0;
        half3 v = normalize(viewDir);
        // The smaller the angle between viewDir and the normal, the closer (v.xy / v.z) is to 0;
        // the 0.42 below is just an adjustment bias
        v.z += 0.42;
        // h is driven by the artist-authored height map; (v.xy / v.z) depends on the angle between
        // the view direction and the normal. Their product gives the UV offset.
        return h * (v.xy / v.z);
    }
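The offset math above is easy to verify on the CPU. A Python sketch of ParallaxOffset (the function name and structure are ours):

```python
import math

# CPU sketch of the parallax offset: the g-channel height sample is recentered
# around zero, and the tangent-space view direction shifts the UV toward the eye.
def parallax_offset(h, height, view_dir):
    """h: height-map sample in [0,1]; height: the _Parallax scale; view_dir: tangent-space view vector."""
    h = h * height - height / 2.0          # recenter: h now lies in [-height/2, +height/2]
    length = math.sqrt(sum(c * c for c in view_dir))
    v = [c / length for c in view_dir]     # normalize
    v[2] += 0.42                           # bias to tame grazing angles
    return (h * v[0] / v[2], h * v[1] / v[2])

# Looking straight down the normal (view_dir aligned with z), v.xy is 0, so the offset is 0.
print(parallax_offset(1.0, 0.08, [0.0, 0.0, 1.0]))  # -> (0.0, 0.0)
```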
  2. Alpha fetch

    // _Color is the color on the Standard panel.
    half Alpha(float2 uv)
    {
    // _SMOOTHNESS_TEXTURE_ALBEDO_CHANNEL_A is controlled by the panel's Smoothness Source:
    // on when it is Albedo Alpha, off when it is Metallic Alpha
    #if defined(_SMOOTHNESS_TEXTURE_ALBEDO_CHANNEL_A)
        return _Color.a;
    #else
        return tex2D(_MainTex, uv).a * _Color.a;
    #endif
    }
  3. FragmentCommonData o = UNITY_SETUP_BRDF_INPUT (i_tex);
    This line builds the BRDF data and completes the creation of the FragmentCommonData mentioned above. By default UNITY_SETUP_BRDF_INPUT is defined as SpecularSetup, but the Standard shader uses MetallicSetup, and there is also RoughnessSetup. These correspond to the three PBR authoring workflows: specular, metallic, and roughness. Let's look at MetallicSetup, where the metallic value describes, for a beam of white light (1,1,1,1) hitting the object, what its specular and diffuse colors look like as a function of metallicness and albedo:

    // Input is the uv
    inline FragmentCommonData MetallicSetup (float4 i_tex)
    {
        // x = metallic, y = smoothness, defined in UnityStandardInput.cginc
        half2 metallicGloss = MetallicGloss(i_tex.xy);
        half metallic = metallicGloss.x;
        half smoothness = metallicGloss.y; // this is 1 minus the square root of real roughness m.
        half oneMinusReflectivity;
        half3 specColor;
        // Derive the diffuse color, specular color, and diffuse factor from the metallic value.
        // Input: albedo color and metallic; outputs: specColor and 1-reflectivity
        half3 diffColor = DiffuseAndSpecularFromMetallic (Albedo(i_tex), metallic, /*out*/ specColor, /*out*/ oneMinusReflectivity);
        FragmentCommonData o = (FragmentCommonData)0;
        o.diffColor = diffColor;
        o.specColor = specColor;
        o.oneMinusReflectivity = oneMinusReflectivity;
        o.smoothness = smoothness;
        return o;
    }
    // Fetch metallic and smoothness. We meet an old friend from the alpha computation:
    // _SMOOTHNESS_TEXTURE_ALBEDO_CHANNEL_A -- so the albedo alpha can drive smoothness. Note also
    // that _GlossMapScale and _Glossiness never coexist: when the smoothness comes from a texture
    // alpha channel, it is scaled by _GlossMapScale; when there is no metallic map and no
    // albedo-alpha source, the plain _Glossiness slider represents the smoothness.
    half2 MetallicGloss(float2 uv)
    {
        half2 mg;
    #ifdef _METALLICGLOSSMAP
        #ifdef _SMOOTHNESS_TEXTURE_ALBEDO_CHANNEL_A
            mg.r = tex2D(_MetallicGlossMap, uv).r;
            mg.g = tex2D(_MainTex, uv).a;
        #else
            mg = tex2D(_MetallicGlossMap, uv).ra;
        #endif
        mg.g *= _GlossMapScale;
    #else
        mg.r = _Metallic;
        #ifdef _SMOOTHNESS_TEXTURE_ALBEDO_CHANNEL_A
            mg.g = tex2D(_MainTex, uv).a * _GlossMapScale;
        #else
            mg.g = _Glossiness;
        #endif
    #endif
        return mg;
    }
    inline half3 DiffuseAndSpecularFromMetallic (half3 albedo, half metallic, out half3 specColor, out half oneMinusReflectivity)
    {
        // unity_ColorSpaceDielectricSpec is defined in UnityCG.cginc; its rgb stores the dielectric
        // (non-metal) F0 color. This constant and the albedo are interpolated by the metallic value,
        // so the F0 color fed into the Fresnel term blends the dielectric constant with the albedo.
        specColor = lerp (unity_ColorSpaceDielectricSpec.rgb, albedo, metallic);
        // Compute the diffuse factor
        oneMinusReflectivity = OneMinusReflectivityFromMetallic(metallic);
        return albedo * oneMinusReflectivity;
    }
    inline half OneMinusReflectivityFromMetallic(half metallic)
    {
        // By energy conservation, diffuse ratio = 1 - reflectivity, and
        // reflectivity = lerp(dielectricSpec, 1, metallic), so
        //   1-reflectivity = 1-lerp(dielectricSpec, 1, metallic) = lerp(1-dielectricSpec, 0, metallic)
        // (1-dielectricSpec) is stored in unity_ColorSpaceDielectricSpec.a (call it alpha), so
        //   1-reflectivity = lerp(alpha, 0, metallic) = alpha + metallic*(0 - alpha)
        //                  = alpha - metallic * alpha
        half oneMinusDielectricSpec = unity_ColorSpaceDielectricSpec.a;
        return oneMinusDielectricSpec - metallic * oneMinusDielectricSpec;
    }
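The metallic split is plain lerp math and can be checked numerically. A Python sketch, using 0.04 as an illustrative linear-space dielectric F0 (the real value lives in unity_ColorSpaceDielectricSpec):

```python
# Sketch of the metallic-workflow split. The dielectric F0 constant below is an
# illustrative stand-in for unity_ColorSpaceDielectricSpec (roughly 0.04 in linear space).
DIELECTRIC_SPEC_RGB = (0.04, 0.04, 0.04)
DIELECTRIC_SPEC_A = 1.0 - 0.04

def lerp(a, b, t):
    return a + (b - a) * t

def diffuse_and_specular_from_metallic(albedo, metallic):
    # specColor = lerp(dielectric F0, albedo, metallic)
    spec_color = tuple(lerp(d, a, metallic) for d, a in zip(DIELECTRIC_SPEC_RGB, albedo))
    # oneMinusReflectivity = alpha - metallic * alpha, with alpha = 1 - dielectricSpec
    one_minus_reflectivity = DIELECTRIC_SPEC_A - metallic * DIELECTRIC_SPEC_A
    diff_color = tuple(a * one_minus_reflectivity for a in albedo)
    return diff_color, spec_color, one_minus_reflectivity

# A fully metallic surface has no diffuse term: the albedo moves entirely into specular.
diff, spec, omr = diffuse_and_specular_from_metallic((1.0, 0.8, 0.2), 1.0)
print(diff, omr)  # -> (0.0, 0.0, 0.0) 0.0
```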
  4. PerPixelWorldNormal
    Computes the world-space normal.

     float3 PerPixelWorldNormal(float4 i_tex, float4 tangentToWorld[3])
     {
     // If a normal map is used, it needs to be sampled (and if a detail map is present, it is sampled here too)
     #ifdef _NORMALMAP
         half3 tangent = tangentToWorld[0].xyz;
         half3 binormal = tangentToWorld[1].xyz;
         half3 normal = tangentToWorld[2].xyz;
         #if UNITY_TANGENT_ORTHONORMALIZE // this macro is rarely defined; normally the interpolated basis is used as-is
             normal = NormalizePerPixelNormal(normal);
             // ortho-normalize Tangent
             tangent = normalize (tangent - normal * dot(tangent, normal));
             // recalculate Binormal
             half3 newB = cross(normal, tangent);
             binormal = newB * sign (dot (newB, binormal));
         #endif
         half3 normalTangent = NormalInTangentSpace(i_tex);
         float3 normalWorld = NormalizePerPixelNormal(tangent * normalTangent.x + binormal * normalTangent.y + normal * normalTangent.z); // @TODO: see if we can squeeze this normalize on SM2.0 as well
     #else
         float3 normalWorld = normalize(tangentToWorld[2].xyz);
     #endif
         return normalWorld;
     }
    /**** NormalInTangentSpace samples the normal map, and blends in the detail normal map if one is present. */
     half3 NormalInTangentSpace(float4 texcoords)
     {
         half3 normalTangent = UnpackScaleNormal(tex2D (_BumpMap, texcoords.xy), _BumpScale);
     // UNITY_ENABLE_DETAIL_NORMALMAP is enabled in ProjectSettings--Graphics--TierSettings--DetailNormalMap
     #if _DETAIL && defined(UNITY_ENABLE_DETAIL_NORMALMAP)
         // Sample the detail mask (only the a channel is used), then use it as the interpolation
         // factor between the base normal and the detail-map normal
         half mask = DetailMask(texcoords.xy);
         half3 detailNormalTangent = UnpackScaleNormal(tex2D (_DetailNormalMap, texcoords.zw), _DetailNormalMapScale);
         #if _DETAIL_LERP
             normalTangent = lerp(normalTangent, detailNormalTangent, mask);
         #else
             normalTangent = lerp(normalTangent, BlendNormals(normalTangent, detailNormalTangent), mask);
         #endif
     #endif
         return normalTangent;
     }
  5. PreMultiplyAlpha computes the effect of transparency on color

     inline half3 PreMultiplyAlpha (half3 diffColor, half alpha, half oneMinusReflectivity, out half outModifiedAlpha)
     {
         // _ALPHAPREMULTIPLY_ON is set by the shader panel's Rendering Mode (Transparent);
         // see the RenderMode editor code if this is unfamiliar
         #if defined(_ALPHAPREMULTIPLY_ON)
             // NOTE: shader relies on pre-multiply alpha-blend (_SrcBlend = One, _DstBlend = OneMinusSrcAlpha)
             // Transparency is applied to the diffuse color here. Because blending is
             // One / OneMinusSrcAlpha, we can multiply alpha in directly. Note that transparency
             // only affects the diffuse color, never the specular color.
             diffColor *= alpha;
             #if (SHADER_TARGET < 30)
                 // SM2.0: instruction count limitation
                 // Instead will sacrifice part of physically based transparency where amount Reflectivity is affecting Transparency
                 // SM2.0: uses unmodified alpha -- target 2.0 ignores the effect of reflectivity on transparency
                 outModifiedAlpha = alpha;
             #else
                 // Reflectivity 'removes' from the rest of components, including Transparency
                 // Physically, the stronger the reflection, the weaker the transparency and the more
                 // visible the highlight. When blending, 1-outAlpha = (1-alpha)*(1-reflectivity) is
                 // the destination blend factor: the destination color is attenuated by both
                 // transparency and reflectivity. Rearranging:
                 // outAlpha = 1-(1-alpha)*(1-reflectivity) = 1-(oneMinusReflectivity - alpha*oneMinusReflectivity)
                 //          = 1-oneMinusReflectivity + alpha*oneMinusReflectivity
                 outModifiedAlpha = 1-oneMinusReflectivity + alpha*oneMinusReflectivity;
             #endif
         #else
             outModifiedAlpha = alpha;
         #endif
         return diffColor;
     }
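The alpha rearrangement is worth sanity-checking. A Python sketch of PreMultiplyAlpha's two paths (the sm30 flag is our stand-in for the SHADER_TARGET < 30 branch):

```python
# Sketch of the premultiplied-alpha trick: with Blend One OneMinusSrcAlpha,
# alpha is baked into the diffuse color up front, and on SM3.0+ the output
# alpha is raised so that reflectivity "removes" transparency.
def premultiply_alpha(diff_color, alpha, one_minus_reflectivity, sm30=True):
    diff_color = tuple(c * alpha for c in diff_color)  # premultiply the diffuse term only
    if sm30:
        # outAlpha = 1 - (1 - alpha) * (1 - reflectivity), rearranged as in the shader
        out_alpha = 1.0 - one_minus_reflectivity + alpha * one_minus_reflectivity
    else:
        out_alpha = alpha  # SM2.0 keeps alpha unmodified
    return diff_color, out_alpha

# A fully reflective surface (one_minus_reflectivity = 0) stays opaque even at
# alpha 0: the specular reflection is never faded out.
print(premultiply_alpha((0.5, 0.5, 0.5), 0.0, 0.0))  # -> ((0.0, 0.0, 0.0), 1.0)
```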


That covers the simpler calculations. The shadow system deserves its own in-depth article later; the shadow handling for toon styles in particular is interesting.
In short: declare UNITY_LIGHTING_COORDS(6,7) in the VertexOutputForwardBase structure, call UNITY_TRANSFER_LIGHTING(o, v.uv1) in the vertex function, and finally use UNITY_LIGHT_ATTENUATION(atten, i, s.posWorld) in the fragment function to obtain a coefficient that combines attenuation and shadowing. Multiplying this coefficient into the final light color produces the shadow effect.

Occlusion mask fetch

Defined in UnityStandardInput.cginc:
half Occlusion(float2 uv)
{
#if (SHADER_TARGET < 30)
    // SM2.0: fetch the mask's g channel directly, with no strength control
    return tex2D(_OcclusionMap, uv).g;
#else
    half occ = tex2D(_OcclusionMap, uv).g;
    // Equivalent to lerp(1, occ, _OcclusionStrength); presumably written this way so it
    // compiles to fewer instructions
    return LerpOneTo (occ, _OcclusionStrength);
#endif
}

half LerpOneTo(half b, half t)
{
    half oneMinusT = 1 - t;
    return oneMinusT + b * t;
}
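LerpOneTo really is just lerp(1, occ, strength) rearranged into a single multiply-add; a quick Python check:

```python
# LerpOneTo(b, t) = lerp(1, b, t) rearranged as (1 - t) + b*t, which maps to one MAD.
def lerp_one_to(b, t):
    return (1.0 - t) + b * t

# Strength 0 disables occlusion entirely; strength 1 uses the sampled value as-is.
print(lerp_one_to(0.25, 0.0), lerp_one_to(0.25, 1.0))  # -> 1.0 0.25
```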


What is GI? Global Illumination.
Global light = direct light + indirect light + emission. The direct light is the main light we normally use (in ForwardBase there is exactly one). Indirect light is everything that is not direct: in Unity, indirect light = reflection probes + light probes + ambient light.

/*** The fragment-shader call site, with s : FragmentCommonData
UnityGI gi = FragmentGI (s, occlusion, i.ambientOrLightmapUV, atten, mainLight); */
inline UnityGI FragmentGI (FragmentCommonData s, half occlusion, half4 i_ambientOrLightmapUV, half atten, UnityLight light, bool reflections)
{
    UnityGIInput d;
    d.light = light;
    d.worldPos = s.posWorld;
    d.worldViewDir = -s.eyeVec;
    d.atten = atten;
    // Read the lightmap UVs, or use the SH-computed ambient color
    #if defined(LIGHTMAP_ON) || defined(DYNAMICLIGHTMAP_ON)
        d.ambient = 0;
        d.lightmapUV = i_ambientOrLightmapUV;
    #else
        d.ambient = i_ambientOrLightmapUV.rgb;
        d.lightmapUV = 0;
    #endif
    // Reflection probes
    d.probeHDR[0] = unity_SpecCube0_HDR;
    d.probeHDR[1] = unity_SpecCube1_HDR;
    #if defined(UNITY_SPECCUBE_BLENDING) || defined(UNITY_SPECCUBE_BOX_PROJECTION)
        d.boxMin[0] = unity_SpecCube0_BoxMin; // .w holds lerp value for blending
    #endif
    #ifdef UNITY_SPECCUBE_BOX_PROJECTION
        d.boxMax[0] = unity_SpecCube0_BoxMax;
        d.probePosition[0] = unity_SpecCube0_ProbePosition;
        d.boxMax[1] = unity_SpecCube1_BoxMax;
        d.boxMin[1] = unity_SpecCube1_BoxMin;
        d.probePosition[1] = unity_SpecCube1_ProbePosition;
    #endif

    if (reflections)
    {
        // Set up the environment-reflection data -- defined in UnityImageBasedLighting.cginc;
        // it just fills in the roughness and reflUVW values
        Unity_GlossyEnvironmentData g = UnityGlossyEnvironmentSetup(s.smoothness, -s.eyeVec, s.normalWorld, s.specColor);
        // Replace the reflUVW if it has been compute in Vertex shader. Note: the compiler will optimize the calcul in UnityGlossyEnvironmentSetup itself
        #if UNITY_STANDARD_SIMPLE
            g.reflUVW = s.reflUVW;
        #endif
        return UnityGlobalIllumination (d, occlusion, s.normalWorld, g);
    }
    else
    {
        // Without reflections -- UnityGlobalIllumination.cginc -- the indirect specular part is not computed.
        return UnityGlobalIllumination (d, occlusion, s.normalWorld);
    }
}
Unity_GlossyEnvironmentData UnityGlossyEnvironmentSetup(half Smoothness, half3 worldViewDir, half3 Normal, half3 fresnel0)
{
    Unity_GlossyEnvironmentData g;
    g.roughness /* perceptualRoughness */ = SmoothnessToPerceptualRoughness(Smoothness); // (1 - smoothness)
    g.reflUVW   = reflect(-worldViewDir, Normal);
    return g;
}
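The two fields g receives are simple to reproduce: perceptual roughness is 1 - smoothness, and reflUVW is the standard reflection formula. A Python sketch (with reflect implemented by hand, names ours):

```python
# The two values UnityGlossyEnvironmentSetup fills in: perceptual roughness
# (just 1 - smoothness) and the reflection vector used to sample the cubemap,
# reflect(i, n) = i - 2*dot(i, n)*n for an incident direction i and unit normal n.
def reflect(i, n):
    d = sum(a * b for a, b in zip(i, n))
    return tuple(a - 2.0 * d * b for a, b in zip(i, n))

def glossy_environment_setup(smoothness, world_view_dir, normal):
    roughness = 1.0 - smoothness  # SmoothnessToPerceptualRoughness
    refl_uvw = reflect([-c for c in world_view_dir], normal)
    return roughness, refl_uvw

# Viewing a floor (normal +y) at 45 degrees: the reflection leaves at 45 degrees
# on the other side of the normal.
print(glossy_environment_setup(0.5, (0.0, 1.0, 1.0), (0.0, 1.0, 0.0)))  # -> (0.5, (0.0, 1.0, -1.0))
```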
inline UnityGI UnityGlobalIllumination (UnityGIInput data, half occlusion, half3 normalWorld, Unity_GlossyEnvironmentData glossIn)
{
    UnityGI o_gi = UnityGI_Base(data, occlusion, normalWorld); // GI's light information and indirect diffuse color are computed here
    o_gi.indirect.specular = UnityGI_IndirectSpecular(data, occlusion, glossIn);
    return o_gi;
}

UnityGI_Base computes the light data for GI as well as the indirect diffuse color. Note the difference between this diffuse color and the diffuse color in FragmentCommonData: the latter is the surface albedo derived from metallic, while the one computed here is the diffuse contribution of indirect light.

inline UnityGI UnityGI_Base(UnityGIInput data, half occlusion, half3 normalWorld)
{
    //Initialize the GI data structure
    UnityGI o_gi;
    ResetUnityGI(o_gi);

    //For performance reasons, shadow blending can be handled here in GI:
    //when screen-space shadows are on (i.e. a shadow map is drawn) and the lightmap is on,
    //the realtime and baked shadows are mixed during GI
    #if defined(SHADOWS_SCREEN) && defined(LIGHTMAP_ON)
        //Sample the baked shadow (occlusion) value
        half bakedAtten = UnitySampleBakedOcclusion(data.lightmapUV.xy, data.worldPos);
        //Distance from the camera along the view direction
        float zDist = dot(_WorldSpaceCameraPos - data.worldPos, UNITY_MATRIX_V[2].xyz);
        //Compute the fade distance from that depth
        float fadeDist = UnityComputeShadowFadeDistance(data.worldPos, zDist);
        //Mix the realtime shadow with the baked shadow
        data.atten = UnityMixRealtimeAndBakedShadows(data.atten, bakedAtten, UnityComputeShadowFade(fadeDist));
    #endif

    o_gi.light = data.light;
    o_gi.light.color *= data.atten;

    //Spherical harmonics are only sampled when neither static nor dynamic lightmaps are in use
    #if UNITY_SHOULD_SAMPLE_SH
        o_gi.indirect.diffuse = ShadeSHPerPixel(normalWorld, data.ambient, data.worldPos);
    #endif

    #if defined(LIGHTMAP_ON)
        // Baked lightmaps -- decode the baked lightmap color
        half4 bakedColorTex = UNITY_SAMPLE_TEX2D(unity_Lightmap, data.lightmapUV.xy);
        half3 bakedColor = DecodeLightmap(bakedColorTex);

        #ifdef DIRLIGHTMAP_COMBINED
            fixed4 bakedDirTex = UNITY_SAMPLE_TEX2D_SAMPLER (unity_LightmapInd, unity_Lightmap, data.lightmapUV.xy);
            o_gi.indirect.diffuse += DecodeDirectionalLightmap (bakedColor, bakedDirTex, normalWorld);

            #if defined(LIGHTMAP_SHADOW_MIXING) && !defined(SHADOWS_SHADOWMASK) && defined(SHADOWS_SCREEN)
                //Remove the attenuation baked into the lightmap by the main light
                o_gi.indirect.diffuse = SubtractMainLightWithRealtimeAttenuationFromLightmap (o_gi.indirect.diffuse, data.atten, bakedColorTex, normalWorld);
            #endif
        #else // not directional lightmap
            o_gi.indirect.diffuse += bakedColor;

            #if defined(LIGHTMAP_SHADOW_MIXING) && !defined(SHADOWS_SHADOWMASK) && defined(SHADOWS_SCREEN)
                o_gi.indirect.diffuse = SubtractMainLightWithRealtimeAttenuationFromLightmap(o_gi.indirect.diffuse, data.atten, bakedColorTex, normalWorld);
            #endif
        #endif
    #endif

    #ifdef DYNAMICLIGHTMAP_ON
        // Dynamic (realtime GI) lightmaps
        fixed4 realtimeColorTex = UNITY_SAMPLE_TEX2D(unity_DynamicLightmap, data.lightmapUV.zw);
        half3 realtimeColor = DecodeRealtimeLightmap (realtimeColorTex);

        #ifdef DIRLIGHTMAP_COMBINED
            half4 realtimeDirTex = UNITY_SAMPLE_TEX2D_SAMPLER(unity_DynamicDirectionality, unity_DynamicLightmap, data.lightmapUV.zw);
            o_gi.indirect.diffuse += DecodeDirectionalLightmap (realtimeColor, realtimeDirTex, normalWorld);
        #else
            o_gi.indirect.diffuse += realtimeColor;
        #endif
    #endif

    o_gi.indirect.diffuse *= occlusion;
    return o_gi;
}
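The realtime/baked shadow mix fades from the realtime shadow near the camera to the baked shadow far away. The exact fade in Unity depends on `_LightShadowData`, so the sketch below is only an illustrative stand-in: `shadow_distance` and `fade_range` are hypothetical parameters, and the lerp-based mix is one plausible blending mode, not Unity's exact implementation:

```python
def compute_shadow_fade(z_dist, shadow_distance, fade_range):
    """Illustrative 0..1 fade: 0 near the camera, 1 past the shadow distance.
    shadow_distance and fade_range stand in for Unity's _LightShadowData terms."""
    return min(max((z_dist - (shadow_distance - fade_range)) / fade_range, 0.0), 1.0)

def mix_realtime_and_baked(realtime_atten, baked_atten, fade):
    # Close up, trust the realtime shadow map; far away, the baked shadow.
    return realtime_atten * (1.0 - fade) + baked_atten * fade

# close to the camera: the realtime shadow (0.2) wins
near = mix_realtime_and_baked(0.2, 1.0, compute_shadow_fade(5.0, 50.0, 10.0))
# beyond the shadow distance: the baked value (1.0, unshadowed) wins
far = mix_realtime_and_baked(0.2, 1.0, compute_shadow_fade(60.0, 50.0, 10.0))
```

This is why realtime shadows pop out smoothly at the shadow distance instead of being cut off hard.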
//Indirect specular. If a reflection probe is used, sample it; otherwise fall back to the
//default unity_IndirectSpecColor, which is updated every frame
inline half3 UnityGI_IndirectSpecular(UnityGIInput data, half occlusion, Unity_GlossyEnvironmentData glossIn)
{
    half3 specular;

    #ifdef UNITY_SPECCUBE_BOX_PROJECTION
        // we will tweak reflUVW in glossIn directly (as we pass it to Unity_GlossyEnvironment twice for probe0 and
        // probe1), so keep original to pass into BoxProjectedCubemapDirection
        half3 originalReflUVW = glossIn.reflUVW;
        glossIn.reflUVW = BoxProjectedCubemapDirection (originalReflUVW, data.worldPos, data.probePosition[0], data.boxMin[0], data.boxMax[0]);
    #endif

    #ifdef _GLOSSYREFLECTIONS_OFF
        //Reflections disabled: use the default color
        specular = unity_IndirectSpecColor.rgb;
    #else
        //Sample the first reflection probe
        half3 env0 = Unity_GlossyEnvironment (UNITY_PASS_TEXCUBE(unity_SpecCube0), data.probeHDR[0], glossIn);
        #ifdef UNITY_SPECCUBE_BLENDING
            const float kBlendFactor = 0.99999;
            float blendLerp = data.boxMin[0].w;
            UNITY_BRANCH
            if (blendLerp < kBlendFactor)
            {
                #ifdef UNITY_SPECCUBE_BOX_PROJECTION
                    glossIn.reflUVW = BoxProjectedCubemapDirection (originalReflUVW, data.worldPos, data.probePosition[1], data.boxMin[1], data.boxMax[1]);
                #endif

                half3 env1 = Unity_GlossyEnvironment (UNITY_PASS_TEXCUBE_SAMPLER(unity_SpecCube1,unity_SpecCube0), data.probeHDR[1], glossIn);
                specular = lerp(env1, env0, blendLerp);
            }
            else
            {
                specular = env0;
            }
        #else
            specular = env0;
        #endif
    #endif

    return specular * occlusion;
}
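The two-probe blend is easy to get backwards: `boxMin[0].w` is the weight of probe 0, and `lerp(env1, env0, blendLerp)` means probe 1 dominates as the weight approaches 0. A small Python sketch of just the blending logic (the RGB values are illustrative):

```python
def lerp(a, b, t):
    return [x + (y - x) * t for x, y in zip(a, b)]

def blend_reflection_probes(env0, env1, blend_lerp, k_blend_factor=0.99999):
    """Sketch of the two-probe blend: boxMin[0].w carries probe 0's weight.
    At a weight of ~1, only probe 0 contributes, matching the shader's early-out."""
    if blend_lerp < k_blend_factor:
        return lerp(env1, env0, blend_lerp)  # note the argument order
    return env0

# weight 0.25: 25% of probe 0 (red), 75% of probe 1 (green)
spec = blend_reflection_probes([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], 0.25)
```

The `kBlendFactor` comparison exists so that the second (more expensive) probe sample is skipped entirely when the first probe fully covers the object.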


That wraps up global illumination; on to the BRDF itself. At first glance the macro block below looks like it chooses different algorithms per platform, but it is really driven by project settings: Graphics Tier Settings > Standard Shader Quality. Let's analyze the three variants one by one and see how they differ.

#if !defined (UNITY_BRDF_PBS) // allow to explicitly override BRDF in custom shader
    // still add safe net for low shader models, otherwise we might end up with shaders failing to compile
    #if SHADER_TARGET < 30 || defined(SHADER_TARGET_SURFACE_ANALYSIS)
        #define UNITY_BRDF_PBS BRDF3_Unity_PBS
    #elif defined(UNITY_PBS_USE_BRDF3)
        #define UNITY_BRDF_PBS BRDF3_Unity_PBS
    #elif defined(UNITY_PBS_USE_BRDF2)
        #define UNITY_BRDF_PBS BRDF2_Unity_PBS
    #elif defined(UNITY_PBS_USE_BRDF1)
        #define UNITY_BRDF_PBS BRDF1_Unity_PBS
    #else
        #error something broke in auto-choosing BRDF
    #endif
#endif

BRDF3 is the lowest tier of the three -- it is what shader target 2.0 falls back to, and it is a series of aggressive simplifications. The Blinn-Phong-style specular of the BRDF is simplified by replacing (n·h)^k with (r·l)^k, and the specular distribution curve is approximated with a LUT texture instead of evaluating the power directly. The direct-light diffuse gets no Fresnel treatment at all and is simply added in; the direct-light specular comes from the (r·l) lookup multiplied by the specular color. For indirect light, the GI diffuse is multiplied by the object's own diffuse color, and the indirect specular is approximated with a lerp between the specular color and the grazing term rather than a real Fresnel evaluation.
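The (r·l) substitution works because r·l and n·h peak at the same configuration: when the view direction mirrors the light direction. A Python sketch with hypothetical vectors (normal straight up, light and view at mirrored 45-degree angles) shows both terms hitting their maximum together:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(i, n):
    d = dot(i, n)
    return [x - 2.0 * d * y for x, y in zip(i, n)]

# hypothetical setup: normal straight up, V is the mirror of L about the normal
N = [0.0, 1.0, 0.0]
L = normalize([1.0, 1.0, 0.0])
V = normalize([-1.0, 1.0, 0.0])

H = normalize([l + v for l, v in zip(L, V)])   # Blinn-Phong half vector
R = reflect([-v for v in V], N)                # view reflected about the normal

nh = max(dot(N, H), 0.0)   # Blinn-Phong term base
rl = max(dot(R, L), 0.0)   # BRDF3's cheaper substitute
# here V mirrors L, so R == L and both terms reach 1 simultaneously
```

Away from the peak the two curves fall off at different rates (hence the "power exponent must match kHorizontalWarpExp" comment in the code below), which is exactly what the LUT corrects for.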

half4 BRDF3_Unity_PBS (half3 diffColor, half3 specColor, half oneMinusReflectivity, half smoothness,
    float3 normal, float3 viewDir,
    UnityLight light, UnityIndirect gi)
{
    float3 reflDir = reflect (viewDir, normal);

    half nl = saturate(dot(normal, light.dir));
    half nv = saturate(dot(normal, viewDir));

    //At first this looks like a Phong model rather than Blinn-Phong, since there is no half vector.
    //In fact rlPow4AndFresnelTerm.x approximates the normal distribution and
    //rlPow4AndFresnelTerm.y approximates Fresnel.
    // use R.L instead of N.H to save couple of instructions
    half2 rlPow4AndFresnelTerm = Pow4 (float2(dot(reflDir, light.dir), 1-nv));
    // power exponent must match kHorizontalWarpExp in NHxRoughness() function in GeneratedTextures.cpp
    half rlPow4 = rlPow4AndFresnelTerm.x;
    half fresnelTerm = rlPow4AndFresnelTerm.y;

    half grazingTerm = saturate(smoothness + (1-oneMinusReflectivity));

    half3 color = BRDF3_Direct(diffColor, specColor, rlPow4, smoothness);
    color *= light.color * nl;
    color += BRDF3_Indirect(diffColor, specColor, gi, grazingTerm, fresnelTerm);

    return half4(color, 1);
}
//unity_NHxRoughness is a built-in LUT texture that Unity generates internally
//(see NHxRoughness() in GeneratedTextures.cpp)
sampler2D_float unity_NHxRoughness;
half3 BRDF3_Direct(half3 diffColor, half3 specColor, half rlPow4, half smoothness)
{
    half LUT_RANGE = 16.0; // must match range in NHxRoughness() function in GeneratedTextures.cpp
    // Lookup texture to save instructions
    half specular = tex2D(unity_NHxRoughness, half2(rlPow4, SmoothnessToPerceptualRoughness(smoothness))).r * LUT_RANGE;
    #if defined(_SPECULARHIGHLIGHTS_OFF)
        specular = 0.0;
    #endif

    return diffColor + specular * specColor;
}

half3 BRDF3_Indirect(half3 diffColor, half3 specColor, UnityIndirect indirect, half grazingTerm, half fresnelTerm)
{
    half3 c = indirect.diffuse * diffColor;
    c += indirect.specular * lerp (specColor, grazingTerm, fresnelTerm);
    return c;
}
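BRDF3_Indirect is small enough to port per-channel directly; the sketch below mirrors it in Python (the color values are illustrative). Indirect diffuse is tinted by the albedo, and the indirect specular tint fades from specColor toward the scalar grazing term as the Fresnel term rises:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def brdf3_indirect(diff_color, spec_color, ind_diffuse, ind_specular,
                   grazing_term, fresnel_term):
    """Per-channel port of BRDF3_Indirect: indirect diffuse is tinted by albedo;
    indirect specular lerps from specColor toward grazingTerm by the Fresnel term."""
    return [
        d * dc + s * lerp(sc, grazing_term, fresnel_term)
        for d, dc, s, sc in zip(ind_diffuse, diff_color, ind_specular, spec_color)
    ]

# head-on view (fresnel_term = 0): indirect specular keeps the raw specColor tint
c = brdf3_indirect([0.5, 0.5, 0.5], [0.04, 0.04, 0.04],
                   [0.2, 0.2, 0.2], [1.0, 1.0, 1.0], 0.9, 0.0)
```

At grazing angles (fresnel_term near 1) the specular tint becomes the colorless grazing term, which is what makes even dielectrics reflect brightly edge-on under this cheap model.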

BRDF2 is the mid-level PBS and the default on mobile (though current phones are powerful enough that going straight to the highest tier is often fine). The normal distribution function is from the Cook-Torrance/GGX family. Several constants below look arbitrary -- why multiply by 4, why clamp l·h at 0.32 -- but they come from the mobile approximations described in "Optimizing PBR for Mobile" (SIGGRAPH 2015), which the code itself references.

half4 BRDF2_Unity_PBS (half3 diffColor, half3 specColor, half oneMinusReflectivity, half smoothness,
    float3 normal, float3 viewDir,
    UnityLight light, UnityIndirect gi)
{
    float3 halfDir = Unity_SafeNormalize (float3(light.dir) + viewDir);

    half nl = saturate(dot(normal, light.dir));
    float nh = saturate(dot(normal, halfDir));
    half nv = saturate(dot(normal, viewDir));
    float lh = saturate(dot(light.dir, halfDir));

    // Specular term
    //Perceptual roughness = (1 - smoothness). It is "perceptual" because the
    //roughness actually used in the math is its square
    half perceptualRoughness = SmoothnessToPerceptualRoughness (smoothness);
    //roughness = perceptualRoughness * perceptualRoughness, which maps the slider more evenly to the visual result
    half roughness = PerceptualRoughnessToRoughness(perceptualRoughness);


#if UNITY_BRDF_GGX
    // GGX Distribution multiplied by combined approximation of Visibility and Fresnel
    // See "Optimizing PBR for Mobile" from Siggraph 2015 moving mobile graphics course
    // https://community.arm.com/events/1155
    half a = roughness;
    float a2 = a*a;
    //The denominator of the GGX D term, rearranged into a cheap-to-evaluate form
    float d = nh * nh * (a2 - 1.f) + 1.00001f;
#ifdef UNITY_COLORSPACE_GAMMA
    // Tighter approximation for Gamma only rendering mode!
    // DVF = sqrt(DVF);
    // DVF = (a * sqrt(.25)) / (max(sqrt(0.1), lh)*sqrt(roughness + .5) * d);
    //The DVF above is the full specular expression, which in gamma space needs a square root.
    //Calling the gamma-space term GA and the linear-space term LA: LA = DVF*DVF and GA = sqrt(LA),
    //with sqrt(4*(roughness + 0.5f)) approximated by (1.5f + roughness)
    float specularTerm = a / (max(0.32f, lh) * (1.5f + roughness) * d);
#else
    float specularTerm = a2 / (max(0.1f, lh*lh) * (roughness + 0.5f) * (d * d) * 4);
#endif

    // on mobiles (where half actually means something) denominator have risk of overflow
    // clamp below was added specifically to "fix" that, but dx compiler (we convert bytecode to metal/gles)
    // sees that specularTerm have only non-negative terms, so it skips max(0,..) in clamp (leaving only min(100,...))
#if defined (SHADER_API_MOBILE)
    specularTerm = specularTerm - 1e-4f;
#endif
#else
    // Legacy -- the old version, kept for compatibility; not used by default
    half specularPower = PerceptualRoughnessToSpecPower(perceptualRoughness);
    // Modified with approximate Visibility function that takes roughness into account
    // Original ((n+1)*N.H^n) / (8*Pi * L.H^3) didn't take into account roughness
    // and produced extremely bright specular at grazing angles
    half invV = lh * lh * smoothness + perceptualRoughness * perceptualRoughness;
    // approx ModifiedKelemenVisibilityTerm(lh, perceptualRoughness);
    half invF = lh;
    half specularTerm = ((specularPower + 1) * pow (nh, specularPower)) / (8 * invV * invF + 1e-4h);
    #ifdef UNITY_COLORSPACE_GAMMA
        specularTerm = sqrt(max(1e-4f, specularTerm));
    #endif
#endif


#if defined (SHADER_API_MOBILE)
    specularTerm = clamp(specularTerm, 0.0, 100.0); // Prevent FP16 overflow on mobiles
#endif
//If specular highlights are turned off (in the Standard shader panel), the term is forced to 0
#if defined(_SPECULARHIGHLIGHTS_OFF)
    specularTerm = 0.0;
#endif
    // surfaceReduction = Int D(NdotH) * NdotH * Id(NdotL>0) dH = 1/(realRoughness^2+1)
#ifdef UNITY_COLORSPACE_GAMMA
    half surfaceReduction = 0.28;                           // 1-0.28*x^3 as approximation for (1/(x^4+1))^(1/2.2) on the domain [0;1]
#else
    half surfaceReduction = (0.6-0.08*perceptualRoughness); // 1-x^3*(0.6-0.08*x) approximation for 1/(x^4+1)
#endif

    surfaceReduction = 1.0 - roughness*perceptualRoughness*surfaceReduction;
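The constants in the surfaceReduction branches come from curve fitting, as the comments state: in gamma space, 1 - 0.28·x³ approximates (1/(x⁴+1))^(1/2.2) on [0, 1]. That claim is easy to check numerically with a quick Python sweep over the domain:

```python
# Maximum error of the gamma-space polynomial fit 1 - 0.28*x^3 against the
# "true" curve (1/(x^4+1))^(1/2.2), sampled over [0, 1].
max_err = max(
    abs((1.0 - 0.28 * x ** 3) - (1.0 / (x ** 4 + 1.0)) ** (1.0 / 2.2))
    for x in (i / 100.0 for i in range(101))
)
# the fit stays within roughly a percent across the whole domain
```

This is the general pattern in BRDF2: replace a transcendental expression with a low-order polynomial that is visually indistinguishable in the target color space.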
    //Grazing term -- at grazing view angles, the Fresnel response of a rough surface is weaker than that of a smooth one
    half grazingTerm = saturate(smoothness + (1-oneMinusReflectivity));
    //Final color = (object diffuse + specular term * specular color) * light color * n.l
    //            + GI diffuse * object diffuse
    //            + surfaceReduction * GI specular * fast Fresnel lerp
    //Note that neither the direct diffuse nor the GI diffuse receives a Fresnel term here
    half3 color =   (diffColor + specularTerm * specColor) * light.color * nl
                    + gi.diffuse * diffColor
                    + surfaceReduction * gi.specular * FresnelLerpFast (specColor, grazingTerm, nv);

    return half4(color, 1);
}

Finally, BRDF1, the full-quality version. This is the "textbook" BRDF: most introductions to PBR walk through exactly this, step by step from the formula. The main point of reading it here is to compare it with the two variants above and understand where their savings come from.

// Main Physically Based BRDF
// Derived from Disney work and based on Torrance-Sparrow micro-facet model
//   BRDF = kD / pi + kS * (D * V * F) / 4
//   I = BRDF * NdotL
//Note that compared to the usual presentation, both sides are effectively multiplied by pi here:
//pi * BRDF = kD + kS * (D * V * F) * pi / 4 -- done for compatibility reasons explained in the pi HACK comment below
// * NDF (depending on UNITY_BRDF_GGX):
//  a) Normalized BlinnPhong
//  b) GGX
// * Smith for Visiblity term
// * Schlick approximation for Fresnel
half4 BRDF1_Unity_PBS (half3 diffColor, half3 specColor, half oneMinusReflectivity, half smoothness,
    float3 normal, float3 viewDir,
    UnityLight light, UnityIndirect gi)
{
    //Perceptual roughness = (1 - smoothness); the roughness used in the math is its square
    float perceptualRoughness = SmoothnessToPerceptualRoughness (smoothness);
    //Half vector
    float3 halfDir = Unity_SafeNormalize (float3(light.dir) + viewDir);
// NdotV should not be negative for visible pixels, but it can happen due to perspective projection and normal mapping
// In this case normal should be modified to become valid (i.e facing camera) and not cause weird artifacts.
// but this operation adds few ALU and users may not want it. Alternative is to simply take the abs of NdotV (less correct but works too).
// Following define allow to control this. Set it to 0 if ALU is critical on your platform.
// This correction is interesting for GGX with SmithJoint visibility function because artifacts are more visible in this case due to highlight edge of rough surface
// Edit: Disable this code by default for now as it is not compatible with two sided lighting used in SpeedTree.
//In short: perspective projection and normal mapping can make N.V negative even for visible pixels.
//Two fixes are offered: bend the normal toward the view vector (disabled by default), or simply take abs(N.V).
#if UNITY_HANDLE_CORRECTLY_NEGATIVE_NDOTV
    // The amount we shift the normal toward the view vector is defined by the dot product.
    half shiftAmount = dot(normal, viewDir);
    normal = shiftAmount < 0.0f ? normal + viewDir * (-shiftAmount + 1e-5f) : normal;
    // A re-normalization should be applied here but as the shift is small we don't do it to save ALU.
    //normal = normalize(normal);
    half nv = saturate(dot(normal, viewDir)); // TODO: this saturate should no be necessary here
#else
    half nv = abs(dot(normal, viewDir));    // This abs allow to limit artifact
#endif
    half nl = saturate(dot(normal, light.dir));
    float nh = saturate(dot(normal, halfDir));

    half lv = saturate(dot(light.dir, viewDir));
    half lh = saturate(dot(light.dir, halfDir));

    // Diffuse term -- Disney's diffuse model. Compare with the cheaper tiers: BRDF2's direct diffuse ignores Fresnel entirely.
    //This diffuse term depends on the light direction, view direction, half vector, and roughness
    half diffuseTerm = DisneyDiffuse(nv, nl, lh, perceptualRoughness) * nl;
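DisneyDiffuse itself is not quoted above; the sketch below follows the form used in UnityStandardBRDF.cginc as I understand it (fd90 = 0.5 + 2·(l·h)²·roughness, with Schlick-style factors on both the light and view directions), so treat the exact constants as stated from memory rather than authoritative:

```python
def pow5(x):
    return x * x * x * x * x

def disney_diffuse(nv, nl, lh, perceptual_roughness):
    """Disney diffuse term: retro-reflection grows with roughness at grazing angles.
    fd90 is the response at a 90-degree grazing angle; both the light and view
    directions are attenuated by a Schlick-style (1 - cos)^5 factor."""
    fd90 = 0.5 + 2.0 * lh * lh * perceptual_roughness
    light_scatter = 1.0 + (fd90 - 1.0) * pow5(1.0 - nl)
    view_scatter = 1.0 + (fd90 - 1.0) * pow5(1.0 - nv)
    return light_scatter * view_scatter

# head-on (nl = nv = 1) both Schlick factors vanish and the term is exactly 1;
# for a rough surface viewed at grazing angles the term rises above 1
head_on = disney_diffuse(1.0, 1.0, 1.0, 0.5)
grazing = disney_diffuse(1.0, 0.0, 1.0, 1.0)
```

This is the behavior a plain Lambert term cannot give: rough surfaces brighten slightly at grazing angles (retro-reflection) instead of staying constant.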

    // Specular term
    // HACK: theoretically we should divide diffuseTerm by Pi and not multiply specularTerm!
    // BUT 1) that will make shader look significantly darker than Legacy ones
    // and 2) on engine side "Non-important" lights have to be divided by Pi too in cases when they are injected into ambient SH
    //In other words: 1) keeps brightness compatible with the legacy shaders, and 2) "non-important"
    //lights are already divided by Pi when they are injected into the spherical harmonics
    //roughness = perceptualRoughness * perceptualRoughness
    float roughness = PerceptualRoughnessToRoughness(perceptualRoughness);
#if UNITY_BRDF_GGX
    // GGX with roughness of 0 would mean no specular at all; max(roughness, 0.002) matches the HDRP roughness remapping
    roughness = max(roughness, 0.002);
    //Visibility function
    half V = SmithJointGGXVisibilityTerm (nl, nv, roughness);
    //Normal distribution term
    float D = GGXTerm (nh, roughness);
#else
    // Legacy -- old version, not used by default
    half V = SmithBeckmannVisibilityTerm (nl, nv, roughness);
    half D = NDFBlinnPhongNormalizedTerm (nh, PerceptualRoughnessToSpecPower(perceptualRoughness));
#endif

    //Specular factor
    half specularTerm = V*D * UNITY_PI; // Torrance-Sparrow model, Fresnel is applied later

#ifdef UNITY_COLORSPACE_GAMMA
    specularTerm = sqrt(max(1e-4h, specularTerm));
#endif

    // specularTerm * nl can be NaN on Metal in some cases, use max() to make sure it's a sane value
    specularTerm = max(0, specularTerm * nl);
    //Specular highlights can be turned off in the Standard shader panel
#if defined(_SPECULARHIGHLIGHTS_OFF)
    specularTerm = 0.0;
#endif
    // surfaceReduction = Int D(NdotH) * NdotH * Id(NdotL>0) dH = 1/(roughness^2+1)
    //Attenuation applied to the indirect specular
    half surfaceReduction;
#   ifdef UNITY_COLORSPACE_GAMMA
        surfaceReduction = 1.0-0.28*roughness*perceptualRoughness;      // 1-0.28*x^3 as approximation for (1/(x^4+1))^(1/2.2) on the domain [0;1]
#   else
        surfaceReduction = 1.0 / (roughness*roughness + 1.0);           // fade \in [0.5;1]
#   endif

    // To provide true Lambert lighting, we need to be able to kill specular completely.
    specularTerm *= any(specColor) ? 1.0 : 0.0;
    //Grazing term
    half grazingTerm = saturate(smoothness + (1-oneMinusReflectivity));
    //Broken down: GI diffuse * object diffuse + object diffuse * light color * diffuse term
    //           + specular term * light color * Fresnel color
    //           + surfaceReduction * GI specular * Fresnel lerp color
    half3 color =   diffColor * (gi.diffuse + light.color * diffuseTerm)
                    + specularTerm * light.color * FresnelTerm (specColor, lh)
                    + surfaceReduction * gi.specular * FresnelLerp (specColor, grazingTerm, nv);

    return half4(color, 1);
}

inline float3 Unity_SafeNormalize(float3 inVec)
{
    float dp3 = max(0.001f, dot(inVec, inVec));
    return inVec * rsqrt(dp3); // rsqrt(x) = 1/sqrt(x); the max() clamp keeps a zero vector from producing NaN
}
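The clamp is the whole point of the "safe" normalize: when light.dir + viewDir is the zero vector (light exactly opposite the view), a plain normalize would divide by zero. A Python sketch of the same trick:

```python
import math

def safe_normalize(v):
    """Port of Unity_SafeNormalize: the max() clamp keeps dot(v, v) away from
    zero, so a zero vector returns zeros instead of NaN."""
    dp3 = max(0.001, sum(x * x for x in v))
    inv_len = 1.0 / math.sqrt(dp3)  # rsqrt(x) = 1/sqrt(x)
    return [x * inv_len for x in v]

unit = safe_normalize([3.0, 0.0, 0.0])   # ordinary vectors normalize as usual
zero = safe_normalize([0.0, 0.0, 0.0])   # degenerate input stays finite
```

The cost is that vectors shorter than sqrt(0.001) are not fully normalized, which is harmless for the half-vector use case here.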
//Smith-Joint visibility term (approximated form)
inline half SmithJointGGXVisibilityTerm (half NdotL, half NdotV, half roughness)
{
    //Models the occlusion (shadowing/masking) relation between the light and view directions;
    //plotting it against NdotL for several roughness values (e.g. a = 0.5 vs a = 0.1) shows
    //how the occlusion varies with the viewing angle
    half a = roughness;
    half lambdaV = NdotL * (NdotV * (1 - a) + a);
    half lambdaL = NdotV * (NdotL * (1 - a) + a);
    return 0.5f / (lambdaV + lambdaL + 1e-5f);
}
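This is the approximated Smith-Joint form (the exact version involves square roots, which this drops for speed). Two properties worth checking numerically: it is symmetric in NdotL and NdotV, and for a perfectly smooth surface viewed head-on it approaches 0.25. A Python sketch:

```python
def smith_joint_ggx_visibility(nl, nv, roughness):
    """Port of Unity's approximated Smith-Joint GGX visibility term.
    Symmetric in nl and nv: swapping them swaps lambda_v and lambda_l."""
    a = roughness
    lambda_v = nl * (nv * (1.0 - a) + a)
    lambda_l = nv * (nl * (1.0 - a) + a)
    return 0.5 / (lambda_v + lambda_l + 1e-5)

# smooth surface, head-on: lambdaV = lambdaL = nl*nv = 1, so V -> ~0.25
v_head_on = smith_joint_ggx_visibility(1.0, 1.0, 0.0)
```

Note this "visibility" term already folds in the 1/(4·nl·nv) denominator of the microfacet BRDF, which is why it stays finite at grazing angles instead of needing a separate divide.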
//GGX normal distribution term: D(h) = a^2 / (pi * ((n.h)^2 * (a^2 - 1) + 1)^2)
inline float GGXTerm (float NdotH, float roughness)
{
    float a2 = roughness * roughness;
    //At roughness 0, a2 = 0 and the distribution is zero everywhere except an infinitely sharp
    //spike at NdotH = 1; at roughness 1 the peak flattens out to 1/pi. The lower the roughness,
    //the sharper the spike near NdotH = 1, which is why smoother materials get smaller, more
    //intense highlight spots. A roughness of exactly 0 degenerates (the peak diverges), which
    //is why BRDF1 clamps roughness to a minimum of 0.002 above.
    float d = (NdotH * a2 - NdotH) * NdotH + 1.0f;
    return UNITY_INV_PI * a2 / (d * d + 1e-7f); // This function is not intended to be running on Mobile,
}
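The peak behavior described in the comments is easy to verify: at NdotH = 1 the denominator reduces to a^4, so the peak value is approximately 1/(pi·a²) and grows as roughness shrinks. A Python port of the exact expression above:

```python
import math

def ggx_term(nh, roughness):
    """Port of GGXTerm: D(h) = a^2 / (pi * ((n.h)^2 * (a^2 - 1) + 1)^2).
    The 1e-7 guard keeps the denominator from reaching zero at tiny roughness."""
    a2 = roughness * roughness
    d = (nh * a2 - nh) * nh + 1.0
    return (1.0 / math.pi) * a2 / (d * d + 1e-7)

# at nh = 1 the peak value is ~1/(pi * a^2): lower roughness, taller/narrower lobe
peak_rough = ggx_term(1.0, 0.5)   # ~1.27
peak_smooth = ggx_term(1.0, 0.1)  # ~31.8, much taller
```

As roughness goes to 0 the lobe narrows toward a delta spike while the peak diverges, so the total energy (the integral) stays normalized.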


Finally, it's over. Reading back through this much source material was exhausting, but going through the code like this is genuinely helpful, and after working with Unity for a long time it's worth keeping notes. This series will continue: one follow-up on shadows, and another on building a custom PBR. The theory side of PBR is well covered elsewhere, so it is not planned for now.

Tags: Unity Mobile Fragment less

Posted on Tue, 23 Jun 2020 03:34:24 -0400 by RobM