OpenGL Advanced Lighting: Point Shadows


  • This technique is called point shadows; it is also known as omnidirectional shadow maps.

    The algorithm is similar to directional shadow mapping: we generate a depth map from the light's perspective, sample that depth map using the current fragment position, and compare the stored depth value against the fragment's depth to decide whether it is in shadow. The main difference between directional and omnidirectional shadow mapping is the kind of depth map that is used.

    For the depth map we need to render the scene in all directions from a point light, and an ordinary 2D depth map cannot do that. What if we use a cube map instead? Because a cube map can store environment data on 6 faces, we can render the whole scene onto each face of the cube map and sample it to obtain the depth values all around the point light source.

  • Because there are six faces, we need six view matrices.
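The depth encoding and shadow test described above can be sketched in plain C++ (a minimal sketch with hypothetical helper names; no OpenGL involved): the cubemap stores the light-to-fragment distance divided by the far plane, and the lighting pass decodes it back and compares.

```cpp
#include <cassert>
#include <cmath>

// Encode a light-to-fragment distance into the [0,1] range that the depth
// pass writes into the cubemap (distance divided by the far plane).
float encodeDepth(float lightDistance, float farPlane) {
    return lightDistance / farPlane;
}

// Shadow test as performed in the lighting pass: decode the stored depth
// back to world units and compare it against the current fragment's distance.
bool inShadow(float storedDepth01, float currentDistance,
              float farPlane, float bias = 0.05f) {
    float closestDepth = storedDepth01 * farPlane; // undo the [0,1] mapping
    return currentDistance - bias > closestDepth;
}
```

This mirrors the logic of the depth-map fragment shader and the ShadowCalculation function shown later in the chapter.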

Creating the depth map

  • Create the depth map; note that a single cube map texture provides the 6 depth faces:
    // build and compile shaders
    // -------------------------
    Shader shader("3.2.1.point_shadows.vert", "3.2.1.point_shadows.frag"); // renders the scene to the screen
    Shader simpleDepthShader("3.2.1.point_shadows_depth.vert", "3.2.1.point_shadows_depth.frag", "3.2.1.point_shadows_depth.glsl"); // generates the depth cube map; note the argument order here is vertex, fragment, geometry, while the pipeline order is vertex -> geometry -> fragment
    // configure depth map FBO
    // -----------------------
    const unsigned int SHADOW_WIDTH = 1024, SHADOW_HEIGHT = 1024;
    unsigned int depthMapFBO;
    glGenFramebuffers(1, &depthMapFBO);
    // generate the depth cube map
    unsigned int depthCubemap;
    glGenTextures(1, &depthCubemap);
    // generate each of the cube map's 6 faces as a 2D depth-value texture image
    glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubemap);
    for (unsigned int i = 0; i < 6; ++i)
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT, SHADOW_WIDTH, SHADOW_HEIGHT, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
    // set the appropriate texture parameters
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    // attach depth texture as the FBO's depth buffer
    // since we will use a geometry shader that renders all faces in one pass, we can use
    // glFramebufferTexture to attach the cube map directly to the framebuffer's depth attachment
    glBindFramebuffer(GL_FRAMEBUFFER, depthMapFBO);
    glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthCubemap, 0);
    glDrawBuffer(GL_NONE); // a depth-only pass needs no color buffer
    glReadBuffer(GL_NONE);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

Light-space transformations

  • Perspective projection
GLfloat aspect = (GLfloat)SHADOW_WIDTH/(GLfloat)SHADOW_HEIGHT;
GLfloat near = 1.0f;
GLfloat far = 25.0f;
// The field-of-view parameter of glm::perspective is set to 90 degrees.
// 90 degrees makes the viewing field exactly large enough to fill one face of the
// cube map, so all faces of the cube map align with each other at the edges.
glm::mat4 shadowProj = glm::perspective(glm::radians(90.0f), aspect, near, far);
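A quick sanity check (plain C++, hypothetical helper name) of why 90 degrees is the right field of view: with aspect 1.0, the frustum's half-extent at distance d is d * tan(45°) = d, so the frustum spans one cube face exactly edge to edge.

```cpp
#include <cassert>
#include <cmath>

// Half-extent of a symmetric perspective frustum at a given distance,
// for a square (aspect 1.0) viewport.
float frustumHalfExtent(float fovDegrees, float distance) {
    float halfFovRadians = fovDegrees * 0.5f * 3.1415926535f / 180.0f;
    return distance * std::tan(halfFovRadians);
}
```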
  • Because the projection matrix does not change per direction, we can reuse it for all six transformation matrices; we only need a different view matrix per direction. Create six viewing directions with glm::lookAt, each looking at one face of the cube map in order: right, left, top, bottom, near and far:
std::vector<glm::mat4> shadowTransforms;
shadowTransforms.push_back(shadowProj * 
                 glm::lookAt(lightPos, lightPos + glm::vec3( 1.0, 0.0, 0.0), glm::vec3(0.0,-1.0, 0.0)));
shadowTransforms.push_back(shadowProj * 
                 glm::lookAt(lightPos, lightPos + glm::vec3(-1.0, 0.0, 0.0), glm::vec3(0.0,-1.0, 0.0)));
shadowTransforms.push_back(shadowProj * 
                 glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, 1.0, 0.0), glm::vec3(0.0, 0.0, 1.0)));
shadowTransforms.push_back(shadowProj * 
                 glm::lookAt(lightPos, lightPos + glm::vec3( 0.0,-1.0, 0.0), glm::vec3(0.0, 0.0,-1.0)));
shadowTransforms.push_back(shadowProj * 
                 glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, 0.0, 1.0), glm::vec3(0.0,-1.0, 0.0)));
shadowTransforms.push_back(shadowProj * 
                 glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, 0.0,-1.0), glm::vec3(0.0,-1.0, 0.0)));
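The direction/up pairs above can be tabulated and sanity-checked in plain C++ (a sketch with hypothetical names, mirroring the glm::lookAt calls): each up vector must be perpendicular to its viewing direction, otherwise lookAt would produce a degenerate view matrix.

```cpp
#include <array>
#include <cassert>

struct Vec3 { float x, y, z; };

// Viewing directions for the six cube map faces, in cubemap face order:
// +X, -X, +Y, -Y, +Z, -Z (right, left, top, bottom, near, far).
constexpr std::array<Vec3, 6> kFaceDirs = {{
    { 1, 0, 0}, {-1, 0, 0}, {0, 1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}
}};
// The matching up vectors used in the glm::lookAt calls above.
constexpr std::array<Vec3, 6> kFaceUps = {{
    {0, -1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}, {0, -1, 0}, {0, -1, 0}
}};

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
```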
  • Rendering
        // 1. render the scene onto the depth cube map
        // -------------------------------------------
        glViewport(0, 0, SHADOW_WIDTH, SHADOW_HEIGHT);
        glBindFramebuffer(GL_FRAMEBUFFER, depthMapFBO); // bind the depth framebuffer
        glClear(GL_DEPTH_BUFFER_BIT);
        simpleDepthShader.use();
        for (unsigned int i = 0; i < 6; ++i)
            simpleDepthShader.setMat4("shadowMatrices[" + std::to_string(i) + "]", shadowTransforms[i]);
        simpleDepthShader.setFloat("far_plane", far_plane);
        simpleDepthShader.setVec3("lightPos", lightPos);
        renderScene(simpleDepthShader);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);

        // 2. render scene as normal
        // -------------------------
        glViewport(0, 0, SCR_WIDTH, SCR_HEIGHT);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        shader.use();
        glm::mat4 projection = glm::perspective(glm::radians(camera.Zoom), (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 100.0f);
        glm::mat4 view = camera.GetViewMatrix();
        shader.setMat4("projection", projection);
        shader.setMat4("view", view);
        // set lighting uniforms
        shader.setVec3("lightPos", lightPos);
        shader.setVec3("viewPos", camera.Position);
        shader.setInt("shadows", shadows); // enable/disable shadows by pressing 'SPACE'
        shader.setFloat("far_plane", far_plane);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, woodTexture);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubemap);
        renderScene(shader);
  • Note how renderScene differs from the previous chapter: the room cube is scaled by a factor of 5, and because the camera is inside the room we must disable face culling while drawing it (otherwise its inside faces are culled and the room appears empty), then re-enable culling afterwards:
void renderScene(const Shader& shader)
{
    // room cube
    glm::mat4 model = glm::mat4(1.0f);
    model = glm::scale(model, glm::vec3(5.0f));
    shader.setMat4("model", model);
    glDisable(GL_CULL_FACE); // note that we disable culling here since we render 'inside' the cube instead of the usual 'outside', which throws off the normal culling methods.
    shader.setInt("reverse_normals", 1); // a small hack to invert normals when drawing the cube from the inside so lighting still works.
    renderCube();
    shader.setInt("reverse_normals", 0); // and of course disable it
    glEnable(GL_CULL_FACE);
    // 5 cubes
    model = glm::mat4(1.0f);
    model = glm::translate(model, glm::vec3(4.0f, -3.5f, 0.0));
    model = glm::scale(model, glm::vec3(0.5f));
    shader.setMat4("model", model);
    renderCube();
    model = glm::mat4(1.0f);
    model = glm::translate(model, glm::vec3(2.0f, 3.0f, 1.0));
    model = glm::scale(model, glm::vec3(0.75f));
    shader.setMat4("model", model);
    renderCube();
    model = glm::mat4(1.0f);
    model = glm::translate(model, glm::vec3(-3.0f, -1.0f, 0.0));
    model = glm::scale(model, glm::vec3(0.5f));
    shader.setMat4("model", model);
    renderCube();
    model = glm::mat4(1.0f);
    model = glm::translate(model, glm::vec3(-1.5f, 1.0f, 1.5));
    model = glm::scale(model, glm::vec3(0.5f));
    shader.setMat4("model", model);
    renderCube();
    model = glm::mat4(1.0f);
    model = glm::translate(model, glm::vec3(-1.5f, 2.0f, -3.0));
    model = glm::rotate(model, glm::radians(60.0f), glm::normalize(glm::vec3(1.0, 0.0, 1.0)));
    model = glm::scale(model, glm::vec3(0.75f));
    shader.setMat4("model", model);
    renderCube();
}
  • Depth map vertex shader
#version 330 core
layout (location = 0) in vec3 aPos;

uniform mat4 model;

void main()
{
    gl_Position = model * vec4(aPos, 1.0); // only the model transform here; unlike before, projection and view are applied later, in the geometry shader
}
  • Depth map geometry shader
#version 330 core
layout (triangles) in; // takes in 3 vertices
layout (triangle_strip, max_vertices=18) out; // outputs 6 faces * 3 vertices = 18

uniform mat4 shadowMatrices[6];

out vec4 FragPos; // FragPos from GS (output per EmitVertex)

void main()
{
    for(int face = 0; face < 6; ++face)
    {
        gl_Layer = face; // built-in geometry shader variable that selects which cube map face we render to
        for(int i = 0; i < 3; ++i) // for each vertex of the triangle
        {
            FragPos = gl_in[i].gl_Position; // world-space position (model matrix already applied)
            gl_Position = shadowMatrices[face] * FragPos; // transform into this face's light space
            EmitVertex(); // output the vertex
        }
        EndPrimitive();
    }
}
  • Depth map fragment shader
#version 330 core
in vec4 FragPos;

uniform vec3 lightPos;
uniform float far_plane;

void main()
{
    float lightDistance = length(FragPos.xyz - lightPos); // distance between fragment and light
    // map to a value between 0 and 1 to use as the depth value
    lightDistance = lightDistance / far_plane;
    // write it to the depth attachment
    gl_FragDepth = lightDistance;
}
  • Fragment shader for scene rendering
#version 330 core
out vec4 FragColor;

in VS_OUT {
    vec3 FragPos;
    vec3 Normal;
    vec2 TexCoords;
} fs_in;

uniform sampler2D diffuseTexture;
uniform samplerCube depthMap;

uniform vec3 lightPos;
uniform vec3 viewPos;

uniform float far_plane;
uniform bool shadows;

float ShadowCalculation(vec3 fragPos)
{
    // get vector between fragment position and light position
    vec3 fragToLight = fragPos - lightPos;
    // use the fragment-to-light vector to sample from the depth map
    float closestDepth = texture(depthMap, fragToLight).r;
    // it is currently in linear range [0,1]; re-transform it back to the original depth value
    closestDepth *= far_plane;
    // now get the current linear depth as the length between the fragment and light position
    float currentDepth = length(fragToLight);
    // test for shadows
    float bias = 0.05; // we use a much larger bias since depth is now in [near_plane, far_plane] range
    float shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;
    // display closestDepth as debug (to visualize the depth cubemap)
    // FragColor = vec4(vec3(closestDepth / far_plane), 1.0);
    return shadow;
}

void main()
{
    vec3 color = texture(diffuseTexture, fs_in.TexCoords).rgb;
    vec3 normal = normalize(fs_in.Normal);
    vec3 lightColor = vec3(0.3);
    // ambient
    vec3 ambient = 0.3 * color;
    // diffuse
    vec3 lightDir = normalize(lightPos - fs_in.FragPos);
    float diff = max(dot(lightDir, normal), 0.0);
    vec3 diffuse = diff * lightColor;
    // specular (Blinn-Phong)
    vec3 viewDir = normalize(viewPos - fs_in.FragPos);
    vec3 halfwayDir = normalize(lightDir + viewDir);
    float spec = pow(max(dot(normal, halfwayDir), 0.0), 64.0);
    vec3 specular = spec * lightColor;
    // calculate shadow
    float shadow = shadows ? ShadowCalculation(fs_in.FragPos) : 0.0;
    vec3 lighting = (ambient + (1.0 - shadow) * (diffuse + specular)) * color;
    FragColor = vec4(lighting, 1.0);
}

  • To visualize the depth cubemap, delete `FragColor = vec4(lighting, 1.0);` from the fragment shader's main and uncomment `FragColor = vec4(vec3(closestDepth / far_plane), 1.0);` in ShadowCalculation; the scene is then rendered as raw depth values.


With a simple PCF filter extended to a third dimension (averaging the accumulated samples), the shadow edges look softer and smoother, giving a more realistic result:

float shadow = 0.0;
float bias = 0.05; 
float samples = 4.0;
float offset = 0.1;
for(float x = -offset; x < offset; x += offset / (samples * 0.5))
{
    for(float y = -offset; y < offset; y += offset / (samples * 0.5))
    {
        for(float z = -offset; z < offset; z += offset / (samples * 0.5))
        {
            float closestDepth = texture(depthMap, fragToLight + vec3(x, y, z)).r; 
            closestDepth *= far_plane; // undo [0,1] mapping
            if(currentDepth - bias > closestDepth)
                shadow += 1.0;
        }
    }
}
shadow /= (samples * samples * samples);

However, with samples set to 4.0 we take 4 × 4 × 4 = 64 samples per fragment, which is far too many!
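The sample count of the triple loop above can be verified with a small plain-C++ replica (hypothetical helper name) that counts iterations instead of sampling the cubemap:

```cpp
#include <cassert>

// Count how many depth samples the naive 3D PCF loop takes for given
// `samples` and `offset` values (same loop bounds and step as the shader).
int naivePcfSampleCount(float samples, float offset) {
    int count = 0;
    float step = offset / (samples * 0.5f);
    for (float x = -offset; x < offset; x += step)
        for (float y = -offset; y < offset; y += step)
            for (float z = -offset; z < offset; z += step)
                ++count;
    return count;
}
```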

Most of these samples are redundant: it makes more sense to sample in directions perpendicular to the original sample direction vector than close to it. However, there is no (easy) way to figure out which sub-directions are redundant. One trick is to use an array of offset directions that are all roughly separable, i.e. each points in a completely different direction, which weeds out the sub-directions that lie close to each other. Below is an array of 20 offset directions; since the averaged result is no longer just 0 or 1, the shadow edges come out softened:

vec3 sampleOffsetDirections[20] = vec3[]
(
   vec3( 1,  1,  1), vec3( 1, -1,  1), vec3(-1, -1,  1), vec3(-1,  1,  1), 
   vec3( 1,  1, -1), vec3( 1, -1, -1), vec3(-1, -1, -1), vec3(-1,  1, -1),
   vec3( 1,  1,  0), vec3( 1, -1,  0), vec3(-1, -1,  0), vec3(-1,  1,  0),
   vec3( 1,  0,  1), vec3(-1,  0,  1), vec3( 1,  0, -1), vec3(-1,  0, -1),
   vec3( 0,  1,  1), vec3( 0, -1,  1), vec3( 0, -1, -1), vec3( 0,  1, -1)
);
// Then we adapt the PCF algorithm to take its samples from sampleOffsetDirections
// and use them to sample the cube map. The advantage is that we need far fewer
// samples than with the previous PCF algorithm.
float shadow = 0.0;
float bias = 0.15;
int samples = 20;
float viewDistance = length(viewPos - fragPos);
float diskRadius = (1.0 + (viewDistance / far_plane)) / 25.0;
for(int i = 0; i < samples; ++i)
{
    float closestDepth = texture(depthMap, fragToLight + sampleOffsetDirections[i] * diskRadius).r;
    closestDepth *= far_plane; // undo [0,1] mapping
    if(currentDepth - bias > closestDepth)
        shadow += 1.0;
}
shadow /= float(samples);
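The disk-based loop can likewise be mirrored in plain C++ with a mock depth lookup standing in for the cubemap (hypothetical names; the real shader samples `depthMap` instead):

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Disk-based PCF: average the binary shadow test over `samples` stored
// depths. `sampleDepth(i)` mocks reading the [0,1] depth for offset i.
float pcfShadow(const std::function<float(int)>& sampleDepth,
                float currentDepth, float farPlane,
                int samples = 20, float bias = 0.15f) {
    float shadow = 0.0f;
    for (int i = 0; i < samples; ++i) {
        float closestDepth = sampleDepth(i) * farPlane; // undo [0,1] mapping
        if (currentDepth - bias > closestDepth)
            shadow += 1.0f;
    }
    return shadow / static_cast<float>(samples);
}
```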

The bias added to each sample is highly context dependent and should always be fine-tuned to the scene at hand. Try a few values and see how they affect the result. The code above is the final version of the vertex and fragment shaders.

Also keep in mind that generating the depth map with a geometry shader is not necessarily faster than rendering the scene six times, once per face. Geometry shaders carry their own performance penalties, and the multi-pass approach may perform better on some hardware. It depends on the type of environment, the specific graphics card and drivers, and more, so if you care about performance, get a rough measure of both methods and choose whichever is more efficient for your scene. Personally I prefer the geometry shader approach to shadow mapping, for the simple reason that it is easier to use.


Posted on Sat, 25 Sep 2021 04:34:20 -0400 by abriggs