[17 Oct 2012]
Shadow Mapping in OpenGL and GLSL
This tutorial shows how to implement shadow mapping with OpenGL and GLSL. Shadow mapping is one of the simplest methods, both to implement and to maintain, for creating interactive shadows in computer graphics.
Shadow mapping requires two rendering passes; that is, the scene (or part of it) has to be rendered twice. The first rendering pass renders the depth of the scene from the light's point of view. The second rendering pass renders the scene from the camera's point of view and, for each fragment, additionally checks whether the fragment is lit or in shadow. The following image shows the results of shadow mapping and of methods that improve its quality.
The first rendering pass saves the depth of each fragment, as seen from the light source, into a texture. If you look at the scene from the light source's point of view (by placing the camera at the light's position and orienting it along the light's direction), you won't see any shadows cast by occluders. The distance (depth) from the light source to the objects closest to it is saved into the texture - the shadow map. So each texel of the shadow map contains the distance to the point closest to the light source; that point is lit and does not lie in shadow. The depth test discards distances to points that are farther from the light source. It doesn't matter how the scene looks from the light source's point of view: this data won't be presented to the user and will only be used in the second rendering pass to determine whether fragments are in shadow.
The following image depicts objects in the scene (green figures), the light source, and the volume of space for which distances will be saved into the shadow map. Surfaces whose distances will be saved into the shadow map are marked in light blue. The surface behind the circle and the far side of the circle don't affect the shadow map, as they are invisible from the light source's point of view (the near part of the circle occludes the surfaces behind it). These parts of the scene are in shadow.
The second rendering pass visualizes the scene from the camera's point of view (as the user usually sees it). This pass performs the standard projection of vertices into screen space and outputs color to the framebuffer. Additionally, in the vertex shader, each vertex should be projected into the space that was used in the first rendering pass (the light source's space). The position of the vertex in the light source's space is passed to the fragment shader. The fragment shader should have access to the shadow map (the result of the first rendering pass). With the help of the fragment's screen space position relative to the light source, it's possible to sample the shadow map and get the distance to the closest object in the direction from the light source to the current fragment. If the distance from the shadow map is less than the distance from the current fragment to the light source, then the current fragment is in shadow, as it lies behind another surface whose distance was saved to the shadow map in the first rendering pass. Otherwise, the fragment is lit.
The following image is very similar to the previous one, but also shows a camera and the space visible through it. White marks show surfaces of the scene that are visible both to the camera and from the light source's point of view; these surfaces are lit. Black marks show surfaces that are visible to the camera but not from the light source's point of view. These surfaces are in shadow.
Let's consider the plane surface behind the circle. Most of it is in shadow, and point D0 is also in shadow. Let's check why. First, D0 is transformed into the light source's space. Then the shadow map is sampled for the distance to the closest object in the direction from the light source to D0 (the yellow line). Sampling from the shadow map returns the distance to point Dref. The distance from the light source to Dref is less than the distance to D0, so D0 is in shadow.
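The core of this check can be sketched on the CPU. A minimal illustration (shadowTest is a hypothetical helper, not code from this tutorial):

```cpp
#include <cassert>

// Depth comparison at the heart of shadow mapping.
// storedDepth   - reference depth sampled from the shadow map (Dref)
// fragmentDepth - depth of the current fragment in light space (D0)
// Returns 1.0f if the fragment is lit, 0.0f if it is in shadow.
float shadowTest(float storedDepth, float fragmentDepth)
{
    // the fragment is lit when it is not farther from the light
    // than the closest occluder recorded in the shadow map
    return fragmentDepth <= storedDepth ? 1.0f : 0.0f;
}
```

In the actual shaders this comparison is performed by the texture sampling hardware, as described later in the tutorial.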
The image also depicts a part of the scene that is visible to the camera but is not present in the shadow map (marked with a ? sign). No shadows can be computed there. For the shadow mapping method to work correctly, the space saved into the shadow map should contain the space visible to the camera.
To save the depth of the scene from the light source's point of view, we need a texture that can store floating point numbers with high precision. One option is a one-component texture with the GL_DEPTH_COMPONENT32 data format. For rendering into the shadow map, we also need to create a framebuffer. The shadow map is attached to the framebuffer's GL_DEPTH_ATTACHMENT slot. In the first rendering pass, distances from the light source to the closest objects will be saved into this shadow map.
OpenGL allows the developer to set up a texture in such a way that the sampling operation doesn't return a texel's color, but instead compares the value in the texel with a reference value passed to the sampling function together with the texture coordinates. This automatic comparison is exactly what is required in the fragment shader of the second rendering pass of shadow mapping: the distance from a fragment to the light source should be compared with the distance stored in the shadow map. The comparison mode is enabled by setting the texture parameter GL_TEXTURE_COMPARE_MODE to GL_COMPARE_REF_TO_TEXTURE. You can also specify the comparison function by setting the GL_TEXTURE_COMPARE_FUNC parameter of the shadow map. For shadow mapping, the comparison function should be set to GL_LEQUAL. GL_LEQUAL means that sampling returns 1 when the reference distance is less than or equal to the distance in the shadow map, and so the fragment is lit. Otherwise sampling returns 0, which means the fragment is in shadow. It's possible to perform this comparison manually, but the GPU will likely execute the sampling and comparison more quickly.
Following code snippet shows how to create the shadow map and framebuffer for rendering to the shadow map:
// size of the shadow map
GLuint shadowMapSize = 1024;
// create the shadow map
GLuint shadowMap;
glGenTextures(1, &shadowMap);
glBindTexture(GL_TEXTURE_2D, shadowMap);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, shadowMapSize, shadowMapSize,
0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
// GL_CLAMP_TO_EDGE sets up the shadow map in such a way that
// fragments for which the shadow map is undefined
// will get values from the closest edges of the shadow map
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// comparison mode of the shadow map
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
// create framebuffer
GLuint shadowFramebuffer;
glGenFramebuffers(1, &shadowFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, shadowFramebuffer);
// attach the shadow map to the framebuffer's GL_DEPTH_ATTACHMENT slot
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
GL_TEXTURE_2D, shadowMap, 0);
// depth is stored in the z-buffer to which the shadow map is attached,
// so there is no need for any color buffers
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
View and projection matrices for light source's point of view
The shaders of the first and second rendering passes require view and projection matrices that transform vertices into the light source's space.
The view matrix shadowViewMat is built in the same way as the standard view matrix for the camera: from a position, a direction, and an up vector. The position in this case is the position of the light source, and the direction is where the light source points (a direction is defined only for directional and spot lights). Point light sources don't have a single direction; this case is covered by the following tutorial on using shadow mapping with point light sources.
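As an illustration, here is a minimal look-at view matrix built from a position, a direction, and an up vector (column-major, as OpenGL expects). The names are illustrative; in practice glm::lookAt computes an equivalent matrix from an eye point, a target point, and an up vector:

```cpp
#include <array>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Sketch of a view matrix for the light: position + direction + up vector.
std::array<float, 16> lookAtMatrix(Vec3 pos, Vec3 dir, Vec3 up)
{
    Vec3 f = normalize(dir);          // forward axis
    Vec3 s = normalize(cross(f, up)); // side axis
    Vec3 u = cross(s, f);             // recomputed up axis
    // column-major 4x4 view matrix
    return { s.x, u.x, -f.x, 0.0f,
             s.y, u.y, -f.y, 0.0f,
             s.z, u.z, -f.z, 0.0f,
             -dot(s, pos), -dot(u, pos), dot(f, pos), 1.0f };
}
```

The light source's own position maps to the origin of light space, and points in front of the light get negative z, as in any OpenGL view space.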
The projection matrix shadowProjectionMat should be built so that its view frustum captures all the space visible to the camera (in the second rendering pass). To reduce the number of shadow mapping artifacts, the near clip plane of the projection should be pushed as far away from the light source as possible, and the far clip plane pulled as close to the light source as possible; this improves the precision of the stored depths. Directional light sources require orthographic projection matrices, and spot lights perspective projection matrices. The matrices are calculated in the same way as for the camera.
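For a directional light the projection is a plain orthographic matrix. A minimal sketch (column-major, as OpenGL expects; with glm you would simply call glm::ortho with the same six parameters):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Orthographic projection matrix for a directional light.
// l, r, b, t - left/right/bottom/top extents of the light's view volume
// n, f       - near and far clip planes (distances along -z in light space)
std::array<float, 16> orthoMatrix(float l, float r, float b, float t,
                                  float n, float f)
{
    std::array<float, 16> m = {}; // all zeros
    m[0]  = 2.0f / (r - l);
    m[5]  = 2.0f / (t - b);
    m[10] = -2.0f / (f - n);
    m[12] = -(r + l) / (r - l);
    m[13] = -(t + b) / (t - b);
    m[14] = -(f + n) / (f - n);
    m[15] = 1.0f;
    return m;
}
```

Points on the near plane map to z = -1 and points on the far plane to z = 1 in normalized device coordinates, which is exactly the range stored in the shadow map's z-buffer.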
Precision of the shadow map
During the comparison of distances in the second rendering pass of shadow mapping, it's possible that a fragment lies on a lit part of the scene while the distance from this fragment to the light source is nearly the same as the reference distance from the shadow map. Due to the discrete nature of the shadow map, in some cases the comparison will wrongly determine that the fragment is in shadow. This is essentially a z-fighting problem. To reduce these artifacts, it's possible to offset polygons during the first rendering pass by a small distance away from the light source. The distances in the shadow map will differ only insignificantly from the original ones, but this allows the shader to avoid comparison errors. The offset can be applied manually or automatically through the OpenGL API (with the glPolygonOffset function). The offset is required only for the first rendering pass, so it should be disabled before the second rendering pass.
// activate offset for polygons
glEnable(GL_POLYGON_OFFSET_FILL);
// offset by two units equal to the smallest resolvable depth change in the shadow map
// and by two units depending on the slope of the polygon
glPolygonOffset(2.0f, 2.0f);
Rendering into the shadow map
Now, after creating the shadow map and the framebuffer, calculating the view and projection matrices from the light source's point of view, and setting a proper polygon offset, we are ready to render the scene into the shadow map. The size of the viewport should be equal to the size of the shadow map, so that each fragment maps to one texel of the shadow map. Before rendering, the shadow map is cleared with the value 1.0f - the maximum possible depth from the light source to any fragment. Render the scene. Then disable the offset, bind the default framebuffer, and restore the default viewport size. The following image depicts an example shadow map; the brighter the texel, the greater the distance.
// size of the viewport should be equal to the size of the shadow map
glViewport(0, 0, shadowMapSize, shadowMapSize);
// set the framebuffer for the first rendering pass
glBindFramebuffer(GL_FRAMEBUFFER, shadowFramebuffer);
// clear the shadow map with the default value
glClearDepth(1.0f);
glClear(GL_DEPTH_BUFFER_BIT);
// set the shader, set the matrices and render the objects
glUseProgram(pass1Shader->program);
glUniformMatrix4fv(pass1Shader->shadowViewMat, 1, GL_FALSE, glm::value_ptr(shadowViewMat));
glUniformMatrix4fv(pass1Shader->shadowProjMat, 1, GL_FALSE, glm::value_ptr(shadowProjectionMat));
for (auto obj : renderables)
{
   glUniformMatrix4fv(pass1Shader->objModelMat, 1, GL_FALSE, glm::value_ptr(obj->modelMat()));
   obj->render(); // render the object's geometry
}
// restore the default framebuffer and viewport
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, windowWidth, windowHeight);
Shaders that output depth to the shadow map
The vertex shader saves the distance from the light source to the vertex into the z-buffer:

layout(location = 0) in vec3 i_position; // xyz - position

uniform mat4 u_modelMat;
uniform mat4 u_shadowViewMat;
uniform mat4 u_shadowProjMat;

void main()
{
   // transform to screen space;
   // the shader will save the z/w value to the z-buffer
   gl_Position = u_shadowProjMat * u_shadowViewMat * u_modelMat * vec4(i_position, 1);
}
The fragment shader required for the first rendering pass does almost nothing:

// color to framebuffer
layout(location = 0) out vec4 resultingColor;

void main()
{
   // a fragment shader is, by default, expected to output a color
   resultingColor = vec4(0, 0, 0, 1);
}
Usage of the shadow map for shadowing tests
After the first rendering pass, each texel of the shadow map contains the distance to the closest object from the light source's point of view. The shadow map is passed to the fragment shader of the second rendering pass.
Rendering is performed from the camera's point of view, so you should pass the camera's view and projection matrices to the vertex shader.
You also have to pass to the vertex shader the view and projection matrices (from the light source's point of view) that were used during the first rendering pass. These matrices transform vertices into device screen space (again from the light source's point of view). Device screen space is defined in the range [-1, 1] and determines the place in the framebuffer where the output of each fragment is stored; this is exactly the position that was used during the first rendering pass to fill the shadow map. So the light-space screen position of a vertex in the second rendering pass can be used to derive texture coordinates for accessing the shadow map. To do this, the position is mapped from the [-1, 1] range to the [0, 1] range of texture coordinates. The mapping can be done with an offset matrix offsetMat. Additionally, the offset matrix can be combined with the light's view and projection matrices into a single matrix; let's call this new matrix shadowMat. By transforming a vertex with shadowMat we get texture coordinates that can be used to sample the shadow map. The following code snippet shows how to calculate the shadowMat matrix:
glm::mat4 shadowViewMat = ... // light's view matrix
glm::mat4 shadowProjMat = ... // light's projection matrix
// offset matrix that maps from [-1, 1] to [0, 1] range
glm::mat4 offsetMat = glm::mat4(
   glm::vec4(0.5f, 0.0f, 0.0f, 0.0f),
   glm::vec4(0.0f, 0.5f, 0.0f, 0.0f),
   glm::vec4(0.0f, 0.0f, 0.5f, 0.0f),
   glm::vec4(0.5f, 0.5f, 0.5f, 1.0f)
);
// combination of matrices into shadowMat
glm::mat4 shadowMat = offsetMat * shadowProjMat * shadowViewMat;
It is worth noting that the screen space position is a vec4. The first two components (xy) determine the position on the screen, the third (z) the distance from the point of view to the vertex, and the fourth (w) is required for correct projective transforms and perspective division. If w isn't equal to 1, then all other components should be divided by w. This can be done automatically by using the textureProj() sampling function in the fragment shader. textureProj() behaves like the simple texture() sampling function, but before sampling it additionally divides the xyz components of the passed value by its w component. The sampler2DShadow sampler type should be used, since the shadow map has a depth format and we want automatic comparison of depth values.
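What textureProj() does with a sampler2DShadow can be mimicked on the CPU. A simplified sketch (textureProjSketch is a hypothetical helper; the stored depth is passed in directly because there is no real texture here):

```cpp
#include <cassert>

struct Vec4 { float x, y, z, w; };

// Mimics textureProj() with a sampler2DShadow and GL_LEQUAL comparison.
// coord       - the interpolated shadow coordinate (like o_shadowCoord)
// storedDepth - the depth the shadow map holds at (coord.x/w, coord.y/w)
// Returns 1.0f (lit) or 0.0f (in shadow).
float textureProjSketch(Vec4 coord, float storedDepth)
{
    float refDepth = coord.z / coord.w;           // perspective division, as textureProj does
    return refDepth <= storedDepth ? 1.0f : 0.0f; // GL_LEQUAL depth comparison
}
```

Real hardware additionally filters the comparison result when linear filtering is enabled, which this sketch ignores.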
The screen space position of the vertex is passed from the vertex shader to the fragment shader. In the fragment shader, this position is used to sample the shadow map. The shadow map is configured for automatic comparison, and as a result the sampling function returns 1 if the fragment is lit, or 0 if the fragment is in shadow. These values can be used directly as multipliers for diffuse and specular lighting.
Shaders for second rendering pass of Shadow Mapping
The vertex shader transforms the position of the vertex both into the camera's screen space and into the light source's screen space. The light-space position is passed to the fragment shader:
layout(location = 0) in vec3 i_position; // xyz - position

uniform mat4 u_modelMat;
uniform mat4 u_viewProjectionMat;
uniform mat4 u_shadowMat;

// texture coordinates to the fragment shader
// for access to the shadow map
out vec4 o_shadowCoord;

void main()
{
   // position of the vertex in the scene
   vec4 worldPos = u_modelMat * vec4(i_position, 1);
   // screen space position of the vertex from the light source's point of view,
   // mapped from the [-1, 1] range to [0, 1] by shadowMat
   // so it can be used as texture coordinates
   o_shadowCoord = u_shadowMat * worldPos;
   // screen space position of the vertex from the camera's point of view
   gl_Position = u_viewProjectionMat * worldPos;
}
The fragment shader compares the distance from the current fragment to the light source with the reference distance from the shadow map:

// the shadow map from the first rendering pass
layout(location = 0) uniform sampler2DShadow u_shadowMap;

// texture coordinates for access to the shadow map
in vec4 o_shadowCoord;

// color to the framebuffer
layout(location = 0) out vec4 resultingColor;

void main()
{
   // sampling from the shadow map:
   // textureProj() divides o_shadowCoord.xyz by o_shadowCoord.w,
   // then o_shadowCoord.xy are used as 2D texture coords to sample the shadow map,
   // and o_shadowCoord.z is automatically compared to the value from the shadow map
   float shadowFactor = textureProj(u_shadowMap, o_shadowCoord);
   // write white or black color to the framebuffer (light/shadow)
   resultingColor.rgb = vec3(shadowFactor);
   resultingColor.a = 1;
}
Results of shadow mapping
The following images show the results of shadow mapping. They differ in quality from the first image in the tutorial because simple shadow mapping requires improvements and additional processing. Improvements to shadow mapping are described in the next tutorial.
The first image depicts the results of shadow mapping with a shadow map size of 1024 texels. Only minor aliasing of the shadows' edges is visible, but strange artifacts are present on the sphere and on the mesh of the cat. The second image depicts the results of shadow mapping with a shadow map size of 256 texels. Now the aliasing is clearly visible. The aliasing and the artifacts appear because the shadow map is a discrete texture that stores only a finite number of distances from the light source to objects in the scene. During the second rendering pass, one texel (distance) of the shadow map may be used as the reference for multiple distinct fragments. One aliasing "square" means that all fragments in that square map to a single texel of the shadow map.
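The fragment-to-texel mapping behind this aliasing is easy to illustrate (texelIndex is a hypothetical helper): two nearby shadow coordinates can land in the same texel at a low resolution but in different texels at a higher one.

```cpp
#include <cassert>
#include <cmath>

// Which shadow-map texel a [0, 1] shadow coordinate lands in
// along one axis, for a square shadow map of the given size.
int texelIndex(float coord, int shadowMapSize)
{
    return (int)std::floor(coord * (float)shadowMapSize);
}
```

The larger the shadow map, the fewer distinct fragments share one reference texel, which is why the 1024-texel map shows much less aliasing than the 256-texel one.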
Encode floating point value as RGBA8 value
If floating point textures are unavailable, you can encode a floating point depth value in the [0, 1] range into a standard RGBA texture where each component has 8 bits. Later, to use values from this texture, the depth value has to be decoded from the four 8-bit values. The following code snippet shows functions that encode a floating point value to RGBA8 and decode it back.
// function that encodes a float in [0, 1) as RGBA
vec4 encode(float val)
{
   // each component keeps 8 more bits of precision than the previous one
   vec4 o = fract(val * vec4(1.0, 255.0, 65025.0, 16581375.0));
   // remove the part that the next component already stores
   o -= o.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
   return o;
}

// function that decodes RGBA back into a float
float decode(vec4 val)
{
   return dot(val, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}
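Such a pack/unpack scheme is easy to sanity-check on the CPU. This sketch uses the common fract-based variant of the encoding (the names encodeDepth/decodeDepth are illustrative); a real RGBA8 texture would additionally round each component to 8 bits:

```cpp
#include <cassert>
#include <cmath>

struct RGBA { float r, g, b, a; };

static float fract(float v) { return v - std::floor(v); }

// Pack a depth value in [0, 1) into four normalized components.
RGBA encodeDepth(float val)
{
    RGBA o;
    o.r = fract(val);
    o.g = fract(val * 255.0f);
    o.b = fract(val * 65025.0f);    // 255 * 255
    o.a = fract(val * 16581375.0f); // 255 * 255 * 255
    // remove the part that the next component already stores
    o.r -= o.g / 255.0f;
    o.g -= o.b / 255.0f;
    o.b -= o.a / 255.0f;
    return o;
}

// Unpack the four components back into a single depth value.
float decodeDepth(RGBA v)
{
    return v.r + v.g / 255.0f + v.b / 65025.0f + v.a / 16581375.0f;
}
```

The terms telescope, so decode(encode(d)) recovers d up to floating point rounding.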
Sun and Black Cat - Igor Dykhta © 2007-2014