[23 Nov 2012] Shadow Mapping for point light sources in OpenGL and GLSL
This tutorial shows how to use the shadow mapping method with point light sources in OpenGL and GLSL. Shadow mapping for point light sources differs from shadow mapping for directional or spot light sources. For a directional or spot light source it's sufficient to render the scene to one shadow map, which saves depth along the direction of the light source. But a point light source doesn't have a single direction and emits light in all directions around it. So to calculate all shadows from a point light source it's required to save information about the whole space around the light source in a shadow cube map. You can find a description of methods that improve the quality of shadow maps in one of the previous tutorials.
One two-dimensional texture isn't enough to save depth information for all directions around a point light source, and cube maps are intended specifically to store the environment around a point. In most cases cube textures are used for environment mapping or image-based lighting, but they can also be used in shadow mapping. To use a cube map as a shadow map it's required to render (save) the scene six times, once into each face of the cube texture. To decrease the number of draw calls you can render to multiple buffers simultaneously: the latest versions of OpenGL allow the developer to render to all six faces of the cube map at once (through a geometry shader). You can also skip rendering to some faces, for example the top face of the shadow cube map, if the camera is always above the scene.
Let's consider the following image. It depicts the view of the scene from above. The green surfaces are objects around the point light source. The black lines are separators that show four parts of space that will be rendered to different faces of the shadow cube map (the other two faces of the cube texture are ignored in this 2D view). The blue marks show surfaces that are visible from the light source's point of view; the distances to these surfaces will be saved to the shadow cube map, and these surfaces will be lit (not in shadow). Other parts of the scene (behind the circles) are in shadow.
The following shaders implement the first pass of shadow mapping for a point light source. For convenience, they calculate linear depth from the point light source to each fragment (the z-buffer stores non-linear depth). Each face of the shadow cube map is attached to the framebuffer as a color attachment (GL_COLOR_ATTACHMENT0). The shader adds a small offset to the linear depth to reduce possible z-fighting during the second rendering pass. The following image shows the inverted linear depth:


The following parameters are used to create each face of the shadow cube map:
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_R32F, shadowMapSize,
    shadowMapSize, 0, GL_RED, GL_FLOAT, nullptr); // i = 0..5, one call per face
The vertex shader that is used to calculate the linear distance from the point light source to objects in the scene:
#version 330
// attributes
layout(location = 0) in vec3 i_position; // xyz - position
// matrices
uniform mat4 u_modelMat;
uniform mat4 u_viewMat;
uniform mat4 u_projMat;
// world space position of the vertex to the fragment shader
out vec4 o_worldSpacePosition;
void main(void)
{
// to world space position
o_worldSpacePosition = u_modelMat * vec4(i_position, 1);
// to screen space position
gl_Position = u_projMat * u_viewMat * o_worldSpacePosition;
}
The fragment shader that saves the linear depth from the point light source to the closest objects in the scene:
#version 330
// world space position of the fragment
in vec4 o_worldSpacePosition;
// distance that will be saved to framebuffer
layout (location = 0) out float resultingColor;
// position of the point light source
uniform vec3 u_lightPos;
// distances from the light to the near and far clipping planes
uniform vec2 u_nearFarPlane;
// additional offset from the light
uniform float u_depthOffset;
void main(void)
{
// distance to light
float distanceToLight = distance(u_lightPos, o_worldSpacePosition.xyz);
// normalize the distance taking into account the near and far clipping
// planes, so valid distances map into the [0, 1] range
resultingColor = (distanceToLight - u_nearFarPlane.x) /
(u_nearFarPlane.y - u_nearFarPlane.x) + u_depthOffset;
// clamp distances to [0, 1] range
resultingColor = clamp(resultingColor, 0, 1);
}
Shadow mapping for directional light sources uses the position of the vertex in the light source's space to sample the shadow map. But shadow mapping for point light sources doesn't require matrices for transformation to the light's space in the second rendering pass. To get a vector for sampling the shadow cube map we simply calculate the direction from the point light source to the fragment. We also need the distance from the current fragment to the point light source, calculated in the same way as in the first rendering pass. If the distance from the shadow cube map is smaller than the distance to the current fragment, the fragment is in shadow; otherwise it is in light.


The vertex shader that implements shadow mapping for point lights (for the second rendering pass). It simply passes the world space position of the vertex to the fragment shader.
#version 330
// attributes
layout(location = 0) in vec3 i_position; // xyz - position
layout(location = 1) in vec3 i_normal; // xyz - normal
// matrices
uniform mat4 u_modelMat;
uniform mat4 u_viewMat;
uniform mat4 u_projMat;
// position of the vertex in world space to the fragment shader
out vec4 o_worldPosition;
void main(void)
{
// world space position of the vertex
o_worldPosition = u_modelMat * vec4(i_position, 1);
// screen space position of the vertex
gl_Position = u_projMat * u_viewMat * o_worldPosition;
}
The fragment shader that implements shadow mapping for point lights (for the second rendering pass). It calculates the normalized distance from the light source to the current fragment and the direction for sampling from the shadow cube map, samples the shadow cube map once, and determines shadowing:
#version 330
// shadow map
uniform samplerCube u_shadowCubeMap;
// world space position of the fragment
in vec4 o_worldPosition;
// color to the framebuffer
layout(location = 0) out vec4 resultingColor;
// position of the point light source
uniform vec3 u_lightPos;
// distances to the near and far clipping planes
uniform vec2 u_nearFarPlane;
void main(void)
{
// vector from the light source to the fragment
vec3 fromLightToFragment = o_worldPosition.xyz - u_lightPos;
// distance from the fragment to the point light source
float distanceToLight = length(fromLightToFragment);
float currentDistanceToLight = (distanceToLight - u_nearFarPlane.x) /
(u_nearFarPlane.y - u_nearFarPlane.x);
currentDistanceToLight = clamp(currentDistanceToLight, 0, 1);
// normalized direction from light source for sampling
fromLightToFragment = normalize(fromLightToFragment);
// sample shadow cube map
float referenceDistanceToLight = texture(u_shadowCubeMap, fromLightToFragment).r;
// compare distances to determine whether the fragment is in shadow
float shadowFactor = float(referenceDistanceToLight > currentDistanceToLight);
// output color
resultingColor.rgb = vec3(shadowFactor);
resultingColor.a = 1;
}
As with two-dimensional shadow maps, there is a special sampler type for shadow cube maps: samplerCubeShadow. This sampler automatically compares the distance from the shadow cube map with the distance from the current fragment to the light source and returns a shadow factor. The first three components of the second argument define the sampling direction, and the fourth is the reference distance for the comparison.
uniform samplerCubeShadow u_shadowCubeMap;
float shadowFactor = texture(u_shadowCubeMap, vec4(fromLightToFragment,
currentDistanceToLight));
And again, as with two-dimensional shadow maps, it's possible to configure the cube texture so that a single sampling from the shadow cube map returns the result of a comparison for four texels. That is, the sampling function will return one of the following shadow factor values: 0.0, 0.25, 0.5, 0.75 or 1.0. Set the minifying and magnifying filters of the texture to GL_LINEAR to enable such sampling.
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
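With GL_LINEAR filtering enabled, the hardware performs the GL_LEQUAL comparison for the four nearest texels and blends the results. The per-texel comparison and the averaging can be sketched in C (equal weights are used here for simplicity, and the function names are illustrative, not part of any API):

```c
#include <assert.h>

/* One depth comparison with GL_LEQUAL semantics: returns 1.0 when the
   reference distance is less than or equal to the stored distance
   (the fragment is lit), 0.0 otherwise (the fragment is in shadow). */
float depthCompareLEqual(float storedDistance, float refDistance)
{
    return (refDistance <= storedDistance) ? 1.0f : 0.0f;
}

/* With GL_LINEAR filtering the result blends four comparisons; real
   hardware uses bilinear weights based on the sampling position,
   equal weights are used here for simplicity. */
float depthCompare4(const float storedDistances[4], float refDistance)
{
    float sum = 0.0f;
    for (int i = 0; i < 4; ++i)
        sum += depthCompareLEqual(storedDistances[i], refDistance);
    return sum / 4.0f;
}
```

This is why the sampler can return the intermediate values 0.25, 0.5 and 0.75 in addition to 0 and 1.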
The percentage closer filtering (PCF) algorithm for a shadow cube map requires even more samples (more GPU time) than in the case of a two-dimensional shadow map in order to achieve good shadow quality. The reason is that sampling from a cube map is performed with a direction, and it's impossible to optimally precompute these directions (the kernel) as in the case of a two-dimensional shadow map.
So in order to blur the edges of hard shadows, the shader can use an array of precomputed offsets that cover all directions around the point, but half of these offsets are redundant: the redundant half will sample nearly the same texels from the shadow cube map as the correct half.
It is possible to sample a cube texture with non-normalized vectors, so there is no need to normalize the sampling vectors after the offsets are added.


The following image shows offset vectors that are added to the main sampling direction (the big green arrow). The red offset vectors (perpendicular to the main sampling direction) are the best choice for offsets, and none of them are redundant. The orange offset vectors aren't perpendicular to the main sampling direction and are less suitable for PCF sampling: combined with the main sampling direction, they may sample the same texels from the shadow cube map. The blue offset vectors have the same orientation as the main sampling direction and won't change the direction of sampling.
The part of the fragment shader that performs PCF with precomputed directions (including redundant samples):
// array of offset directions for sampling
vec3 gridSamplingDisk[20] = vec3[]
(
vec3( 1,  1,  1), vec3( 1, -1,  1), vec3(-1, -1,  1), vec3(-1,  1,  1),
vec3( 1,  1, -1), vec3( 1, -1, -1), vec3(-1, -1, -1), vec3(-1,  1, -1),
vec3( 1,  1,  0), vec3( 1, -1,  0), vec3(-1, -1,  0), vec3(-1,  1,  0),
vec3( 1,  0,  1), vec3(-1,  0,  1), vec3( 1,  0, -1), vec3(-1,  0, -1),
vec3( 0,  1,  1), vec3( 0, -1,  1), vec3( 0, -1, -1), vec3( 0,  1, -1)
);
// function compares distance from shadow map with current distance
void sampleShadowMap(in vec3 baseDirection, in vec3 baseOffset,
in float curDistance, inout float shadowFactor, inout float numSamples)
{
shadowFactor += texture(u_shadowCubeMap,
vec4(baseDirection + baseOffset, curDistance));
numSamples += 1;
}
void main(void)
{
// ...
float shadowFactor = 0;
float numSamples = 0;
// radius of PCF depending on distance from the light source
float diskRadius = (1.0 + (1.0 - currentDistanceToLight) * 3) / sizeOfCubeTex;
// evaluate each sampling direction
for(int i=0; i<20; i++)
{
sampleShadowMap(fromLightToFragment, gridSamplingDisk[i] * diskRadius,
currentDistanceToLight, shadowFactor, numSamples);
}
// average shadow factor
shadowFactor /= numSamples;
// output color to framebuffer
resultingColor.rgb = vec3(shadowFactor);
resultingColor.a = 1;
}
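The structure of the PCF loop above can also be sketched outside GLSL. In this CPU sketch the samplerCubeShadow lookup is replaced by a hypothetical callback, so only the offset and averaging logic is shown; all names are illustrative:

```c
#include <assert.h>

/* The same 20 offset directions as in the shader. */
static const float gridSamplingDisk[20][3] = {
    { 1, 1, 1}, { 1,-1, 1}, {-1,-1, 1}, {-1, 1, 1},
    { 1, 1,-1}, { 1,-1,-1}, {-1,-1,-1}, {-1, 1,-1},
    { 1, 1, 0}, { 1,-1, 0}, {-1,-1, 0}, {-1, 1, 0},
    { 1, 0, 1}, {-1, 0, 1}, { 1, 0,-1}, {-1, 0,-1},
    { 0, 1, 1}, { 0,-1, 1}, { 0,-1,-1}, { 0, 1,-1}
};

/* Stand-in for the samplerCubeShadow lookup: takes a sampling
   direction and a reference distance, returns 0 (shadow) or 1 (lit). */
typedef float (*ShadowSampleFn)(float dx, float dy, float dz,
                                float refDistance);

/* Average 20 offset samples around the main sampling direction. */
float pcfShadowFactor(float bx, float by, float bz, float refDistance,
                      float diskRadius, ShadowSampleFn sample)
{
    float shadowFactor = 0.0f;
    for (int i = 0; i < 20; ++i)
        shadowFactor += sample(bx + gridSamplingDisk[i][0] * diskRadius,
                               by + gridSamplingDisk[i][1] * diskRadius,
                               bz + gridSamplingDisk[i][2] * diskRadius,
                               refDistance);
    return shadowFactor / 20.0f;
}

/* Trivial stub lookup: every sample is treated as lit. */
static float allLit(float dx, float dy, float dz, float refDistance)
{
    (void)dx; (void)dy; (void)dz; (void)refDistance;
    return 1.0f;
}
```

With the `allLit` stub the averaged shadow factor is exactly 1; a real lookup returns a mix of 0s and 1s, which is what softens the shadow edge.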
To sample only the required directions it's possible to calculate tangent vectors that determine the base axes for the offset vectors. First we need a vector perpendicular to the main sampling direction; let's consider it the X axis for offsets. It can be determined as the cross product of the main sampling direction and an arbitrary vector (not parallel to the main sampling direction). Then we can calculate a vector for the Y offset axis that is perpendicular both to the main sampling direction and to the X offset axis. These two perpendicular vectors can be used as the base X and Y axes for offsets. You can even precompute a 2D kernel (grid, Poisson, randomly rotated, etc.) of offsets, as for a two-dimensional shadow map. The following code snippet shows how a 2D offset is applied along the 3D axes:
vec3 sampleDir = mainSampleDir + xAxis * kernel[i].x + yAxis * kernel[i].y;
The offset can additionally be multiplied by some radius of deviation. The larger the radius, the larger the distances between samples from the shadow cube map. The ideal radius is one at which each sampling operation samples unique texels from the shadow cube map, while still being small enough to prevent sampling texels that are too far away from the central texel (the one sampled with the main sampling direction). In some cases a large PCF filter radius can also create an interesting effect, as in the following image.


To improve the results of shadow mapping for point light sources you can use the same methods as for other types of light sources; one of the previous tutorials shows how to increase the quality of shadow mapping.
Sun and Black Cat, Igor Dykhta © 2007-2014