[25 Jul 2012] Advanced lighting models
The Blinn-Phong lighting model lacks realism. Its main problem is that it ignores the surrounding environment: it simulates ambient lighting with a constant intensity and color, it doesn't account for the change in color of light rays after multiple reflections from objects in a scene, it doesn't support area light sources, etc. This tutorial shows how to use interactive methods to approximate ambient, diffuse and specular lighting more accurately than the Blinn-Phong lighting model does. It covers the following lighting models: hemisphere lighting, image-based lighting and spherical harmonics.
Hemisphere lighting
You can use hemisphere lighting as an additional approximation of ambient or diffuse lighting in scenes that have two distinct lighting colors. Imagine a scene with a cloudy sky and grass. The sky illuminates all objects with gray light from above, and the gray light rays also reflect from the grass and light the objects with green from below. The simple Blinn-Phong model uses only a constant color and can't reproduce such an effect. The image below shows an example of hemisphere lighting, where one color represents the sky and the second color represents the grass.


The color of each fragment in hemisphere lighting depends only on the world-space normal at the fragment. If the normal points toward the upper hemisphere (the angle between them is 0°), the fragment takes the color of the sky (only from the upper hemisphere). If the normal points in the opposite direction, toward the lower hemisphere (180°), the fragment takes the color of the grass (only from the lower hemisphere). In other cases, the color is an interpolation between the colors of the upper and lower hemispheres, depending on how much the normal is oriented toward each of them. If the normal is perpendicular to the direction to the upper hemisphere (90°), the fragment's color is half the color of the sky and half the color of the grass.
The main problem of hemisphere lighting is the absence of self-shadowing. Parts of the model that are oriented toward the upper hemisphere but are located at the bottom of the object are still lit with the color of the upper hemisphere, even though other parts of the object lie between them and the upper hemisphere.
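The color blend described above can be sketched numerically. This is a minimal pure-Python illustration with hypothetical sky and ground colors; the real computation runs per vertex in the shader that follows.

```python
# Numeric sketch of the hemisphere lighting blend (example colors assumed).

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def hemisphere_color(normal, up, sky_color, ground_color):
    n = normalize(normal)
    u = normalize(up)
    n_dot_l = sum(a * b for a, b in zip(n, u))  # cosine between N and up
    t = n_dot_l * 0.5 + 0.5                     # remap [-1, 1] to [0, 1]
    # interpolate: t = 0 -> ground color, t = 1 -> sky color
    return tuple(g + (s - g) * t for g, s in zip(ground_color, sky_color))

sky = (0.6, 0.6, 0.65)    # gray sky
ground = (0.2, 0.5, 0.1)  # green grass
up_facing = hemisphere_color((0.0, 1.0, 0.0), (0.0, 1.0, 0.0), sky, ground)
side_facing = hemisphere_color((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), sky, ground)
print(up_facing)    # pure sky color
print(side_facing)  # halfway between sky and ground
```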
Vertex shader for hemisphere lighting:
#version 330
// attributes
layout(location = 0) in vec3 i_position; // xyz - position
layout(location = 1) in vec3 i_normal; // xyz - normal
// matrices
uniform mat4 u_modelViewProjectionMat;
uniform mat3 u_normalMat;
// parameters of hemisphere lighting
uniform vec3 u_topHemisphereDirection;
uniform vec3 u_skyColor;
uniform vec3 u_groundColor;
// data for fragment shader
out vec3 o_color;
///////////////////////////////////////////////////////////////////
void main(void)
{
// normal to world space
vec3 N = normalize(u_normalMat * i_normal);
// cosine between normal and direction to upper hemisphere
// +1 - normal is oriented to upper hemisphere
// -1 - normal is oriented to lower hemisphere
float NdotL = dot(N, u_topHemisphereDirection);
// from [-1, 1] to [0, 1] range
float lightInfluence = NdotL * 0.5 + 0.5;
// interpolate colors from upper and lower hemispheres
o_color = mix(u_groundColor, u_skyColor, lightInfluence);
// vertex to screen space
gl_Position = u_modelViewProjectionMat * vec4(i_position, 1);
}
Fragment shader for hemisphere lighting:
#version 330
// color from vertex shader
in vec3 o_color;
// color to framebuffer
out vec4 resultingColor;
void main(void)
{
resultingColor.xyz = o_color;
resultingColor.a = 1;
}
Image-based lighting
Information about the intensity and color of lighting around an object can be saved to a cube texture. A cube texture can store information about all lighting around the object: multiple light sources, light sources of different sizes and shapes, the colors of all objects visible from the object's location, and objects that occlude light sources. The standard Blinn-Phong lighting model can't take most of these details into account, while in image-based lighting only one or two samples from a cube texture are required to get the color of the lighting. The following image shows the result of image-based lighting:


In image-based lighting, you sample the cube textures with the normal (N) for diffuse and ambient lighting, and with the view vector (V) reflected with respect to the normal (N) for specular lighting.
Different types of lighting require different cube textures with lighting information. In general, you can use one cube texture for ambient, diffuse and specular lighting, but for better results you can use a separate lighting cube texture for each lighting component.
You can acquire cube textures from photos of a real environment or by rendering a scene to a cube texture. But you need to prepare these cube textures before they can be used as lighting cube textures. Correct diffuse lighting at each point should take into account light rays from all directions of the hemisphere oriented along the normal at that point. But cube textures (created from photos or by rendering to a texture) contain the precise color and intensity from each direction, and don't take into account the other colors and intensities from the hemisphere. The task is to find the color and intensity of lighting for each texel of the lighting cube texture, taking into account half of the texels of the initial cube texture.


However, it is impossible to do this in real time in a shader (you would have to sample too many values from the cube texture). Instead, you have to calculate a blurred cube texture from the initial cube texture. Each texel of the blurred cube texture is calculated as the average of texels from a part of the initial cube texture. Examples of blurred cube textures are shown in the following image. Such blurred textures are good for diffuse and ambient lighting. To preserve more details in diffuse lighting, you can take into account only part of the full hemisphere of values from the initial cube texture for each texel of the blurred cube map; for example, only texels that deviate from the normal by less than 45 degrees. For specular lighting you have to take into account even fewer values around each texel, not all texels from the hemisphere, as specular lighting is formed only from reflected light rays (diffuse lighting scatters light in random directions). You can use texels with a maximum deviation of 1-10 degrees to simulate different levels of glossiness of specular lighting. Also, different materials might require different levels of blur for the lighting cube textures, depending on whether the material has sharp reflections or blurs them.
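The averaging step above can be sketched as follows. Instead of a real cube texture, the environment here is a simplified list of (direction, color) samples, and all names and values are illustrative, not a real graphics API:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blurred_value(out_dir, samples, max_angle_deg):
    """Average the samples whose direction deviates from out_dir by less
    than max_angle_deg, weighting each by the cosine of its deviation."""
    cos_cutoff = math.cos(math.radians(max_angle_deg))
    total = [0.0, 0.0, 0.0]
    weight_sum = 0.0
    for direction, color in samples:
        w = dot(out_dir, direction)  # cosine of deviation from out_dir
        if w >= cos_cutoff:          # sample lies inside the cone
            for i in range(3):
                total[i] += color[i] * w
            weight_sum += w
    if weight_sum == 0.0:
        return (0.0, 0.0, 0.0)
    return tuple(t / weight_sum for t in total)

s = math.sqrt(0.5)
env = [((0.0, 1.0, 0.0), (1.0, 1.0, 1.0)),  # texel straight up: white
       ((s, s, 0.0), (1.0, 0.0, 0.0))]      # texel tilted 45 degrees: red
narrow = blurred_value((0.0, 1.0, 0.0), env, 40.0)  # sees only the white texel
wide = blurred_value((0.0, 1.0, 0.0), env, 60.0)    # sees both texels
print(narrow, wide)
```

Widening the cone pulls in more of the environment, which is exactly the difference between a sharp specular lookup and a diffuse one.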
A separate tutorial shows how to blur a cube texture with a shader, taking into account the full hemisphere of texels or a smaller part of the texels along the normal. That method borrows a lot from the Gaussian filter used for 2D blur.
As blurred cube textures don't contain high-frequency details, the blurred cube textures can be smaller than the original cube textures. For ambient lighting you can use textures with a size of 8-16 texels, and for diffuse lighting 32-64 texels. The size of the lighting cube maps for specular lighting should be sufficient to preserve the required level of detail; for example, a size of 128 texels is usually enough.
Image-based lighting is good for static scenes, where you can calculate the lighting cube textures once and they won't change over time. For dynamic objects, you have to update the lighting cube textures after each movement of any object, but in most cases you can't blur the original cube textures in real time. In many cases the lighting cube textures are updated only after significant changes in the scene, or calculated in a separate process, or the blur takes into account only a small part of the cube texture instead of the whole.
Vertex shader for image-based lighting:
#version 330
// attributes
layout(location = 0) in vec3 i_position; // xyz - position
layout(location = 1) in vec3 i_normal; // xyz - normal
layout(location = 2) in vec2 i_texcoord0; // xy - texture coords
// matrices
uniform mat4 u_modelMat;
uniform mat4 u_viewProjMat;
uniform mat3 u_normalMat;
// position of camera
uniform vec3 u_cameraPosition;
// data for fragment shader
out vec3 o_normal;
out vec3 o_cameraVector;
///////////////////////////////////////////////////
void main(void)
{
// position in world space
vec4 worldPosition = u_modelMat * vec4(i_position, 1);
// normal in world space
o_normal = normalize(u_normalMat * i_normal);
// direction to camera
o_cameraVector = normalize(u_cameraPosition - worldPosition.xyz);
// screen space position of vertex
gl_Position = u_viewProjMat * worldPosition;
}
Fragment shader for image-based lighting:
#version 330
// data from vertex shader
in vec3 o_normal;
in vec3 o_cameraVector;
// lighting cube textures
uniform samplerCube u_ambientTexture;
uniform samplerCube u_diffuseTexture;
uniform samplerCube u_specularTexture;
// color to framebuffer
out vec4 resultingColor;
// different lighting components
const float ambientIntensity = 0.4;
const float diffuseIntensity = 0.6;
const float specularIntensity = 0.5;
//////////////////////////////////
void main(void)
{
vec3 N = normalize(o_normal);
// view vector reflected with respect to normal
vec3 R = normalize(reflect(-normalize(o_cameraVector), N));
// get colors and intensities for lighting components
vec3 ambLighting = texture(u_ambientTexture, N).rgb;
vec3 difLighting = texture(u_diffuseTexture, N).rgb;
vec3 speLighting = texture(u_specularTexture, R).rgb;
// combine lighting components
resultingColor.xyz = ambLighting * ambientIntensity + difLighting * diffuseIntensity + speLighting * specularIntensity;
resultingColor.a = 1;
}
Spherical harmonics
Spherical harmonics can be used to approximate diffuse and ambient lighting. This is a real-time method that raises the quality of lighting to a level similar to image-based lighting (taking into account the full hemisphere of values) without using cube textures with saved lighting. The difference between the results of spherical harmonics and diffuse lighting in image-based rendering is negligible. The parameters of the spherical harmonics can be computed from the original (not blurred) cube texture and saved. An example of lighting with spherical harmonics is shown in the following image.


Spherical harmonics represent a projection of lighting into frequency space on a sphere. You can treat this method as an analytical BRDF whose coefficients determine the amount of lighting from different directions of space. The final amount of lighting for each direction of space is equal to a combination of the values of the basis functions of the spherical harmonics with respect to that direction. The basis functions of spherical harmonics form bands; the larger the band index, the more high-frequency detail that band can approximate. The following image shows the original, not blurred cube texture, which we want to use as the source of diffuse lighting (the first sphere from the bottom), the first three bands of spherical harmonics, and the combination of these bands (the first sphere from the top), which is the approximation of the diffuse lighting.


The band with index 0 contains only one basis function, which represents a constant value. This basis function doesn't depend on orientation in space and is analogous to constant ambient lighting in the Blinn-Phong lighting model. On the image this is the second sphere from the bottom, colored gray.
The band with index 1 contains three linear basis functions. Each basis function is oriented along one of the X, Y or Z main axes. During lighting calculations, the basis function oriented along the Y axis, for example, determines whether the lighting from the +Y direction is greater than the lighting from the -Y direction. Usually this value is positive and near 1, as most scenes are lit from the top. On the image, band 1 is shown as three spheres (third row from the bottom). You can see that each basis function determines the color along its respective axis.
The band with index 2 contains five quadratic basis functions. These functions aren't oriented along the main axes, but rather along parts of space determined by combinations of the main axes. On the image, this band is represented by five spheres (the fourth row from the bottom).
Bands with higher indices capture more high-frequency details than the band with index 2.
Spherical harmonics (or the parts of space into which the spherical harmonics divide space) can be visualized with spheres. Each sphere is scaled along the normal depending on the absolute value of the respective basis function for the current direction (from the center of the sphere to a point on the sphere). The sign of the value of the basis function determines the color of the point: red for places that become lit when the value of the basis function is positive, and blue for negative values. The following image depicts the influence zones of each basis function in the first three bands of spherical harmonics (without scaling along the normal).


The following implementation uses only the first three bands of spherical harmonics, which require 9 coefficients (one per basis function). You can calculate the coefficients from cube textures that store lighting information, or you can adjust the coefficients manually in a spherical harmonics editor. Then you can use the coefficients of the spherical harmonics in a shader to reconstruct diffuse lighting.
Spherical harmonics with the first three bands can reconstruct only low-frequency lighting, which is exactly the quality required for diffuse lighting that takes into account incident light from the full hemisphere. To simulate blurry specular lighting you can try to use spherical harmonics with more bands, which allow more details to be reconstructed from the basis functions. But for glossy specular lighting you have to use other methods, such as image-based lighting.
You can adjust the coefficients of the spherical harmonics manually. Moving the absolute value of a coefficient closer to 1 makes the positive or negative part of space (as divided by each basis function) brighter than the other parts. If a coefficient is greater than zero, the positive parts of space (red parts on the previous image) become lit. A coefficient with a value near zero doesn't add noticeable detail to either the positive or negative parts of space. Negative coefficient values brighten the negative parts of space (blue parts on the previous image).
The coefficients of the spherical harmonics are calculated separately for each RGB channel, so you need 9 vec3 values to pass the spherical harmonics to a shader. Besides the coefficients (L) of the spherical harmonics, to reconstruct lighting you need constant values (C). These constants determine the influence of the different basis functions during the reconstruction of lighting. You also need a value that scales the final reconstructed lighting (s) and the normal at each vertex (N). It's possible to reconstruct the lighting in the vertex shader, as the first three bands of spherical harmonics don't contain high-frequency details that could disappear during interpolation from the vertex to the fragment shader.
Among the advantages of spherical harmonics is that you need only a single pass over the texels of a cube texture to determine the coefficients of the spherical harmonics from that cube texture. To get similar results with image-based lighting, you would have to blur the cube texture as described in the section about image-based lighting, and that process isn't real-time. So you can calculate the coefficients of spherical harmonics a lot faster, even in real time.
Another advantage is that you don't need to store the lighting data in heavy cube textures, as you can save the coefficients of the spherical harmonics as just 27 floating-point numbers. If a shader requires a cube texture but you have only the coefficients of the spherical harmonics, you can render the spherical harmonics to a cube texture with a simple render-to-texture shader that restores the lighting from the spherical harmonics (the scene contains a sphere, and the camera is inside that sphere).
To get the coefficients of spherical harmonics from a cube texture, you have to project the value of each texel of the cube texture onto the basis functions of the spherical harmonics. You can do this numerically by evaluating the basis functions in the direction of each texel of the cube texture. A separate tutorial shows how to calculate the coefficients of spherical harmonics from a cube texture.
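The projection can be sketched numerically. The basis-function constants below are the standard ones for the first three bands of real spherical harmonics; representing texels as (direction, color, solid angle) triples is a simplification of a real cube-texture traversal:

```python
import math

def sh_basis(d):
    """First three bands (9 functions) of real spherical harmonics,
    evaluated in direction d = (x, y, z), assumed normalized."""
    x, y, z = d
    return (
        0.282095,                        # Y00
        0.488603 * y,                    # Y1-1
        0.488603 * z,                    # Y10
        0.488603 * x,                    # Y11
        1.092548 * x * y,                # Y2-2
        1.092548 * y * z,                # Y2-1
        0.315392 * (3.0 * z * z - 1.0),  # Y20
        1.092548 * x * z,                # Y21
        0.546274 * (x * x - y * y),      # Y22
    )

def project_to_sh(samples):
    """Project (direction, color, solid_angle) samples, e.g. cube-texture
    texels, onto the 9 basis functions; returns 9 RGB coefficients."""
    result = [[0.0, 0.0, 0.0] for _ in range(9)]
    for direction, color, solid_angle in samples:
        basis = sh_basis(direction)
        for i in range(9):
            for c in range(3):
                result[i][c] += color[c] * basis[i] * solid_angle
    return result

# demo: a constant white environment sampled at the 6 axis directions;
# only the constant (Y00) coefficient should remain significant
solid_angle = 4.0 * math.pi / 6.0
axes = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
        (0.0, -1.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]
coeffs = project_to_sh([(d, (1.0, 1.0, 1.0), solid_angle) for d in axes])
print(coeffs[0])
```

A real implementation would iterate over every texel of all six cube faces and use each texel's actual solid angle instead of this uniform six-sample stand-in.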
Also, spherical harmonics are very flexible and allow many operations on them. Among these operations are:
addition - add two different spherical harmonics to get a combined result
scale - linear interpolation between spherical harmonics
rotation - rotation of the coefficients. This operation can be interpreted as a rotation of the lighting. To simulate a rotation of the lighting you can also rotate the normal at each point of the surface (in the direction opposite to the required rotation of the lighting).
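The addition and scale operations act directly on the coefficient lists. A small sketch with 9 scalar coefficients per harmonic (one color channel; the coefficient values are hypothetical):

```python
# Addition and interpolation of spherical harmonics coefficient sets.

def sh_add(a, b):
    """Addition: combine two lighting environments."""
    return [x + y for x, y in zip(a, b)]

def sh_lerp(a, b, t):
    """Scale / linear interpolation between two environments."""
    return [x * (1.0 - t) + y * t for x, y in zip(a, b)]

morning = [0.8, 0.1, 0.3, 0.0, 0.0, 0.0, 0.1, 0.0, 0.0]   # hypothetical
evening = [0.4, 0.0, -0.2, 0.1, 0.0, 0.0, 0.0, 0.0, 0.1]  # hypothetical

noon_ish = sh_lerp(morning, evening, 0.5)
combined = sh_add(morning, evening)
print(noon_ish[0], combined[0])
```

Interpolating coefficient sets like this is what makes smooth transitions between lighting environments cheap at run time.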
Vertex shader that reconstructs lighting from coefficients of spherical harmonics:
#version 330
// attributes
layout(location = 0) in vec3 i_position; // xyz - position
layout(location = 1) in vec3 i_normal; // xyz - normal
// matrices
uniform mat4 u_modelViewProjMat;
uniform mat3 u_normalMat;
// data for fragment shader
out vec3 o_color;
// constants used to adjust lighting
const float C1 = 0.429043;
const float C2 = 0.511664;
const float C3 = 0.743125;
const float C4 = 0.886227;
const float C5 = 0.247708;
// scale for restored amount of lighting
uniform float u_scaleFactor;
// coefficients of spherical harmonics and possible values
uniform vec3 u_L00; // vec3(0.79, 0.44, 0.54);
uniform vec3 u_L1m1; // vec3(0.39, 0.35, 0.60);
uniform vec3 u_L10; // vec3(-0.34, -0.18, -0.27);
uniform vec3 u_L11; // vec3(-0.29, -0.06, 0.01);
uniform vec3 u_L2m2; // vec3(-0.26, -0.22, -0.47);
uniform vec3 u_L2m1; // vec3(-0.11, -0.05, -0.12);
uniform vec3 u_L20; // vec3(-0.16, -0.09, -0.15);
uniform vec3 u_L21; // vec3(0.56, 0.21, 0.14);
uniform vec3 u_L22; // vec3(0.21, -0.05, -0.30);
///////////////////////////////////////////
// restores lighting at a vertex from the normal and
// from the coefficients of the spherical harmonics
vec3 sphericalHarmonics(vec3 N)
{
return
// band 0, constant value, details of lowest frequency
C4 * u_L00 +
// band 1, oriented along main axes
2.0 * C2 * u_L11 * N.x +
2.0 * C2 * u_L1m1 * N.y +
2.0 * C2 * u_L10 * N.z +
// band 2, values depend on multiple axes, higher frequency details
C1 * u_L22 * (N.x * N.x - N.y * N.y) +
C3 * u_L20 * N.z * N.z - C5 * u_L20 +
2.0 * C1 * u_L2m2 * N.x * N.y +
2.0 * C1 * u_L21 * N.x * N.z +
2.0 * C1 * u_L2m1 * N.y * N.z;
}
void main(void)
{
// normal to world coordinates
vec3 N = normalize(u_normalMat * i_normal);
// calculate diffuse lighting as reconstruction of spherical harmonics
o_color = sphericalHarmonics(N) * u_scaleFactor;
// screen space position of a vertex
gl_Position = u_modelViewProjMat * vec4(i_position, 1);
}
Fragment shader for lighting with spherical harmonics:
#version 330
// data from vertex shader
in vec3 o_color;
// color to framebuffer
out vec4 resultingColor;
void main(void)
{
// just output color
resultingColor.rgb = o_color;
resultingColor.a = 1;
}
Each of the described lighting models can be used as a substitute for, or an addition to, the standard lighting model.
For dynamic objects, you have to update the lighting parameters depending on the position of the objects. Alternatively, you can use many textures/colors/coefficients for different parts of the scene; for example, you can use different lighting parameters for two different rooms. Before rendering, you determine the location of the object and select the lighting parameters closest to it. You can also use multiple sets of parameters at once and interpolate between them in a shader, taking into account the distance from the object to the positions bound to the parameters. For better quality, you can define a 2D or 3D grid of lighting-parameter probes and determine the lighting parameters through interpolation.
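The probe-interpolation idea can be sketched as follows. Here the lighting parameters are reduced to a single RGB ambient color per probe, blended with inverse-distance weights; probe positions and colors are hypothetical, and a real engine would blend SH coefficient sets or cube-texture choices the same way.

```python
import math

# Inverse-distance blend of per-probe lighting parameters (example values).

def interpolate_probes(position, probes):
    """probes: list of (probe_position, parameters) pairs.
    Returns a distance-weighted blend of the parameters."""
    weights = []
    for probe_pos, params in probes:
        d = math.dist(position, probe_pos)
        if d < 1e-6:
            return params  # standing on a probe: use it directly
        weights.append((1.0 / d, params))
    total = sum(w for w, _ in weights)
    blended = [0.0] * len(probes[0][1])
    for w, params in weights:
        for i, p in enumerate(params):
            blended[i] += p * (w / total)
    return blended

room_a = ((0.0, 0.0, 0.0), [0.8, 0.7, 0.6])   # warm room
room_b = ((10.0, 0.0, 0.0), [0.2, 0.3, 0.5])  # cool room
mid = interpolate_probes((5.0, 0.0, 0.0), [room_a, room_b])
print(mid)  # halfway between the rooms: an even mix
```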
The main disadvantage of all these methods is that some parts of objects are lit with colors that should be blocked by self-shadowing.
Sun and Black Cat - Igor Dykhta () © 2007-2014