[5 Jan 2013]

Simulation of reflective and transparent objects using cube textures

This article describes methods for simple simulation of reflective and transparent objects and the physical phenomena related to them. The surface of a reflective object partly or almost completely reflects the environment, and the color of a transparent object is formed by taking into account the refraction of light rays. How can we achieve interactivity in the visualization of such objects? The rendering engine has to determine what is reflected by the surface and what is visible through the object after the rays are refracted. Ray tracing methods are accurate, but they aren't interactive on current hardware. The first and easiest solution is the environmental mapping method. This method is used to simulate reflective and transparent objects and to create non-constant ambient lighting.
Environmental mapping
The environmental mapping method is based on storing the environment around the reflective/transparent object in a texture. We can use a cube texture to save the panoramic view around the object together with the views above and below it. A cube texture contains six usual square 2D textures. Each of these subtextures represents a view of the world (scene) along or against one of the main axes (X, Y, Z). The texture is called a cube texture because if we map each 2D subtexture to a face of a cube, we get the complete 3D environment. The 2D subtextures should be seamless (there shouldn't be gaps between them or a strong difference in color on the border between two adjacent textures).
We can't sample a cube texture with usual 2D texture coordinates; we need a direction vector. Imagine that the start of the vector is at the center of the coordinate system, and a cube with the cube texture applied is also placed at the center of the coordinate system. Now look from the start of the vector toward its end. Right in front of you is the pixel that will be sampled from the cube texture, so the sampled color is the color at the intersection point of the vector and the cube. It's not required to normalize the vector before sampling, but the vector should be transformed from model space to world space. For example, you can sample a cube texture with the normal at a point of a surface:
// IN OPENGL
glActiveTexture (GL_TEXTURE0); // activate first texture slot
glBindTexture (GL_TEXTURE_CUBE_MAP, cubeTexture);// and bind cube texture to it
// tell shader to use texture in slot 0 as source for "environmentTexture"
glUniform1i(glGetUniformLocation(shaderProgram,"environmentTexture"), 0);

// IN SHADER
uniform samplerCube environmentTexture; // sampler for cube texture
vec3 worldNormal = mat3(ModelMat) * modelNormal; // normal to world space
vec3 color = texture(environmentTexture, worldNormal).rgb; // sample cube texture
Unwrapped cube texture
Sampling from cube texture
The environmental mapping method assumes that all objects stored in the cube texture are infinitely far from the center of the cube. This leads to some inaccuracies, for example for objects that are close to the reflective/transparent object, but these errors usually go unnoticed by viewers. Cube textures are ideal for static scenes and for scenes where other objects are far from the reflective/transparent object. If the reflective/refractive object moves, or objects move around it, then the cube texture should be updated with the actual environment. Ideally, each reflective/transparent object in the scene should have its own cube texture (where the environment is stored relative to the center of that object), and the cube textures should be updated at every change of the environment. But this approach dramatically reduces performance, so in most cases one cube texture is used for all objects in the scene, and it's updated only after significant changes of the environment.
How to create a cube texture?

You can create a cube texture from a wide panoramic photo. Apply the panoramic photo to a sphere, place a camera inside the sphere and render the scene six times with different orientations of the camera (along the X axis, against the Y axis, etc.). Use a perspective projection with a field of view of 90 degrees and an aspect ratio of 1. Save the results to six files when rendering is finished. Then open the images that correspond to the top and bottom views in a photo editor and blur the centers of these two images if there are rendering errors (such errors are possible because a rectangular panoramic photo was wrapped around the sphere).

If you have a nice 3D scene, you can create a cube texture from it in a very similar way to the cube texture from a panoramic photo. Render the scene six times (no need for a sphere!) with different orientations of the camera and save the results. In this case there's no need to fix the top and bottom views.

Loading of cube texture
Loading a cube texture is straightforward: we load six 2D images and bind them together to create the cube map. For example, if there are six images on the hard drive named env_posx.jpg, env_negx.jpg, env_posy.jpg, env_negy.jpg, env_posz.jpg and env_negz.jpg:
GLuint texture; // id of the cube texture (e.g. a member of the rendering class)

void loadCubeTexture(const QString path){ // function for loading of cube texture
    // path - in this example it is "env.jpg"

    QString pre = path.mid(0, path.lastIndexOf('.')); // path to the images without extension

    QString ext = path.mid(path.lastIndexOf('.') + 1, 3); // extension of the images

    GLenum cubesides[6] = { // faces of cube texture
        GL_TEXTURE_CUBE_MAP_POSITIVE_X,
        GL_TEXTURE_CUBE_MAP_NEGATIVE_X,
        GL_TEXTURE_CUBE_MAP_POSITIVE_Y,
        GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
        GL_TEXTURE_CUBE_MAP_POSITIVE_Z,
        GL_TEXTURE_CUBE_MAP_NEGATIVE_Z
    };

    QString cubepaths[6] = { // names of the image files that correspond to cube map faces
        pre + "_posx." + ext,
        pre + "_negx." + ext,
        pre + "_posy." + ext,
        pre + "_negy." + ext,
        pre + "_posz." + ext,
        pre + "_negz." + ext
    };

    // create new cube texture and set its parameters; very similar to a 2D texture
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_CUBE_MAP, texture);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    // mipmaps are generated with glGenerateMipmap() below, so use a mipmap min filter
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    for(short i=0;i<6;i++){ // for each face of the cube map
        QImage imageToConvert;
        if(!imageToConvert.load(cubepaths[i])){ // load image to Qt
            qDebug()<<"Cube face texture loading failed: "+cubepaths[i];
            return;
        }
        // Convert Qt image to OpenGL format
        QImage GL_formatted_image = QGLWidget::convertToGLFormat(imageToConvert);
        if(GL_formatted_image.isNull()){
            qDebug()<<"Cube face - fail convert to gl format: "+path;
            return;
        }
        // Copy contents of the formatted image to the OpenGL texture
        glTexImage2D (cubesides[i], 0, GL_RGBA, GL_formatted_image.width(),
            GL_formatted_image.height(), 0, GL_RGBA,
            GL_UNSIGNED_BYTE, GL_formatted_image.bits());
    }

    glGenerateMipmap(GL_TEXTURE_CUBE_MAP);
    glBindTexture(GL_TEXTURE_CUBE_MAP, 0); // unbind texture after creation

}
Rendering to cube texture

You need to perform three steps to render to cube texture:

 1) create cube texture and allocate memory for each side of the cube texture

 2) create framebuffer object with depth renderbuffer and one color attachment

 3) render scene to each side of the cube texture

GLuint texCube, fb, drb; // ids of cube texture, framebuffer and depth renderbuffer
int size = 256; // size of the cube map

//////////////////////////////
// CREATE EMPTY CUBEMAP //////
//////////////////////////////

glGenTextures(1, &texCube);
glBindTexture(GL_TEXTURE_CUBE_MAP, texCube);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// allocate space for each side of the cube map
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, GL_RGBA, size,
             size, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_X, 0, GL_RGBA, size,
             size, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// ... same calls for other 4 sides of cubemap

/////////////////////////////////////////
// CREATE FRAMEBUFFER ///////////////////
/////////////////////////////////////////

glGenFramebuffers(1, &fb); // generate new frame buffer
glBindFramebuffer(GL_FRAMEBUFFER, fb); // and bind it as active framebuffer
glGenRenderbuffers(1, &drb); // generate new render buffer for depth buffer,
glBindRenderbuffer(GL_RENDERBUFFER, drb);// bind it as renderbuffer and set parameters
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, size, size);
//Attach one of the faces of the cubemap texture to current framebuffer
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_CUBE_MAP_POSITIVE_X, texCube, 0);
//Attach depth buffer to framebuffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, drb);
// set current draw buffer
glDrawBuffer(GL_COLOR_ATTACHMENT0);

//Check if current configuration of framebuffer is correct
GLenum status;
status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
switch(status){
    case GL_FRAMEBUFFER_COMPLETE:
        break;
    default:
        qDebug()<<"bad framebuffer!";
        break;
}

// set default framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);

//////////////////////////////////
// RENDER SCENE TO CUBE TEXTURE //
//////////////////////////////////
glViewport(0, 0, size, size); // set size of the viewport as size of cube map
// HERE: set projection matrix: perspective, 90 FOV, 1 as aspect ratio
// HERE: set view matrix, look in positive direction of X axis
glBindFramebuffer(GL_FRAMEBUFFER, fb); // bind FBO to render to the texture

// render to positive X face
// bind cube map face to current framebuffer as render target
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_CUBE_MAP_POSITIVE_X, texCube, 0);
renderCurrentScene(); // should clear color and depth buffers, then draw the scene

// HERE: same rendering to other five faces with appropriate view matrices

// restore projection matrix for normal rendering
// restore view matrix for normal rendering
// restore previous viewport
glBindFramebuffer(GL_FRAMEBUFFER, 0); // unbind FBO, set default framebuffer
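
The "HERE:" comments above elide the camera setup. The six view matrices follow the usual cube map face orientations (note the mostly downward-pointing up vectors). Below is a minimal sketch of that setup, assuming Qt's QMatrix4x4/QVector3D and a hypothetical setCameraMatrices() helper that uploads the matrices to the shader program:

// Sketch only: projection and view matrices for rendering to all six cube map faces
QMatrix4x4 proj;
proj.perspective(90.0f, 1.0f, 0.1f, 100.0f); // 90 degrees FOV, aspect ratio 1

QVector3D eye(0, 0, 0); // center of the cube map (position of the reflective object)
struct { QVector3D dir, up; } faces[6] = {
    { { 1, 0, 0}, {0,-1, 0} }, // GL_TEXTURE_CUBE_MAP_POSITIVE_X
    { {-1, 0, 0}, {0,-1, 0} }, // GL_TEXTURE_CUBE_MAP_NEGATIVE_X
    { { 0, 1, 0}, {0, 0, 1} }, // GL_TEXTURE_CUBE_MAP_POSITIVE_Y
    { { 0,-1, 0}, {0, 0,-1} }, // GL_TEXTURE_CUBE_MAP_NEGATIVE_Y
    { { 0, 0, 1}, {0,-1, 0} }, // GL_TEXTURE_CUBE_MAP_POSITIVE_Z
    { { 0, 0,-1}, {0,-1, 0} }  // GL_TEXTURE_CUBE_MAP_NEGATIVE_Z
};

glBindFramebuffer(GL_FRAMEBUFFER, fb);
glViewport(0, 0, size, size);
for(int i = 0; i < 6; i++){
    QMatrix4x4 view;
    view.lookAt(eye, eye + faces[i].dir, faces[i].up);
    // cube map face enums are consecutive, so POSITIVE_X + i selects the i-th face
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, texCube, 0);
    setCameraMatrices(proj, view); // hypothetical: pass matrices to the shader program
    renderCurrentScene();          // clears color/depth and draws the scene
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);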
Reflective objects
The color of a reflective object is formed from the color of the object's material and from the color of the environment reflected in the object. In the case of mirror, chrome or similar materials, the color of the object is almost fully formed by the color of the reflected environment. In the environmental mapping method the reflected color is calculated in the following way. We have a normal at a point of the surface, and we have a vector from the camera position to that point (the view vector). Calculate the reflected vector with the GLSL function reflect(I, N), where the incident vector I is our view vector V, and the normal vector N is the normal at the point of the surface (calculation of the reflected vector). The light ray that is reflected by the object at the current point of the surface is the same as the R vector (as shown in the previous image). With the reflected vector R we can sample a color from the cube texture, and then mix the reflected color with the diffuse color of the object. As you can see in the following image, the reflective environment mapping effect requires smooth normals and curved, detailed surfaces. The sphere reflects the environment quite well, but the cat has strange reflections because it has sharp edges. The environment mapping method cannot reproduce multiple reflections, e.g. when an object reflects its own reflection in another object (actually it can, but you would have to render to cube textures many times). We can control the reflectivity of the object with a special texture, a reflectivity map, which defines which places on the object are reflective and which aren't. For example, parts of the mesh that correspond to white color in the texture are fully reflective, and where the color is gray, the surface is only partly reflective.
vec3 reflectedVector = reflect(view, normal);
vec3 reflectedColor = texture(environmentTexture, normalize(reflectedVector)).rgb;
Reflective object
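The reflectivity map mentioned above can then drive the blend between the diffuse color and the reflected color. A minimal GLSL sketch, where reflectivityTexture, v_texCoord and diffuseColor are hypothetical names not used elsewhere in this article:

float reflectivity = texture(reflectivityTexture, v_texCoord).r; // white = fully reflective
vec3 surfaceColor = mix(diffuseColor, reflectedColor, reflectivity);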
Transparent objects
The color of a transparent object is formed from the object's material color and from the color of the environment that is visible through the object. To calculate the color that is visible through the object, we have to calculate the refracted ray and use it to sample the cube map. You can calculate the refracted vector with the help of Snell's law. In GLSL these calculations are implemented in the refract(I, N, IOR) function. The incident vector I is our view vector V (from the camera to the point on the surface), and the normal N is the normal at the point of the surface. IOR (index of refraction) is the ratio of the refractive index of the first medium to the refractive index of the second medium (more info about calculation of the refracted vector). The refract() function returns a zero vector (with 0 length) if there is total internal reflection. Next, you can sample the cube texture with the refracted vector to get the refracted color (as shown in the previous image), which is the color visible through the object. Only one refraction of the light ray is taken into account, as environmental mapping is only a simple interactive simulation; more precise simulations take multiple refractions into account, e.g. when the ray enters the object, when it exits the object, and so on. The quality of refractive environmental mapping depends on the smoothness of the normals of the model in the same way as for reflections. And as for reflections, it's possible to use a special texture (a refractivity map) to control the refractivity of the object.
vec3 refractedVector = refract(view, normal, IOR);
vec3 refractedColor = texture(environmentTexture, normalize(refractedVector)).rgb;
Transparent object
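If refract() returns the zero vector (total internal reflection), sampling the cube texture with it gives a meaningless color, so one simple option is to fall back to the reflected color from the previous section. A sketch of such a check:

if(length(refractedVector) < 0.001){ // total internal reflection: refract() returned the zero vector
    refractedColor = reflectedColor;
}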
Reflected and refracted vectors can be calculated in the vertex or in the fragment shader. The trade-off is the same as for per-vertex and per-fragment lighting: calculations in the vertex shader give lower quality but better performance, while calculations in the fragment shader give better quality but a lower FPS. In most cases environmental mapping is calculated in the vertex shader, as the method is only a rough approximation anyway.
Fresnel coefficient
A transparent object is more transparent when the angle between the surface normal and the view vector is small than when this angle is large. Consider the surface of water: when we look from above (the view vector is parallel to the normal of the surface), we can see the bottom through the water, but when we look toward the horizon (the view vector is nearly perpendicular to the normal), we see only reflections of the sky. This is due to the fact that light is partly reflected and partly refracted when it reaches the border between two media. The ratio of the reflected part of the light to the total light intensity is called the Fresnel coefficient. With this value we can linearly interpolate between the reflected and the refracted color. A precise calculation of this value is complicated, but we can approximate it for our simple simulation. We can use any formula that gives a high value when the view and normal vectors are perpendicular, and a low value when they are parallel. For example:
Simple approximation of Fresnel coefficient
Approximation of Fresnel coefficient with some additional parameters

Results for the first and second formulas:

Fresnel coefficient. White color indicates that only refracted color will be used
Linear interpolation of reflected and refracted colors
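For reference, one possible GLSL form of such an approximation (a sketch only; the exponent is an arbitrary tuning parameter). It follows the definition above: the value is high at grazing angles, where reflection dominates:

// view: from camera to surface, normal: surface normal (both normalized)
float fresnel = pow(1.0 - clamp(-dot(view, normal), 0.0, 1.0), 3.0);
vec3 color = mix(refractedColor, reflectedColor, fresnel); // grazing angles -> mostly reflection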
Chromatic dispersion
Chromatic dispersion of light is the basis of the phenomenon of white light splitting into a rainbow (a dispersive prism). Dispersion can be described in the following way: light waves with different frequencies have different indices of refraction, e.g. blue light waves refract more strongly than red ones. OpenGL uses the RGB color system, so there are only three different frequencies of light: red, green and blue (in real life there is an infinite number of frequencies). Simulation of chromatic dispersion is easy: calculate three refracted vectors, each with a different index of refraction, and sample three colors from the cube texture with these vectors. We get three different colors that correspond to the red, green and blue light waves. The final color is calculated as in the following code snippet:
vec3 Tr = refract(I, N, IOR_red);
vec3 Tg = refract(I, N, IOR_green);
vec3 Tb = refract(I, N, IOR_blue);
vec3 finalColor;
finalColor.r = texture(u_envTexture, normalize(Tr)).r;
finalColor.g = texture(u_envTexture, normalize(Tg)).g;
finalColor.b = texture(u_envTexture, normalize(Tb)).b;
Example: the background is black and white, but due to chromatic dispersion the transparent objects show more colors
Chromatic dispersion in action
Chromatic dispersion with high index of refraction for green light wave
How to use a cube texture as a source of light color
Another application of cube textures in environmental mapping is to use them as a source of the color and intensity of ambient lighting. Sample a color from the cube texture with the normal vector of the surface, and use this color as the color or intensity of the ambient lighting. In the following image you can see the result of mixing the sampled color with the model's diffuse color.
If you like such ambient lighting and want to use it, you should decrease the size of the cube texture to 8x8 or 16x16 pixels. Such textures don't contain small high-frequency details, and transitions between different light colors are smooth. Use special software to create smaller cube textures, like CubeMapGen from AMD. If you simply decrease the size of a cube texture in a photo editor, the cube texture will contain different colors on the borders between adjacent 2D subtextures. The last image depicts a cube map with reduced size; it contains neither small details nor gaps on the borders.
vec3 ambientColor = texture(ambientTexture, normalize(normalVector)).rgb;
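And a sketch of mixing this ambient term with the model's diffuse color, where materialDiffuse and ambientStrength are hypothetical values:

float ambientStrength = 0.6; // hypothetical tuning value
vec3 litColor = materialDiffuse * ambientColor * ambientStrength; // modulate diffuse color by sampled ambient light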
Color and intensity of ambient lighting are taken from cube texture
Decreased cube texture
  • Vertex Shader

    #version 330

    layout(location = 0) in vec3 i_position;
    layout(location = 1) in vec3 i_normal;

    uniform mat4 u_model_mat;
    uniform mat4 u_viewProj_mat;
    uniform vec3 u_camera_position;

    out vec3 v_directionToCamera;
    out vec3 v_normalVector;

    ///////////////////////////////////////////////////////////////////
    ///////////////////////////////////////////////////////////////////

    void main(void){
       vec4 worldPos = u_model_mat * vec4(i_position,1.0f);
       v_normalVector = normalize(vec3(u_model_mat * vec4(i_normal, 0.0f)));
       v_directionToCamera = normalize(u_camera_position - vec3(worldPos));
       gl_Position = u_viewProj_mat * worldPos;
    }
  • Fragment Shader

    #version 330

    layout(location = 0) out vec4 o_FragColor;

    in vec3 v_directionToCamera;
    in vec3 v_normalVector;

    uniform samplerCube u_environmentTexture;

    ///////////////////////////////////////////////////////////////////
    ///////////////////////////////////////////////////////////////////

    void main(void){
       vec3 N = normalize(v_normalVector);
       vec3 V = normalize(-v_directionToCamera);

       vec3 ambientColor = texture(u_environmentTexture, N).rgb;

       //////////////////////////////////////////////////////////////

       vec3 reflectedVector = reflect(V, N);
       vec3 reflectedColor = texture(u_environmentTexture, reflectedVector).rgb;

       //////////////////////////////////////////////////////////////

       float IOR = 0.8f;
       float offset = 0.05f;
       vec3 Tr = refract(V, N, IOR + offset);
       vec3 Tg = refract(V, N, IOR);
       vec3 Tb = refract(V, N, IOR - offset);

       vec3 refractedColor;
       refractedColor.r = texture(u_environmentTexture, Tr).r;
       refractedColor.g = texture(u_environmentTexture, Tg).g;
       refractedColor.b = texture(u_environmentTexture, Tb).b;

       //////////////////////////////////////////////////////////////
       //////////////////////////////////////////////////////////////

       // weight of the refracted color: 1 when looking along the normal,
       // 0 at grazing angles (one minus the Fresnel reflection term)
       float fresnel = clamp(-dot(N, V), 0.0f, 1.0f);
       if(length(Tb) < 0.5f){ // total internal reflection
          fresnel = 0.0f;     // use only the reflected color
       }
       fresnel = pow(fresnel, 1.55f);

       vec3 col = reflectedColor * (1.0f - fresnel) + refractedColor * fresnel;

       //////////////////////////////////////////////////////////////
       //////////////////////////////////////////////////////////////

       o_FragColor = vec4(col, 1.0f);
    }


