LIGHTING AND ENVIRONMENT MAPPING WITH GLSL

In this post we will expand on our skybox project by adding an object to our scene for which we will evaluate lighting contributions and environment mapping. We will first make a quick edit to our Wavefront OBJ loader to utilize OpenGL's vertex buffer objects. Once we can render an object we will create a shader program to evaluate the lighting and reflections. Below are a couple of screen grabs of the final result.

A rendering of a teapot with lighting and environment mapping.

A rendering of a dragon with lighting and environment mapping.

A couple of video captures are below.

The relevant modifications to our Wavefront OBJ loader are below. These methods expect the OBJ file to specify normals and to define its faces as triangles. The setupBufferObjects method is just a quick way to load our vertices and normals into a vertex buffer object once our OpenGL context has been created. We've defined a structure, v, padded to 32 bytes, to store our data. We have a render method to render our model. Note that we have specified a byte offset for the normals in the glVertexAttribPointer call.

struct v {
    GLfloat x, y, z;     // position
    GLfloat nx, ny, nz;  // normal
    GLfloat padding[2];  // pad each record to 32 bytes
};
 
void cObj::setupBufferObjects() {
    int size = faces.size();
    v *vertices_ = new v[size*3];
    unsigned int *indices_ = new unsigned int[size*3];
    // Expand each triangular face into three interleaved position/normal records.
    for (int j = 0, i = 0; i < size; i++) {
        for (int k = 0; k < 3; k++, j++) {
            vertices_[j].x  = vertices[faces[i].vertex[k]].v[0];
            vertices_[j].y  = vertices[faces[i].vertex[k]].v[1];
            vertices_[j].z  = vertices[faces[i].vertex[k]].v[2];
            vertices_[j].nx = normals[faces[i].normal[k]].v[0];
            vertices_[j].ny = normals[faces[i].normal[k]].v[1];
            vertices_[j].nz = normals[faces[i].normal[k]].v[2];
            indices_[j]     = j;
        }
    }
 
    glGenBuffers(1, &vbo_vertices);
    glBindBuffer(GL_ARRAY_BUFFER, vbo_vertices);
    glBufferData(GL_ARRAY_BUFFER, size*3*sizeof(v), vertices_, GL_STATIC_DRAW);
 
    glGenBuffers(1, &vbo_indices);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo_indices);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, size*3*sizeof(unsigned int), indices_, GL_STATIC_DRAW);
     
    delete [] vertices_;
    delete [] indices_;
}
 
void cObj::render(GLint vertex, GLint normal) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo_vertices);
    glEnableVertexAttribArray(vertex);
    glVertexAttribPointer(vertex, 3, GL_FLOAT, GL_FALSE, sizeof(v), 0);
    glEnableVertexAttribArray(normal);
    glVertexAttribPointer(normal, 3, GL_FLOAT, GL_FALSE, sizeof(v), (const GLvoid *)(3 * sizeof(GLfloat))); // normals begin 12 bytes into each record
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo_indices);
    glDrawElements(GL_TRIANGLES, faces.size()*3, GL_UNSIGNED_INT, 0);
}
 
void cObj::releaseBufferObjects() {
    glDeleteBuffers(1, &vbo_indices);
    glDeleteBuffers(1, &vbo_vertices);
}
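
A minimal usage sketch follows; the cObj constructor, the OBJ file name, and the glProgram handle are assumptions here, and the attribute names match the vertex shader below.

cObj model("model.obj");                 // assumed constructor that parses the OBJ file
model.setupBufferObjects();              // call once the OpenGL context exists
GLint vertex = glGetAttribLocation(glProgram, "vertex");
GLint normal = glGetAttribLocation(glProgram, "normal");
model.render(vertex, normal);            // per frame
model.releaseBufferObjects();            // on shutdown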

We have also created a function, createProgram, to build our shader program, as discussed in the previous post on rendering a skybox.

void createProgram(GLuint& glProgram, GLuint& glShaderV, GLuint& glShaderF, const char* vertex_shader, const char* fragment_shader) {
    glShaderV = glCreateShader(GL_VERTEX_SHADER);
    glShaderF = glCreateShader(GL_FRAGMENT_SHADER);
    const GLchar* vShaderSource = loadFile(vertex_shader);
    const GLchar* fShaderSource = loadFile(fragment_shader);
    glShaderSource(glShaderV, 1, &vShaderSource, NULL);
    glShaderSource(glShaderF, 1, &fShaderSource, NULL);
    delete [] vShaderSource;
    delete [] fShaderSource;
    glCompileShader(glShaderV);
    glCompileShader(glShaderF);
    glProgram = glCreateProgram();
    glAttachShader(glProgram, glShaderV);
    glAttachShader(glProgram, glShaderF);
    glLinkProgram(glProgram);
    glUseProgram(glProgram);
 
    int  vlength,    flength,    plength;
    char vlog[2048], flog[2048], plog[2048];
    glGetShaderInfoLog(glShaderV, 2048, &vlength, vlog);
    glGetShaderInfoLog(glShaderF, 2048, &flength, flog);
    glGetProgramInfoLog(glProgram, 2048, &plength, plog);
    std::cout << vlog << std::endl << std::endl << flog << std::endl << std::endl << plog << std::endl << std::endl;
}
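
Calling it might look like the sketch below; the shader file names are placeholders, and the cleanup calls are standard OpenGL.

GLuint glProgram, glShaderV, glShaderF;
createProgram(glProgram, glShaderV, glShaderF, "object.vert", "object.frag");
// ... render loop ...
glDetachShader(glProgram, glShaderV);    // cleanup on shutdown
glDetachShader(glProgram, glShaderF);
glDeleteShader(glShaderV);
glDeleteShader(glShaderF);
glDeleteProgram(glProgram);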

Now we can discuss our lighting model. It will be the sum of four contributions: emissive, ambient, diffuse, and specular.

\[ \vec{color}_{final} = \vec{color}_{emissive} + \vec{color}_{ambient} + \vec{color}_{diffuse} + \vec{color}_{specular} \tag{1} \]

The emissive color is simply the color emitted by our object.

\[ \vec{color}_{emissive} = \vec{emissivecolor} \cdot emissivecontribution \tag{2} \]

The ambient term is independent of the location of the light source, but does depend on the object's materials. Think of the material as reflecting ambient light. If our light source is white and the object is green, clearly it should only reflect the green portion of the light spectrum.

\[ \vec{color}_{ambient} = \vec{material}_{ambient} \circ \vec{ambientcolor} \cdot ambientcontribution \tag{3} \]
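
Here $\circ$ denotes the componentwise (Hadamard) product. For the white light and green material just described,

\[ (0, 1, 0) \circ (1, 1, 1) = (0, 1, 0) \]

so only the green channel is reflected.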

The diffuse term depends on the object's materials, the color of the diffuse light, and also the location of the light source. We will have two vectors at the location we are evaluating, a normal vector and a vector towards the light source. We will use the inner product of these two vectors.

\[ \vec{a} \cdot \vec{b} = |\vec{a}| |\vec{b}| \cos\theta \tag{4} \]

If both vectors are of unit length, we have,

\[ \vec{a} \cdot \vec{b} = \cos\theta \tag{5} \]

where $\cos\theta$ will range from $-1$ to $1$. We are not interested in values less than $0$ because this would indicate an angle between the normal and light vector of more than $\frac{\pi}{2}$ radians. When the vectors are parallel and have the same direction, we have a maximum contribution of $1$ for the diffuse light. For the normal, $\hat{n}$, and the light vector, $\hat{l}$, we have,

\[ \vec{color}_{diffuse} = \vec{material}_{diffuse} \circ \vec{diffusecolor} \cdot \max(\hat{n} \cdot \hat{l}, 0) \cdot diffusecontribution \tag{6} \]
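
As a quick sanity check: for a surface normal $\hat{n} = (0, 1, 0)$ and a light direction $\hat{l} = (0, \frac{\sqrt{2}}{2}, \frac{\sqrt{2}}{2})$, i.e. a light $45°$ off the normal,

\[ \hat{n} \cdot \hat{l} = \frac{\sqrt{2}}{2} \approx 0.707 \]

so the diffuse term reaches about $71\%$ of its maximum.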

The final specular term depends on the object's materials, the color of the specular light, the location of the light source, and also the location of the viewer. We will implement the Blinn-Phong shading model. To do this we need to evaluate the halfway vector between the light vector, $\hat{l}$, and the vector towards the viewing position, $\hat{v}$,

\[ \hat{h} = \frac{\hat{l} + \hat{v}}{|\hat{l} + \hat{v}|} \tag{7} \]

provided $|\hat{l} + \hat{v}| \neq 0$. The dot product evaluated for the diffuse contribution indicates whether we apply a specular reflection. If that dot product is greater than $0$, we evaluate the specular contribution,

\[ \vec{color}_{specular} = \vec{material}_{specular} \circ \vec{specularcolor} \cdot (\hat{n} \cdot \hat{h})^{\alpha} \cdot specularcontribution \tag{8} \]

where $\alpha$ is the shininess constant.
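
The exponent $\alpha$ controls how tight the highlight is. With the $\alpha = 80$ used in the fragment shader below,

\[ (0.99)^{80} \approx 0.45, \qquad (0.95)^{80} \approx 0.017 \]

so the specular term falls off rapidly as $\hat{h}$ tilts away from $\hat{n}$.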

Before we delve into our shader program, we need to discuss something related to coordinate spaces. Our lighting calculations require all of our vectors and vertices to exist in the same coordinate space. We will pass our model, view, and projection matrices to our shader, and use the model and view matrices to transform our coordinates into eye space. However, due to the nature of nonuniform scaling, applying the model and view matrices to our normals may yield vectors that are no longer normal to the surface. We will instead use the inverse of the transpose of the model view matrix as described below.

For our tangent, $\vec{t}$, and normal, $\vec{n}$, and their transformed counterparts, $\vec{t}\,'$ and $\vec{n}\,'$, all with homogeneous coordinate $0$, we have,

\[ \vec{t} \cdot \vec{n} = 0 \tag{9} \]
\[ \vec{t}\,' \cdot \vec{n}\,' = 0 \tag{10} \]

so if $M$ is our model view matrix and $N$ is the matrix we are seeking to transform our normals, we have,

\[ (M\vec{t}) \cdot (N\vec{n}) = 0 \tag{11} \]
\[ (M\vec{t})^{T} (N\vec{n}) = 0 \tag{12} \]
\[ \vec{t}^{\,T} M^{T} N \vec{n} = 0 \tag{13} \]

and if $M^{T}N = I$, we have,

\[ \vec{t}^{\,T} M^{T} N \vec{n} = 0 \tag{14} \]
\[ \vec{t}^{\,T} I \vec{n} = 0 \tag{15} \]
\[ \vec{t}^{\,T} \vec{n} = 0 \tag{16} \]

thus,

\[ M^{T} N = I \tag{17} \]
\[ N = (M^{T})^{-1} \tag{18} \]
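
On the CPU this is a one-liner with a math library. Below is a small sketch assuming GLM, which this project does not otherwise use; in the vertex shader below we instead compute inverse(transpose(...)) directly in GLSL.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp>

// Computes N = (M^T)^(-1) for the model view matrix M = view * model.
glm::mat4 normalMatrix(const glm::mat4& model, const glm::mat4& view) {
    glm::mat4 model_view = view * model;
    return glm::inverseTranspose(model_view);
}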

So when transforming normals, we will use the inverse of the transpose of the model view matrix. Let's have a look at our shader. Our vertex shader accepts a vertex and its normal in addition to our light position and the projection, view, and model matrices. It passes the light vector, normal vector, halfway vector, and texture coordinate to the fragment shader. Note that our texture coordinate has three components because it samples a cube map for environment mapping. In the vertex shader we first transform the incoming vertex into clip space; we also transform it into eye space for the lighting calculations. Our light position uniform is specified in world space, so we simply apply the view matrix to move it to eye space. We evaluate the light vector as the vector from the vertex to the light source, and compute the normal and halfway vectors using the equations above. Finally, we transform our normal vector into world space to serve as the texture coordinate for the cube map lookup.

#version 330
 
in vec3 vertex;
in vec3 normal;
uniform vec3 light_position;
uniform mat4 Projection;
uniform mat4 View;
uniform mat4 Model;
out vec3 light_vector;
out vec3 normal_vector;
out vec3 halfway_vector;
out vec3 texture_coord;
 
void main() {
    gl_Position = Projection * View * Model * vec4(vertex, 1.0);
 
    vec4 v = View * Model * vec4(vertex, 1.0); // vertex in eye space
    vec3 normal1 = normalize(normal);

    light_vector = normalize((View * vec4(light_position, 1.0)).xyz - v.xyz);       // from vertex towards the light, in eye space
    normal_vector = (inverse(transpose(View * Model)) * vec4(normal1, 0.0)).xyz;    // eye-space normal
    texture_coord = (inverse(transpose(Model))        * vec4(normal1, 0.0)).xyz;    // world-space normal for the cube map lookup
    halfway_vector = light_vector + normalize(-v.xyz);                              // normalized in the fragment shader
}
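
On the application side, these uniforms might be filled in as in the sketch below; the location lookups are standard, while the matrix arrays and the light position are placeholder values.

GLint loc_light = glGetUniformLocation(glProgram, "light_position");
GLint loc_proj  = glGetUniformLocation(glProgram, "Projection");
GLint loc_view  = glGetUniformLocation(glProgram, "View");
GLint loc_model = glGetUniformLocation(glProgram, "Model");
glUniform3f(loc_light, 0.0f, 4.0f, 4.0f);                      // placeholder world-space light
glUniformMatrix4fv(loc_proj,  1, GL_FALSE, projection_matrix); // 16-float column-major arrays
glUniformMatrix4fv(loc_view,  1, GL_FALSE, view_matrix);
glUniformMatrix4fv(loc_model, 1, GL_FALSE, model_matrix);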

Our fragment shader accepts the normal, light, and halfway vectors in addition to the texture coordinates and cube map. We sample the cube map to get the object's material property and specify colors and contributions for the emissive, ambient, diffuse, and specular components. Lastly, we apply our lighting equations from above to output a fragment color. Note that our colors and contributions are built into our shader. We could have specified these as uniforms to make our shader a bit more configurable.

#version 330
 
in vec3 normal_vector;
in vec3 light_vector;
in vec3 halfway_vector;
in vec3 texture_coord;
uniform samplerCube cubemap;
out vec4 fragColor;
 
void main (void) {
    vec3 normal1         = normalize(normal_vector);
    vec3 light_vector1   = normalize(light_vector);
    vec3 halfway_vector1 = normalize(halfway_vector);
 
    vec4 c = texture(cubemap, texture_coord);
 
    vec4 emissive_color = vec4(0.0, 1.0, 0.0, 1.0); // green
    vec4 ambient_color  = vec4(1.0, 1.0, 1.0, 1.0); // white
    vec4 diffuse_color  = vec4(1.0, 1.0, 1.0, 1.0); // white
    vec4 specular_color = vec4(0.0, 0.0, 1.0, 1.0); // blue
 
    float emissive_contribution = 0.02;
    float ambient_contribution  = 0.20;
    float diffuse_contribution  = 0.40;
    float specular_contribution = 0.38;
 
    float d = dot(normal1, light_vector1);
    bool facing = d > 0.0;
 
    fragColor = emissive_color * emissive_contribution +
            ambient_color  * ambient_contribution  * c +
            diffuse_color  * diffuse_contribution  * c * max(d, 0.0) +
            (facing ?
                specular_color * specular_contribution * c * pow(max(dot(normal1, halfway_vector1), 0.0), 80.0) :
                vec4(0.0, 0.0, 0.0, 0.0));
    fragColor.a = 1.0;
}
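
The cubemap sampler is bound like any other texture uniform; a short sketch, assuming the cube map texture object created in the skybox post:

glActiveTexture(GL_TEXTURE0);                                  // texture unit 0
glBindTexture(GL_TEXTURE_CUBE_MAP, cubemap_texture);           // assumed cube map texture id
glUniform1i(glGetUniformLocation(glProgram, "cubemap"), 0);    // sampler reads unit 0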

This shader program yields some nice results, but it only supports one light source and is not very configurable. If we spent some more time, we could create a shader that supports multiple light sources with configurable properties for each source.

Download this project: teapot.tar.bz2