# Per-Pixel Lighting

In this tutorial we'll look at the mechanics of lighting in computer graphics (specifically using the Phong model) and implement them per-pixel using OpenGL and a GLSL shader program.

## Background

We see objects by virtue of the fact that they reflect light. In most cases, the reflecting surface absorbs some of the light energy and we perceive this as colour; a blue object, for example, absorbs all but the blue part of the spectrum.

Rather than simulate rays of light bouncing around, it is simpler (and computationally cheaper) to simulate the effects of light reflection using an approximate mathematical model. In the 1970s, Bui Tuong Phong presented such a model of light reflectance, which is known as the Phong reflection model. The Phong model is very simple; it is a local model which expresses light reflection in terms of 3 components:

- the *ambient* component is the indirect illumination of a surface by light which has been reflected many times before reaching the eye. Shadowed areas tend to be faintly illuminated by the light reflected off the directly illuminated surfaces (hence you can still see detail under a table, for example). In the Phong model, ambient light has no directional characteristics
- the *diffuse* component is the direct illumination of a surface. The amount of diffuse light reflected depends on how much the surface faces the light source. A surface facing away from a light source will receive ambient, but not diffuse light (think of the underside of the table again)
- the *specular* component is the total (or near-total) reflection of incident light. From viewing directions close to the angle of reflection this appears as a bright highlight

I mentioned that the Phong model is a local model. This means that we only consider the point being illuminated and the light source doing the illuminating; every point in the scene is treated independently of every other - there are no shadows, since points cannot occlude one another. If we want shadows, we have to introduce them artificially by using techniques such as shadow mapping.

The Phong model describes the reflection of light at each point on a surface, which means performing the lighting calculations per-pixel. Up until pretty recently this was not feasible in realtime (on commodity hardware) and so a per-vertex approximation to the Phong model known as Gouraud shading was used. Fortunately, the march of progress in cheap graphics cards means that, nowadays, we can enjoy our interactive graphics full-Phong. So here's how...

## Implementation

We'll implement 3 types of light source: a point light, a spot light and a directional light. In each case we want to calculate the contribution of a light source at a particular pixel, given some information about the light source and about the surface at that pixel:

- position of the light source
- orientation of the light source
- position of the point being illuminated
- orientation of the point being illuminated (the surface normal)

In order for our calculations to be meaningful, all of the vectors we use must be transformed into the same coordinate space. If you are using the (deprecated) built-in OpenGL lights, setting a light's position via `glLightfv()` transforms it by the current modelview matrix. In other words, when the light position arrives at the shader as `gl_LightSource[i].position` it has already been transformed into view space (or eye space, if you prefer). If you are managing your own transformations (as you should be) you'll need to transform all the light properties into view space before passing them to the shader (multiplying by your view matrix).

### Vertex Shader

We'll use the same vertex shader for all three types of light. It takes as inputs the *object space* vertex position and normal and outputs the (interpolated) *view space* position and normal:

```glsl
// INPUTS:
uniform mat3 NORMAL_MATRIX;
uniform mat4 MODEL_VIEW_MATRIX;
uniform mat4 MODEL_VIEW_PROJECTION_MATRIX;

in vec4 POSITION;
in vec3 NORMAL;

// OUTPUTS:
smooth out vec4 VIEW_POSITION;
noperspective out vec3 VIEW_NORMAL;

void main() {
    VIEW_NORMAL = NORMAL_MATRIX * NORMAL;
    VIEW_POSITION = MODEL_VIEW_MATRIX * POSITION;
    gl_Position = MODEL_VIEW_PROJECTION_MATRIX * POSITION;
}
```

The `NORMAL_MATRIX` is the upper-left 3x3 portion of the inverted, then transposed `MODEL_VIEW_MATRIX`; using the inverse transpose (rather than the modelview matrix itself) keeps normals perpendicular to the surface even under non-uniform scaling.

### Fragment Shader

The meat of the implementation is in the fragment program. We'll write a function for each type of light which calculates the light contribution at a point for a particular source. First we need a way of representing the attributes of the light source and of the surface point being illuminated:

```glsl
struct LIGHT_SOURCE_ATTRIBUTES {
    vec3 ambient, diffuse, specular;
    vec4 view_position; // in view space
};

struct SURFACE_ATTRIBUTES {
    // supplied by the application:
    vec3 ambient, diffuse, specular;
    float shininess;
    // supplied by the vertex shader:
    vec4 view_position;
    vec3 view_normal;
};
```

In `LIGHT_SOURCE_ATTRIBUTES`, the ambient, diffuse and specular members represent the RGB intensity of the light emitted. In `SURFACE_ATTRIBUTES`, the same members represent the RGB intensity of the light reflected. The final colour of the light contribution is the emitted intensity multiplied by the reflected intensity.

We'll also use a structure to accumulate the contributions from each light source at the fragment being handled during the execution of the shader:

```glsl
struct LIGHTING_RESULTS {
    vec3 ambient, diffuse, specular;
};
```

#### Point Light Source

A point light is local to the scene, radiating light evenly in all directions. Because it is local, we need to calculate the direction to the light from the position of the point being illuminated; we subtract the view space surface position (`surface.view_position`) from the view space light position (`light.view_position`):

```glsl
void point(in SURFACE_ATTRIBUTES surface,
           in LIGHT_SOURCE_ATTRIBUTES light,
           inout LIGHTING_RESULTS results) {
    // get direction to light:
    vec3 light_direction = light.view_position.xyz - surface.view_position.xyz;
    light_direction = normalize(light_direction);
}
```

Now, along with the view space normal (`surface.view_normal`), we have enough information to begin calculating the ambient, diffuse and specular contributions.

##### Ambient Contribution

The simplest of the three components is ambient light, which has no directional properties in the Phong model. We just accumulate the light's ambient contribution property in our output:

```glsl
// accumulate ambient:
results.ambient += surface.ambient * light.ambient;
```

##### Diffuse Contribution

The diffuse contribution at a point is proportional to the light's diffuse intensity and the cosine of the angle between the surface normal and the point-to-light direction vector.

This is known as Lambertian reflectance.

We calculate the latter as the dot product between the normal and light direction:

```glsl
// accumulate diffuse:
float n_dot_l = max(0.0, dot(surface.view_normal, light_direction));
results.diffuse += (surface.diffuse * light.diffuse) * n_dot_l;
```

`n_dot_l` gets stored, as it comes in handy when computing (or not computing) the specular contribution.

##### Specular Contribution

The specular contribution at a point is proportional to the light's specular intensity and the cosine of the angle between the view direction and the *reflected* light direction. In other words if the viewing angle coincides with the angle of reflection then the specular intensity will be maximum (as the emitted light is bouncing straight into your eye).

We need only do the specular computation if the point is receiving diffuse illumination, i.e. if `n_dot_l > 0`. The eye direction is the normalized `surface.view_position`; in view space the eye's position is at the origin, hence the surface point's position (when normalized) is a direction from the eye to that point. We can get the reflected light vector using the built-in `reflect()` function on our previously calculated light direction vector.

```glsl
if (n_dot_l > 0.0) { // if fragment is illuminated
    // accumulate specular:
    vec3 view_direction = normalize(surface.view_position.xyz);
    vec3 reflection = reflect(light_direction, surface.view_normal);
    float specular = max(0.0, dot(reflection, view_direction));
    results.specular += surface.specular * light.specular * pow(specular, surface.shininess);
}
```

By raising the result to the power of `surface.shininess`, we can control the size of the specular highlight. A larger value of `surface.shininess` produces a smaller highlight and an apparently glossier surface; a smaller value will produce a larger, more diffuse highlight.

#### Attenuation

In the real world, contributions from a light source tend to decrease with distance. The illumination from a candle, for example, diminishes to zero over a few metres (except in films, it seems). Currently our implementation doesn't model this falloff, hence we need to add *attenuation*, which is a spatial characteristic based on the distance to the light source.

The OpenGL fixed-function light model calculates an attenuation factor in terms of *constant*, *linear* and *quadratic* coefficients of the point-to-light distance `d`: `attenuation = 1.0 / (Kc + Kl*d + Kq*d*d)`.

This factor is then used to scale the ambient, diffuse and specular contributions. This method tends to require a lot of fiddling to get good results, hence I like to use a more intuitive (but less physically accurate) method of calculating the attenuation factor: specifying a *start* and *end* radius and using smooth-step interpolation to modulate the light intensity between the two distances.

We must add a property to our `LIGHT_SOURCE_ATTRIBUTES` in order to model this:

```glsl
struct LIGHT_SOURCE_ATTRIBUTES {
    vec3 ambient, diffuse, specular;
    vec4 view_position; // in view space
    vec2 attenuation;   // x = start, y = end
};
```

Then calculate the attenuation factor at the top of our `point()` function and use it to scale each of the contributions:

```glsl
void point(in SURFACE_ATTRIBUTES surface,
           in LIGHT_SOURCE_ATTRIBUTES light,
           inout LIGHTING_RESULTS results) {
    // get direction to light:
    vec3 light_direction = light.view_position.xyz - surface.view_position.xyz;

    // compute attenuation factor:
    float light_distance = length(light_direction);
    float attenuation = smoothstep(light.attenuation.y, light.attenuation.x, light_distance);

    light_direction = normalize(light_direction);

    // accumulate ambient:
    results.ambient += surface.ambient * light.ambient * attenuation;

    // accumulate diffuse:
    float n_dot_l = max(0.0, dot(surface.view_normal, light_direction));
    results.diffuse += (surface.diffuse * light.diffuse * attenuation) * n_dot_l;

    if (n_dot_l > 0.0) { // if fragment is illuminated
        // accumulate specular:
        vec3 view_direction = normalize(surface.view_position.xyz);
        vec3 reflection = reflect(light_direction, surface.view_normal);
        float specular = max(0.0, dot(reflection, view_direction));
        results.specular += surface.specular * light.specular * pow(specular, surface.shininess) * attenuation;
    }
}
```

#### Spotlight Source

Like point lights, spotlights are local light sources. The light from a spotlight falls in a cone, originating at the light position. In order to define this cone within our `LIGHT_SOURCE_ATTRIBUTES` struct, we'll need to add two new properties: `spot_view_direction` and `spot_cutoff`. There's a third new property, `spot_exponent`, which we'll return to shortly. Note also that we're storing the *cosine* of the cutoff angle, as that's what is used in the calculations.

```glsl
struct LIGHT_SOURCE_ATTRIBUTES {
    vec3 ambient, diffuse, specular;
    vec4 view_position;       // in view space
    vec2 attenuation;         // x = start, y = end
    vec3 spot_view_direction; // in view space
    float spot_cutoff;        // cosine of the cutoff angle
    float spot_exponent;
};
```

By finding the cosine of the angle θ between the spot direction vector and the light-to-point direction vector (`-light_direction`) we can determine whether or not the point falls inside the light cone; if `cos(θ) > light.spot_cutoff`, the light contributes to the point's illumination.

This will cast a hard-edged circle of light, which may not be desirable. We can soften this by modulating the light's attenuation according to `cos(θ)` raised to the power of `light.spot_exponent`, which controls radial falloff from the cone's centre. Here's the entire `spot()` function:

```glsl
void spot(in SURFACE_ATTRIBUTES surface,
          in LIGHT_SOURCE_ATTRIBUTES light,
          inout LIGHTING_RESULTS results) {
    // get direction to light:
    vec3 light_direction = light.view_position.xyz - surface.view_position.xyz;
    float spot_dot_l = dot(normalize(light.spot_view_direction), normalize(-light_direction));

    // compute attenuation factor:
    float light_distance = length(light_direction);
    float attenuation = smoothstep(light.attenuation.y, light.attenuation.x, light_distance);

    light_direction = normalize(light_direction);

    // accumulate ambient:
    results.ambient += surface.ambient * light.ambient * attenuation;

    if (spot_dot_l > light.spot_cutoff) {
        // incorporate spot direction into attenuation factor:
        attenuation *= pow(spot_dot_l, light.spot_exponent);

        // accumulate diffuse:
        float n_dot_l = max(0.0, dot(surface.view_normal, light_direction));
        results.diffuse += (surface.diffuse * light.diffuse * attenuation) * n_dot_l;

        if (n_dot_l > 0.0) { // if fragment is illuminated
            // accumulate specular:
            vec3 view_direction = normalize(surface.view_position.xyz);
            vec3 reflection = reflect(light_direction, surface.view_normal);
            float specular = max(0.0, dot(reflection, view_direction));
            results.specular += surface.specular * light.specular * pow(specular, surface.shininess) * attenuation;
        }
    }
}
```

#### Directional Source

A directional light source simulates a light source that is effectively an infinite distance away (like the sun), such that the incident light rays are parallel. In practice this means that the light direction is the same for all points in the scene. Also, since the source is infinitely far away, we can disregard attenuation. This makes directional light sources very simple to implement; it's really just a stripped-down version of the `point()` function:

```glsl
void directional(in SURFACE_ATTRIBUTES surface,
                 in LIGHT_SOURCE_ATTRIBUTES light,
                 inout LIGHTING_RESULTS results) {
    // get direction to light:
    vec3 light_direction = normalize(light.view_position.xyz);

    // accumulate ambient:
    results.ambient += surface.ambient * light.ambient;

    // accumulate diffuse:
    float n_dot_l = max(0.0, dot(surface.view_normal, light_direction));
    results.diffuse += (surface.diffuse * light.diffuse) * n_dot_l;

    if (n_dot_l > 0.0) { // if fragment is illuminated
        // accumulate specular:
        vec3 view_direction = normalize(surface.view_position.xyz);
        vec3 reflection = reflect(light_direction, surface.view_normal);
        float specular = max(0.0, dot(reflection, view_direction));
        results.specular += surface.specular * light.specular * pow(specular, surface.shininess);
    }
}
```

## In Practice

So how do we determine which function to call at runtime? We could set a flag to indicate to the shader which type of light source we want, but it's equally possible to do so using the light source properties themselves as indicators:

```glsl
// INPUTS:
const int MAX_LIGHT_SOURCES = 32;
uniform int N_LIGHT_SOURCES;
uniform LIGHT_SOURCE_ATTRIBUTES LIGHT_SOURCES[MAX_LIGHT_SOURCES];
uniform SURFACE_ATTRIBUTES SURFACE;

smooth in vec4 VIEW_POSITION;
noperspective in vec3 VIEW_NORMAL;

// OUTPUTS:
out vec3 FRAG_COLOR;

void main() {
    // init surface properties:
    SURFACE_ATTRIBUTES surface;
    surface.ambient = SURFACE.ambient;
    surface.diffuse = SURFACE.diffuse;
    surface.specular = SURFACE.specular;
    surface.shininess = SURFACE.shininess;
    surface.view_position = VIEW_POSITION;
    surface.view_normal = VIEW_NORMAL;

    // init results accumulator:
    LIGHTING_RESULTS results;
    results.ambient = vec3(0.0);
    results.diffuse = vec3(0.0);
    results.specular = vec3(0.0);

    // accumulate results:
    for (int i = 0; i < N_LIGHT_SOURCES; ++i) {
        if (LIGHT_SOURCES[i].view_position.w != 0.0) { // w = 1; local
            if (LIGHT_SOURCES[i].spot_exponent != 0.0) { // spot light
                spot(surface, LIGHT_SOURCES[i], results);
            } else { // point light
                point(surface, LIGHT_SOURCES[i], results);
            }
        } else { // w = 0; directional
            directional(surface, LIGHT_SOURCES[i], results);
        }
    }

    FRAG_COLOR = results.ambient + results.diffuse + results.specular;
}
```

The shader above is the kind you see in most per-pixel lighting examples. In practice I like to simplify the light model, to save on the number of uniforms passed to the shader and (in my opinion) to make setting up the lights more intuitive:

- replace the light ambient/diffuse/specular attributes with a single colour attribute and provide an 'ambient scale' which controls the level of ambient light contributed by the light source
- replace the surface ambient/diffuse/specular attributes with a single colour attribute which specifies the intensity of just the ambient/diffuse reflection. The specular reflection should match the colour of the light, which is correct for non-metallic materials (dielectrics) but incorrect for metallic ones. For most realtime applications you can probably get away with this deviation from reality...