CS488 Project: Extended Ray Tracer
I’m currently taking CS488: Introduction to Computer Graphics at the University of Waterloo. The course’s final project is open-ended: you’re expected to propose a set of objectives, within reasonable constraints, and implement them in one cohesive graphics program. This post provides a brief overview of the ray tracer I wrote as my final project.
Ray tracing is an image generation technique in which simulated light rays are fired from a virtual camera into a virtual scene. The rays may intersect objects in the scene, and those intersections determine the final image’s pixel colours. Ray tracing is generally very slow, but it can create photorealistic images, which is why it’s often used in film and rarely used in video games.
For my final project, I had to choose and implement 10 different ray tracer features. I’ve provided a brief description of each below, along with sample renderings.
The ray tracer is written in C++, and Lua is used as the modelling language.
Objective 1: Cone and Cylinder primitives
This ray tracer is an extension of the ray tracer built in assignment 4. The previous ray tracer supports sphere, cube, and mesh primitives. I’ve added cone and cylinder primitives in this version.
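
To give a feel for what this involves, here’s a minimal sketch of a ray–cylinder intersection test for a cylinder aligned with the y-axis. The `Vec3` and `Ray` types are stand-ins rather than the project’s actual classes, and the end-cap discs are omitted for brevity:

```cpp
#include <cmath>
#include <optional>

// Stand-in types; the real project has its own vector and ray classes.
struct Vec3 { double x, y, z; };
struct Ray  { Vec3 origin; Vec3 dir; };

// Intersect a ray with the side wall of a y-axis-aligned cylinder of the
// given radius, spanning y in [0, height]. Returns the nearest positive t,
// if any. The cap discs would be handled by two extra plane tests.
std::optional<double> intersectCylinderWall(const Ray& r, double radius, double height) {
    // Project onto the xz-plane: (ox + t*dx)^2 + (oz + t*dz)^2 = radius^2
    double a = r.dir.x * r.dir.x + r.dir.z * r.dir.z;
    double b = 2.0 * (r.origin.x * r.dir.x + r.origin.z * r.dir.z);
    double c = r.origin.x * r.origin.x + r.origin.z * r.origin.z - radius * radius;

    double disc = b * b - 4.0 * a * c;
    if (a == 0.0 || disc < 0.0) return std::nullopt; // parallel to axis, or a miss

    double sqrtDisc = std::sqrt(disc);
    double roots[2] = { (-b - sqrtDisc) / (2.0 * a), (-b + sqrtDisc) / (2.0 * a) };
    for (double t : roots) {                   // nearer root first
        if (t <= 0.0) continue;                // intersection behind the ray origin
        double y = r.origin.y + t * r.dir.y;
        if (y >= 0.0 && y <= height) return t; // within the finite extent
    }
    return std::nullopt;
}
```

A cone works the same way, with the quadratic coming from x² + z² = (ky)² instead.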

Objective 2: Reflection and Refraction
Reflection is implemented by firing secondary reflection rays when light rays hit reflective objects. The new ray’s direction is determined using the law of reflection. Reflection rays can recursively generate more reflection rays if they hit reflective objects.
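
The formula itself is the standard one. A small sketch, using a stand-in `Vec3` type, for an incoming direction `d` and unit surface normal `n`:

```cpp
struct Vec3 { double x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Law of reflection: r = d - 2 (d . n) n, where d is the incoming
// direction and n is the unit surface normal at the hit point.
Vec3 reflect(Vec3 d, Vec3 n) {
    return d - 2.0 * dot(d, n) * n;
}
```
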
Refraction is similar to reflection, except that we use Snell’s law instead of the law of reflection to determine the direction of secondary rays. Snell’s law takes into account the indices of refraction of the two media the ray passes between.
In this ray tracer, refractive objects are also reflective. Fresnel coefficients are used to determine the amount of light that is refracted and reflected at different points on the object.
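
Here’s a sketch of both pieces, assuming unit vectors throughout. The Fresnel term below uses Schlick’s approximation, which is a common choice but not necessarily what the project’s code uses; the `refract` helper returns nothing on total internal reflection, in which case only a reflection ray is fired:

```cpp
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Snell's law: refract unit direction d through unit normal n, passing
// from a medium with index n1 into one with index n2.
std::optional<Vec3> refract(Vec3 d, Vec3 n, double n1, double n2) {
    double eta = n1 / n2;
    double cosI = -dot(d, n);                        // n points against d
    double sin2T = eta * eta * (1.0 - cosI * cosI);
    if (sin2T > 1.0) return std::nullopt;            // total internal reflection
    double cosT = std::sqrt(1.0 - sin2T);
    return eta * d + (eta * cosI - cosT) * n;
}

// Schlick's approximation to the Fresnel reflectance: the returned
// fraction of light is reflected; the remaining (1 - R) is refracted.
double fresnelSchlick(double cosI, double n1, double n2) {
    double r0 = (n1 - n2) / (n1 + n2);
    r0 *= r0;
    return r0 + (1.0 - r0) * std::pow(1.0 - cosI, 5.0);
}
```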

Objective 3: Glossy reflection and refraction
Glossiness is implemented by generating multiple secondary reflection/refraction rays per intersection and averaging the results. The direction of each ray is randomly perturbed by an amount proportional to the object’s glossiness.
The number of glossy samples taken per ray intersection determines how noisy the glossy effect is. The more samples, the less noise.
The sharpness of the glossy effect can be adjusted using the material’s Phong exponent: the larger the exponent, the sharper the reflection. As the Phong exponent approaches infinity, glossy reflection converges to perfect mirror reflection.
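
One common way to generate such perturbed rays is to sample a Phong lobe centred on the mirror direction; the project’s exact perturbation scheme may differ. A sketch, with stand-in vector helpers:

```cpp
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 v) {
    double len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Sample a direction from a Phong lobe centred on the mirror direction.
// Larger exponents concentrate samples near the mirror direction, which
// is why the glossy effect sharpens as the exponent grows.
Vec3 samplePhongLobe(Vec3 mirrorDir, double phongExp, std::mt19937& rng) {
    const double kPi = 3.14159265358979323846;
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    // Inverse-CDF sampling of cos(theta) proportional to cos^e(theta).
    double cosTheta = std::pow(uni(rng), 1.0 / (phongExp + 1.0));
    double sinTheta = std::sqrt(1.0 - cosTheta * cosTheta);
    double phi = 2.0 * kPi * uni(rng);

    // Orthonormal basis around the mirror direction.
    Vec3 w = normalize(mirrorDir);
    Vec3 helper = std::fabs(w.x) > 0.9 ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    Vec3 u = normalize(cross(helper, w));
    Vec3 v = cross(w, u);

    return (sinTheta * std::cos(phi)) * u + (sinTheta * std::sin(phi)) * v
         + cosTheta * w;
}
```

Averaging the colours returned by a few dozen such rays per intersection gives the glossy result.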



Objective 4: Texture mapping
Texture mapping is implemented by mapping surface coordinates on an object to texture coordinates. The texture coordinates are then used to query a texture (rectangular image) to determine the object’s colour.
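
As a concrete example, here’s the standard spherical-coordinate mapping for a sphere (a sketch with stand-in types; each primitive gets its own mapping):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };
struct UV   { double u, v; };

// Map a point on a unit sphere centred at the origin to texture
// coordinates in [0, 1]^2 using spherical coordinates.
UV sphereUV(Vec3 p) {
    const double kPi = 3.14159265358979323846;
    double theta = std::acos(std::max(-1.0, std::min(1.0, p.y))); // polar angle, [0, pi]
    double phi = std::atan2(p.z, p.x);                            // azimuth, (-pi, pi]
    return { (phi + kPi) / (2.0 * kPi), theta / kPi };
}
```

The resulting (u, v) pair then indexes into the texture image, e.g. the texel at (u * (width - 1), v * (height - 1)), possibly with bilinear filtering between neighbouring texels.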



Objective 5: Bump mapping
Like texture mapping, bump mapping also requires mapping object surface coordinates to texture coordinates. However, instead of using texture colours to colour the object’s surface, we use them to determine how normals on the surface are perturbed. Perturbing surface normals creates different lighting patterns that simulate the appearance of a bumpy surface.
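
A sketch of the perturbation step, treating the bump texture as a greyscale heightfield and using central differences to estimate its gradient (the tangent vectors and `height` lookup are stand-ins for whatever the project actually passes around):

```cpp
#include <cmath>
#include <functional>

struct Vec3 { double x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(Vec3 v) { double l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

// Perturb a surface normal using central differences of a heightfield
// derived from the bump texture. n is the unit geometric normal; tu and
// tv are unit tangents along increasing u and v; strength scales the bumps.
Vec3 bumpNormal(const std::function<double(double, double)>& height,
                Vec3 n, Vec3 tu, Vec3 tv,
                double u, double v, double strength, double eps = 1e-3) {
    double du = (height(u + eps, v) - height(u - eps, v)) / (2.0 * eps);
    double dv = (height(u, v + eps) - height(u, v - eps)) / (2.0 * eps);
    // Tilt the normal against the height gradient, then renormalize.
    return normalize(n - strength * (du * tu + dv * tv));
}
```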

Objective 6: Phong shading
Phong shading is implemented by interpolating vertex normals across a triangle’s surface to create the appearance of a smooth surface.
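
In code this amounts to a barycentric blend of the three vertex normals at the hit point (a sketch with stand-in types):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(Vec3 v) { double l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

// Interpolate the triangle's vertex normals at a hit point with
// barycentric coordinates (a, b, c), a + b + c = 1, then renormalize.
// Shading with the interpolated normal makes the faceted mesh light
// as if it were smooth.
Vec3 shadingNormal(Vec3 n0, Vec3 n1, Vec3 n2, double a, double b, double c) {
    return normalize(a * n0 + b * n1 + c * n2);
}
```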

Objective 7: Soft shadows
Soft shadows are implemented by sending multiple shadow rays to different sections of a spherical area light. The shadow intensity is determined by the number of shadow rays that are blocked over the total number of shadow rays sent. Area lights are partitioned using a uniform grid and a shadow ray is sent to a random location in each cell. This randomness creates local noise and removes an unnatural “banding” effect in the shadow.
The number of shadow ray samples taken determines the overall noisiness of the soft shadow: the more samples, the less noise.
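
A sketch of the sampling loop, with an n×n grid over the sphere’s parameter space and one jittered shadow ray per cell (the `occluded` query is a stand-in for the scene’s shadow-ray test, and the exact light parameterization may differ from the project’s):

```cpp
#include <algorithm>
#include <cmath>
#include <functional>
#include <random>

struct Vec3 { double x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }

// Estimate how much of a spherical area light is visible from point p.
// One jittered shadow ray is sent per cell of an n-by-n grid over the
// light's parameter space. Returns a value in [0, 1]: 1 = fully lit.
double lightVisibility(Vec3 p, Vec3 lightCentre, double lightRadius, int n,
                       const std::function<bool(Vec3, Vec3)>& occluded,
                       std::mt19937& rng) {
    const double kPi = 3.14159265358979323846;
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    int unblocked = 0;
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            // Jitter within this grid cell: the randomness trades the
            // "banding" artifact for less objectionable noise.
            double u = (i + uni(rng)) / n;
            double v = (j + uni(rng)) / n;
            // Map (u, v) to a point on the sphere, uniform in area.
            double z = 1.0 - 2.0 * u;
            double r = std::sqrt(std::max(0.0, 1.0 - z * z));
            double phi = 2.0 * kPi * v;
            Vec3 q = lightCentre
                   + lightRadius * Vec3{r * std::cos(phi), r * std::sin(phi), z};
            if (!occluded(p, q)) ++unblocked;
        }
    }
    return double(unblocked) / (n * n);
}
```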




Objective 8: Adaptive anti-aliasing
Anti-aliasing eliminates “jaggies”, a type of image artifact where straight or smooth lines have a staircase-like appearance.
Anti-aliasing is performed by partitioning a pixel into uniformly sized subpixels and sending rays through the centre of each subpixel. The final colour of a pixel is the average of the colours returned by its subpixel rays. The more samples taken per pixel, the smoother the final image looks.
In adaptive anti-aliasing, not all pixels are anti-aliased. Ideally, only pixels that need it are anti-aliased. My adaptive anti-aliasing implementation uses two passes to generate the image. The first pass is a standard ray trace. The resulting image is fed to an edge detection algorithm and a second pass, this time with anti-aliasing turned on, retraces the detected edge pixels.
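
A sketch of the supersampling step applied to a pixel flagged by the edge-detection pass (`trace` stands in for the renderer’s function that fires a primary ray through floating-point image coordinates):

```cpp
#include <functional>

struct Colour { double r, g, b; };
Colour operator+(Colour a, Colour b) { return {a.r + b.r, a.g + b.g, a.b + b.b}; }
Colour operator*(double s, Colour c) { return {s * c.r, s * c.g, s * c.b}; }

// Supersample one flagged pixel: partition it into an n-by-n grid of
// subpixels and average the rays sent through each subpixel's centre.
Colour supersamplePixel(int px, int py, int n,
                        const std::function<Colour(double, double)>& trace) {
    Colour sum{0, 0, 0};
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            sum = sum + trace(px + (i + 0.5) / n, py + (j + 0.5) / n);
    return (1.0 / (n * n)) * sum;
}
```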



Objective 9: Bounding Volume Hierarchy (BVH)
This feature speeds up the ray tracer by eliminating unnecessary intersection checks. The algorithm starts by bounding all objects in the scene in one Axis-Aligned Bounding Box (AABB) and storing this AABB and the objects it contains in a root node. It then splits these objects into three groups based on location. An AABB and a corresponding child node are created for each group. This process is repeated recursively for each child node and stops when a node contains only one object. The resulting data structure is a tree.
We can speed up intersection checking with a BVH through the observation that if a ray fails to intersect a node’s AABB, it cannot intersect any of the node’s descendant nodes. Because of the tree structure, a BVH lets one ray be intersection-tested against only $x$ objects, where $x$ is roughly logarithmic in the total number of objects in the scene.
BVH’s performance is highly dependent on the object partitioning algorithm (i.e. tree balancing algorithm) used to create the BVH tree.
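
A sketch of the traversal side, showing the pruning step (the node layout below is illustrative; the real tree stores whatever per-object data the renderer needs):

```cpp
#include <algorithm>
#include <memory>
#include <utility>
#include <vector>

struct Ray { double ox, oy, oz, dx, dy, dz; };

// Axis-aligned bounding box with the standard slab intersection test.
struct AABB {
    double lo[3], hi[3];
    bool hit(const Ray& r) const {
        double o[3] = {r.ox, r.oy, r.oz}, d[3] = {r.dx, r.dy, r.dz};
        double tmin = 0.0, tmax = 1e30;
        for (int a = 0; a < 3; ++a) {
            double inv = 1.0 / d[a];   // relies on IEEE infinities when d[a] == 0
            double t0 = (lo[a] - o[a]) * inv;
            double t1 = (hi[a] - o[a]) * inv;
            if (inv < 0.0) std::swap(t0, t1);
            tmin = std::max(tmin, t0);
            tmax = std::min(tmax, t1);
            if (tmax < tmin) return false;
        }
        return true;
    }
};

// A BVH node: internal nodes hold children; a leaf holds one object,
// represented here by an index into the scene's object list.
struct BVHNode {
    AABB box;
    std::vector<std::unique_ptr<BVHNode>> children; // empty at a leaf
    int objectIndex = -1;                           // valid only at a leaf
};

// Collect the objects a ray could possibly hit. The pruning step is the
// first line: if the ray misses a node's box, none of its descendants
// can be hit, so the whole subtree is skipped.
void traverse(const BVHNode& node, const Ray& r, std::vector<int>& candidates) {
    if (!node.box.hit(r)) return;
    if (node.children.empty()) {
        candidates.push_back(node.objectIndex);
        return;
    }
    for (const auto& child : node.children)
        traverse(*child, r, candidates);
}
```

Exact intersection tests then run only on the handful of candidates that survive.
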
To demonstrate the performance increase through BVH, I generated scenes with 1000, 20000, and 50000 spheres. I rendered each scene using non-BVH mode and BVH mode and recorded render times. Here are the results.
| Spheres | non-BVH | BVH |
| --- | --- | --- |
| 1,000 | 5 s | 0 s |
| 20,000 | 150 s | 4 s |
| 50,000 | 542 s | 13 s |

Objective 10: Final scene

Special thanks to Professor Baranoski for being a great teacher and for entertaining all the questions I had after class and during office hours. I thoroughly enjoyed CS488 and learned a great deal.