CSE 40166 / 60166 - Computer Graphics


Extra Credit 3 - Raytracing

This extra credit assignment is due by 5:00pm on December 14, 2012.


Part I - Raytracing

For this assignment, you will write a basic raytracer and implement some of the functionality that raytracing makes relatively easy to add. You will start from base code and implement a number of pieces of functionality: computing a camera ray for a given pixel, intersecting rays against spheres and planes, Phong reflectance to shade the point hit by a camera ray, a shadow ray cast at each intersection point to produce hard shadows, and a reflection ray to simulate shiny objects. Your final product will support multiple objects, multiple lights, and varying material properties, all of which can be read in from a file.


a rendered frame from the raytracer.


The base code that you are given is a basic OpenGL program that can load spheres and planes from a file and display them with OpenGL in a "preview mode"; the user can navigate around the scene using a camera that follows the free-camera movement paradigm, and the objects drawn in the OpenGL preview accurately reflect the object data read in from the file. When the user presses the 'r' key, a callback is triggered that runs the raytracing code. This callback takes the list of objects and lights in the scene, as well as the camera object, and produces as its output a framebuffer, sized to match the window at the time of the keypress, containing the final image. That image is then attached to a texture, which is rendered over a quad covering the screen. The frame is also written to the file "output.ppm" in the current directory.


the previous frame, in OpenGL preview mode.


Base Code

You can download the base code for this assignment here. There are a number of classes that encapsulate the work that needs to be done in an OpenGL pipeline; the Point, FreeCamera, Material, and PointLight classes should be familiar, as they are all very similar to classes you have used or created for previous homework assignments. There are a few new classes that are also particularly relevant to this assignment. They are:

Ray: this class is a storage class for rays: it holds two points, representing an origin and a direction. Note that it doesn't have member functions to ensure that the direction is a unit vector when it is set or read; adding these might be helpful.
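
As a rough illustration (not the base code's actual implementation), here is a minimal sketch of a Ray that keeps its direction normalized. The Vec3 helpers are a stand-in for the base code's Point class so the sketches in this writeup are self-contained, and they are reused by the later sketches; all names are illustrative only.

    #include <cmath>

    // Stand-in for the base code's Point class; the real class provides
    // similar operations.
    struct Vec3 { double x, y, z; };

    inline Vec3   operator+(const Vec3& a, const Vec3& b) { return Vec3{a.x + b.x, a.y + b.y, a.z + b.z}; }
    inline Vec3   operator-(const Vec3& a, const Vec3& b) { return Vec3{a.x - b.x, a.y - b.y, a.z - b.z}; }
    inline Vec3   operator*(const Vec3& a, double s)      { return Vec3{a.x * s, a.y * s, a.z * s}; }
    inline Vec3   operator*(const Vec3& a, const Vec3& b)  // element-wise, like the base code's Point
    { return Vec3{a.x * b.x, a.y * b.y, a.z * b.z}; }
    inline double dot(const Vec3& a, const Vec3& b)        { return a.x * b.x + a.y * b.y + a.z * b.z; }
    inline Vec3   cross(const Vec3& a, const Vec3& b)
    { return Vec3{a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x}; }
    inline Vec3   normalize(const Vec3& v)                 { return v * (1.0 / std::sqrt(dot(v, v))); }

    // A Ray whose direction is guaranteed to be unit length.
    class Ray {
    public:
        Ray(const Vec3& origin, const Vec3& direction)
            : origin_(origin), direction_(normalize(direction)) {}
        const Vec3& origin()    const { return origin_; }
        const Vec3& direction() const { return direction_; }   // always unit length
    private:
        Vec3 origin_;
        Vec3 direction_;
    };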

Shape: this is an abstract class that allows us to lump all of our renderable objects into the same vector inside of main.cpp. Instead of having separate vectors for Spheres, Planes, Triangles, Meshes, etc., all of those classes extend the Shape class and can be used interchangeably (e.g. shapes[i]->draw(); works regardless of which object type it is). All classes that inherit from Shape must implement the draw() function, which draws the object using OpenGL calls, and the intersectWithRay() function, which computes the intersection between a ray and the object and returns information about the intersection, if it occurred.
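
In sketch form, the setup described above looks roughly like this (the exact signatures in the base code may differ; IntersectionData is sketched a little further below):

    #include <vector>

    class IntersectionData;   // sketched below

    // Abstract base: every renderable object knows how to preview itself and
    // how to intersect itself with a ray.
    class Shape {
    public:
        virtual ~Shape() {}
        virtual void draw() const = 0;                                        // OpenGL preview
        virtual IntersectionData intersectWithRay(const Ray& ray) const = 0;  // raytracing
    };

    // All shapes live in one container and are used interchangeably:
    //   std::vector<Shape*> shapes;
    //   for (size_t i = 0; i < shapes.size(); ++i) shapes[i]->draw();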

Sphere: has a position (center of the sphere) and a radius. Already contains an implemented draw() function so that it can be used in Preview Mode. You will need to implement its intersectWithRay function.

Plane: defined by a point on the plane and the normal of the plane. Already contains an implemented draw() function, but note that it draws the plane as a finite, bounded rectangle (in reality, the plane extends infinitely). Because OpenGL does lighting per-vertex, this means that the plane preview will likely be much darker than the real plane (since the far-off vertices will be at sharp angles to the light source). You will need to implement its intersectWithRay function.

IntersectionData: this is a storage class to hold information about an intersection, if one occurs. It is returned from the intersectWithRay() functions: if the ray passed into that function intersected that object, then the wasValidIntersection boolean member variable should be set. If this intersection was valid, then its member variables will hold a) the point of intersection, b) the surface normal of the object at the point of intersection, and c) a copy of the object's Material object. Your intersection functions must fill these member variables.
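
A sketch of what that might look like; the member names follow the description above, but check the base code's header for the real ones (the Material stand-in here only exists to keep the sketch self-contained):

    // Stand-in for the base code's Material (field names are assumptions).
    struct Material {
        Vec3   ambient, diffuse, specular;
        double shininess;
        double reflectivity;    // used for reflection rays, described below
    };

    // Returned by intersectWithRay(); starts out invalid.
    struct IntersectionData {
        IntersectionData() : wasValidIntersection(false) {}
        bool     wasValidIntersection;   // true only if the ray actually hit
        Vec3     intersectionPoint;      // a) point of intersection
        Vec3     surfaceNormal;          // b) unit surface normal at that point
        Material material;               // c) copy of the object's Material
    };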



Places where you need to write code are indicated with large, block comments. Pressing 'r' calls a raytraceImage() function which is located in helper.cpp. This function needs to allocate an output image and fill each pixel with a color based on the scene.
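
The overall shape of that function is a double loop over the pixels: one primary ray per pixel, one RGB triple per pixel. The sketch below is only illustrative; the FreeCamera call and the two helper functions are assumptions whose details are covered in the sections that follow.

    #include <vector>

    // Hypothetical sketch of raytraceImage(); the real base-code signature and
    // the exact FreeCamera interface may differ.
    unsigned char* raytraceImage(const FreeCamera& camera,
                                 const std::vector<Shape*>& shapes,
                                 int width, int height) {
        unsigned char* pixels = new unsigned char[width * height * 3];
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                Ray primary = camera.getPrimaryRayThroughPixel(x, y, width, height); // assumed signature
                IntersectionData hit = intersectRayAgainstScene(primary, shapes);    // see "Structuring Your Code"
                Vec3 color = hit.wasValidIntersection
                                 ? shadeIntersection(hit, primary /*, lights, shapes */) // Phong + shadows + reflections
                                 : Vec3{0.0, 0.0, 0.0};                                  // background color
                int idx = (y * width + x) * 3;
                pixels[idx + 0] = (unsigned char)(255.0 * color.x);
                pixels[idx + 1] = (unsigned char)(255.0 * color.y);
                pixels[idx + 2] = (unsigned char)(255.0 * color.z);
            }
        }
        return pixels;
    }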


Camera Rays

The first step of your raytracer is computing the camera ray (primary ray) for each pixel of an image. You will need to fill in the FreeCamera::getPrimaryRay() function in freeCamera.cpp. (See also the getPrimaryRayThroughPixel function.) For more information on computing primary rays, see this website. Note that thanks to our OpenGL formulation, we have the vertical field of view and must compute the horizontal field of view.

You may find it easier to think of the terms involved in camera ray computation as a vector math problem. If we assume that the image plane is a unit distance away from the camera, we can get a point on the center of the image plane by adding the camera's position to its direction. We can compute the width and height of the image plane using the tangent functions as described in the link above. Lastly, we can determine the vectors that lie in the image plane -- the first is the camera's local X axis, perpendicular to the viewing direction and global up vector (0,1,0), and the second is the camera's local Y axis, perpendicular to its local X axis and local Z axis (direction) -- and move along these vectors based on the percentage of the image plane's width and height.
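
As a free-function sketch of that vector math (in the base code this logic belongs in FreeCamera::getPrimaryRay()/getPrimaryRayThroughPixel(), which have their own parameters; fovy here is the vertical field of view in degrees, and the image plane sits one unit in front of the eye):

    #include <cmath>

    Ray primaryRayThroughPixel(const Vec3& eye, const Vec3& viewDir, double fovyDegrees,
                               int width, int height, int px, int py) {
        const double PI = 3.14159265358979323846;

        // Half-extents of the image plane at distance 1: the horizontal field of
        // view comes from the vertical field of view and the aspect ratio.
        double halfH = std::tan(fovyDegrees * PI / 360.0);
        double halfW = halfH * (double)width / (double)height;

        // Camera basis: local X is perpendicular to the view direction and the
        // global up vector (0,1,0); local Y is perpendicular to local X and the
        // view direction.
        Vec3 forward = normalize(viewDir);
        Vec3 right   = normalize(cross(forward, Vec3{0.0, 1.0, 0.0}));
        Vec3 up      = cross(right, forward);

        // Percentage across the image plane, remapped to [-1, 1] through the
        // pixel center (flip t if your rows are numbered top to bottom).
        double s = 2.0 * (px + 0.5) / width  - 1.0;
        double t = 2.0 * (py + 0.5) / height - 1.0;

        Vec3 dir = forward + right * (s * halfW) + up * (t * halfH);
        return Ray(eye, dir);   // the Ray sketch normalizes the direction
    }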


Intersection Testing and Shading

Once you can calculate the camera ray through each pixel, you need to intersect that ray against all objects in the scene and save the nearest intersection, if any. This can be done by iterating over each object and calling its intersectWithRay() function -- but you will need to implement these functions for the Sphere and Plane classes. Start with Sphere and move on to Plane after that is confirmed to work.
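
For reference, here are sketches of both tests, structured the way intersectWithRay() is described above (they assume the IntersectionData sketch and a unit-length ray direction; the base code's member layout may differ). The sphere test solves the usual quadratic; the plane test solves for the single parameter t.

    #include <cmath>

    // Ray-sphere: with a unit direction d and oc = origin - center, the hit
    // parameters are t = -dot(d, oc) +/- sqrt(dot(d, oc)^2 - (dot(oc, oc) - r^2)).
    IntersectionData intersectSphere(const Ray& ray, const Vec3& center, double radius) {
        IntersectionData result;                        // wasValidIntersection starts false
        Vec3   oc = ray.origin() - center;
        double b  = dot(ray.direction(), oc);
        double c  = dot(oc, oc) - radius * radius;
        double discriminant = b * b - c;
        if (discriminant < 0.0) return result;          // ray misses the sphere

        double t = -b - std::sqrt(discriminant);        // nearer root first
        if (t < 0.0) t = -b + std::sqrt(discriminant);  // ray origin is inside the sphere
        if (t < 0.0) return result;                     // sphere entirely behind the ray

        result.wasValidIntersection = true;
        result.intersectionPoint    = ray.origin() + ray.direction() * t;
        result.surfaceNormal        = normalize(result.intersectionPoint - center);
        // result.material would be copied from the Sphere's Material here.
        return result;
    }

    // Ray-plane: the plane is defined by a point on it and its normal.
    IntersectionData intersectPlane(const Ray& ray, const Vec3& pointOnPlane, const Vec3& normal) {
        IntersectionData result;
        double denom = dot(ray.direction(), normal);
        if (std::fabs(denom) < 1e-9) return result;     // ray is parallel to the plane
        double t = dot(pointOnPlane - ray.origin(), normal) / denom;
        if (t < 0.0) return result;                     // plane is behind the ray
        result.wasValidIntersection = true;
        result.intersectionPoint    = ray.origin() + ray.direction() * t;
        result.surfaceNormal        = normalize(normal);
        return result;
    }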

Once you know a ray has intersected with an object, and your intersectWithRay() function has appropriately filled in the intersection point, surface normal, and material in the IntersectionData object, you must compute a final color for the pixel. You must do this using the Phong reflectance model as discussed in class. Shading a pixel requires iterating over all lights and summing the contribution from each light. The diffuse intensity is the dot product of the surface normal and light direction (or 0, if negative), and this component is multiplied by the diffuse colors of the material and the light. The specular term is likewise computed as in the slides: the dot product of the viewing vector (along the camera ray) and the reflection vector, raised to the shininess exponent. Ambient terms are simply added to the point (light ambient * material ambient). Note that the Point class included with the base code overloads the * operator for two points to mean the element-wise product of those two vectors (which you will likely use a lot in your lighting calculations).
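
A sketch of that light loop follows (the PointLight stand-in and its field names are assumptions; V here points from the surface point back toward the camera, and the shadow test from the next section would gate the diffuse and specular terms):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Stand-in for the base code's PointLight (field names are assumptions).
    struct PointLight { Vec3 position, ambient, diffuse, specular; };

    Vec3 phongShade(const IntersectionData& hit, const Vec3& eye,
                    const std::vector<PointLight>& lights) {
        Vec3 color = {0.0, 0.0, 0.0};
        Vec3 N = hit.surfaceNormal;
        Vec3 V = normalize(eye - hit.intersectionPoint);           // toward the camera

        for (size_t i = 0; i < lights.size(); ++i) {
            Vec3 L = normalize(lights[i].position - hit.intersectionPoint);

            // Ambient: always contributes (element-wise Vec3 * Vec3).
            color = color + lights[i].ambient * hit.material.ambient;

            // Diffuse: cosine of the angle between normal and light, clamped at 0.
            double diff = std::max(0.0, dot(N, L));
            color = color + lights[i].diffuse * hit.material.diffuse * diff;

            // Specular: reflect L about N, compare against the view vector,
            // raise to the shininess exponent.
            Vec3 R = N * (2.0 * dot(N, L)) - L;
            double spec = std::pow(std::max(0.0, dot(R, V)), hit.material.shininess);
            color = color + lights[i].specular * hit.material.specular * spec;
        }
        return color;    // clamp to [0, 1] before writing it into the framebuffer
    }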


Shadows and Reflections


Before computing the diffuse and specular contributions from a light, you must determine whether that point is in shadow. To do this, cast a ray whose origin is the intersection point and whose direction points toward that light, and test it for intersection against the scene. (NOTE: to prevent rounding errors, move the intersection point a small amount along the direction vector before testing.) If there are any intersections, the point should receive only the ambient component of the lighting equation.
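
In sketch form (EPSILON and the helper intersectRayAgainstScene, suggested under "Structuring Your Code" below, are assumptions):

    #include <vector>

    // Returns true if anything in the scene lies along the ray toward the light.
    bool isInShadow(const IntersectionData& hit, const Vec3& lightPosition,
                    const std::vector<Shape*>& shapes) {
        const double EPSILON = 1e-4;     // offset to avoid re-hitting the surface we started on
        Vec3 toLight = normalize(lightPosition - hit.intersectionPoint);
        Ray  shadowRay(hit.intersectionPoint + toLight * EPSILON, toLight);
        return intersectRayAgainstScene(shadowRay, shapes).wasValidIntersection;
    }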

Lastly, you will notice that the Material class has one extra parameter -- reflectivity. When shading a point, if the reflectivity of an object is greater than zero, you must cast a reflection ray to determine the reflected color. You can compute the direction of this reflected ray using the equation direction - 2*normal*dot(direction, normal), and the same note as with shadow rays applies: you should move the intersection point a small amount along the reflection vector to avoid rounding errors. After finding the reflected color, you must compute the final color of the point as reflectivity * reflectedColor + (1 - reflectivity) * phongColor.
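
The reflection direction itself is one line; the commented-out usage below shows where it might plug into a shadeIntersection()-style function (the surrounding names and EPSILON are assumptions, as before):

    // Reflect an incoming direction about a unit surface normal.
    Vec3 reflect(const Vec3& direction, const Vec3& normal) {
        return direction - normal * (2.0 * dot(direction, normal));
    }

    // Inside the shading function, after phongColor has been computed:
    //   if (hit.material.reflectivity > 0.0) {
    //       Vec3 reflDir   = reflect(ray.direction(), hit.surfaceNormal);
    //       Ray  reflRay(hit.intersectionPoint + reflDir * EPSILON, reflDir);
    //       Vec3 reflColor = shadeIntersection(intersectRayAgainstScene(reflRay, shapes), reflRay);
    //       color = reflColor * hit.material.reflectivity
    //             + phongColor * (1.0 - hit.material.reflectivity);
    //   }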


Structuring Your Code

Since the operations involved with determining shadows and reflections are recursive, it makes sense to write functions to support this. For instance, write a function called intersectRayAgainstScene that intersects a ray against all objects and returns the closest intersection, if any. This will keep your raytraceImage function much cleaner. Additionally, you will want a second function, shadeIntersection, that computes and returns the color for a given intersection point. It makes sense to make this its own function, since you will have to handle shadow rays (use the intersectRayAgainstScene function) and reflection rays (call intersectRayAgainstScene with a reflection ray and then recursively call shadeIntersection to determine the reflected color). For speed, you may want to write a version of intersectRayAgainstScene for shadow rays that returns true as soon as it intersects any object, without bothering to compute the nearest intersection. Additionally, inside of shadeIntersection(), you may want to clamp your color value before returning it, in case it lies outside the range [0,1].
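
A sketch of the intersectRayAgainstScene() helper described above, which walks every shape and keeps whichever valid hit lies closest to the ray origin:

    #include <cmath>
    #include <vector>

    IntersectionData intersectRayAgainstScene(const Ray& ray,
                                              const std::vector<Shape*>& shapes) {
        IntersectionData closest;                      // starts out invalid
        double closestDist = 1e30;                     // effectively "infinity"

        for (size_t i = 0; i < shapes.size(); ++i) {
            IntersectionData hit = shapes[i]->intersectWithRay(ray);
            if (!hit.wasValidIntersection) continue;

            Vec3   offset = hit.intersectionPoint - ray.origin();
            double dist   = std::sqrt(dot(offset, offset));
            if (dist < closestDist) {
                closestDist = dist;
                closest     = hit;
            }
        }
        return closest;
    }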


Part II - Website

Update the webpage that you submitted with the Final Project to include an entry for this extra credit assignment. As usual, include a screenshot (or two) and a brief description of the program, intended to showcase what your program does to people who are not familiar with the assignment.


Documentation

With this and all future assignments, we expect you to appropriately document your code. This includes writing comments in your source code - remember that your comments should explain what a piece of code is supposed to do and why; don't just restate what the code says in plain English. Comments serve the dual purpose of explaining your code to someone unfamiliar with it and assisting in debugging: if you know what a piece of code is supposed to be doing, you can figure out where it's going awry more easily.

Proper documentation also means including a README.txt file with your submission. In your submission folder, always include a file called README.txt that lists:
  • Your Name / netID
  • Homework Number / Project Title
  • A brief, high level description of what the program is / does
  • A usage section, explaining how to run the program, which keys perform which actions, etc.
  • Instructions on compiling your code
  • Notes about bugs, implementation details, etc. if necessary


Grading Rubric

This submission will count towards an additional 1.25% of your overall grade. Your submission will be graded according to the following rubric:

Percentage   Requirement Description
20% Camera rays are computed correctly.
30% Intersection testing is implemented for spheres and planes. Intersection testing includes computation of surface normal.
15% Shading is performed correctly according to the Phong reflectance model.
15% Shadows are implemented correctly.
15% Reflections are implemented correctly.
5% Submission compiles and runs correctly, including Makefile and README.


Submission

Please update your Makefile so that it produces an executable named xc3. When you have completed the assignment, submit the source code, Makefile, and README.txt to the following directory:

/afs/nd.edu/coursefa.12/cse/cse40166.01/dropbox/<afsid>/xc3/

Similarly, title your webpage <afsid>.html (e.g. jpaone.html) and submit it to:

/afs/nd.edu/coursefa.12/cse/cse40166.01/dropbox/<afsid>/www/

Place any screenshots or other images used on the webpage in:

/afs/nd.edu/coursefa.12/cse/cse40166.01/dropbox/<afsid>/www/images/

This extra credit assignment is due by 5:00pm on December 14, 2012.