Raytracing Algorithm in a Nutshell

https://www.scratchapixel.com/lessons/3d-basic-rendering/introduction-to-ray-tracing/raytracing-algorithm-in-a-nutshell

The phenomenon described by Ibn al-Haytham explains why we see objects. Two interesting remarks can be made based on his observations: firstly, without light we cannot see anything, and secondly, without objects in our environment, we cannot see light. If we were to travel through intergalactic space, that is what would typically happen: if there is no matter around us, we cannot see anything but darkness, even though photons are potentially moving through that space.

Forward Tracing

If we are trying to simulate the light-object interaction process in a computer-generated image, then there is another physical phenomenon which we need to be aware of: compared to the total number of rays reflected by an object, only a select few of them will ever reach the surface of our eye. Here is an example. Imagine we have created a light source which emits only one single photon at a time. Now let’s examine what happens to that photon. It is emitted from the light source and travels in a straight-line path until it hits the surface of our object. Ignoring photon absorption, we can assume the photon is reflected in a random direction. If the photon hits the surface of our eye, we “see” the point where the photon was reflected from (figure 1).

Figure 1: countless photons emitted by the light source hit the green sphere, but only one will reach the eye’s surface.

We can now begin to look at the situation in terms of computer graphics. First, we replace our eyes with an image plane composed of pixels. In this case, each emitted photon will hit one of the many pixels on the image plane, increasing the brightness at that point to a value greater than zero. This process is repeated multiple times until all the pixels are adjusted, creating a computer-generated image. This technique is called forward ray-tracing because we follow the path of the photon forward from the light source to the observer.

The problem is the following: in our example, we assumed that the reflected photon always intersected the surface of the eye. In reality, rays are essentially reflected in every possible direction, each of which has a very, very small probability of actually hitting the eye. We would potentially have to cast zillions of photons from the light source to find only one photon that would strike the eye. In nature this is how it works, as countless photons travel in all directions at the speed of light. In the computer world, simulating the interaction of that many photons with objects in a scene is just not a practical solution, for reasons we will now explain.
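To get a feel for just how wasteful this is, here is a small, deliberately naive experiment (a sketch of our own, not code from this lesson; all names and scene values are assumptions made for illustration). We reflect photons off a point on the sphere in uniformly random directions and count how many pass through a small disk standing in for the eye’s pupil:

```cpp
#include <cmath>
#include <cstdio>
#include <cstdlib>

struct Vec3f { float x, y, z; };
static float dot(const Vec3f &a, const Vec3f &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float frand() { return rand() / (float)RAND_MAX; }

// Uniform random direction on the unit sphere (rejection sampling).
static Vec3f randomDirection()
{
    Vec3f d;
    do {
        d = {2*frand() - 1, 2*frand() - 1, 2*frand() - 1};
    } while (dot(d, d) > 1 || dot(d, d) == 0);
    float invLen = 1 / std::sqrt(dot(d, d));
    return {d.x*invLen, d.y*invLen, d.z*invLen};
}

int main()
{
    const int numPhotons = 10000000;     // ten million reflected photons
    const Vec3f hitPoint = {0, 0, -10};  // reflection point, 10 units from the eye
    const Vec3f eye = {0, 0, 0};         // center of the receptor
    const float pupil = 0.01f;           // a generously sized pupil
    int seen = 0;
    for (int i = 0; i < numPhotons; ++i) {
        Vec3f dir = randomDirection();   // a diffuse bounce can go anywhere
        // Does the ray hitPoint + t*dir pass within 'pupil' of the eye?
        Vec3f toEye = {eye.x - hitPoint.x, eye.y - hitPoint.y, eye.z - hitPoint.z};
        float t = dot(toEye, dir);       // closest approach along the ray
        if (t < 0) continue;             // photon heads away from the eye
        Vec3f c = {hitPoint.x + dir.x*t - eye.x,
                   hitPoint.y + dir.y*t - eye.y,
                   hitPoint.z + dir.z*t - eye.z};
        if (dot(c, c) < pupil * pupil) ++seen;
    }
    // Expect only a handful of hits out of ten million.
    printf("%d of %d photons reached the eye\n", seen, numPhotons);
}
```

With these made-up proportions, only a few photons in ten million ever reach the eye; all the remaining work is wasted.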

So you may think: “Do we really need to shoot photons in random directions? Since we know the eye’s position, why not just send the photon in that direction and see which pixel in the image it passes through, if any?” That would certainly be one possible optimization; however, we can only use this method for certain types of material. For reasons we will explain in a later lesson on light-matter interaction, directionality is not important for diffuse surfaces. This is because a photon that hits a diffuse surface can be reflected in any direction within the hemisphere centered around the normal at the point of contact. However, if the surface is a mirror, and does not have diffuse characteristics, the ray can only be reflected in one very precise direction: the mirror direction (something we will learn how to compute later on; a small sketch follows after the next paragraph). For this type of surface, we cannot artificially change the direction of the photon if it is actually supposed to follow the mirror direction, meaning that this solution is not completely satisfactory.
Even if we did decide to use this method with a scene made up of diffuse objects only, we would still face one major problem. We can visualize the process of shooting photons from a light into a scene as spraying light rays (or small particles of paint) onto an object’s surface. If the spray is not dense enough, some areas will not be illuminated uniformly.
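As an aside, the mirror direction mentioned above is simple to compute once you know the surface normal: if I is the incoming direction and N the unit normal at the hit point, the reflected direction is R = I - 2(N.I)N. Here is a minimal sketch (names and test values are our own, not the lesson’s):

```cpp
#include <cstdio>

struct Vec3f { float x, y, z; };

// Reflect the incoming direction I about the unit normal N: R = I - 2(N.I)N.
Vec3f reflect(const Vec3f &I, const Vec3f &N)
{
    float d = 2 * (N.x*I.x + N.y*I.y + N.z*I.z);
    return {I.x - d*N.x, I.y - d*N.y, I.z - d*N.z};
}

int main()
{
    Vec3f I = {1, -1, 0}; // photon coming down at 45 degrees
    Vec3f N = {0, 1, 0};  // normal of a horizontal mirror
    Vec3f R = reflect(I, N);
    printf("mirror direction: (%g, %g, %g)\n", R.x, R.y, R.z); // (1, 1, 0): bounces back up
}
```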

Imagine that we are trying to paint a teapot by making dots with a white marker pen onto a black sheet of paper (consider every dot to be a photon). As we see in the image below, to begin with only a few photons intersect with the teapot object, leaving many uncovered areas. As we continue to add dots, the density of photons increases until the teapot is “almost” entirely covered with photons making the object more easily recognisable.

But shooting 1000 photons, or even X times more, will never truly guarantee that the surface of our object will be totally covered with photons. That’s a major drawback of this technique. In other words, we would probably have to let the program run until we decided that it had sprayed enough photons onto the object’s surface to get an accurate representation of it. This implies that we would need to watch the image as it’s being rendered in order to decide when to stop the application. In a production environment, this simply isn’t possible. Plus, as we will see, the most expensive task in a ray-tracer is finding ray-geometry intersections. Creating many photons from the light source is not an issue, but having to find all of their intersections within the scene would be prohibitively expensive.
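To make that cost concrete, here is what a single ray-geometry intersection test looks like for the simplest possible shape, a sphere, using the common geometric approach (project the sphere’s center onto the ray, then apply Pythagoras). The naming is our own; a later lesson covers this computation properly:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3f { float x, y, z; };
static float dot(const Vec3f &a, const Vec3f &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Geometric ray-sphere test. 'dir' must be normalized; the nearest positive
// hit distance along the ray orig + t*dir is returned in tHit.
bool intersectSphere(const Vec3f &orig, const Vec3f &dir,
                     const Vec3f &center, float radius, float &tHit)
{
    Vec3f L = {center.x - orig.x, center.y - orig.y, center.z - orig.z};
    float tca = dot(L, dir);               // distance to the closest approach
    float d2 = dot(L, L) - tca * tca;      // squared ray-to-center distance
    if (d2 > radius * radius) return false;
    float thc = std::sqrt(radius * radius - d2);
    float t0 = tca - thc;                  // front intersection
    if (t0 < 0) t0 = tca + thc;            // ray starts inside the sphere
    if (t0 < 0) return false;              // sphere is behind the ray
    tHit = t0;
    return true;
}

int main()
{
    float t;
    Vec3f orig = {0, 0, 0}, dir = {0, 0, -1};   // ray shooting down -z
    if (intersectSphere(orig, dir, {0, 0, -10}, 2, t))
        printf("hit at t = %g\n", t);           // prints 8: the sphere's front
}
```

Run a test like this once per photon per object in the scene, and the expense of forward tracing becomes obvious.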

Conclusion: forward ray-tracing (or light tracing, because we shoot rays from the light) makes it technically possible to simulate the way light travels in nature on a computer. However, this method, as discussed, is neither efficient nor practical. In a seminal paper entitled “An Improved Illumination Model for Shaded Display”, published in 1980, Turner Whitted (one of the earliest researchers in computer graphics) wrote:

“In an obvious approach to ray tracing, light rays emanating from a source are traced through their paths until they strike the viewer. Since only a few will reach the viewer, this approach is wasteful. In a second approach suggested by Appel, rays are traced in the opposite direction, from the viewer to the objects in the scene”.
We will now look at this other approach, which Whitted describes.

Backward Tracing
Instead of tracing rays from the light source to the receptor (such as our eye), we trace rays backwards from the receptor to the objects. Because this direction is the reverse of what happens in nature, it is fittingly called backward ray-tracing or eye tracing, because we shoot rays from the eye position (figure 2). This method provides a convenient solution to the flaw of forward ray-tracing. Since our simulations cannot be as fast and as perfect as nature, we must compromise and trace a ray from the eye into the scene. If the ray hits an object, then we find out how much light it receives by throwing another ray (called a light or shadow ray) from the hit point to the scene’s light. Occasionally this “light ray” is obstructed by another object from the scene, meaning that our original hit point is in a shadow; it doesn’t receive any illumination from the light. For this reason, we don’t name these rays light rays but instead shadow rays. In CG literature, the first ray we shoot from the eye into the scene is called a primary ray, visibility ray, or camera ray.
Figure 2: backward ray-tracing. We trace a ray from the eye to a point on the sphere, then a ray from that point to the light source.
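Putting this together, the skeleton of a backward ray-tracer is remarkably short: one primary ray per pixel, one shadow ray per hit. The sketch below (types, scene, and helper names are illustrative assumptions, not the lesson’s code) prints a tiny ASCII rendering of a sphere, with '#' for lit points, '-' for points in shadow, and '.' for the background:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3f { float x, y, z; };
static Vec3f sub(const Vec3f &a, const Vec3f &b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(const Vec3f &a, const Vec3f &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3f normalize(const Vec3f &v) { float l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

struct Sphere { Vec3f center; float radius; };

// Same geometric ray-sphere test as earlier, condensed. 'd' must be normalized.
static bool intersect(const Vec3f &o, const Vec3f &d, const Sphere &s, float &t)
{
    Vec3f L = sub(s.center, o);
    float tca = dot(L, d), d2 = dot(L, L) - tca*tca, r2 = s.radius*s.radius;
    if (d2 > r2) return false;
    float thc = std::sqrt(r2 - d2);
    t = tca - thc;
    if (t < 0) t = tca + thc; // ray origin is inside the sphere
    return t >= 0;
}

int main()
{
    const int width = 16, height = 16;   // a tiny image, one character per pixel
    Sphere sphere = {{0, 0, -10}, 3};
    Vec3f eye = {0, 0, 0}, light = {10, 10, 0};
    for (int j = 0; j < height; ++j) {
        for (int i = 0; i < width; ++i) {
            // Primary (camera) ray through the center of pixel (i, j).
            Vec3f dir = normalize({(i + 0.5f) / width - 0.5f,
                                   0.5f - (j + 0.5f) / height, -1});
            float t;
            char c = '.';                // background: the primary ray hit nothing
            if (intersect(eye, dir, sphere, t)) {
                // Shadow ray: from the hit point toward the light. The only
                // possible occluder here is the sphere itself.
                Vec3f p = {eye.x + dir.x*t, eye.y + dir.y*t, eye.z + dir.z*t};
                Vec3f toLight = normalize(sub(light, p));
                // Nudge the origin off the surface to avoid self-intersection.
                Vec3f o = {p.x + toLight.x*1e-3f, p.y + toLight.y*1e-3f,
                           p.z + toLight.z*1e-3f};
                float ts;
                c = intersect(o, toLight, sphere, ts) ? '-' : '#';
            }
            putchar(c);
        }
        putchar('\n');
    }
}
```

Note that we only ever cast width × height primary rays plus one shadow ray per hit, instead of the zillions of photons forward tracing would require.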

In this lesson, we have used the term forward tracing to describe the situation where rays are cast from the light, as opposed to backward tracing, where rays are shot from the camera. However, some authors use these terms the other way around: to them, forward tracing means shooting rays from the camera, because it is the most common path-tracing technique used in CG. To avoid confusion, you can also use the terms light tracing and eye tracing, which are more explicit. These terms are more often used in the context of bi-directional path tracing (see the Light Transport section).

Conclusion
In computer graphics, the concept of shooting rays either from the light or from the eye is called path tracing. The term ray-tracing can also be used, but the concept of path tracing suggests that this method of making computer-generated images relies on following the path of light from the light source to the camera (or vice versa). By doing so in a physically realistic way, we can easily simulate optical effects such as caustics or the reflection of light by other surfaces in the scene (indirect illumination). These topics will be discussed in other lessons.