Wicked Billiard

by Christian Engels and Alex Busenius

Video

Image from our scene
View video

Story

Our scene consists of a billiard room where different kinds of billiard can be played. The billiard balls are made of very different-looking materials: some have rough surfaces and some are even completely transparent. Nevertheless, thanks to a very expensive manufacturing process they still have the same mass and friction properties as normal balls, which allows them to be used in all types of professional billiard games.
This exclusive billiard set is for people who want to risk a lot of money and are confident in their abilities.

Let the tournament begin!

Technical part

We used many different shaders for the balls to give each one a unique look. Here is a list of what we used on each ball:

Ball | Texture used | Shader used | Picture
1 | Blue Circle | Phongshader with Mirrorshader | Picture
2 | Red Circle | Mirrorshader with Phongshader for lighting | Picture
3 | Green Circle | Refractiveshader for glass | Picture
4 | No Texture | Refractiveshader for diamond with Phongshader for lighting | Picture
5 | Orange Circle | Cook-Torranceshader with gold-like reflection terms | Picture
6 | Purple Circle | Cook-Torranceshader with metal-like reflection terms and Mirrorshader | Picture
7 | Full Yellow | Phongshader with Bump Mapping | Picture
8 | Yellow Circle | Full Transparency | Picture
9 | No Texture | Dark Transparency | Picture
10 | Only the number | Dark Refractionshader for glass with Phongshader for lighting | Picture
11 | Full Black | Cook-Torranceshader for a polished black ball with Mirrorshader | Picture
12 | No Texture | Cook-Torranceshader with Bump Mapping | Picture
13 | Marble Texture | Light Transparency and Phongshader for lighting | Picture
14 | No Texture | Mirrorshader with Bump Mapping | Picture
15 | Full Orange | Phongshader | Picture
16 | No Texture | Cook-Torranceshader with terms for a gypsum-like material | Picture

Statistics

The Wicked Billiard scene contains 99779 triangles in total: the billiard table is composed of 18722 triangles, the wine glass on the table has 25344 triangles, the small table 11932 triangles, the mirror 6211 triangles, and the rest comes from other furniture and walls.
We used 4 PCs for rendering: an AMD Athlon64 3500+, an AMD Athlon XP 2200+, an Intel Core 2 Duo 2.0 GHz and a Pentium 4 2.4 GHz.
Rendering time for one frame varied from 8 to 15 minutes depending on the machine and frame complexity. Frames with the depth of field effect took 45 to 60 minutes each. Total rendering time for the whole video was approximately 3 days.

Topics

Unified Bump Mapping

Christian Engels

We implemented Unified Bump Mapping by perturbing the normal according to a bump map. "Unified" means that every shader can be bump mapped.
Our scene makes use of bump mapped mirrors, as seen on the main mirror in the room and on one specific ball, and of bump mapping combined with a normal shader on 2 balls.
Here is a picture for the Bump Mapping on a mirror in our scene: Bump Mapped Mirror
Implemented in: Shader

Cook-Torrance Shader

Christian Engels

We implemented a Cook-Torrance shader, which is a flexible shading model. It uses a microfacet approximation to represent various kinds of specular reflection for different materials, for example metal or gypsum.
In contrast, the previously implemented Phong shader is restricted to a somewhat plastic look.
You can see this in the scene on different balls which represent gold, metal and gypsum.
Here is a picture of Cook-Torrance applied as gold, metal, gypsum and normal: Cook-Torrance
Implemented in: CookTorranceShader

Depth of Field Camera

Christian Engels

We implemented a Depth of Field Camera by sampling different positions on a circle (the lens aperture) to get the start position of a ray and setting its direction so that the ray intersects the focal plane at the appropriate position. At render time several such samples are taken and averaged.
You can find it in the scene at a point where we change the focus to look first at the white ball and after that at the other balls.
We also used some stratification of the sampling angle to get a better image with fewer rays per Depth of Field sample.
Here is a look at the balls with the Depth of Field Camera: Depth of Field
Implemented in: DepthOfFieldCamera

Multithreading

Christian Engels

We implemented Multithreading to make use of all the cores in modern CPU architectures.
At the moment our multithreading is restricted in that the number of threads must evenly divide the vertical resolution.
We use pthreads and divide the y-resolution of one frame into several bands which are rendered on different cores.
Implemented in: Renderer

SAH-KDTree

Alex Busenius

We have extended the simple KD-tree implementation from MicroTracer with the surface area heuristic. The tree is constructed using the fast O(n log n) algorithm and prefers big empty nodes.
The code was split into an abstract base class implementing the intersection test and derived classes implementing the construction algorithms, to prevent code duplication. We added some custom classes to make the algorithm implementation easier. The KD-Tree is constructed once for each OBJObject after loading it from disk and remains unchanged during rendering of the whole animation.
To benchmark the implementation we used the "Happy Buddha" model from the Stanford 3D Scanning Repository in different resolutions:
N (number of primitives)    | 15536   | 67240    | 293232    | 1087716
N log N                     | 65116.6 | 324609.7 | 1603161.3 | 6566014.5
log N                       | 4.19    | 4.83     | 5.47      | 6.04
SAH KD-Tree:
Build time in s             | 1.64    | 7.51     | 37.96     | 153.54
Render time in s            | 16.34   | 19.87    | 25.57     | 29.74
Build / (N log N) * 1.0e6   | 25.19   | 23.14    | 23.68     | 23.38
Render / log N              | 3.90    | 4.11     | 4.67      | 4.92
Render / log^2 N            | 0.931   | 0.852    | 0.855     | 0.815
Simple KD-Tree:
Build time in s             | 0.39    | 1.27     | 5.39      | 20.35
Render time in s            | 29.06   | 56.64    | 315.96    | 1255.97
Build / (N log N) * 1.0e6   | 5.99    | 3.91     | 3.36      | 3.10
Build / N * 1.0e6           | 25.10   | 18.89    | 18.38     | 18.71
Render / N * 1.0e3          | 1.87    | 0.84     | 1.08      | 1.15
Besides the raw build and render times, you can see the constants for the proposed complexity classes (raw time = c * O(N log N)). If we divide the raw build time by N log N for the SAH KD-Tree, we can observe that the constant remains nearly the same and is not much worse than for the Simple KD-Tree, which is simply limited in depth and has a nearly linear build time. This is a good indicator that the build algorithm is implemented correctly and indeed takes only O(N log N) time.
We can also see that the render time for the SAH KD-Tree is much smaller than for the Simple KD-Tree, especially for large numbers of triangles. While for the SAH KD-Tree the rendering time probably lies between O(log N) and O(log^2 N) (the factor grows slightly when dividing by log N and falls slightly when dividing by log^2 N), for the Simple KD-Tree it grows linearly.
Implemented in: KDTree, SAHKDTree, Node, Event, SplitPlane

Reflection and Refraction

Alex Busenius

We have implemented reflective and refractive transparency with the Fresnel term and adaptive recursion termination, which takes into account both the recursion depth and the ray's influence on the final color. Adaptive recursion termination is implemented in all shaders that spawn secondary rays: MirrorShader, RefractiveShader, TransparentShader and CombineShader.
It turned out that a maximal recursion depth of 100 and a minimal color influence of 1/256 produce no visible artefacts in the images and do not slow down the renderer noticeably, so we used these values for all images and the final video.
The main topic is implemented as a shader which adds a refraction index property under the assumption that each primitive lies on the boundary between the object and air. The inner and outer sides are determined from the orientation of the normal. Light is treated as an unpolarized homogeneous source. Due to the use of the Fresnel term, phenomena like total internal reflection are taken into account naturally.
Refractive billiard ball
Total internal reflection example
Implemented in: RefractiveShader, MirrorShader

Physics Engine

Alex Busenius

We have implemented a physics engine to simulate the motion of the billiard balls. We introduced a new base class for physics-controlled objects called PhysicalObject that holds all necessary properties like velocity, mass, angular velocity etc. This class can also advance these properties to a given time by applying the physical laws of motion and friction.
During motion and collisions the balls are controlled by the appropriate physical laws: they can rotate around arbitrary axes, and rotation and motion influence each other. For example, during a collision with a border a ball does not only bounce back, its rotation is also reflected due to the friction force (see). After the shot, the balls do not rotate at first and pick up their rotation slowly due to inertia etc.
Collisions are calculated in time order; after each collision all objects are updated to ensure correct positions and further collisions, so the objects are not only sampled at frame boundaries but also as often as needed in between.
We experienced some numerical instability in the physics calculations, which caused different results on different computers. To solve this problem, the objects and the physics engine were extended with the ability to save all calculations into a file and load them later instead of recalculating.
Example video of a collision.
The same video in slow motion.
Implemented in: BilliardPhysics, Collision, BallCollision, BorderCollision, PhysicalObject, TableBorder

Code Rewrite

Both

We rewrote the whole MicroTracer code to get a unified interface for all classes. Furthermore, we renamed some classes to better reflect their function: for example, Vec4f is now called RGBAColor, and Vec3f is only used for specifying vectors; where we needed a 3-dimensional color vector we typedefed it. We also made the members private and added access methods. We added a base class called Renderer to simplify the main method and encapsulate everything needed for rendering. We also changed the light sources to automatically calculate shadows and to let the light do the illumination per ray in QuadAreaLight. We added the functionality missing from TexturedSmoothTriangle to Sphere, and defined two logging methods for debug output.
Overall this code is more object oriented.

Combine Shader

Both

We implemented a Combine Shader which combines 2 shaders. It has 2 different modes.
The first mode takes the alpha value of the first shader's result and uses it to interpolate between the first shader and the result of the second shader. The second mode simply adds the two shaders up.
By using another Combine Shader as the first shader, you can stack an arbitrary number of shaders.
You can find this on many balls in the scene. If you look closely, many balls have a light mirror effect. Many other balls, like the fully transparent one, are made with the first mode so that highlights appear correctly on the texture.
Implemented in: CombineShader

Various Enhancements to Object

Alex Busenius

We have split the object class into an abstract base Object class and a subclass that can read the OBJ file format. Furthermore, Object is derived from PhysicalObject, so it can be freely transformed and controlled by the physics engine.
The transformations (translation, rotation, scaling) are represented as matrices; we wrote a Matrix class for this. Each object stores 2 matrices, the transformation matrix and the inverse transformation matrix, which are built up by successive calls to rotate(), scale(), translate() and resetTransform().
The transformation itself does not change any vertices; instead, the ray is transformed into local object coordinates for the intersection test, which minimizes numerical problems during long animations.
Another feature is cheap object copying using reference counting. This allows us to use the same big model read from an OBJ file in different places at virtually no additional cost (such as memory usage). Copied objects can be transformed independently of each other and share the same KD-Tree; the only limitation is that, since they share the same primitives and shaders, they are "hard-linked" together and look the same. We used this for example for the lamps and walls in the final animation.
We also wrote a SphereObject, which is an object containing only a Sphere (which was extended to allow textures etc.). This class has advantages over a sphere OBJ file: much smaller memory usage, better intersection times and a perfectly round form.
Implemented in: PhysicalObject, Object, SphereObject, Matrix

Scene Parser

Alex Busenius

To simplify the design of the scene, we designed a scene file format and wrote a parser for it. Scene files can be specified on the command line; they are read by the SceneBuilder class, which then constructs all needed cameras, lights, textures, shaders and objects, initializes them with each other where needed and assigns them to the Scene. It can also initialize objects with a combination of transformations and construct a list of Animators (at the moment the only animator is PhysicsStep, for making shots). The scene then animates specific objects during rendering.
A scene file is a human-readable text file; all objects, textures, shaders etc. are referenced by name. The parsing itself is done by the classes of the SceneBuilderFactory hierarchy with the help of some helper classes like SkipString and QuotedString.
Implemented in: SceneBuilder, SceneBuilderFactory, Animator, PhysicsStep

Camera Animation and Parser

Christian Engels

We implemented a simple Camera Animation System in which we define different cameras with their start position, their end position and how many steps the camera should take to reach the end position. Along the way the camera's vectors are interpolated, as are the settings of the DepthOfFieldCamera, so we can dynamically change things such as the aperture (seen in the video). We also implemented a set method to activate different cameras in a frame, so that we can render the scene with the PerspectiveCamera and switch to a DepthOfFieldCamera at the appropriate position. Finally, we implemented a stay command that lets the camera remain at a position for a while.
Implemented in: CameraAnimator, CameraInstructions and the inheriting classes of CameraInstructions: CameraInstructionMove, CameraInstructionStay, CameraInstructionNormal and CameraInstructionSet.