Rendering Competition 2007/2008




Animation

Created by:

Torsten Gründel

Name of the animation:

The Venus Garden

Animation Link:

Video File

Used Techniques:

SAH-KDTree, Bump Mapping, Procedural Shading, Depth of Field, Volume Scattering, Textured Light Source

Description:

The first part takes place in front of the garden, where the camera moves through some fog along a sidewalk with lamps on both sides. It ends in front of a gate with two dragon statues; the gate opens and gives a first view into the garden and of the silhouette of a Venus statue. For the dragon statues I used a procedural texture to give them a marble-like look (a rather artificial marble). The lamp model in this part of the animation is taken from http://blender-archi.tuxfamily.org/Main_Page, the dragon statues are a high-resolution model taken from http://graphics.stanford.edu/data/3Dscanrep/.

In the second part, the camera moves through the gate and ends up in a misty garden, where light from above shines through the clouds. The light is a textured light, which simulates clouds in the sky that let the light pass through only in some areas. The intensity and direction of the light source change over time, producing halo effects that illuminate the Venus statue at the center of the second part of the animation. I also used a small aperture here, in order to get a slight depth-of-field effect that makes the light shafts appear a bit blurred. The Venus model was taken from http://graphics.im.ntu.edu.tw/~robin/courses/cg03/model/.

This is a sequence of pictures taken from the animation:
First picture of sequence  Second picture of sequence
Third picture of sequence  Fourth picture of sequence









SAH-KDTree

The SAH-KDTree follows the approach of the KDTree given in the MicroTrace solution, successively subdividing 3D space with splitting planes. The main difference lies in the position of these splitting planes: it is not the median, but is determined by the surface area heuristic (SAH). The heuristic approximates the traversal cost of the node created by a candidate split plane using the surface areas of the resulting child nodes and the number of triangles in each potential child. I used the algorithm by Ingo Wald and Vlastimil Havran, which builds the tree in O(N log N) time.
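The cost estimate can be sketched as follows. This is a minimal illustration of the surface area heuristic, not my actual MicroTrace code; the function names and the cost constants (kTraversal, kIntersect) are assumptions chosen for the example, and the box is assumed to have its minimum corner at the origin:

```cpp
#include <cmath>

// Surface area of an axis-aligned box given its extents.
inline float surfaceArea(float dx, float dy, float dz) {
    return 2.0f * (dx * dy + dx * dz + dy * dz);
}

// SAH cost of splitting a box (extents dx, dy, dz, min corner at the
// origin) at position `split` along `axis` (0=x, 1=y, 2=z), with
// nLeft/nRight triangles falling into the two children. kTraversal and
// kIntersect are assumed constant costs for one traversal step and one
// triangle intersection.
float sahCost(float dx, float dy, float dz, int axis, float split,
              int nLeft, int nRight,
              float kTraversal = 1.0f, float kIntersect = 1.5f) {
    float ext[3]  = {dx, dy, dz};
    float extL[3] = {dx, dy, dz};
    float extR[3] = {dx, dy, dz};
    extL[axis] = split;              // left child covers [0, split)
    extR[axis] = ext[axis] - split;  // right child covers [split, extent)
    float saParent = surfaceArea(dx, dy, dz);
    // Probability that a random ray hitting the parent also hits a child
    // is proportional to the child's surface area.
    float pLeft  = surfaceArea(extL[0], extL[1], extL[2]) / saParent;
    float pRight = surfaceArea(extR[0], extR[1], extR[2]) / saParent;
    return kTraversal + kIntersect * (pLeft * nLeft + pRight * nRight);
}
```

The plane with the lowest such cost is chosen; a split that leaves one side empty gets a low cost because the empty child contributes nothing.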

This table shows the differences in creation and traversal time between the Median-KDTree and my SAH-KDTree. The rendered pictures contained only the given model and had a resolution of 800 x 600. I supersampled the scenes with 16 rays per pixel. The models were taken from http://graphics.stanford.edu/data/3Dscanrep/.



                                   SAH-KDTree creation   SAH-KDTree traversal   Median-KDTree creation   Median-KDTree traversal
bunny.obj (69,451 triangles)       19.918 sec            32.293 sec             1.366 sec                2 min 12.588 sec
dragon.obj (202,520 triangles)     1 min 31.937 sec      1 min 23.584 sec       4.289 sec                11 min 4.056 sec


My implementation uses a new class which provides the methods to generate a set of possible split events for a set of triangles. From these events, the split plane that gives the best result according to the surface area heuristic is determined. The triangles are then distributed into the two children. This is done recursively until splitting the set of triangles would be more costly than building a leaf node that simply stores the triangles. All these calculations follow the pseudocode given in the paper by Ingo Wald and Vlastimil Havran: "On building fast kd-Trees for Ray Tracing, and on doing that in O(N log N)".
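The core of the event-based plane selection is a single sweep over the sorted events, keeping incremental left/right triangle counts. The sketch below shows that sweep under assumed names (SplitEvent, sweepBestPlane); the cost function is passed in, so any SAH variant can be plugged in:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// One candidate split event: a triangle's minimum (START) or maximum
// (END) coordinate along the current axis.
struct SplitEvent {
    float position;
    enum Type { END = 0, START = 1 } type;  // END sorts before START at ties
};

// Sweep the sorted events once, maintaining incremental counts of
// triangles to the left and right of each candidate plane, and return
// the position minimising cost(nLeft, nRight, position).
template <class CostFn>
float sweepBestPlane(std::vector<SplitEvent> events, int nTriangles,
                     CostFn cost, float* bestCostOut) {
    std::sort(events.begin(), events.end(),
              [](const SplitEvent& a, const SplitEvent& b) {
                  return a.position < b.position ||
                         (a.position == b.position && a.type < b.type);
              });
    int nLeft = 0, nRight = nTriangles;
    float bestPos = 0.0f, bestCost = 1e30f;
    for (std::size_t i = 0; i < events.size(); ) {
        float p = events[i].position;
        int endsHere = 0, startsHere = 0;
        while (i < events.size() && events[i].position == p) {
            if (events[i].type == SplitEvent::END) ++endsHere;
            else ++startsHere;
            ++i;
        }
        nRight -= endsHere;   // triangles ending at p leave the right side
        float c = cost(nLeft, nRight, p);
        if (c < bestCost) { bestCost = c; bestPos = p; }
        nLeft += startsHere;  // triangles starting at p join the left side
    }
    if (bestCostOut) *bestCostOut = bestCost;
    return bestPos;
}
```

In the full algorithm the events are sorted once up front and merged during recursion, which is what brings the build down to O(N log N).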


Source Code:

SAH_KDTree.hxx









Bump Mapping

Bump mapping is a technique that makes objects look like they have more geometric detail than they really have. This is achieved by modifying the normals of a surface according to a height map texture. The effect is used frequently in my scene: all surfaces except the dragon and the Venus statues have a height map assigned, so that they look more realistic.

These two pictures show the wall in my scene with and without a height map assigned to the objects. On the left, the same texture that is used to color the object also serves as the height map; on the right, no height map was used.

Wall with bump mapping enabled
Wall with bump mapping disabled


For the implementation I wrote a new shader which, at the hit point of the ray with the object, computes the partial derivatives of the height map at that point. The normal is then perturbed according to these derivative values in the corresponding tangent directions.
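The perturbation step can be sketched like this. It is a simplified illustration of the idea, not the code of my BumpMappingPhongShader: the height map is any callable, the tangent basis is assumed given, and `strength` is an illustrative scale factor:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

inline Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Perturb a surface normal by the finite-difference gradient of a
// height map evaluated at texture coordinates (u, v). tangentU/tangentV
// span the surface at the hit point.
template <class HeightFn>
Vec3 bumpNormal(const Vec3& n, const Vec3& tangentU, const Vec3& tangentV,
                HeightFn height, float u, float v,
                float strength = 1.0f, float eps = 1e-3f) {
    // Partial derivatives of the height map via central differences.
    float dhdu = (height(u + eps, v) - height(u - eps, v)) / (2.0f * eps);
    float dhdv = (height(u, v + eps) - height(u, v - eps)) / (2.0f * eps);
    // Tilt the normal along the slope of the height field in each
    // tangent direction, then renormalize.
    Vec3 p = {n.x + strength * (dhdu * tangentU.x + dhdv * tangentV.x),
              n.y + strength * (dhdu * tangentU.y + dhdv * tangentV.y),
              n.z + strength * (dhdu * tangentU.z + dhdv * tangentV.z)};
    return normalize(p);
}
```

A constant height map leaves the normal unchanged; a sloped one tilts it, which is exactly what makes the flat wall in the pictures above appear structured.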

The Source Code:

BumpMappingPhongShader.hxx









Procedural Shading

Procedural shading means that the color at a point is not looked up in an image texture but determined by a mathematical function. In my case, I used 3D Perlin noise to build a shader that produces marble-like textures on objects: the noise value selects between two colors, which are also weighted by the noise value.

The effect can be seen on the dragon statues in the first part of the scene; the Venus statue uses such a shader as well. The following picture shows the right dragon of the scene on its own. Although the chosen colors are not common in nature, they give a good impression of the capabilities of the procedural shader.


Dragon Statue with procedural shader


My implementation uses a noise generator class which generates 3D Perlin noise. The code is based on the book by Matt Pharr: "Physically Based Rendering". Furthermore, I wrote a shader that uses the noise value to determine which of the colors chosen for the marble is used at a point, and the weight of each color.
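The color blend can be sketched as the classic marble pattern sin(x + a * noise), with the noise value passed in from a Perlin noise generator. The colors and constants below are illustrative, not the ones from my MarbleShader:

```cpp
#include <cmath>

struct Color { float r, g, b; };

// Blend between two marble colors based on a 3D noise value: a sine
// stripe along x, distorted by the noise, gives the vein pattern.
Color marbleColor(float x, float noiseValue,
                  const Color& veins, const Color& base,
                  float frequency = 4.0f, float turbulence = 6.0f) {
    // Map sin(...) from [-1, 1] into a blend weight in [0, 1].
    float w = 0.5f * (1.0f + std::sin(frequency * x + turbulence * noiseValue));
    return {veins.r * w + base.r * (1.0f - w),
            veins.g * w + base.g * (1.0f - w),
            veins.b * w + base.b * (1.0f - w)};
}
```

Without noise this is just a regular stripe pattern; the noise distortion is what breaks the stripes into marble veins.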

The Source Code:

Noise3D.hxx
MarbleShader.hxx









Depth of Field

Depth of field is the effect that only a particular region of a picture is in focus, as is the case with real cameras. With this effect, only objects in a plane defined by the focal length are sharp; all other objects lying in front of or behind that plane of focus are blurred. For this effect, not a pinhole model is used but a camera model with a lens of some aperture, so more than one ray is responsible for the color of a pixel: all rays "going out from the pixel through the lens" meet in one point on the plane of focus. The size of the aperture can have dramatic effects on the image. The following images use different apertures (from left to right: 0.01, 0.05, 0.18).

Scene with DoF and an aperture of 0.01
Scene with DoF and an aperture of 0.18


In order to get good images at large aperture sizes, one has to send a lot of rays per pixel into the scene. The following two images both use an aperture of 0.05, but in the left image 9 rays are sent out per pixel, whereas in the right image 100 rays per pixel are sent into the scene.

Scene with an aperture of 0.05 and 9 rays per pixel
Scene with an aperture of 0.05 and 100 rays per pixel


In my scene I used a very small aperture so that the halo effect in the second part of the animation is not too sharp and the light beams appear a bit more blurred.

Scene with an aperture of 0.05 and 100 rays per pixel


For the implementation I wrote a new camera class, which has an aperture size and a focal length. The focal length is used to determine the plane of focus (in my case along the camera's z-direction). The aperture size is used to distribute all rays that contribute to the color of a pixel evenly on a concentric disk whose radius is the aperture size.
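The concentric disk mapping (after Shirley and Chiu) can be sketched like this; the function name and signature are illustrative, not those of my ConcentricDiskSampleGenerator:

```cpp
#include <cmath>

// Map a point (u, v) in the unit square [0,1)^2 onto the unit disk
// using the concentric mapping, then scale by the aperture radius.
// Unlike the naive polar mapping, this preserves the uniformity of the
// input samples on the disk.
void concentricDiskSample(float u, float v, float aperture,
                          float* lensX, float* lensY) {
    // Map to [-1, 1]^2.
    float sx = 2.0f * u - 1.0f;
    float sy = 2.0f * v - 1.0f;
    if (sx == 0.0f && sy == 0.0f) { *lensX = 0.0f; *lensY = 0.0f; return; }
    const float piOver4 = 0.78539816f, piOver2 = 1.57079633f;
    float r, theta;
    if (std::fabs(sx) > std::fabs(sy)) {   // left/right wedges of the square
        r = sx;
        theta = piOver4 * (sy / sx);
    } else {                               // top/bottom wedges
        r = sy;
        theta = piOver2 - piOver4 * (sx / sy);
    }
    *lensX = aperture * r * std::cos(theta);
    *lensY = aperture * r * std::sin(theta);
}
```

Each lens sample then shoots a ray from its disk position towards the point where the pixel's center ray intersects the plane of focus, so all rays of one pixel converge there.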

The Source Code:

ConcentricDiskSampleGenerator.hxx
PerspectiveLensCamera.hxx









Volume Scattering

Volume scattering simulates the traversal of light rays through volumes and the effects that particles in such a volume have on the ray and thus on the scene. Effects like smoke or fog can be simulated with this technique. Since there are numerous effects that could be applied, and the numerical effort for a complex volume scattering simulation can be extremely high, I focused on two effects: emission and scattering.
These three pictures show the first part of my scene: without any volume, with a volume that only applies emission effects, and finally with an additional volume that also considers scattering. One can see how the lights in the lamps are only really noticeable when they are inside the volume, and how the dragons cannot be seen very clearly when the light rays have to travel through the volume to reach them.

Scene without volumes
Scene with Volume that only has Emission effect
Scene with additional Volume that has Emission and Scattering effects


For the implementation I used two kinds of objects: VolumeIntegrators and VolumeRegions. VolumeIntegrators are added to the scene, and traced rays are intersected with the volumes as if they were objects. The integrators are responsible for applying emission, scattering, and other effects. The VolumeRegions are given to the VolumeIntegrators; they provide the values for the integrators' calculations, and intersection tests are forwarded by the integrators to their regions. I implemented two classes of integrators: one that only considers emission effects (used in the middle picture), and one that considers a single-scattering effect. For the latter, the volume is sampled, and at each sample point a light ray is sent to a light source to determine the scattering influence at that point. This effect is used in the right picture. Single scattering makes it possible to generate volumetric shadows, as can be seen in the following picture.

Sphere throwing volumetric shadow
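The emission-only case can be sketched as a simple ray march. This is an illustration of the principle behind my EmissionOnlyIntegrator, not its actual code; sigma and emission are assumed to be constant along the segment (i.e. a homogeneous region):

```cpp
#include <cmath>

// Ray-march an emissive homogeneous volume along a ray segment
// [t0, t1]: at each step, accumulate the radiance emitted there,
// attenuated by the optical depth already travelled (Beer-Lambert).
// sigma is the extinction coefficient, emission the radiance emitted
// per unit length.
float integrateEmission(float t0, float t1, float sigma, float emission,
                        int nSamples = 64) {
    float step = (t1 - t0) / nSamples;
    float transmittance = 1.0f;
    float radiance = 0.0f;
    for (int i = 0; i < nSamples; ++i) {
        radiance += transmittance * emission * step;  // emitted light reaching the ray origin
        transmittance *= std::exp(-sigma * step);     // attenuation over this step
    }
    return radiance;
}
```

The single-scattering integrator extends this loop: at each sample point it additionally traces a ray to the light source and adds the in-scattered contribution, weighted by the transmittance along both segments.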

I also implemented two kinds of regions: a homogeneous one, which simulates a uniform particle density everywhere, and a density region, which provides a method to assign different particle density values to individual points.
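A typical density function, in the spirit of the ExponentialVolumeRegion listed below, lets the particle density fall off exponentially with height (useful for ground fog). The parameter names and defaults here are illustrative:

```cpp
#include <cmath>

// Height-dependent particle density: density(h) = a * exp(-b * h).
// a sets the density at ground level, b how quickly the fog thins out
// with increasing height.
float exponentialDensity(float height, float a = 1.0f, float b = 2.0f) {
    return a * std::exp(-b * height);
}
```

A homogeneous region is simply the special case that ignores the position and always returns the same density.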

The effect of volume scattering can be seen everywhere in my animation, since the fog in the first part as well as the halo effect in the second part are created using volumes.


The Source Code:
VolumeIntegrator.hxx
SingleScatteringVolumeIntegrator.hxx
SingleScatteringVolumeIntegrator.cxx
EmissionOnlyIntegrator.hxx
EmissionOnlyIntegrator.cxx

VolumeRegion.hxx
VolumeRegion.cxx
HomogeneousVolumeRegion.hxx
DensityVolumeRegion.hxx
NoiseDensityVolumeRegion.hxx
ExponentialVolumeRegion.hxx









Textured Light Source

In order to simulate clouds in the sky that let some light pass through, I implemented a textured light source. Although the class name suggests something else (I forgot to rename it after trying to make a textured quad area light source), it is a point light source that has a direction and a texture assigned to it. The concept follows that of the projective camera model: the direction points to the center of the texture "image", and all light/shadow rays illuminated by this light source are attenuated by the texture values. In my case the texture is a 2D Perlin noise function that produces cloud-like patterns (see the CloudShader in MicroTrace).
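The projective attenuation can be sketched as follows. This is an illustration of the idea, not my TextureQuadAreaLight code; the basis vectors, the unit-distance texture plane, and the coefficient parameter are assumptions for the example:

```cpp
#include <cmath>

struct V3 { float x, y, z; };

inline float dot(const V3& a, const V3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Attenuate the intensity of a point light by a texture projected along
// the light's main direction, like a projective camera in reverse.
// dir/right/up form an orthonormal basis of the light; texture(s, t)
// returns the attenuation factor (e.g. 2D cloud noise); toPoint is the
// vector from the light to the illuminated point.
template <class TexFn>
float texturedLightAttenuation(const V3& toPoint, const V3& dir,
                               const V3& right, const V3& up,
                               TexFn texture, float coefficient = 1.0f) {
    float z = dot(toPoint, dir);
    if (z <= 0.0f) return 0.0f;  // point lies behind the light
    // Project onto the texture plane at unit distance; the main
    // direction maps to the texture center (0.5, 0.5).
    float s = 0.5f + 0.5f * dot(toPoint, right) / z;
    float t = 0.5f + 0.5f * dot(toPoint, up) / z;
    return coefficient * texture(s, t);
}
```

Scaling the coefficient over time is what produces the changing illumination between the two pictures below.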

In my animation I used this light source to simulate a cloud-like shape that attenuates the light beams from above. By modifying a coefficient for the attenuation, I changed the amount of light passing through the attenuation texture of the light source. The left picture shows the illumination at the beginning, the right picture the illumination at the end of the animation and of the manipulation of the attenuation coefficient. (Note that I also shifted the direction of the light source during the animation; this can be seen in the pictures below as well.)

Picture from my animation, when the textured light transformation starts
Picture from my animation, when the textured light transformation ends


The Source Code:

TextureQuadAreaLight.hxx