Persephone HD

Stefan John ( sjohn@graphics.cs.uni-sb.de )


( Preview picture available in high resolution: 1920x1080, 4.1 MB )

The Persephone animation sequence is rendered in High Definition 1080p and
is about 30 seconds long. Due to the high resolution and compression rate, a
dual-core CPU is recommended for video playback. For optimal viewing your monitor
should be capable of Full HD playback and measure at least 24 inches.

Download msmpeg4v2 clip ( AVI, 800x600, ~8Mb )
Download HD1080p clip ( AVI, 1920x1080, 72Mb )

Background Story

A whole civilization wiped out by the dawn of a new ice age, leaving only ruins of a once glorious society.
Total annihilation is the phrase that comes to mind when one thinks of the destructive forces of mother nature
that mankind faces daily, and of the sheer helplessness of technology in the face of such extremes. Despite all that,
Persephone HD shows that human ideals can still endure in such a hostile environment, sleeping, carved out
of pure ice: a statue of undying beauty.

Technical Details

Models and textures composing the scene are courtesy of www.christophhesse.com

Reflective and Refractive Transparency



Reflection and refraction rays are generated to simulate the optical properties of ice;
the statue is made entirely of ice. The amount of reflectance depends on
the viewing angle, so the reflected and refracted contributions are blended via the Fresnel term.
The Persephone HD animation sequence focuses on this effect and is rendered
at high definition resolution to show reflection and refraction
from different viewing angles in every detail.
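The blend could be sketched as follows (a minimal Python sketch; Schlick's approximation stands in for the exact Fresnel term, and the function names and the ice refractive index of 1.31 are illustrative assumptions, not the renderer's actual code):

```python
def schlick_fresnel(cos_theta, n1=1.0, n2=1.31):
    """Schlick's approximation of the Fresnel reflectance.

    cos_theta: cosine of the angle between the incident ray and the
    surface normal; n1/n2: refractive indices of the two media
    (1.31 is a typical value for ice).
    """
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

def shade_ice(reflected, refracted, cos_theta):
    # Blend the radiance carried by the reflection and refraction rays
    # using the view-dependent Fresnel weight.
    kr = schlick_fresnel(cos_theta)
    return kr * reflected + (1.0 - kr) * refracted
```

At grazing angles (cos_theta near 0) the weight approaches 1, so the surface becomes almost purely reflective, which is what makes the ice shimmer when viewed edge-on.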

Depth of Field



Depth of field is an important effect for simulating real camera lenses, which never produce
perfectly sharp pictures. A well-placed point of focus directs the viewer's eyes
to the truly important parts of the image, since in-focus objects appear more attractive to the eye than
out-of-focus ones. To maximize rendering performance while sustaining visual quality,
the effect was implemented as a post-processing step: the picture is filtered with a Gaussian
blur kernel and then combined with the original image according to the depth buffer and
the focal depth.
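The post-process could look roughly like this (a Python sketch over a single 1D row of pixels for brevity; all names and the linear blur falloff are illustrative assumptions, not the renderer's actual implementation):

```python
import math

def gaussian_kernel(radius, sigma):
    # Build a normalized 1D Gaussian blur kernel.
    weights = [math.exp(-(i * i) / (2.0 * sigma * sigma))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def blur_1d(row, kernel):
    # Convolve a row of pixel values with the kernel, clamping at borders.
    radius = len(kernel) // 2
    out = []
    for x in range(len(row)):
        acc = 0.0
        for k, w in enumerate(kernel):
            xi = min(max(x + k - radius, 0), len(row) - 1)
            acc += w * row[xi]
        out.append(acc)
    return out

def depth_of_field(row, blurred, depth, focal_depth, focus_range):
    # Blend sharp and blurred pixels by how far each depth sample lies
    # from the focal plane (0 = in focus, 1 = fully blurred).
    out = []
    for sharp, soft, d in zip(row, blurred, depth):
        blur = min(abs(d - focal_depth) / focus_range, 1.0)
        out.append((1.0 - blur) * sharp + blur * soft)
    return out
```

A pixel whose depth matches the focal depth keeps its original sharp value; pixels far from the focal plane take the fully blurred value.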

Tone Mapping



While creating textures for virtual scenes and setting material attributes and colors,
it is not always possible to keep complete track of the final overall color composition.
That is why it is important to use one of the many tone-mapping algorithms to automatically
fine-tune overall image attributes such as color temperature, contrast, and brightness.
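The text does not say which operator was used; as one common example, the extended Reinhard operator could be sketched like this (function names and the luminance weights are illustrative assumptions):

```python
def reinhard(luminance, white=4.0):
    # Extended Reinhard tone-mapping operator: compresses [0, inf)
    # luminance into [0, 1]; an input equal to `white` maps exactly to 1.
    return luminance * (1.0 + luminance / (white * white)) / (1.0 + luminance)

def tone_map_pixel(rgb, white=4.0):
    # Scale the RGB triple so its luminance matches the tone-mapped value
    # (Rec. 709 luminance weights).
    lum = 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]
    if lum == 0.0:
        return rgb
    scale = reinhard(lum, white) / lum
    return tuple(c * scale for c in rgb)
```

Applying the curve to luminance rather than to each channel separately preserves hue while compressing brightness.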

Normal Map Bump Mapping



Normal maps are today's standard for adding small surface details without increasing
geometry complexity. Persephone HD uses tangent space normal maps to add details
to the background geometry. The tangent space normals are transformed into world space
and used for the lighting equation instead of the original interpolated triangle normals.
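The transformation described above could be sketched as follows (a minimal Python version; the function names and per-vertex TBN representation are illustrative assumptions):

```python
import math

def decode_normal(rgb):
    # Map a [0, 1] RGB normal-map texel back to a [-1, 1]
    # tangent-space vector.
    return tuple(2.0 * c - 1.0 for c in rgb)

def tangent_to_world(n_ts, tangent, bitangent, normal):
    # Transform a tangent-space normal into world space using the
    # per-vertex TBN basis (each basis vector given in world space),
    # then renormalize. The result replaces the interpolated triangle
    # normal in the lighting equation.
    tx, ty, tz = n_ts
    world = tuple(tx * t + ty * b + tz * n
                  for t, b, n in zip(tangent, bitangent, normal))
    length = math.sqrt(sum(c * c for c in world))
    return tuple(c / length for c in world)
```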

Hemispheric Lighting



Hemispheric lighting is an efficient way to approximate image-based lighting methods.
Instead of sampling an HDR probe, two colors for the sky and ground are linearly interpolated
to achieve a similar effect. See reference [1] for more information concerning hemispheric lighting.
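The interpolation could be sketched like this (a minimal Python sketch; the function name and the choice of y as the "up" axis are illustrative assumptions):

```python
def hemisphere_light(normal, sky, ground):
    # Blend between the ground and sky colors based on how far the
    # world-space surface normal points towards the sky.
    # normal is assumed unit length; its y component is "up".
    t = 0.5 * normal[1] + 0.5          # map [-1, 1] to [0, 1]
    return tuple((1.0 - t) * g + t * s for s, g in zip(sky, ground))
```

A normal pointing straight up receives the full sky color, one pointing straight down the full ground color, and everything in between a smooth mix.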

Film Grain



To avoid perfect-looking synthetic pictures, it is necessary to add a certain degree
of imperfection to the final render result. A slight percentage of added noise makes
a picture look more realistic.
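A minimal sketch of such a grain pass (the function name, uniform noise distribution, and default strength are illustrative assumptions):

```python
import random

def add_film_grain(pixels, strength=0.04, seed=None):
    # Add a small amount of zero-mean uniform noise to every value
    # and clamp the result back into the displayable [0, 1] range.
    rng = random.Random(seed)
    return [min(max(p + rng.uniform(-strength, strength), 0.0), 1.0)
            for p in pixels]
```

For an animation the seed would typically change every frame so that the grain flickers like real film stock instead of looking like a static overlay.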

Alpha Tracing



One way to achieve interesting-looking ray traced shadows is to use the interleaved
alpha channel of an object's diffuse texture. The scene's geometric complexity can stay
low while the overall visual appearance improves. In Persephone HD the shadows cast
by the broken glass windows are produced by varying alpha values in the color map.
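The core idea could be sketched like this (a minimal Python version assuming a shadow ray has already collected the alpha values of the surfaces it intersects; the function name is mine):

```python
def shadow_attenuation(alphas):
    # A shadow ray passing through several alpha-textured surfaces is
    # attenuated by the opacity of each surface it hits: alpha 1 blocks
    # the light completely, alpha 0 lets it pass untouched.
    light = 1.0
    for alpha in alphas:
        light *= 1.0 - alpha
    return light
```

Fully transparent texels thus cast no shadow at all, while the opaque parts of the broken windows block the light, without any extra geometry.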

Stereo Rendering



A nice stereo effect can be achieved by rendering each frame with two cameras
that are slightly moved apart but focus on the same point, then combining the two
renders into different color channels of an anaglyph picture.
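The per-pixel combination step could be sketched as follows (a minimal Python sketch of the classic red-cyan encoding; the function name is an illustrative assumption):

```python
def anaglyph(left_rgb, right_rgb):
    # Classic red-cyan anaglyph: the red channel comes from the left
    # eye's render, green and blue from the right eye's render.
    return (left_rgb[0], right_rgb[1], right_rgb[2])
```

Viewed through red-cyan glasses, each eye then sees only its own render, producing the depth illusion.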

Screen Space Ambient Occlusion



Screen space ambient occlusion is a post-processing effect that approximates
ambient occlusion by running several processing steps over the depth buffer,
achieving similar occlusion effects without actually tracing multiple rays. Due to
visible artifacts caused by SSAO, the effect was deactivated for the final animation
sequence. See references [2,3,4] for additional information.
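A deliberately crude sketch of the idea (a Python version that merely counts closer neighbours in the depth buffer; real SSAO implementations sample in a hemisphere and weight by distance, and all names and parameters here are illustrative assumptions):

```python
def ssao(depth, x, y, radius=1, bias=0.01):
    # Estimate ambient occlusion for one pixel by counting how many
    # neighbouring depth-buffer samples lie in front of the centre
    # sample. Returns 1.0 for unoccluded, 0.0 for fully occluded.
    # `depth` is a 2D list of view-space depths.
    h, w = len(depth), len(depth[0])
    centre = depth[y][x]
    occluded, total = 0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                total += 1
                if depth[ny][nx] < centre - bias:
                    occluded += 1
    if total == 0:
        return 1.0
    return 1.0 - occluded / total
```

The banding and halo artifacts that led to the effect being disabled typically stem from exactly this kind of fixed, small sampling pattern.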

Camera Animation

The camera movement is done by interpolating two Catmull-Rom splines: one for the
camera's position, the other for the camera's viewing direction.
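The spline evaluation could be sketched like this (a minimal Python version of the standard uniform Catmull-Rom segment; the function names are illustrative assumptions):

```python
def catmull_rom(p0, p1, p2, p3, t):
    # Evaluate one Catmull-Rom spline segment between control points
    # p1 and p2 for t in [0, 1]; p0 and p3 shape the tangents so that
    # consecutive segments join smoothly.
    t2, t3 = t * t, t * t * t
    return 0.5 * (2.0 * p1
                  + (p2 - p0) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t2
                  + (3.0 * p1 - 3.0 * p2 + p3 - p0) * t3)

def catmull_rom3(a, b, c, d, t):
    # Component-wise evaluation for 3D points such as the camera
    # position or its look-at target.
    return tuple(catmull_rom(a[i], b[i], c[i], d[i], t) for i in range(3))
```

Because the curve passes exactly through its control points, the animator only has to place key positions and look-at targets; the spline fills in a smooth path between them.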

Fun Stuff


Depth value visualization used to perform depth of field as post processing effect.
( AVI, 800x600, 6Mb )


Number of intersection tests per kd-tree node visualized with spectral colors.
( AVI, 800x600, 28.5Mb )


Visualized acceleration structure traversal steps for primary rays.
( AVI, 800x600, 11.6Mb )


Color map only display of the scene.
( AVI, 800x600, 20.6Mb )


Tangent space normal map visualization.
( AVI, 800x600, 12.8Mb )

References

[1] MSDN - Hemispheric Lighting
[2] GPU Screen Space Ambient Occlusion
[3] Screen Space Ambient Occlusion with Blender and Composite Nodes
[4] Wikipedia - Screen Space Ambient Occlusion

Links

- quantic3D
- ChristophHesse.com
- Max Planck Institute for Computer Science
- Computer Graphics Lab - Saarland University
- Saarland University



(C) Stefan John 2008