Deep Screen Space

Oliver Nalbach     Tobias Ritschel     Hans-Peter Seidel

MPI Informatik

A scene rendered with subsurface scattering simulated using the deep screen space method.

Abstract

Computing shading such as ambient occlusion, subsurface scattering or indirect light in screen space has recently received a lot of attention. While efficient to compute, screen space methods suffer from several key limitations: occlusions, culling, under-sampling of oblique geometry, and locality of the light transport. In this work we propose a deep screen space to overcome all these problems while retaining computational efficiency. Instead of projecting, culling, shading, rasterizing and resolving occlusions of primitives using a z-buffer, we adaptively tessellate them into surfels, in numbers proportional to each primitive's projected size; the surfels are optionally shaded and stored on the GPU as an unstructured surfel cloud. Objects closer to the camera receive more detail, as in a classic framebuffer, but are not affected by occlusion or viewing angle. This surfel cloud can then be used to compute shading: instead of gathering, we propose splatting into a multi-resolution interleaved framebuffer, which exchanges detailed shading between pixels close to a surfel and approximate shading between pixels farther away.
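To make the two core quantities of the abstract concrete, below is a minimal C++ sketch of (a) choosing a surfel count proportional to a primitive's projected size and (b) picking the resolution level of the interleaved framebuffer a surfel splats into. The function names, parameters and the distance-based level heuristic are illustrative assumptions for exposition, not the paper's actual implementation.

#include <algorithm>
#include <cmath>

// Hypothetical sketch: number of surfels for a primitive, proportional to
// its projected size. Unlike a z-buffer pipeline, there is no occlusion or
// back-face test, so hidden and oblique geometry still receives surfels.
int surfelCount(float worldArea, float distanceToCamera,
                float surfelsPerPixel, float pixelsPerUnitAtUnitDistance)
{
    // Approximate projected area in pixels: world-space area scaled by
    // the squared screen magnification at the primitive's distance.
    float projectedPixels = worldArea
        * pixelsPerUnitAtUnitDistance * pixelsPerUnitAtUnitDistance
        / (distanceToCamera * distanceToCamera);
    return std::max(1, (int)std::ceil(surfelsPerPixel * projectedPixels));
}

// Hypothetical sketch: choose the interleaved-framebuffer level a surfel
// splats into, so nearby receiver pixels get a detailed (fine) splat and
// distant receivers an approximate (coarse, cheaper) one.
int splatLevel(float surfelToReceiverDistance, float surfelRadius,
               int numLevels)
{
    float ratio = surfelToReceiverDistance / surfelRadius;
    int level = (int)std::floor(std::log2(std::max(ratio, 1.0f)));
    return std::min(level, numLevels - 1);
}

In this sketch, doubling the surfel-to-receiver distance moves the splat one level coarser, which mirrors the intent of exchanging detailed shading nearby and approximate shading far away.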

Supplemental Video

The supplemental video can be found on YouTube.

Citation

Oliver Nalbach, Tobias Ritschel, Hans-Peter Seidel
Deep Screen Space
Proc. i3D 2014

@inproceedings{Nalbach:2014:DeepScreenSpace,
    author    = {Oliver Nalbach and Tobias Ritschel and Hans-Peter Seidel},
    title     = {Deep Screen Space},
    booktitle = {I3D '14: Symposium on Interactive 3D Graphics and Games},
    publisher = {ACM},
    year      = {2014},
    isbn      = {978-1-4503-2717-6}
}

Acknowledgements

We would like to thank Oskar Elek for the video voice-over.