Robert Herzog, firstname.lastname@example.org
Martin Čadík, email@example.com
Tunç Ozan Aydın, firstname.lastname@example.org
Kwang In Kim, email@example.com
Karol Myszkowski, firstname.lastname@example.org
Hans-Peter Seidel, email@example.com
Synthetically generating images and video frames of complex 3D scenes with photo-realistic rendering software is often prone to artifacts and requires expert knowledge to tune the rendering parameters. The manual work required for detecting and preventing artifacts can be automated through objective quality evaluation of synthetic images. Most practical objective quality assessment methods for natural images rely on a ground-truth reference, which is often not available in rendering applications. While general-purpose reference-less image quality assessment is a difficult problem, as we also show in a subjective study, the performance of the reference-less metric presented in this paper matches that of state-of-the-art metrics that require a reference. This level of predictive power is achieved by exploiting information about the underlying synthetic scene (e.g., 3D surfaces, textures) instead of merely considering color, and by training our learning framework with typical rendering artifacts. We show that our method successfully detects various non-trivial types of artifacts, such as noise and clamping bias due to insufficient virtual point light sources, and shadow map discretization artifacts. We also briefly discuss an inpainting method for automatic correction of detected artifacts.
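To illustrate the core idea of predicting artifact visibility from per-pixel scene attributes rather than from a reference image, the following is a minimal, hypothetical sketch. It is not the paper's actual learning framework: the feature set (color, depth, normal components) and the plain least-squares regressor are stand-ins chosen only to show the shape of the training/prediction pipeline, where labels come from renderings with known artifacts.

```python
import numpy as np

def train_artifact_regressor(features, labels):
    """Fit a linear map from per-pixel descriptors to artifact magnitude.

    features: (n_pixels, n_features) array of per-pixel scene attributes,
              e.g. color, depth, surface normals (hypothetical stand-ins
              for the scene information discussed in the paper).
    labels:   (n_pixels,) artifact magnitudes obtained from training
              renderings where the artifact is known.
    """
    # Append a constant column so the model includes a bias term.
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    w, *_ = np.linalg.lstsq(X, labels, rcond=None)
    return w

def predict_artifact_map(features, w):
    """Predict a per-pixel artifact map for a new rendering, no reference needed."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return X @ w
```

A real implementation would use a richer, non-linear learner and perceptually motivated features; the sketch only conveys that the metric is trained once on artifact-labeled renderings and then applied to new frames without any ground-truth reference.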
Acknowledgements and Credits: The presented dataset must not be used for commercial purposes without our explicit permission. Please acknowledge use of the dataset by citing the publication.