LPLD: Learning to Predict Localized Distortions in Rendered Images

Martin Čadík,
Robert Herzog,
Rafal Mantiuk,
Radoslaw Mantiuk,
Karol Myszkowski,
Hans-Peter Seidel



In this work, we present an analysis of feature descriptors for objective image quality assessment. We explore a large space of possible features, including components of existing image quality metrics as well as many traditional computer-vision and statistical features. Additionally, we propose new features motivated by human perception, and we analyze visual saliency maps acquired with an eye tracker in our user experiments. The discriminative power of the features is assessed within a machine-learning framework, revealing the importance of each feature for the image quality assessment task. Furthermore, we propose a new data-driven full-reference image quality metric that outperforms current state-of-the-art metrics. The metric was trained on subjective ground-truth data combining two publicly available datasets. For the sake of completeness, we create a new synthetic testing dataset that includes experimentally measured subjective distortion maps. Finally, using the same machine-learning framework, we optimize the parameters of popular existing metrics.
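To illustrate the general idea of a data-driven full-reference metric, the sketch below extracts simple per-pixel features from a reference/distorted image pair and combines them into a localized distortion map. Everything here is a minimal, hypothetical stand-in: the two features (absolute error and local mean error), the function names, and the hand-set weights are illustrative only; in the actual metric the features are far richer and the combination is learned from subjective ground-truth distortion maps.

```python
# Minimal sketch of a data-driven full-reference distortion predictor.
# All names and the hand-set weights are illustrative placeholders; in the
# real metric, the feature weighting is learned from subjective ground truth.

def local_mean(img, x, y, r=1):
    """Mean over a (2r+1)x(2r+1) neighborhood, clamped at image borders."""
    h, w = len(img), len(img[0])
    vals = [img[j][i]
            for j in range(max(0, y - r), min(h, y + r + 1))
            for i in range(max(0, x - r), min(w, x + r + 1))]
    return sum(vals) / len(vals)

def features(ref, dist, x, y):
    """Per-pixel feature vector: absolute error and local mean error."""
    return [abs(ref[y][x] - dist[y][x]),
            abs(local_mean(ref, x, y) - local_mean(dist, x, y))]

def distortion_map(ref, dist, weights=(0.6, 0.4)):
    """Linear feature combination; stands in for a learned regressor."""
    h, w = len(ref), len(ref[0])
    return [[sum(wt * f for wt, f in zip(weights, features(ref, dist, x, y)))
             for x in range(w)]
            for y in range(h)]

if __name__ == "__main__":
    ref = [[0.5] * 4 for _ in range(4)]     # uniform reference image
    dist = [[0.5] * 4 for _ in range(4)]
    dist[1][1] = 0.9                        # localized distortion
    dmap = distortion_map(ref, dist)
    assert dmap[1][1] > dmap[0][0]          # distorted pixel scores higher
```

The map-valued output mirrors the localized nature of the predicted distortions: rather than a single quality score, every pixel receives its own predicted distortion visibility.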


[Paper (pdf)]
[Supplementary Material (html)]
[Presentation slides (pdf)]
[bibTeX entry (bib)]
[LPLD metric source code]   (Available upon request, please contact us if interested)

Acknowledgements and Credits: The presented metric, code, and dataset must not be used for commercial purposes without our explicit permission. Please acknowledge use of the dataset by citing the publication.