Synthesized View Quality Assessment Using Feature Matching and Superpixel Difference

This publication appears in: IEEE Signal Processing Letters
Authors: S. Mahmoudpour and P. Schelkens
Volume: 27
Issue: -
Pages: 1650 - 1654
Publication Year: 2020
Abstract: Depth Image-Based Rendering (DIBR) is the key technique in many multi-view 3D applications to synthesize virtual views from texture and depth information. However, DIBR induces distortions that disturb the visual quality of experience; therefore, image quality assessment (IQA) methods are essential to evaluate the quality of the synthesized views. The characteristics of DIBR-related distortions differ from those of traditional video coding distortions, and conventional objective IQA methods often fail to provide accurate quality predictions for synthesized views. In this letter, we propose a new Full Reference (FR) objective metric for the evaluation of DIBR-synthesized views. We use a feature matching method at feature (key) points of the reference and synthesized images to quantify local differences. Moreover, the global quality loss is computed on shift-compensated views by measuring the gradient difference within image superpixels. Performance evaluation on three public datasets shows the effectiveness of the proposed model. A software release of the proposed method is available at https://gitlab.com/etro/ssdi
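The sketch below is only meant to illustrate the two ingredients named in the abstract, keypoint matching for local differences and a superpixel-wise gradient comparison for global quality loss; it is not the authors' released SSDI code (see the GitLab link above). The choice of ORB features, SLIC superpixels, the shift compensation being omitted, and the final pooling weight are all assumptions made for illustration.

```python
import cv2
import numpy as np
from skimage.segmentation import slic


def local_keypoint_difference(ref_gray, syn_gray, max_matches=200):
    """Match ORB keypoints between the reference and synthesized views and
    return the mean descriptor distance of the best matches (larger value =
    stronger local distortion around feature points). Illustrative only."""
    orb = cv2.ORB_create(nfeatures=1000)
    _, des_r = orb.detectAndCompute(ref_gray, None)
    _, des_s = orb.detectAndCompute(syn_gray, None)
    if des_r is None or des_s is None:
        return 1.0  # no features detected: treat as maximal difference
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_r, des_s), key=lambda m: m.distance)
    matches = matches[:max_matches]
    # Normalize Hamming distances of 256-bit ORB descriptors to roughly [0, 1].
    return float(np.mean([m.distance for m in matches])) / 255.0


def superpixel_gradient_difference(ref_gray, syn_gray, n_segments=400):
    """Segment the reference view into SLIC superpixels and compare the mean
    gradient magnitude per superpixel between the two views (no shift
    compensation here, unlike the paper)."""
    labels = slic(ref_gray, n_segments=n_segments, channel_axis=None)

    def grad_mag(img):
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
        return np.hypot(gx, gy)

    g_ref, g_syn = grad_mag(ref_gray), grad_mag(syn_gray)
    diffs = [abs(g_ref[labels == lab].mean() - g_syn[labels == lab].mean())
             for lab in np.unique(labels)]
    return float(np.mean(diffs))


def distortion_score(ref_bgr, syn_bgr, alpha=0.5):
    """Pool the local and global terms into a single distortion score.
    The weight alpha is an arbitrary placeholder, not the paper's pooling."""
    ref_gray = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2GRAY)
    syn_gray = cv2.cvtColor(syn_bgr, cv2.COLOR_BGR2GRAY)
    d_local = local_keypoint_difference(ref_gray, syn_gray)
    d_global = superpixel_gradient_difference(ref_gray, syn_gray)
    return alpha * d_local + (1.0 - alpha) * d_global
```

A lower score indicates a synthesized view closer to the reference; the actual metric additionally compensates for the spatial shifts typical of DIBR before the superpixel comparison, which this sketch leaves out.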