A fully object-space approach for full-reference visual quality assessment of static and animated 3D meshes
Abstract
3D mesh models are subjected to various geometric operations such as simplification and compression. Several metrics for evaluating the perceived quality of 3D meshes have already been developed; however, most of them do not handle animation and measure only global quality. Therefore, a full-reference perceptual error metric is proposed to estimate the detectability of local artifacts on animated meshes. It is a bottom-up approach that integrates spatial and temporal sensitivity models of the human visual system. The proposed method operates directly in 3D object space and generates a 3D probability map that estimates the visibility of the distortion at each vertex throughout the animation sequence. We have also evaluated our metric on public datasets and compared the results with those of other metrics. The results reveal a promising correlation between our metric and human perception.