Article

Point Cloud Quality Assessment Using a One-Dimensional Model Based on the Convolutional Neural Network

by Abdelouahed Laazoufi 1,*, Mohammed El Hassouni 2 and Hocine Cherifi 3,*
1 Research Laboratory in Computer Science and Telecommunications (LRIT), Faculty of Sciences, Mohammed V University in Rabat, Rabat 1014, Morocco
2 Faculty of Letters and Human Sciences, Mohammed V University in Rabat, Rabat 8007, Morocco
3 Carnot Interdisciplinary Laboratory of Burgundy (ICB) UMR 6303 CNRS, University of Burgundy, 21000 Dijon, France
* Authors to whom correspondence should be addressed.
J. Imaging 2024, 10(6), 129; https://doi.org/10.3390/jimaging10060129
Submission received: 1 April 2024 / Revised: 16 May 2024 / Accepted: 22 May 2024 / Published: 27 May 2024

Abstract: Recent advancements in 3D modeling have revolutionized various fields, including virtual reality, computer-aided diagnosis, and architectural design, emphasizing the importance of accurate quality assessment for 3D point clouds. As these models undergo operations such as simplification and compression, the distortions introduced can significantly impact their visual quality. There is a growing need for reliable and efficient objective quality evaluation methods to address this challenge. In this context, this paper introduces a novel methodology to assess the quality of 3D point clouds using a deep learning-based no-reference (NR) method. First, it extracts geometric and perceptual attributes from distorted point clouds and represents them as a set of 1D vectors. Then, transfer learning is applied to obtain high-level features using a 1D convolutional neural network (1D CNN) adapted from 2D CNN models through weight conversion from ImageNet. Finally, quality scores are predicted through regression using fully connected layers. The effectiveness of the proposed approach is evaluated across diverse datasets, including the subjective point cloud quality assessment database (SJTU-PCQA), the Waterloo point cloud assessment database (WPC), and the ICIP2020 colored point cloud quality assessment database. The outcomes reveal superior performance compared to several competing methodologies, as evidenced by enhanced correlation with mean opinion scores.

1. Introduction

Recently, the utilization of 3D models has seen a significant expansion in various fields, including virtual and mixed reality, computer-aided diagnosis, architecture, and the preservation of cultural heritage. However, when these 3D models undergo operations such as simplification and compression, they can potentially introduce different distortion types that negatively impact the visual quality of 3D point clouds. To tackle this problem, there is a growing demand for robust methods to evaluate perceived quality. Traditionally, assessing distortion levels in 3D models has depended on human observers, which is time-consuming and resource-intensive. To streamline this, objective methods have emerged as a practical solution [1]. These methods involve the implementation of automated metrics that aim to replicate the judgments of an ideal human observer. These metrics can generally be categorized into three groups: full reference (FR) [2,3,4,5,6], reduced reference (RR) [7,8], and no-reference (NR) [9,10,11,12,13,14,15]. Among these, blind methods, which do not rely on reference models, have gained particular significance, especially in real-world applications [16,17,18,19].
A three-dimensional point cloud is a collection of points, each characterized by geometric coordinates and potentially additional attributes such as color, reflectance, and surface normals.
Unlike 2D media, such as images and videos, which are organized in a regular grid, the points in 3D point clouds are scattered throughout space. Therefore, there is a need to explore methods for extracting effective features from these scattered points to assess quality.
To date, only a limited set of metrics for assessing the quality of point clouds without reference, known as NR-PCQA (No-Reference Point Cloud Quality Assessment), have been developed. Chetouani et al. [20] adopted an approach involving extracting hand-crafted features at the patch level and using traditional CNN models for quality regression. PQA-net [9] employs multi-view projection as a method for feature extraction. Zhang et al. [13] took a distinct approach by using various statistical distributions to estimate quality-related parameters based on the distributions of geometry and color attributes. Fan et al. [21] focus on inferring the visual quality of point clouds through the analysis of captured video sequences. Liu et al. [10] utilized an end-to-end sparse CNN to predict quality. Yang et al. [22] extended their efforts by transferring quality information from natural images to enhance the understanding of the quality of point cloud rendering images, employing domain adaptation techniques.
Recently, Convolutional Neural Networks (CNNs) have emerged as the predominant choice for various Computer Vision and Machine Learning tasks. CNNs are feedforward Artificial Neural Networks (ANN) characterized by their convolutional and subsampling layer arrangement. Deep 2D CNNs, with numerous hidden layers and millions of parameters, can learn intricate objects and patterns, particularly when trained on extensive visual datasets with ground-truth labels. When properly trained, this capability positions them as the primary tool for various engineering applications that involve 2D signals, such as images and video frames.
However, this strategy may not always be feasible for numerous applications dealing with 1D signals, particularly when the training dataset is constrained to a specific application. To address this challenge, 1D CNNs have been recently introduced and have rapidly demonstrated cutting-edge performance across multiple domains. These domains include personalized biomedical data classification and early diagnosis, structural health monitoring, anomaly detection, identification in power electronics, and fault detection in electrical motors. Notable technical applications of 1D CNNs include automatic speech recognition, vibration-based structural damage detection in civil infrastructure, and real-time electrocardiogram monitoring [23].
Despite the growing importance of 1D CNNs in various applications, there is currently a gap in the literature on point cloud quality assessment using these networks. In this context, this paper introduces a novel method for evaluating the visual quality of 3D point clouds. Our method revolves around a transfer learning model grounded in a one-dimensional CNN architecture. The main contributions of this paper are summarized as follows:
  • The introduction of a novel methodology that adapts a one-dimensional Convolutional Neural Network (1D CNN) architecture for evaluating the visual quality of 3D point clouds.
  • The design of a 1D CNN network tailored for point clouds by transforming a 2D CNN model into a 1D variant.
  • The incorporation of transfer learning using a pre-trained ImageNet model to initialize the 1D CNN network for point cloud quality evaluation.
The rest of this paper is structured as follows: Section 2 provides an overview of the related work, Section 3 introduces the proposed method, Section 4 presents the experimental setup and the results of a comparative evaluation against alternative solutions, and finally, Section 5 concludes the paper.

2. Related Work

In the literature, the most current PCQA approaches can be broadly grouped into point-based, feature-based, and projection-based methods. A point-based quality metric directly compares the geometry or characteristics between the reference and distorted point clouds, assessing them point by point and establishing necessary point correspondences. The Point-to-Point (Po2Po) [24] and Point-to-Plane (Po2Pl) [25] metrics stand out as the most popular point-based geometry quality evaluation methods. In the Po2Po metric, each point in a degraded or reference point cloud is matched with its nearest corresponding point in the opposite cloud, and subsequently, the Hausdorff distance or Mean Squared Error (MSE) distance is computed for all point pairs. One significant limitation of these metrics is their failure to consider that point cloud points represent the surfaces of objects in the visual scene. Tian et al. [25] introduced Point-to-Plane (Po2Pl) metrics to address this issue. These metrics represent the underlying surface at each point as a plane perpendicular to the normal vector at that specific point. This approach yields smaller errors for points closer to the point cloud’s surface, which is modeled as a plane. Currently, the MPEG-endorsed point cloud geometry quality metrics include Po2Po and Po2Pl and their corresponding Peak Signal-to-Noise Ratio (PSNR) [5]. In addition, Alexiou et al. [26] proposed a Plane-to-Plane (Pl2Pl) metric, which evaluates the similarity between the underlying surfaces associated with corresponding points in the reference and degraded point clouds. In this scenario, tangent planes are estimated for both the reference and degraded points, and the angular similarity between them is examined.
In their work [27], Javaheri et al. introduced a geometry quality metric that relies on the Generalized Hausdorff distance. This metric measures the maximum distance for a specific percentage of data rather than the entire dataset, effectively filtering out some outlier points. The Generalized Hausdorff distance is calculated between two point clouds, and it can be applied to both the Po2Po and Po2Pl metrics. Furthermore, in [28], Javaheri et al. proposed a Point-to-Distribution (Po2D) metric. This metric is based on the Mahalanobis distance between a point in one point cloud and its K nearest neighbors in the other point cloud. They compute the mean and covariance matrix of the corresponding distribution and employ it to measure the Mahalanobis distance between points in one point cloud and their respective set of nearest neighbors in the other point cloud. These distances are then averaged to determine the final quality score. In [29], they presented a joint color and geometry point-to-distribution quality metric. This metric leverages the scale-invariance property of the Mahalanobis distance. In [30], Javaheri et al. proposed resolution-adaptive metrics. These metrics enhance the existing D1-PSNR and D2-PSNR metrics by incorporating normalization factors based on the point cloud’s rendering and intrinsic resolutions.
A feature-based point cloud quality approach computes a quality score by analyzing the differences in local and global features extracted from reference and degraded point clouds. In [31], Meynet et al. introduced the Point Cloud Multi-Scale Distortion metric (PC-MSDM). This metric serves as a measure of the geometric quality of point clouds, drawing its foundations from structural similarity principles and relying on the statistical examination of local curvature.
Viola et al. introduced a quality metric for point clouds using the histogram and correlogram of the luminance component [32]. Then, they integrated the newly proposed color quality metric with the Po2Pl MSE geometry metric (D2) using a linear model. The weighting parameter for this fusion is determined through a grid search approach.
Diniz et al. introduced the Geotex metric, a novel approach based on Local Binary Pattern (LBP) descriptors developed for point clouds, particularly focusing on the luminance component [33]. To apply this metric to point clouds, the LBP descriptor is computed within a local neighborhood corresponding to the K-nearest neighbors of each point in the other point cloud. The histograms of the extracted feature maps are generated for both the reference and degraded point clouds, and are used to calculate the final quality score employing a distance metric, such as the f-divergence [34]. In [35], Diniz et al. presented an extension of the Geotex metric. This extension incorporates various distances, with a notable focus on the Po2Pl Mean Squared Error (MSE) for assessing geometry and the distance between Local Binary Pattern (LBP) statistics [33] for evaluating color. Additionally, Diniz et al. introduced a novel quality metric in their study [36], which calculates Local Luminance Patterns (LLP) based on the K-nearest neighbors of each point in the alternative point cloud.
Meynet et al. introduced the Point Cloud Quality Metric (PCQM) [3]. It integrates geometric characteristics from a previous study [31] with five color-related features, including lightness, chroma, and hue. The PCQM is calculated as the weighted mean of the differences in geometric and color attributes between the reference and degraded point clouds. In another study, Viola et al. [7] presented the first reduced-reference quality metric, which concurrently evaluates geometry and color aspects. The authors extracted seven statistical features, including measures such as mean and standard deviation, from point clouds in reference and degraded states across various domains, including geometry, texture, and normal vectors. This process yielded a total of 21 features. The reduced quality score is calculated as the weighted average of the differences in all these features between the reference and degraded point clouds.
Inspired by the SSIM quality metric designed for 2D images, Alexiou et al. introduced a quality metric in [37] that utilizes local statistical dispersion features. These statistical characteristics are derived within a local neighborhood surrounding each point within the reference and degraded point clouds, including four distinct attributes: geometry, color, normal vectors, and curvature information. The final quality metric is derived by aggregating the differences in feature values between corresponding points in the reference and degraded point clouds. In [6], Yang et al. proposed a quality metric based on graph similarity. They identify key points by resampling the reference point cloud and construct local graphs centered at these key points for both the reference and degraded point clouds. Several local similarity features are then computed based on the graph topology, with the quality metric value corresponding to the degree of similarity between these features. Additionally, in [38], Diniz et al. extracted local descriptors that capture geometry-aware texture information from the point clouds. These descriptors include the Local Color Pattern (LCP) and various adaptations of the Local Binary Pattern (LBP) descriptor. The statistics of these descriptors are computed and used to determine the objective quality score.
A quality metric for point clouds that relies on projection involves mapping the 3D reference and degraded point clouds onto specific 2D planes. The quality score is then determined by comparing these projected images using various 2D image quality metrics. The first projection-based point cloud quality metric was introduced by Queiroz et al. in [39]. This metric begins by projecting the reference and degraded point clouds onto the six faces of a bounding cube that includes the entire point cloud. It combines the corresponding projected images and evaluates the 2D Peak Signal-to-Noise Ratio (PSNR) between the concatenated projected images from the degraded and reference point clouds. In [40], Torlig et al. introduced rendering software for visualizing point clouds on 2D screens. This software accomplishes the orthographic projection of a point cloud onto the six faces of its bounding box. Then, a 2D quality metric is applied to the projected images obtained by rendering, both for the reference and degraded point clouds. The final quality score is determined by averaging the results from the six projected image pairs. In [41], Alexiou et al. investigated how the quantity of projected 2D images impacts the correlation between subjective and objective assessments in projection-based quality metrics. The study reveals that even a single view can yield a reasonable correlation performance. Furthermore, they proposed a projection-based point cloud quality metric that assigns weights to the projected images based on user interactions during the subjective testing phase. In [42], the quality metric proposed in [40] is evaluated using different parameters, such as the number of views and pooling functions, to establish benchmarks and assess its performance under various conditions.
In [9], Liu et al. introduced a no-reference quality metric based on deep learning named the Point Cloud Quality Assessment Network (PQA-Net). This method begins by projecting the point cloud into six distinct images which undergo feature extraction through a convolutional neural network. These features are then processed by a distortion-type identification network and a quality vector prediction network to derive the final quality score. In [43], Bourbia et al. utilized a multi-view projection in 2D, which is segmented into patches, in combination with a deep convolutional neural network for evaluating the quality of point clouds.
In [44], Wu et al. introduced two objective quality metrics based on projection: a weighted view projection-based metric and a patch-projection-based metric. In both cases, 2D quality metrics are employed to assess the quality of texture and geometry maps. In particular, the patch-projection-based metric demonstrates a significant performance advantage over the weighted view projection-based metric. In [45], Liu et al. proposed a quality metric for point clouds that leverages attention mechanisms and the principle of information content-weighted pooling. Their proposed metric involves translating, rotating, scaling, and orthogonally projecting point clouds into 12 different views, and it evaluates the quality of these projected images using the IW-SSIM [46] 2D metric.
Point-based methods often prioritize geometry at the point level, neglecting color information. This limitation can be a drawback in situations where color details are significant. Focusing only on geometry may lead to an incomplete evaluation, especially when color plays a crucial role in the overall quality of the content.
The quality of feature-based methods heavily relies on the effectiveness of feature extraction techniques. Inaccurate or inadequate features can result in biased assessments.
Projection-based methods encounter the challenge of unavoidable information loss during the projection process. This loss can affect the accuracy of quality assessment, especially when critical details are compromised. Furthermore, the quality of projected images may be influenced by the angles and viewpoints employed in the projection. This sensitivity can lead to variations in the assessment results based on different projection configurations.

3. Proposed Method

The proposed approach employs transfer learning using a 1D CNN to evaluate the quality of a point cloud. Transfer learning allows leveraging the knowledge of weights and layers from a pre-existing model to speed up the learning process of a new, untrained model. We first transform the 2D CNN model into a 1D CNN variant. Then, we fit the ImageNet weights of the 2D CNN model to the resulting 1D CNN model. Next, we use this model to produce robust features for quality regression. Finally, fully connected (F_C) layers are used as the regression model. The overall structure of the proposed method is depicted in Figure 1 and the architecture of the 1D CNN model is illustrated in Figure 2.

3.1. Geometric and Perceptual Features

3.1.1. Geometry-Based Features

We have selected a set of relevant geometric features to assess the quality of point clouds. These features rely on eigenvalues and eigenvectors, which are calculated for each 3D point based on its neighbors within a specified radius. Given a point $P_c$ and its neighborhood $P_{Ng_m}$, the associated covariance matrix $C_m$ is defined as:

$$C_m = \frac{1}{k} \sum_{i=1}^{k} (P_{c_i} - \bar{P_c})(P_{c_i} - \bar{P_c})^T$$

where $P_{c_i}$ and $\bar{P_c}$ are vectors of dimension 3 × 1, $C_m$ is a matrix of dimension 3 × 3, and $k$ denotes the size of the neighborhood $P_{Ng_m}$. The eigenvectors of the covariance matrix $C_m$ satisfy:

$$C_m \cdot V_n = \lambda_n \cdot V_n, \quad n \in \{1, 2, 3\}$$

where the eigenvectors are denoted by ($V_1$, $V_2$, $V_3$) and the eigenvalues by ($\lambda_1$, $\lambda_2$, $\lambda_3$), such that $\lambda_1 \geq \lambda_2 \geq \lambda_3$.
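As an illustration, the following sketch (not the authors' released code, and assuming a k-nearest-neighbour neighbourhood where the text mentions a radius-based one) computes the per-point covariance matrix and its ordered eigenvalues:

```python
# Hedged sketch: per-point covariance matrix C_m and ordered eigenvalues
# (lambda1 >= lambda2 >= lambda3) from a k-nearest-neighbour neighbourhood.
import numpy as np
from scipy.spatial import cKDTree

def neighborhood_eigenvalues(points: np.ndarray, k: int = 30) -> np.ndarray:
    """points: (N, 3) array of 3D coordinates; returns an (N, 3) array of eigenvalues."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)              # indices of the k nearest neighbours
    eigvals = np.empty((points.shape[0], 3))
    for i, nbr in enumerate(idx):
        nbrs = points[nbr]                        # neighbourhood P_Ngm of point P_c
        centred = nbrs - nbrs.mean(axis=0)        # P_ci - mean of the neighbourhood
        C = centred.T @ centred / k               # 3x3 covariance matrix C_m
        eigvals[i] = np.linalg.eigvalsh(C)[::-1]  # sort descending: lambda1 >= lambda2 >= lambda3
    return eigvals
```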
For each point cloud $PC = \{points\}$, we extract the set of geometric features defined by:

$$F_{geom} = Feat_{geom}(pc_i)$$

where $Feat_{geom}(pc_i)$ signifies the geometry projection function and $pc_i \in points$. The geometric feature formulations [47] are indicated as follows (an illustrative code sketch of these eigenvalue-based descriptors follows the list):
  • Linearity: the characteristic that denotes the degree of resemblance to a straight line:
    $$Lin = \frac{\lambda_1 - \lambda_2}{\lambda_1}$$
  • Planarity: employed to assess the resemblance or similarity to a planar surface:
    $$Plan = \frac{\lambda_2 - \lambda_3}{\lambda_1}$$
  • Anisotropy: employed to demonstrate discrepancies in geometrical characteristics across various directions:
    $$Anis = \frac{\lambda_1 - \lambda_3}{\lambda_1}$$
  • Sphericity: the metric used to quantify the degree of resemblance between the shape of an object and that of a perfect sphere:
    $$Sph = \frac{\lambda_3}{\lambda_1}$$
  • Omnivariance: a geometric descriptor used to quantify the overall variability or diversity of point cloud data in three-dimensional space. It captures the dispersion of points in all directions and provides a measure of the spatial distribution of the points:
    $$Omni = (\lambda_1 \lambda_2 \lambda_3)^{\frac{1}{3}}$$
  • Eigenentropy: a mathematical measure that quantifies the level of disorder or randomness in a dataset, particularly in the context of analyzing 3D point clouds or spatial distributions. It assesses how dispersed or organized the data points are within a given neighborhood or region:
    $$Eigen = -\sum_{i=1}^{3} \lambda_i \ln(\lambda_i)$$
  • Sphere-fit: is often employed in various applications such as computer graphics, computer-aided design (CAD), and computer vision, where finding an accurate and robust approximation of a sphere to a set of scattered points is essential. It plays a significant role in many fields involving 3D point cloud data analysis and manipulation.
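A minimal sketch of how these eigenvalue-based descriptors can be computed is given below (the feature names are illustrative; the sphere-fit descriptor, which requires a least-squares sphere approximation, is omitted):

```python
# Hedged sketch: eigenvalue-based geometric descriptors of Section 3.1.1.
import numpy as np

def geometric_features(eigvals: np.ndarray, eps: float = 1e-12) -> dict:
    """eigvals: (N, 3) array with lambda1 >= lambda2 >= lambda3 for each point."""
    l1, l2, l3 = eigvals[:, 0], eigvals[:, 1], eigvals[:, 2]
    return {
        "linearity":    (l1 - l2) / (l1 + eps),
        "planarity":    (l2 - l3) / (l1 + eps),
        "anisotropy":   (l1 - l3) / (l1 + eps),
        "sphericity":   l3 / (l1 + eps),
        "omnivariance": np.cbrt(l1 * l2 * l3),
        "eigenentropy": -np.sum(eigvals * np.log(eigvals + eps), axis=1),
    }
```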

3.1.2. Perceptual-Based Features

For each point cloud, we extract the set of perceptual features $F_{perc}$ defined by:

$$F_{perc} = Feat_{perc}(pc_i)$$

where $Feat_{perc}(pc_i)$ indicates the perceptual projection function. We have chosen color, curvature (Curv), and saliency (Sal) for perceptual features (a short code sketch follows the list below).
  • Color: This plays a crucial role in evaluating visual quality. In a colored point cloud, each point’s color is directly derived from its color information. Typically, the color information in 3D models is stored using RGB channels. However, the RGB color space has shown limited correlation with human perception. We choose to use the LAB color transformation for color feature projection as a solution. This method has been widely embraced for numerous quality assessment applications [48].
  • Saliency: This is a crucial aspect within the human visual system, involving allocating human attention or eye movements in a given scene. Identifying these remarkably perceptible areas is important in computer vision and computer graphics.
  • Curvature: This refers to the amount by which a curve, surface, or object deviates from being perfectly straight or flat at a specific point:
    $$Curv = \frac{\lambda_3}{\lambda_1 + \lambda_2 + \lambda_3}$$
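The sketch below illustrates the perceptual projection under stated assumptions: colours are converted from RGB to LAB with scikit-image, curvature follows the eigenvalue formula above, and saliency (which requires a separate saliency-estimation model) is left out:

```python
# Hedged sketch: perceptual attributes (LAB colour channels and curvature).
import numpy as np
from skimage.color import rgb2lab   # assumed dependency

def perceptual_features(colors_rgb: np.ndarray, eigvals: np.ndarray,
                        eps: float = 1e-12) -> dict:
    """colors_rgb: (N, 3) RGB values in [0, 1]; eigvals: (N, 3), descending per point."""
    lab = rgb2lab(colors_rgb.reshape(-1, 1, 3)).reshape(-1, 3)   # per-point L, a, b
    l1, l2, l3 = eigvals[:, 0], eigvals[:, 1], eigvals[:, 2]
    curvature = l3 / (l1 + l2 + l3 + eps)                        # surface variation
    return {"L": lab[:, 0], "A": lab[:, 1], "B": lab[:, 2], "curvature": curvature}
```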

3.2. Feature Encoder

We convert the 2D CNN architecture into a 1D CNN version and adjust the weights derived from ImageNet for the 1D CNN model. Then, we produce the features through the following process:
$$F_i = 1D\_CNN(F_{pc})$$

where $F_{pc} \in \{F_{geom}, F_{perc}\}$, $F_i$ represents the produced features for each $F_{pc}$, and $1D\_CNN(\cdot)$ refers to the module responsible for robust feature production using the 1D CNN architecture.
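A minimal sketch of this encoding step is shown below (shapes and channel handling are assumptions; the single feature channel is replicated to match the three-channel ImageNet stem of the converted backbone):

```python
# Hedged sketch: pass each 1-D attribute vector F_pc through the 1D CNN backbone.
import torch

def encode_features(backbone: torch.nn.Module, feature_vectors: list) -> torch.Tensor:
    """feature_vectors: list of 1-D torch tensors, one per geometric/perceptual attribute."""
    backbone.eval()
    embeddings = []
    with torch.no_grad():
        for f in feature_vectors:
            x = f.float().view(1, 1, -1).repeat(1, 3, 1)   # (batch, channels, length)
            embeddings.append(backbone(x).flatten())       # high-level feature F_i
    return torch.cat(embeddings)    # concatenated representation for the regression head
```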

3.3. Convert the 2D CNN Model to a 1D CNN Model

Deep 2D CNNs have played a crucial role in computer vision tasks and have achieved remarkable success in various applications, including image recognition, object detection, facial recognition, and more. The pre-trained models (e.g., VGG, ResNet, MobileNet) have been used to tackle specific tasks, depending on the dataset and problem.
Leveraging the 2D CNN pre-trained model enables the acceleration of deep model training for other tasks through transfer learning [49,50]. This advantage lies in its ability to achieve satisfactory results without requiring large amounts of labeled data or extensive computational resources.
Converting a 2D CNN model to a 1D CNN model typically involves modifying the fully connected layers of the model to handle 1D data instead of 2D data. The original 2D CNN architecture processes 2D images, where each image has height, width, and color channels (e.g., RGB). In contrast, a 1D model processes sequential data, such as text or time series data, with only one dimension. To convert 2D CNN to 1D CNN, we need to adapt the fully connected layers to work with the 1D input. Here is an overview of the followed steps:
  • Remove the last few layers: In a 2D CNN model, the final layers are usually fully connected layers responsible for image classification. We need to remove these layers since they are designed for 2D data.
  • Flatten the output: Since 1D data have only one dimension, we need to flatten the output of the last convolutional layer to convert it into a 1D format.
  • Add new fully connected layers: After flattening, we add new fully connected layers designed to handle 1D data. These layers should have an appropriate number of neurons and activations suitable for the specific task you want to solve.
  • Adjust input data: The input data fed into the model should also be converted to 1D format to match the new architecture.
  • Adjust Output Layer: Finally, the output layer of the 2D model may need to be modified to match the desired output for the 1D model. For example, for regression tasks, it may need to output a single value, while for classification tasks, it may need to produce class probabilities.
  • Transfer Weights: Once the architecture is adjusted, the weights of the 2D model can be transferred to the corresponding layers in the 1D model. However, since some layers may have been removed or modified, care must be taken to ensure the weights are transferred appropriately.
It is important to note that converting a model from 2D to 1D is not always straightforward, especially if the model was initially designed for 2D images. The success of such a conversion heavily depends on the nature of the problem you want to solve with the 1D model.
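The paper does not spell out the exact weight-mapping rule; the hedged sketch below assumes one common choice, collapsing each k × k ImageNet kernel of a pre-trained VGG16 to a length-k kernel by averaging over one spatial axis (PyTorch with torchvision 0.13 or later is assumed; the first layer still expects three input channels):

```python
# Hedged sketch of the 2D-to-1D conversion with ImageNet weight transfer.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

def conv2d_to_conv1d(conv2d: nn.Conv2d) -> nn.Conv1d:
    """Build a Conv1d and fill it with the 2D kernels averaged over one spatial axis."""
    conv1d = nn.Conv1d(conv2d.in_channels, conv2d.out_channels,
                       kernel_size=conv2d.kernel_size[0],
                       stride=conv2d.stride[0], padding=conv2d.padding[0])
    with torch.no_grad():
        conv1d.weight.copy_(conv2d.weight.mean(dim=-1))    # (out, in, k, k) -> (out, in, k)
        if conv2d.bias is not None:
            conv1d.bias.copy_(conv2d.bias)
    return conv1d

def vgg16_to_1d() -> nn.Sequential:
    """1-D feature extractor built from the ImageNet-pretrained VGG16 conv layers."""
    layers = []
    for m in vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features:
        if isinstance(m, nn.Conv2d):
            layers.append(conv2d_to_conv1d(m))
        elif isinstance(m, nn.MaxPool2d):
            layers.append(nn.MaxPool1d(kernel_size=2, stride=2))
        else:                                   # ReLU activations are kept unchanged
            layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)
```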

3.4. Quality Prediction

The feature extraction step permits us to obtain a feature vector that captures the distinctive attributes of the 3D model. For our experiment, we use a four-layer fully connected network ($F\_c$); the associated hyperparameters are presented in Table 1. We integrate the aforementioned features to make predictions about perceptual quality. Finally, the estimated quality scores $Q\_s$ can be calculated as follows:

$$Q\_s = F\_c(F)$$

We minimize the regression loss during each training batch using the Mean Squared Error ($MSE$). The latter keeps the predicted values close to the quality labels, and this relationship can be expressed as follows:

$$Loss_{MSE} = \frac{1}{n} \sum_{i=1}^{n} (Q_{s_i} - MOS_i)^2$$
where n represents the number of distortions present in a given database. The database provides Mean Opinion Scores ( M O S i ) that define the subjective quality assessment, while Q s i represents the objective quality score obtained through a specific method.
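A sketch of the regression head is given below; the hidden-layer sizes are assumptions for illustration (the exact hyperparameters are listed in Table 1), and the MSE criterion corresponds to the loss above:

```python
# Hedged sketch: four-layer fully connected regression head trained with MSE.
import torch
import torch.nn as nn

class QualityRegressor(nn.Module):
    def __init__(self, in_dim: int, hidden=(512, 256, 64)):   # hidden sizes are assumed
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_dim, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
            nn.Linear(hidden[1], hidden[2]), nn.ReLU(),
            nn.Linear(hidden[2], 1),                           # predicted quality score Q_s
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.fc(features).squeeze(-1)

criterion = nn.MSELoss()   # Loss_MSE between predicted scores and MOS labels
```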

4. Experimental Setup

4.1. Databases

In our study, we use three of the most popular databases in the field of point cloud quality assessment: the subjective point cloud assessment database (SJTU-PCQA) [51], the Waterloo point cloud assessment database (WPC) [45], and the ICIP2020 point cloud assessment database (ICIP2020) introduced in [52].
  • The SJTU-PCQA database [51] contains 420 point cloud samples derived from 10 reference point clouds. Each reference point cloud undergoes seven common types of distortions at six different levels. More precisely, the distortions are acquired using compression based on Octree (OT), noise in color (CN), noise in Gaussian geometry (GGN), Downscaling (DS), Downscaling and Color noise (D + C), Downscaling and Geometry Gaussian noise (DCG), and noise in color combined with Gaussian geometry (C + G). However, only 9 reference point clouds and their corresponding distorted samples are publicly available, resulting in 378 (9 × 6 × 7) point cloud samples for our experiment. Mean Opinion Scores (MOSs) are provided within the range of [0,1].
  • The WPC dataset [45] comprises 20 original reference point clouds and 740 altered point clouds created from these references using five distinct forms of distortion. These distortions encompass Downsampling (DS), contamination by Gaussian noise (GN), G-PCC (Trisoup), G-PCC (Octree), and V-PCC.
  • The ICIP2020 database [52] comprises six reference point clouds that incorporate both texture and geometry information. Additionally, it includes 90 distorted versions obtained using three compression methods: G-PCC (Octree), G-PCC (Trisoup), and V-PCC, each at five different quality levels ranging from low to high.
Figure 3 displays the reference samples of the WPC, SJTU-PCQA, and ICIP2020 databases.

4.2. Implementation Parameters

To assess the proposed approach against other learning-based NR-PCQA metrics, we partitioned ICIP2020, SJTU-PCQA, and WPC datasets into training and testing sets. The division of reference point clouds for these three datasets ensured that 80% of the samples were allocated for training, leaving 20% for testing purposes. The Adam optimizer is utilized throughout the training phase, initializing with a learning rate of 1 × 10−4, while maintaining a batch size of 10. Furthermore, the model is trained over 50 epochs. We performed the test using a computer equipped with an Intel (R) Core (TM) i7-11800H @ 2.30 GHz, 32 GB of RAM, and an NVIDIA GeForce RTX 3060 Laptop GPU on the Windows platform.
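A minimal training-loop sketch matching these reported settings is shown below (the model, data loader, and loss are assumed to come from the previous sketches; the 80/20 split by reference content is not reproduced here):

```python
# Hedged sketch: Adam optimizer, learning rate 1e-4, batch size 10, 50 epochs.
import torch

def train(model, train_loader, criterion, epochs=50, lr=1e-4, device="cuda"):
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        model.train()
        for features, mos in train_loader:              # batches of 10 samples
            features, mos = features.to(device), mos.to(device)
            optimizer.zero_grad()
            loss = criterion(model(features), mos)      # MSE against the MOS labels
            loss.backward()
            optimizer.step()
    return model
```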

4.3. Evaluation Metrics

Four evaluation criteria are employed to find the relation between the predicted scores and MOSs. These criteria are the Spearman Rank Correlation Coefficient ( S R C C ), Kendall’s Rank Correlation Coefficient ( K R C C ), Pearson Linear Correlation Coefficient ( P L C C ), and Root Mean Squared Error ( R M S E ). These criteria are defined as follows:
$$PLCC = \frac{\sum_{j=1}^{n_j} (Q_{s_j} - \bar{Q_s})(MOS_j - \bar{MOS})}{\sqrt{\sum_{j=1}^{n_j} (Q_{s_j} - \bar{Q_s})^2 \sum_{j=1}^{n_j} (MOS_j - \bar{MOS})^2}}$$
$$SRCC = 1 - \frac{6 \sum_{j=1}^{n_j} \left( rank(MOS_j) - rank(Q_{s_j}) \right)^2}{n_j (n_j^2 - 1)}$$
$$KRCC = \frac{n_{c_j} - n_{d_j}}{\frac{1}{2}(n_j^2 - n_j)}$$
$$RMSE = \sqrt{\frac{1}{n_j} \sum_{j=1}^{n_j} (Q_{s_j} - MOS_j)^2}$$
where $n_j$ represents the number of distortions present in a given database, and $n_{c_j}$ and $n_{d_j}$ denote the numbers of concordant and discordant pairs in the database. The dataset provides Mean Opinion Scores ($MOS_j$) that define the subjective quality assessment, while $Q_{s_j}$ represents the objective quality score obtained through a specific method.
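These four criteria can be computed directly with SciPy/NumPy, as in the short sketch below (note that PCQA studies often apply a nonlinear logistic mapping to the predicted scores before computing PLCC and RMSE; that step is not shown here):

```python
# Sketch: PLCC, SRCC, KRCC, and RMSE between predicted scores and MOS values.
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

def evaluate(pred: np.ndarray, mos: np.ndarray) -> dict:
    return {
        "PLCC": pearsonr(pred, mos)[0],
        "SRCC": spearmanr(pred, mos)[0],
        "KRCC": kendalltau(pred, mos)[0],
        "RMSE": float(np.sqrt(np.mean((pred - mos) ** 2))),
    }
```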

5. Experimental Results

This section deals with our experimental results, including the performance analysis of the studied networks, the ablation study, results achieved through comparisons with state-of-the-art methods, and cross-database evaluations.

5.1. Network Performance

Convolutional Neural Networks (CNNs) come in various architectures and configurations, each designed for specific tasks and use cases. To explore how CNNs impact performance quality, we execute experiments using six distinct CNN architectures pre-trained on the ImageNet dataset: MobileNet, ResNet, DenseNet, ResNeXt, SE-ResNet, and VGG.
  • ResNet [53]: Kaiming et al. introduced the Residual Neural Network (ResNet). This network was designed to simplify the training process of deep networks by expediting training speeds. Various iterations of ResNet, such as ResNet 18, ResNet 34, ResNet 50, and ResNet 101, among others, have been suggested.
  • ResNeXt [54]: ResNeXt is a convolutional neural network architecture aiming to improve deep learning models’ efficiency and performance. ResNeXt builds upon the Residual Network (ResNet) architecture by introducing “cardinality”.
  • MobileNet [55]: MobileNet is a family of neural network architectures designed explicitly for efficient inference on mobile and embedded devices. MobileNets are known for their lightweight and computationally efficient nature while maintaining reasonable accuracy on various tasks, especially in computer vision.
  • DenseNet [56]: DenseNet, short for Densely Connected Convolutional Network, is a neural network architecture proposed by Huang et al. DenseNet introduces a unique connectivity pattern among layers, aiming to address some limitations of traditional neural network architectures, such as vanishing gradients, feature reusability, and ease of training deeper networks.
  • SEResNet [57]: SEResNet (Squeeze-and-Excitation ResNet) is an extension of the ResNet (Residual Network) architecture that incorporates a mechanism called “Squeeze-and-Excitation” to enhance feature learning and representation.
  • VGG [58]: The VGG network, a deep Convolutional Neural Network (CNN) demonstrated notable success in the ILSVRC 2014 competition. Diverse iterations of VGG, featuring distinct convolutional layer configurations have been created, including VGG11, VGG13, VGG16, and VGG19.
The characteristics of these networks are illustrated in Table 1 and their corresponding results can be found in Table 2 and Table 3.
We evaluate the impact of network architecture and depth on the proposed method’s performance. Residual Networks (ResNets) demonstrate progressive improvement with increasing depth, with ResNet34 achieving better inter-database correlation than ResNet18. However, ResNeXt50 exhibits superior performance on SJTU and ICIP2020 datasets, but struggles on the WPC dataset. MobileNetV2 stands out for its consistent performance across all datasets despite having fewer parameters and lower memory footprint. DenseNet201 suffers from performance degradation and higher Root Mean Squared Error (RMSE) across databases. SE-ResNet50 emerges as the top performer with exceptional metrics such as Pearson Correlation Coefficient (PLCC), Spearman Rank Correlation Coefficient (SRCC), Kendall Rank Correlation Coefficient (KRCC), and consistently low RMSE across all datasets. Surprisingly, VGG16 and VGG19, known for strong performance, excel on all metrics, demonstrating remarkable generalizability, particularly on SJTU and ICIP2020 datasets.
Results obtained from Table 1, Table 2 and Table 3 reveal a classic complexity-accuracy trade-off within the evaluated DCNN architectures. While deeper models such as VGG-16 and VGG-19 achieve higher accuracy, they exhibit significantly larger numbers of parameters and require more computational resources for inference compared to lightweight models such as MobileNetV2. ResNet-101, SEResNet-50, ResNeXt-50, and DenseNet-201 offer a potential middle ground, balancing accuracy with computational efficiency. However, these models come at the cost of increased memory footprint and processing power compared to shallower architectures such as ResNet-18 or ResNet-34. Our evaluation of the impact of varying depth and parameter counts across MobileNet, ResNet, DenseNet, and VGG models suggests that these factors may not significantly influence performance outcomes. This finding warrants further investigation to determine the optimal architecture for specific tasks considering the application’s resource constraints and target accuracy requirements.
Our experiments demonstrate that the choice of network architecture significantly impacts model performance, even more so than increasing the depth or number of parameters in the model. When selecting a model, it is crucial to consider the application’s specific requirements. If prioritizing real-time performance or running on mobile devices is essential, then a lightweight model like MobileNetV2 might be preferable. On the other hand, if achieving the highest possible accuracy is the primary goal, then deeper models such as VGG16 or SE-ResNet50 are more suitable options. To evaluate our method’s effectiveness against leading benchmarks, we opted to leverage the pre-trained VGG16 convolutional neural network (CNN). The training and validation curves for VGG16 on all three databases are shown in Figure 4, Figure 5 and Figure 6.
Since the suggested approach operates directly on the 3D model, we also pay attention to computational efficiency. The results of execution time on the SJTU-PCQA database are presented in Figure 7. The proposed method has an average time cost of 29.27 s, compared to approximately 39.14 s for PCQM. While the FR metric PCQM needs to load both the distorted and the reference point cloud models simultaneously, the proposed NR method only needs to load the distorted point cloud model and demonstrates a lower average time cost. This suggests that our approach achieves relatively significant computational efficiency.

5.2. Ablation Study

To assess the effectiveness and contributions of the various feature types (perceptual and geometric), we conducted individual performance tests for each feature. This allowed us to analyze the contributions of the features by assessing their performance within various combinations. Table 4, Table 5 and Table 6 present the performance outcomes of the ablation study, where ‘Anis’, ‘Lin’, ‘Plan’, ‘Sph’, ‘Omni’, ‘Sph_fit’, ‘Eigen’, ‘Curv’, ‘Sal’, ‘All_geom’, and ‘All_perceptual’ correspond to Anisotropy, Linearity, Planarity, Sphericity, Omnivariance, Sphere-fit, Eigenentropy, Curvature, Saliency, all geometry features, and all perceptual features, respectively. Additionally, ‘L’, ‘A’, and ‘B’ represent the luminance and chrominance channels within the LAB color space.
Results obtained on ICIP2020 dataset (Table 4) show distinct performances among various features (e.g., ‘Anis’, ‘Lin’, ‘Plan’). ‘Anis’ and ‘Sph’ exhibit strong individual performance, while ‘Lin’, ‘Plan’, and ‘Omni’ are less effective. However, progressively integrating these features with ‘Anis’ significantly improves all evaluation metrics. The ‘All_geom’ feature set, combining all geometric features, achieves superior performance throughout, highlighting the benefit of this combination.
For perceptual features, ‘Curv’ is the best performer, while ‘L’, ‘A’, ‘B’, and ‘Sal’ have less individual impact. Merging ‘L’, ‘A’, and ‘B’ followed by ’Curv’ integration results in substantial gains, emphasizing the importance of their collective influence. The ‘All_perceptual’ set, regrouping all perceptual features, demonstrates significant performance, suggesting a complementary interaction between these features.
Finally, integrating geometric and perceptual features, the ‘All’ combination outperforms all individual and grouped sets across all metrics. This comprehensive set not only achieves optimal performance but also underscores the efficacy of combining complementary features for enhanced point cloud quality assessment.
Analyzing the SJTU dataset (Table 5) reveals interesting insights into feature performance. Among geometric features, ‘Plan’ exhibits the strongest overall performance across all metrics (PLCC, SRCC, KRCC). ‘Sph’ follows closely, particularly excelling in PLCC and SRCC compared to ‘Plan’. While ‘Anis’ demonstrates decent performance, particularly in SRCC, it falls behind ‘Plan’ and ‘Sph’. ‘Omni’ achieves the highest PLCC but struggles with SRCC and KRCC compared to ‘Sph’. ‘Sph_fit’ stands out with significant improvements across all metrics, marking a clear performance jump over the other geometric features. However, ‘Eigen’ performs similarly to the moderate ‘Anis’. Combining all geometric features in ‘All_geom’ significantly improves upon individual or partial combinations, highlighting the benefit of feature fusion.
For perceptual features, ‘L’, ‘A’, and ‘B’ exhibit moderate performance, with ‘B’ slightly edging out the others. ‘Curv’ demonstrates improvement over this group, while ‘Sal’ substantially gains performance. As with geometric features, combining these perceptual features into ‘All_perceptual’ leads to further metric improvements.
Finally, the ‘All’ combination, integrating geometric and perceptual features, outperforms all individual and partial combinations across all metrics. This comprehensive feature set stresses the importance of incorporating diverse feature information for optimal point cloud quality assessment. The observed incremental improvements with feature additions, the impact of combining features on metrics, and the context-dependent selection of features all emphasize the effectiveness of a holistic approach in this task.
The scores reported on the WPC dataset (Table 6) reveal distinct characteristics for each geometric feature (‘Anis’, ‘Lin’, ‘Plan’, ‘Sph’, ‘Omni’, ‘Sph_fit’, ‘Eigen’). ‘Plan’ demonstrates superior performance with higher correlations (PLCC: 0.29) and lower error (RMSE: 22.57) compared to ‘Sph’ (PLCC: 0.10, RMSE: 23.45). However, feature fusion leads to significant improvements. Combining ‘Anis’ + ‘Lin’ + ‘Plan’ elevates PLCC to 0.41 and reduces RMSE to 21.57. The ‘All_geom’ feature set, which combines all the geometric features, improves performance substantially (PLCC: 0.67, RMSE: 17.42), indicating that these features are far more effective together than in isolation. Similar observations hold for perceptual features (‘L’, ‘A’, ‘B’, ‘Curv’, ‘Sal’). While individual features exhibit varying performance impacts, their combination (‘L’ + ‘A’ + ‘B’ + ‘Curv’) leads to improved metrics compared to their individual contributions. Furthermore, the ‘All_perceptual’ set, regrouping all perceptual features, demonstrates significant gains in correlation (PLCC: 0.61) and error reduction (RMSE: 21.43).
The ‘All’ combination, integrating geometric and perceptual features, achieves the peak performance across all metrics. This comprehensive set boasts strong correlations (PLCC: 0.93) and minimal error (RMSE: 8.55). This analysis highlights the cumulative improvements observed as more features are incorporated, with the ‘All’ set demonstrably superior for point cloud quality assessment. This underlines the critical role of geometric and perceptual information in achieving optimal quality evaluation.
Across the three evaluated datasets, the results indicate that geometry features play a more significant role in determining the final quality score. This observation could be attributed to the fact that the three databases contain a greater variety of geometry distortions than perceptual distortions, and that human perception of point clouds tends to place a higher emphasis on geometry-related information.

5.3. Performance Comparison with the State-of-the-Art

In this section, we perform a comparative analysis of our proposed method against current benchmarks, including FR-PCQA (PSNR [5], SSIM [5], PB-PCQA [51], M-p2po [24], M-p2pl [4], H-p2po [24], H-p2pl [4], PSNRYUV [40], PCQM [3], GraphSIM [6], PointSSIM [37], TCDM [59], and MMD [28]), RR-PCQA (PCMRR [7]), and NR-PCQA (BRISQUE [11], PQA-Net [9], NIQE [12], ResSCNN [10], MVP-PCQA [43], MM-PCQA [60], and 3D-NSS [13]).
The experimental outcomes of PCQA using the SJTU-PCQA, WPC, and ICIP2020 databases are presented in Table 7, Table 8 and Table 9. The top-performing results in each column are highlighted in bold. Across all three databases, it is evident that the FR-PCQA methods (PSNR [5], M-p2po [24], M-p2pl [4], H-p2po [24], and H-p2pl [4]) demonstrate comparatively lower performance. This can be attributed to their reliance solely on geometric structure without incorporating color information. In contrast, superior performance is observed in metrics such as MMD [28], PSNRYUV [40], PCQM [3], GraphSIM [6], PointSSIM [37], and TCDM [59], which include color information for assessing point clouds. However, it is important to note that evaluating these methods relies on reference information, a component often unavailable in practical applications. Regarding RR methods, the PCMRR metric produces poor results for all correlation metrics. This might be explained by the extensive use of features within their method, which could make it generalize less well to different types of degradation.
For NR methods, our method achieves the best performance across all three databases, outperforming the compared NR-PCQA methods by a significant margin. For example, our approach outperforms the second-best NR-PCQA method by approximately 0.04 in both PLCC and SRCC (MM-PCQA) on the SJTU-PCQA database, and by 0.08 in PLCC and 0.07 in SRCC (MVP-PCQA) on the WPC database. There are significant performance drops from the SJTU-PCQA and ICIP2020 databases to the WPC database because the latter contains more complex distortion parameters, which are more difficult for PCQA models. Moreover, within the SJTU-PCQA database, distorted point clouds contain mixed distortions, while the WPC database introduces a single type of distortion to individual point clouds. Point clouds with mixed distortions appear to exhibit greater quality distinguishability when subjected to similar distortion levels. Furthermore, the WPC database contains twice the number of reference point clouds compared to the SJTU-PCQA database. Our approach exhibits a relatively small drop in performance compared to most other methods. For instance, when moving from the SJTU-PCQA database to the WPC database, our method shows a decrease of 0.03 in PLCC and 0.02 in SRCC. The other top-performing PCQA methods, except for PQA-Net, exhibit a larger performance decline of 0.15 and 0.14 in PLCC and SRCC, respectively. Therefore, it is clear that our approach is more robust to more complex distortions.
The overall effectiveness may not accurately reflect the performance for specific distortion types. Consequently, we assess how FR, RR, and NR metrics perform in the face of various point cloud distortions across the three databases. Evaluation measures such as PLCC and SRCC scores are presented in Table 10, Table 11 and Table 12. The top performance for each distortion type within each database is highlighted in bold, indicating the best results among all competing metrics.
On the ICIP2020 database (Table 10), our method surpasses all the compared methods across various distortion types (VPCC, G-PCC Trisoup, and G-PCC Octree), demonstrating superior performance across the entire database.
Within the SJTU-PCQA database (Table 11), our method shows the strongest correlation coefficient outcomes across all distortions, outperforming the state-of-the-art metrics in both PLCC and SRCC correlation coefficients. It is important to highlight that the correlation values of our method and all other state-of-the-art methods are lower in the SJTU-PCQA dataset compared to the ICIP2020 dataset. This difference could be explained by the various types of degradation present in the two databases. While the ICIP2020 database mostly features compression-related distortions, the SJTU database presents more difficult degradation types such as acquisition noise, resampling, and their combinations (Octree-based compression (OT), Color noise (CN), Geometry Gaussian noise (GGN), Downsampling (DS), Downscaling and Color noise (D + C), Downscaling and Geometry Gaussian noise (D + G), and Color noise and Geometry Gaussian noise (C + G)). We conduct a comparison of PLCC and SRCC values for each of the seven degradation types. As depicted in Table 11, our model demonstrates robust performance across all degradation types, exhibiting strong correlations with the subjective quality scores.
In the WPC database (Table 12), our method shows top correlation coefficient results across all distortions, outperforming all the methods compared throughout the entire database. It is important to highlight that our model performs better even on the larger and more complex databases, demonstrating its remarkable robustness.
Based on these results, it can be concluded that our proposed model ranks first among NR methods on SJTU-PCQA, WPC, and ICIP2020 databases. Additionally, our model achieves satisfactory results compared to state-of-the-art FR 3D-QA metrics. A notable advantage of our method is that it does not require original point clouds for reference, demonstrating its ability to extract quality-aware features and provide relatively accurate quality levels for colored point clouds.

5.4. Cross-Database Evaluation

A cross-database evaluation was performed to assess the generalization capability of the proposed method, and the experimental results are displayed in Table 13. Considering the size of the WPC database (740 samples), our primary focus was training the models using this database and conducting the test on the SJTU PCQA dataset (378 samples). Among the comparison models, 3D-NSS [13] demonstrates the lowest PLCC and SRCC values at 0.2344 and 0.1817, respectively. PQA-net [9] follows with improved scores of 0.6102 (PLCC) and 0.5411 (SRCC), displaying improved performance. MM-PCQA [60] further elevates the evaluation metrics, achieving 0.7779 (PLCC) and 0.7693 (SRCC). However, the best performance results come from the proposed model, with higher PLCC (0.8119) and SRCC (0.8193) values, surpassing all other models assessed in this study. These results suggest that the proposed model exhibits a higher correlation with ground truth data for evaluating point cloud quality, indicating its potential for more accurate quality evaluations compared to existing NR-PCQA methods such as 3D-NSS, PQA-net, and MM-PCQA within this context.

6. Conclusions

In this paper, we have introduced a novel methodology for assessing the quality of 3D point clouds using a one-dimensional model based on the Convolutional Neural Network (1D CNN). Through extensive experiments and evaluations, we have demonstrated the effectiveness of our approach in predicting subjective point cloud quality under various distortions. Our model consistently outperformed all competing methods by leveraging transfer learning and focusing on geometric and perceptual features.
The results of our evaluations across different distortion types and databases provide valuable insights into the performance of the proposed method. Our model achieves robust performance across all distortion types within the ICIP2020 and WPC databases, recording the top correlation coefficient results for every distortion.
The success of our approach can be attributed to its ability to effectively capture and analyze geometric and perceptual features in 3D point clouds, enabling accurate quality assessment without the need for reference information. The model’s generalization capability, as demonstrated in cross-database evaluations, further highlights its potential for real-world applications.
In conclusion, the proposed method is a promising solution for automated point cloud quality assessment, offering enhanced accuracy and reliability compared to existing techniques. By combining advanced deep learning strategies with transfer learning, our approach advances the field of point cloud quality assessment and opens up new possibilities for improving visual quality evaluation in diverse domains.

Author Contributions

Conceptualization, A.L.; methodology, A.L. and M.E.H.; software, A.L.; writing—original draft preparation, A.L. and M.E.H.; writing—review and editing, A.L., M.E.H. and H.C.; visualization, A.L.; supervision, M.E.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mohammadi, P.; Ebrahimi-Moghadam, A.; Shirani, S. Subjective and objective quality assessment of image: A survey. arXiv 2014, arXiv:1406.7799. [Google Scholar] [CrossRef]
  2. Chen, T.; Long, C.; Su, H.; Chen, L.; Chi, J.; Pan, Z.; Yang, H.; Liu, Y. Layered projection-based quality assessment of 3D point clouds. IEEE Access 2021, 9, 88108–88120. [Google Scholar] [CrossRef]
  3. Meynet, G.; Nehmé, Y.; Digne, J.; Lavoué, G. PCQM: A full-reference quality metric for colored 3D point clouds. In Proceedings of the 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX), Athlone, Ireland, 26–28 May 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar] [CrossRef]
  4. Tian, D.; Ochimizu, H.; Feng, C.; Cohen, R.; Vetro, A. Geometric distortion metrics for point cloud compression. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; IEEE: Piscataway, NJ, USA, 2017; Volume 17597328, pp. 3460–3464. [Google Scholar] [CrossRef]
  5. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  6. Yang, Q.; Ma, Z.; Xu, Y.; Li, Z.; Sun, J. Inferring point cloud quality via graph similarity. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 3015–3029. [Google Scholar] [CrossRef]
  7. Viola, I.; Cesar, P. A reduced reference metric for visual quality evaluation of point cloud contents. IEEE Signal Process. Lett. 2020, 27, 1660–1664. [Google Scholar] [CrossRef]
  8. Abouelaziz, I.; Omari, M.; El Hassouni, M.; Cherifi, H. Reduced reference 3D mesh quality assessment based on statistical models. In Proceedings of the IEEE 11th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Bangkok, Thailand, 23–27 November 2015; pp. 170–177. [Google Scholar] [CrossRef]
  9. Liu, Q.; Yuan, H.; Su, H.; Liu, H.; Wang, Y.; Yang, H.; Hou, J. PQA-Net: Deep no reference point cloud quality assessment via multi-view projection. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 4645–4660. [Google Scholar] [CrossRef]
  10. Liu, Y.; Yang, Q.; Xu, Y.; Yang, L. Point cloud quality assessment: Dataset construction and learning-based no-reference metric. Acm Trans. Multimed. Comput. Commun. Appl. 2023, 19, 80. [Google Scholar] [CrossRef]
  11. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef]
  12. Zhang, L.; Zhang, L.; Bovik, A.C. A feature-enriched completely blind image quality evaluator. IEEE Trans. Image Process. 2015, 24, 2579–2591. [Google Scholar] [CrossRef]
  13. Zhang, Z.; Sun, W.; Min, X.; Wang, T.; Lu, W.; Zhai, G. No-reference quality assessment for 3d colored point cloud and mesh models. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 7618–7631. [Google Scholar] [CrossRef]
  14. Abouelaziz, I.; El Hassouni, M.; Cherifi, H. No-reference 3d mesh quality assessment based on dihedral angles model and support vector regression. In Proceedings of the Image and Signal Processing: 7th International Conference, ICISP 2016, Trois-Rivières, QC, Canada, 30 May–1 June 2016; Springer International Publishing: New York City, NY, USA, 2016; pp. 369–377. [Google Scholar] [CrossRef]
  15. Abouelaziz, I.; Chetouani, A.; El Hassouni, M.; Cherifi, H.; Latecki, L. No-reference mesh visual quality assessment via ensemble of convolutional neural networks and compact multi-linear pooling. Pattern Recognit. 2020, 100, 107174. [Google Scholar] [CrossRef]
  16. Lin, Y.; Yu, M.; Chen, K.; Jiang, G.; Peng, Z.; Chen, F. Blind Mesh Quality Assessment Method Based on Concave, Convex and Structural Features Analyses. In Proceedings of the IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Shanghai, China, 8–12 July 2019; pp. 282–287. [Google Scholar] [CrossRef]
  17. Abouelaziz, I.; El Hassouni, M.; Cherifi, H. A curvature based method for blind mesh visual quality assessment using a general regression neural network. In Proceedings of the IEEE Procedings 12th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Naples, Italy, 28 November–1 December 2016; pp. 793–797. [Google Scholar] [CrossRef]
  18. Abouelaziz, I.; El Hassouni, M.; Cherifi, H. A convolutional neural network framework for blind mesh visual quality assessment. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 755–759. [Google Scholar] [CrossRef]
19. Abouelaziz, I.; Chetouani, A.; El Hassouni, M.; Latecki, L.; Cherifi, H. 3D visual saliency and convolutional neural network for blind mesh quality assessment. Neural Comput. Appl. 2020, 32, 16589–16603. [Google Scholar] [CrossRef]
  20. Chetouani, A.; Quach, M.; Valenzise, G.; Dufaux, F. Deep learning-based quality assessment of 3d point clouds without reference. In Proceedings of the 2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Shenzhen, China, 5–9 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–6. [Google Scholar] [CrossRef]
  21. Fan, Y.; Zhang, Z.; Sun, W.; Min, X.; Liu, N.; Zhou, Q.; He, J.; Wang, Q.; Zhai, G. A no-reference quality assessment metric for point cloud based on captured video sequences. In Proceedings of the 2022 IEEE 24th International Workshop on Multimedia Signal Processing (MMSP), Shanghai, China, 26–28 September 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–5. [Google Scholar] [CrossRef]
  22. Yang, Q.; Liu, Y.; Chen, S.; Xu, Y.; Sun, J. No-reference point cloud quality assessment via domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 21179–21188. [Google Scholar] [CrossRef]
  23. Kiranyaz, S.; Avci, O.; Abdeljaber, O.; Ince, T.; Gabbouj, M.; Inman, D.J. 1D convolutional neural networks and applications: A survey. Mech. Syst. Signal Process. 2021, 151, 107398. [Google Scholar] [CrossRef]
  24. Cignoni, P.; Rocchini, C.; Scopigno, R. Metro: Measuring error on simplified surfaces. Comput. Graph. Forum 1998, 17, 167–174. [Google Scholar] [CrossRef]
25. Mekuria, R.; Cesar, P. MP3DG-PCC, open source software framework for implementation and evaluation of point cloud compression. In Proceedings of the 24th ACM International Conference on Multimedia (MM ’16), Vancouver, BC, Canada, 26–31 October 2016; pp. 1222–1226. [Google Scholar] [CrossRef]
  26. Alexiou, E.; Ebrahimi, T. Point cloud quality assessment metric based on angular similarity. In Proceedings of the 2018 IEEE International Conference on Multimedia and Expo (ICME), San Diego, CA, USA, 23–27 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–6. [Google Scholar] [CrossRef]
  27. Javaheri, A.; Brites, C.; Pereira, F.; Ascenso, J. A generalized Hausdorff distance based quality metric for point cloud geometry. In Proceedings of the 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX), Athlone, Ireland, 26–28 May 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar] [CrossRef]
  28. Javaheri, A.; Brites, C.; Pereira, F.; Ascenso, J. Mahalanobis based point to distribution metric for point cloud geometry quality evaluation. IEEE Signal Process. Lett. 2020, 27, 1350–1354. [Google Scholar] [CrossRef]
  29. Javaheri, A.; Brites, C.; Pereira, F.; Ascenso, J. A point-to-distribution joint geometry and color metric for point cloud quality assessment. In Proceedings of the 2021 IEEE 23rd International Workshop on Multimedia Signal Processing (MMSP), Tampere, Finland, 6–8 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–6. [Google Scholar] [CrossRef]
  30. Javaheri, A.; Brites, C.; Pereira, F.; Ascenso, J. Improving PSNR-based quality metrics performance for point cloud geometry. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 3438–3442. [Google Scholar] [CrossRef]
  31. Meynet, G.; Digne, J.; Lavoué, G. PC-MSDM: A quality metric for 3D point clouds. In Proceedings of the 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, 5–7 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–3. [Google Scholar] [CrossRef]
  32. Viola, I.; Subramanyam, S.; Cesar, P. A color-based objective quality metric for point cloud contents. In Proceedings of the 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX), Athlone, Ireland, 26–28 May 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar] [CrossRef]
  33. Diniz, R.; Freitas, P.G.; Farias, M.C. Towards a point cloud quality assessment model using local binary patterns. In Proceedings of the 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX), Athlone, Ireland, 26–28 May 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar] [CrossRef]
  34. Vajda, I. On the f-divergence and singularity of probability measures. Period. Math. Hung. 1972, 2, 223–234. [Google Scholar] [CrossRef]
35. Diniz, R.; Freitas, P.G.; Farias, M.C. Multi-distance point cloud quality assessment. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 3443–3447. [Google Scholar] [CrossRef]
  36. Diniz, R.; Freitas, P.G.; Farias, M.C. Local luminance patterns for point cloud quality assessment. In Proceedings of the 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP), Tampere, Finland, 21–24 September 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar] [CrossRef]
  37. Alexiou, E.; Ebrahimi, T. Towards a point cloud structural similarity metric. In Proceedings of the 2020 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), London, UK, 6–10 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar] [CrossRef]
  38. Diniz, R.; Freitas, P.G.; Farias, M.C. Point cloud quality assessment based on geometry-aware texture descriptors. Comput. Graph. 2022, 103, 31–44. [Google Scholar] [CrossRef]
  39. De Queiroz, R.L.; Chou, P.A. Motion-compensated compression of dynamic voxelized point clouds. IEEE Trans. Image Process. 2017, 26, 3886–3895. [Google Scholar] [CrossRef]
40. Torlig, E.M.; Alexiou, E.; Fonseca, T.A.; de Queiroz, R.L.; Ebrahimi, T. A novel methodology for quality assessment of voxelized point clouds. Proc. SPIE 2018, 10752, 174–190. [Google Scholar]
  41. Alexiou, E.; Ebrahimi, T. Exploiting user interactivity in quality assessment of point cloud imaging. In Proceedings of the 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, 5–7 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–6. [Google Scholar] [CrossRef]
  42. Alexiou, E.; Viola, I.; Borges, T.M.; Fonseca, T.A.; De Queiroz, R.L.; Ebrahimi, T. A comprehensive study of the rate-distortion performance in MPEG point cloud compression. APSIPA Trans. Signal Inf. Process. 2019, 8, e27. [Google Scholar] [CrossRef]
  43. Bourbia, S.; Karine, A.; Chetouani, A.; El Hassouni, M.; Jridi, M. No-reference 3D Point Cloud Quality Assessment using Multi-View Projection and Deep Convolutional Neural Network. IEEE Access 2023, 11, 26759–26772. [Google Scholar] [CrossRef]
  44. Wu, X.; Zhang, Y.; Fan, C.; Hou, J.; Kwong, S. Subjective quality database and objective study of compressed point clouds with 6DoF head-mounted display. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 4630–4644. [Google Scholar] [CrossRef]
  45. Liu, Q.; Su, H.; Duanmu, Z.; Liu, W.; Wang, Z. Perceptual quality assessment of colored 3D point clouds. IEEE Trans. Vis. Comput. Graph. 2022, 29, 3642–3655. [Google Scholar] [CrossRef]
  46. Wang, Z.; Li, Q. Information content weighting for perceptual image quality assessment. IEEE Trans. Image Process. 2010, 20, 1185–1198. [Google Scholar] [CrossRef]
  47. Hackel, T.; Wegner, J.D.; Schindler, K. Fast semantic segmentation of 3D point clouds with strongly varying density. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 177–184. [Google Scholar] [CrossRef]
  48. Nehmé, Y.; Dupont, F.; Farrugia, J.P.; Le Callet, P.; Lavoué, G. Visual quality of 3d meshes with diffuse colors in virtual reality: Subjective and objective evaluation. IEEE Trans. Vis. Comput. Graph. 2020, 27, 2202–2219. [Google Scholar] [CrossRef]
  49. Guo, Y.; Liu, Y.; Oerlemans, A.; Lao, S.; Wu, S.; Lew, M.S. Deep learning for visual understanding: A review. Neurocomputing 2016, 187, 27–48. [Google Scholar] [CrossRef]
  50. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  51. Yang, Q.; Chen, H.; Ma, Z.; Xu, Y.; Tang, R.; Sun, J. Predicting the perceptual quality of point cloud: A 3d-to-2d projection-based exploration. IEEE Trans. Multimed. 2020, 23, 3877–3891. [Google Scholar] [CrossRef]
  52. Perry, S.; Cong, H.P.; da Silva Cruz, L.A.; Prazeres, J.; Pereira, M.; Pinheiro, A.; Dumic, E.; Alexiou, E.; Ebrahimi, T. Quality evaluation of static point clouds encoded using MPEG codecs. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; pp. 3428–3432. [Google Scholar] [CrossRef]
  53. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  54. Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1492–1500. [Google Scholar]
55. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar] [CrossRef]
  56. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  57. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
  58. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar] [CrossRef]
  59. Zhang, Y.; Yang, Q.; Zhou, Y.; Xu, X.; Yang, L.; Xu, Y. Evaluating point cloud quality via transformational complexity. arXiv 2022, arXiv:2210.04671. [Google Scholar] [CrossRef]
  60. Zhang, Z.; Sun, W.; Min, X.; Zhou, Q.; He, J.; Wang, Q.; Zhai, G. MM-PCQA: Multi-modal learning for no-reference point cloud quality assessment. arXiv 2022, arXiv:2209.00244. [Google Scholar] [CrossRef]
Figure 1. The proposed model based on transfer learning using a 1D CNN.
Figure 2. The architecture of the 1D CNN Model.
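For readers who want a concrete picture of the pipeline sketched in Figures 1 and 2, the snippet below is a minimal, illustrative PyTorch sketch rather than the authors' released implementation. It assumes a VGG16-style backbone whose ImageNet-pretrained 2D kernels are collapsed to 1D by averaging over one spatial axis, followed by a small fully connected regression head; the weight-conversion rule, the number of input channels, the feature-vector length, and the head sizes are all assumptions made here for illustration.

```python
# Illustrative sketch only: one plausible way to reuse a 2D ImageNet backbone
# as a 1D CNN quality regressor. The conversion rule (averaging each 2D kernel
# over one spatial axis) and all layer sizes are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torchvision.models as models


def conv2d_to_conv1d(conv2d: nn.Conv2d) -> nn.Conv1d:
    """Create a Conv1d whose weights are the 2D kernels averaged over one axis."""
    conv1d = nn.Conv1d(conv2d.in_channels, conv2d.out_channels,
                       kernel_size=conv2d.kernel_size[0],
                       stride=conv2d.stride[0], padding=conv2d.padding[0])
    with torch.no_grad():
        conv1d.weight.copy_(conv2d.weight.mean(dim=-1))  # (out, in, k, k) -> (out, in, k)
        if conv2d.bias is not None:
            conv1d.bias.copy_(conv2d.bias)
    return conv1d


class QualityRegressor1D(nn.Module):
    """1D CNN backbone converted from VGG16, plus a fully connected regression head."""

    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights="IMAGENET1K_V1")
        layers = []
        for layer in vgg.features:
            if isinstance(layer, nn.Conv2d):
                layers.append(conv2d_to_conv1d(layer))
            elif isinstance(layer, nn.ReLU):
                layers.append(nn.ReLU(inplace=True))
            elif isinstance(layer, nn.MaxPool2d):
                layers.append(nn.MaxPool1d(kernel_size=2, stride=2))
        self.backbone = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.head = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, x):                                # x: (batch, 3, feature_length)
        feats = self.pool(self.backbone(x)).squeeze(-1)  # (batch, 512)
        return self.head(feats).squeeze(-1)              # predicted quality score


if __name__ == "__main__":
    model = QualityRegressor1D()
    dummy = torch.randn(4, 3, 1024)   # toy batch of hypothetical 1D feature vectors
    print(model(dummy).shape)         # torch.Size([4])
```

In this sketch the converted backbone is fine-tuned together with the regression head; other conversion schemes (e.g., taking the central kernel row instead of the mean) would fit the same skeleton.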
Figure 3. Reference samples from the WPC, SJTU-PCQA, and ICIP2020 databases [45,51,52].
Figure 4. Training and validation loss of 1D CNN VGG16 on the ICIP2020 dataset.
Figure 5. Training and validation loss of 1D CNN VGG16 on the SJTU dataset.
Figure 6. Training and validation loss of 1D CNN VGG16 on the WPC dataset.
Figure 7. Time cost per point cloud of the 1D CNN VGG16 model on the SJTU dataset.
Table 1. The characteristics of the employed networks.
Model | Parameters | GPU Memory Management | Performance
ResNet18 | ∼11.2 M | Moderate | Balanced efficiency and accuracy
ResNet34 | ∼21.3 M | Moderate | Improved over ResNet18
ResNet101 | ∼44.6 M | Moderate to High | Increased capacity
ResNeXt50 | ∼25 M | Moderate | Enhanced capacity
MobileNetV2 | ∼3.4 M | Low to Moderate | Efficient, lower memory footprint
DenseNet201 | ∼20 M | Moderate | Dense connectivity, good accuracy
SE-ResNet50 | ∼28.1 M | Moderate to High | Attention mechanisms, good performance
VGG16 | ∼138 M | High | Heavy memory usage, high accuracy
VGG19 | ∼144 M | High | Similar to VGG16, increased parameters
Table 2. Performance of different architectures on the ICIP2020 database. The two best performances are emphasized in bold.
Model | PLCC | SRCC | KRCC | RMSE
ResNet18 | 0.9117 | 0.9319 | 0.7862 | 0.5076
ResNet34 | 0.9244 | 0.8976 | 0.7407 | 0.5164
ResNet101 | 0.8542 | 0.9179 | 0.7619 | 0.6587
ResNeXt50 | 0.9066 | 0.8540 | 0.6666 | 0.5757
MobileNetV2 | 0.9000 | 0.8908 | 0.7407 | 0.6223
DenseNet201 | 0.8515 | 0.8638 | 0.6984 | 0.7232
SE-ResNet50 | 0.9334 | 0.9486 | 0.8441 | 0.4503
VGG16 | 0.9678 | 0.9699 | 0.9100 | 0.2597
VGG19 | 0.8900 | 0.8815 | 0.7018 | 0.6129
Table 3. Performance of different architectures on the SJTU-PCQA and WPC databases. The two best performances for each database are emphasized in bold.
Model | SJTU-PCQA (PLCC / SRCC / KRCC / RMSE) | WPC (PLCC / SRCC / KRCC / RMSE)
ResNet18 | 0.8373 / 0.8314 / 0.6481 / 1.3186 | 0.8112 / 0.8021 / 0.6241 / 14.4248
ResNet34 | 0.8921 / 0.8534 / 0.6853 / 1.0655 | 0.8496 / 0.8502 / 0.6889 / 11.3027
ResNet101 | 0.8315 / 0.8340 / 0.6502 / 1.3478 | 0.8133 / 0.8165 / 0.6409 / 12.4597
ResNeXt50 | 0.8655 / 0.8588 / 0.7102 / 1.2176 | 0.7677 / 0.7524 / 0.5601 / 15.3643
MobileNetV2 | 0.8843 / 0.8823 / 0.7015 / 1.1130 | 0.8654 / 0.8668 / 0.6808 / 11.9486
DenseNet201 | 0.7620 / 0.7552 / 0.5723 / 1.5733 | 0.8061 / 0.8044 / 0.6260 / 14.1506
SE-ResNet50 | 0.8987 / 0.8916 / 0.7647 / 1.0512 | 0.8902 / 0.8866 / 0.8082 / 10.7401
VGG16 | 0.9676 / 0.9576 / 0.8454 / 0.6133 | 0.9062 / 0.9056 / 0.7554 / 10.2492
VGG19 | 0.8307 / 0.8017 / 0.6151 / 1.3803 | 0.8273 / 0.8260 / 0.6510 / 13.6159
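For clarity on the column headers used in Tables 2–13, the four criteria are the ones conventionally used to benchmark quality metrics. With $s_i$ the subjective (mean opinion) score of sample $i$, $\hat{s}_i$ the predicted score, $n$ the number of test samples, $d_i$ the difference between the ranks of $s_i$ and $\hat{s}_i$, and $n_c$ ($n_d$) the number of concordant (discordant) pairs, they are defined as (in many PCQA studies, PLCC and RMSE are computed after a nonlinear logistic mapping of the predicted scores):
\[
\mathrm{PLCC} = \frac{\sum_{i=1}^{n}(s_i-\bar{s})(\hat{s}_i-\bar{\hat{s}})}{\sqrt{\sum_{i=1}^{n}(s_i-\bar{s})^2}\,\sqrt{\sum_{i=1}^{n}(\hat{s}_i-\bar{\hat{s}})^2}},\qquad
\mathrm{SRCC} = 1-\frac{6\sum_{i=1}^{n} d_i^2}{n(n^2-1)},
\]
\[
\mathrm{KRCC} = \frac{n_c-n_d}{\tfrac{1}{2}\,n(n-1)},\qquad
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(s_i-\hat{s}_i\right)^2}.
\]
Higher PLCC, SRCC, and KRCC and lower RMSE indicate better agreement with the subjective scores.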
Table 4. Results of the ablation study on the ICIP2020 database. The best performance is highlighted in bold.
Type | Feature | PLCC | SRCC | KRCC | RMSE
Geometry | Anis | 0.6055 | 0.6817 | 0.4973 | 1.2251
 | Lin | 0.5172 | 0.2731 | 0.2328 | 1.4148
 | Plan | 0.3999 | 0.2039 | 0.0846 | 1.4498
 | Sph | 0.6447 | 0.6440 | 0.4550 | 1.0988
 | Omni | 0.5160 | 0.4454 | 0.2857 | 1.3031
 | Sph_fit | 0.5801 | 0.6764 | 0.4973 | 1.4379
 | Eigen | 0.2322 | 0.2769 | 0.2116 | 1.3650
 | Anis + Lin | 0.6777 | 0.6200 | 0.4338 | 1.0993
 | Anis + Lin + Plan | 0.8310 | 0.8111 | 0.6243 | 0.9880
 | Anis + Lin + Plan + Sph | 0.8384 | 0.8058 | 0.6455 | 0.7350
 | Anis + Lin + Plan + Sph + Omni | 0.8708 | 0.8231 | 0.6666 | 0.6675
 | Anis + Lin + Plan + Sph + Omni + Sph_fit | 0.8881 | 0.8480 | 0.6772 | 0.6303
 | All_geom | 0.9049 | 0.8924 | 0.7513 | 0.5917
Perceptual | L | 0.1984 | 0.3167 | 0.2010 | 1.3735
 | A | 0.2563 | 0.1918 | 0.1693 | 1.3519
 | B | 0.2042 | 0.1813 | 0.1375 | 1.5143
 | Curv | 0.3705 | 0.3950 | 0.3174 | 1.4244
 | sal | 0.2977 | 0.2129 | 0.1587 | 1.2319
 | L + A + B | 0.8555 | 0.8465 | 0.6349 | 0.7312
 | L + A + B + Curv | 0.8715 | 0.8532 | 0.6560 | 0.6912
 | All_perceptual | 0.8961 | 0.8525 | 0.6455 | 0.6316
ALL | All | 0.9678 | 0.9699 | 0.9100 | 0.2597
Table 5. Results of the ablation study on the SJTU-PCQA database. The best performance is highlighted in bold.
Type | Feature | PLCC | SRCC | KRCC | RMSE
Geometry | Anis | 0.1747 | 0.2954 | 0.1944 | 2.4061
 | Lin | 0.1460 | 0.1521 | 0.1173 | 2.3887
 | Plan | 0.2180 | 0.2372 | 0.1551 | 2.3924
 | Sph | 0.2306 | 0.3289 | 0.2169 | 2.3885
 | Omni | 0.3828 | 0.1627 | 0.1141 | 2.2534
 | Sph_fit | 0.5768 | 0.3988 | 0.2649 | 2.3809
 | Eigen | 0.2107 | 0.2245 | 0.1459 | 2.3803
 | Anis + Lin | 0.2414 | 0.2366 | 0.1769 | 2.3939
 | Anis + Lin + Plan | 0.4380 | 0.4169 | 0.2836 | 2.2645
 | Anis + Lin + Plan + Sph | 0.5864 | 0.5656 | 0.4008 | 2.1564
 | Anis + Lin + Plan + Sph + Omni | 0.6553 | 0.6285 | 0.4628 | 1.8705
 | Anis + Lin + Plan + Sph + Omni + Sph_fit | 0.6598 | 0.6564 | 0.4892 | 1.8356
 | All_geom | 0.7489 | 0.7185 | 0.5147 | 1.6824
Perceptual | L | 0.1771 | 0.1630 | 0.1049 | 2.3678
 | A | 0.1934 | 0.1623 | 0.1164 | 2.3939
 | B | 0.2450 | 0.2594 | 0.1713 | 2.3831
 | Curv | 0.1664 | 0.2192 | 0.1316 | 2.4045
 | sal | 0.3253 | 0.3840 | 0.2808 | 2.4053
 | L + A + B | 0.3572 | 0.3414 | 0.2382 | 2.2578
 | L + A + B + Curv | 0.4484 | 0.4325 | 0.2972 | 2.2555
 | All_perceptual | 0.6059 | 0.6351 | 0.4600 | 2.0780
ALL | All | 0.9676 | 0.9576 | 0.8454 | 0.6133
Table 6. Results of the ablation study on the WPC database. The best performance is highlighted in bold.
Type | Feature | PLCC | SRCC | KRCC | RMSE
Geometry | Anis | 0.1914 | 0.1712 | 0.1205 | 23.4895
 | Lin | 0.2183 | 0.1266 | 0.0834 | 23.0657
 | Plan | 0.2904 | 0.2222 | 0.1620 | 22.5680
 | Sph | 0.1015 | 0.1026 | 0.0637 | 23.4544
 | Omni | 0.1377 | 0.1566 | 0.0981 | 23.3771
 | Sph_fit | 0.2701 | 0.1690 | 0.1113 | 22.9875
 | Eigen | 0.1938 | 0.1176 | 0.0809 | 23.1301
 | Anis + Lin | 0.2413 | 0.1609 | 0.1083 | 22.8553
 | Anis + Lin + Plan | 0.4192 | 0.3843 | 0.2608 | 21.5743
 | Anis + Lin + Plan + Sph | 0.4825 | 0.4343 | 0.3019 | 20.7062
 | Anis + Lin + Plan + Sph + Omni | 0.5433 | 0.5012 | 0.3560 | 19.8065
 | Anis + Lin + Plan + Sph + Omni + Sph_fit | 0.5582 | 0.5214 | 0.3662 | 19.5343
 | All_geom | 0.6686 | 0.6582 | 0.4706 | 17.4209
Perceptual | L | 0.1370 | 0.1235 | 0.0814 | 23.3516
 | A | 0.2608 | 0.2499 | 0.2450 | 0.2442
 | B | 0.1377 | 0.1566 | 0.0981 | 23.3771
 | Curv | 0.2317 | 0.0845 | 0.0613 | 22.9709
 | sal | 0.1489 | 0.1203 | 0.0876 | 23.4615
 | L + A + B | 0.4229 | 0.4189 | 0.2873 | 21.4179
 | L + A + B + Curv | 0.4628 | 0.5008 | 0.3485 | 21.7687
 | All_perceptual | 0.6132 | 0.6218 | 0.4596 | 21.4339
ALL | All | 0.9381 | 0.9362 | 0.7959 | 8.5528
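As background for the feature labels in the ablation tables (Tables 4–6), covariance-based geometric descriptors of this kind are conventionally derived from the eigenvalues $\lambda_1 \ge \lambda_2 \ge \lambda_3 \ge 0$ of the covariance matrix of each point's local neighborhood (e.g., as in [47]). The standard definitions are recalled below for illustration only; the exact formulations used in the paper, as well as the sphere-fitting attribute (Sph_fit) and the perceptual attributes (L, A, B, Curv, sal), may differ and are not reproduced here. In particular, reading "Eigen" as the eigenentropy (usually computed on eigenvalues normalized to sum to one) is an assumption:
\[
\mathrm{Lin}=\frac{\lambda_1-\lambda_2}{\lambda_1},\quad
\mathrm{Plan}=\frac{\lambda_2-\lambda_3}{\lambda_1},\quad
\mathrm{Sph}=\frac{\lambda_3}{\lambda_1},\quad
\mathrm{Anis}=\frac{\lambda_1-\lambda_3}{\lambda_1},
\]
\[
\mathrm{Omni}=\sqrt[3]{\lambda_1\lambda_2\lambda_3},\qquad
\mathrm{Eigen}=-\sum_{i=1}^{3}\lambda_i\ln\lambda_i .
\]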
Table 7. Performance comparison with the state-of-the-art on the ICIP2020.
Type | Metric | PLCC | SRCC | KRCC | RMSE
FR | PSNR [5] | 0.70 | 0.70 | - | 0.82
 | M-p2po [5] | 0.888 | 0.878 | - | 0.522
 | M-p2pl [4] | 0.913 | 0.915 | - | 0.463
 | H-p2po [5] | 0.601 | 0.542 | - | 0.908
 | H-p2pl [4] | 0.649 | 0.602 | - | 0.865
 | MMD [28] | 0.806 | 0.954 | - | -
 | PSNRYUV [40] | 0.868 | 0.867 | - | 0.564
 | PCQM [3] | 0.942 | 0.977 | - | -
 | GraphSIM [6] | 0.890 | 0.872 | - | 0.518
 | PointSSIM [37] | 0.904 | 0.865 | - | 0.486
 | TCDM [59] | 0.942 | 0.935 | - | 0.382
RR | PCMRR [7] | 0.627 | 0.882 | - | -
NR | MVP-PCQA [43] | 0.958 | 0.982 | - | -
 | Proposed | 0.9994 | 0.9962 | 0.9735 | 0.0458
Table 8. Performance comparison with the state-of-the-art on the SJTU-PCQA.
Type | Metric | PLCC | SRCC | KRCC | RMSE
FR | PSNR [5] | 0.2317 | 0.2422 | 0.1077 | 2.3124
 | SSIM [5] | 0.3476 | 0.2987 | 0.1919 | 2.1770
 | PB-PCQA [51] | 0.6076 | 0.6020 | - | 1.8635
 | M-p2po [24] | 0.8123 | 0.7294 | 0.5617 | 1.3613
 | M-p2pl [4] | 0.5940 | 0.6277 | 0.4825 | 2.2815
 | H-p2po [24] | 0.7753 | 0.7157 | 0.5447 | 1.4475
 | H-p2pl [4] | 0.6874 | 0.6441 | 0.4565 | 2.1255
 | PSNRYUV [40] | 0.8170 | 0.7950 | 0.6196 | 1.3151
 | PCQM [3] | 0.8653 | 0.8544 | 0.6586 | 1.2162
 | GraphSIM [6] | 0.8449 | 0.8483 | 0.6448 | 1.5721
 | PointSSIM [37] | 0.7136 | 0.6867 | 0.4964 | 1.7001
 | TCDM [59] | 0.930 | 0.910 | - | 0.891
 | MMD [28] | 0.628 | 0.604 | - | -
RR | PCMRR [7] | 0.6101 | 0.4816 | 0.3362 | 1.9342
NR | BRISQUE [11] | 0.4214 | 0.3975 | 0.2966 | 2.0937
 | PQA-Net [9] | 0.8586 | 0.8372 | 0.6304 | 1.0719
 | NIQE [12] | 0.3764 | 0.2214 | 0.1512 | 2.2671
 | ResSCNN [10] | 0.8600 | 0.8100 | - | -
 | MVP-PCQA [43] | 0.87 | 0.83 | - | -
 | MM-PCQA [60] | 0.9226 | 0.9103 | 0.7838 | 0.7716
 | 3D-NSS [13] | 0.7382 | 0.7144 | 0.5174 | 1.7686
 | Proposed | 0.9676 | 0.9576 | 0.8454 | 0.6133
Table 9. Performance comparison with the state-of-the-art on the WPC.
Type | Metric | PLCC | SRCC | KRCC | RMSE
FR | PSNR [5] | 0.4872 | 0.4235 | 0.3080 | 15.8133
 | SSIM [5] | 0.4944 | 0.3878 | 0.3234 | 15.7749
 | M-p2po [5] | 0.4852 | 0.4558 | 0.3182 | 19.8943
 | M-p2pl [4] | 0.2695 | 0.3281 | 0.2249 | 22.8226
 | H-p2po [5] | 0.3972 | 0.2786 | 0.1943 | 20.8990
 | H-p2pl [4] | 0.2753 | 0.2827 | 0.1696 | 21.9893
 | PSNRYUV [40] | 0.5304 | 0.4493 | 0.3198 | 19.3119
 | PCQM [3] | 0.7499 | 0.7434 | 0.5601 | 15.1639
 | GraphSIM [6] | 0.6163 | 0.5831 | 0.4194 | 17.1939
 | PointSSIM [37] | 0.4667 | 0.4542 | 0.3278 | 20.2733
 | TCDM [59] | 0.807 | 0.804 | - | 13.525
 | MMD [28] | 0.420 | 0.411 | - | -
RR | PCMRR [7] | 0.3433 | 0.3097 | 0.2082 | 21.5302
NR | BRISQUE [11] | 0.3155 | 0.2614 | 0.2088 | 21.1736
 | PQA-Net [9] | 0.7000 | 0.6900 | 0.5100 | 15.1800
 | NIQE [12] | 0.3957 | 0.3887 | 0.2551 | 22.5502
 | MVP-PCQA [43] | 0.85 | 0.86 | - | -
 | MM-PCQA [60] | 0.8556 | 0.8414 | 0.6513 | 12.3506
 | 3D-NSS [13] | 0.6514 | 0.6479 | 0.4417 | 16.5716
 | Proposed | 0.9381 | 0.9362 | 0.7959 | 8.5528
Table 10. Performance comparison with the state-of-the-art on each distortion type on ICIP2020.
Type | Metric | VPCC | G-PCC Octree | G-PCC Trisoup | All (each cell: PLCC / SRCC)
FR | M-p2po [24] | 0.615 / 0.954 | 0.817 / 0.963 | 0.864 / 0.944 | 0.673 / 0.947
 | M-p2pl [4] | 0.618 / 0.971 | 0.848 / 0.932 | 0.618 / 0.971 | 0.670 / 0.975
 | H-p2po [24] | 0.615 / 0.682 | 0.692 / 0.944 | 0.615 / 0.975 | 0.673 / 0.656
 | H-p2pl [4] | 0.491 / 0.735 | 0.838 / 0.932 | 0.491 / 0.735 | 0.521 / 0.704
 | MMD [28] | 0.784 / 0.960 | 0.871 / 0.831 | 0.906 / 0.906 | 0.806 / 0.954
 | PCQM [3] | 0.942 / 0.977 | 0.978 / 0.966 | 0.955 / 0.977 | 0.942 / 0.977
 | GraphSIM [6] | - / 0.855 | - / 0.939 | - / 0.770 | 0.890 / 0.872
 | PointSSIM [37] | 0.246 / 0.546 | 0.603 / 0.628 | 0.292 / 0.447 | 0.717 / 0.795
 | TCDM [59] | - / 0.822 | - / 0.885 | - / 0.970 | 0.942 / 0.935
RR | PCMRR [7] | 0.627 / 0.882 | 0.749 / 0.830 | 0.407 / 0.510 | 0.627 / 0.882
NR | MVP [43] | 0.958 / 0.982 | 0.987 / 1.000 | 0.957 / 1.000 | 0.958 / 0.982
 | Proposed | 0.993 / 1.000 | 0.999 / 1.000 | 0.999 / 1.000 | 0.999 / 0.996
Table 11. Performance comparison with the state-of-the-art on each distortion type on SJTU-PCQA.
Type | Metric | OT | CN | GGN | DS | D + C | D + G | C + G | All (each cell: PLCC / SRCC)
FR | M-p2po [24] | 0.481 / 0.349 | - / - | 0.385 / 0.801 | 0.499 / 0.646 | 0.165 / 0.661 | 0.385 / 0.837 | 0.416 / 0.757 | 0.606 / 0.803
 | M-p2pl [4] | 0.470 / 0.345 | - / - | 0.369 / 0.846 | 0.448 / 0.757 | 0.165 / 0.746 | 0.385 / 0.837 | 0.405 / 0.809 | 0.568 / 0.715
 | H-p2po [24] | 0.496 / 0.286 | 0.496 / 0.286 | 0.395 / 0.858 | 0.260 / 0.451 | 0.167 / 0.383 | 0.386 / 0.761 | 0.417 / 0.818 | 0.606 / 0.687
 | H-p2pl [4] | 0.492 / 0.377 | - / - | 0.395 / 0.858 | 0.351 / 0.451 | 0.167 / 0.383 | 0.467 / 0.801 | 0.438 / 0.828 | 0.562 / 0.683
 | MMD [28] | 0.075 / 0.269 | 0.190 / 0.067 | 0.462 / 0.768 | 0.188 / 0.548 | 0.166 / 0.747 | 0.478 / 0.742 | 0.501 / 0.754 | 0.628 / 0.604
 | PCQM [3] | 0.786 / 0.741 | 0.801 / 0.812 | 0.771 / 0.903 | 0.787 / 0.864 | 0.857 / 0.937 | 0.712 / 0.883 | 0.813 / 0.920 | 0.813 / 0.855
 | GraphSIM [6] | - / 0.693 | - / 0.778 | - / 0.916 | - / 0.872 | - / 0.886 | - / 0.888 | - / 0.941 | 0.841 / 0.856
 | PointSSIM [37] | 0.831 / 0.806 | 0.765 / 0.742 | 0.964 / 0.936 | 0.902 / 0.866 | 0.741 / 0.733 | 0.955 / 0.951 | 0.811 / 0.809 | 0.715 / 0.733
 | TCDM [59] | - / 0.793 | - / 0.819 | - / 0.921 | - / 0.876 | - / 0.934 | - / 0.944 | - / 0.951 | 0.930 / 0.910
RR | PCMRR [7] | 0.271 / 0.279 | 0.014 / 0.029 | 0.187 / 0.175 | 0.398 / 0.428 | 0.093 / 0.006 | 0.509 / 0.430 | 0.265 / 0.132 | 0.263 / 0.219
NR | MVP [43] | 0.816 / 0.641 | 0.830 / 0.853 | 0.975 / 0.976 | 0.978 / 0.927 | 0.966 / 0.967 | 0.980 / 0.959 | 0.986 / 0.992 | 0.943 / 0.915
 | Proposed | 0.940 / 0.890 | 0.898 / 0.927 | 0.964 / 0.972 | 0.886 / 0.936 | 0.989 / 0.854 | 0.989 / 0.979 | 0.997 / 1.0 | 0.967 / 0.957
Table 12. Performance comparison with the state-of-the-art on each distortion type on WPC.
Type | Metric | VPCC | DS | GN | G-PCC Octree | G-PCC Trisoup | All (each cell: PLCC / SRCC)
FR | M-p2po [24] | 0.684 / 0.697 | 0.779 / 0.900 | 0.686 / 0.728 | - / - | 0.534 / 0.464 | 0.399 / 0.566
 | M-p2pl [4] | 0.702 / 0.705 | 0.724 / 0.849 | 0.677 / 0.737 | - / - | 0.521 / 0.462 | 0.395 / 0.446
 | H-p2po [24] | 0.254 / 0.445 | 0.755 / 0.904 | 0.662 / 0.688 | - / - | 0.243 / 0.293 | 0.166 / 0.258
 | H-p2pl [4] | 0.377 / 0.558 | 0.614 / 0.861 | 0.664 / 0.692 | - / - | 0.299 / 0.355 | 0.226 / 0.313
 | MMD [28] | 0.734 / 0.790 | 0.688 / 0.786 | 0.826 / 0.863 | 0.032 / 0.062 | 0.509 / 0.502 | 0.420 / 0.411
 | PCQM [3] | - / 0.643 | - / 0.875 | - / 0.886 | - / 0.894 | - / 0.821 | 0.751 / 0.743
 | GraphSIM [6] | - / 0.612 | - / 0.898 | - / 0.840 | - / 0.855 | - / 0.816 | 0.856 / 0.841
 | PointSSIM [37] | 0.379 / 0.365 | 0.872 / 0.835 | 0.670 / 0.586 | 0.783 / 0.791 | 0.657 / 0.681 | 0.460 / 0.450
 | TCDM [59] | - / 0.640 | - / 0.882 | - / 0.857 | - / 0.795 | - / 0.832 | 0.804 / 0.807
 | PSNR [40] | 0.290 / 0.199 | 0.678 / 0.539 | 0.829 / 0.653 | 0.773 / 0.780 | 0.329 / 0.196 | 0.498 / 0.460
RR | PCMRR [7] | 0.251 / 0.282 | 0.661 / 0.737 | 0.788 / 0.780 | 0.662 / 0.672 | 0.304 / 0.243 | 0.367 / 0.345
NR | MVP [43] | 0.966 / 0.956 | 0.971 / 0.939 | 0.999 / 1.000 | 0.995 / 1.000 | 0.936 / 0.914 | 0.925 / 0.930
 | Proposed | 0.938 / 0.957 | 0.995 / 0.986 | 0.989 / 0.983 | 0.967 / 0.926 | 0.975 / 0.959 | 0.938 / 0.936
Table 13. Evaluation across different databases, where the label “WPC→SJTU” denotes that the model is trained on the WPC database and validated using the standard testing configuration of the SJTU database.
Model | PLCC (WPC→SJTU) | SRCC (WPC→SJTU)
3D-NSS [13] | 0.2344 | 0.1817
PQA-net [9] | 0.6102 | 0.5411
MM-PCQA [60] | 0.7779 | 0.7693
Proposed | 0.8119 | 0.8193
