Article

Data-Driven Framework for Dimensional Quality Control in Automotive Assembly: Integration of PCA-BP Neural Network with Traceable Deviation Source Identification

by Xuemei Du *, Yutong Zhou, Lei Chen, Jingfei Li and Anli Ma
School of Economics and Management, Tongji University, Shanghai 200092, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(1), 37; https://doi.org/10.3390/app16010037
Submission received: 18 November 2025 / Revised: 14 December 2025 / Accepted: 17 December 2025 / Published: 19 December 2025

Abstract

The intelligent transformation of the manufacturing industry poses challenges to traditional quality control methods, particularly in handling redundant data and ensuring model interpretability within high-dimensional, multivariate assembly processes. This study presents an integrated approach combining Principal Component Analysis (PCA), a Back Propagation neural network (BP neural network), and permutation importance to improve quality prediction and traceability in the automotive body-in-white rear panel dimensional chain. The data for this study originate from the actual production process of an automotive manufacturer and comprise direct geometric measurements from the rear panel of a specific vehicle model’s Body-in-White (BIW). Measurements at key coordinate points that influence rear panel matching serve as the numerical input variables. The corresponding measurement result from the Skeleton Assembly is utilised as the output variable, which represents the final assembly quality and is treated as a numerical variable in this model. PCA is first applied to reduce dimensionality and eliminate data redundancy. Then, two types of neural networks—single and sequential—are constructed to model nonlinear relationships, with the single neural network exhibiting superior performance in accuracy (average R² > 95%) and generalisability (RMSE < 0.1). To address the lack of interpretability in conventional neural networks, the permutation importance of variables is assessed to pinpoint the primary sources of bias and to clarify the mechanisms of variable interactions. The automotive company’s practical validation demonstrates the model’s capability to predictively assess the effects of abrupt alterations in bodyside dimensions on rear panel matching quality. The close agreement between predicted (e.g., 1.053693) and actual (e.g., 1.01) values confirms model accuracy, diminishing the reliance on supplementary quality control resources. This study provides a traceable, data-driven framework for enhancing quality control in complex manufacturing assemblies.

1. Introduction

Although traditional product quality control methods have long served as the cornerstone of industrial competitiveness, the transition towards intelligent and high-end manufacturing introduces greater complexities and elevated standards for quality control. This is particularly evident in assembly processes, where dimensional compatibility directly determines product performance and stability. Conventional quality control approaches, which primarily rely on manual expertise and univariate statistical analysis [1], are often inadequate to address the error interactions and cumulative effects inherent to complex multi-stage manufacturing systems (MMS) [2]. As a result, quality variations often become difficult to trace back to specific processes or root causes.
With the advent of Industry 4.0, manufacturing is undergoing a data-driven and intelligence-centred transformation. Modern manufacturing systems, especially in domains like automobile manufacturing, are characterised not only by structural complexity and multi-step processes but also by high coupling and nonlinear behaviours. Traditional methods exhibit clear limitations in handling multi-source, high-dimensional data and uncovering hidden quality correlations, creating an urgent need for more intelligent and adaptive quality control strategies. The goal is to shift from post-event detection and reactive correction to predicting quality issues, identifying critical deviations, and tracing root causes. By integrating real-time sensor data, process parameters, and historical quality records, an intelligent quality control system can identify potential deviations in advance, pinpoint key process variables, and track error propagation paths across manufacturing stages. This transformation not only enables more precise and efficient quality management but also lays a solid foundation for building next-generation smart manufacturing systems with self-awareness and autonomous decision-making capabilities. In this context, machine learning and neural networks have emerged as pivotal technologies for enhancing process quality [3]. Artificial neural networks (ANNs) are among the most successful machine learning methods. Due to their flexible and parallel composition of neurons, and their ability to approximate arbitrary functions with various input forms, ANNs provide a feasible solution to modern engineering modelling [4,5]. The BP neural network denotes a specific category of feedforward ANN whose training is fundamentally based on the back propagation algorithm.
In recent years, BP neural networks have been successfully applied to general and adaptive quality prediction and control in the fields of manufacturing and processing [6,7], exhibiting advantages in rapid and accurate prediction.
Accordingly, this study provides an integrated dimensional chain quality prediction and traceability system that combines principal component analysis (PCA), BP neural networks, and permutation importance. Firstly, by using PCA to reduce high-dimensional correlated data and extract essential features, it eliminates redundancy to ensure reliable and precise model inputs. Secondly, it integrates neural network methodologies from a data-driven perspective to establish the nonlinear predictive relationship between body-in-white dimensions and matching quality. Finally, measuring the variables’ importance addresses the limitations of conventional neural networks in identifying the root causes of deviations and pinpointing the primary sources of these deviations. This method synergistically enhances prediction accuracy and interpretability and introduces a novel dimensional assembly quality analysis technique for automotive manufacturing.

2. Literature Review

Conventional assembly quality control relies on worker expertise and measurement instruments. While adequate for simple scenarios, it falls short in contemporary, complicated assemblies. These complex assemblies demand synchronised, hierarchical data management from diverse sources. To address this, physically based modelling approaches, particularly deviation propagation modelling, are suggested for identifying quality variations [8].
Recent mechanism-based models [9,10,11,12,13] mathematically analyse assembly deviations, overcoming empirical limits. However, their inability to precisely quantify complex multi-factor coupling and adapt in real-time (relying on offline modelling) renders them inadequate for modern complex assembly. Efficient online methods for prediction and control are urgently needed.
Developing sophisticated control models for predicting assembly quality is essential in intricate product production. Accurate forecasting of product quality metrics enables early issue detection and proactive strategies to maintain acceptable thresholds. Product quality prediction methods are often classified as either physical methods or data-driven approaches. Data-driven methodologies are more pragmatic and adaptable, involving training predictive models and extracting trends from previous data. New techniques offer benefits by eliminating the intricate modelling processes conventionally needed in assembly activities. Ref. [14] integrated quality tools, the Genetic Algorithm (GA), and the Distributed Computing Continuum (DCC); Ref. [15] used Bayesian sampling for joint design; Refs. [16,17] proposed a data-driven approach to creating a predictive grinding error model to improve tolerance allocation; and Ref. [18] applied Man, Machine, Material, Method, Measure, and Environment (5M1E)/Fuzzy Analytic Hierarchy Process (FAHP) to assembly error analysis.
Manufacturing firms must examine and handle substantial process data before employing quality prediction methodologies, with data quality being crucial in machine learning algorithms [19]. Consequently, adopting more stable and interpretable quality prediction models is imperative. Prevalent quality prediction methodologies encompass particle swarm optimisation [20,21], grey theory [22,23], support vector machines [24], neural networks [25], and random forest algorithms [26]. Each possesses distinct advantages tailored to various application contexts, data conditions, and problem types. Recent studies have advanced quality control through data-driven methods: Ref. [27] applied time-series Machine Learning (ML) to automotive bumper inspection, while Ref. [28] combined random forest and FAHP for aerospace defect root-cause analysis. For assembly deviation, Ref. [29] modelled transmission mechanisms, Ref. [30] developed online control for multidimensional error coupling, and Ref. [31] integrated digital twins for geometric deviation propagation analysis. Prediction algorithms were enhanced by [32], using Least Squares Support Vector Regression (LSSVR)-enhanced Particle Swarm Optimization (PSO) and Ref. [33], employing ML techniques (including random forests) for imbalanced data. In industrial manufacturing, neural network predictions often overlook critical information within the original data. Ref. [34] proposed hierarchical residual networks with stacked autoencoders; Ref. [35] boosted accuracy via multi-model fusion; and Ref. [36] applied the failure sign algorithm of the firefly neural network (FSAFNN) to gear machining deviations. Advanced detection was addressed by [37] using interpretable Artificial Intelligence (AI) for defect linkage mining, and Ref. [38] implemented multilayer transformers for multi-source time-series anomaly detection.
Currently, most existing assembly quality optimisation methods rely primarily on design models to achieve simulation-based offline optimisation. Assembly processes face uncertainties from measurement, location, fixturing, and tightening forces, limiting error source identification and quality assurance. Neural networks are among the most prevalent machine learning algorithms for process and quality management in manufacturing, primarily for real-time analysis or identifying defective products using picture recognition. Their workflow can be divided into two major processes, namely, forward propagation and backward propagation of errors (backpropagation). In the forward propagation process, the input layer receives external input signals and passes them to the hidden layer; the hidden layer performs a nonlinear signal transformation; the processed information is transmitted to the output layer, and the output layer outputs the result. In the backpropagation stage, an error is propagated backward from the output layer through the hidden layer(s) toward the input layer, and the weights of connections between neurons in each layer are updated according to the principle of error gradient descent [39]. Ref. [40] used power signals with feed-forward networks for spot weld diameter prediction, outperforming regression models. For complex manufacturing scenarios, Ref. [41] identified key quality features in multi-process production using PageRank algorithms, while Ref. [42] enhanced prediction precision with Phased Dual-Attention Long Short-Term Memory (PDA-LSTM) networks. Process-specific innovations included [43] optimising blast furnace control via a BP neural network, Ref. [6] developing a Random Subspace Method (RSM)-BP neural network for small-batch surface roughness prediction, and Ref. [44] combining Principal Component Analysis (PCA) dimensionality reduction with deep neural networks for internal crack analysis. Ref. [45] used a BP neural network to predict adjustment errors, and heuristic algorithms guided by the structural characteristics of the BP neural network are embedded into the Machine Learning framework to construct a bi-level optimisation strategy that enhances model performance.
Despite advancements in physical modelling and data-driven techniques in contemporary assembly quality control research, a substantial bottleneck persists regarding the demand for high-dimensional and tightly coupled dimensional chain analysis in automotive manufacturing. Physical models’ offline nature impedes dynamic coupling quantification, while prevalent data-driven approaches (e.g., LSTM, Random Forest) suffer from “black-box” limitations that obscure deviation sources and hinder actionable optimisation. For complex assemblies like body-in-white—characterised by high dimensionality and multicollinearity—current methods rely on manual feature filtering or sequential prediction/traceability, risking inefficiency and error. Neural networks correlate parameters with quality but miss transmission paths; traditional statistics (e.g., ANOVA) identify problematic processes but fail with nonlinear interactions and real-time data. Moreover, most research compartmentalises data dimensionality reduction, predictive modelling, and root cause analysis, overlooking the possibility for synergistic optimisation among these elements, leading to inadequate model flexibility in dynamic production environments.

3. Data Collection, Preprocessing, and Analysis

3.1. Dimensional Chain Construction for Body-in-White Assembly

To analyse the rear panel fit quality, the assembly drawing of the rear panel must first be established to support subsequent node-to-node influence analysis, using a certain automobile company as an example, as shown in Figure 1. Since body-in-white components are typically joined by welding, with sequences limited by accessibility and part interference, optimising the process design requires systematic assembly sequence modelling. This study uses a directed graph to represent the assembly relationship, capturing both sequence dependency and physical connection logic. The assembly graph is defined as $G = \{P, D\}$, where $P = \{P_1, P_2, \ldots, P_n\}$ is the set of components and $D = \{D_1, D_2, \ldots, D_n\}$ is the set of relationships describing their connections. Solid lines indicate direct physical connections, while dashed lines represent sequence constraints due to welding accessibility; for example, if part $P_j$ blocks the weld area of $P_i$, then $P_i$ must be assembled before $P_j$.
$$R = (r_{ij})_{n \times n}, \qquad r_{ij} = \begin{cases} 1, & \text{physical connection, part } i \text{ precedes assembly } j \\ -1, & \text{physical connection, part } i \text{ later than assembly } j \\ \lambda, & \text{not directly connected, part } i \text{ prior to assembly } j \\ -\lambda, & \text{not directly connected, part } i \text{ later than assembly } j \\ 0, & \text{no constraint relationship} \end{cases}$$
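As a minimal illustration, the relation matrix above can be populated programmatically. The part indices, edges, and the constraint weight λ below are hypothetical examples, not the actual rear-panel topology:

```python
import numpy as np

def build_relation_matrix(n, physical_edges, sequence_edges, lam=0.5):
    """Sketch of the relation matrix R = (r_ij).

    physical_edges: (i, j) pairs where part i is physically connected to
        assembly j and precedes it (entry 1; mirrored entry -1).
    sequence_edges: (i, j) pairs where i must precede j without a direct
        connection, e.g. a welding-accessibility constraint (entry lam; -lam).
    Entries left at 0 mean no constraint relationship.
    """
    R = np.zeros((n, n))
    for i, j in physical_edges:
        R[i, j] = 1.0
        R[j, i] = -1.0
    for i, j in sequence_edges:
        R[i, j] = lam
        R[j, i] = -lam
    return R

# Hypothetical 4-part example: 0 -> 1 -> 2 physically, 0 before 3 by sequence.
R = build_relation_matrix(4, physical_edges=[(0, 1), (1, 2)], sequence_edges=[(0, 3)])
```

The mirrored negative entries make the matrix encode both "precedes" and "follows" views of each constraint, so the assembly order can be read from either row or column.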
Based on the architectural diagram and the welding accessibility employed by the automobile company, the pertinent components of the rear panel can be interrelated as depicted in Figure 2. The numbers correspond to those in Figure 1. Solid lines indicate direct physical connections between components, while dashed lines denote a sequential order of operations between components that are not directly connected. This convention stems from practical manufacturing constraints. During auto body welding, many components cannot be welded simultaneously due to spatial interference.
Based on Figure 2, the adjacency matrix of the directed graph $G = \{P, D\}$ can be obtained. Through graph theory transformation, it can be converted into numerical sorting data while retaining the assembly priority, so that the computer can recognise the assembly sequence represented by {} for subassemblies and () for subordinate components, while maintaining the hierarchical relationship. The interrelations among specific assemblies can be articulated in the following manner:
$$\{\Gamma_1, \Gamma_2, \ldots, \Gamma_s, \ldots, \Gamma_r, \Gamma_x, \Gamma_y, \ldots\}$$
Let $\Gamma_s$ represent the assembly, defined as $\Gamma_s = \{p_1, p_2, \ldots, p_l\}$ $(1 \le l \le n)$, $p_i \in P$, where $P = \{p_1, p_2, p_3, \ldots, p_n\}$ and $n$ signifies the total number of pieces.
Consequently, the assembly dimension chain for the whole vehicle of this automobile company may be articulated as follows:
$$\{\{(6)(5)(2)\},\ \{(16)(17)(18)(11)(10)(12)(4)\},\ \{(13)(14)(15)(8)(9)(7)(3)\},\ (1)\}$$
The assembly sequence of the various components and their interrelated assembly relationships can be identified from these brackets; specifically, (2), (4), and (3) represent the primary assemblies, while {(6)(5)(2)} denotes a subassembly, which is ultimately integrated to constitute assembly (1).

3.2. Data Collection

This study focuses on the dimensional matching quality of a vehicle’s rear section, which is evaluated by comparing the physical measurements against theoretical design specifications. The analysis is based on a dataset obtained from the automobile manufacturer, comprising rear panel measurement data for a specific Body-in-White model from June 2022 to October 2023. All data were acquired using a calibrated Three-Coordinate Measuring Machine (CMM) system, which provides an objective assessment of deviations at key installation and mating points. The fundamental quality criterion is whether the foundational dimensions of the rear section are within the specified tolerance range. Consequently, if it is within the tolerance range, the body matching quality is considered to have met the required control standards.
To construct the dimensional chain for rear panel fit quality, measurement points related to five key assemblies are selected based on engineering expertise: Side Inner Panel Assembly (039), Side Outer Panel Assembly (051), Rear Floor Assembly (101), Floor Assembly (709), and Body-in-White Skeleton Assembly (701), corresponding, respectively, to parts (4), (2), (8), (3), and (1) as shown in the previous section. Assemblies 709 and 701 are re-assessed daily, while the remaining subassemblies are evaluated twice a week on average. Additionally, overlapping measurement points exist between sub-assemblies and assemblies, resulting in repeated assessments of their respective components. A total of 119,910 measurement data points is ultimately accumulated.
Using the measurement point ‘NLASL1201_V_AA’ in 709 as an example, the measurement data is shown in Table 1. X, Y, and Z serve as the primary criteria for dimensional evaluation, whereas the standard D value is employed to augment the assessment of surface alterations.

3.3. Data Preprocessing

The initial modelling attempts using the raw measurement data yield suboptimal prediction performance, indicating the likely presence of significant noise, sporadic measurement errors, and non-systematic variations. To enhance the signal-to-noise ratio and extract the stable, systematic relationships crucial for quality prediction, a targeted preprocessing workflow is designed and implemented. The workflow comprises three sequential stages: data classification, gross error elimination, and data denoising, utilising selected data from 709 as examples. Its overall effectiveness is validated by the subsequent improvements in model stability and predictive accuracy.

3.3.1. Data Classification

The unprocessed Comma-Separated Values (CSV) data are classified and structured by measurement points to create an analytical matrix-type data framework encompassing multidimensional measurement metrics, with the reorganised data presented in Table 2.

3.3.2. Gross Error Elimination

Prior to dimensionality reduction with PCA and training the neural network, all numerical features (i.e., the X, Y, and Z values of each measurement point) are standardised to have a mean of zero and a unit standard deviation. This step is crucial for PCA, which is sensitive to the scale of variables, and it also stabilises and accelerates the convergence of the neural network training.
The standardisation is applied to each data point independently using the Z-score normalisation formula:
$$x_{\text{standardised}} = \frac{x - \bar{x}}{\sigma}$$
where $x$ is the original value of a data point for a given feature, $\bar{x}$ is the mean of that feature calculated from the data set, and $\sigma$ is its standard deviation. The calculation formulae for $\bar{x}$ and $\sigma$ are as follows:
$$\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$$
$$\sigma = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2}$$
This work employs the $3\sigma$ criterion for gross error rejection based on normally distributed data. The computation of all X-direction data for measurement point NLASL1201_V_AA yields $\bar{x} = -0.0776$ and $\sigma = 0.4827$. The upper and lower tolerances of the data are calculated based on these two values, as shown in Equations (6) and (7).
$$\text{Upper tolerance} = \bar{x} + 3\sigma = 1.3704$$
$$\text{Lower tolerance} = \bar{x} - 3\sigma = -1.5257$$
All data sources are evaluated, and those that satisfy Equation (8) are removed. The σ value is recalculated, and the new σ value is used to screen gross errors further until no gross errors remain.
$$v_d = |x_d - \bar{x}| > 3\sigma$$
Upon eliminating the three outliers ($-1.64$, $1.38$, and $1.5$), we obtain the new $\bar{x} = -0.0777$ and $\sigma = 0.4626$, and retain the updated measurement data points.
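The iterative 3σ rejection described above can be sketched as follows; the synthetic series (and its two injected gross errors) is illustrative only, not the actual NLASL1201_V_AA data:

```python
import numpy as np

def reject_gross_errors(x):
    """Iterative 3-sigma gross-error rejection: recompute the mean and the
    sample standard deviation after each pass, removing points with
    |x_d - x_bar| > 3*sigma, until no gross errors remain."""
    x = np.asarray(x, dtype=float)
    while True:
        x_bar = x.mean()
        sigma = x.std(ddof=1)  # ddof=1 matches the (n - 1) denominator above
        keep = np.abs(x - x_bar) <= 3 * sigma
        if keep.all():
            return x
        x = x[keep]

# Synthetic series with roughly the reported mean/scale, plus two gross errors.
rng = np.random.default_rng(42)
data = np.concatenate([rng.normal(-0.08, 0.46, 500), [5.0, -6.0]])
cleaned = reject_gross_errors(data)
```

Recomputing σ after each pass matters: an extreme outlier inflates σ on the first pass, so milder gross errors may only fall outside the 3σ band once it has shrunk.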

3.3.3. Data Denoising

This paper analyses the data distribution of NLASL1201_V_AA following the removal of gross errors, utilising a histogram and box plot, as shown in Figure 3. In Figure 3, the solid red line represents the theoretical normal distribution, while the dashed red lines and the blue shaded band both indicate the boundaries of the confidence interval. The black data points are the observed values, and assessing their position relative to the theoretical line and confidence band helps determine whether the data significantly deviate from a normal distribution. The analysis reveals that the measurement point exhibits normal distribution, with noise predominantly concentrated at both extremes of the data. A Gaussian filter, which is more responsive to normally distributed data, is chosen for processing to achieve data smoothing.
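The normality assessment underlying Figure 3 can be approximated with SciPy's standard tools; the synthetic series below merely stands in for the cleaned NLASL1201_V_AA measurements (mean and scale follow Section 3.3.2):

```python
import numpy as np
from scipy import stats

# Stand-in for the 434 cleaned X-direction measurements.
rng = np.random.default_rng(3)
x = rng.normal(-0.0777, 0.4626, 434)

# D'Agostino-Pearson omnibus test; H0: the data come from a normal distribution.
stat, p_value = stats.normaltest(x)

# probplot returns the theoretical vs. ordered sample quantiles behind a
# Q-Q style comparison like Figure 3 (here without plotting); r is the
# correlation of the quantile-quantile fit.
(osm, osr), (slope, intercept, r) = stats.probplot(x)
```

A high quantile-quantile correlation together with a non-significant test statistic supports treating the measurement point as normally distributed, which in turn justifies the Gaussian filter chosen below.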
The fundamental parameter of Gaussian filtering is the standard deviation σ, which dictates the extent of the filter’s smoothing effect. This study employs the Gaussian filtering function from the SciPy library in Python 3.9 to assess the Mean Squared Error (MSE) of the filtering outcomes across various σ values. For σ settings of 1.0, 1.5, and 2.0, the MSE values obtained are 0.0709, 0.1223, and 0.1107, respectively. Since a lower MSE indicates superior performance, data filtered with a Gaussian filter with σ = 1 is ultimately selected for subsequent data mining.
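A sketch of this σ-selection step is given below. The paper does not specify the baseline against which its MSE values are computed; here the MSE is taken against the unfiltered series, so the numbers will not match the paper's exactly, and the data are synthetic:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Synthetic stand-in for the cleaned measurement series.
rng = np.random.default_rng(0)
series = rng.normal(-0.08, 0.46, 434)

# Filter with each candidate kernel width and score the deviation from the
# raw series (one possible MSE definition; the paper's baseline may differ).
mse = {}
for sigma in (1.0, 1.5, 2.0):
    smoothed = gaussian_filter1d(series, sigma=sigma)
    mse[sigma] = float(np.mean((series - smoothed) ** 2))

best_sigma = min(mse, key=mse.get)  # smallest deviation from the raw series
```

With this baseline, larger σ smooths more aggressively and therefore departs further from the raw series, so the smallest candidate wins; the paper's non-monotonic values (0.1223 at σ = 1.5 vs. 0.1107 at σ = 2.0) suggest it scores against a different reference.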
As shown in Figure 4, the filtering effectively eliminates extreme noise while preserving the distribution tails, retaining the original features, and reducing the computational complexity of subsequent mining. Upon completion of data cleansing, a total of 88,970 measurement data points are retained.
The preprocessing steps are essential for creating a robust and stable dataset for model training, effectively removing measurement noise and sporadic, non-systematic errors. However, the 3σ-based outlier removal and Gaussian smoothing, while targeting random noise, may also attenuate or remove rare yet potentially significant abnormal patterns that deviate from the normal process distribution. These patterns could correspond to infrequent but critical process faults, incoming material defects, or unique assembly events. Since this study aims to establish a reliable prediction model for routine dimensional control, by filtering these noises, the model is primarily optimised for predicting quality under stable, standard operating conditions; its ability to predict outcomes under extreme or unforeseeable fault conditions may be limited.

4. Model Development Based on PCA-BP for Dimensional Matching Quality Prediction

4.1. PCA Optimisation of Input Parameters

In this study, PCA is employed not for the conventional purpose of linear feature transformation, but as a robust feature selection technique to identify the most informative original measurement points from a high-dimensional, multicollinear dataset [46]. We apply standard linear PCA for dimensionality reduction and feature selection. The calculations are performed using the PCA class from the scikit-learn library (version 1.3.0) in Python.
The BIW Assembly dimensional data is characterised by strong correlations among numerous measurement points due to physical constraints and process couplings. Simple univariate metrics, such as Pearson correlation coefficients between individual predictors and the target, could identify locally relevant points but may fail to capture the underlying, system-level variation patterns that govern overall assembly quality. PCA, by identifying the orthogonal directions of maximum variance in the entire predictor space, effectively reveals these dominant modes of variation. We then trace back to the original variables that contribute most to these key PCs.
This approach ensures that the selected predictors are not only individually significant but also collectively representative of the major systematic variation sources in the assembly process. The retained original measurement point values are then used directly as inputs to the BP neural network, preserving their physical interpretability for subsequent root cause analysis, which would be obscured if transformed principal components were used.
The dimensional matching quality of the rear panel is influenced by a multitude of pertinent factors. Factory-manufactured components (such as the Floor Assembly zone) are measured at an elevated frequency, whereas incoming components from suppliers (such as the Body Side compartment) are measured comparatively infrequently; directly integrating the dimensional reports of incoming components into the model could therefore introduce unnecessary interference caused by the differing measurement cycles. We consequently prioritise higher-level, factory-controlled components, preserving solely the 101 data within 709. The model input therefore comprises five essential components: 101 (rear floor assembly), 051 (side outer panel assembly), 039 (side inner panel assembly), 709 (floor assembly), and 701 (body-in-white skeleton assembly).
From the perspective of data analysis, 233 measurement points are recorded across the various locations; including all of them would considerably elevate the computational complexity and resource consumption of the model. To manage the 233 measurement points and their correlations, PCA is employed to reduce dimensionality, removing superfluous variables while preserving essential features. Using the refined dataset of assembly 101 as a case study, this section details PCA’s application process and key phases in rear panel dimensional modelling.

4.1.1. PCA Application Using 101 as an Example

Based on the previously cleaned and normalised data, the 101 area totals 33 variables, with 434 data points per variable and 14,322 data points cumulatively. The Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy is 0.699, and Bartlett’s test of sphericity indicates $\chi^2 = 21{,}404.402$, $df = 528$, and $p < 0.001$. These results suggest that the data are suitable for PCA.
The PCA shows that the first eight principal components (PCs) explain 91.45% of the total variance: PC1 (eigenvalue 2.975) contributes 41.46%, and PC2 (eigenvalue 1.927) contributes 26.85%. Given their dominant cumulative contribution and the minimal impact of later components, these eight PCs are selected as the evaluation criteria.
Based on the eight selected PCs, the contribution of variables to the PCs is ranked from highest to lowest according to PC1, and the statistical results are shown in Table 3. To facilitate the selection of high-contribution variables, plotting all data yields a combined histogram of all 33 variables across the eight PCs. Figure 5 shows the aggregated histogram for these components.
To more accurately assess the importance of each original variable in explaining the overall data variation, we adopt a weighted contribution calculation method. This method considers the proportion of each PC in explaining the overall variance, thereby assigning appropriate weights to the variable contributions within different PCs.
Let $w_i$ denote the variance contribution ratio (i.e., the proportion of its eigenvalue to the total of all eigenvalues) of the $i$-th principal component, and let $\lambda_{ni}$ represent the original contribution of variable $n$ in the $i$-th PC.
The number of PCs is selected through their cumulative contribution rate [47], which can effectively demonstrate the importance of their comprehensive influence in the measurements. By summing the contributions of each variable across the selected eight PCs, the cumulative contribution of each variable to all PCs can be calculated. Thus, the following formula can be used for calculation, where p represents the total number of variables.
Cumulative weighted contribution of individual variables:
$$\sum_{i=1}^{8} w_i \cdot \lambda_{ni}, \quad n = 1, 2, \ldots, p$$
Cumulative weighted contribution of all variables:
$$\sum_{n=1}^{p} \sum_{i=1}^{8} w_i \cdot \lambda_{ni}$$
Cumulative weighted contribution share of individual variables:
$$\frac{\sum_{i=1}^{8} w_i \cdot \lambda_{ni}}{\sum_{n=1}^{p} \sum_{i=1}^{8} w_i \cdot \lambda_{ni}}, \quad n = 1, 2, \ldots, p$$
Parameter explanation for the formulas: for an original matrix with $n$ samples and $p$ indicators, we first compute the column-wise mean $\bar{x}_j$ and standard deviation $S_j$; then, we transform the raw data $x_{ij}$ into standardised data $X_{ij} = \frac{x_{ij} - \bar{x}_j}{S_j}$ to obtain the standardised matrix $X$. Next, based on this standardised matrix, we calculate the $p \times p$ covariance matrix $R$, where the element $r_{ij}$ represents the correlation coefficient between indicator $i$ and indicator $j$. Finally, we solve for the eigenvalues of the covariance matrix $R$, which satisfy $\lambda_1 \ge \lambda_2 \ge \cdots \ge 0$ [48].
The cumulative weighted contribution ratio of all variables is calculated and ranked across the eight PCs, as shown in Table 4. This helps in analysing the cumulative contribution rates and reducing dimensionality. The first 11 variables account for the top 80% of the contribution to information variation. To optimise the analysis cost and reduce the number of calculations, these variables are selected as the input variables for the BP neural network.
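The weighted-contribution ranking defined by the formulas above can be sketched as follows. Only the 434 × 33 shape, the eight retained PCs, and the 80% threshold follow the text; the data matrix itself is synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for area 101: 434 samples x 33 measurement variables.
rng = np.random.default_rng(1)
X = rng.normal(size=(434, 33))

X_std = StandardScaler().fit_transform(X)
k = 8
pca = PCA(n_components=k).fit(X_std)

w = pca.explained_variance_ratio_     # w_i: variance contribution ratio per PC
loadings = np.abs(pca.components_)    # |lambda_ni|: variable loading magnitudes, shape (k, p)

# Cumulative weighted contribution of each variable across the k PCs,
# then its share of the total (the three formulas above).
contrib = (w[:, None] * loadings).sum(axis=0)
share = contrib / contrib.sum()

# Rank variables by share and keep those covering the top 80%.
order = np.argsort(share)[::-1]
cum = np.cumsum(share[order])
selected = order[: np.searchsorted(cum, 0.80) + 1]
```

Because the selection is traced back to original measurement points rather than transformed components, the chosen inputs keep their physical meaning for the later root-cause analysis.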

4.1.2. Principal Component Evaluation and Selection

For 101, the 11 selected variables (for ease of understanding, hereinafter referred to as “measurement points”) are distributed on the digital model as shown in Figure 6.
These points cluster primarily on the inclined support of the rear floor assembly and the flange edge area on the side of the rear floor. The analysis shows that these regions involve non-focused flange edges. During production, conventional Reference Point System (RPS) areas and critical surfaces are secured in position using pins and fixtures. However, the flange edges and styling surfaces have no dedicated clamping points and instead rely on the inherent stiffness of the stamped part to maintain their shape. Crucially, the inclined support and flange edge area form the main welding joint between the rear panel and side inner panel, ensuring final connection dimensions.
Similarly, PCA was performed on the four major areas (051, 039, 709, and 701) in the same way, with the selected measurement points being 32 for 701, 23 for 039, 21 for 051, and 9 for 709. The 701, 039, and 051 regions possess a greater number of measurement points, as these areas are monitored comprehensively, encompassing both the inner and outer side panels along with the final vehicle dimensions; therefore, the corresponding impact area is larger. Consequently, this paper uses filtered data from these high-value measurement points to develop the neural network model.

4.2. Configuration of the BP Neural Network Algorithm

The BP neural network is a multi-layer feedforward network trained via the error back-propagation algorithm. The core of its learning process is gradient descent, which iteratively adjusts the network's weights and biases to minimise a predefined loss function (e.g., MSE) between the actual and desired outputs. The architecture and learning dynamics of the network are governed by a set of hyperparameters, including the number of hidden layers, the number of neurons per layer, the choice of activation functions, and the learning rate. These hyperparameters are typically set empirically before training begins. A BP neural network can effectively model the complex geometric constraints and cumulative tolerance relationships in a dimensional chain, automatically adjusting weights and thresholds based on errors. This adaptability allows the model to continuously iterate and optimise, adapting to new dimensional chain datasets. For example, Liu et al. [49] used a BP neural network to perform accurate dimensional prediction and precision verification of a weld seam dimensional chain model.
This paper investigates two neural network modelling strategies, a single neural network and a sequential neural network, considering the hierarchical nature of measurement points within the part structure. To identify the most suitable network for the automobile company, both strategies are used to construct models separately, and the final method is determined based on the validation results. From the PCA results in Section 4.1.2, a cumulative total of 41,664 data points is obtained from 96 measurement points (434 data points per measurement point). The model allocates 70% of the data to the training set (29,184 measurement data points), 20% to the validation set (8352 points), and retains 10% for model testing (4128 points). The BP neural network model in this study is implemented using the MLPRegressor class from the scikit-learn machine learning library (version 1.3.0) in Python.
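Under the split and implementation details stated above, the single-network setup can be sketched roughly as follows. The data here are synthetic stand-ins with the stated shapes, and note that scikit-learn's MLPRegressor does not offer a Gaussian activation, so tanh is used as a placeholder.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# synthetic stand-ins for the 64 input and 32 output measurement columns
rng = np.random.default_rng(0)
X = rng.normal(size=(434, 64))
Y = rng.normal(size=(434, 32))

# 70% train, 20% validation, 10% test
X_train, X_tmp, Y_train, Y_tmp = train_test_split(X, Y, test_size=0.30,
                                                  random_state=0)
X_val, X_test, Y_val, Y_test = train_test_split(X_tmp, Y_tmp, test_size=1/3,
                                                random_state=0)

model = MLPRegressor(hidden_layer_sizes=(65, 35),  # two hidden layers
                     activation="tanh",            # stand-in; see note above
                     alpha=1e-4,                   # L2 weight-decay penalty
                     learning_rate_init=0.1,
                     max_iter=500,
                     random_state=0)
model.fit(X_train, Y_train)
val_score = model.score(X_val, Y_val)  # R^2 on the validation set
```

On real data, `val_score` is the validation-set $R^2$ reported in the evaluation tables; the hidden-layer sizes follow the [65/35] configuration discussed later.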

4.2.1. Data Matrix Construction and Rationale

The input data matrix X and output matrix Y for the BP neural network are constructed as in Table 5. Each row in the dataset represents a unique assembly instance (e.g., a specific vehicle body at a specific measurement time). The columns represent the dimensional variables of different areas.
This design is dictated by the physical assembly sequence and deviation propagation path shown in Figure 2. Components 101, 051, 039, and 709 are precursors and sub-assemblies that are joined together to form the final 701 Skeleton Assembly. Their individual dimensional states collectively determine the final dimensional state of 701. Modelling the relationship Y = f ( X ) directly captures this cumulative effect of upstream variation on the final assembly quality, enabling predictive rather than reactive control.
The matching of data rows is achieved by aligning timestamps and product identifiers. For each completed 701 assembly sample, the measurement system traces back and associates the most recent pre-assembly measurement records of all its upstream areas (101, 051, 039, 709) based on the production timestamp or batch number. Despite differences in measurement frequencies across these components, this chronological backtracking method constructs corresponding input variables that reflect the pre-assembly state for each of the 701 variables. During the modelling process, the measurement for each variable (e.g., NLASL1201_V_AA) is one-dimensional; the X, Y, and Z values are each incorporated as independent feature columns into the input matrix, ensuring that the model learns the causal relationship from the complete spatial deviations of upstream parts to the final assembly quality.
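The chronological backtracking step can be illustrated with pandas' `merge_asof`, which attaches, to each downstream sample, the most recent upstream record at or before its timestamp. The frame contents and column names here are hypothetical, chosen only to mirror the naming convention in the text.

```python
import pandas as pd

# hypothetical 701 assembly samples and upstream 101 measurements
df_701 = pd.DataFrame({"timestamp": pd.to_datetime(["2024-01-02 10:00",
                                                    "2024-01-02 14:00"]),
                       "NLSTS1235_O_BA_701": [0.41, 0.38]})
df_101 = pd.DataFrame({"timestamp": pd.to_datetime(["2024-01-02 08:00",
                                                    "2024-01-02 12:30"]),
                       "NLASL1201_V_AA_Y": [0.12, 0.15]})

# for each 701 sample, attach the most recent preceding 101 record
matched = pd.merge_asof(df_701.sort_values("timestamp"),
                        df_101.sort_values("timestamp"),
                        on="timestamp", direction="backward")
```

Repeating this merge for each upstream area (051, 039, 709) yields one input row per completed 701 sample, reflecting the pre-assembly state of all precursors.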

4.2.2. Single Neural Network Model

This neural network models all measurement points directly. The measurement points of 101, 051, 039, and 709 are used as X factors, amounting to a total of 64 groups with 27,776 measurement data points. The data of 701, totalling 32 groups with 13,888 measurement data points, are used as Y factors.
Two hidden layers are chosen to balance dimensional-chain complexity against the overfitting risk of excessive layers. The number of neurons in the hidden layers is determined by the following empirical Equation (12) [50]:
$$N_h = \frac{N_s}{\alpha \times (N_i + N_o)}$$
where $N_i$ denotes the number of neurons in the input layer; $N_o$ signifies the number of neurons in the output layer; $N_s$ represents the number of samples in the training set; and $N_h$ indicates the computed number of hidden neurons. $\alpha$ is a self-assigned coefficient, typically taking values in the range of 2 to 10.
Based on this empirical formula, the maximum and minimum neuron counts are calculated separately:
$$N_h^{\max} = \frac{29{,}184}{2 \times (64 + 32)} = 152$$
$$N_h^{\min} = \frac{29{,}184}{10 \times (64 + 32)} \approx 30$$
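Equation (12) and the two bounds can be checked directly; this trivial sketch assumes rounding to the nearest integer, which matches the values quoted in the text.

```python
# N_h = N_s / (alpha * (N_i + N_o)), evaluated at alpha = 2 and alpha = 10
def neuron_range(n_samples, n_in, n_out, alpha_min=2, alpha_max=10):
    n_max = round(n_samples / (alpha_min * (n_in + n_out)))
    n_min = round(n_samples / (alpha_max * (n_in + n_out)))
    return n_min, n_max

bounds = neuron_range(29184, 64, 32)  # single-network case: (30, 152)
```

The same helper reproduces the bounds for the sequential sub-networks discussed later (6076 samples with 9 + 11 points, and 25,823 samples with 53 + 32 points).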
The number of neurons in the hidden layers is tuned within the range of 30 to 152. In this study, a total of 100 neurons is chosen as the starting point for the network configuration. Within this architecture, more neurons are allocated to the first hidden layer (65) than to the second (35).
Employing the two-hidden-layer architecture is based on a balance between model capacity and the risk of overfitting. The dimensional chain problem, while nonlinear, does not require excessively deep feature transformations. A single hidden layer may lack the expressive power to capture complex interactions among the numerous factors in the dimensional chain model, whereas three or more layers could easily overfit due to the high dimensionality of the input and the limited number of training samples. Adding more layers would also increase computational cost without a guaranteed improvement in performance.
The initial neuron counts (65 and 35) are derived from the empirical rule (Equation (12)) as a starting point. Recognising the general nature of this rule, we conduct a systematic experimental evaluation to justify our final choice. We test multiple network configurations around the initial estimate, including but not limited to [50/25], [80/40], and [100/50]. The performance of these architectures is compared to the validation set using the primary metrics ( R 2 and RMSE).
The [65/35] configuration consistently achieves an optimal balance: it delivers high accuracy (e.g., $R^2 > 0.95$) while maintaining lower validation error than smaller networks such as [50/25], and exhibits less overfitting (a smaller generalisation gap between the training and validation sets) than larger networks such as [100/50]. Therefore, the [65/35] architecture is selected as the optimal configuration for the single neural network model. The sequential neural network described below is trained and tuned following the same procedure.
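The architecture sweep described above can be sketched as a simple loop over the candidate configurations; the data are synthetic stand-ins, and the validation comparison mirrors the procedure in the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(434, 64))   # synthetic stand-in data
Y = rng.normal(size=(434, 32))
X_tr, X_val, Y_tr, Y_val = train_test_split(X, Y, test_size=0.3,
                                            random_state=1)

results = {}
for layers in [(50, 25), (65, 35), (80, 40), (100, 50)]:
    m = MLPRegressor(hidden_layer_sizes=layers, max_iter=200,
                     random_state=1).fit(X_tr, Y_tr)
    # record validation R^2 and the train/validation generalisation gap
    results[layers] = (m.score(X_val, Y_val),
                       m.score(X_tr, Y_tr) - m.score(X_val, Y_val))
```

On the real data, the configuration with the highest validation $R^2$ and the smallest gap is retained, which is how [65/35] was selected.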
The Gaussian activation function is employed in the hidden layers because the data are approximately normally distributed. The learning rate is typically chosen in the range 0–1, and the conventional default value of 0.1 is used. Given the localised dimensional changes in the dimensional chain, weight decay is adopted as a training penalty. The number of iterations is set to five for each model to keep training timely. The finalised BP network hyperparameters are summarised in Table 6.

4.2.3. Sequential Neural Network Model

Sub-models are developed sequentially per the assembly hierarchy: first from 101 to 709, then integrating 709, 039, and 051 to reach the 701 body assembly. Sequential building requires the output of the first network (101 → 709) to feed into the second, so the models for 101 and 709 must be designed first. Data selection (training, validation, test) follows the previous methodology, differing only in the new network parameters.
Area 101 has 11 measurement points and 709 has 9. Neuron and layer counts for the first network are calculated using the established formula and principles:
$$N_h^{\max} = \frac{6076}{2 \times (9 + 11)} \approx 152$$
$$N_h^{\min} = \frac{6076}{10 \times (9 + 11)} \approx 30$$
Based on these principles, the initial model uses 30 hidden neurons (20 in the first layer, 10 in the second), with other parameters unchanged. Using the 709 prediction values alongside the 051 and 039 data (totalling 53 input components) as inputs, the sequential neural network then calculates the 701 output response (32 output factors) with two hidden layers totalling 100 neurons (65 in the first, 35 in the second).
$$N_h^{\max} = \frac{25{,}823}{2 \times (53 + 32)} \approx 152$$
$$N_h^{\min} = \frac{25{,}823}{10 \times (53 + 32)} \approx 30$$

5. Results and Discussion

5.1. Performance of Different Models

5.1.1. Model Performance Evaluation Metrics

In this study, we selected several indicators to quantitatively analyse the performance of the various models, as shown in Table 7. $y_i$ is the true value, $\tilde{y}_i$ is the predicted value, $\bar{y}$ is the mean of the observed series, and $m$ indicates the size of the sample collection.
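The metrics in Table 7 can be computed directly from their definitions; the following is a minimal reference implementation (not the authors' code):

```python
import numpy as np

def eval_metrics(y_true, y_pred):
    """R^2, RMSE, MAE, and SSE for one measurement point."""
    err = y_true - y_pred
    sse = float(np.sum(err ** 2))             # sum of squared errors
    mae = float(np.mean(np.abs(err)))         # mean absolute error
    rmse = float(np.sqrt(np.mean(err ** 2)))  # root mean squared error
    r2 = 1.0 - sse / float(np.sum((y_true - y_true.mean()) ** 2))
    return {"R2": r2, "RMSE": rmse, "MAE": mae, "SSE": sse}

# toy example: R2 works out to approximately 0.98
m = eval_metrics(np.array([1.0, 2.0, 3.0, 4.0]),
                 np.array([1.1, 1.9, 3.2, 3.8]))
```

These are the quantities compared across the training, validation, and test sets in the evaluation that follows.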

5.1.2. Comparison and Optimisation of Neural Network Models

After running the neural network models on the training set, the predicted values and model formulas of 701 measurement points can be obtained. The model is then adjusted using the validation set, and its final performance is evaluated on the test set. By comparing the performance metrics across the training, validation, and test sets, the model’s goodness of fit and generalisation capability can be comprehensively assessed.
Table 8 presents the outcomes of NLSTS1235_O_BA_701, demonstrating the comprehensive operation of the single neural network, which will serve as a case study for the evaluative methodology in this paper.
The 70%–20%–10% training–validation–test split provides robust model evaluation. Validation across the three sets confirms the model's superior performance: $R^2 > 0.95$ on all sets indicates an excellent fit, while low prediction errors (Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Sum of Squares due to Error (SSE)) demonstrate high accuracy.
The comprehensive analysis of the aforementioned results indicates that the neural network model exhibits superior performance across all evaluation metrics of NLSTS1235_O_BA_701. The model demonstrates a strong fit on the training data while exhibiting comparable performance on the validation and test sets, indicating robust generalisation capabilities and effective prediction of new data. This model is expected to be dependable and efficient in actual applications. All measurement data points are evaluated using the same procedure, with the results illustrated in Figure 7.
Figure 7 shows that most points exhibit high model usability; the exception is NLSTS1308_R_BA_Y_701, although even there the SSE declines across the three datasets (5.5564 → 2.3324 → 0.8594). This suggests that the model still possesses satisfactory predictive performance on the test set, negating the need for further model optimisation.
The evaluation results of the sequential neural network are shown in Figure 8. It is found that there are significant differences in the three datasets at many points.
Analysis of the randomly selected NLSTS1274_O_BA_701 reveals significant disparities across datasets ($R^2$ of 0.8731, 0.7489, and 0.6903 on the training, validation, and test sets, respectively), indicating overfitting. Increasing the neuron count (up to the maximum of 152) is prioritised over adding layers to enhance fit without excessive training time. The final configuration is 100 neurons in the first layer and 52 in the second. Figure 9 shows that the optimised outputs (light lines) significantly outperform the initial model (dark lines) across all metrics.
After the second round of model training, NLSTS1308_R_BA_Y_701 again lacks explanatory power ($R^2$: 0.8731, 0.7489, and 0.6903; RMSE: 0.1332, 0.1853, and 0.2041 across the three sets). All other points achieve $R^2 > 90\%$ and RMSE < 0.05. The neuron augmentation triples the training time, increasing computational demands. The final comparison confirms that the single neural network outperforms the sequential approach across validation metrics under equivalent resource constraints, as shown in Figure 10.
At the same time, the NLSTS1308_R_BA_Y_701 point, with the worst status in both models, is extracted to compare the difference between the two evaluations of this data point, which can be observed in Figure 11.
In Figure 11, the light pillars (single neural network) outperform the dark pillars (sequential neural network) across all metrics, with higher $R^2$ and lower MAE/SSE/RMSE, confirming the single network's superior fitting capability.
To visualise the relative advantages of the two models, the sequential neural network's evaluation data for NLSTS1308_R_BA_Y_701 is subtracted from the single neural network's, and the differences are illustrated as a histogram in Figure 12.
A positive difference in $R^2$ signifies that the single neural network is superior, while the larger negative differences in the remaining three error metrics likewise indicate its superior performance. Therefore, the single neural network is ultimately selected for subsequent application.

5.1.3. Practical Verification of the Single Neural Network Model

To validate the single neural network’s generalisability, a new production dataset is employed. This validation set is constructed from one measurement per day over five consecutive days, with model performance evaluated with respect to the MAE, MSE, and RMSE metrics.
Figure 13 demonstrates strong predictive accuracy across all measurement points, with MAE, MSE, and RMSE values all below 0.1. In particular, MAE values are as low as 0.003573 at NLSTS1307_H_BA_701 and 0.006894 at PLSTS1108_R_BA_701, a level of accuracy further underscored by the exceptional precision at NLSTS1264_U_BA_701 (MSE = 0.000000011). Validation on new production data confirms the model's robustness in handling dimensional variations during routine operations, affirming its practical generalisability.

5.2. Feature Traceability: Pinpointing with Permutation Importance

To ensure transparency and interpretability of the predictive model and trace the sources of dimensional deviations, we employ Permutation Importance (PI). Among various feature importance evaluation methods (e.g., SHAP values, Gini Importance, and Leave-One-Covariate-Out), PI is selected for its model-agnostic nature, interpretability, and its unique ability to disentangle a feature’s independent influence from its interaction effects with other features. This is critical for root cause analysis in complex assemblies.
For a trained model, the importance of a feature is calculated by randomly shuffling the values of that feature in the validation dataset and measuring the resulting increase in the model’s prediction error (MSE). This process is repeated multiple times to obtain a stable estimate. The reported importance score is the average increase in MSE across all iterations. In this study, the permutation importance function from the scikit-learn library is used, with the number of permutations set to K = 50, and we calculate two metrics:
Main Effects: The average increase in MSE when only the feature in question is permuted. This shows the feature’s direct, independent contribution to the model’s prediction accuracy.
Total Effects: The average increase in MSE when the feature in question is permuted together with all other features simultaneously. This captures the feature’s total contribution, including both its main effect and all its interactions with other features.
A Total Effect significantly larger than its corresponding Main Effect indicates that the feature exerts substantial influence through interactions with other process variables. Conversely, a feature with a high Main Effect is a dominant independent driver of quality deviation. The values presented in Table 9 represent the mean contribution percentages derived from this procedure, providing a ranked list of critical deviation sources for targeted quality intervention.
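The main-effect computation described above maps directly onto scikit-learn's `permutation_importance`; the sketch below uses a small synthetic model for illustration (the joint-permutation "Total Effects" variant is not a built-in and would require a custom loop).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

# synthetic data in which feature 0 clearly dominates the target
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=300)

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                     random_state=0).fit(X, y)

# importances_mean[j]: average increase in MSE when feature j is shuffled,
# here with K = 50 repeats as in the study
result = permutation_importance(model, X, y, n_repeats=50,
                                scoring="neg_mean_squared_error",
                                random_state=0)
ranked = np.argsort(result.importances_mean)[::-1]  # most important first
```

On the assembly data, the ranked importances correspond to the main-effect percentages reported in Table 9.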
Table 9 reveals NLQRA1201_L_AA_Y_709, NLVDS1204_L_BA_Y_709, and NLQRA1202_L_AA_Y_709 as dominant variables with high main and total effects on the predictions, marking them as critical for error control and optimisation. Notably, some variables exhibit minimal main effects but substantial total effects, indicating significant interaction-driven indirect influence. This demonstrates that model outcomes depend on both individual variables and their synergistic relationships, necessitating explicit consideration of interactions in the modelling process.

6. Empirical Analysis

This section applies the neural network prediction model to solve the sudden overshoot problem of the body-in-white dimensions.
During the manufacturing of a specific vehicle model, the measurement data at point NLSTD1311_L_BA_Y in the luggage compartment area of 051 exhibited an abnormal fluctuation. The recorded value surged to 1.58 mm, a significant deviation from the conventional mean of 0.89 mm. The traditional single-piece optimisation was deemed unsuitable due to its high time and cost requirements, compounded by a lack of historical data on how adjustments in this area would impact the rear cover matching. To address this, the PCA-BP prediction framework was employed, which integrated data from the side panel and floor assemblies. The model predicted that only one measurement point, NLSTS1249_O_BA_701, would exceed the tolerance (1.053 mm > +1 mm threshold), as detailed in Table 10.
A thorough review of the measurement report reveals that this point is merely a process-monitoring point in the rear cover matching area and has a long-term tendency to read on the high side. Since the fluctuation is small and within the controllable range of actual loading, the region is judged to be risk-free, and no special intervention is made for the out-of-tolerance point. Following repeated routine measurements, no apparent abnormalities are detected in the 701 report, and the actual measurement result at this point is 1.01, which closely aligns with the model prediction.
Based on the above analysis, PI validation is further conducted. To visually highlight the primary sources of assembly variation, Figure 14 presents a Pareto diagram of the contribution share based on the Total Effects.
As shown in Figure 14, the contribution analysis clearly indicates that the anomalous point (NLSTD1311_L_BA_Y) is only a secondary factor in the overall dimensional variation. The primary sources of variation—key measurement points such as NLQRA1201_L_AA_Y_709, NLVDS1204_L_BA_Y_709, and NLQRA1202_L_AA_Y_709, which collectively account for 38% of the total effect—have already been identified during the model-building phase. In practice, these points are located in well-controlled process areas where fixture positioning, welding sequence, and part conformity are rigorously monitored. As a result, their production variability remains very low (standard deviation < 0.05), and they rarely drift out of tolerance under normal conditions. This explains why, despite the model attributing strong influence to them, the actual assembly fluctuation in this case is minimal, and the corresponding 701 measurement results consistently meet specifications with reduced variation.
The key insight from this case is the shift in quality strategy it enables. By distinguishing primary variation drivers from secondary anomalies through the PCA-BP model, we can implement differentiated control: stringent preventive monitoring is maintained for the few critical points, while non-critical deviations (like the one observed) do not trigger immediate, costly interventions. This approach directly reduces the rework rate compared to the period before model implementation.
From a cost perspective, this strategy yields savings in two major ways. First, avoiding unnecessary rework on non-critical anomalies saves direct labour, material, and downtime costs. Second, and more significantly, the preventive focus on key points drastically reduces the occurrence of major defects, thereby minimising high-cost rework, line stoppages, and delays. These benefits are intrinsically linked to the model’s ability to isolate major variation sources from minor noise.
Furthermore, the neural-network-based method itself creates substantial efficiency gains. Without it, resolving such an anomaly would require immediate 701 skeleton measurements (taking three hours per assembly due to lack of historical data) and 100% manual inspection of rear panels using gap rulers and flatness gauges. The predictive model eliminates these resource-intensive, ad hoc measurement requirements, significantly reducing quality control costs while sustaining high operational efficiency.
It is worth emphasising that, to enhance the practical convenience and efficiency of the model proposed, it is essential to establish a dimensional quality control digital platform. This platform should be equipped with core functionalities such as data uploading, automated computation, results visualisation, web-based publishing, and a human–machine interface (HMI). The development and implementation of such a digital platform within the automobile company discussed in this paper has significantly bolstered the effectiveness and efficiency of its overall dimensional matching quality control. For instance, the intuitive visualisation interface coupled with intelligent early-warning mechanisms has markedly improved the timeliness and accuracy of anomaly detection, thereby reducing human misjudgement. This integrated, data-driven approach has achieved a 50% reduction in the analysis-and-resolution cycle for dimensional quality issues, demonstrating a substantial advancement towards intelligent, proactive quality management.

7. Conclusions

This study proposes an integrated PCA-BP neural network technique with permutation importance for quality prediction and traceability in the body-in-white rear panel dimensional chain. PCA first reduces the dimensionality of the high-dimensional data; using rear floorboard data as a case study, 8 principal components yield 11 key measurement points while retaining 91.452% of the information content. The evaluation of critical measurement points across the five major regions (101, 039, 051, 709, 701) provides key reference indices for developing a neural network aligned with the rear panel dimensional chain, reducing computational burden and enhancing model accuracy.
Neural network development involves model configuration and dataset distribution. During model creation and optimisation, a two-hidden-layer structure is implemented, with the optimal number of neurons established based on data features and empirical calculations. The Gaussian activation function and weight-decay mechanisms enhance generalisation. According to the characteristics of the dimensional chain, two modelling approaches (single and sequential neural networks) are comprehensively compared. Results confirm that the single neural network model delivers superior metrics (average $R^2 > 95\%$) and robust generalisation (RMSE < 0.1).
To address the neural network's 'black box' issue that hinders bias-source identification, this work evaluates variable importance via permutation importance to enhance interpretability and traceability. It identifies critical bias sources (e.g., NLQRA1201_L_AA_Y_709 in region 709) and elucidates how variable interactions influence quality. The proposed PCA-BP neural network model enables active intervention in dimensional fluctuation. Empirically, the model alerts to Side Panel to Rear End Mating discrepancies, with a predicted value of 1.053693 closely aligning with the actual value of 1.01. It also reduces emergency measuring and manual inspection costs while lowering rework rates. This data-driven predictive control paradigm offers significant advantages for automotive assembly quality assurance.
Compared to conventional methods, this framework substantially improves prediction accuracy, computational efficiency, and interpretability, providing both theoretical support and a practical pathway for intelligent quality control.
Despite the promising results, this study has several limitations. Future research can be carried out in the following aspects:
(1)
Extension to different vehicle models and production lines
The datasets used for empirical analysis are sourced from a specific vehicle model and a single production line of a particular automotive manufacturer, specifically focusing on the body-in-white rear panel assembly process. Consequently, the proposed methodology and findings are, to a certain extent, dependent on specific process conditions and production environment, which means their generalisability to other vehicle models, different manufacturers, or varied tooling and production line layouts has not been sufficiently validated. Future work will involve applying the framework to other vehicle models and production lines from the same manufacturer and testing it under diverse process conditions to systematically evaluate its robustness and generalisability. Through cross-model comparisons, the method will be further refined for a wider range of automotive manufacturing scenarios.
(2)
Systematic parameter sensitivity analysis for data preprocessing
The data preprocessing strategy employed in this study may filter out some rare anomalous signals, which could affect the model’s predictive performance under extreme conditions. Future work will conduct a systematic parameter sensitivity analysis of data preprocessing steps, such as filtering and normalisation, to quantify the impact on model performance. This will shift preprocessing from an empirically driven selection to an evidence-based optimisation, thereby enhancing the overall robustness of the framework.
(3)
Comparison with different machine learning algorithms
This study is limited to examining the performance of single neural networks and sequential neural networks, without comparing them with other representative data-driven methods. Future work plans to conduct a multidimensional comparison with machine learning algorithms such as Random Forest and Support Vector Machines to comprehensively evaluate their respective strengths and weaknesses. Building on this, we will further explore hybrid models that integrate the advantages of different algorithms to address more complex quality prediction tasks.
(4)
Integration with cloud manufacturing and real-time quality control
Future work will focus on real-time data interoperability and multimodal fusion across cloud manufacturing networks, investigating multi-agent collaboration mechanisms that incorporate suppliers and other stakeholders to enhance the model’s adaptability to process variations and support real-time quality control in distributed production ecosystems.

Author Contributions

Conceptualization, X.D. and Y.Z.; methodology, X.D., Y.Z. and L.C.; software, Y.Z. and L.C.; validation, J.L.; formal analysis, X.D., Y.Z. and L.C.; investigation, L.C.; resources, A.M.; data curation, J.L. and A.M.; writing—original draft preparation, X.D., Y.Z. and L.C.; writing—review and editing, X.D. and Y.Z.; visualisation, Y.Z.; supervision, X.D.; project administration, X.D.; funding acquisition, X.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 72171173.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shibata, H.; Cheldelin, B.; Ishii, K. Assembly Quality Methodology: A New Method for Evaluating Assembly Complexity in Globally Distributed Manufacturing. In Proceedings of the ASME 2003 International Mechanical Engineering Congress and Exposition, Washington, DC, USA, 15–21 November 2003; Volumes 1 and 2, pp. 335–344. [Google Scholar] [CrossRef]
  2. Kao, H.A.; Hsieh, Y.S.; Chen, C.; Lee, J.H. Quality Prediction Modelling for Multistage Manufacturing Based on Classification and Association Rule Mining. MATEC Web. Conf. 2017, 123, 00029. [Google Scholar] [CrossRef]
  3. Alenizi, F.A.; Abbasi, S.; Mohammed, A.H.; Rahmani, A.M. The Artificial Intelligence Technologies in Industry 4.0: A Taxonomy, Approaches, and Future Directions. Comput. Ind. Eng. 2023, 185, 109662. [Google Scholar] [CrossRef]
  4. Lazaridis, P.C.; Kavvadias, I.E.; Demertzis, K.; Iliadis, L.; Papaleonidas, A.; Vasiliadis, L.K.; Elenas, A. Structural Damage Prediction under Seismic Sequence Using Neural Networks. ECCOMAS Proceedia 2021, 8750, 3820–3836. [Google Scholar] [CrossRef]
  5. Xu, Z.K.; Chen, J.; Shen, J.X.; Xiang, M.J. Regional-scale nonlinear structural seismic response prediction by neural network. Eng. Fail. Anal. 2023, 154, 107707. [Google Scholar] [CrossRef]
  6. Xiao, N.Y.; Wen, K.; Qu, Y.J.; Mao, Y.X.; Yang, P. Surface-Roughness Prediction Based on Small-Batch Workpieces for Smart Manufacturing: An Aerospace Robotic Grinding Case Study. Appl. Sci. 2025, 15, 1349. [Google Scholar] [CrossRef]
  7. Jiao, A.; Zhang, G.; Liu, B.; Liu, W. Prediction of Manufacturing Quality of Holes Based on a BP Neural Network. Appl. Sci. 2020, 10, 2108. [Google Scholar] [CrossRef]
  8. Tang, X.; Wang, B.; Wang, S. Quality Assurance Model in Mechanical Assembly. Int. J. Adv. Manuf. Technol. 2010, 51, 1121–1138. [Google Scholar] [CrossRef]
  9. Moliner-Heredia, R.; Peñarrocha-Alós, I.; Abellán-Nebot, J.V. A Methodology for Data-Driven Adjustment of Variation Propagation Models in Multistage Manufacturing Processes. J. Manuf. Syst. 2023, 67, 281–295. [Google Scholar] [CrossRef]
  10. Mu, X.; Wang, Y.; Yuan, B. A New Assembly Precision Prediction Method of Aeroengine High-Pressure Rotor System Considering Manufacturing Error and Deformation of Parts. J. Manuf. Syst. 2021, 61, 112–124. [Google Scholar] [CrossRef]
  11. Jia, K.; Wang, H.; Ren, D.; Liu, B.; Zhao, Q.; Hong, J. A General Mathematic Model Framework for Assembly Process Driven Digital Twin of Assembly Precision. J. Manuf. Syst. 2024, 77, 196–211. [Google Scholar] [CrossRef]
  12. Zhu, X.; Lu, H.; Rätsch, M. An Interactive Clothing Design and Personalized Virtual Display System. Multimed. Tools Appl. 2018, 77, 27163–27179. [Google Scholar] [CrossRef]
  13. Zheng, Y.; Huang, X.; Wang, M.; Hu, P. Source and Accumulation Analysis of Deviation during Multi-Level Assembly of an Aircraft Rear Fuselage Frame. Appl. Sci. 2023, 13, 9914. [Google Scholar] [CrossRef]
  14. Ghali, M.; Elghali, S.; Aifaoui, N. Genetic Algorithm Optimization Based on Manufacturing Prediction for an Efficient Tolerance Allocation Approach. J. Intell. Manuf. 2024, 35, 1649–1670. [Google Scholar] [CrossRef]
  15. Ma, Y.; Wang, J.; Tu, Y. Concurrent Optimization of Parameter and Tolerance Design Based on the Two-Stage Bayesian Sampling Method. Qual. Technol. Quant. Manag. 2023, 21, 88–110. [Google Scholar] [CrossRef]
  16. Li, Z.; Fan, J.; Pan, R. A Study on Machining Error Prediction Model of Precision Vertical Grinding Machine Based on the Tolerance of Key Components. Int. J. Adv. Manuf. Technol. 2024, 131, 4515–4528. [Google Scholar] [CrossRef]
  17. Li, K.; Gao, Y.; Zheng, H.; Tan, J. A Data-Driven Methodology to Improve Tolerance Allocation Using Product Usage Data. J. Mech. Des. 2021, 143, 071101. [Google Scholar] [CrossRef]
  18. Cheng, X.; Huang, F.; Qiu, L. A Systematic Quality-Integrated Diagnostic Method for Complex Product Assembly Using Multi-Task Spatial–Temporal Transfer Learning. Int. J. Adv. Manuf. Technol. 2024, 135, 1355–1375. [Google Scholar] [CrossRef]
  19. Markatos, N.G.; Mousavi, A. Manufacturing Quality Assessment in the Industry 4.0 Era: A Review. Total Qual. Manag. Bus. Excell. 2023, 34, 1655–1681. [Google Scholar] [CrossRef]
  20. Yusup, N.; Zain, A.M.; Hashim, S.Z.M. Overview of PSO for Optimizing Process Parameters of Machining. Procedia Eng. 2012, 29, 914–923. [Google Scholar] [CrossRef]
  21. Quarto, M.; D’Urso, G.; Giardini, C.; Maccarini, G.; Carminati, M. A Comparison between Finite Element Model (FEM) Simulation and an Integrated Artificial Neural Network (ANN)-Particle Swarm Optimization (PSO) Approach to Forecast Performances of Micro Electro Discharge Machining (Micro-EDM) Drilling. Micromachines 2021, 12, 667. [Google Scholar] [CrossRef]
  22. Tien, T.-L. A Research on the Prediction of Machining Accuracy by the Deterministic Grey Dynamic Model DGDM (1,1,1). Appl. Math. Comput. 2005, 161, 923–945. [Google Scholar] [CrossRef]
  23. Rao, K.V.; Kumar, Y.P.; Singh, V.K. Vibration-Based Tool Condition Monitoring in Milling of Ti-6Al-4V Using an Optimization Model of GM(1,N) and SVM. Int. J. Adv. Manuf. Technol. 2021, 115, 1931–1941. [Google Scholar] [CrossRef]
  24. Lins, I.D.; Moura, M.D.C.; Zio, E. A Particle Swarm-Optimized Support Vector Machine for Reliability Prediction. Qual. Reliab. Eng. Int. 2012, 28, 141–158. [Google Scholar] [CrossRef]
  25. Guan, W.; Liu, C.; Dmoor, A. Prediction of Surface Quality in End Milling Based on Modified Convolutional Recurrent Neural Network. Appl. Math. Nonlinear Sci. 2022, 8, 69–80. [Google Scholar] [CrossRef]
  26. Liu, Y.; Lu, H.; Zhang, H.; Wu, X.; Zhong, Y.; Lei, Z. Quality Prediction of Continuous Casting Slabs Based on Weighted Extreme Learning Machine. IEEE Access 2022, 10, 78231–78241. [Google Scholar] [CrossRef]
  27. Msakni, M.K.; Risan, A.; Schütz, P. Using Machine Learning Prediction Models for Quality Control: A Case Study from the Automotive Industry. Comput. Manag. Sci. 2023, 20, 14. [Google Scholar] [CrossRef] [PubMed]
  28. Cao, P.; Shen, X.; Duan, G.; Liu, J.; Guo, K. Quality-Integrated Diagnostic Platform for Aerospace Complex Product Assembly Processes. Comput. Ind. Eng. 2024, 189, 109796. [Google Scholar] [CrossRef]
  29. Xi, Y.; Gao, Z.; Chen, K.; Dai, H.; Liu, Z. Error Propagation Model Using Jacobian-Torsor Model Weighting for Assembly Quality Analysis on Complex Product. Mathematics 2022, 10, 3534. [Google Scholar] [CrossRef]
  30. Zhang, C.; Yu, Y.; Zhou, G.; Hu, J.; Zhang, Y.; Ma, D.; Cheng, W.; Men, S. Hybrid Mechanism and Data-Driven Digital Twin Model for Assembly Quality Traceability and Optimization of Complex Products. Adv. Eng. Inf. 2024, 62, 102707. [Google Scholar] [CrossRef]
  31. Zhi, J.; Cao, Y.; Li, T.; Liu, F.; Luo, J.; Li, Y.; Jiang, X. A Digital Twin-Based Method for Assembly Deviations Analysis. J. Comput. Inf. Sci. Eng. 2024, 24, 091004. [Google Scholar] [CrossRef]
  32. Shi, Y.; Pang, J.; Chen, Y.; Dai, J.; Li, Y. An Intelligent Control Model Based on Digital Twin Technology and Optimized Least-Squares Support Vector Regression for Predicting Electromagnetic Brake Assembly Quality. IEEE Access 2023, 11, 137303–137316. [Google Scholar] [CrossRef]
  33. Ismail, M.; Mostafa, N.A.; El-assal, A. Quality Monitoring in Multistage Manufacturing Systems by Using Machine Learning Techniques. J. Intell. Manuf. 2022, 33, 2471–2486. [Google Scholar] [CrossRef]
  34. Wang, Y.; Luo, J.; Liu, C.; Yuan, X.; Wang, K.; Yang, C. Layer-Wise Residual-Guided Feature Learning with Deep Learning Networks for Industrial Quality Prediction. IEEE Trans. Instrum. Meas. 2022, 71, 2520011. [Google Scholar] [CrossRef]
  35. Todorovic, M.; Stanisic, N.; Zivkovic, M.; Bacanin, N.; Simic, V.; Tirkolaee, E.B. Improving audit opinion prediction accuracy using metaheuristics-tuned XGBoost algorithm with interpretable results through SHAP value analysis. Appl. Soft Comput. 2023, 149, 110955. [Google Scholar] [CrossRef]
  36. Dai, J.; Xie, Q. Prediction of Deviation Range of Gear Autonomous Machining Relying on Failure Sign Algorithm of Firefly Neural Network. Mob. Inf. Syst. 2022, 2022, 5878748. [Google Scholar] [CrossRef]
  37. Senoner, J.; Netland, T.; Feuerriegel, S. Using Explainable Artificial Intelligence to Improve Process Quality: Evidence from Semiconductor Manufacturing. Manag. Sci. 2022, 68, 5704–5723. [Google Scholar] [CrossRef]
  38. Leng, J.; Lin, Z.; Zhou, M.; Liu, Q.; Zheng, P.; Liu, Z.; Chen, X. Multi-Layer Parallel Transformer Model for Detecting Product Quality Issues and Locating Anomalies Based on Multiple Time-series Process Data in Industry 4.0. J. Manuf. Syst. 2023, 70, 501–513. [Google Scholar] [CrossRef]
  39. Wang, X.C.; Cui, J.R.; Xu, M.M. A Chlorophyll-a Concentration Inversion Model Based on Backpropagation Neural Network Optimized by an Improved Metaheuristic Algorithm. Remote Sens. 2024, 16, 1503. [Google Scholar] [CrossRef]
  40. Zhao, D.; Wang, Y.; Liang, D.; Ivanov, M. Performances of regression model and artificial neural network in monitoring welding quality based on power signal. J. Mater. Res. Technol. 2020, 9, 1231–1240. [Google Scholar] [CrossRef]
  41. Qu, D.; Liang, W.; Zhang, Y.; Gu, C.; Zhan, Y. Research on Machining Quality Prediction Method Based on Machining Error Transfer Network and Grey Neural Network. J. Manuf. Mater. Process. 2024, 8, 203. [Google Scholar] [CrossRef]
  42. Dong, Z.; Pan, Y.; Yang, J.; Xie, J.; Fu, J.; Zhao, P. A Multiphase Dual Attention-Based LSTM Neural Network for Industrial Product Quality Prediction. IEEE Trans. Ind. Inform. 2024, 20, 9298–9307. [Google Scholar] [CrossRef]
  43. Wang, S.; She, J.; Kawata, S.; Wang, F.; Zhao, J.; Chen, Q. Improved BP neural network utilizing Bayesian optimization algorithm for iron-making process control. In Proceedings of the 2024 IEEE 33rd International Symposium on Industrial Electronics (ISIE), Ulsan, Republic of Korea, 18–21 June 2024; pp. 1–6. [Google Scholar] [CrossRef]
  44. Zou, L.; Zhang, J.; Han, Y.; Zeng, F.; Li, Q.; Liu, Q. Internal Crack Prediction of Continuous Casting Billet Based on Principal Component Analysis and Deep Neural Network. Metals 2021, 11, 1976. [Google Scholar] [CrossRef]
  45. Wang, W.L.; Shi, H.X.; Cheng, X.H.; Ding, R.D.; Sun, J.W.; Yuan, L.; Wang, X.J.; Hao, S.Z.; Jing, Y.; Han, Q.G. A Machine Learning-Optimized Robot-Assisted Driving System for Efficient Flexible Forming of Composite Curved Components. Eng 2025, 6, 356. [Google Scholar] [CrossRef]
  46. Back, A.D.; Trappenberg, T.P. Selecting inputs for modeling using normalized higher order statistics and independent component analysis. IEEE Trans. Neural Netw. 2001, 12, 612–617. [Google Scholar] [CrossRef] [PubMed]
  47. Ni, X.; Wang, H.; Che, C.; Hong, J.; Sun, Z. Civil aviation safety evaluation based on deep belief network and principal component analysis. Saf. Sci. 2019, 112, 90–95. [Google Scholar] [CrossRef]
  48. Wang, F.Y.; Xu, H.Y.; Ye, H.F.; Yan, L.; Wang, Y.B. Predicting Earthquake Casualties and Emergency Supplies Needs Based on PCA-BO-SVM. Systems 2025, 13, 24. [Google Scholar] [CrossRef]
  49. Liu, H.; Yang, H.; Liu, Z.; Li, Z.; Sun, J.; Zhang, Y. Backpropagation neural network prediction model of arc additive manufacturing weld size based on particle swarm optimization algorithm. Mater. Mech. Eng. 2024, 48, 97–102. [Google Scholar] [CrossRef]
  50. Xu, C.; Chen, J.; Li, Z.; Yang, Y.; Wang, F. Predicting the durability and strength of fiber-reinforced concrete exposed to salt and freezing conditions. Hydro-Sci. Eng. 2024, 1, 94–103. [Google Scholar] [CrossRef]
Figure 1. Architectural diagram of parts related to the body-in-white skeleton assembly.
Figure 2. Directional correlation diagram of the whole vehicle’s rear panel part assembly.
Figure 3. Distribution of NLASL1201_V_AA.
Figure 4. Comparison of NLASL1201_V_AA distribution before and after filtering.
Figure 5. Histogram of the ranked contributions of the PCs of variables in 101.
Figure 6. Distribution of measurement point locations after PCA screening of 101.
Figure 7. Evaluation results of single neural network.
Figure 8. Evaluation results of sequential neural network.
Figure 9. Comparison of evaluation metrics of retrained sequential neural network.
Figure 10. Comparison of evaluation metrics of single neural network and retrained sequential neural network.
Figure 11. Comparison of evaluation metrics of single neural network and retrained sequential neural network at NLSTS1308_R_BA_Y_701.
Figure 12. NLSTS1308_R_BA_Y_701 comparison of differences in evaluation metrics for both neural networks.
Figure 13. Comparison of model evaluation for predicting new sampled actual data.
Figure 14. Measurement points Pareto diagram based on the total effects of PI.
Table 1. Measurement data points for NLASL1201_V_AA in 709.

Measuring Time | X | Y | Z | D
13 June 2023 | 0.5275 | 0 | 0 | 0.5175
14 June 2023 | −0.17 | 0 | 0 | −0.18
15 June 2023 | 0.87 | 0 | 0 | 0.86
16 June 2023 | 0.45 | 0 | 0 | 0.44
Table 2. Measurement point classification data for NLASL1201_V_AA in 709.

Measuring Time | NLASL1201_V_AA_X | NLASL1201_V_AA_Y | NLASL1201_V_AA_Z | NLASL1201_V_AA_D
13 June 2023 | 0.5275 | 0 | 0 | 0.5175
14 June 2023 | −0.17 | 0 | 0 | −0.18
15 June 2023 | 0.87 | 0 | 0 | 0.86
16 June 2023 | 0.45 | 0 | 0 | 0.44
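The wide format of Table 2 is obtained from the long-format records of Table 1 by prefixing each coordinate column with the measurement-point name. A minimal pandas sketch of this reshaping, assuming illustrative column names rather than the plant's actual schema:

```python
import pandas as pd

# Long-format records as in Table 1 (X/Y/Z/D readings per measuring day).
records = pd.DataFrame({
    "measuring_time": ["2023-06-13", "2023-06-14", "2023-06-15", "2023-06-16"],
    "X": [0.5275, -0.17, 0.87, 0.45],
    "Y": [0.0, 0.0, 0.0, 0.0],
    "Z": [0.0, 0.0, 0.0, 0.0],
    "D": [0.5175, -0.18, 0.86, 0.44],
})

# Prefix each coordinate column with the measurement-point name, yielding
# one classified column per point/axis as in Table 2.
point = "NLASL1201_V_AA"
wide = records.rename(columns={c: f"{point}_{c}" for c in ["X", "Y", "Z", "D"]})
```

In a full pipeline this renaming would be applied per measurement point and the results joined on the measuring time, giving one row per body and one column per point/axis.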
Table 3. Ranking of variable contributions in PCs of 101.

Variables | PC1 | PC2 | PC3 | PC4 | PC5 | PC6 | PC7 | PC8
NRBLV1790_O_LD_Y | 39.0246 | 18.9689 | 0.7939 | 4.3126 | 2.6632 | 28.5524 | 0.4407 | 0.9786
NLVDS1214_O_BA_Z | 28.8045 | 26.8974 | 4.5442 | 10.7531 | 16.1622 | 5.1881 | 0.2300 | 0.1734
NRBLV1790_O_LD_Z | 3.5879 | 6.7358 | 26.4441 | 20.6047 | 17.8516 | 7.3244 | 0.9480 | 2.7930
NRBHF1206_R_AA_Y | 3.2584 | 0.4832 | 3.1923 | 0.0106 | 10.1937 | 2.3390 | 0.1493 | 3.8552
NRBHF1207_R_AA_Y | 3.1800 | 0.6788 | 2.6813 | 0.0106 | 10.7721 | 2.3465 | 0.3629 | 1.3657
NRBHF1226_U_AA_Z | 2.5290 | 0.3800 | 2.4375 | 0.2795 | 0.3888 | 0.3413 | 4.5388 | 1.3005
NRLHU1401_O_AA_Z | 2.5072 | 0.0946 | 2.7180 | 0.2316 | 0.0084 | 0.0543 | 2.7387 | 0.1688
NRBHF1205_R_AA_Y | 2.1903 | 0.0103 | 3.8138 | 0.1938 | 7.4541 | 0.0027 | 0.2586 | 5.1236
NRBLV1790_O_LD_X | 2.1551 | 1.3512 | 7.1931 | 14.5646 | 1.5404 | 10.0368 | 10.2063 | 7.0253
NRLHU1213_V_AA_X | 2.1550 | 0.8025 | 0.1589 | 3.3355 | 0.8980 | 8.8660 | 1.6395 | 0.0640
NRBHF1203_L_AA_Y | 2.0758 | 0.9216 | 1.6335 | 0.7275 | 7.0533 | 1.1263 | 0.1335 | 2.1005
NRLHU1211_O_AA_Z | 2.0081 | 2.3124 | 17.2394 | 0.1256 | 0.3902 | 9.3605 | 3.1656 | 1.0632
NRBHV1223_R_AA_Y | 0.9752 | 0.0077 | 0.5596 | 0.2248 | 1.4091 | 2.4246 | 0.9463 | 0.5563
NRBHV1222_R_AA_Y | 0.7593 | 0.1094 | 2.2894 | 0.4159 | 2.8986 | 1.8228 | 0.6139 | 0.5933
NLLHU1216_R_AA_Y | 0.7144 | 1.1650 | 0.0009 | 0.0479 | 1.1157 | 2.3365 | 18.3765 | 0.0425
NRLHU1403_O_AA_Z | 0.6930 | 0.7467 | 0.5125 | 1.6741 | 2.2058 | 0.5230 | 6.4812 | 1.1148
NRLHU1214_L_AA_Y | 0.6080 | 0.2698 | 0.2790 | 0.0045 | 1.2793 | 1.3923 | 0.0365 | 0.0030
NRLHU1215_L_AA_Y | 0.5684 | 0.0002 | 0.0438 | 0.2307 | 1.2970 | 0.4657 | 3.0700 | 0.7170
NLBHD1227_H_AA_X | 0.4823 | 0.0641 | 1.4780 | 0.7717 | 0.0000 | 1.1483 | 0.0002 | 4.8657
NRQBJ1202_R_AA_Y | 0.4152 | 0.0590 | 0.3168 | 0.2640 | 0.3098 | 0.4361 | 3.1988 | 2.7089
NLBHD1226_H_AA_X | 0.3185 | 0.1353 | 1.6486 | 4.1996 | 0.1598 | 1.6146 | 0.6672 | 0.8286
NLBHD1224_H_AA_X | 0.2004 | 0.0003 | 2.0771 | 0.7650 | 1.6094 | 0.4050 | 0.0116 | 2.4715
NRBHF1227_U_AA_Z | 0.1959 | 2.1641 | 0.0599 | 4.8119 | 0.2873 | 0.0696 | 7.6551 | 0.0543
NRBHF1202_U_AA_Z | 0.1669 | 2.1110 | 0.0502 | 3.9769 | 2.3874 | 0.7524 | 4.0607 | 0.0000
NRLHU1210_O_AA_Z | 0.1292 | 25.9863 | 0.5160 | 18.2898 | 1.9554 | 6.2086 | 11.4087 | 0.2545
NLBHF1204_U_AA_Z | 0.1048 | 1.1868 | 0.0137 | 3.0262 | 0.0117 | 0.1986 | 4.9247 | 1.6164
NLBHD1225_H_AA_X | 0.0802 | 0.2422 | 3.4444 | 0.3895 | 0.1854 | 0.9616 | 0.7682 | 23.5797
NRLHU1402_O_AA_Z | 0.0435 | 0.5560 | 0.1336 | 1.9917 | 3.9422 | 0.0154 | 3.0101 | 0.0413
NRBHV1221_R_AA_Y | 0.0377 | 2.3400 | 1.0575 | 0.4072 | 0.1420 | 0.4097 | 0.0062 | 5.6601
NRQBJ1204_R_AA_Y | 0.0144 | 0.2691 | 0.0098 | 0.0396 | 0.4215 | 0.0071 | 2.0115 | 0.0156
NLLHD1201_H_AA_X | 0.0119 | 0.1249 | 5.4124 | 0.5210 | 1.2537 | 1.5714 | 4.5537 | 28.6751
NRLHU1212_O_AA_Z | 0.0045 | 0.9245 | 7.1194 | 0.0052 | 0.2742 | 0.0670 | 0.2458 | 0.1770
NRBHF1201_U_AA_Z | 0.0006 | 1.9008 | 0.1275 | 2.7931 | 1.4787 | 1.6313 | 3.1410 | 0.0122
Table 4. Ranking of the sum of the weighted contribution of each variable of 101.

Variables | Contribution Sum | Contribution Percentage | Cumulative Contribution Percentage
NRBLV1790_O_LD_Y | 22.6044 | 24.72% | 24.72%
NLVDS1214_O_BA_Z | 20.7955 | 22.74% | 47.46%
NRLHU1210_O_AA_Z | 8.6091 | 9.41% | 56.87%
NRBLV1790_O_LD_Z | 6.8373 | 7.48% | 64.35%
NRBLV1790_O_LD_X | 3.2422 | 3.55% | 67.89%
NRLHU1211_O_AA_Z | 2.8280 | 3.09% | 70.99%
NRBHF1206_R_AA_Y | 2.2419 | 2.45% | 73.44%
NRBHF1207_R_AA_Y | 2.2030 | 2.41% | 75.85%
NRLHU1213_V_AA_X | 1.6596 | 1.81% | 77.66%
NRBHF1203_L_AA_Y | 1.6116 | 1.76% | 79.42%
NRBHF1205_R_AA_Y | 1.5628 | 1.71% | 81.13%
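Weighted contribution sums like those in Table 4 can be computed by weighting each variable's squared loading on a principal component by that component's explained-variance ratio and summing over components. The sketch below uses scikit-learn on synthetic data; the weighting scheme is our reading of the paper's procedure, not a confirmed formula:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # stand-in for the measurement matrix

pca = PCA(n_components=8).fit(X)

# Per-variable contribution to each PC, as a percentage of that PC
# (squared loadings of a component sum to 1).
contrib = 100 * pca.components_**2     # shape: (n_PCs, n_vars)

# Weight each PC's contributions by its explained-variance ratio and sum
# over PCs, mirroring Table 4's "Contribution Sum" column; then express
# each variable's share and the cumulative share of the ranked variables.
weighted_sum = pca.explained_variance_ratio_ @ contrib
share = 100 * weighted_sum / weighted_sum.sum()
cumulative = np.cumsum(np.sort(share)[::-1])
```

Variables would then be ranked by `share`, and inputs retained up to a cumulative-contribution cutoff (Table 4 stops at roughly 81%).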
Table 5. Structure of the BP neural network training data matrix (using 101 as an example).

Sample ID: 1, 2, …, 434
Input Features (X): NRBLV1790_O_LD_Y, NLVDS1214_O_BA_Z, and 9 other variables
Output Features (Y): NLSTS1235_O_BA_701 and 31 other variables
Table 6. Hyperparameter description of BP neural network.

Hyperparameter | Value
Number of hidden layers | 2
Number of hidden layer neurons | 65/35
Activation function | Gaussian function
Learning rate | 0.1
Number of iterations | 5
Loss function | Weight decay
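A network with the layer sizes from Table 6 can be sketched with scikit-learn's MLPRegressor. Note the substitutions this sketch makes: scikit-learn offers no Gaussian activation, so tanh stands in; "weight decay" maps to the `alpha` L2 penalty; and `max_iter` is raised well above the paper's five iterations so this toy fit converges. The data are synthetic placeholders, not the plant measurements:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(434, 11))                    # 434 samples, 11 PCA-screened inputs
y = X @ rng.normal(size=11) + 0.05 * rng.normal(size=434)

# Two hidden layers of 65 and 35 neurons, as in Table 6. tanh substitutes
# for the Gaussian activation; alpha plays the role of weight decay.
model = MLPRegressor(hidden_layer_sizes=(65, 35), activation="tanh",
                     learning_rate_init=0.1, alpha=1e-4,
                     max_iter=500, random_state=0)
model.fit(X, y)
```

The fitted weight matrices follow the 11 → 65 → 35 → 1 topology, one output per predicted quality point when trained point by point.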
Table 7. Regression model evaluation metrics.

Metric | Full Name | Formula | Interpretation
R² | Coefficient of Determination | $R^2 = 1 - \frac{\sum_{i=1}^{m}(y_i - \tilde{y}_i)^2}{\sum_{i=1}^{m}(y_i - \bar{y})^2}$ | Closer to 1 → better fit
RMSE | Root Mean Squared Error | $\mathrm{RMSE} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}(y_i - \tilde{y}_i)^2}$ | Smaller → higher accuracy
MAE | Mean Absolute Error | $\mathrm{MAE} = \frac{1}{m}\sum_{i=1}^{m}\lvert y_i - \tilde{y}_i\rvert$ | Smaller → higher accuracy
SSE | Sum of Squared Errors | $\mathrm{SSE} = \sum_{i=1}^{m}(y_i - \tilde{y}_i)^2$ | Smaller → higher accuracy
MSE | Mean Squared Error | $\mathrm{MSE} = \frac{1}{m}\sum_{i=1}^{m}(y_i - \tilde{y}_i)^2$ | Smaller → higher accuracy
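The metrics in Table 7 are straightforward to compute directly; the helper below implements each formula with NumPy:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute the evaluation metrics of Table 7 for one output variable."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    sse = float(np.sum(err**2))                      # sum of squared errors
    mse = sse / len(y_true)                          # mean squared error
    sst = float(np.sum((y_true - y_true.mean())**2)) # total sum of squares
    return {
        "R2": 1.0 - sse / sst,
        "RMSE": float(np.sqrt(mse)),
        "MAE": float(np.mean(np.abs(err))),
        "SSE": sse,
        "MSE": mse,
    }
```

Perfect predictions give R² = 1 with all error metrics at zero; the paper's single-network test scores (R² ≈ 0.96, RMSE ≈ 0.03 in Table 8) would be produced by exactly this computation on the held-out set.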
Table 8. Single neural network modelling of NLSTS1235_O_BA_701.

Metric | Training | Validation | Test
R² | 0.991281412 | 0.9604848 | 0.961566708
RMSE | 0.014148555 | 0.028730694 | 0.029197534
MAE | 0.010521251 | 0.021109989 | 0.023006065
SSE | 0.060655031 | 0.071814389 | 0.036657328
Table 9. Ranking of the contribution of key influencing factors.

Measurement Point | Main Effects | Total Effects
NLQRA1201_L_AA_Y_709 | 0.021 | 0.171
NLVDS1204_L_BA_Y_709 | 0.019 | 0.116
NLQRA1202_L_AA_Y_709 | 0.019 | 0.114
NLSHU1201_R_AA_Y_709 | 0.019 | 0.106
NLVDS1205_L_AA_Y_709 | 0.016 | 0.066
NLSHR1327_R_AA_Y_039 | 0.014 | 0.042
NLSLA1401_H_BK_X_051 | 0.012 | 0.039
NRBLV1790_O_LD_Y_101 | 0.011 | 0.038
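The effects ranked in Table 9 derive from permutation importance: shuffle one input column, remeasure the model score, and attribute the score drop to that input. A minimal illustration with scikit-learn's `permutation_importance` on a synthetic model (the data and estimator here are placeholders, not the paper's BP network):

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
# Only the first two columns actually drive the response.
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.normal(size=300)

model = LinearRegression().fit(X, y)

# Permuting an informative column degrades R^2; the mean degradation over
# repeats is that column's importance, which yields the deviation-source
# ranking.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
```

The strongest driver (column 0 here) tops the ranking, exactly as NLQRA1201_L_AA_Y_709 tops Table 9, which is what makes the otherwise opaque network traceable to specific measurement points.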
Table 10. Partial 701 prediction results for process anomalies.

Measurement Point | Value
NLSTS1308_R_BA_Y_701 | −0.514912
NLSTS1235_O_BA_701 | −0.030523
NLSTS1236_O_BA_701 | −0.336233
NLSTS1240_O_BA_701 | −0.009309
NLSTS1247_O_BA_701 | 0.378807
NLSTS1249_O_BA_701 | 1.053693
NLSTS1262_U_BA_701 | 0.617516
NLSTS1263_H_BA_701 | 0.002242
NLSTS1264_U_BA_701 | 0.545484
NLSTS1265_L_BA_701 | −0.181566
NLSTS1266_U_BA_701 | 0.481592
NLSTS1267_L_BA_701 | 0.610731
Share and Cite

MDPI and ACS Style

Du, X.; Zhou, Y.; Chen, L.; Li, J.; Ma, A. Data-Driven Framework for Dimensional Quality Control in Automotive Assembly: Integration of PCA-BP Neural Network with Traceable Deviation Source Identification. Appl. Sci. 2026, 16, 37. https://doi.org/10.3390/app16010037
