
Anomaly Detection on Laminated Composite Plate Using Self-Attention Autoencoder and Gaussian Mixture Model

Olivier Munyaneza 1 and Jung Woo Sohn 2,*
1 Department of Aeronautics, Mechanical and Electronic Convergence Engineering, Graduate School, Kumoh National Institute of Technology, Daehak-ro 61, Gumi 39177, Republic of Korea
2 School of Mechanical Engineering, Kumoh National Institute of Technology, Daehak-ro 61, Gumi 39177, Republic of Korea
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(15), 2445; https://doi.org/10.3390/math13152445
Submission received: 2 July 2025 / Revised: 25 July 2025 / Accepted: 27 July 2025 / Published: 29 July 2025

Abstract

Composite laminates are widely used in the aerospace, automotive, construction, and luxury industries owing to their superior mechanical properties and design flexibility. However, detecting manufacturing defects and in-service damage remains a critical challenge for structural safety. While traditional unsupervised machine learning methods have been used in structural health monitoring (SHM), their high false positive rates limit their reliability in real-world applications. This limitation stems largely from their limited ability to capture small temporal variations in Lamb wave signals and from their dependence on shallow architectures that struggle with complex signal distributions, causing damaged signals to be misclassified as healthy. To address this, we propose an unsupervised anomaly detection framework that integrates a self-attention autoencoder with a Gaussian mixture model (SAE-GMM). The model is trained solely on healthy Lamb wave signals, including high-quality synthetic data generated via a generative adversarial network (GAN). Damage is detected through reconstruction errors and probabilistic clustering in the latent space. The self-attention mechanism enhances feature representation by capturing subtle temporal dependencies, while the GMM enables robust separation between healthy and damaged signals. Experimental results demonstrate that the proposed model (SAE-GMM) achieves high detection accuracy, a low false positive rate, and strong generalization under varying noise conditions, outperforming traditional and deep learning baselines.

1. Introduction

The demand for composite laminates has steadily increased in the automotive, aerospace, naval, and energy fields due to their unique properties, such as high specific strength, high toughness, high flexibility, and low density [1,2]. Composite materials are prone to intricate flaws during manufacturing and assembly processes. Moreover, they are subjected to harsh environmental conditions and various cyclic loadings, which adversely affect their performance and can lead to structural integrity failure [3,4]. To detect flaws in composite materials, nondestructive testing (NDT) using ultrasonic guided waves, also known as Lamb waves, has been widely used over the past decades because of their ability to propagate over long distances with minimal attenuation, their large inspection range, and their high sensitivity to small damage [5].
The basic principle of NDT-based structural health monitoring (SHM) is to assess the condition of a structure by collecting signal responses from multiple sensor networks, analyzing them, and determining whether the structure is healthy or damaged, using either a physics-based (model-driven) or a data-driven approach. Physics-based models typically rely on mathematical formulations and physical principles to define structural characteristics and predict damage. However, owing to their computational intensity, reliance on accurate material properties and boundary conditions, and limited adaptability to real-world uncertainties, the quantitative analysis of the physical behavior of guided waves remains challenging [6]. On the contrary, data-driven approaches utilize machine learning and artificial intelligence to learn complex patterns from collected time-series data without requiring expert knowledge or exact structural information [2,7]. Supervised data-driven approaches have been extensively used for Lamb wave-based damage detection in SHM, achieving good performance on both image and time-series datasets. However, supervised learning relies on labeled data from both healthy and damaged states of a structure, which can be time-consuming, labor-intensive, and sometimes impractical to obtain in real-life scenarios. To tackle this problem, unsupervised approaches have been successful in detecting abnormalities at an early stage, before catastrophic failure occurs. Unsupervised anomaly detection for SHM does not require damaged data to train the model; the model requires only normal data and uses a small portion of the damaged data solely for testing [8]. Khan et al. [9] proposed processing guided waves with a sparse autoencoder (SAE) to extract unsupervised autonomous damage features. The unsupervised features, when applied in supervised learning, demonstrated that the sparse autoencoder could distinguish between pristine and damaged conditions with 99% accuracy. Abbassi et al. [10] evaluated four unsupervised dimensionality reduction methods (PCA, KPCA, t-SNE, and AE) for detecting and localizing damage under varying temperature conditions. Using the dimensionality reduction strategy, the AE outperformed the other models in detecting and localizing damage, whereas the others succeeded in detecting damage only at low temperatures. Additionally, using the strategy involving the Q and T2 indices, all the methods successfully detected damage across all temperature ranges. The effectiveness of unsupervised autoencoders was also highlighted in [11], which eliminated the need for labeled data by proposing a CNN autoencoder for damage localization. The method was validated on numerical and experimental datasets and demonstrated excellent results compared to a tomography method.
Guided-wave-based SHM data present significant complexity and heterogeneity, which primarily affect models’ ability to extract meaningful features. To address this challenge, various deep learning models have been proposed for feature differentiation; however, these approaches require a substantial number of learning parameters and add considerable computational complexity. Sawant et al. [4] utilized raw Lamb wave time-series data to identify and localize damage through unsupervised transfer learning using a convolutional autoencoder (TL-CAE). The proposed unsupervised model outperformed its supervised counterpart trained on the same dataset. In SHM, gathering a sufficient dataset to train deep learning models can be challenging, costly, and time-consuming, particularly for damage-related data. To minimize the negative impact of an imbalanced dataset on anomaly detection, Zhao et al. [12] proposed an SGAD-GAN model to perform data augmentation and anomaly detection simultaneously. The issue of data scarcity in SHM was also addressed by Luleci et al. [13], who employed a generative adversarial network with a gradient penalty to generate synthetic labeled acceleration data. The generated data were augmented at varying ratios and used to train a 1D-CNN. Training GANs is not trivial, however, and they often produce misleading samples or fail to capture the important patterns in the signals. To identify and filter such low-quality samples, different clustering-based techniques have been employed, including DBSCAN [14], K-means [15], and discriminative fuzzy K-means [16].
In this study, we propose a self-attention autoencoder combined with a Gaussian mixture model (SAE-GMM) for unsupervised anomaly detection in laminated composite plates. The model was trained exclusively on the original healthy data and synthetic data generated using a GAN. A deep autoencoder was employed to extract low-dimensional latent features and compute the reconstruction error for each input signal. Additionally, a self-attention mechanism was integrated to enhance the performance of the deep autoencoder by dynamically prioritizing the most relevant parts of the input signal. Finally, the extracted latent features and reconstruction error outputs from the AE were fed into the estimation network for density estimation via a GMM. This combination provides a complete anomaly detection model, capable of detecting damage with higher accuracy and lower inference time than conventional models. The key contributions of this study are summarized as follows:
  • The developed SAE-GMM architecture was trained solely on healthy Lamb wave data, enabling reliable damage detection without requiring labeled damage samples.
  • The proposed unsupervised framework was specifically designed to minimize false positive rates, thereby improving the robustness and reliability of damage detection.
  • A GAN was employed to generate high-fidelity synthetic signals, filtered via K-means clustering, to enrich the healthy training data and overcome the data shortage issue in SHM.
  • The self-attention mechanism was integrated into the encoder network, enhancing sensitivity to subtle signal distortions and improving representation quality and reconstruction fidelity.
  • Gaussian mixture modeling was leveraged to combine latent features and reconstruction errors, effectively separating healthy and damaged signals.
  • We achieved state-of-the-art anomaly detection performance on an experimental composite plate, with validated generalization under varied Gaussian noise levels.

2. The Proposed Framework

2.1. Overview

The proposed framework comprises three main components: (1) an experimental setup and data acquisition, (2) a GAN for generating synthetic data, and (3) a self-attention autoencoder (SAE) with a Gaussian mixture model (GMM) for anomaly detection. Figure 1 shows the architecture of the SAE-GMM framework. First, a composite plate was fabricated, and Lamb wave-based experiments were conducted to collect both healthy and damaged signals. The GAN model was employed to generate synthetic healthy data that closely resembled the original healthy signals. To ensure data quality, the K-means algorithm was applied to retain only the most realistic synthetic samples before incorporating them into the dataset for augmentation. The autoencoder was trained exclusively on healthy data, using a self-attention module integrated into the encoder to enhance feature extraction. The reconstruction error and latent vector were then combined to form a new input vector for density estimation using the GMM, enabling effective anomaly detection.

2.2. GAN Model for Data Augmentation

The GAN consists of two key components: a generator network (G) and a discriminator network (D). As shown in Step 2 of Figure 1, the generator network converts Gaussian noise into synthetic data that closely resembles the original data distribution, aiming to deceive the discriminator. The discriminator network is responsible for distinguishing between fake and real data. Through an adversarial process, the two networks engage in a competitive game: the generator continuously improves to produce more realistic data, whereas the discriminator enhances its ability to detect fake data. During training, the objective function follows a minimax strategy [17,18]:
$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim P(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim P(z)}\left[\log\left(1 - D(G(z))\right)\right]$$
where $P(x)$ represents the distribution of the original data and $P(z)$ denotes the noise distribution from which the synthetic data are generated. The first term, $\mathbb{E}_{x \sim P(x)}[\log D(x)]$, represents the discriminator’s goal of maximizing its ability to classify real samples. Simultaneously, the generator minimizes the discriminator’s success in distinguishing fake data through the second term, $\mathbb{E}_{z \sim P(z)}[\log(1 - D(G(z)))]$. This process continues until the generator produces samples that closely resemble the real data and the discriminator’s accuracy can no longer improve.
The generator maps a 150-dimensional latent vector to a 1500-point time-domain signal. Both the generator and discriminator consist of three fully connected layers with 64, 128, and 256 neurons, each followed by LeakyReLU activation functions. To improve training stability, batch normalization and dropout were applied. The models were optimized using the Adam optimizer with a learning rate of 2 × 10−4 and a momentum coefficient β1 = 0.5. Binary cross-entropy was used as the loss function. The training process was carried out over 5000 epochs with a batch size of 32 to ensure stable convergence.
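A minimal Keras sketch of this setup is given below. The paper specifies the layer widths (64, 128, 256), LeakyReLU activations, batch normalization, dropout, the Adam settings, and binary cross-entropy; the exact placement of batch normalization and dropout, the dropout rate, and the generator's output activation are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

LATENT_DIM, SIGNAL_LEN = 150, 1500  # 150-d latent vector -> 1500-point signal

def build_generator():
    return models.Sequential([
        layers.Input(shape=(LATENT_DIM,)),
        layers.Dense(64), layers.LeakyReLU(0.2), layers.BatchNormalization(),
        layers.Dense(128), layers.LeakyReLU(0.2), layers.BatchNormalization(),
        layers.Dense(256), layers.LeakyReLU(0.2),
        layers.Dense(SIGNAL_LEN, activation="sigmoid"),  # output activation assumed
    ])

def build_discriminator():
    return models.Sequential([
        layers.Input(shape=(SIGNAL_LEN,)),
        layers.Dense(256), layers.LeakyReLU(0.2), layers.Dropout(0.3),
        layers.Dense(128), layers.LeakyReLU(0.2), layers.Dropout(0.3),
        layers.Dense(64), layers.LeakyReLU(0.2),
        layers.Dense(1, activation="sigmoid"),  # probability the input is real
    ])

G, D = build_generator(), build_discriminator()
bce = tf.keras.losses.BinaryCrossentropy()
g_opt = optimizers.Adam(2e-4, beta_1=0.5)
d_opt = optimizers.Adam(2e-4, beta_1=0.5)

@tf.function
def train_step(real_batch):
    z = tf.random.normal((tf.shape(real_batch)[0], LATENT_DIM))
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake = G(z, training=True)
        d_real, d_fake = D(real_batch, training=True), D(fake, training=True)
        # Discriminator maximizes log D(x) + log(1 - D(G(z)))
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
        # Generator uses the non-saturating form of minimizing log(1 - D(G(z)))
        g_loss = bce(tf.ones_like(d_fake), d_fake)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, D.trainable_variables),
                              D.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, G.trainable_variables),
                              G.trainable_variables))
```

Training would iterate this step over mini-batches of 32 healthy signals for 5000 epochs, as stated above.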
After training the model, 5000 synthetic data samples were generated. However, some of the synthetic signals were of poor quality. To ensure the quality of the samples generated, K-means clustering was used to distinguish between high- and low-quality signals. Additionally, this algorithm is simple to implement and efficient for large datasets. First, the mean square error (MSE) was calculated and stored for each synthetic signal in comparison with all the original data as follows:
$$\mathrm{MSE}\left(d_{\mathrm{syn}}, d_{\mathrm{real}}\right) = \frac{1}{n} \sum_{i=1}^{n} \left(d_{\mathrm{syn},i} - d_{\mathrm{real},i}\right)^2$$
where $d_{\mathrm{syn}}$ and $d_{\mathrm{real}}$ denote the synthetic and real data, respectively.
After storing all the MSE values as $n_1, n_2, n_3, \ldots, n_n$, K-means clustering was initialized with two clusters, and the two centroids $C_1, C_2$ were assigned randomly. Each value in the MSE set was assigned to the closest centroid as follows:
$$K_j = \left\{\, n_i : \arg\min_{j'} \left\| n_i - C_{j'} \right\| = j \,\right\}$$
where $K_j$ is the $j$th cluster set, and the updated centroid $C_j$ is calculated as $C_j = \frac{1}{|K_j|} \sum_{n_i \in K_j} n_i$. The centroids are continuously updated until convergence is achieved. To select the highest-quality synthetic signals, the threshold was set as $\mathrm{Threshold} = \min(C_1, C_2)$. In total, 3270 high-quality signals fell below this threshold and were retained.
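This filtering step can be sketched as follows, assuming `synthetic` and `real` are NumPy arrays of shape (n_samples, 1500); averaging each synthetic signal's MSE over the entire real set is one reading of "in comparison with all the original data".

```python
import numpy as np
from sklearn.cluster import KMeans

def filter_synthetic(synthetic: np.ndarray, real: np.ndarray) -> np.ndarray:
    # One scalar per synthetic signal: its MSE averaged over all real signals.
    mse = np.array([np.mean((s - real) ** 2) for s in synthetic])
    # Two clusters separate high- and low-quality samples.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(mse.reshape(-1, 1))
    threshold = km.cluster_centers_.min()  # Threshold = min(C1, C2)
    return synthetic[mse < threshold]      # keep only high-quality signals
```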

2.3. Self-Attention Autoencoder with GMM

The self-attention autoencoder with Gaussian mixture model (SAE-GMM) consists of two main components, as shown in Figure 1. The first is a self-attention autoencoder, which functions as a compression network to reduce input dimensionality into a lower-dimensional representation while simultaneously computing reconstruction errors. In the second stage, both the low-dimensional latent features and their corresponding reconstruction errors are concatenated and fed into an estimation network, which performs density estimation and maximizes the likelihood of the input samples using a Gaussian mixture model.

2.3.1. The Encoder

One key component of the compression network is the encoder, which transforms a sequence of input data $X \in \mathbb{R}^n$ from a high-dimensional space into a lower-dimensional latent representation $Z_c \in \mathbb{R}^t$, $t < n$. This compression is accomplished by extracting and encoding the essential features of the input data. In the proposed model, the encoder consists of two stacked layers with 128 and 64 neurons, each followed by a Rectified Linear Unit (ReLU) activation function. The encoding process that maps the input data $X$ to the low-dimensional latent space $Z_c$ is formulated as follows:
$$Z_c = f_\theta(X)$$
where $f_\theta$ represents the fully connected layers of the encoder, expanded as
$$Z_c = \rho\left(W_3\, \delta\left(W_2\, \delta\left(W_1 X + b_1\right) + b_2\right) + b_3\right)$$
where $X$ is the input data, $W_i$ and $b_i$ are the weights and biases, respectively, and $\delta$ and $\rho$ are the activation functions.

2.3.2. The Attention Mechanism

The attention block in the autoencoder enhances the encoder’s ability to focus on the most salient features of the input data while suppressing less important ones. The mechanism computes attention weights by evaluating the relative importance of each input element, measuring their similarities, and normalizing these values to form a weighted representation [19]. The self-attention mechanism computes three matrices, namely query ($Q$), key ($K$), and value ($V$), from the hidden states as follows:
$$Q_i = X_c W_Q, \qquad K_i = X_c W_K, \qquad V_i = X_c W_V$$
where $X_c$ denotes the $i$th feature sliding-window matrix of the low-dimensional latent space, and $W_Q$, $W_K$, and $W_V$ are the weight matrices corresponding to the query, key, and value, respectively. The attention function is calculated as follows [20]:
$$A_i = \mathrm{Softmax}\left(\frac{Q_i K_i^{T}}{\sqrt{d}}\right) V_i$$
where $A_i$ is the attention coefficient of matrix $X_c$, and $d$ is the scaling factor. The output of the attention layer is obtained by applying the attention weights to the input sequence:
$$Z_c' = Z_c \otimes A_i$$
where $\otimes$ denotes elementwise multiplication.
During implementation, the attention module was constructed using the Dot-product attention layer from TensorFlow-Keras. The encoded feature tensor is passed simultaneously as both the query and value inputs to this layer, enabling self-attention across all steps. The resulting attention scores highlight interdependence within the sequence and are used to compute a weighted representation of the input. This attention-enhanced output is combined with the original encoded features via elementwise multiplication before further transformation through dense layers.

2.3.3. The Decoder

The decoder typically consists of hidden layers with an increasing number of nodes and gradually decompresses the low-dimensional features from the encoder. Its primary goal is to reconstruct $X$ from the latent features $Z_c$ with minimal loss. The last layer of the decoder network can be expressed mathematically as follows:
$$\hat{X} = \rho\left(W_c\, \delta\left(W_b\, \delta\left(W_a Z_c + b_a\right) + b_b\right) + b_c\right)$$
where $\hat{X}$ is the reconstructed output of the decoder, and $W$ and $b$ are the weights and biases of the decoder network, respectively. The reconstruction loss of the compression network is computed as follows:
$$L_{\mathrm{rec}} = \frac{1}{N} \sum_{i=1}^{N} \left\| X_i - \hat{X}_i \right\|^2$$
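Assembling Sections 2.3.1–2.3.3, and anticipating the estimation network of Section 2.4, a Keras sketch consistent with the layer widths in Table 1 is given below. The Reshape wrapper around the dot-product Attention layer is our assumption for applying it to a (None, 64) tensor, and only the reconstruction loss is compiled here; the joint density-estimation objective is omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

inp = layers.Input(shape=(1500,))                      # raw Lamb wave signal X

# Encoder: 128 -> 64, ReLU (Section 2.3.1)
h = layers.Dense(128, activation="relu")(inp)
enc = layers.Dense(64, activation="relu")(h)

# Dot-product self-attention (Section 2.3.2): the encoded features act as
# both query and value; the Reshape to (64, 1) is an assumption.
q = layers.Reshape((64, 1))(enc)
att = layers.Attention()([q, q])                       # softmax(Q K^T) V
att = layers.Reshape((64,))(att)
gated = layers.Multiply()([enc, att])                  # elementwise re-weighting

z = layers.Dense(32, activation="relu")(gated)         # latent representation Z_c

# Decoder: 64 -> 128 -> 1500 (Section 2.3.3)
d = layers.Dense(64, activation="relu")(z)
d = layers.Dense(128, activation="relu")(d)
x_hat = layers.Dense(1500, activation="sigmoid")(d)    # reconstruction X_hat

# Estimation branch (Section 2.4): concatenate latent and |X - X_hat|, then MLP.
err = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([inp, x_hat])
feat = layers.Concatenate()([z, err])                  # (None, 1532)
g = layers.Dense(16, activation="relu")(feat)
g = layers.Dense(8, activation="relu")(g)
gamma = layers.Dense(4, activation="softmax")(g)       # GMM memberships gamma_k

sae = models.Model(inp, x_hat)                         # compression network
sae.compile(optimizer="adam", loss="mse")              # L_rec only
# sae.fit(healthy_X, healthy_X, epochs=50, batch_size=32)  # healthy data only
```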

2.4. Estimation Network and Anomaly Score

The estimation network performs density estimation on the input data ($Z$) obtained by concatenating the reconstruction error ($Z_r$) with the low-dimensional latent vector ($Z_c$). This enables the model to estimate the likelihood of a sample belonging to either the healthy or damaged class. The process is performed within the framework of the Gaussian mixture model (GMM). In this study, a multilayer perceptron (MLP) with three hidden layers of 16, 8, and 4 neurons was employed to estimate the parameters of the GMM. This network can efficiently learn the complex relationships between reconstruction errors, latent features, and output probabilities [21]. The mathematical formulation of this process is as follows [22]:
$$Z = \left[\, Z_c,\ \left| X - \hat{X} \right| \,\right]$$
The final layer of the MLP is obtained as follows:
$$h^{(3)} = \rho\left(W^{(3)}\, \rho\left(W^{(2)}\, \rho\left(W^{(1)} Z + b^{(1)}\right) + b^{(2)}\right) + b^{(3)}\right)$$
where $h^{(3)}$ represents the output feature vector of the MLP, $\rho$ is the activation function (ReLU), and $W^{(i)}$ and $b^{(i)}$ are the weight matrix and bias of the $i$th layer, respectively. To estimate the membership probability of each GMM component, the last layer of the MLP passes through a SoftMax activation:
$$\gamma_k = \frac{\exp\left(h_k^{(3)}\right)}{\sum_{j=1}^{K} \exp\left(h_j^{(3)}\right)}$$
Here, $\gamma_k$ represents the probability that a sample belongs to the $k$th Gaussian component, $K$ is the total number of Gaussian components, and $h_k^{(3)}$ is the output of the final MLP layer corresponding to the $k$th component. Furthermore, to maximize the likelihood of the input samples, the mean ($\mu_k$) and covariance ($\Sigma_k$) parameters of the GMM are estimated as follows [23]:
$$\mu_k = \frac{\sum_{i=1}^{N} \gamma_{ik} Z_i}{\sum_{i=1}^{N} \gamma_{ik}}, \qquad \Sigma_k = \frac{\sum_{i=1}^{N} \gamma_{ik} \left(Z_i - \mu_k\right)\left(Z_i - \mu_k\right)^{T}}{\sum_{i=1}^{N} \gamma_{ik}}$$
where $N$ denotes the total number of samples. With the estimated parameters, the likelihood of each sample is computed using the energy function:
$$E(Z) = -\log \sum_{k=1}^{K} \pi_k \frac{\exp\left(-\frac{1}{2}\left(Z - \mu_k\right)^{T} \Sigma_k^{-1} \left(Z - \mu_k\right)\right)}{\sqrt{\left| 2\pi \Sigma_k \right|}}$$
where $\pi_k$ is the mixture probability of the $k$th component and $|\cdot|$ denotes the matrix determinant. Samples with higher energy values are more likely to correspond to damaged cases. In this study, the threshold was set at the 95th percentile of the energies of the healthy samples; any sample exceeding this threshold was classified as damaged.
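The parameter updates and the energy function can be sketched in NumPy as follows, where `Z` is the (N, d) matrix of concatenated features and `gamma` the (N, K) SoftMax output of the estimation network; the small diagonal jitter added to each covariance is an assumption for numerical stability.

```python
import numpy as np

def gmm_params(Z, gamma):
    # Soft counts, mixture weights pi_k, means mu_k, covariances Sigma_k.
    Nk = gamma.sum(axis=0)                                    # (K,)
    pi = Nk / Z.shape[0]
    mu = (gamma.T @ Z) / Nk[:, None]                          # (K, d)
    cov = []
    for k in range(gamma.shape[1]):
        diff = Z - mu[k]                                      # (N, d)
        cov_k = (gamma[:, k, None] * diff).T @ diff / Nk[k]
        cov.append(cov_k + 1e-6 * np.eye(Z.shape[1]))         # jitter (assumed)
    return pi, mu, np.stack(cov)

def energy(Z, pi, mu, cov):
    # E(Z) = -log sum_k pi_k N(Z | mu_k, Sigma_k), evaluated per sample.
    log_terms = []
    for k in range(len(pi)):
        diff = Z - mu[k]
        inv = np.linalg.inv(cov[k])
        maha = np.einsum("ni,ij,nj->n", diff, inv, diff)      # Mahalanobis term
        _, logdet = np.linalg.slogdet(2 * np.pi * cov[k])     # log |2 pi Sigma_k|
        log_terms.append(np.log(pi[k] + 1e-12) - 0.5 * (maha + logdet))
    return -np.logaddexp.reduce(np.stack(log_terms), axis=0)  # per-sample energy

# Classification: threshold at the 95th percentile of healthy-sample energies.
# threshold = np.percentile(energy(Z_healthy, pi, mu, cov), 95)
```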

3. Experiments and Analysis

3.1. Experimental Verification on Laminated Composite Plate

To validate the proposed model, a laminated composite plate with dimensions of 300 mm × 300 mm × 1 mm was fabricated from an epoxy-based carbon fiber prepreg with a cross-ply layup of $[0/90/0/90]_S$. The composite laminates were manufactured using a hot-press machine and carefully cured to achieve the desired mechanical properties: a tensile strength of 4900 MPa, a tensile modulus of 230 GPa, a thermal conductivity of 9.4 W/m·K, and a density of 1.8 g/cm³. After trimming the composite plate to its final dimensions, four piezoelectric transducers (PZTs) from PI Ceramics were mounted diagonally at the four corners of the laminate using Loctite 401 adhesive to monitor the structure. Each PZT had a diameter of 10 mm and a thickness of 0.5 mm.

3.2. Data Acquisition

The configuration consisting of four piezoelectric transducers, each with the dual functions of actuation and sensing, attached to a laminated plate is presented in Figure 2a. Twelve sensing paths (P1–P12) were used for data acquisition. The composite plate was segmented into nine grid zones, each measuring 90 mm × 90 mm. To simulate varying damage severity levels, as presented in Figure 2b, three distinct masses (13 g, 16 g, and 20 g) were simultaneously placed at the center of each grid zone. A disk magnet was chosen to simulate the damage because it was non-destructive, easy to position, and capable of altering the local stiffness and acoustic impedance of the laminate, thereby affecting the propagation of Lamb waves. This approach also allowed for a reversible process that enabled the collection of both healthy and damaged data under controlled conditions. The experimental setup, as shown in Figure 2c, comprised an NI-USB 6341 series data acquisition system, a dual-channel PZT amplifier, and a LabVIEW-controlled PC for signal generation and data acquisition.
The excitation signal and measured Lamb wave signals are presented in Figure 3. As shown in Figure 3a, a five-cycle sinusoidal tone burst, modulated by a Hanning window and centered at 150 kHz, was employed as the excitation signal. Signals were transmitted and recorded at a sampling rate of 500 kHz. As shown in Figure 3b–f, healthy and damaged responses (D1–D3) were acquired sequentially using the pitch–catch method. The signals were collected along path 3 between PZT-1 and PZT-3, with all damage cases introduced at the center of the plate. The influence of the damage was evident in the direct-wave packet region (index 0–400), where noticeable amplitude reductions and signal distortions were observed in the damaged signals compared to the healthy one (see Figure 3c–e). The damaged responses exhibited reduced amplitudes owing to the scattering effects induced by the defects. Subtle differences were observed among the three damage cases. As shown in Figure 3f, damage case D3 resulted in greater energy loss than D1 and D2, indicating a more severe disruption.
To ensure experimental repeatability, each test cycle was repeated ten times. A total of 3240 damaged signals (12 paths × 10 repetitions × 9 locations × 3 damage levels) and 600 healthy signals (12 paths × 10 repetitions × 5 trials) were collected. Due to the limited quantity of healthy signals and the risk of overfitting, a generative adversarial network (GAN), described in Section 2.2, was employed to generate new high-fidelity samples rather than simply replicating the existing data patterns. Moreover, to ensure the quality of the generated signals, K-means clustering was used to distinguish between high- and low-quality synthetic samples.
Specifically, the mean squared error (MSE) between each generated signal and the original healthy signals was computed, as described in Section 2.2. A threshold was then set at the smaller of the two cluster centroids, and synthetic signals with MSE values below this threshold were classified as high-quality, as shown in Figure 4a. A representative comparison between a generated signal and an original healthy signal is shown in Figure 4b, demonstrating a close resemblance in both amplitude and temporal structure, with only minor deviations observed in the second-wave packet. In total, 3270 high-quality synthetic signals were generated and combined with the original healthy data, yielding a final dataset of 3870 samples.

3.3. Baseline Methods

To demonstrate the effectiveness of our model, we compared it with various state-of-the-art models that are widely used for anomaly detection. These include shallow machine learning and deep learning approaches. Some of the models evaluated were the One-Class Support Vector Machine (Oc-SVM) [24], IsoForest [25], Local Outlier Factor (LOF) [26], and K-nearest neighbor (KNN) [27]. To ensure a fair comparison, latent features from the pretrained autoencoder were extracted and used to train the selected shallow models. Although shallow machine learning algorithms have demonstrated effectiveness in certain tasks, they often struggle with complex datasets. Therefore, in this study, we considered several deep learning models as the baseline for our proposed approach. These deep models include the Variational Autoencoder (VAE) [28], Invariant Representation Anomaly Detection (IRAD) [29], Deep support vector data description (Deep-SVDD) [30], and the transformer autoencoder (TAE) [31].

3.4. Implementation Details

To ensure a fair comparison between shallow machine learning and deep learning models, features were extracted using an autoencoder to train the KNN, IsoForest, Oc-SVM, and LOF models. The autoencoder, with an input dimension of 1500 and a 64-dimensional latent representation, was trained exclusively on healthy data. All experiments were conducted on a workstation equipped with an NVIDIA GeForce RTX 3060 GPU, an Intel Core i7-10700F CPU, and 32 GB of RAM. All models were trained using the same hyperparameters: the Adam optimizer, a batch size of 32, and 50 training epochs. For KNN and LOF, the neighborhood size was set to five based on trial-and-error experimentation. The number of estimators for IsoForest was set to 30, and the radial basis function (RBF) kernel was selected for Oc-SVM. Fully connected layers were used to construct the deep learning-based models (VAE, IRAD, and Deep-SVDD), all trained exclusively on healthy data with identical training parameters. Anomaly detection in both the VAE and IRAD models is based on reconstruction errors, which serve as the anomaly score via the mean squared error (MSE). In the VAE, a Kullback–Leibler (KL) divergence term is added to the loss function to enforce a standard normal distribution in the latent space, whereas Deep-SVDD initializes its center point from features extracted from healthy data. The transformer autoencoder was constructed with a dense layer of 64 units, two attention heads, a feed-forward network with a hidden size of 128, and a dropout rate of 0.25.
The proposed model, summarized in Table 1, identifies abnormalities by first training an attention-enhanced autoencoder solely on healthy data. This allows the model to learn a compact latent representation and accurately reconstruct normal signal patterns. A Gaussian mixture model (GMM) with four components was then fitted to the latent features extracted from the healthy data; the number of components was selected by trial and error rather than exhaustive parameter optimization. Anomaly scores were computed as the negative log-likelihood of each sample under the GMM. Samples with scores exceeding the 95th percentile of the healthy data were classified as anomalous, indicating potential damage.
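The fitting and thresholding step described above can be reproduced with scikit-learn as in the sketch below; the latent arrays are random placeholders standing in for features extracted by the trained encoder, and fitting the GMM post hoc (rather than jointly through the estimation network) is a simplification.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
latent_healthy = rng.normal(size=(3870, 32))  # placeholder for encoder features
latent_test = rng.normal(size=(500, 32))      # placeholder for test features

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(latent_healthy)

scores_healthy = -gmm.score_samples(latent_healthy)  # negative log-likelihood
threshold = np.percentile(scores_healthy, 95)        # 95th-percentile threshold

scores_test = -gmm.score_samples(latent_test)
is_damaged = scores_test > threshold                 # True -> flagged as damaged
```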
To assess the contribution of the attention mechanism to the proposed model, we conducted a comparative experiment by retraining the model with the attention module excluded from the encoder. The modified model (AE-GMM) was evaluated under identical conditions, and its performance metrics were compared with those of the original architecture.

3.5. Evaluation Metrics

To evaluate the performance of the proposed model, four key metrics were selected: the F1-score, recall (R), the false positive rate (FPR), and inference time. Following [25], recall indicates the proportion of correctly classified damaged instances, whereas the FPR assesses the model’s reliability by measuring how often healthy data are misclassified as damaged. Inference time measures how long a trained model takes to analyze new data.
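For reference, the three classification metrics can be computed from thresholded predictions as follows (damaged = 1, healthy = 0); inference time is simply the wall-clock duration of the scoring call.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, recall_score

def evaluate(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "f1": f1_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),  # TP / (TP + FN)
        "fpr": fp / (fp + tn),                   # healthy flagged as damaged
    }

# Toy example with five labeled samples.
print(evaluate(np.array([1, 1, 0, 0, 1]), np.array([1, 0, 0, 1, 1])))
```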

4. Experimental Results

4.1. Influence of Synthetic GAN Data on Proposed Model’s Performance

To evaluate the influence of synthetic data on the model’s performance, we designed four scenarios, as summarized in Table 2. The first scenario uses real data exclusively (600 samples). The second scenario augments the full set of real samples with roughly 20% of the synthetic data (654 samples). The third scenario combines the full set of real samples with all 3270 synthetic samples. Lastly, the fourth scenario consists solely of synthetic data. Each scenario was evaluated using the same set of 1080 damaged samples.
Table 2 shows that the model trained solely on real data achieved the lowest accuracy and a high false positive rate, suggesting that it struggles to generalize to unseen data owing to the scarcity of real samples.
In contrast, augmenting the real samples with the full set of synthetic data (scenario 3) produced the best F1-score and the lowest false positive rate, highlighting the quality of the synthetic signals filtered by K-means clustering. The fourth scenario also showed strong anomaly detection; however, it suffered from lower precision and an increased FPR, likely due to the absence of real data during training. Overall, the results indicate that synthetic data enhanced the model’s performance, with scenario 3 performing best.

4.2. Anomaly Detection Results

To train the proposed and baseline models, 5-fold cross-validation was employed to split the healthy data into training and validation sets. For each damage category, 35% of the corresponding data were randomly selected to minimize class imbalance during the testing phase. The selected samples from the three damage severities were concatenated to construct the damage dataset, which was then combined with the healthy validation set to form the testing dataset.
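A sketch of this splitting procedure is given below; `healthy_X` and the per-severity damage arrays are random placeholders standing in for the measured datasets.

```python
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
healthy_X = rng.normal(size=(3870, 1500))                        # placeholder
damage_sets = [rng.normal(size=(1080, 1500)) for _ in range(3)]  # D1-D3 placeholders

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, val_idx in kf.split(healthy_X):
    train_healthy = healthy_X[train_idx]                # healthy-only training set
    val_healthy = healthy_X[val_idx]
    # Randomly select 35% of each damage severity for testing.
    picks = [d[rng.choice(len(d), int(0.35 * len(d)), replace=False)]
             for d in damage_sets]
    test_X = np.concatenate([val_healthy, *picks])
    test_y = np.concatenate([np.zeros(len(val_healthy)),
                             np.ones(sum(len(p) for p in picks))])
    # train on train_healthy only, then evaluate on (test_X, test_y)
```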
Table 3 lists the average performance metrics of the proposed and baseline models. For each metric, the best score is shown in bold and the second-best in italics. Among the traditional models, Isolation Forest delivered the best performance, with an F1-score of 91.4% and a recall of 93.8%, though it suffered from a relatively high false positive rate (FPR) of 11.3%. The K-nearest neighbor (KNN) model achieved almost perfect recall (99.4%) but was undermined by the highest FPR (31.8%), indicating poor discrimination of healthy signals. Both One-Class SVM and Local Outlier Factor (LOF) performed poorly across all metrics. These findings suggest that while traditional machine learning models offer faster inference, they lack the feature extraction capacity required for complex signals such as Lamb waves. In contrast, the deep learning models demonstrated superior feature extraction capabilities. The Variational Autoencoder (VAE) stood out among the baselines, achieving an F1-score of 96.1%, a recall of 98.3%, and a moderate FPR of 5.16%. However, its inference time (0.75 s), nearly double that of the proposed model, limits its practicality in real-time settings. The transformer autoencoder (TAE) attained a moderate F1-score of 90.0% and a recall of 94.0% but exhibited the longest inference time (5.56 s), reflecting a high computational demand despite decent detection performance. Both Deep-SVDD and IRAD failed to strike a balance across metrics. Deep-SVDD achieved the lowest FPR (0.8%) but suffered from extremely low recall (44.0%) and a correspondingly low F1-score of 60.6%, indicating severe overfitting and poor anomaly detection capability.
The proposed SAE-GMM model outperformed all baselines, achieving the highest F1-score (97.1%), nearly perfect recall (99.9%), and the lowest FPR (4.99%). Additionally, it maintained a fast inference time of 0.37 s, offering an ideal balance between speed and accuracy. The trained model without the self-attention module (AE-GMM) achieved perfect recall but suffered from a higher FPR (7.01%). In addition, SAE-GMM provides a clear balance between recall and false alarms. These results clearly demonstrate that integrating self-attention improves robustness and reduces false positives, with only a slight increase in runtime.
Figure 5 shows the confusion matrices for all baseline models and the proposed model, highlighting the effectiveness of each in distinguishing between healthy and damaged samples. The Oc-SVM and Deep-SVDD models exhibited limited capabilities, misclassifying nearly half of the damaged samples. Although Deep-SVDD achieved the lowest FPR, its overall detection performance was inadequate for practical applications. Models such as IsoForest, LOF, and IRAD demonstrated moderate performance in detecting true positives.
The KNN model achieved the second-highest number of true positives, correctly identifying 1074 out of 1080 damaged samples; however, it suffered from a high false positive rate, misclassifying 33.6% of the healthy signals. The VAE also performed well, misclassifying only 29 damaged samples and yielding a low FPR of 5.04%. The transformer autoencoder (TAE) achieved decent classification performance, correctly identifying 1013 damaged samples (93.8% recall) and 1207 healthy samples (90.01%). However, it misclassified 67 damaged and 134 healthy instances, indicating a moderate trade-off between detection and precision.
In contrast, the proposed SAE-GMM model outperformed all baselines, achieving the highest number of true positives—correctly classifying 1079 out of 1080 damaged samples and misclassifying only one sample. This highlights its robustness in differentiating between healthy and damaged signals. These results demonstrate that the superior performance of the proposed model stems from its ability to effectively capture the underlying structure of healthy data through a combination of deep representation learning, self-attention mechanisms, and GMM-based clustering in the latent space.

4.3. Visualization of the Results

To further evaluate the separation capability of the proposed model relative to the baseline approaches, we employed t-distributed Stochastic Neighbor Embedding (t-SNE) and Kernel Density Estimation (KDE) visualizations. As shown on the left side of Figure 6, the t-SNE results indicate that both Deep-SVDD and Oc-SVM exhibited a limited ability to discriminate between healthy (blue) and damaged (red) samples in the latent space; a significant portion of the damaged samples were incorrectly clustered with healthy samples, reflecting suboptimal separation. In contrast, the KNN model tended to misclassify a large number of healthy samples as damaged. The VAE, IRAD, and LOF models demonstrated improved class separation, with reduced overlap between the healthy and damaged sample distributions compared with the remaining shallow machine learning methods. Among all models, the proposed model achieved the most distinct separation, followed by the VAE. In the t-SNE plot, the healthy samples form a compact cluster, whereas the damaged samples are more widely distributed and clearly separated. This clear delineation suggests that the proposed model is more effective at learning discriminative features, thereby enhancing its capacity for reliable and accurate damage detection. Additionally, the KDE plots on the right side of Figure 6 illustrate the distribution of anomaly scores and reconstruction errors for the two classes. The Oc-SVM achieved the worst separation between healthy and damaged samples, whereas the proposed model exhibited the best. For the Oc-SVM, the healthy (green) and damaged (red) samples overlap considerably around the decision threshold at zero, reflecting its limited ability to separate false positives from false negatives.
To highlight the impact of the attention mechanism on the proposed model, the feature distributions before and after applying attention are shown in Figure 7. Figure 7a shows the t-SNE plot of the features before the attention mechanism was applied, where a significant overlap between healthy and damaged samples is observed, indicating poor class separation. By contrast, Figure 7b shows the feature distribution after applying attention, demonstrating improved separation between the two classes with a distinct gap. This indicates that the attention mechanism effectively enhances the feature representation of the proposed SAE-GMM model, resulting in better class separation than models that do not utilize attention.
The proposed model outperformed all baseline models, with a much more distinct separation between the classes. The healthy samples are concentrated at very low anomaly scores, whereas the damaged samples are spread over a broad range of high anomaly scores. The 95th-percentile threshold determined from the healthy samples cleanly separated most damaged samples, with only a small overlap. Overall, the SAE-GMM model achieved superior class separability and a more reliable threshold margin for identifying structural damage.

4.4. Robustness Analysis Under Varying Gaussian Noise Levels

To evaluate the generalization capability of the proposed SAE-GMM model under noisy conditions, Gaussian noise at three standard deviation levels (σ = 0.05, 0.075, 0.1) was added to the training data. Rather than augmenting the dataset with additional noisy samples, 50% of the original training data were perturbed to mitigate computational overhead. The model’s performance under each noise condition was evaluated and compared with its baseline performance on clean data, as summarized in Table 4. Under moderate noise levels (σ = 0.05 and σ = 0.075), the model retained stable F1-scores of 0.97 and 0.96 with strong recall values of 97% and 95%, respectively. At the highest noise level (σ = 0.1), the model demonstrated notable resilience, achieving an F1-score of 0.93 and a recall of 90%, highlighting its effectiveness in anomaly detection. The false positive rate (FPR) increased steadily with the noise level, from 4.99% (clean data) to 6.10% (σ = 0.1), yet remained within an acceptable range. These results validate the robustness of the proposed model under realistic signal perturbations.
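The perturbation can be sketched as follows; which half of the samples is perturbed is chosen at random here, as no selection rule is specified.

```python
import numpy as np

def add_noise_to_half(X: np.ndarray, sigma: float, seed: int = 0) -> np.ndarray:
    # Add zero-mean Gaussian noise with standard deviation sigma to 50% of rows.
    rng = np.random.default_rng(seed)
    X_noisy = X.copy()
    idx = rng.choice(len(X), size=len(X) // 2, replace=False)
    X_noisy[idx] += rng.normal(0.0, sigma, size=X_noisy[idx].shape)
    return X_noisy

# for sigma in (0.05, 0.075, 0.1):
#     retrain and evaluate the model on add_noise_to_half(X_train, sigma)
```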

5. Conclusions

This study presented a robust unsupervised anomaly detection framework for laminated composite plates, combining GAN-based data augmentation, a self-attention autoencoder, and Gaussian mixture modeling. The proposed model successfully captures complex signal patterns and separates damaged from healthy signals with high scores across all metrics. Compared with traditional and deep learning baselines, SAE-GMM achieves high detection accuracy, a low false positive rate, low computational cost, and strong generalization under noisy conditions. The attention mechanism enriches the model’s ability to separate features, while the Gaussian mixture model (GMM) provides a reliable decision boundary. These results underscore the proposed model’s potential for real-world structural health monitoring applications. In future work, we plan to explore cross-plate validation and real-time deployment to evaluate scalability and adaptability across diverse composite structures.

Author Contributions

Conceptualization, J.W.S.; methodology, O.M.; writing—original draft preparation, O.M.; writing—review and editing, O.M. and J.W.S.; supervision, J.W.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean Government (Ministry of Science and ICT, MSIT) (No. RS-2023-NR076925, 50%). This work was partly supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP)’s Innovative Human Resource Development for Local Intellectualization program grant funded by the Korea government (MSIT) (IITP-2025-RS-2020-II201612, 50%).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liew, K.M.; Pan, Z.Z.; Zhang, L.W. An overview of layerwise theories for composite laminates and structures: Development, numerical implementation and application. Compos. Struct. 2019, 216, 240–259. [Google Scholar] [CrossRef]
  2. Munyaneza, O.; Yuk, D.-G.; Sohn, J.W. A State-of-art on Damage Detection of Composite Structure Using Lamb Wave and Deep Learning Techniques. Int. J. Precis. Eng. Manuf.-Smart Technol. 2025, 3, 65–82. [Google Scholar] [CrossRef]
  3. Ahmed, O.; Wang, X.; Tran, M.-V.; Ismadi, M.-Z. Advancements in fiber-reinforced polymer composite materials damage detection methods: Towards achieving energy-efficient SHM systems. Compos. Part B Eng. 2021, 223, 109136. [Google Scholar] [CrossRef]
  4. Sawant, S.; Sethi, A.; Banerjee, S.; Tallur, S. Unsupervised learning framework for temperature compensated damage identification and localization in ultrasonic guided wave SHM with transfer learning. Ultrasonics 2023, 130, 106931. [Google Scholar] [CrossRef] [PubMed]
  5. Yang, Z.; Zhang, J.; Xu, H.; Yang, L.; Ma, S.; Wu, Z. Phased Array System for Damage Detection in Plate-Like Structures Based on Single Lamb Wave Mode Extraction. J. Aerosp. Eng. 2023, 36, 04023004. [Google Scholar] [CrossRef]
  6. Mariani, S.; Kalantari, A.; Kromanis, R.; Marzani, A. Data-driven modeling of long temperature time-series to capture the thermal behavior of bridges for SHM purposes. Mech. Syst. Signal Process. 2024, 206, 110934. [Google Scholar] [CrossRef]
  7. Yang, Z.; Yang, H.; Tian, T.; Deng, D.; Hu, M.; Ma, J.; Gao, D.; Zhang, J.; Ma, S.; Yang, L.; et al. A review on guided-ultrasonic-wave-based structural health monitoring: From fundamental theory to machine learning techniques. Ultrasonics 2023, 133, 107014. [Google Scholar] [CrossRef]
  8. Eltouny, K.; Gomaa, M.; Liang, X. Unsupervised Learning Methods for Data-Driven Vibration-Based Structural Health Monitoring: A Review. Sensors 2023, 23, 3290. [Google Scholar] [CrossRef]
  9. Khan, A.; Kim, H.S. Damage assessment of laminated composites using unsupervised autonomous features. J. Thermoplast. Compos. Mater. 2024, 37, 2123–2148. [Google Scholar] [CrossRef]
  10. Abbassi, A.; Römgens, N.; Tritschel, F.F.; Penner, N.; Rolfes, R. Evaluation of machine learning techniques for structural health monitoring using ultrasonic guided waves under varying temperature conditions. Struct. Health Monit. 2023, 22, 1308–1325. [Google Scholar] [CrossRef]
  11. Junges, R.; Rastin, Z.; Lomazzi, L.; Giglio, M.; Cadini, F. Convolutional autoencoders and CGANs for unsupervised structural damage localization. Mech. Syst. Signal Process. 2024, 220, 111645. [Google Scholar] [CrossRef]
  12. Zhao, P.; Ding, Z.; Li, Y.; Zhang, X.; Zhao, Y.; Wang, H.; Yang, Y. SGAD-GAN: Simultaneous Generation and Anomaly Detection for time-series sensor data with Generative Adversarial Networks. Mech. Syst. Signal Process. 2024, 210, 111141. [Google Scholar] [CrossRef]
  13. Luleci, F.; Catbas, F.N.; Avci, O. Generative adversarial networks for labeled acceleration data augmentation for structural damage detection. J. Civ. Struct. Health Monit. 2023, 13, 181–198. [Google Scholar] [CrossRef]
  14. Msigwa, G.; Ntambala, E.; Yun, J. Enhancing atrial fibrillation classification from single-lead electrocardiogram signals using attention-based networks and generative adversarial networks with density-based clustering. Eng. Appl. Artif. Intell. 2024, 133, 108607. [Google Scholar] [CrossRef]
  15. Dreher, N.R.; Storti, G.C.; Machado, T.H. Vibration Signal Evaluation Based on K-Means Clustering as a Pre-Stage of Operational Modal Analysis for Structural Health Monitoring of Rotating Machines. Energies 2023, 16, 7848. [Google Scholar] [CrossRef]
  16. Yu, Y.-F.; Wei, P.; Wu, X.; Feng, Q.; Zhang, C. Discriminative fuzzy K-means clustering with local structure preservation for high-dimensional data. Knowl.-Based Syst. 2024, 304, 112537. [Google Scholar] [CrossRef]
  17. He, Y.; Chen, X.; Miao, D.; Zhang, H.; Qin, X.; Du, S.; Lu, P. Graph-enhanced anomaly detection framework in multivariate time series using Graph Attention and Enhanced Generative Adversarial Networks. Expert Syst. Appl. 2025, 271, 126667. [Google Scholar] [CrossRef]
  18. Chen, L.; Jiang, H.; Wang, L.; Li, J.; Yu, M.; Shen, Y.; Du, X. Generative adversarial synthetic neighbors-based unsupervised anomaly detection. Sci. Rep. 2025, 15, 16. [Google Scholar] [CrossRef] [PubMed]
  19. Ma, C.; Wang, Y.; Li, F.; Zhang, H.; Zhang, Y.; Edalatpanah, S.A. Constructing Attention-LSTM-VAE Power Load Model Based on Multiple Features. Adv. Math. Phys. 2024, 2024, 1041791. [Google Scholar] [CrossRef]
  20. Roy, M.; Halder, A.; Majumder, S.; Biswas, U. AttentivECGRU: GRU based autoencoder with attention mechanism and automated fuzzy thresholding for ECG arrhythmia detection. Appl. Soft Comput. 2024, 167, 112337. [Google Scholar] [CrossRef]
  21. Zong, B.; Song, Q.; Min, M.R.; Cheng, W.; Lumezanu, C.; Cho, D.; Chen, H. Deep autoencoding gaussian mixture model for unsupervised anomaly detection. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018), Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
  22. Khan, W.; Khan, A.N.; Hasan, M.K.; Khan, A.; Mokhtar, U.A.; Islam, S. DVAEGMM: Dual Variational Autoencoder with Gaussian Mixture Model for Anomaly Detection on Attributed Networks. IEEE Access 2022, 10, 91160–91176. [Google Scholar] [CrossRef]
  23. Zhou, Y.-C.; Li, M.-Q.; Ji, L.-B. Denoising Deep Autoencoder Gaussian Mixture Model and Its Application for Robust Nonlinear Industrial Process Monitoring. In Proceedings of the 2021 International Conference on Computer Information Science and Artificial Intelligence (CISAI), Kunming, China, 17–19 September 2021; IEEE: New York, NY, USA; pp. 67–73. [Google Scholar] [CrossRef]
  24. Bigoni, C.; Hesthaven, J.S. Simulation-based Anomaly Detection and Damage Localization: An application to Structural Health Monitoring. Comput. Methods Appl. Mech. Eng. 2020, 363, 112896. [Google Scholar] [CrossRef]
  25. Li, C.; Guo, L.; Gao, H.; Li, Y. Similarity-Measured Isolation Forest: Anomaly Detection Method for Machine Monitoring Data. IEEE Trans. Instrum. Meas. 2021, 70, 1–12. [Google Scholar] [CrossRef]
  26. Swana, E.F.; Bokoro, P.; Doorsamy, W. Isolation Forest and Local Outlier Factor-Based Anomaly Detection on an Induction Machine. In Proceedings of the 2024 IEEE PES/IAS PowerAfrica, Johannesburg, South Africa, 7–11 October 2024; IEEE: New York, NY, USA; pp. 1–5. [Google Scholar] [CrossRef]
  27. Nizan, O.; Tal, A. k-NNN: Nearest Neighbors of Neighbors for Anomaly Detection. arXiv 2023, arXiv:2305.17695. [Google Scholar]
  28. Komorska, I.; Puchalski, A. Condition Monitoring Using a Latent Space of Variational Autoencoder Trained Only on a Healthy Machine. Sensors 2024, 24, 6825. [Google Scholar] [CrossRef]
  29. Yang, Z.; Bozchalooi, I.S.; Darve, E. Anomaly Detection with Domain Adaptation. arXiv 2020, arXiv:2006.03689. [Google Scholar]
  30. Ruff, L.; Vandermeulen, R.; Goernitz, N.; Deecke, L.; Siddiqui, S.A.; Binder, A.; Müller, E.; Kloft, M. Deep One-Class Classification. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; Dy, J., Krause, A., Eds.; Volume 80, pp. 4393–4402. Available online: https://proceedings.mlr.press/v80/ruff18a.html (accessed on 5 February 2025).
  31. Tuli, S.; Casale, G.; Jennings, N.R. TranAD: Deep Transformer Networks for Anomaly Detection in Multivariate Time Series Data. arXiv 2022, arXiv:2201.07284. [Google Scholar] [CrossRef]
Figure 1. The flowchart of the proposed SAE-GMM model. The framework comprises three main components: (1) an experimental setup and data acquisition, (2) a GAN for generating synthetic data, and (3) a self-attention autoencoder (SAE) with a Gaussian mixture model (GMM) for anomaly detection.
Figure 2. Data acquisition setup: (a) laminated composite with PZT layout and sensing paths; (b) different masses used to simulate artificial damage; (c) experimental setup.
Figure 3. Measured Lamb wave signals along path 3: (a) excitation signal; (b) response under healthy conditions; (c) comparison between healthy and damaged case D1; (d) comparison between healthy and damaged case D2; (e) comparison between healthy and damaged case D3; (f) comparison of all damage cases (D1–D3).
Figure 4. Augmented Lamb wave data: (a) K-means clustering of synthetic signals based on mean squared error (MSE) values; (b) comparison between the original and the generated healthy Lamb wave signals.
Figure 5. Confusion matrices for baselines and proposed model: (a) Oc-SVM; (b) IsoForest; (c) KNN; (d) LOF; (e) IRAD; (f) Deep-SVDD; (g) VAE; (h) SAE-GMM; (i) TAE.
Figure 6. t-SNE visualizations and KDE plots: (a) Oc-SVM; (b) IsoForest; (c) KNN; (d) LOF; (e) IRAD; (f) Deep-SVDD; (g) VAE; (h) SAE-GMM.
Figure 7. Comparison of feature distribution using t-SNE plots: (a) pre-attention; (b) post-attention.
Table 1. Detailed architecture of the proposed SAE-GMM model.

| Layers | Output Shape | Parameters | Hyperparameters |
|---|---|---|---|
| Input layer | (None, 1500) | 0 | input shape = 1500 |
| Dense | (None, 128) | 192,128 | units = 128; activation = ReLU |
| Dense | (None, 64) | 8256 | units = 64; activation = ReLU |
| Attention (encoder) | (None, 64) | 0 | self-attention over 64-d latent space |
| Multiply | (None, 64) | 0 | encoded ⊗ attention output |
| Dense (latent) | (None, 32) | 2080 | units = 32; activation = ReLU |
| Dense | (None, 64) | 2112 | units = 64; activation = ReLU |
| Dense | (None, 128) | 8320 | units = 128; activation = ReLU |
| Dense (reconstruction) | (None, 1500) | 193,500 | units = 1500; activation = sigmoid |
| Concatenate | (None, 1532) | 0 | combined latent (32) + abs error (1500) |
| Dense | (None, 16) | 24,528 | units = 16; activation = ReLU |
| Dense | (None, 8) | 136 | units = 8; activation = ReLU |
| Dense (GMM output) | (None, 4) | 36 | units = 4; activation = SoftMax |

Total params: 431,096
Table 2. Performance of SAE-GMM under various real–synthetic data configurations.

| Scenario | Real Samples | Synthetic Samples | Accuracy | Precision | Recall | F1-Score | FPR (%) |
|---|---|---|---|---|---|---|---|
| 1 | 600 | 0 | 0.92 | 0.97 | 0.93 | 0.95 | 9.50 |
| 2 | 600 | 654 | 0.97 | 0.96 | 1.0 | 0.96 | 5.06 |
| 3 | 600 | 3270 | 0.97 | 0.98 | 1.0 | 0.97 | 5.02 |
| 4 | 0 | 3270 | 0.97 | 0.94 | 1.0 | 0.96 | 6.40 |
Table 3. Anomaly detection performance for the baselines and our proposed model. For each metric, the best score is shown in bold and the second-best in italics.

| Models | F1-Score (%) | Recall (%) | False Positive Rate (%) | Inference Time (s) |
|---|---|---|---|---|
| Oc-SVM | 60.0 ± 0.1 | 51.1 ± 0.22 | 11.0 ± 18 | 0.09 |
| IsoForest | 91.4 ± 0.1 | 93.8 ± 0.03 | 11.3 ± 21 | *0.01* |
| KNN | 83.8 ± 0.2 | 99.4 ± 0.01 | 31.8 ± 15 | 0.03 |
| LOF | 78.8 ± 0.3 | 80.1 ± 0.3 | 48.5 ± 0.01 | 0.02 |
| IRAD | 88.7 ± 0.1 | 88.7 ± 0.09 | 8.2 ± 0.04 | **2.3 × 10⁻³** |
| Deep-SVDD | 60.6 ± 0.0 | 44.0 ± 0.01 | **0.8 ± 0.33** | 0.93 |
| TAE | 90.0 ± 0.4 | 94.0 ± 0.06 | 10.0 ± 0.03 | 5.56 |
| VAE | *96.1 ± 0.1* | 98.3 ± 0.03 | 5.16 ± 0.09 | 0.75 |
| AE-GMM | 95.8 ± 0.3 | **100 ± 0.0** | 7.01 ± 0.01 | 0.32 |
| SAE-GMM | **97.1 ± 0.0** | *99.9 ± 0.05* | *4.99 ± 0.03* | 0.37 |
Table 4. Proposed model performance under varying noise levels.

| Model | Noise Level | Accuracy | Precision | Recall | F1-Score | FPR |
|---|---|---|---|---|---|---|
| SAE-GMM | clean | 0.97 | 0.96 | 0.999 | 0.97 | 4.99% |
| SAE-GMM | σ = 0.05 | 0.95 | 0.94 | 0.97 | 0.97 | 5.15% |
| SAE-GMM | σ = 0.075 | 0.95 | 0.94 | 0.95 | 0.96 | 5.54% |
| SAE-GMM | σ = 0.1 | 0.92 | 0.94 | 0.90 | 0.93 | 6.10% |