Implementation of Principal Component Analysis (PCA)/Singular Value Decomposition (SVD) and Neural Networks in Constructing a Reduced-Order Model for Virtual Sensing of Mechanical Stress
Abstract
1. Introduction
Structure of the Document
2. Methodology
- Structural design: The first step involved designing the structure itself, defining the geometry, materials, and boundary conditions such as restraints and applied loads.
- FEM: Equilibrium equations were solved for each finite element, considering the applied loads and boundary conditions, providing detailed insights into stress distribution throughout the structure.
- Data preprocessing: Given the large volume of output data generated from multiple load combination scenarios simulated with FEM, the data were processed in batches. PCA was applied as a data reduction technique, ensuring that the dataset was manageable for subsequent training steps.
- DL training: A ROM was created to learn the complex relationships between applied load and resulting stresses. Once the model was trained, validation was performed to evaluate its performance.
2.1. General Description of the FEM
2.2. Theoretical Foundations of the ROM
2.2.1. Numerical Reduction with PCA
2.2.2. Dataset Splitting Strategy in PCA
2.2.3. Deep Learning Algorithm
- Capturing complex and spatial dependencies;
- Adaptability to high-dimensional output spaces;
- Capacity for hierarchical feature learning.
- Key definitions: the architecture employs sequential dense layers in a feedforward configuration, where the output of one layer serves as the input for the next. This setup progressively transforms raw input features into meaningful outputs while maintaining computational efficiency and reliability. The dense layers, which form the core of the model, are introduced through the following key definitions:
  – Dense layers: fully connected layers that enable the network to learn intricate patterns by connecting each neuron to all neurons in the preceding layer.
  – Activation function: all dense layers, except the output layer, employ the ReLU (Rectified Linear Unit) activation function to introduce non-linearity, facilitating the modeling of complex relationships.
  – Neuron scaling: the number of neurons in each dense layer is scaled relative to the number of input features, balancing model complexity with computational feasibility.
- The overall structure and functionality of the ANN architecture are outlined below:
  – Input layer: a dense layer that directly receives the four input features, initialized with constant weights and employing the ReLU activation function to capture non-linear input–output relationships from the outset.
  – Hidden layers: intermediate layers positioned between the input and output layers, responsible for processing input features into abstract representations through dense connections and activation functions.
  – Output layer: a dense layer that maps the learned features to the high-dimensional output space (r outputs). This layer, suited for regression tasks, does not use an activation function, allowing direct numerical predictions.
- Model compilation: after defining the ANN architecture, the model must be compiled with specific settings that define how it learns and how its performance is assessed. This involves selecting an optimizer to adjust the model’s parameters, defining a loss function to quantify prediction errors, and establishing evaluation metrics to measure the model’s effectiveness in capturing the relationships between input and output variables. These elements, detailed below, are fundamental to ensuring that the training process is both efficient and capable of producing accurate predictions.
  – Optimizer: the Adam optimizer is selected for its efficiency in managing sparse gradients and adapting learning rates, facilitating better and faster convergence.
  – Loss function: the ANN uses the Mean Squared Error (MSE) as the loss function, which is well suited for regression tasks aiming to minimize error. MSE measures the average squared difference between the predicted and actual values. The formula for MSE is given by Equation (10):
    $$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 \quad (10)$$
    where:
    ∗ $n$ is the number of samples;
    ∗ $y_i$ is the actual value for the $i$-th sample;
    ∗ $\hat{y}_i$ is the predicted value for the $i$-th sample.
  – Early stopping: a regularization technique employed to halt the training process when the model’s performance on the validation set ceases to improve. Specifically, if the validation loss does not decrease after a predefined number of consecutive epochs, training is stopped. This prevents overfitting by avoiding unnecessary iterations, conserves computational resources, and ensures the model retains its generalization capabilities.
  – Performance evaluation: the performance of the model is assessed using metrics such as the MSE, the Mean Absolute Error (MAE), and the Pearson correlation coefficient (a brief computation sketch of these metrics is given after this list). The formula for MAE is given by Equation (11):
    $$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right| \quad (11)$$
    The Pearson correlation coefficient is a statistical measure that quantifies the linear relationship between two variables, and it is given by Equation (12):
    $$r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}} \quad (12)$$
    where:
    ∗ $x_i$ and $y_i$ are individual data points of variables X and Y;
    ∗ $\bar{x}$ and $\bar{y}$ are the means of X and Y, respectively;
    ∗ $n$ denotes the number of observations.
    Its value ranges from −1 to 1, where the following hold:
    ∗ 1 indicates a perfect positive correlation, meaning that as one variable increases, the other increases proportionally.
    ∗ −1 indicates a perfect negative correlation, meaning that as one variable increases, the other decreases proportionally.
    ∗ 0 indicates no linear relationship between the two variables.
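For reference, the following minimal NumPy sketch shows how these three evaluation metrics can be computed; the arrays y_true and y_pred are placeholder names for the actual and predicted outputs, not identifiers from the paper.

```python
import numpy as np

def evaluate_predictions(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Compute MSE, MAE, and the Pearson correlation coefficient
    between flattened actual and predicted values (Equations (10)-(12))."""
    y_true = np.asarray(y_true, dtype=float).ravel()
    y_pred = np.asarray(y_pred, dtype=float).ravel()

    mse = np.mean((y_true - y_pred) ** 2)       # Equation (10)
    mae = np.mean(np.abs(y_true - y_pred))      # Equation (11)

    # Pearson correlation coefficient, Equation (12)
    x_c = y_true - y_true.mean()
    y_c = y_pred - y_pred.mean()
    pearson = np.sum(x_c * y_c) / np.sqrt(np.sum(x_c ** 2) * np.sum(y_c ** 2))

    return {"MSE": mse, "MAE": mae, "Pearson": pearson}
```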
2.2.4. PCA Inversion and Descaling
2.2.5. ROM Final Architecture Testing
- Input forces: The process begins with the four input forces. These inputs are random (aleatory) in nature, expressed in newtons (N), and serve as the driving factors for the ROM.
- DL model: The input forces are fed into an ANN-based DL model. This model processes the inputs to generate an intermediate output consisting of the r projection coefficients in the reduced space.
- SVD space matrices (U, Σ, V): By applying Equation (13), the intermediate output is inverted back to the full (scaled) nodal stress space.
- Descaling process: By inverting the scaling previously applied during the standardization step, we finally obtain the predicted nodal stress field as the final output of the ROM, as sketched in the example below.
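The inference chain described above can be sketched as follows. This is a minimal illustration, assuming the reduced coefficients are mapped back through the truncated right-singular vectors and that a min-max scaling to a range [a, b] was used during standardization; all variable names (model, V_r, s_min, s_max, a, b) are illustrative rather than taken from the paper.

```python
import numpy as np

def rom_inference(model, forces, V_r, s_min, s_max, a, b):
    """Sketch of the ROM evaluation chain:
    forces -> ANN -> reduced coefficients -> full scaled field -> descaled stress."""
    # 1) The ANN predicts the r projection coefficients for the given input forces.
    coeffs = model.predict(np.asarray(forces, dtype=float).reshape(1, -1))  # shape (1, r)

    # 2) Inverse projection back to the full nodal space (cf. Equation (13)).
    #    V_r holds the r retained right-singular vectors, shape (n_nodes, r).
    scaled_field = coeffs @ V_r.T                                           # shape (1, n_nodes)

    # 3) Invert the min-max scaling applied during standardization,
    #    assuming the scaling target range was [a, b].
    stress = (scaled_field - a) / (b - a) * (s_max - s_min) + s_min
    return stress                                                           # nodal stresses in MPa
```

If the reduction is performed with scikit-learn's PCA object, the inverse projection (including the centering term) is also available directly as pca.inverse_transform(coeffs).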
3. Development of the ROM
3.1. Dataset Processing
- Filtering, aggregation, and calculation operations on large datasets can be slow and memory intensive. Dask avoids this by working with chunks.
- It allows working with large datasets that do not fit into memory by processing them sequentially or in parallel.
- Instead of running on a single thread as in a conventional workflow, Dask parallelizes operations across multiple CPU cores, improving performance.
- It allows deferring the execution of tasks until a method is explicitly called with .compute(), as illustrated in the sketch below.
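A minimal illustration of this lazy, chunked workflow is shown below; the file pattern and column names are placeholders, not the actual dataset layout used in this work.

```python
import dask.dataframe as dd

# Lazily read many FEM result files as one chunked dataframe (nothing is loaded yet).
ddf = dd.read_csv("fem_results_batch_*.csv")

# Build a deferred computation graph: filtering and aggregation are not executed here.
mean_stress_per_case = ddf.groupby("load_case")["node_stress"].mean()

# Execution is triggered only when .compute() is called, using multiple CPU cores.
result = mean_stress_per_case.compute()
```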
3.2. Scaling and PCA
- The original value is the node stress (MPa) before scaling.
- The scaled value is the node stress after the transformation.
- The minimum value of node stress in the G matrix defines the lower bound of the original range.
- The maximum value of node stress in the G matrix defines the upper bound of the original range.
- The multiplicative factor is derived from the width of the desired target range.
- The additive offset shifts the entire scale up so that it starts at the lower bound of the target range.
- With 70% of the data (training set), PCA was applied with a 1% truncation. Of the 141,100 possible eigenvalues, we truncated to 125 (see Figure 8). This reduced the numerical space from (1828, 141,100) to (1828, 125).
- The validation space (20% of the dataset) was projected onto the truncated space. This reduced the numerical space from (500, 141,100) to (500, 125).
- The testing space (10% of the dataset) was projected onto the truncated space. This reduced the numerical space from (264, 141,100) to (264, 125).
- In total, we reduced the data space from (2592, 141,100) to (2592, 125), which implies a 99% reduction in the data space. This translates from 182 million data points to 0.1 million data points. A code sketch of the scaling and truncation workflow follows this list.
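The sketch below illustrates these steps under stated assumptions: a global min-max scaling of the stress matrices to a [0, 1] range (the actual target range is not restated here) and scikit-learn's PCA with 125 components fitted on the training set only. The matrix names G_train, G_val, and G_test are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

# G_train, G_val, G_test: nodal-stress snapshot matrices of shapes
# (1828, 141100), (500, 141100) and (264, 141100); names are illustrative.
a, b = 0.0, 1.0                                 # assumed target range for the scaling
g_min, g_max = G_train.min(), G_train.max()     # global min/max node stress of the G matrix

def scale(G):
    # Global min-max scaling: width factor (b - a), additive offset a.
    return (G - g_min) / (g_max - g_min) * (b - a) + a

G_train_s, G_val_s, G_test_s = scale(G_train), scale(G_val), scale(G_test)

# Fit the truncated basis on the training set only (125 retained modes),
# then project the validation and test sets onto the same reduced space.
pca = PCA(n_components=125)
A_train = pca.fit_transform(G_train_s)          # (1828, 125)
A_val = pca.transform(G_val_s)                  # (500, 125)
A_test = pca.transform(G_test_s)                # (264, 125)
```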
3.3. Deep Learning Model Training
- Dense layer. Units: 4. Kernel initializer: tensor of ones. Activation: ReLU. Input shape: (1828, 4).
- Dense layer. Units: 80. Activation: ReLU.
- Dense layer. Units: 40. Activation: ReLU.
- Dense layer. Units: 125. Output shape: (1828, 125). (A Keras sketch of this architecture is given after the list.)
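A Keras sketch consistent with the layer list above is shown below. The early-stopping patience, epoch count, and batch size are assumptions rather than values reported in the paper, and the array names (X_train, A_train, X_val, A_val) are placeholders for the force inputs and PCA projection-coefficient targets.

```python
from tensorflow import keras
from tensorflow.keras import layers

# ANN matching the layer list above: 4 -> 80 -> 40 -> 125 dense units.
model = keras.Sequential([
    keras.Input(shape=(4,)),                                  # four input forces per sample
    layers.Dense(4, kernel_initializer="ones", activation="relu"),
    layers.Dense(80, activation="relu"),
    layers.Dense(40, activation="relu"),
    layers.Dense(125),                                        # linear output: 125 projection coefficients
])

model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Early stopping halts training once the validation loss stops improving;
# the patience value is an assumption, not taken from the paper.
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=20,
                                           restore_best_weights=True)

history = model.fit(X_train, A_train,
                    validation_data=(X_val, A_val),
                    epochs=500, batch_size=32,                # assumed training settings
                    callbacks=[early_stop])
```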
3.4. Hardware and Software Used
- FEM simulations:
  – Processors: two Intel Xeon E5-2630 v4;
  – RAM: 512 GB.
- ROM calculations:
  – Processor: AMD Ryzen 7 5800X (3.8 GHz);
  – RAM: 32 GB DDR4 (2666 MHz).
4. Results and Discussion
- train vs. validation: KS Statistic = 0.029, p-value = 0.86;
- train vs. test: KS Statistic = 0.042, p-value = 0.81;
- validation vs. test: KS Statistic = 0.052, p-value = 0.72.
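For reference, pairwise two-sample Kolmogorov–Smirnov comparisons of this kind can be computed as in the sketch below; SciPy is assumed here (it is not part of the software stack listed later), and y_train, y_val, and y_test are placeholder names for the quantity whose distribution is compared across splits.

```python
from scipy.stats import ks_2samp

# Pairwise two-sample KS tests between the three splits.
pairs = {
    "train vs. validation": (y_train, y_val),
    "train vs. test": (y_train, y_test),
    "validation vs. test": (y_val, y_test),
}
for name, (sample_a, sample_b) in pairs.items():
    stat, p_value = ks_2samp(sample_a, sample_b)
    print(f"{name}: KS statistic = {stat:.3f}, p-value = {p_value:.2f}")
```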
- The anticipated asymptotic reduction in the network’s loss (and MAE) as training advances, indicating effective minimization;
- Achieving a stable minimum loss by the conclusion of the training phase, signifying completion of the training process;
- Consistent behavior of the model’s losses across both training and validation datasets, suggesting the successful mitigation of overfitting concerns.
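One quick way to verify these behaviors is to plot the recorded training history, as in the following sketch; Matplotlib is assumed here (it is not part of the listed software stack), and history is the object returned by model.fit() in the training sketch above.

```python
import matplotlib.pyplot as plt

# Training and validation curves recorded by Keras during model.fit().
plt.plot(history.history["loss"], label="training loss (MSE)")
plt.plot(history.history["val_loss"], label="validation loss (MSE)")
plt.plot(history.history["mae"], label="training MAE")
plt.plot(history.history["val_mae"], label="validation MAE")
plt.xlabel("Epoch")
plt.yscale("log")           # log scale makes the asymptotic decay easier to see
plt.legend()
plt.title("Training and validation losses")
plt.show()
```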
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Hribernik, K.; Wuest, T.; Thoben, K.D. Towards Product Avatars Representing Middle-of-Life Information for Improving Design, Development and Manufacturing Processes. In Digital Product and Process Development Systems, Proceedings of the IFIP TC 5 International Conference, NEW PROLAMAT 2013, Dresden, Germany, 10–11 October 2013; Kovács, G.L., Kochan, D., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 85–96. [Google Scholar] [CrossRef]
- Holler, M.; Uebernickel, F.; Brenner, W. Digital twin concepts in manufacturing industries-a literature review and avenues for further research. In Proceedings of the 18th International Conference on Industrial Engineering (IJIE), Seoul, Republic of Korea, 10–12 October 2016. [Google Scholar]
- Ghosh, A.; Ullah, A.; Kubo, A. Hidden Markov model-based digital twin construction for futuristic manufacturing systems. Artif. Intell. Eng. Des. Anal. Manuf. 2019, 33, 317–331. [Google Scholar] [CrossRef]
- Tao, F. Digital twin-driven product design framework. Int. J. Prod. Res. 2018, 57, 3935–3953. [Google Scholar] [CrossRef]
- Zhuang, C.; Liu, J.; Xiong, H. Digital twin-based smart production management and control framework for the complex product assembly shop-floor. Int. J. Adv. Manuf. Technol. 2018, 96, 1149–1163. [Google Scholar] [CrossRef]
- ASME (Ed.) Data Flow and Communication Framework Supporting Digital Twin for Geometry Assurance. In Proceedings of the ASME 2017 International Mechanical Engineering Congress and Exposition, Tampa, FL, USA, 3–9 November 2017; Volume 2: Advanced Manufacturing. [Google Scholar] [CrossRef]
- Chen, B.; Wan, J.; Celesti, A.; Li, D.; Abbas, H.; Zhang, Q. Edge Computing in IoT-Based Manufacturing. IEEE Commun. Mag. 2018, 56, 103–109. [Google Scholar] [CrossRef]
- Tan, Y.; Yang, W.; Yoshida, K.; Takakuwa, S. Application of IoT-Aided Simulation to Manufacturing Systems in Cyber-Physical System. Machines 2019, 7, 2. [Google Scholar] [CrossRef]
- Boschert, S.; Rosen, R. Digital Twin—The Simulation Aspect. In Mechatronic Futures; Springer International Publishing: Cham, Switzerland, 2016; pp. 59–74. [Google Scholar] [CrossRef]
- Schluse, M.; Rossmann, J. From simulation to experimentable digital twins: Simulation-based development and operation of complex technical systems. In Proceedings of the 2016 IEEE International Symposium on Systems Engineering (ISSE), Edinburgh, UK, 3–5 October 2016; pp. 1–6. [Google Scholar] [CrossRef]
- Erikstad, S.; Ove, S. Merging Physics, Big Data Analytics and Simulation for the Next-Generation Digital Twins. In Proceedings of the HIPER 2017, High-Performance Marine Vehicles, Zevenwacht, South Africa, 11–13 September 2017; pp. 11–13. [Google Scholar]
- Bhupathiraju, V.; Ravuri, R. The dawn of Big Data-Hbase. In Proceedings of the 2014 Conference on IT in Business, Industry and Government (CSIBIGP), Indore, India, 8–9 March 2014; pp. 1–4. [Google Scholar] [CrossRef]
- Syafrudin, M.; Alfian, G.; Fitriyani, N.; Rhee, J. Performance Analysis of IoT-Based Sensor, Big Data Processing, and Machine Learning Model for Real-Time Monitoring System in Automotive Manufacturing. Sensors 2018, 18, 2946. [Google Scholar] [CrossRef] [PubMed]
- Calabuig, N.; Laarossi, I.; González, A.; Nuñez, A.; Pérez, L.; García-Minguillán, A. Development of a Low-Cost Smart Sensor GNSS System for Real-Time Positioning and Orientation for Floating Offshore Wind Platform. Sensors 2023, 23, 925. [Google Scholar] [CrossRef]
- Parrott, A.; Warshaw, L. Industry 4.0 and the Digital Twin; Deloitte: London, UK, 2017. [Google Scholar]
- Schluse, M.; Priggemeyer, M.; Atorf, L.; Rossmann, J. Experimentable Digital Twins—Streamlining Simulation-Based Systems Engineering for Industry 4.0. IEEE Trans. Ind. Inform. 2018, 14, 1722–1731. [Google Scholar] [CrossRef]
- Ullah, A. Modeling and simulation of complex manufacturing phenomena using sensor signals from the perspective of Industry 4.0. Adv. Eng. Inform. 2019, 39, 1–13. [Google Scholar] [CrossRef]
- Lin, C.C.; Deng, D.J.; Chen, Z.Y.; Chen, K.C. Key design of driving industry 4.0: Joint energy-efficient deployment and scheduling in group-based industrial wireless sensor networks. IEEE Commun. Mag. 2016, 54, 46–52. [Google Scholar] [CrossRef]
- Kano, M.; Fujiwara, K. Virtual Sensing Technology in Process Industries: Trends and Challenges Revealed by Recent Industrial Applications. J. Chem. Eng. Jpn. 2013, 46, 1–17. [Google Scholar] [CrossRef]
- Soori, M.; Arezoo, B.; Dastres, R. Virtual manufacturing in Industry 4.0: A review. Data Sci. Manag. 2024, 7, 47–63. [Google Scholar] [CrossRef]
- Gräfe, M.; Pettas, V.; Dimitrov, N.; Cheng, P.W. Machine-learning-based virtual load sensors for mooring lines using simulated motion and lidar measurements. Wind Energy Sci. 2024, 9, 2175–2193. [Google Scholar] [CrossRef]
- Cristaldi, L.; Ferrero, A.; Macchi, M.; Mehrafshan, A.; Arpaia, P. Virtual Sensors: A Tool to Improve Reliability. In Proceedings of the 2020 IEEE International Workshop on Metrology for Industry 4.0 & IoT, Roma, Italy, 3–5 June 2020; pp. 142–145. [Google Scholar] [CrossRef]
- Ruiz, D.; Casas, A.; Escobar, C.A.; Perez, A.; Gonzalez, V. Advanced Machine Learning Techniques for Corrosion Rate Estimation and Prediction in Industrial Cooling Water Pipelines. Sensors 2024, 24, 3564. [Google Scholar] [CrossRef]
- Mohamed, K. Machine Learning for Model Order Reduction; Springer International Publishing: Cham, Switzerland, 2018. [Google Scholar] [CrossRef]
- Lu, Y.; Li, H.; Saha, S.; Mojumder, S.; Amin, A.A.; Suarez, D.; Liu, Y.; Qian, D.; Liu, W.K. Reduced Order Machine Learning Finite Element Methods: Concept, Implementation, and Future Applications. Comput. Model. Eng. Sci. 2021, 129, 1351–1371. [Google Scholar] [CrossRef]
- Adel, A.; Salah, K. Model order reduction using genetic algorithm. In Proceedings of the 2016 IEEE 7th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), New York, NY, USA, 20–22 October 2016; pp. 1–6. [Google Scholar] [CrossRef]
- Adel, A.; Salah, K. Model order reduction using artificial neural networks. In Proceedings of the 2016 IEEE International Conference on Electronics, Circuits and Systems (ICECS), Monte Carlo, Monaco, 11–14 December 2016; pp. 89–92. [Google Scholar] [CrossRef]
- Magargle, R. A Simulation-Based Digital Twin for Model-Driven Health Monitoring and Predictive Maintenance of an Automotive Braking System. In Proceedings of the 12th International Modelica Conference, Prague, Czech Republic, 15–17 May 2017. [Google Scholar] [CrossRef]
- Sugeno, M.; Yasukawa, T. A fuzzy-logic-based approach to qualitative modeling. IEEE Trans. Fuzzy Syst. 1993, 1, 7. [Google Scholar] [CrossRef]
- Abdullah, H.N. An Improvement in LQR Controller Design based on Modified Chaotic Particle Swarm Optimization and Model Order Reduction. Int. J. Intell. Eng. Syst. 2021, 14, 157–168. [Google Scholar] [CrossRef]
- Moore, B. Principal component analysis in linear systems: Controllability, observability, and model reduction. IEEE Trans. Autom. Control 1981, 26, 17–32. [Google Scholar] [CrossRef]
- Suman, S.K.; Kumar, A. Model reduction of power system by modified balanced truncation method. Univers. J. Control Autom 2020, 8, 41–52. [Google Scholar] [CrossRef]
- Gopi, E.S. Algorithm Collections for Digital Signal Processing Applications Using Matlab; Springer: Dordrecht, The Netherlands, 2007. [Google Scholar] [CrossRef]
- Palulli, R.; Zhang, K.; Dybe, S.; Paschereit, C.O.; Duwig, C. A novel data-driven reduced order modelling methodology for simulation of humid blowout in wet combustion applications. Energy 2024, 297, 131310. [Google Scholar] [CrossRef]
- Luo, Z.; Wang, L.; Xu, J.; Chen, M.; Yuan, J.; Tan, A.C.C. Flow reconstruction from sparse sensors based on reduced-order autoencoder state estimation. Phys. Fluids 2023, 35, 075127. [Google Scholar] [CrossRef]
- Takano, M.; Shinya, M.; Miyakawa, H.; Yoshida, Y.; Hirosaki, K. Virtual sensor using model order reduction for real-time estimation of tool edge temperature. Trans. JSME 2023, 89, 23-00159. (In Japanese) [Google Scholar] [CrossRef]
- Bengoechea-Cuadrado, C.; García-Camprubí, M.; Zambrano, V.; Mazuel, F.; Izquierdo, S. Virtual Sensor Development Based on Reduced Order Models of CFD Data. In Proceedings of the 2019 IEEE 17th International Conference on Industrial Informatics (INDIN), Helsinki, Finland, 22–25 July 2019; Volume 1, pp. 1644–1648. [Google Scholar] [CrossRef]
- Wu, B.; Wei, Q.; Li, X.; Kou, Y.; Lu, W.; Ge, H.; Guo, X. A four-dimensional digital twin framework for fatigue damage assessment of semi-submersible platforms and practical application. Ocean Eng. 2024, 301, 117273. [Google Scholar] [CrossRef]
- Pacheco-Blazquez, R.; Garcia-Espinosa, J.; Di Capua, D.; Pastor Sanchez, A. A Digital Twin for Assessing the Remaining Useful Life of Offshore Wind Turbine Structures. J. Mar. Sci. Eng. 2024, 12, 573. [Google Scholar] [CrossRef]
- Ares de Parga Regalado, S. Projection-Based Hyper-Reduced Order Modeling of Stress and Reaction Fields, and Application of Static Condensation for Multibody Problems. Master’s Thesis, Universitat Politècnica de Catalunya, Barcelona, Spain, 2021. [Google Scholar]
- Yvonnet, J.; He, Q.C. The reduced model multiscale method (R3M) for the non-linear homogenization of hyperelastic media at finite strains. J. Comput. Phys. 2007, 223, 341–368. [Google Scholar] [CrossRef]
- Hughes, T.J.R. The Finite Element Method: Linear Static and Dynamic Finite Element Analysis; Dover Civil and Mechanical Engineering, Dover Publications: Mineola, NY, USA, 2000. [Google Scholar]
- Shahrivari, S. Beyond Batch Processing: Towards Real-Time and Streaming Big Data. Computers 2014, 3, 117–129. [Google Scholar] [CrossRef]
- Benner, P.; Gugercin, S.; Willcox, K. A Survey of Projection-Based Model Reduction Methods for Parametric Dynamical Systems. SIAM Rev. 2015, 57, 483–531. [Google Scholar] [CrossRef]
- ANSYS Inc. Ansys Mechanical: Structural FEA Analysis Software. Available online: https://www.ansys.com/products/structures/ansys-mechanical (accessed on 23 May 2024).
- Department of Applied Mechanics, Budapest University of Technology and Economics. SOLID187-3-D 10-Node Tetrahedral Structural Solid. Available online: https://www.mm.bme.hu/~gyebro/files/ans_help_v182/ans_elem/Hlp_E_SOLID187.html (accessed on 23 May 2024).
- Shlens, J. A Tutorial on Principal Component Analysis. arXiv 2014, arXiv:1404.1100. [Google Scholar] [CrossRef]
- Abdi, H. Singular Value Decomposition (SVD) and Generalized Singular Value Decomposition. 2007. Available online: https://personal.utdallas.edu/~herve/Abdi-SVD2007-pretty.pdf (accessed on 11 April 2024).
- Dask Development Team. Dask: Library for Dynamic Task Scheduling. 2016. Available online: http://dask.pydata.org (accessed on 13 April 2024).
- Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015. Available online: https://zenodo.org/records/13989084 (accessed on 2 August 2024).
- Python Software Foundation. Python Programming Language. 2024. Available online: https://www.python.org/ (accessed on 2 August 2024).
- Pandas Development Team. Pandas: Open Source Data Analysis Tool. 2024. Available online: https://pandas.pydata.org (accessed on 2 August 2024).
- Harris, C.R.; Millman, K.J.; van der Walt, S.J.; Gommers, R.; Virtanen, P.; Cournapeau, D.; Wieser, E.; Taylor, J.; Berg, S.; Smith, N.J.; et al. Array programming with NumPy. Nature 2020, 585, 357–362. [Google Scholar] [CrossRef]
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
- Keras. 2015. Available online: https://github.com/fchollet/keras (accessed on 2 August 2024).
Index | Features (Input Forces) | Tags (Output Projection Coefficients)
---|---|---
1 | ⋯ | ⋯
2 | ⋯ | ⋯
3 | ⋯ | ⋯
4 | ⋯ | ⋯
5 | ⋯ | ⋯
6 | ⋯ | ⋯
⋮ | ⋮ | ⋮
1296 | ⋯ | ⋯
⋮ | ⋮ | ⋮
2592 | ⋯ | ⋯
Software Package | Version | Purpose |
---|---|---|
Dask [49] | 2024.5.1 | Used for data chunking and parallel computing during data preprocessing. |
TensorFlow [50] | 2.17.0 | Employed for building and training ANNs. |
Python [51] | 3.12.3 | General-purpose programming language used for implementing the workflow. |
Pandas [52] | 2.2.2 | Library for data manipulation and analysis, employed for organizing datasets. |
Numpy [53] | 1.26.4 | Library for numerical computations and array manipulations used throughout. |
Scikit-learn [54] | 1.5.0 | Utilized for PCA and test metrics in ROM. |
Keras [55] | 3.3.3 | High-level API for creating and managing neural network architectures. |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).