Search Results (12)

Search Parameters:
Keywords = AI-based adaptive surrogate model

21 pages, 2712 KB  
Review
Physics–Data-Integrated Hybrid Simulation for Transient Stability in New Power Systems: Status, Challenges, and Prospects
by Ruiqi Jiao, Shuqing Zhang, Hao Zhang, Beila Deng, Tongtong Zhang, Shaopu Tang, Xianfa Hu and Weijie Zhang
Energies 2026, 19(7), 1687; https://doi.org/10.3390/en19071687 - 30 Mar 2026
Viewed by 415
Abstract
The strong non-linearity and multi-scale coupling characteristics of massive heterogeneous components in modern power systems pose severe challenges to traditional numerical simulation methods, rendering them inadequate for urgent online real-time assessment. This paper systematically reviews state-of-the-art hybrid transient stability simulation technologies that deeply integrate physics and data. It first dissects the critical bottlenecks of traditional numerical simulations—specifically computational inefficiency, convergence fragility, and model fidelity gaps—to elucidate the necessity of evolving toward a new physics–data integration paradigm. Subsequently, the review categorizes current methodologies into three technical dimensions: artificial intelligence (AI)-enhanced numerical solvers, AI-based surrogate modeling, and physics-embedded AI modeling. These approaches are synthesized to demonstrate their unique advantages in breaking through computational speed limits, enhancing numerical robustness, and effectively bridging the fidelity gap between simulation models and physical reality. Finally, addressing existing limitations regarding physical consistency and generalization, the paper proposes future research directions, including constructing network architectures with hard physical constraints, enhancing adaptability to complex grid scenarios, and developing self-evolving intelligent simulation frameworks to ensure future grid security. Full article

49 pages, 2911 KB  
Article
From LQ to AI-BED-Fx: A Unified Multi-Fraction Radiobiological and Machine-Learning Framework for Gamma Knife Radiosurgery Across Intracranial Pathologies
by Răzvan Buga, Călin Gheorghe Buzea, Valentin Nedeff, Florin Nedeff, Diana Mirilă, Maricel Agop, Letiția Doina Duceac and Lucian Eva
Cancers 2026, 18(6), 985; https://doi.org/10.3390/cancers18060985 - 18 Mar 2026
Viewed by 360
Abstract
Background: Gamma Knife radiosurgery (GKS) delivers highly conformal intracranial irradiation, yet clinical decision-making still relies predominantly on physical dose metrics that do not account for fractionation, dose rate, treatment time, or DNA repair. Classical radiobiological models—including the linear–quadratic (LQ) formula and the Jones–Hopewell single-session repair model—do not extend naturally to 3- and 5-fraction GKS. Meanwhile, growing evidence suggests that biologically effective dose (BED) may better capture radiosurgical response in selected pathologies. A unified, biologically grounded, multi-fraction GKS framework has been lacking. Methods: We developed AI-BED-Fx, the first multi-fraction extension of the Jones–Hopewell radiobiological model capable of computing fraction-resolved BED for 1-, 3-, and 5-fraction GKS. The framework incorporates α/β ratio, dual-component repair kinetics, isocentre geometry, beam-on–time structure, and lesion-specific biological parameters. Four synthetic pathology-specific cohorts—arteriovenous malformation (AVM), meningioma (MEN), vestibular schwannoma (VS), and brain metastasis (BM)—were generated using distinct radiobiological signatures. Machine-learning models were trained to quantify the predictive value of physical dose versus BED for local control or obliteration. Additional experiments included Bayesian estimation of α/β and a neural-network surrogate for fast BED prediction. An exploratory comparison with a 60-lesion clinical brain–metastasis dataset was performed to assess whether key trends observed in the synthetic BM cohort were consistent with real radiosurgical outcomes. Results: AI-BED-Fx produced realistic pathology-specific BED distributions (AVM 60–210 Gy2.47; MEN 41–85 Gy3.5; VS 46–68 Gy3; BM 37–75 Gy10) and biologically coherent dose–response relationships. Predictive modeling demonstrated strong pathology dependence. 
In AVM, the three models achieved AUCs of 0.921 (Model A), 0.922 (Model B), and 0.924 (Model C), with corresponding Brier scores of 0.054, 0.051, and 0.051, with BED-based models performing best. In meningioma, BED was the dominant predictor, with AUCs of 0.642 (Model A), 0.660 (Model B), and 0.661 (Model C) and Brier scores of 0.181, 0.177, and 0.179, respectively. In vestibular schwannoma, the narrow BED range resulted in minimal BED contribution, with AUCs of 0.812, 0.827, and 0.830 and Brier scores of 0.165, 0.160, and 0.162, with physical dose and tumor volume determining performance. In brain metastases, outcomes were driven primarily by volume and physical dose, with AUCs of 0.614, 0.630, and 0.629 and Brier scores of 0.254, 0.250, and 0.253, showing negligible improvement from BED. AI-BED-Fx also accurately recovered the true α/β from synthetic outcomes (posterior mean 2.54 vs. true 2.47), and a neural-network surrogate reproduced full radiobiological BED calculations with near-perfect fidelity (R2 = 0.9991). Conclusions: AI-BED-Fx provides the first unified, biologically explicit framework for modeling single- and multi-fraction Gamma Knife radiosurgery. The findings show that the predictive usefulness of BED is pathology-specific rather than universal, and that radiobiological dose provides additional predictive value only when repair kinetics and dose–response biology support it. By integrating mechanistic radiobiology with machine learning, AI-BED-Fx establishes the conceptual and computational foundations for biologically adaptive, AI-guided radiosurgery, and cross-pathology comparison of treatment response. This work uses large radiobiologically grounded synthetic cohorts for methodological validation; limited real-patient data are included only for exploratory consistency checks, and full clinical validation is planned. Full article
(This article belongs to the Special Issue Novel Insights into Glioblastoma and Brain Metastases (2nd Edition))
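The biologically effective dose at the heart of this framework can be illustrated with the textbook linear-quadratic formula. The sketch below shows only that baseline (the paper's AI-BED-Fx adds repair kinetics, dose rate, and beam-on-time structure not modeled here, and the example doses are hypothetical):

```python
def bed_lq(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose under the basic linear-quadratic model."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

# Single-fraction 18 Gy brain-metastasis plan at alpha/beta = 10 Gy:
single = bed_lq(1, 18.0, 10.0)       # 18 * (1 + 1.8) = 50.4 Gy10
# The same 18 Gy physical dose in 3 fractions carries a lower BED:
fractionated = bed_lq(3, 6.0, 10.0)  # 18 * (1 + 0.6) = 28.8 Gy10
```

This captures why fractionation matters: splitting the same physical dose lowers the BED, which is exactly the kind of biology that physical dose metrics alone miss.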

23 pages, 2456 KB  
Article
Research on Intelligent Thermal Optimization for Chiplet-Based Heterogeneously Integrated AI Chip Embedded with Leaf-Vein-Inspired Fractal Microchannels
by Jie Wu, Yu Liang, Guibin Liu, Ruiyang Pang, Yi Teng, Chen Li, Xuetian Bao, Shi Lei and Zhikuang Cai
Materials 2026, 19(4), 679; https://doi.org/10.3390/ma19040679 - 10 Feb 2026
Viewed by 1052
Abstract
Conventional cooling schemes that rely on rigid heat-sink-to-die coupling in vertical stacks fail to track the dynamic, non-uniform heat map of high-performance artificial-intelligence (AI) chips employing chiplet-based heterogeneous integration, giving rise to local hot spots. To eliminate this mismatch, we present a leaf-vein-inspired fractal microchannel tailored for such AI processors. Its hierarchical bifurcation–confluence topology adaptively reshapes the flow field, delivering ultra-low thermal resistance, high heat-transfer coefficients, and uniform dissipation. Coupled with reconfigurable chiplet placement, the design is evaluated through FEM-based orthogonal experiments that rank the influence of coolant, channel diameter/depth, inlet/outlet position, substrate thickness, and flow rate via range analysis and Analysis of Variance (ANOVA). A machine-learned surrogate model of junction temperature is then fed to Particle Swarm Optimization (PSO) for multi-parameter optimization. When re-simulated with the optimal parameter set, the symmetric fractal network lowered the AI chip junction temperature from 127.80 °C to 30.97 °C, a 76% improvement, offering a theoretical basis for hotspot mitigation in advanced heterogeneous AI packages. Full article
(This article belongs to the Special Issue Microstructural and Mechanical Characteristics of Welded Joints)
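The surrogate-plus-PSO stage described above can be sketched generically. Below, a plain global-best particle swarm searches a stand-in quadratic surrogate of junction temperature; the objective, swarm constants, and parameter meanings are illustrative assumptions, not the paper's FEM-trained model:

```python
import random

random.seed(0)  # reproducible illustration

def surrogate_tj(x):
    """Stand-in surrogate: quadratic bowl with its minimum (30 C) at (0.4, 0.7)."""
    d, q = x  # e.g. normalized channel diameter and coolant flow rate
    return 30.0 + 50.0 * (d - 0.4) ** 2 + 80.0 * (q - 0.7) ** 2

def pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Global-best particle swarm minimizing f, started from [0, 1]^dim."""
    pos = [[random.random() for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < f(gbest):
                    gbest = pos[i][:]
    return gbest, f(gbest)

best_x, best_tj = pso(surrogate_tj)
```

In the paper's workflow the surrogate would be a machine-learned model of junction temperature fitted to FEM results, but the optimization loop has the same shape.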

27 pages, 2146 KB  
Article
Intelligent Optimization and Real-Time Control of Wireless Power Transfer for Electric Vehicles
by Yosra Ben Fadhel and Antonio J. Marques Cardoso
Electronics 2025, 14(22), 4478; https://doi.org/10.3390/electronics14224478 - 17 Nov 2025
Cited by 2 | Viewed by 1035
Abstract
Wireless Power Transfer (WPT) for Electric Vehicles (EVs) offers a promising solution for convenient and efficient charging. However, misalignments, sensor noise, and parameter variability can significantly degrade Power Transfer Efficiency (PTE). This study proposes a novel unified artificial intelligence (AI)-driven optimization and control framework that integrates Genetic Algorithm (GA)-based static optimization, Artificial Neural Network (ANN) surrogate modeling, and Reinforcement Learning (RL) dynamic control using the Proximal Policy Optimization (PPO) algorithm. This unified design bridges the gap between previous static-only optimization methods and dynamic adaptive controllers, enabling both peak efficiency and verified robustness within a single digital twin simulation environment. A high-fidelity MATLAB/Simulink model of the WPT system was developed and validated using an ANN surrogate model (Test MSE: 7.87×10⁻¹³). The GA-optimized configuration achieved a peak PTE of 85.47%, representing a 2.11 percentage-point improvement over the baseline. The RL controller, based on PPO, maintained a mean efficiency of approximately 80% under unseen trajectories, ±10% hardware parameter variations, and Gaussian sensor noise (σ=0.56%), demonstrating superior adaptability. Comparative analysis with state-of-the-art studies confirms that the proposed approach not only matches or exceeds the reported efficiency gains, but also uniquely integrates robustness validation and generalization testing. The results suggest that combining offline GA optimization with online RL adaptation provides a scalable, real-time control strategy for practical WPT deployments. Full article
(This article belongs to the Section Industrial Electronics)

56 pages, 3273 KB  
Systematic Review
Artificial Intelligence and Machine Learning in Cold Spray Additive Manufacturing: A Systematic Literature Review
by Habib Afsharnia and Javaid Butt
J. Manuf. Mater. Process. 2025, 9(10), 334; https://doi.org/10.3390/jmmp9100334 - 13 Oct 2025
Cited by 1 | Viewed by 2970
Abstract
Due to their unique benefits over conventional subtractive manufacturing, additive manufacturing methods continue to attract interest in both academia and industry. One such method is called Cold Spray Additive Manufacturing (CSAM), a solid-state coating deposition technology to manufacture and repair metallic components using a gas jet and powder particles. CSAM offers low heat input, stable phases, suitability for heat-sensitive substrates, and high deposition rates. However, persistent challenges include porosity control, geometric accuracy near edges and concavities, anisotropy, and cost sensitivities linked to gas selection and nozzle wear. Interdisciplinary research across manufacturing science, materials characterisation, robotics, control, artificial intelligence (AI), and machine learning (ML) is deployed to overcome these issues. ML supports quality prediction, inverse parameter design, in situ monitoring, and surrogate models that couple process physics with data. To demonstrate the impact of AI and ML on CSAM, this study presents a systematic literature review to identify, evaluate, and analyse published studies in this domain. The most relevant studies in the literature are analysed using keyword co-occurrence and clustering. Four themes were identified: design for CSAM, material analytics, real-time monitoring and defect analytics, and deposition and AI-enabled optimisation. Based on this synthesis, core challenges are identified as small and varied datasets, transfer and identifiability limits, and fragmented sensing. Main opportunities are outlined as physics-based surrogates, active learning, uncertainty-aware inversion, and cloud-edge control for reliable and adaptable ML use in CSAM. By systematically mapping the current landscape, this work provides a critical roadmap for researchers to target the most significant challenges and opportunities in applying AI/ML to industrialise CSAM. Full article

35 pages, 3108 KB  
Review
Data-Driven Optimization of Discontinuous and Continuous Fiber Composite Processes Using Machine Learning: A Review
by Ivan Malashin, Dmitry Martysyuk, Vadim Tynchenko, Andrei Gantimurov, Vladimir Nelyub and Aleksei Borodulin
Polymers 2025, 17(18), 2557; https://doi.org/10.3390/polym17182557 - 22 Sep 2025
Cited by 13 | Viewed by 3328
Abstract
This paper surveys the application of machine learning in fiber composite manufacturing, highlighting its role in adaptive process control, defect detection, and real-time quality assurance. First, the need for ML in composite processing is highlighted, followed by a review of data-driven approaches—including predictive modeling, sensor fusion, and adaptive control—that address material heterogeneity and process variability. An in-depth analysis examines six case studies, among which are XPBD-based surrogates for RL-driven robotic draping, hyperspectral imaging (HSI) with U-Net segmentation for adhesion prediction, and CNN-driven surrogate optimization for variable-geometry forming. Building on these insights, a hybrid AI model architecture is proposed for natural-fiber composites, integrating a physics-informed GNN surrogate, a 3D Spectral-UNet for defect segmentation, and a cross-attention controller for closed-loop parameter adjustment. Validation on synthetic data—including visualizations of HSI segmentation, graph topologies, and controller action weights—demonstrates end-to-end operability. The discussion addresses interpretability, domain randomization, and sim-to-real transfer and highlights emerging trends such as physics-informed neural networks and digital twins. This paper concludes by outlining future challenges in small-data regimes and industrial scalability, thereby providing a comprehensive roadmap for ML-enabled composite manufacturing. Full article
(This article belongs to the Special Issue Artificial Intelligence in Polymers)

36 pages, 6566 KB  
Article
Algorithmic Optimal Control of Screw Compressors for Energy-Efficient Operation in Smart Power Systems
by Kassym Yelemessov, Dinara Baskanbayeva, Leyla Sabirova, Nikita V. Martyushev, Boris V. Malozyomov, Tatayeva Zhanar and Vladimir I. Golik
Algorithms 2025, 18(9), 583; https://doi.org/10.3390/a18090583 - 14 Sep 2025
Cited by 6 | Viewed by 1795
Abstract
This work presents the results of a research study focused on the development and evaluation of an algorithmic optimal control framework for energy-efficient operation of screw compressors in smart power systems. The proposed approach is based on the Pontryagin maximum principle (PMP), which enables the synthesis of a mathematically grounded regulator that minimizes the total energy consumption of a nonlinear electromechanical system composed of a screw compressor and a variable-frequency induction motor. Unlike conventional PID controllers, the developed algorithm explicitly incorporates system constraints, nonlinear dynamics, and performance trade-offs into the control law, allowing for improved adaptability and energy-aware operation. Simulation results obtained using MATLAB/Simulink confirm that the PMP-based regulator outperforms classical PID solutions in both transient and steady-state regimes. Experimental tests conducted in accordance with standard energy consumption evaluation methods showed that the proposed PMP-based controller provides a reduction in specific energy consumption of up to 18% under dynamic load conditions compared to a well-tuned basic PID controller, while maintaining high control accuracy, faster settling, and complete suppression of overshoot under external disturbances. The control system demonstrates robustness to parametric uncertainty and load variability, maintaining a statistical pressure error below 0.2%. The regulator’s structure is compatible with real-time execution on industrial programmable logic controllers (PLCs), supporting integration into intelligent automation systems and smart grid infrastructures. The discrete-time PLC implementation of the regulator requires only 10³ arithmetic operations per cycle and less than 10² kB of RAM for state, buffers, and logging, making it suitable for mid-range industrial controllers under 2–10 ms task cycles.
Fault-tolerance is ensured via range and rate-of-change checks, residual-based plausibility tests, and safe fallbacks (baseline PID or torque-limited speed hold) in case of sensor faults. Furthermore, the proposed approach lays the groundwork for hybrid extensions combining model-based control with AI-driven optimization and learning mechanisms, including reinforcement learning, surrogate modeling, and digital twins. These enhancements open pathways toward predictive, self-adaptive compressor control with embedded energy optimization. The research outcomes contribute to the broader field of algorithmic control in power electronics, offering a scalable and analytically justified alternative to heuristic and empirical tuning approaches commonly used in industry. The results highlight the potential of advanced control algorithms to enhance the efficiency, stability, and intelligence of energy-intensive components within the context of Industry 4.0 and sustainable energy systems. Full article
(This article belongs to the Special Issue AI-Driven Control and Optimization in Power Electronics)

18 pages, 1321 KB  
Article
Enhanced AI-Driven Harmonic Optimization in 36-Pulses Converters for SCADA Integration
by Antonio Valderrabano-Gonzalez and Carlos E. Castañeda
Electronics 2025, 14(18), 3623; https://doi.org/10.3390/electronics14183623 - 12 Sep 2025
Viewed by 1010
Abstract
This paper presents an integrated approach for optimizing the performance of a 36-pulses converter system by using artificial intelligence (AI) techniques to be included in a Supervisory Control and Data Acquisition (SCADA) environment. The focus of the proposal is on enhancing harmonic reduction through intelligent adjustment of switching angles and coordinated control of the reinjection transformer included in the power converter topology. A key component of the proposed methodology involves a simulation-based process to determine optimal firing angles (α1, α2, and α3), based on Selective Harmonic Elimination (SHE) theory, that minimize Total Harmonic Distortion (THD). Using MATLAB with Simulink and PLECS models, a parametric sweep of the firing angles was performed, generating a comprehensive dataset of THD outcomes. This dataset, consisting of THD evaluations across fine-grained angle variations, serves as the training foundation for supervised machine learning models—specifically, neural network regressors—that approximate the nonlinear mapping between firing angles and harmonic distortion. These predictive models are then employed as surrogates to estimate THD rapidly and guide the selection of optimal switching angles in real time without requiring iterative numerical solvers. Optimization heuristics and predictive models are then deployed to dynamically adapt system parameters in real time under varying load conditions. The proposed method demonstrates significant improvements in power quality and operational reliability, highlighting the potential of AI-assisted SCADA systems in advanced power electronics applications. Implementation results performed on a 36-pulses voltage source converter prototype are included to illustrate the appropriateness of the proposal. Full article
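The sweep stage can be sketched with a generic SHE staircase model. Below, the standard harmonic series of a three-step staircase waveform (a textbook simplification, not the authors' 36-pulse PLECS topology) is swept over ordered firing-angle triples to build the kind of (angles, THD) dataset on which a neural-network regressor would then be trained:

```python
import math

def harmonic(n, angles):
    """n-th harmonic amplitude of a three-step staircase waveform (per-unit)."""
    return (4.0 / (n * math.pi)) * sum(math.cos(n * a) for a in angles)

def thd(angles, n_max=49):
    """Total harmonic distortion from odd harmonics 3..n_max."""
    fundamental = harmonic(1, angles)
    distortion = math.sqrt(sum(harmonic(n, angles) ** 2
                               for n in range(3, n_max + 1, 2)))
    return distortion / abs(fundamental)

# Coarse parametric sweep over ordered firing-angle triples (radians).
grid = [i * math.pi / 36 for i in range(1, 18)]  # 5-degree steps, 5..85 deg
dataset = [((a1, a2, a3), thd((a1, a2, a3)))
           for a1 in grid for a2 in grid for a3 in grid
           if a1 < a2 < a3]
best_angles, best_thd = min(dataset, key=lambda row: row[1])
```

A regressor fitted to `dataset` can then answer THD queries far faster than re-running the circuit simulation, which is the surrogate role described in the abstract.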

46 pages, 26730 KB  
Review
AI-Driven Multi-Objective Optimization and Decision-Making for Urban Building Energy Retrofit: Advances, Challenges, and Systematic Review
by Rudai Shan, Xiaohan Jia, Xuehua Su, Qianhui Xu, Hao Ning and Jiuhong Zhang
Appl. Sci. 2025, 15(16), 8944; https://doi.org/10.3390/app15168944 - 13 Aug 2025
Cited by 11 | Viewed by 5784
Abstract
Urban building energy retrofit (UBER) is a critical strategy for advancing the low-carbon and climate-resilience transformation of cities. The integration of machine learning (ML), data-driven clustering, and multi-objective optimization (MOO) is a key aspect of artificial intelligence (AI) that is transforming the process of retrofit decision-making. This integration enables the development of scalable, cost-effective, and robust solutions on an urban scale. This systematic review synthesizes recent advances in AI-driven MOO frameworks for UBER, focusing on how state-of-the-art methods can help to identify and prioritize retrofit targets, balance energy, cost, and environmental objectives, and develop transparent, stakeholder-oriented decision-making processes. Key advances highlighted in this review include the following: (1) the application of ML-based surrogate models for efficient evaluation of retrofit design alternatives; (2) data-driven clustering and classification to identify high-impact interventions across complex urban fabrics; (3) MOO algorithms that support trade-off analysis under real-world constraints; and (4) the emerging integration of explainable AI (XAI) for enhanced transparency and stakeholder engagement in retrofit planning. Representative case studies demonstrate the practical impact of these approaches in optimizing envelope upgrades, active system retrofits, and prioritization schemes. Notwithstanding these advancements, considerable challenges persist, encompassing data heterogeneity, the transferability of models across disparate urban contexts, fragmented digital toolchains, and the paucity of real-world validation of AI-based solutions. The subsequent discussion encompasses prospective research directions, with particular emphasis on the potential of deep learning (DL), spatiotemporal forecasting, generative models, and digital twins to further advance scalable and adaptive urban retrofit. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) for Energy Systems)

16 pages, 2221 KB  
Article
Efficient Training of Deep Spiking Neural Networks Using a Modified Learning Rate Scheduler
by Sung-Hyun Cha and Dong-Sun Kim
Mathematics 2025, 13(8), 1361; https://doi.org/10.3390/math13081361 - 21 Apr 2025
Cited by 1 | Viewed by 2650
Abstract
Deep neural networks (DNNs) have achieved high accuracy in various applications, but with the rapid growth of AI and the increasing scale and complexity of datasets, their computational cost and power consumption have become even more significant challenges. Spiking neural networks (SNNs), inspired by biological neurons, offer an energy-efficient alternative by using spike-based information processing. However, training SNNs is difficult due to the non-differentiability of their activation function and the challenges in constructing deep architectures. This study addresses these issues by integrating DNN-like backpropagation into SNNs using a supervised learning approach. A surrogate gradient descent based on the arctangent function is applied to approximate the non-differentiable activation function, enabling stable gradient-based learning. The study also explores the interplay between the spatial domain (layer-wise propagation) and the temporal domain (time step), ensuring proper gradient propagation using the chain rule. Additionally, mini-batch training, Adam optimization, and layer normalization are incorporated to improve training efficiency and mitigate gradient vanishing. A softmax-based probability representation and cross-entropy loss function are used to optimize classification performance. Along with these techniques, a deep SNN was designed to converge to the optimal point faster than other models in the early stages of training by utilizing a modified learning rate scheduler. The proposed learning method allows deep SNNs to achieve competitive accuracy while maintaining their inherent low-power characteristics. These findings contribute to making SNNs more practical for machine learning applications by combining the advantages of deep learning and biologically inspired computing. 
In summary, this study contributes to the field by analyzing and adapting deep learning techniques—such as dropout, layer normalization, mini-batch training, and Adam optimization—to the spiking domain, and by proposing a novel learning rate scheduler that enables faster convergence during early training phases with fewer epochs. Full article
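The arctangent surrogate gradient mentioned above can be sketched in a few lines. The sharpness constant and threshold below are illustrative assumptions, and a real implementation would attach the surrogate to an autograd framework rather than scalar functions:

```python
import math

ALPHA = 2.0  # surrogate sharpness (hypothetical choice)

def spike(v, threshold=1.0):
    """Forward pass: non-differentiable activation, fire iff v crosses threshold."""
    return 1.0 if v >= threshold else 0.0

def surrogate_grad(v, threshold=1.0):
    """Backward pass: d(spike)/dv replaced by d/du [(1/pi) * atan(pi * ALPHA * u)]."""
    u = v - threshold
    return ALPHA / (1.0 + (math.pi * ALPHA * u) ** 2)

# The surrogate peaks at the threshold and decays away from it, so
# gradient flows mostly through neurons that were close to firing.
g_at_threshold = surrogate_grad(1.0)  # maximal gradient
g_far_above = surrogate_grad(3.0)     # near-zero gradient
```

The forward pass keeps the true spike behavior, while the smooth backward substitute is what lets the chain rule propagate through both the layer-wise and time-step dimensions.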

14 pages, 3528 KB  
Article
An AI-Based Adaptive Surrogate Modeling Method for the In-Service Response of UVLED Modules
by Cadmus Yuan
Electronics 2022, 11(18), 2861; https://doi.org/10.3390/electronics11182861 - 9 Sep 2022
Cited by 2 | Viewed by 2401
Abstract
The response forecasting of in-service complex electronic systems remains a challenge due to its uncertainty. An AI-based adaptive surrogate modeling method, including offline and online learning procedures, is proposed in this research for different systems with significant variety. The offline learning aims to abstract the knowledge from the known information and represent it as root models. The in-service response is modeled by a linear combination of the online learning of these root models against the continuous new measurement. This research applies a performance measurement dataset of the UVLED modules with considerable deviation to verify the proposed method. Part of the datasets is selected to generate the root models by offline learning, and these root models are applied to the online learning procedures for the adaptive surrogate model (ASM) of the different systems. The results show that after approximately 10 online learning iterations, the ASM achieves the capability of predicting 1000 h of response. Full article
(This article belongs to the Special Issue Applications of AI in Intelligent System Development)
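The core mechanism, representing an in-service system's response as a linear combination of pre-trained root models whose weights are re-fit against incoming measurements, can be sketched as an ordinary least-squares update. The root-model forms and the 60/40 blend below are hypothetical stand-ins, not the paper's UVLED data:

```python
import math

def gauss_solve(a, b):
    """Solve a small dense linear system by Gaussian elimination with pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_weights(root_models, times, measurements):
    """Least-squares weights of the linear combination (normal equations)."""
    rows = [[m(t) for m in root_models] for t in times]
    n = len(root_models)
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    xty = [sum(r[i] * y for r, y in zip(rows, measurements)) for i in range(n)]
    return gauss_solve(xtx, xty)

# Two hypothetical root models of performance degradation vs. operating hours.
roots = [lambda t: math.exp(-t / 5000.0), lambda t: 1.0 / (1.0 + t / 8000.0)]
# Synthetic "measurements" generated from a 60/40 blend of the roots.
ts = [0.0, 200.0, 500.0, 1000.0, 2000.0]
ys = [0.6 * roots[0](t) + 0.4 * roots[1](t) for t in ts]
weights = fit_weights(roots, ts, ys)  # recovers roughly [0.6, 0.4]
```

Re-running `fit_weights` as each new measurement arrives is the online-learning step; the root models themselves are fixed products of the offline stage.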

25 pages, 1010 KB  
Article
ASAMS: An Adaptive Sequential Sampling and Automatic Model Selection for Artificial Intelligence Surrogate Modeling
by Carlos A. Duchanoy, Hiram Calvo and Marco A. Moreno-Armendáriz
Sensors 2020, 20(18), 5332; https://doi.org/10.3390/s20185332 - 17 Sep 2020
Cited by 8 | Viewed by 3949
Abstract
Surrogate Modeling (SM) is often used to reduce the computational burden of time-consuming system simulations. However, continuous advances in Artificial Intelligence (AI) and the spread of embedded sensors have led to the creation of Digital Twins (DT), Design Mining (DM), and Soft Sensors (SS). These methodologies represent a new challenge for the generation of surrogate models, since they require the implementation of elaborate artificial intelligence algorithms while minimizing the number of physical experiments performed. To reduce the number of assessments of a physical system, several adaptive sequential sampling methodologies have been developed; however, they are for the most part limited to Kriging models and Kriging-model-based Monte Carlo Simulation. In this paper, we integrate a distinct adaptive sampling methodology into an automated machine learning methodology (AutoML) to help in the process of model selection while minimizing the number of system evaluations and maximizing system performance for surrogate models based on artificial intelligence algorithms. In each iteration, this framework uses a grid search algorithm to determine the best candidate models and performs a leave-one-out cross-validation to calculate the performance of each sampled point. A Voronoi diagram is applied to partition the sampling region into local cells, and the Voronoi vertexes are considered as new candidate points. The performance of the sample points is used to estimate the accuracy of the model for a set of candidate points, so as to select those that will most improve the model's accuracy. Then, the number of candidate models is reduced. Finally, the performance of the framework is tested on two examples to demonstrate the applicability of the proposed method. Full article
(This article belongs to the Section Intelligent Sensors)
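In one dimension the Voronoi machinery collapses to midpoints between sorted samples, which makes the sampling loop easy to sketch. This simplification and the toy objective are illustrative assumptions; the paper works in higher dimensions with true Voronoi cells and AutoML model selection:

```python
def interp(xs, ys, x):
    """Piecewise-linear surrogate with flat extrapolation (xs sorted)."""
    if x <= xs[0]:
        return ys[0]
    for j in range(1, len(xs)):
        if x <= xs[j]:
            t = (x - xs[j - 1]) / (xs[j] - xs[j - 1])
            return ys[j - 1] + t * (ys[j] - ys[j - 1])
    return ys[-1]

def loo_errors(xs, ys):
    """Leave-one-out error of the surrogate at each sample point."""
    errs = []
    for i in range(len(xs)):
        rx, ry = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        errs.append(abs(interp(rx, ry, xs[i]) - ys[i]))
    return errs

def next_sample(xs, ys):
    """Propose the cell-boundary candidate flanking the worst LOO sample."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    sx, sy = [xs[i] for i in order], [ys[i] for i in order]
    errs = loo_errors(sx, sy)
    worst = max(range(len(sx)), key=lambda i: errs[i])
    # Candidates: midpoints (1-D "Voronoi vertexes") beside the worst sample.
    cands = []
    if worst > 0:
        cands.append(0.5 * (sx[worst - 1] + sx[worst]))
    if worst < len(sx) - 1:
        cands.append(0.5 * (sx[worst] + sx[worst + 1]))
    # Space-filling tie-break: prefer the candidate farthest from known samples.
    return max(cands, key=lambda c: min(abs(c - x) for x in sx))

f = lambda x: x ** 3  # stand-in for an expensive simulation
xs = [0.0, 0.5, 2.0]
ys = [f(x) for x in xs]
x_new = next_sample(xs, ys)  # lands in the poorly covered region near x = 2
```

Each accepted point would then trigger a re-run of the grid-search model selection, which is the part this 1-D sketch omits.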
