Search Results (12,445)

Search Parameters:
Keywords = learning control

16 pages, 1251 KiB  
Article
Enhanced Detection of Intrusion Detection System in Cloud Networks Using Time-Aware and Deep Learning Techniques
by Nima Terawi, Huthaifa I. Ashqar, Omar Darwish, Anas Alsobeh, Plamen Zahariev and Yahya Tashtoush
Computers 2025, 14(7), 282; https://doi.org/10.3390/computers14070282 (registering DOI) - 17 Jul 2025
Abstract
This study introduces an enhanced Intrusion Detection System (IDS) framework for Denial-of-Service (DoS) attacks, utilizing network traffic inter-arrival time (IAT) analysis. By examining the timing between packets together with other statistical features, we detected patterns of malicious activity, allowing early and effective DoS threat mitigation. We generated real DoS traffic, including normal, Internet Control Message Protocol (ICMP), Smurf attack, and Transmission Control Protocol (TCP) classes, and developed nine predictive algorithms, combining traditional machine learning and advanced deep learning techniques with optimization methods, including the synthetic minority oversampling technique (SMOTE) and grid search (GS). Our findings reveal that while traditional machine learning achieved moderate accuracy, it struggled with imbalanced datasets. In contrast, Deep Neural Network (DNN) models showed significant improvements with optimization, with DNN combined with GS (DNN-GS) reaching 89% accuracy. Recurrent Neural Networks (RNNs) combined with SMOTE and GS (RNN-SMOTE-GS) emerged as the best-performing model, with a precision of 97%. These results demonstrate the effectiveness of combining SMOTE and GS and highlight the critical role of advanced optimization techniques in enabling IDS models to accurately classify various types of network traffic and attacks. Full article
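
As a rough illustration of the SMOTE-plus-grid-search combination this abstract describes, a scikit-learn/imbalanced-learn pipeline might look like the sketch below. The synthetic data, the grid values, and the MLP standing in for the paper's DNN/RNN are all assumptions for illustration, not the authors' settings.

```python
# Hypothetical sketch: SMOTE + grid search around a neural-network IDS classifier.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Stand-in for IAT-derived features; 4 imbalanced classes: normal, ICMP, Smurf, TCP.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                           n_classes=4, weights=[0.7, 0.1, 0.1, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("smote", SMOTE(random_state=0)),        # oversample minority attack classes
    ("clf", MLPClassifier(max_iter=500)),    # stand-in for the DNN/RNN in the abstract
])
grid = {"clf__hidden_layer_sizes": [(64,), (128, 64)], "clf__alpha": [1e-4, 1e-3]}
search = GridSearchCV(pipe, grid, cv=5, scoring="f1_macro", n_jobs=-1)
search.fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te))
```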

50 pages, 763 KiB  
Review
A Comprehensive Review on Sensor-Based Electronic Nose for Food Quality and Safety
by Teodora Sanislav, George D. Mois, Sherali Zeadally, Silviu Folea, Tudor C. Radoni and Ebtesam A. Al-Suhaimi
Sensors 2025, 25(14), 4437; https://doi.org/10.3390/s25144437 (registering DOI) - 16 Jul 2025
Abstract
Food quality and safety are essential for ensuring public health, preventing foodborne illness, reducing food waste, maintaining consumer confidence, and supporting regulatory compliance and international trade. This has led to the emergence of many research works that focus on automating and streamlining the assessment of food quality. Electronic noses have become of paramount importance in this context. We analyzed the current state of research in the development of electronic noses for food quality and safety. We examined research papers published in three different scientific databases in the last decade, leading to a comprehensive review of the field. Our review found that most of the efforts use portable, low-cost electronic noses, coupled with pattern recognition algorithms, for evaluating the quality levels in certain well-defined food classes, reaching accuracies exceeding 90% in most cases. Despite these encouraging results, key challenges remain, particularly in diversifying the sensor response across complex substances, improving odor differentiation, compensating for sensor drift, and ensuring real-world reliability. These limitations indicate that a complete device mimicking the flexibility and selectivity of the human olfactory system is not yet available. To address these gaps, our review recommends solutions such as the adoption of adaptive machine learning models to reduce calibration needs and enhance drift resilience, and the implementation of standardized protocols for data acquisition and model validation. We introduce benchmark comparisons and a future roadmap for electronic noses that demonstrate their potential to evolve from controlled studies to scalable industrial applications. In doing so, this review aims not only to assess the state of the field but also to support its transition toward more robust, interpretable, and field-ready electronic nose technologies. Full article
(This article belongs to the Special Issue Sensors in 2025)

26 pages, 6624 KiB  
Article
Data-Efficient Sowing Position Estimation for Agricultural Robots Combining Image Analysis and Expert Knowledge
by Shuntaro Aotake, Takuya Otani, Masatoshi Funabashi and Atsuo Takanishi
Agriculture 2025, 15(14), 1536; https://doi.org/10.3390/agriculture15141536 - 16 Jul 2025
Abstract
We propose a data-efficient framework for automating sowing operations by agricultural robots in densely mixed polyculture environments. This study addresses the challenge of enabling robots to identify suitable sowing positions with minimal labeled data by integrating image-based field sensing with expert agricultural knowledge. We collected 84 RGB-depth images from seven field sites, labeled by synecological farming practitioners of varying proficiency levels, and trained a regression model to estimate optimal sowing positions and seeding quantities. The model’s predictions were comparable to those of intermediate-to-advanced practitioners across diverse field conditions. To implement this estimation in practice, we mounted a Kinect v2 sensor on a robot arm and integrated its 3D spatial data with axis-specific movement control. We then applied a trajectory optimization algorithm based on the traveling salesman problem to generate efficient sowing paths. Simulated trials incorporating both computation and robotic control times showed that our method reduced sowing operation time by 51% compared to random planning. These findings highlight the potential of interpretable, low-data machine learning models for rapid adaptation to complex agroecological systems and demonstrate a practical approach to combining structured human expertise with sensor-based automation in biodiverse farming environments. Full article
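
The path-planning step is described as a traveling-salesman-style ordering of sowing positions. A minimal sketch of one common heuristic for that problem (nearest-neighbour ordering refined by 2-opt) is shown below; the coordinates and helper names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical nearest-neighbour + 2-opt ordering of sowing positions (x, y coordinates).
import numpy as np

def path_length(points, order):
    return sum(np.linalg.norm(points[order[i]] - points[order[i + 1]])
               for i in range(len(order) - 1))

def nearest_neighbour(points, start=0):
    remaining = set(range(len(points))) - {start}
    order = [start]
    while remaining:
        last = order[-1]
        nxt = min(remaining, key=lambda j: np.linalg.norm(points[last] - points[j]))
        order.append(nxt)
        remaining.remove(nxt)
    return order

def two_opt(points, order):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(order) - 2):
            for j in range(i + 1, len(order) - 1):
                candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
                if path_length(points, candidate) < path_length(points, order):
                    order, improved = candidate, True
    return order

sowing_xy = np.random.rand(20, 2)   # assumed sowing positions from the estimation model
visit_order = two_opt(sowing_xy, nearest_neighbour(sowing_xy))
print("visit order:", visit_order)
```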

23 pages, 1631 KiB  
Article
Detecting Malicious Anomalies in Heavy-Duty Vehicular Networks Using Long Short-Term Memory Models
by Mark J. Potvin and Sylvain P. Leblanc
Sensors 2025, 25(14), 4430; https://doi.org/10.3390/s25144430 - 16 Jul 2025
Abstract
Utilizing deep learning models to detect malicious anomalies within the traffic of application layer J1939 protocol networks, found on heavy-duty commercial vehicles, is becoming a critical area of research in platform protection. At the physical layer, the controller area network (CAN) bus is the backbone network for most vehicles. The CAN bus is highly efficient and dependable, which makes it a suitable networking solution for automobiles where reaction time and speed are of the essence due to safety considerations. Much recent research has been conducted on securing the CAN bus explicitly; however, the importance of protecting the J1939 protocol is becoming apparent. Our research utilizes long short-term memory models to predict the next binary data sequence of a J1939 packet. Our primary objective is to compare the performance of our J1939 detection system trained on data sub-fields against a published CAN system trained on the full data payload. We conducted a series of experiments to evaluate both detection systems by utilizing a simulated attack representation to generate anomalies. We show that each detection system outperforms the other on a case-by-case basis and determine that there is a clear requirement for a multifaceted security approach for vehicular networks. Full article
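
A minimal PyTorch sketch of the idea in this abstract (an LSTM that predicts the next packet's binary data and flags large prediction errors as anomalies) follows. The field width, window length, threshold, and dummy data are assumptions, not the authors' configuration.

```python
# Hypothetical next-payload predictor for J1939 data sub-fields (bits as 0/1 floats).
import torch
import torch.nn as nn

class PayloadLSTM(nn.Module):
    def __init__(self, n_bits=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_bits, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_bits)        # predict the next packet's bits

    def forward(self, window):                       # window: (batch, seq_len, n_bits)
        out, _ = self.lstm(window)
        return torch.sigmoid(self.head(out[:, -1]))

model = PayloadLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

# windows: previous packets in a sliding window; targets: the packet that actually followed
windows = torch.randint(0, 2, (256, 10, 64)).float()   # dummy stand-in traffic
targets = torch.randint(0, 2, (256, 64)).float()
for _ in range(5):
    opt.zero_grad()
    loss = loss_fn(model(windows), targets)
    loss.backward()
    opt.step()

# At detection time, a packet whose prediction error exceeds a threshold is flagged.
error = (model(windows[:1]) - targets[:1]).abs().mean().item()
print("prediction error:", error, "anomalous:", error > 0.25)   # threshold assumed
```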

23 pages, 2091 KiB  
Article
Hybrid NARX Neural Network with Model-Based Feedback for Predictive Torsional Torque Estimation in Electric Drive with Elastic Connection
by Amanuel Haftu Kahsay, Piotr Derugo, Piotr Majdański and Rafał Zawiślak
Energies 2025, 18(14), 3770; https://doi.org/10.3390/en18143770 - 16 Jul 2025
Abstract
This paper proposes a hybrid methodology for one-step-ahead torsional torque estimation in an electric drive with an elastic connection. The approach integrates Nonlinear Autoregressive Neural Networks with Exogenous Inputs (NARX NNs) and model-based feedback. The NARX model uses real-time and historical motor speed and torque signals as inputs while leveraging physics-derived torsional torque as a feedback input to refine estimation accuracy and robustness. While model-based methods provide insight into system dynamics, they lack predictive capability—an essential feature for proactive control. Conversely, standalone NARX NNs often suffer from error accumulation and overfitting. The proposed hybrid architecture synergises the adaptive learning of NARX NNs with the fidelity of physics-based feedback, enabling proactive vibration damping. The method was implemented and evaluated on a two-mass drive system using an IP controller and additional torsional torque feedback. Results demonstrate high accuracy and reliability in one-step-ahead torsional torque estimation, enabling effective proactive vibration damping. MATLAB 2024a/Simulink and dSPACE 1103 were used for simulation and hardware-in-the-loop testing. Full article
(This article belongs to the Special Issue Drive System and Control Strategy of Electric Vehicle)
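As a loose sketch of the NARX idea in the abstract above (lagged motor speed and torque plus a physics-derived torsional-torque feedback term feeding a one-step-ahead regressor), using an MLP stand-in rather than the authors' network; all signals and sizes below are dummy assumptions.

```python
# Hypothetical NARX-style one-step-ahead torsional torque estimator with lagged inputs.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(speed, torque, model_torque, shaft_torque, lags=3):
    """Features: last `lags` samples of motor speed, motor torque and physics-model torque."""
    X, y = [], []
    for k in range(lags, len(speed)):
        X.append(np.concatenate([speed[k - lags:k], torque[k - lags:k],
                                 model_torque[k - lags:k]]))
        y.append(shaft_torque[k])            # target: torsional (shaft) torque at step k
    return np.array(X), np.array(y)

# Dummy signals standing in for the measured drive signals and the model-based estimate.
t = np.linspace(0, 10, 2000)
speed, torque = np.sin(2 * t), np.cos(3 * t)
model_torque = 0.5 * np.sin(2 * t + 0.3)                       # physics-derived feedback input
shaft_torque = model_torque + 0.02 * np.random.randn(len(t))   # "measured" torsional torque

X, y = make_lagged(speed, torque, model_torque, shaft_torque)
narx = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0).fit(X, y)
print("one-step-ahead estimate:", narx.predict(X[-1:])[0])
```
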
16 pages, 2946 KiB  
Article
AI-Driven Comprehensive SERS-LFIA System: Improving Virus Automated Diagnostics Through SERS Image Recognition and Deep Learning
by Shuai Zhao, Meimei Xu, Chenglong Lin, Weida Zhang, Dan Li, Yusi Peng, Masaki Tanemura and Yong Yang
Biosensors 2025, 15(7), 458; https://doi.org/10.3390/bios15070458 - 16 Jul 2025
Abstract
Highly infectious and pathogenic viruses seriously threaten global public health, underscoring the need for rapid and accurate diagnostic methods to effectively manage and control outbreaks. In this study, we developed a comprehensive Surface-Enhanced Raman Scattering–Lateral Flow Immunoassay (SERS-LFIA) detection system that integrates SERS scanning imaging with artificial intelligence (AI)-based result discrimination. This system was based on an ultra-sensitive SERS-LFIA strip with SiO2-Au NSs as the immunoprobe (with a theoretical limit of detection (LOD) of 1.8 pg/mL). On this basis, a negative–positive discrimination method combining SERS scanning imaging with a deep learning model (ResNet-18) was developed to analyze probe distribution patterns near the T line. The proposed machine learning method significantly reduced the interference of abnormal signals and achieved reliable detection at concentrations as low as 2.5 pg/mL, which was close to the theoretical Raman LOD. The accuracy of the proposed ResNet-18 image recognition model was 100% on the training set and 94.52% on the testing set. In summary, the proposed SERS-LFIA detection system, which integrates detection, scanning, imaging, and AI-based automated result determination, simplifies the detection process, eliminates the need for specialized personnel, reduces test time, and improves diagnostic reliability. It exhibits great clinical potential and offers a robust technical foundation for detecting other highly pathogenic viruses, providing a versatile and highly sensitive detection method adaptable for future pandemic prevention. Full article
(This article belongs to the Special Issue Surface-Enhanced Raman Scattering in Biosensing Applications)
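A bare-bones sketch of the ResNet-18 discrimination step described above (a binary negative/positive call on SERS scan images near the T line) is given below; the dummy batch, image size, and training details are assumptions, not the authors' pipeline.

```python
# Hypothetical ResNet-18 binary discriminator for SERS scan images (negative vs. positive).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)     # two classes: negative / positive

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for 224x224 RGB renderings of the SERS scan near the T line.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model.train()
opt.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
opt.step()
print("training loss on dummy batch:", loss.item())
```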

17 pages, 2879 KiB  
Article
The Impact of Integrating 3D-Printed Phantom Heads of Newborns with Cleft Lip and Palate into an Undergraduate Orthodontic Curriculum: A Comparison of Learning Outcomes and Student Perception
by Sarah Bühling, Jakob Stuhlfelder, Hedi Xandt, Sara Eslami, Lukas Benedikt Seifert, Robert Sader, Stefan Kopp, Nicolas Plein and Babak Sayahpour
Dent. J. 2025, 13(7), 323; https://doi.org/10.3390/dj13070323 - 16 Jul 2025
Abstract
Background/Objectives: This prospective intervention study examined the learning effect of using 3D-printed phantom heads with cleft lip and palate (CLP) and upper jaw models with CLP and maxillary plates during a lecture for dental students in their fourth year at J. W. Goethe Frankfurt University. The primary aim was to evaluate the impact of 3D-printed models on students’ satisfaction levels along with their understanding and knowledge in dental education. Methods: Six life-sized phantom heads with removable mandibles (three with unilateral and three with bilateral CLP) were designed using ZBrush software (Pixologic Inc., Los Angeles, CA, USA) based on MRI images and printed with an Asiga Pro 4K 3D printer (Asiga, Sydney, Australia). Two groups of students (n = 81) participated in this study: the control (CTR) group (n = 39) attended a standard lecture on cleft lip and palate, while the intervention (INT) group (n = 42) participated in a hands-on seminar with the same theoretical content, supplemented by 3D-printed models. Before and after the session, students completed self-assessment questionnaires and a multiple-choice test to evaluate knowledge improvement. Data analysis was conducted using the chi-square test for individual questions and the Wilcoxon rank test for knowledge gain, with the significance level set at 0.05. Results: The study demonstrated a significant knowledge increase in both groups following the lecture (p < 0.001). Similarly, there were significant differences in students’ self-assessments before and after the session (p < 0.001). The knowledge gain in the INT group regarding the anatomical features of unilateral cleft lip and palate was significantly higher compared to that in the CTR group (p < 0.05). Conclusions: The results of this study demonstrate the measurable added value of using 3D-printed models in dental education, particularly in enhancing students’ understanding of the anatomy of cleft lip and palate. Full article
(This article belongs to the Special Issue Dental Education: Innovation and Challenge)
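The statistical analysis described above (chi-square tests on individual questions, a Wilcoxon test on knowledge gain, significance level 0.05) could be reproduced along these lines with SciPy; the arrays below are placeholders, not the study's data.

```python
# Hypothetical re-creation of the reported analysis (chi-square per question, Wilcoxon on gain).
import numpy as np
from scipy.stats import chi2_contingency, wilcoxon

# Contingency table for one multiple-choice question: rows = CTR/INT, columns = wrong/correct.
table = np.array([[12, 27],
                  [ 6, 36]])
chi2, p_chi, dof, _ = chi2_contingency(table)
print(f"chi-square: chi2={chi2:.2f}, p={p_chi:.3f}")

# Paired pre/post test scores within one group (knowledge gain).
pre  = np.array([5, 6, 4, 7, 5, 6, 3, 8, 5, 6])
post = np.array([8, 9, 7, 9, 8, 9, 6, 10, 8, 9])
stat, p_wil = wilcoxon(pre, post)
print(f"Wilcoxon: W={stat:.1f}, p={p_wil:.4f}, significant at 0.05: {p_wil < 0.05}")
```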

23 pages, 963 KiB  
Article
A Methodology for Turbine-Level Possible Power Prediction and Uncertainty Estimations Using Farm-Wide Autoregressive Information on High-Frequency Data
by Francisco Javier Jara Ávila, Timothy Verstraeten, Pieter Jan Daems, Ann Nowé and Jan Helsen
Energies 2025, 18(14), 3764; https://doi.org/10.3390/en18143764 - 16 Jul 2025
Abstract
Wind farm performance monitoring has traditionally relied on deterministic models, such as power curves or machine learning approaches, which often fail to account for farm-wide behavior and the uncertainty quantification necessary for the reliable detection of underperformance. To overcome these limitations, we propose a probabilistic methodology for turbine-level active power prediction and uncertainty estimation using high-frequency SCADA data and farm-wide autoregressive information. The method leverages a Stochastic Variational Gaussian Process with a Linear Model of Coregionalization, incorporating physical models like manufacturer power curves as mean functions and enabling flexible modeling of active power and its associated variance. The approach was validated on a wind farm in the Belgian North Sea comprising over 40 turbines, using only 15 days of data for training. The results demonstrate that the proposed method improves predictive accuracy over the manufacturer’s power curve, achieving a reduction in error measurements of around 1%. Improvements of around 5% were seen in dominant wind directions (200°–300°) using 2 and 3 Latent GPs, with similar improvements observed on the test set. The model also successfully reconstructs wake effects, with Energy Ratio estimates closely matching SCADA-derived values, and provides meaningful uncertainty estimates and posterior turbine correlations. These results demonstrate that the methodology enables interpretable, data-efficient, and uncertainty-aware turbine-level power predictions, suitable for advanced wind farm monitoring and control applications, enabling more sensitive underperformance detection. Full article
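
A Stochastic Variational GP with a Linear Model of Coregionalization can be set up in GPyTorch roughly as below, following the library's multitask-SVGP pattern. This is only a structural sketch under assumptions: a constant mean stands in for the manufacturer power-curve mean the abstract mentions, and the input dimensions, inducing-point counts, and latent counts are illustrative.

```python
# Hypothetical SVGP + LMC setup for turbine-level power prediction (GPyTorch).
import torch
import gpytorch

class FarmSVGP(gpytorch.models.ApproximateGP):
    def __init__(self, num_latents=3, num_turbines=40, num_inducing=64, input_dim=2):
        inducing = torch.rand(num_latents, num_inducing, input_dim)  # e.g. wind speed, direction
        var_dist = gpytorch.variational.CholeskyVariationalDistribution(
            num_inducing, batch_shape=torch.Size([num_latents]))
        strategy = gpytorch.variational.LMCVariationalStrategy(
            gpytorch.variational.VariationalStrategy(
                self, inducing, var_dist, learn_inducing_locations=True),
            num_tasks=num_turbines, num_latents=num_latents, latent_dim=-1)
        super().__init__(strategy)
        # ConstantMean is a placeholder; the paper uses manufacturer power curves as mean functions.
        self.mean_module = gpytorch.means.ConstantMean(batch_shape=torch.Size([num_latents]))
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(batch_shape=torch.Size([num_latents])),
            batch_shape=torch.Size([num_latents]))

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

model = FarmSVGP()
likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=40)
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=10_000)   # num_data assumed
```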

22 pages, 1295 KiB  
Article
Enhanced Similarity Matrix Learning for Multi-View Clustering
by Dongdong Zhang, Pusheng Wang and Qin Li
Electronics 2025, 14(14), 2845; https://doi.org/10.3390/electronics14142845 - 16 Jul 2025
Abstract
Graph-based multi-view clustering is a fundamental analysis method that learns the similarity matrix of multi-view data. Despite its success, it has two main limitations: (1) complementary information is not fully utilized by directly combining graphs from different views; (2) existing multi-view clustering methods do not adequately address redundancy and noise in the data, significantly affecting performance. To address these issues, we propose the Enhanced Similarity Matrix Learning (ES-MVC) for multi-view clustering, which dynamically integrates global graphs from all views with local graphs from each view to create an improved similarity matrix. Specifically, the global graph captures cross-view consistency, while the local graph preserves view-specific geometric patterns. The balance between global and local graphs is controlled through an adaptive weighting strategy, where hyperparameters adjust the relative importance of each graph, effectively capturing complementary information. In this way, our method can learn the clustering structure that contains fully complementary information, leveraging both global and local graphs. Meanwhile, we utilize a robust similarity matrix initialization to reduce the negative effects caused by noisy data. For model optimization, we derive an effective optimization algorithm that converges quickly, typically requiring fewer than five iterations for most datasets. Extensive experimental results on diverse real-world datasets demonstrate the superiority of our method over state-of-the-art multi-view clustering methods. In our experiments on datasets such as MSRC-v1, Caltech101, and HW, our proposed method achieves superior clustering performance with average accuracy (ACC) values of 0.7643, 0.6097, and 0.9745, respectively, outperforming the most advanced multi-view clustering methods such as OMVFC-LICAG, which yield ACC values of 0.7284, 0.4512, and 0.8372 on the same datasets. Full article
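
A minimal sketch of the general idea of blending a cross-view (global) similarity graph with per-view (local) graphs before clustering is shown below; the fixed convex combination here is a simplification of the paper's adaptive weighting, and the kernels and data are assumptions.

```python
# Hypothetical fusion of a global similarity graph with per-view local graphs.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel

def fused_similarity(views, alpha=0.5, gamma=1.0):
    """views: list of (n_samples, n_features) arrays, one per view."""
    local_graphs = [rbf_kernel(V, gamma=gamma) for V in views]    # view-specific structure
    global_graph = rbf_kernel(np.hstack(views), gamma=gamma)      # cross-view consensus
    S = alpha * global_graph + (1 - alpha) * np.mean(local_graphs, axis=0)
    return (S + S.T) / 2                                          # keep the matrix symmetric

views = [np.random.rand(100, 20), np.random.rand(100, 30)]        # dummy two-view data
S = fused_similarity(views, alpha=0.6)
labels = SpectralClustering(n_clusters=3, affinity="precomputed").fit_predict(S)
print(np.bincount(labels))
```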

2 pages, 121 KiB  
Correction
Correction: Justin et al. Modeling of Artificial Intelligence-Based Automated Climate Control with Energy Consumption Using Optimal Ensemble Learning on a Pixel Non-Uniformity Metro System. Sustainability 2023, 15, 13302
by Shekaina Justin, Wafaa Saleh, Maha M. A. Lashin and Hind Mohammed Albalawi
Sustainability 2025, 17(14), 6490; https://doi.org/10.3390/su17146490 - 16 Jul 2025
Abstract
The authors would like to make the following corrections to the published paper [...] Full article
26 pages, 7975 KiB  
Article
Soil Moisture Prediction Using the VIC Model Coupled with LSTMseq2seq
by Xiuping Zhang, Xiufeng He, Rencai Lin, Xiaohua Xu, Yanping Shi and Zhenning Hu
Remote Sens. 2025, 17(14), 2453; https://doi.org/10.3390/rs17142453 - 15 Jul 2025
Abstract
Soil moisture (SM) is a key variable in agricultural ecosystems and is crucial for drought prevention and control management. However, SM is influenced by underlying surface and meteorological conditions, and it changes rapidly in time and space. To capture the changes in SM and improve the accuracy of short-term and medium-to-long-term predictions on a daily scale, an LSTMseq2seq model driven by both observational data and mechanism models was constructed. This framework combines historical meteorological elements and SM, as well as the SM change characteristics output by the VIC model, to predict SM over a 90-day period. The model was validated using SMAP SM. The proposed model can accurately predict the spatiotemporal variations in SM in Jiangxi Province. Compared with classical machine learning (ML) models, traditional LSTM models, and advanced transformer models, the LSTMseq2seq model achieved R² values of 0.949, 0.9322, 0.8839, 0.8042, and 0.7451 for the prediction of surface SM over 3 days, 7 days, 30 days, 60 days, and 90 days, respectively. The mean absolute error (MAE) ranged from 0.0118 m³/m³ to 0.0285 m³/m³. This study also analyzed the contributions of meteorological features and simulated future SM state changes to SM prediction from two perspectives: time importance and feature importance. The results indicated that meteorological and SM changes within a certain time range prior to the prediction have an impact on SM prediction. The dual-driven LSTMseq2seq model has unique advantages in predicting SM and facilitates the integration of physical mechanism models with data-driven models that can handle input features of different lengths, providing support for daily-scale SM time series prediction and drought dynamics prediction. Full article
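
A bare-bones encoder-decoder (seq2seq) LSTM of the kind described, taking a history window of meteorological and SM features and emitting a 90-step SM forecast, might look like the following; the feature count, window length, and autoregressive decoding scheme are illustrative assumptions.

```python
# Hypothetical LSTM encoder-decoder for multi-step soil-moisture forecasting (PyTorch).
import torch
import torch.nn as nn

class LSTMSeq2Seq(nn.Module):
    def __init__(self, n_features=8, hidden=64, horizon=90):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, history, last_sm):
        # history: (batch, T_in, n_features); last_sm: (batch, 1) most recent SM value
        _, state = self.encoder(history)
        step, outputs = last_sm.unsqueeze(1), []
        for _ in range(self.horizon):                 # autoregressive decoding
            out, state = self.decoder(step, state)
            step = self.head(out)                     # (batch, 1, 1) next SM estimate
            outputs.append(step)
        return torch.cat(outputs, dim=1).squeeze(-1)  # (batch, horizon)

model = LSTMSeq2Seq()
hist = torch.randn(16, 60, 8)                         # 60 days of met + SM features (dummy)
last = torch.rand(16, 1)
forecast = model(hist, last)
print(forecast.shape)                                 # torch.Size([16, 90])
```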

36 pages, 9024 KiB  
Article
Energy Optimal Trajectory Planning for the Morphing Solar-Powered Unmanned Aerial Vehicle Based on Hierarchical Reinforcement Learning
by Tichao Xu, Wenyue Meng and Jian Zhang
Drones 2025, 9(7), 498; https://doi.org/10.3390/drones9070498 - 15 Jul 2025
Abstract
Trajectory planning is crucial for solar aircraft endurance. The multi-wing morphing solar aircraft can enhance solar energy acquisition through wing deflection, which simultaneously incurs aerodynamic losses, complicating energy coupling and challenging existing planning methods in efficiency and long-term optimization. This study presents an energy-optimal trajectory planning method based on Hierarchical Reinforcement Learning for morphing solar-powered Unmanned Aerial Vehicles (UAVs), exemplified by a Λ-shaped aircraft. This method aims to train a hierarchical policy to autonomously track energy peaks. It features a top-level decision policy selecting appropriate bottom-level policies based on energy factors, which generate control commands such as thrust, attitude angles, and wing deflection angles. Shaped properly by reward functions and training conditions, the hierarchical policy can enable the UAV to adapt to changing flight conditions and achieve autonomous flight with energy maximization. Evaluated through 24 h simulation flights on the summer solstice, the results demonstrate that the hierarchical policy can appropriately switch its bottom-level policies during daytime and generate real-time control commands that satisfy optimal energy power requirements. Compared with the minimum energy consumption benchmark case, the proposed hierarchical policy achieved 0.98 h more of full-charge high-altitude cruise duration and 1.92% more remaining battery energy after 24 h, demonstrating superior energy optimization capabilities. In addition, the strong adaptability of the hierarchical policy to different quarterly dates was demonstrated through generalization ability testing. Full article
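
A schematic of the two-level structure described above (a top-level policy that selects a bottom-level policy from energy-related state, each bottom-level policy emitting thrust, attitude, and wing-deflection commands) is sketched below. The policies are left as stubs and the thresholds are assumptions; this only illustrates the hierarchy, not the trained controllers.

```python
# Hypothetical skeleton of the hierarchical policy structure for the morphing solar UAV.
import numpy as np

class BottomPolicy:
    """Stub low-level policy mapping UAV state -> [thrust, pitch, roll, wing_deflection]."""
    def __init__(self, name):
        self.name = name
    def act(self, state):
        return np.zeros(4)                            # placeholder control command

class TopPolicy:
    """Stub high-level policy choosing a bottom-level policy from energy-related features."""
    def __init__(self, bottom_policies):
        self.bottom = bottom_policies
    def select(self, energy_state):
        solar_power, battery_soc = energy_state
        if solar_power > 500 and battery_soc < 1.0:   # thresholds assumed for illustration
            return self.bottom["harvest"]             # e.g. deflect wings toward the sun
        return self.bottom["cruise"]                  # e.g. minimise energy consumption

policies = {"harvest": BottomPolicy("harvest"), "cruise": BottomPolicy("cruise")}
top = TopPolicy(policies)
for t in range(3):                                    # toy control loop
    energy_state = (600 - 200 * t, 0.8)               # (solar power [W], state of charge)
    uav_state = np.zeros(12)                          # placeholder flight state
    command = top.select(energy_state).act(uav_state)
    print(t, command)
```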

21 pages, 1594 KiB  
Article
Implementation of a Conditional Latent Diffusion-Based Generative Model to Synthetically Create Unlabeled Histopathological Images
by Mahfujul Islam Rumman, Naoaki Ono, Kenoki Ohuchida, Ahmad Kamal Nasution, Muhammad Alqaaf, Md. Altaf-Ul-Amin and Shigehiko Kanaya
Bioengineering 2025, 12(7), 764; https://doi.org/10.3390/bioengineering12070764 - 15 Jul 2025
Abstract
Generative image models have revolutionized artificial intelligence by enabling the synthesis of high-quality, realistic images. These models utilize deep learning techniques to learn complex data distributions and generate novel images that closely resemble the training dataset. Recent advancements, particularly in diffusion models, have led to remarkable improvements in image fidelity, diversity, and controllability. In this work, we investigate the application of a conditional latent diffusion model in the healthcare domain. Specifically, we trained a latent diffusion model using unlabeled histopathology images. Initially, these images were embedded into a lower-dimensional latent space using a Vector Quantized Generative Adversarial Network (VQ-GAN). Subsequently, a diffusion process was applied within this latent space, and clustering was performed on the resulting latent features. The clustering results were then used as a conditioning mechanism for the diffusion model, enabling conditional image generation. Finally, we determined the optimal number of clusters using cluster validation metrics and assessed the quality of the synthetic images through quantitative methods. To enhance the interpretability of the synthetic image generation process, expert input was incorporated into the cluster assignments. Full article
(This article belongs to the Section Biosignal Processing)
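The conditioning step described above (clustering the latent features and validating the number of clusters before using the labels to condition generation) might look roughly like this with scikit-learn; the random latent array is a placeholder for the VQ-GAN encoder output, and silhouette score stands in for whichever validation metrics the authors used.

```python
# Hypothetical cluster-validation loop over latent features (used as condition labels).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

latents = np.random.randn(2000, 64)          # placeholder for flattened VQ-GAN latent codes

best_k, best_score = None, -1.0
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(latents)
    score = silhouette_score(latents, labels)
    if score > best_score:
        best_k, best_score = k, score
print(f"selected {best_k} clusters (silhouette={best_score:.3f})")

# These cluster labels would then condition the latent diffusion model's generation.
condition_labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(latents)
```
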
52 pages, 770 KiB  
Systematic Review
Novel Artificial Intelligence Applications in Energy: A Systematic Review
by Tai Zhang and Goran Strbac
Energies 2025, 18(14), 3747; https://doi.org/10.3390/en18143747 - 15 Jul 2025
Abstract
This systematic review examines state-of-the-art artificial intelligence applications in energy systems, assessing their performance, real-world deployments and transformative potential. Guided by PRISMA 2020, we searched Web of Science, IEEE Xplore, ScienceDirect, SpringerLink, and Google Scholar for English-language studies published between January 2015 and January 2025 that reported novel AI uses in energy, empirical results, or significant theoretical advances and passed peer review. After title–abstract screening and full-text assessment, it was determined that 129 of 3000 records met the inclusion criteria. The methodological quality, reproducibility and real-world validation were appraised, and the findings were synthesised narratively around four critical themes: reinforcement learning (35 studies), multi-agent systems (28), planning under uncertainty (25), and AI for resilience (22), with a further 19 studies covering other areas. Notable outcomes include DeepMind-based reinforcement learning cutting data centre cooling energy by 40%, multi-agent control boosting virtual power plant revenue by 28%, AI-enhanced planning slashing the computation time by 87% without sacrificing solution quality, battery management AI raising efficiency by 30%, and machine learning accelerating hydrogen catalyst discovery 200,000-fold. Across domains, AI consistently outperformed traditional techniques. The review is limited by its English-only scope, potential under-representation of proprietary industrial work, and the inevitable lag between rapid AI advances and peer-reviewed publication. Overall, the evidence positions AI as a pivotal enabler of cleaner, more reliable, and efficient energy systems, though progress will depend on data quality, computational resources, legacy system integration, equity considerations, and interdisciplinary collaboration. No formal review protocol was registered because this study is a comprehensive state-of-the-art assessment rather than a clinical intervention analysis. Full article
(This article belongs to the Special Issue Optimization and Machine Learning Approaches for Power Systems)

27 pages, 2260 KiB  
Article
Machine Learning for Industrial Optimization and Predictive Control: A Patent-Based Perspective with a Focus on Taiwan’s High-Tech Manufacturing
by Chien-Chih Wang and Chun-Hua Chien
Processes 2025, 13(7), 2256; https://doi.org/10.3390/pr13072256 - 15 Jul 2025
Abstract
The global trend toward Industry 4.0 has intensified the demand for intelligent, adaptive, and energy-efficient manufacturing systems. Machine learning (ML) has emerged as a crucial enabler of this transformation, particularly in high-mix, high-precision environments. This review examines the integration of machine learning techniques, such as convolutional neural networks (CNNs), reinforcement learning (RL), and federated learning (FL), within Taiwan’s advanced manufacturing sectors, including semiconductor fabrication, smart assembly, and industrial energy optimization. The present study draws on patent data and industrial case studies from leading firms, such as TSMC, Foxconn, and Delta Electronics, to trace the evolution from classical optimization to hybrid, data-driven frameworks. A critical analysis of key challenges is provided, including data heterogeneity, limited model interpretability, and integration with legacy systems. A comprehensive framework is proposed to address these issues, incorporating data-centric learning, explainable artificial intelligence (XAI), and cyber–physical architectures. These components align with industrial standards, including the Reference Architecture Model Industrie 4.0 (RAMI 4.0) and the Industrial Internet Reference Architecture (IIRA). The paper concludes by outlining prospective research directions, with a focus on cross-factory learning, causal inference, and scalable industrial AI deployment. This work provides an in-depth examination of the potential of machine learning to transform manufacturing into a more transparent, resilient, and responsive ecosystem. Additionally, this review highlights Taiwan’s distinctive position in the global high-tech manufacturing landscape and provides an in-depth analysis of patent trends from 2015 to 2025. Notably, this study adopts a patent-centered perspective to capture practical innovation trends and technological maturity specific to Taiwan’s globally competitive high-tech sector. Full article
(This article belongs to the Special Issue Machine Learning for Industrial Optimization and Predictive Control)
