Search Results (7,572)

Search Parameters:
Keywords = operational context

44 pages, 7867 KB  
Article
Bridging AI and Maintenance: Fault Diagnosis in Industrial Air-Cooling Systems Using Deep Learning and Sensor Data
by Ioannis Polymeropoulos, Stavros Bezyrgiannidis, Eleni Vrochidou and George A. Papakostas
Machines 2025, 13(10), 909; https://doi.org/10.3390/machines13100909 - 2 Oct 2025
Abstract
This work aims towards the automatic detection of faults in industrial air-cooling equipment used in a production line for staple fibers and ultimately provides maintenance scheduling recommendations to ensure seamless operation. In this context, various deep learning models are tested to ultimately define the most effective one for the intended scope. In the examined system, four vibration and temperature sensors are used, each positioned radially on the motor body near the rolling bearing of the motor shaft—a typical setup in many industrial environments. Thus, by collecting and using data from the latter sources, this work exhaustively investigates the feasibility of accurately diagnosing faults in staple fiber cooling fans. The dataset is acquired and constructed under real production conditions, including variations in rotational speed, motor load, and three fault priorities, depending on the model detection accuracy, product specification, and maintenance requirements. Fault identification for training purposes involves analyzing and evaluating daily maintenance logs for this equipment. Experimental evaluation on real production data demonstrated that the proposed ResNet50-1D model achieved the highest overall classification accuracy of 97.77%, while effectively resolving the persistent misclassification of the faulty impeller observed in all the other models. Complementary evaluation confirmed its robustness, cross-machine generalization, and suitability for practical deployment, while the integration of predictions with maintenance logs enables a severity-based prioritization strategy that supports actionable maintenance planning.
Keywords: deep learning; fault classification; industrial air-cooling; industrial automation; maintenance scheduling; vibration analysis
Full article
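The listing names the model (ResNet50-1D) and the four-sensor setup but not the architecture itself. As a rough, hypothetical sketch of the core idea — residual blocks of 1-D convolutions over multichannel vibration/temperature sequences, pooled into fault-class probabilities — something like the following (all shapes and weights are illustrative, not taken from the paper):

```python
import numpy as np

def conv1d(x, w, b):
    """Same-padding 1-D cross-correlation: x (c_in, L), w (c_out, c_in, k)."""
    c_out, c_in, k = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    L = x.shape[1]
    out = np.zeros((c_out, L))
    for o in range(c_out):
        for i in range(c_in):
            for t in range(L):
                out[o, t] += xp[i, t:t + k] @ w[o, i]
        out[o] += b[o]
    return out

def residual_block(x, w1, b1, w2, b2):
    """Two convolutions with an identity skip connection and ReLU activations."""
    h = np.maximum(conv1d(x, w1, b1), 0.0)
    h = conv1d(h, w2, b2)
    return np.maximum(h + x, 0.0)   # skip: channel counts must match

def classify(x, blocks, w_fc, b_fc):
    """Stack residual blocks, global-average-pool, linear head, softmax."""
    for w1, b1, w2, b2 in blocks:
        x = residual_block(x, w1, b1, w2, b2)
    pooled = x.mean(axis=1)           # (channels,)
    logits = w_fc @ pooled + b_fc
    e = np.exp(logits - logits.max())
    return e / e.sum()                # fault-class probabilities
```

A real ResNet50-1D would stack many more blocks with batch norm and strided downsampling; this keeps only the residual-convolution skeleton.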
12 pages, 768 KB  
Article
ECG Waveform Segmentation via Dual-Stream Network with Selective Context Fusion
by Yongpeng Niu, Nan Lin, Yuchen Tian, Kaipeng Tang and Baoxiang Liu
Electronics 2025, 14(19), 3925; https://doi.org/10.3390/electronics14193925 - 2 Oct 2025
Abstract
Electrocardiogram (ECG) waveform delineation is fundamental to cardiac disease diagnosis. This task requires precise localization of key fiducial points, specifically the onset, peak, and offset positions of P-waves, QRS complexes, and T-waves. Current methods exhibit significant performance degradation in noisy clinical environments (baseline drift, electromyographic interference, powerline interference, etc.), compromising diagnostic reliability. To address this limitation, we introduce ECG-SCFNet: a novel dual-stream architecture employing selective context fusion. Our framework is further enhanced by a consistency training paradigm, enabling it to maintain robust waveform delineation accuracy under challenging noise conditions. The network employs a dual-stream architecture: (1) A temporal stream captures dynamic rhythmic features through sequential multi-branch convolution and temporal attention mechanisms; (2) A morphology stream combines parallel multi-scale convolution with feature pyramid integration to extract multi-scale waveform structural features through morphological attention; (3) The Selective Context Fusion (SCF) module adaptively integrates features from the temporal and morphology streams using a dual attention mechanism, which operates across both channel and spatial dimensions to selectively emphasize informative features from each stream, thereby enhancing the representation learning for accurate ECG segmentation. On the LUDB and QT datasets, ECG-SCFNet achieves high performance, with F1-scores of 97.83% and 97.80%, respectively. Crucially, it maintains robust performance under challenging noise conditions on these datasets, with F1-scores of 88.49% and 86.25%, showing significantly improved noise robustness over other methods and precise boundary localization for clinical ECG analysis. Full article
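The abstract describes the SCF module as dual attention over channel and spatial dimensions that adaptively mixes the two streams. A minimal numpy sketch of that mixing idea — the gating below is a hypothetical convex combination, not the paper's exact formulation:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def selective_context_fusion(f_temporal, f_morph):
    """Fuse two (channels, length) feature maps with channel + spatial gates."""
    stacked = np.stack([f_temporal, f_morph])       # (2, C, L)
    # channel attention: weight each stream per channel by its mean activation
    chan = softmax(stacked.mean(axis=2), axis=0)    # (2, C)
    # spatial attention: weight each stream per temporal position
    spat = softmax(stacked.mean(axis=1), axis=0)    # (2, L)
    gate = chan[:, :, None] * spat[:, None, :]      # (2, C, L)
    gate = gate / gate.sum(axis=0, keepdims=True)   # renormalize over streams
    return (gate * stacked).sum(axis=0)             # fused (C, L) map
```

Because the gate is non-negative and sums to one across the two streams, every fused value lies between the corresponding temporal-stream and morphology-stream activations.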
23 pages, 5971 KB  
Article
Improved MNet-Atten Electric Vehicle Charging Load Forecasting Based on Composite Decomposition and Evolutionary Predator–Prey and Strategy
by Xiaobin Wei, Qi Jiang, Huaitang Xia and Xianbo Kong
World Electr. Veh. J. 2025, 16(10), 564; https://doi.org/10.3390/wevj16100564 - 2 Oct 2025
Abstract
In the context of low carbon, achieving accurate forecasting of electrical energy is critical for power management with the continuous development of power systems. To improve load forecasting performance, an improved MNet-Atten electric vehicle charging load forecasting model based on composite decomposition and an evolutionary predator–prey strategy is proposed. Through data decomposition theory, each subsequence is processed using complementary ensemble empirical mode decomposition, and high-frequency white noise is filtered out using singular value decomposition based on matrix operations, which improves the anti-interference ability and computational efficiency of the model. In the model construction stage, the MNet-Atten prediction model is developed. The convolution module mines the local dependencies of the sequences, and the long-term and short-term features of the data are extracted through the loop and loop-skip modules to improve the predictability of the data itself. Furthermore, the evolutionary predator–prey strategy iteratively optimizes the learning rate of the MNet-Atten to improve the forecasting performance and convergence speed of the model. The autoregressive module enhances the ability of the neural network to identify linear features, and temporal attention gives more weight to important features to capture global and local linkages. The approach is verified on electric vehicle charging load data from a certain region: the average runtime over 30 runs of the proposed combined model is 117.3231 s, the correlation coefficient (PCC) of the CEEMD-SVD-EPPS-MNet-Atten model is closest to 1, and the model achieves the lowest MAPE and RMSE. The results show that the proposed model better extracts the characteristics of the data, improves modeling efficiency, and achieves high prediction accuracy. Full article
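The CEEMD stage is involved, but the SVD-based noise filtering mentioned above can be illustrated on its own: embed the series in a Hankel (trajectory) matrix, truncate its SVD, and average the anti-diagonals back into a signal. This is a basic SSA-style sketch; the window length and rank below are hypothetical choices, not the paper's:

```python
import numpy as np

def svd_denoise(signal, rank):
    """Low-rank Hankel filtering of a 1-D signal (SSA-style)."""
    n = len(signal)
    L = n // 2                                  # embedding window
    H = np.column_stack([signal[i:i + L] for i in range(n - L + 1)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # keep dominant components
    out = np.zeros(n)
    cnt = np.zeros(n)
    for j in range(Hr.shape[1]):                # anti-diagonal averaging
        out[j:j + L] += Hr[:, j]
        cnt[j:j + L] += 1
    return out / cnt
```

For a noisy sinusoid, a rank-2 truncation recovers most of the clean waveform because a pure sinusoid spans only two Hankel components.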
(This article belongs to the Section Charging Infrastructure and Grid Integration)
15 pages, 1250 KB  
Article
Kinetics of Serum Myoglobin and Creatine Kinase Related to Exercise-Induced Muscle Damage and ACTN3 Polymorphism in Military Paratroopers Under Intense Exercise
by Rachel de S. Augusto, Adrieli Dill, Eliezer Souza, Tatiana L. S. Nogueira, Diego V. Gomes, Jorge Paiva, Marcos Dornelas-Ribeiro and Caleb G. M. Santos
J. Funct. Morphol. Kinesiol. 2025, 10(4), 381; https://doi.org/10.3390/jfmk10040381 - 2 Oct 2025
Abstract
Background: Physical conditioning is essential to meet the operational demands of military environments. However, high-intensity exercise provokes muscle microinjuries resulting in exercise-induced muscle damage. This condition is typically monitored using serum biomarkers such as creatine kinase (CK), myoglobin (MYO), and lactate dehydrogenase (LDH). Nevertheless, individual variability and genetic factors complicate interpretation. In this context, the rs1815739 variant (ACTN3), the most common variant related to exercise phenotypes, could hypothetically interfere with the physiological muscle response. This study aimed to evaluate the kinetics of serum biomarkers during a high-intensity activity and their potential association with the rs1815739 polymorphism. Materials and Methods: Thirty-two male cadets were selected during the Army Paratrooper Course. Serum was obtained at six distinct moments while they performed regular course tests and recovery periods. The Borg scale was assessed at two moments (scores of ~11 and ~17). Results: Serum levels of CK, CK-MB, MYO, and LDH increased significantly after exercise, proportionally to the Borg level, supporting the applicability of longitudinal studies for understanding biomarker levels in response to exercise. R allele carriers (ACTN3) were only slightly associated with greater levels of MYO and CK, mainly in relative kinetic levels, and especially at moments of greater physical demand/recovery. Although ACTN3 was slightly related to different biomarker levels in our investigation, success and health in military activities are multifactorial and do not depend only on interindividual variability or physical capacity. Conclusions: Monitoring biomarkers and multiple genomic regions can generate more efficient exercise-related phenotype interventions. Full article
(This article belongs to the Special Issue Tactical Athlete Health and Performance)
20 pages, 413 KB  
Article
The Effect of Financial Mismatch on Corporate ESG Performance: Evidence from Chinese A-Share Companies
by Xiaoli Li, Wenxin Heng, Hangyu Zeng and Chengyi Xian
Int. J. Financial Stud. 2025, 13(4), 184; https://doi.org/10.3390/ijfs13040184 - 2 Oct 2025
Abstract
This study examines the effect of financial mismatch on corporate ESG performance in the context of China’s developmental strategy and its dual-carbon goals. Using panel data for Chinese A-share firms spanning 2009–2023 and employing fixed-effects regression models, we find that financial mismatch significantly weakens ESG performance. Further analysis reveals that this negative effect mainly operates through three channels: increased financing constraints, weakened internal control quality, and reduced innovation capability. The results remain robust across a series of alternative specifications and sensitivity tests. This study contributes to the literature by identifying financial mismatch as a key determinant of ESG outcomes and by clarifying the mechanisms through which it exerts influence. From a practical perspective, the findings suggest that alleviating financial mismatch by fostering patient capital, improving internal governance structures, and supporting firms’ green and sustainable investments is essential for enhancing corporate ESG performance and achieving China’s dual-carbon targets. Full article
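The fixed-effects regressions referenced above amount to a within (demeaning) estimator: firm-specific means are subtracted so time-invariant firm effects drop out. A minimal sketch on simulated data, where the single regressor stands in for a financial-mismatch measure (all numbers are invented for illustration):

```python
import numpy as np

def within_estimator(y, x, firm_ids):
    """Demean y and x within each firm to absorb firm fixed effects, then OLS."""
    y_d = y.astype(float).copy()
    x_d = x.astype(float).copy()
    for f in np.unique(firm_ids):
        m = firm_ids == f
        y_d[m] -= y_d[m].mean()
        x_d[m] -= x_d[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(x_d, y_d, rcond=None)
    return beta
```

Pooled OLS on the same data is biased when the regressor correlates with the firm effect; the within estimator removes that bias, which is the point of the fixed-effects design.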
19 pages, 7379 KB  
Article
Criterion Circle-Optimized Hybrid Finite Element–Statistical Energy Analysis Modeling with Point Connection Updating for Acoustic Package Design in Electric Vehicles
by Jiahui Li, Ti Wu and Jintao Su
World Electr. Veh. J. 2025, 16(10), 563; https://doi.org/10.3390/wevj16100563 - 2 Oct 2025
Abstract
This research is based on the acoustic package design of new energy vehicles, investigating the application of the hybrid Finite Element–Statistical Energy Analysis (FE-SEA) model in predicting the high-frequency dynamic response of automotive structures, with a focus on the modeling and correction methods for hybrid point connections. New energy vehicles face unique acoustic challenges due to the special nature of their power systems and operating conditions, such as high-frequency noise from electric motors and electronic devices, wind noise, and road noise at low speeds, which directly affect the vehicle’s ride comfort. Therefore, optimizing the acoustic package design of new energy vehicles to reduce in-cabin noise and improve acoustic quality is an important issue in automotive engineering. In this context, this study proposes an improved point connection correction factor by optimizing the division range of the decision circle. The factor corrects the dynamic stiffness of point connections based on wave characteristics, aiming to improve the analysis accuracy of the hybrid FE-SEA model and enhance its ability to model boundary effects. Simulation results show that the proposed method can effectively improve the model’s analysis accuracy, reduce the degrees of freedom in analysis, and increase efficiency, providing important theoretical support and reference for the acoustic package design and NVH performance optimization of new energy vehicles. Full article
13 pages, 579 KB  
Article
The Impact of Socioeconomic Status on Adolescent Moral Reasoning: Exploring a Dual-Pathway Cognitive Model
by Xiaoming Li, Tiwang Cao, Ronghua Hu, Keer Huang and Cheng Guo
Behav. Sci. 2025, 15(10), 1347; https://doi.org/10.3390/bs15101347 - 1 Oct 2025
Abstract
This study examines how objective (OSES) and subjective (SSES) socioeconomic status influence adolescent moral reasoning through distinct psychological mechanisms. Analyzing 4122 Chinese adolescents (Mage = 14.38), we found SSES enhanced moral internalization via strengthened social identity, while OSES reduced moral stereotyping through cognitive flexibility. Contrary to expectations, parental emotional warmth failed to buffer against SSES-related declines in internalization, with higher SSES predicting reduced internalization across parenting contexts. Results reveal socioeconomic status operates through dual pathways—social identity processes for SSES and cognitive flexibility for OSES—while challenging assumptions about parenting’s protective role. The findings suggest tailored interventions: identity-building programs for SSES-related moral development and cognitive training for OSES-linked reasoning biases, advancing theoretical understanding of moral development in diverse socioeconomic contexts. Full article
(This article belongs to the Topic Educational and Health Development of Children and Youths)
19 pages, 2183 KB  
Article
A Hierarchical RNN-LSTM Model for Multi-Class Outage Prediction and Operational Optimization in Microgrids
by Nouman Liaqat, Muhammad Zubair, Aashir Waleed, Muhammad Irfan Abid and Muhammad Shahid
Electricity 2025, 6(4), 55; https://doi.org/10.3390/electricity6040055 - 1 Oct 2025
Abstract
Microgrids are becoming an innovative part of modern energy systems, providing locally sourced, resilient energy and enabling efficient energy sourcing. However, microgrid operations can be greatly affected by sudden environmental changes, fluctuating demand, and unexpected outages. In particular, extreme climatic events expose the vulnerability of microgrid infrastructure and resilience, often increasing the risk of system-wide outages. Successful microgrid operation therefore relies on timely and accurate outage predictions. This research proposes a data-driven machine learning framework for optimized microgrid operation and predictive outage detection using a Recurrent Neural Network–Long Short-Term Memory (RNN-LSTM) architecture with inherent temporal modeling. A time-aware embedding and masking strategy is employed to handle categorical and sparse temporal features, while mutual information-based feature selection ensures only the most relevant and interpretable inputs are retained for prediction. Moreover, the model addresses rapid power fluctuations by learning long-term dependencies in historical and real-time data streams. Two datasets are utilized: a locally developed real-time dataset collected from the 5 MW microgrid of the Maple Cement Factory in Mianwali, and a 15-year national power outage dataset obtained from Kaggle. Both datasets went through intensive preprocessing, normalization, and tokenization to transform raw readings into machine-readable sequences. The proposed approach attained an accuracy of 86.52% on the real-time dataset and 84.19% on the Kaggle dataset, outperforming conventional models in detecting sequential outage patterns. It also achieved a precision of 86%, a recall of 86.20%, and an F1-score of 86.12%, surpassing models such as CNN, XGBoost, SVM, and various static classifiers. In contrast to these traditional approaches, the RNN-LSTM's ability to leverage temporal context makes it a more effective choice for real-time outage prediction and microgrid optimization. Full article
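The LSTM component itself is standard. For reference, the forward pass of a single LSTM cell over a feature sequence in plain numpy — dimensions and weights here are hypothetical; in the paper's setting the final hidden state would feed an outage-classification head:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_forward(xs, W, U, b):
    """Run one LSTM cell over xs (T, d_in); gate order [input, forget, cell, output].

    W: (4*d_h, d_in), U: (4*d_h, d_h), b: (4*d_h,). Returns the final hidden state.
    """
    d_h = U.shape[1]
    h = np.zeros(d_h)
    c = np.zeros(d_h)
    for x in xs:
        z = W @ x + U @ h + b
        zi, zf, zg, zo = np.split(z, 4)
        i, f, o = sigmoid(zi), sigmoid(zf), sigmoid(zo)
        c = f * c + i * np.tanh(zg)   # cell state carries long-term memory
        h = o * np.tanh(c)            # hidden state is the per-step output
    return h
```

The multiplicative forget gate `f` is what lets the cell retain (or discard) information over long horizons, which is why LSTMs suit the long-term dependencies the abstract emphasizes.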
30 pages, 1774 KB  
Review
A Systematic Literature Review on AI-Based Cybersecurity in Nuclear Power Plants
by Marianna Lezzi, Luigi Martino, Ernesto Damiani and Chan Yeob Yeun
J. Cybersecur. Priv. 2025, 5(4), 79; https://doi.org/10.3390/jcp5040079 - 1 Oct 2025
Abstract
Cybersecurity management plays a key role in preserving the operational security of nuclear power plants (NPPs), ensuring service continuity and system resilience. The growing number of sophisticated cyber-attacks against NPPs requires cybersecurity experts to detect, analyze, and defend systems and data from cyber threats in near real time. However, managing a large number of attacks in a timely manner is impossible without the support of Artificial Intelligence (AI). This study recognizes the need for a structured and in-depth analysis of the literature in the context of NPPs, focusing on the role of AI technology in supporting cyber risk assessment processes. For this reason, a systematic literature review (SLR) is adopted to address the following areas of analysis: (i) critical assets to be preserved from cyber-attacks through AI, (ii) security vulnerabilities and cyber threats managed using AI, (iii) cyber risks and business impacts that can be assessed by AI, and (iv) AI-based security countermeasures to mitigate cyber risks. The SLR procedure follows a macro-step approach that includes review planning, search execution and document selection, and document analysis and results reporting, with the aim of providing an overview of the key dimensions of AI-based cybersecurity in NPPs. The structured analysis of the literature allows for the creation of an original tabular outline of emerging evidence (in the fields of critical assets, security vulnerabilities and cyber threats, cyber risks and business impacts, and AI-based security countermeasures) that can help guide AI-based cybersecurity management in NPPs and future research directions. From an academic perspective, this study lays the foundation for understanding and consciously addressing cybersecurity challenges through the support of AI; from a practical perspective, it aims to assist managers, practitioners, and policymakers in making more informed decisions to improve the resilience of digital infrastructure. Full article
(This article belongs to the Section Security Engineering & Applications)
38 pages, 21372 KB  
Article
Machine Learning-Based Dynamic Modeling of Ball Joint Friction for Real-Time Applications
by Kai Pfitzer, Lucas Rath, Sebastian Kolmeder, Burkhard Corves and Günther Prokop
Lubricants 2025, 13(10), 436; https://doi.org/10.3390/lubricants13100436 - 1 Oct 2025
Abstract
Ball joints are components of the vehicle axle, and their friction characteristics must be considered when evaluating vibration behavior and ride comfort in driving simulator-based simulations. To model the three-dimensional friction behavior of ball joints, real-time capability and intuitive parameterization using data from standardized component test benches are essential. These requirements favor phenomenological modeling approaches. This paper applies a spherical, three-dimensional friction model based on the LuGre model, compares it with alternative approaches, and introduces a universal parameter estimation framework using machine learning. Furthermore, the kinematic operating ranges of ball joints are derived from vehicle measurements, and component-level measurements are conducted accordingly. The collected measurement data are used to estimate model parameters through gradient-based optimization for all considered models. The results of the model fitting are presented, and the model characteristics are discussed in the context of their suitability for online simulation in a driving simulator environment. We demonstrate that the proposed parameter estimation framework is capable of learning all the applied models. Moreover, the three-dimensional LuGre-based approach proves to be well suited for capturing the dynamic friction behavior of ball joints in real-time applications. Full article
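The paper's model is a spherical, three-dimensional LuGre variant; the classic one-dimensional LuGre model it builds on can be sketched directly. The parameter values and the explicit-Euler step below are illustrative choices, not the paper's:

```python
import numpy as np

def lugre_force(v, dt, sigma0=1e5, sigma1=300.0, sigma2=0.4,
                Fc=1.0, Fs=1.5, vs=0.01):
    """Integrate the LuGre bristle state z (explicit Euler) for a velocity trace v.

    sigma0/1/2: bristle stiffness, micro-damping, viscous coefficient;
    Fc/Fs: Coulomb and static friction levels; vs: Stribeck velocity.
    """
    z = 0.0
    F = np.empty_like(v, dtype=float)
    for k, vk in enumerate(v):
        g = Fc + (Fs - Fc) * np.exp(-(vk / vs) ** 2)   # Stribeck curve
        zdot = vk - sigma0 * abs(vk) / g * z           # bristle dynamics
        z += dt * zdot
        F[k] = sigma0 * z + sigma1 * zdot + sigma2 * vk
    return F
```

Under constant slow sliding the bristle state settles, and the force converges to the Stribeck level plus the viscous term — the steady-sliding behavior a component test bench would measure.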
(This article belongs to the Special Issue New Horizons in Machine Learning Applications for Tribology)
50 pages, 4498 KB  
Review
Reinforcement Learning for Electric Vehicle Charging Management: Theory and Applications
by Panagiotis Michailidis, Iakovos Michailidis and Elias Kosmatopoulos
Energies 2025, 18(19), 5225; https://doi.org/10.3390/en18195225 - 1 Oct 2025
Abstract
The growing complexity of electric vehicle charging station (EVCS) operations—driven by grid constraints, renewable integration, user variability, and dynamic pricing—has positioned reinforcement learning (RL) as a promising approach for intelligent, scalable, and adaptive control. After outlining the core theoretical foundations, including RL algorithms, agent architectures, and EVCS classifications, this review presents a structured survey of influential research, highlighting how RL has been applied across various charging contexts and control scenarios. This paper categorizes RL methodologies from value-based to actor–critic and hybrid frameworks, and explores their integration with optimization techniques, forecasting models, and multi-agent coordination strategies. By examining key design aspects—including agent structures, training schemes, coordination mechanisms, reward formulation, data usage, and evaluation protocols—this review identifies broader trends across central control dimensions such as scalability, uncertainty management, interpretability, and adaptability. In addition, the review assesses common baselines, performance metrics, and validation settings used in the literature, linking algorithmic developments with real-world deployment needs. By bridging theoretical principles with practical insights, this work provides comprehensive directions for future RL applications in EVCS control, while identifying methodological gaps and opportunities for safer, more efficient, and sustainable operation. Full article
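As a toy instance of the value-based end of the spectrum surveyed above, tabular Q-learning for a single EV choosing when to charge against a known price profile. The environment, reward shaping, and hyperparameters are all invented for illustration; real EVCS controllers face continuous state spaces and uncertainty that push the literature toward the deep and actor–critic methods the review covers:

```python
import numpy as np

def train_charging_policy(prices, capacity, episodes=10000, alpha=0.1,
                          gamma=0.95, eps=0.3, seed=0):
    """Tabular Q-learning: state = (hour, state of charge), action = idle/charge.

    Reward is the negative price when charging, plus a terminal bonus for
    leaving fully charged. Returns the greedy charging plan and final SoC.
    """
    T = len(prices)
    rng = np.random.default_rng(seed)
    Q = np.zeros((T, capacity + 1, 2))
    for _ in range(episodes):
        soc = 0
        for t in range(T):
            a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[t, soc]))
            nxt = min(soc + a, capacity)
            r = -prices[t] * a + (10.0 if t == T - 1 and nxt == capacity else 0.0)
            target = r + (gamma * Q[t + 1, nxt].max() if t < T - 1 else 0.0)
            Q[t, soc, a] += alpha * (target - Q[t, soc, a])
            soc = nxt
    soc, plan = 0, []
    for t in range(T):                     # greedy rollout of the learned policy
        a = int(np.argmax(Q[t, soc]))
        plan.append(a)
        soc = min(soc + a, capacity)
    return plan, soc
```

On a short price profile the learned policy concentrates charging in the cheap hours while still meeting the departure-time state-of-charge target.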
(This article belongs to the Special Issue Advanced Technologies for Electrified Transportation and Robotics)
90 pages, 29362 KB  
Review
AI for Wildfire Management: From Prediction to Detection, Simulation, and Impact Analysis—Bridging Lab Metrics and Real-World Validation
by Nicolas Caron, Hassan N. Noura, Lise Nakache, Christophe Guyeux and Benjamin Aynes
AI 2025, 6(10), 253; https://doi.org/10.3390/ai6100253 - 1 Oct 2025
Abstract
Artificial intelligence (AI) offers several opportunities in wildfire management, particularly for improving short- and long-term fire occurrence forecasting, spread modeling, and decision-making. When properly adapted beyond research into real-world settings, AI can significantly reduce risks to human life, as well as ecological and economic damages. However, despite increasingly sophisticated research, the operational use of AI in wildfire contexts remains limited. In this article, we review the main domains of wildfire management where AI has been applied—susceptibility mapping, prediction, detection, simulation, and impact assessment—and highlight critical limitations that hinder practical adoption. These include challenges with dataset imbalance and accessibility, the inadequacy of commonly used metrics, the choice of prediction formats, and the computational costs of large-scale models, all of which reduce model trustworthiness and applicability. Beyond synthesizing existing work, our survey makes four explicit contributions: (1) we provide a reproducible taxonomy supported by detailed dataset tables, emphasizing both the reliability and shortcomings of frequently used data sources; (2) we propose evaluation guidance tailored to imbalanced and spatial tasks, stressing the importance of using accurate metrics and format; (3) we provide a complete state of the art, highlighting important issues and recommendations to enhance models’ performances and reliability from susceptibility to damage analysis; (4) we introduce a deployment checklist that considers cost, latency, required expertise, and integration with decision-support and optimization systems. By bridging the gap between laboratory-oriented models and real-world validation, our work advances prior reviews and aims to strengthen confidence in AI-driven wildfire management while guiding future research toward operational applicability. Full article
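Contribution (2) above — evaluation guidance for imbalanced spatial tasks — is easy to demonstrate: on a grid where only 1% of cells burn, a model that predicts "no fire" everywhere scores 99% accuracy but zero F1. A minimal illustration (the counts are hypothetical):

```python
def precision_recall_f1(y_true, y_pred):
    """Precision, recall, and F1 for the positive (fire) class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# 1% positive rate: the always-negative baseline looks good on accuracy alone
y_true = [1] * 10 + [0] * 990
y_naive = [0] * 1000
accuracy = sum(t == p for t, p in zip(y_true, y_naive)) / len(y_true)  # 0.99
_, _, f1 = precision_recall_f1(y_true, y_naive)                        # 0.0
```

This is why the review argues for class-aware metrics (F1, precision/recall, spatially aware scores) over raw accuracy when reporting wildfire prediction results.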
18 pages, 2683 KB  
Article
Casa da Arquitectura and the Liminality of Architecture Centers: Archives, Exhibitions, and Curatorial Strategies in the Digital Shift
by Giuseppe Resta and Fabiana Dicuonzo
Arts 2025, 14(5), 120; https://doi.org/10.3390/arts14050120 - 1 Oct 2025
Abstract
This study explores the evolving role of architecture centers in the digital age by analyzing the case of Casa da Arquitectura (CdA) in Porto, Portugal, a hybrid institution that functions as both archive and museum. Positioned within the broader context of museum digitization and liminality theory, the research investigates how CdA navigates the spatial, social, and procedural shifts inherent in digital transformation. Drawing on qualitative methods, including in-depth interviews with key personnel and on-site observations, the study examines the institution’s strategies in acquisition, curation, and exhibition design. The findings highlight CdA’s innovative approach to archival visibility, the creation of a multipurpose digital platform (“edifício digital”), and the integration of archival acquisitions with exhibition practices. These practices illustrate a condition of triple liminality of the digital museum concerning its process, position, and place. The study also discusses how digitization reconfigures the museum’s organizational model in terms of accessibility and curatorial complexity. By analyzing CdA’s operational and curatorial choices, the paper discusses how digital museums can act as speculative, process-oriented spaces that challenge traditional boundaries between archive and exhibition, physical and virtual, institutional and public. Full article
(This article belongs to the Special Issue The Role of Museums in the Digital Age)
12 pages, 1857 KB  
Communication
Personal KPIs in IVF Laboratory: Are They Measurable or Distortable? A Case Study Using AI-Based Benchmarking
by Péter Mauchart, Emese Wágner, Krisztina Gödöny, Kálmán Kovács, Sándor Péntek, Andrea Barabás, József Bódis and Ákos Várnagy
J. Clin. Med. 2025, 14(19), 6948; https://doi.org/10.3390/jcm14196948 - 1 Oct 2025
Abstract
Background: Key performance indicators (KPIs) are widely used to evaluate embryologist performance in IVF laboratories, yet they are sensitive to patient demographics, treatment indications, and case allocation. Artificial intelligence (AI) offers opportunities to benchmark personal KPIs against context-aware expectations. This study evaluated whether personal KPIs based on the clinical pregnancy rate (CPR) are measurable or distorted when compared with AI-derived predictions. Methods: We retrospectively analyzed 474 ICSI-only cycles performed by a single senior embryologist between 2022 and 2024. A Random Forest model trained on 1294 institutional cycles generated AI-predicted CPRs. Observed and predicted CPRs were compared across age groups, BMI categories, and physicians using cycle-level paired comparisons and a grouped calibration statistic. Results: Overall CPRs were similar between observed and predicted outcomes (0.31 vs. 0.33, p = 0.412). Age-stratified analysis showed a significant discrepancy in the >40 group (0.11 vs. 0.18, p = 0.003), whereas the CPR in the 35–40 group exceeded predictions (0.39 vs. 0.33, p = 0.018). BMI groups showed no miscalibration (p = 0.458). Physician-level comparisons suggested variability (p = 0.021), while grouped calibration was not statistically significant (p = 0.073). Conclusions: Personal embryologist KPIs are measurable but influenced by patient and physician factors. AI benchmarking may improve fairness by adjusting for case mix, yet systematic bias can persist in high-risk subgroups. Multi-operator, multi-center validation is needed to confirm generalizability. Full article
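The case-mix benchmarking idea described above can be sketched as follows. The cycles, group labels, and predicted probabilities are hypothetical, and this illustrates only the core comparison of observed CPR against mean AI-predicted CPR per subgroup, not the authors' pipeline or statistics.

```python
# Each tuple: (age_group, observed_pregnancy 0/1, model_predicted_probability).
cycles = [
    ("<35", 1, 0.42), ("<35", 0, 0.38), ("<35", 1, 0.40),
    ("35-40", 1, 0.33), ("35-40", 0, 0.30), ("35-40", 1, 0.35),
    (">40", 0, 0.18), (">40", 0, 0.17), (">40", 1, 0.16),
]

def benchmark(cycles):
    """Per group, compare the observed pregnancy rate with the
    case-mix-adjusted expectation (mean predicted probability)."""
    groups = {}
    for grp, outcome, pred in cycles:
        groups.setdefault(grp, []).append((outcome, pred))
    report = {}
    for grp, rows in groups.items():
        observed = sum(o for o, _ in rows) / len(rows)
        expected = sum(p for _, p in rows) / len(rows)
        report[grp] = (round(observed, 2), round(expected, 2))
    return report

print(benchmark(cycles))
```

A gap between the two numbers in a subgroup (as in the >40 stratum of the study) flags possible miscalibration rather than a raw performance difference.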
(This article belongs to the Section Reproductive Medicine & Andrology)
32 pages, 2914 KB  
Article
Grid Search and Genetic Algorithm Optimization of Neural Networks for Automotive Radar Object Classification
by Atila Gabriel Ham, Corina Nafornita, Vladimir Cristian Vesa, George Copacean, Voislava Denisa Davidovici and Ioan Nafornita
Sensors 2025, 25(19), 6017; https://doi.org/10.3390/s25196017 - 30 Sep 2025
Abstract
This paper proposes and evaluates two neural network-based approaches for object classification in automotive radar systems, comparing the performance impact of grid search and genetic algorithm (GA) hyperparameter optimization strategies. The task involves classifying cars, pedestrians, and cyclists using radar-derived features. The grid search–optimized model employs a compact architecture with two hidden layers and 10 neurons per layer, leveraging kinematic correlations and motion descriptors to achieve mean accuracies of 90.06% (validation) and 90.00% (test). In contrast, the GA-optimized model adopts a deeper architecture with nine hidden layers and 30 neurons per layer, integrating an expanded feature set that includes object dimensions, signal-to-noise ratio (SNR), radar cross-section (RCS), and Kalman filter–based motion descriptors, resulting in substantially higher performance at approximately 97.40% mean accuracy on both validation and test datasets. Principal Component Analysis (PCA) and SHapley Additive exPlanations (SHAP) highlight the enhanced discriminative power of the new set of features, while parallelized GA execution enables efficient exploration of a broader hyperparameter space. Although currently optimized for urban traffic scenarios, the proposed approach can be extended to highway and extra-urban environments through targeted dataset expansion and the development of additional features that are less sensitive to object kinematics, thereby improving robustness across diverse motion patterns and operational contexts. Full article
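The two search strategies this paper compares can be contrasted on a toy objective standing in for validation accuracy; the score surface, parameter ranges, and GA settings below are illustrative assumptions, not the paper's configuration.

```python
import itertools
import random

def score(layers, neurons):
    # Toy stand-in for validation accuracy; by construction it peaks at (9, 30).
    return 1.0 - abs(layers - 9) * 0.03 - abs(neurons - 30) * 0.005

# Grid search: exhaustive evaluation over a small, coarse grid.
grid_best = max(itertools.product(range(1, 11), range(10, 41, 10)),
                key=lambda cfg: score(*cfg))

# Genetic algorithm: population-based search over the same space,
# with truncation selection (elitist), one-point crossover, and mutation.
random.seed(0)
pop = [(random.randint(1, 10), random.randint(10, 40)) for _ in range(20)]
for _ in range(30):
    pop.sort(key=lambda cfg: score(*cfg), reverse=True)
    parents = pop[:10]                      # keep the fittest half
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = (a[0], b[1])                # crossover: layers from a, neurons from b
        if random.random() < 0.3:           # mutation with clamping to the ranges
            child = (max(1, min(10, child[0] + random.choice((-1, 1)))),
                     max(10, min(40, child[1] + random.choice((-5, 5)))))
        children.append(child)
    pop = parents + children
ga_best = max(pop, key=lambda cfg: score(*cfg))

print(grid_best, ga_best)
```

The grid is limited to the configurations enumerated up front, whereas the GA can wander off the coarse grid; this mirrors, in miniature, why the paper's GA could explore a broader hyperparameter space in parallel.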