Search Results (64,838)

Search Parameters:
Keywords = real-time

19 pages, 1790 KB  
Article
High-Precision Mapping and Real-Time Localization for Agricultural Machinery Sheds and Farm Access Roads Environments
by Yang Yu, Zengyao Li, Buwang Dai, Jiahui Pan and Lizhang Xu
Agriculture 2025, 15(21), 2248; https://doi.org/10.3390/agriculture15212248 - 28 Oct 2025
Abstract
To address the issues of signal loss and insufficient accuracy of traditional GNSS (Global Navigation Satellite System) navigation in agricultural machinery sheds and farm access road environments, this paper proposes a high-precision mapping method for such complex environments and a real-time localization system for agricultural vehicles. First, an autonomous navigation system was developed by integrating multi-sensor data from LiDAR (Light Detection and Ranging), GNSS, and IMU (Inertial Measurement Unit), with functional modules for mapping, localization, planning, and control implemented within the ROS (Robot Operating System) framework. Second, an improved LeGO-LOAM algorithm is introduced for constructing maps of machinery sheds and farm access roads. The mapping accuracy is enhanced through reflectivity filtering, ground constraint optimization, and ScanContext-based loop closure detection. Finally, a localization method combining NDT (Normal Distribution Transform), IMU, and a UKF (Unscented Kalman Filter) is proposed for tracked grain transport vehicles. The UKF uses IMU measurements to predict the vehicle state, while the NDT algorithm provides pose estimates for the state update, yielding a fused and more accurate pose estimate. Experimental results demonstrate that the proposed mapping method reduces APE (absolute pose error) by 79.99% and 49.04% in the machinery sheds and farm access roads environments, respectively, indicating a significant improvement over conventional methods. The real-time localization module achieves an average processing time of 26.49 ms with an average error of 3.97 cm, enhancing localization accuracy without compromising output frequency. This study provides technical support for fully autonomous operation of agricultural machinery. Full article
(This article belongs to the Topic Digital Agriculture, Smart Farming and Crop Monitoring)
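The fusion scheme described in this abstract (IMU-driven prediction, NDT pose measurements fused in a UKF) can be illustrated with a heavily simplified sketch. The code below substitutes a scalar linear Kalman filter for the paper's full UKF, and all noise values, rates, and measurements are invented for illustration; it shows only the predict-with-IMU / update-with-NDT pattern, not the paper's implementation.

```python
# Simplified 1-D stand-in for the paper's UKF fusion: an IMU velocity
# integrates the predicted position, and each NDT pose estimate corrects it.
# All noise values and data below are illustrative, not from the paper.

def kf_predict(x, P, v, dt, q):
    """Predict step: propagate position with IMU velocity v."""
    return x + v * dt, P + q

def kf_update(x, P, z, r):
    """Update step: fuse an NDT pose measurement z."""
    K = P / (P + r)              # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

x, P = 0.0, 1.0                  # initial position and covariance
q, r = 0.01, 0.05                # process / measurement noise (assumed)
for v, z in [(1.0, 0.11), (1.0, 0.21), (1.0, 0.30)]:
    x, P = kf_predict(x, P, v, dt=0.1, q=q)
    x, P = kf_update(x, P, z, r=r)

print(round(x, 3))               # fused position after three steps
```

The actual system replaces the scalar state with a full 6-DoF pose and uses sigma points (UKF) to handle the nonlinear motion and measurement models.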
19 pages, 1197 KB  
Article
Quaternion CNN in Deep Learning Processing for EEG with Applications to Brain Disease Detection
by Gerardo Ortega-Flores, Guillermo Altamirano-Escobedo, Diego Mercado-Ravell and Eduardo Bayro-Corrochano
Appl. Sci. 2025, 15(21), 11526; https://doi.org/10.3390/app152111526 - 28 Oct 2025
Abstract
Despite the popularity of electroencephalograms (EEGs) as tools for assessing brain health, they can sometimes be abstract and prone to noise, making them difficult to interpret. The following work aims to implement a Quaternion Convolutional Neural Network (QCNN) to detect abnormal EEGs obtained from a database that includes both people with excellent mental health and individuals with different types of mental illnesses. Unlike other approaches in which the QCNN is used exclusively for image processing, in the present work, a unique architecture with mainly quaternionic layers is proposed, specifically designed for the classification of time-varying signals. Using the database “The TUH EEG Abnormal Corpus”, the signals are preprocessed using the Wavelet Transform, a mathematical tool capable of performing simultaneous time and frequency analysis, configured with a level 4 decomposition value. Subsequently, the results are subjected to a partial spectrogram-type treatment to integrate the energy parameter into the analysis. They are then conditioned in each of the elements of the quaternion and processed by the QCNN, leveraging quaternion algebra to maintain the relationships between its elements, both in the input and in the convolutional product. In this way, it is possible to obtain significant percentages in the precision, recall, and accuracy metrics with values higher than 77%. Its performance, which uses 4 times less computational memory, allows the QCNN to be considered an alternative for classifying EEG signals. 
Finally, the proposed model was compared with other architectures commonly used in the literature, with developments in other research, and with a hybrid model whose performance places it at the highest classification standard. Beyond this, the QCNN preserves multi-channel dependencies in EEG signals in a more natural way, achieving parameter efficiency by leveraging quaternion algebra and reducing the computational cost compared to real-valued CNNs. Full article
(This article belongs to the Special Issue Mechatronic Systems Design and Optimization)
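The level-4 wavelet preprocessing this abstract describes can be sketched with the simplest wavelet, the Haar basis; the abstract does not state which wavelet family the authors use, so the choice of Haar and the toy signal below are assumptions for illustration only.

```python
# Level-4 discrete wavelet decomposition of a 1-D signal using the Haar
# wavelet (chosen for simplicity; the paper's wavelet family is not stated).
import math

def haar_step(signal):
    """One Haar DWT level: pairwise averages (approximation) and
    differences (detail), scaled to preserve energy."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def wavedec(signal, level):
    """Repeatedly decompose the approximation, as in a level-`level` DWT."""
    coeffs = []
    approx = list(signal)
    for _ in range(level):
        approx, detail = haar_step(approx)
        coeffs.append(detail)
    coeffs.append(approx)
    return coeffs

eeg = [math.sin(0.4 * n) for n in range(32)]   # toy stand-in for one EEG channel
coeffs = wavedec(eeg, level=4)
print([len(c) for c in coeffs])                # detail lengths halve at each level
```

Because the Haar basis is orthonormal, the coefficients carry the same total energy as the input, which is the property the abstract's spectrogram-style energy treatment builds on.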
27 pages, 7961 KB  
Review
Marine-Inspired Multimodal Sensor Fusion and Neuromorphic Processing for Autonomous Navigation in Unstructured Subaquatic Environments
by Chandan Sheikder, Weimin Zhang, Xiaopeng Chen, Fangxing Li, Yichang Liu, Zhengqing Zuo, Xiaohai He and Xinyan Tan
Sensors 2025, 25(21), 6627; https://doi.org/10.3390/s25216627 - 28 Oct 2025
Abstract
Autonomous navigation in GPS-denied, unstructured environments such as murky waters or complex seabeds remains a formidable challenge for robotic systems, primarily due to sensory degradation and the computational inefficiency of conventional algorithms. Drawing inspiration from the robust navigation strategies of marine species (such as the sea turtle’s quantum-assisted magnetoreception, the octopus’s tactile-chemotactic integration, and the jellyfish’s energy-efficient flow sensing), this study introduces a novel neuromorphic framework for resilient robotic navigation, fundamentally based on the co-design of marine-inspired sensors and event-based neuromorphic processors. Current systems lack the dynamic, context-aware multisensory fusion observed in these animals, leading to heightened susceptibility to sensor failures and environmental perturbations, as well as high power consumption. This work directly bridges this gap. Our primary contribution is a hybrid sensor fusion model that co-designs advanced sensing (replicating the distributed neural processing of cephalopods and the quantum coherence mechanisms of migratory marine fauna) with a neuromorphic processing backbone, enabling real-time, energy-efficient path integration and cognitive mapping without reliance on traditional methods. The proposed framework has the potential to significantly enhance navigational robustness by overcoming the limitations of state-of-the-art solutions. The findings suggest the potential of marine bio-inspired design for advancing autonomous systems in critical applications such as deep-sea exploration, environmental monitoring, and underwater infrastructure inspection. Full article
26 pages, 1559 KB  
Review
AI-Based Modeling and Optimization of AC/DC Power Systems
by Izabela Rojek, Dariusz Mikołajewski, Piotr Prokopowicz and Maciej Piechowiak
Energies 2025, 18(21), 5660; https://doi.org/10.3390/en18215660 - 28 Oct 2025
Abstract
This review examined the latest advances in the modeling, analysis, and control of AC/DC power systems based on artificial intelligence (AI), in which renewable energy sources play a significant role. Integrating variable and intermittent renewable energy sources (such as sunlight and wind power) poses a major challenge in maintaining system stability, reliability, and optimal system performance. Traditional modeling and control methods are increasingly inadequate to capture the complex, nonlinear, and dynamic behavior of modern hybrid AC/DC systems. Specialized AI techniques, such as machine learning (ML), deep learning (DL), and hybrid models, have become important tools to meet these challenges. This article presents a comprehensive overview of AI-based methodologies for system identification, fault diagnosis, predictive control, and real-time optimization. Particular attention is paid to the role of AI in increasing grid resilience, implementing adaptive control strategies, and supporting decision-making under uncertainty. The review also highlights key breakthroughs in AI algorithms, including federated learning and physics-based neural networks, which offer scalable and interpretable solutions. Furthermore, the article examines current limitations and open research problems related to data quality, computational requirements, and model generalizability. Case studies of smart grids and comparative scenarios demonstrate the practical effectiveness of AI-based approaches in real-world energy system applications. Finally, it proposes future directions to narrow the gap between AI research and industrial application in next-generation smart grids. Full article
19 pages, 1591 KB  
Article
Adaptive PPO-RND Optimization Within Prescribed Performance Control for High-Precision Motion Platforms
by Yimin Wang, Jingchong Xu, Kaina Gao, Junjie Wang, Shi Bu, Bin Liu and Jianping Xing
Mathematics 2025, 13(21), 3439; https://doi.org/10.3390/math13213439 - 28 Oct 2025
Abstract
The continuous reduction in critical dimensions and the escalating demands for higher throughput are driving motion platforms to operate under increasingly complex conditions, including multi-axis coupling, structural nonlinearities, and time-varying operational scenarios. These complexities make the trade-offs among precision, speed, and robustness increasingly challenging. Traditional Proportional–Integral–Derivative (PID) controllers, which rely on empirical tuning methods, suffer from prolonged trial-and-error cycles and limited transferability, and consequently struggle to maintain optimal performance under these complex working conditions. This paper proposes an adaptive β-Proximal Policy Optimization with Random Network Distillation (β-PPO-RND) parameter-optimization scheme within the Prescribed Performance Control (PPC) framework. The adaptive coefficient β is updated based on the temporal change in the reward difference, which is clipped and smoothly mapped to a preset range using a hyperbolic tangent function. This mechanism dynamically balances intrinsic and extrinsic rewards—encouraging broader exploration in the early stage and emphasizing performance optimization in the later stage. Experimental validation on a Permanent Magnet Linear Synchronous Motor (PMLSM) platform confirms the effectiveness of the proposed approach. It eliminates the need for manual tuning and enables real-time controller parameter adjustment within the PPC framework, achieving high-precision trajectory tracking and a significant reduction in steady-state error. Experimental results show that the proposed method achieves MAE = 0.135 and RMSE = 0.154, representing approximately 70% reductions compared to the conventional PID controller. Full article
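The β update this abstract describes (clip the change in reward, map it through tanh onto a preset range) can be sketched directly. The clip bound, gain, and [beta_min, beta_max] range below are assumed values, not taken from the paper, and how β then weights intrinsic versus extrinsic reward is left to the full algorithm.

```python
# Sketch of the adaptive-β update described above: the change in reward is
# clipped, passed through tanh, and mapped into a preset [beta_min, beta_max]
# range. The clip bound, gain, and range values are assumed for illustration.
import math

def update_beta(reward_prev, reward_curr, beta_min=0.1, beta_max=1.0,
                clip=5.0, gain=0.5):
    delta = max(-clip, min(clip, reward_curr - reward_prev))  # clip the change
    s = math.tanh(gain * delta)          # smooth map to (-1, 1)
    # Rescale (-1, 1) onto [beta_min, beta_max]; larger reward improvements
    # push beta toward beta_max.
    return beta_min + (beta_max - beta_min) * (s + 1.0) / 2.0

betas = [update_beta(r0, r1) for r0, r1 in [(0.0, 3.0), (3.0, 3.1), (3.1, 2.0)]]
print([round(b, 3) for b in betas])      # stays inside [0.1, 1.0] by construction
```

The tanh keeps the update bounded and smooth even when the clipped reward difference sits at the clip boundary, which is what lets β shift gradually from exploration-heavy to performance-heavy weighting.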
28 pages, 2030 KB  
Article
Self-Adaptable Computation Offloading Strategy for UAV-Assisted Edge Computing
by Yanting Wang, Yuhang Zhang, Zhuo Qian, Yubo Zhao and Han Zhang
Drones 2025, 9(11), 748; https://doi.org/10.3390/drones9110748 - 28 Oct 2025
Abstract
Unmanned Aerial Vehicle-assisted Edge Computing (UAV-EC) leverages UAVs as aerial edge servers to provide computation resources to user equipment (UE) in dynamically changing environments. A critical challenge in UAV-EC lies in making real-time adaptive offloading decisions that determine whether and how UE should offload tasks to UAVs. This problem is typically formulated as Mixed-Integer Nonlinear Programming (MINLP). However, most existing offloading methods sacrifice strategy timeliness, leading to significant performance degradation in UAV-EC systems, especially under varying wireless channel quality and unpredictable UAV mobility. In this paper, we propose a novel framework that enhances offloading strategy timeliness in such dynamic settings. Specifically, we jointly optimize offloading decisions, the transmit power of UEs, and computation resource allocation to maximize a system utility encompassing both latency reduction and energy conservation. To tackle this combinatorial optimization problem and obtain a real-time strategy, we design a Quality of Experience (QoE)-aware Online Offloading (QO2) algorithm that optimally adapts offloading decisions and resource allocations to time-varying wireless channel conditions. Instead of directly solving the MINLP via traditional methods, the QO2 algorithm utilizes a deep neural network to learn binary offloading decisions from experience, greatly improving strategy timeliness. This learning-based operation inherently enhances the robustness of the QO2 algorithm. To further strengthen robustness, we design a Priority-Based Proportional Sampling (PPS) strategy that leverages historical optimization patterns. Extensive simulation results demonstrate that QO2 outperforms state-of-the-art baselines in solution quality, consistently achieving near-optimal solutions. More importantly, it exhibits strong adaptability to dynamic network conditions. These characteristics make QO2 a promising solution for dynamic UAV-EC systems. Full article
(This article belongs to the Section Drone Communications)
24 pages, 2177 KB  
Article
Translating the Nearest Convex Hull Classifier from Classical to Quantum Computing
by Grégoire Cattan, Anton Andreev and Quentin Barthélemy
Quantum Rep. 2025, 7(4), 51; https://doi.org/10.3390/quantum7040051 - 28 Oct 2025
Abstract
The nearest convex hull (NCH) classifier is a promising algorithm for the classification of biosignals, such as electroencephalography (EEG) signals, especially when adapted to the classification of symmetric positive definite matrices. In this paper, we implemented a version of this classifier that can execute either on a traditional computer or a quantum simulator, and we tested it against state-of-the-art classifiers for EEG classification. This article addresses the practical challenges of adapting a classical algorithm to one that can be executed on a quantum computer or a quantum simulator. One of these challenges is to find a formulation of the classification problem that is quadratic, is binary, and accepts only linear constraints—that is, an objective function that can be solved using a variational quantum algorithm. In this article, we present two approaches to solve this problem, both compatible with continuous variables. Finally, we evaluated, for the first time, the performance of the NCH classifier on real EEG data using both quantum and classical optimization methods. We selected a particularly challenging dataset, where classical optimization typically performs poorly, and demonstrated that the nearest convex hull classifier was able to generalize with a modest performance. One lesson from this case study is that, by separating the objective function from the solver, it becomes possible to allow an existing classical algorithm to run on a quantum computer, as long as an appropriate objective function—quadratic and binary—can be found. Full article
41 pages, 2249 KB  
Review
Research Status and Development Trends of Artificial Intelligence in Smart Agriculture
by Chuang Ge, Guangjian Zhang, Yijie Wang, Dandan Shao, Xiangjin Song and Zhaowei Wang
Agriculture 2025, 15(21), 2247; https://doi.org/10.3390/agriculture15212247 - 28 Oct 2025
Abstract
Artificial Intelligence (AI) is a key technological enabler for the transition of agricultural production and management from experience-driven to data-driven, continuously advancing modern agriculture toward smart agriculture. This evolution ultimately aims to achieve a precise agricultural production model characterized by low resource consumption, high safety, high quality, high yield, and stable, sustainable development. Although machine learning, deep learning, computer vision, Internet of Things, and other AI technologies have made significant progress in numerous agricultural production applications, most studies focus on singular agricultural scenarios or specific AI algorithm research, such as object detection, navigation, agricultural machinery maintenance, and food safety, resulting in relatively limited coverage. To comprehensively elucidate the applications of AI in agriculture and provide a valuable reference for practitioners and policymakers, this paper reviews relevant research by investigating the entire agricultural production process—including planting, management, and harvesting—covering application scenarios such as seed selection during the cultivation phase, pest and disease identification and intelligent management during the growth phase, and agricultural product grading during the harvest phase, as well as agricultural machinery and devices like fault diagnosis and predictive maintenance of agricultural equipment, agricultural robots, and the agricultural Internet of Things. It first analyzes the fundamental principles and potential advantages of typical AI technologies, followed by a systematic and in-depth review of the latest progress in applying these core technologies to smart agriculture. 
The challenges faced by existing technologies are also explored, such as the inherent limitations of AI models—including poor generalization capability, low interpretability, and insufficient real-time performance—as well as the complex agricultural operating environments that result in multi-source, heterogeneous, and low-quality, unevenly annotated data. Furthermore, future research directions are discussed, such as lightweight network models, transfer learning, embodied intelligent agricultural robots, multimodal perception technologies, and large language models for agriculture. The aim is to provide meaningful insights for both theoretical research and practical applications of AI technologies in agriculture. Full article
(This article belongs to the Special Issue Perception, Decision-Making, and Control of Agricultural Robots)
25 pages, 1619 KB  
Review
Artificial Intelligence in Postmenopausal Health: From Risk Prediction to Holistic Care
by Gianeshwaree Alias Rachna Panjwani, Srivarshini Maddukuri, Rabiah Aslam Ansari, Samiksha Jain, Manisha Chavan, Naga Sai Akhil Reddy Gogula, Gayathri Yerrapragada, Poonguzhali Elangovan, Mohammed Naveed Shariff, Thangeswaran Natarajan, Jayarajasekaran Janarthanan, Shiva Sankari Karrupiah, Keerthy Gopalakrishnan, Divyanshi Sood and Shivaram P. Arunachalam
J. Clin. Med. 2025, 14(21), 7651; https://doi.org/10.3390/jcm14217651 - 28 Oct 2025
Abstract
Background/Objectives: Menopause, marked by permanent cessation of menstruation, is a universal transition associated with vasomotor, genitourinary, psychological, and metabolic changes. These conditions significantly affect health-related quality of life (HRQoL) and increase the risk of chronic diseases. Despite their impact, timely diagnosis and individualized management are often limited by delayed care, fragmented health systems, and cultural barriers. Methods: This review summarizes current applications of artificial intelligence (AI) in postmenopausal health, focusing on risk prediction, early detection, and personalized treatment. Evidence was compiled from studies using biomarkers, imaging, wearable sensors, electronic health records, natural language processing, and digital health platforms. Results: AI enhances disease prediction and diagnosis, including improved accuracy in breast cancer and osteoporosis screening through imaging analysis, and cardiovascular risk stratification via machine learning models. Wearable devices and natural language processing enable real-time monitoring of underreported symptoms such as hot flushes and mood disorders. Digital technologies further support individualized interventions, including lifestyle modification and optimized medication regimens. By improving access to telemedicine and reducing bias, AI also has the potential to narrow healthcare disparities. Conclusions: AI can transform postmenopausal care from reactive to proactive, offering personalized strategies that improve outcomes and quality of life. However, challenges remain, including algorithmic bias, data privacy, and clinical implementation. Ethical frameworks and interdisciplinary collaboration among clinicians, data scientists, and policymakers are essential for safe and equitable adoption. Full article
18 pages, 2645 KB  
Article
Advancing YOLOv8-Based Wafer Notch-Angle Detection Using Oriented Bounding Boxes, Hyperparameter Tuning, Architecture Refinement, and Transfer Learning
by Eun Seok Jun, Hyo Jun Sim and Seung Jae Moon
Appl. Sci. 2025, 15(21), 11507; https://doi.org/10.3390/app152111507 - 28 Oct 2025
Abstract
Accurate angular alignment of wafers is essential in ion implantation to prevent channeling effects that degrade device performance. This study proposes a real-time notch-angle-detection system based on you only look once version 8 with oriented bounding boxes (YOLOv8-OBB). The proposed method compares YOLOv8 and YOLOv8-OBB, demonstrating the superiority of the latter in accurately capturing rotational features. To enhance detection performance, hyperparameters—including the initial learning rate (Lr0), weight decay, and optimizer—are optimized using a one-factor-at-a-time (OFAT) approach followed by grid search. Architectural improvements, including spatial pyramid pooling fast with large selective kernel attention (SPPF_LSKA), a bidirectional feature pyramid network (BiFPN), and a high-resolution detection head (P2 head), are incorporated to improve small-object detection. Furthermore, a gradual unfreezing strategy is employed to support more effective and stable transfer learning. The final system is evaluated over 100 training epochs and tracked up to 5000 epochs to verify long-term stability. Compared to baseline models, it achieves higher accuracy and robustness in angle-sensitive scenarios, offering a reliable and scalable solution for high-precision wafer-notch detection in semiconductor manufacturing. Full article
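The two-stage tuning procedure this abstract describes (OFAT first, then grid search around the OFAT winner) can be sketched with a toy objective. The `score` function and parameter grids below are illustrative stand-ins for full training runs; none of the values come from the paper.

```python
# Sketch of OFAT followed by grid search, as in the tuning procedure above.
# The toy `score` function and parameter grids are illustrative stand-ins
# for training runs over the initial learning rate (lr0) and weight decay (wd).
import itertools

def score(lr0, wd):
    """Toy validation metric peaking near lr0=0.01, wd=5e-4 (assumed)."""
    return 1.0 - abs(lr0 - 0.01) * 10 - abs(wd - 5e-4) * 100

grids = {"lr0": [0.1, 0.01, 0.001], "wd": [5e-3, 5e-4, 5e-5]}
best = {"lr0": 0.1, "wd": 5e-3}                 # starting defaults

# OFAT: vary one factor at a time, holding the others at their current best.
for name, values in grids.items():
    best[name] = max(values, key=lambda v: score(**{**best, name: v}))

# Grid search: refine jointly over a small neighborhood of the OFAT winner.
refined = {"lr0": [best["lr0"] * f for f in (0.5, 1.0, 2.0)],
           "wd":  [best["wd"] * f for f in (0.5, 1.0, 2.0)]}
best_pair = max(itertools.product(refined["lr0"], refined["wd"]),
                key=lambda p: score(*p))
print(best, best_pair)
```

OFAT cheaply narrows each factor independently; the follow-up grid search then catches interactions between factors that OFAT alone would miss.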
35 pages, 1798 KB  
Article
Sustainable Optimal Capacity Allocation for Grid-Connected Microgrids Incorporating Carbon Capture and Storage Retrofitting in Multi-Market Contexts: A Case Study in Southern China
by Yanbin Xu, Jiaxin Ma, Yi Liao, Shifang Kuang, Shasha Luo and Ming Zeng
Sustainability 2025, 17(21), 9588; https://doi.org/10.3390/su17219588 - 28 Oct 2025
Abstract
With the goal of achieving carbon neutrality, promoting the clean and low-carbon transformation of energy assets, as exemplified by existing thermal power units, has emerged as a pivotal challenge in addressing climate change and achieving sustainable development. Arrangements and technologies such as the electricity–carbon–certificate multi-market, microgrids with direct green power connections, and carbon capture and storage (CCS) retrofitting provide favorable conditions for facing the aforementioned challenge. Based on an analysis of how liquid-storage CCS retrofitting affects the flexibility of thermal power units, this manuscript proposes a bi-level optimization model and solution method for capacity allocation for grid-connected microgrids, while considering CCS retrofits under multi-markets. This approach overcomes two key deficiencies in the existing research: first, neglecting the relationship between electricity–carbon coupling characteristics and unit flexibility and its potential impacts, and second, the significant deviation of scenarios constructed from real policy and market environments, which limits its ability to provide timely and relevant references. A case study in southern China demonstrates that first, multi-market implementation significantly boosts microgrids’ investment in and absolute consumption of renewable energy. However, its effect on reducing carbon emissions is limited, and renewable power curtailment may surge, potentially deviating from the original intent of carbon neutrality policies. In this case study, renewable energy installed capacity and consumption rose by 17.09% and 22.64%, respectively, while net carbon emissions decreased by only 3.32%, and curtailed power nearly doubled. 
Second, introducing liquid-storage CCS, which decouples the CO2 absorption and desorption processes, into the capacity allocation significantly enhances microgrid flexibility, markedly reduces the risk of overcapacity in renewable energy units, and enhances investment efficiency. In this case study, following CCS retrofits, renewable energy unit installed capacity decreased by 24%, while consumption dropped by only 7.28%, utilization hours increased by 22%, and the curtailment declined by 78.05%. Third, although CCS retrofitting can significantly reduce microgrid carbon emissions, factors such as current carbon prices, technological efficiency, and economic characteristics hinder large-scale adoption. In this case study, under multi-markets, CCS retrofitting reduced net carbon emissions by 86.16%, but the annualized total cost rose by 3.68%. Finally, based on the aforementioned findings, this manuscript discusses implications for microgrid development decision making, CCS industrialization, and market mechanisms from the perspectives of research directions, policy formulation, and practical work. Full article
20 pages, 3485 KB  
Article
Deformation Pattern Classification of Sea-Crossing Bridge InSAR Time Series Based on a Transfer Learning Framework
by Lichen Ren, Chengyin Liu and Jinping Ou
Remote Sens. 2025, 17(21), 3567; https://doi.org/10.3390/rs17213567 - 28 Oct 2025
Abstract
Interferometric Synthetic Aperture Radar (InSAR) provides unique advantages for sea-crossing bridge monitoring through continuous, large-scale deformation detection. Dividing monitoring data into specific deformation patterns helps establish the connection between bridge deformation and its underlying mechanisms. However, classifying complex, nonlinear bridge deformations often requires extensive manual labeling. To achieve automatic classification of deformation patterns with minimal labeled data, this study introduces a transfer learning approach and proposes an InSAR-based method for deformation pattern recognition of sea-crossing bridges. First, deformation time series of the study area are acquired by PS-InSAR, with GNSS results confirming less than 10% error. Then, six deformation types are identified: stable, linear, step, piecewise linear, power law, and temperature-related. Large amounts of labeled simulated data are generated from these six types. Subsequently, four models—TCN, Transformer, TFT, and ROCKET—are trained on the synthetic data and fine-tuned on a small amount of real data. Finally, the classification results of the individual models are combined by weighting: although the confidence and global consistency of each model are calculated separately, the final result is an ensemble of the models' class confidences. Among the four representative models, ROCKET achieved the highest accuracy on the simulated data (96.27%), while ensemble weighting improved robustness on real data. The methodology addresses supervised learning's labeled-data requirements through synthetic data generation and ensemble classification, producing probabilistic outputs that preserve uncertainty information rather than deterministic labels.
The framework enables automatic classification of sea-crossing bridge deformation patterns with minimal labeled data, identifying patterns with distinct dominant factors and providing probabilistic information for engineering decision making.
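The weighted-ensemble step described in this abstract can be sketched as accuracy-weighted soft voting: each model emits a class-probability vector, the vectors are averaged with per-model weights, and the argmax is taken while the combined distribution is kept as the probabilistic output. The model weights and probability vectors below are hypothetical illustrations, not the paper's actual values or weighting scheme.

```python
# Minimal accuracy-weighted soft-voting sketch (weights/probs hypothetical).

def ensemble_classify(prob_by_model, weights):
    """Combine per-model class-probability vectors into one weighted
    distribution; return (predicted label, combined distribution)."""
    n_classes = len(next(iter(prob_by_model.values())))
    total_w = sum(weights[m] for m in prob_by_model)
    combined = [
        sum(weights[m] * probs[k] for m, probs in prob_by_model.items()) / total_w
        for k in range(n_classes)
    ]
    label = max(range(n_classes), key=lambda k: combined[k])
    return label, combined

# Classes: 0=stable, 1=linear, 2=step, 3=piecewise linear,
#          4=power law, 5=temperature-related
probs = {
    "TCN":         [0.05, 0.70, 0.05, 0.10, 0.05, 0.05],
    "Transformer": [0.10, 0.60, 0.10, 0.10, 0.05, 0.05],
    "TFT":         [0.05, 0.55, 0.20, 0.10, 0.05, 0.05],
    "ROCKET":      [0.02, 0.80, 0.08, 0.05, 0.03, 0.02],
}
weights = {"TCN": 0.90, "Transformer": 0.88, "TFT": 0.89, "ROCKET": 0.96}
label, dist = ensemble_classify(probs, weights)
```

Returning `dist` alongside the label is what preserves the uncertainty information the abstract emphasizes, instead of collapsing each point to a single deterministic class.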
19 pages, 839 KB  
Review
Real-Time Rail Electrification Systems Monitoring: A Review of Technologies
by Jose A. Sainz-Aja, João Pombo, Jordan Brant, Pedro Antunes, José M. Rebelo, José Santos and Diego Ferreño
Sensors 2025, 25(21), 6625; https://doi.org/10.3390/s25216625 - 28 Oct 2025
Abstract
Most electrified railway networks are powered through a pantograph–overhead contact line (OCL) interface, which must ensure safe and reliable operation. The OCL is one of the most vulnerable components of the train traction power system, as it is subjected to repeated impacts from the pantographs and to unpredictable environmental conditions. Wear, mounting imperfections, contact incidents, weather conditions, and inadequate maintenance degrade pantograph–OCL current collection performance, damaging the contacting elements and causing asset failures. Incidents involving the pantograph–OCL system are a significant source of traffic disruption and train delays; Network Rail statistics, for example, show that delays due to OCL failures average 2500 h per year. In recent years, maintenance strategies have evolved significantly with improvements in technology and growing interest in using real-time and historical data for decision support. This has led to an expansion of sensing systems for structures, vehicles, and machinery. The railway industry is currently investing in condition monitoring (CM) technologies to achieve lower failure rates and increase the availability, reliability, and safety of railway services. This work presents a comprehensive review of current CM systems for the pantograph–OCL, including their advantages and disadvantages, and outlines future trends in this area.
(This article belongs to the Section Fault Diagnosis & Sensors)
12 pages, 815 KB  
Article
Eighteen Years of Human Rhinovirus Surveillance in the Republic of Korea (2007–2024): Age- and Season-Specific Trends from a Single-Center Study with Public Health Implications
by Yu Jeong Kim, Jeong Su Han, Sung Hun Jang, Jae-Sik Jeon and Jae Kyung Kim
Pathogens 2025, 14(11), 1098; https://doi.org/10.3390/pathogens14111098 - 28 Oct 2025
Abstract
Human rhinovirus (HRV) is the most common cause of upper respiratory tract infections and can cause substantial morbidity in children. Because its clinical features are nonspecific, differentiation from influenza virus and respiratory syncytial virus is often difficult, underscoring the diagnostic importance of real-time reverse transcriptase polymerase chain reaction (Real-Time RT-PCR)-based detection. This study aimed to characterize long-term epidemiological patterns of HRV in the Republic of Korea and assess their clinical and public health implications. We retrospectively analyzed 23,284 nasopharyngeal swab specimens collected between 2007 and 2024 from outpatients and inpatients presenting with influenza-like illness at a tertiary care hospital. HRV RNA was detected by Real-Time RT-PCR, and positivity rates were compared by year, month, and age group. Annual detection peaked in 2015 (31.3%) and 2016 (28.6%), then dropped sharply during the COVID-19 pandemic (2020–2021, 4.2–11.0%) and remained low through 2024. Seasonally, rates were highest in July (24.4%) and September (24.1%) and lowest in January (6.9%). Age-specific analysis showed peak positivity in children (26.1%) and infants (20.3%), with lower rates in adults (3.9%) and older adults (3.3%). These findings underscore the diagnostic value of HRV detection and provide evidence for pediatric-focused prevention, outbreak preparedness, and climate-informed surveillance strategies.
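The positivity rates compared by year, month, and age group are simply positives over specimens tested per stratum. A minimal sketch, with hypothetical specimen counts (the study's actual per-group denominators are not given in the abstract; the counts below are invented purely to illustrate the calculation):

```python
# Positivity rate per group = 100 * positives / specimens tested.
# All counts below are hypothetical, not the study's actual numbers.

def positivity_rates(counts):
    """counts: {group: (positives, tested)} -> {group: percent positive}."""
    return {g: round(100.0 * pos / n, 1) for g, (pos, n) in counts.items()}

by_age = {
    "infants":      (406, 2000),    # hypothetical counts
    "children":     (1566, 6000),
    "adults":       (390, 10000),
    "older adults": (109, 3300),
}
rates = positivity_rates(by_age)
```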
47 pages, 3136 KB  
Article
ZeroDay-LLM: A Large Language Model Framework for Zero-Day Threat Detection in Cybersecurity
by Mohammed Abdullah Alsuwaiket
Information 2025, 16(11), 939; https://doi.org/10.3390/info16110939 - 28 Oct 2025
Abstract
Zero-day attacks pose unprecedented challenges to modern cybersecurity frameworks, exploiting unknown vulnerabilities that evade traditional signature-based detection systems. This paper presents ZeroDay-LLM, a novel large language model framework specifically designed for real-time zero-day threat detection in IoT and cloud networks. The proposed system integrates lightweight edge encoders with centralized transformer-based reasoning engines, enabling contextual understanding of network traffic patterns and behavioral anomalies. Through comprehensive evaluation on benchmark cybersecurity datasets including CICIDS2017, NSL-KDD, and UNSW-NB15, ZeroDay-LLM demonstrates superior performance, with a 97.8% accuracy in detecting novel attack signatures, a 23% reduction in false positives compared to traditional intrusion detection systems, and enhanced resilience against adversarial evasion techniques. The framework achieves real-time processing capabilities with an average latency of 12.3 ms per packet analysis while maintaining scalability across heterogeneous network infrastructures. Experimental results across urban, rural, and mixed deployment scenarios validate the practical applicability and robustness of the proposed approach.
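The headline figures quoted in such evaluations (accuracy, false-positive rate, detection rate) all derive from the same confusion-matrix counts. A minimal sketch of those calculations, using made-up counts chosen only to illustrate the arithmetic (they are not the paper's experimental results):

```python
# Standard intrusion-detection metrics from confusion-matrix counts.
# The counts below are hypothetical, not results from the paper.

def detection_metrics(tp, fp, tn, fn):
    """tp/fn: attacks detected/missed; fp/tn: benign flagged/passed."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total      # share of all traffic classified correctly
    fpr = fp / (fp + tn)              # share of benign traffic falsely flagged
    recall = tp / (tp + fn)           # share of attacks actually detected
    return accuracy, fpr, recall

acc, fpr, rec = detection_metrics(tp=4890, fp=110, tn=4890, fn=110)
```

Note that a "23% reduction in false positives" compares two systems' `fp` counts on the same traffic, so the false-positive rate is the quantity to track when reproducing such comparisons.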
(This article belongs to the Special Issue Cyber Security in IoT)