Search Results (6,091)

Search Parameters:
Keywords = learning assistance

40 pages, 4155 KB  
Review
Artificial Intelligence in Pulmonary Endoscopy: Current Evidence, Limitations, and Future Directions
by Sara Lopes, Miguel Mascarenhas, João Fonseca and Adelino F. Leite-Moreira
J. Imaging 2026, 12(4), 167; https://doi.org/10.3390/jimaging12040167 - 12 Apr 2026
Abstract
Background: Artificial intelligence (AI) is increasingly applied in pulmonary endoscopy, including diagnostic bronchoscopy, interventional pulmonology, and endobronchial imaging. Advances in computer vision, machine learning, and robotic systems have expanded the potential for automated lesion detection, navigation to peripheral pulmonary lesions, and real-time procedural support. However, the current evidence base remains heterogeneous, and translational challenges persist. Methods: This review summarizes current applications and developments of AI across white-light bronchoscopy (WLB), image-enhanced bronchoscopy (e.g., narrow-band imaging and autofluorescence imaging), endobronchial ultrasound (EBUS), virtual and robotic bronchoscopy, and workflow optimization and training. The authors also examine the methodological limitations, regulatory considerations, and implementation barriers that affect translation into routine practice. Results: Reported developments include deep learning-based models for mucosal abnormality detection, lymph-node characterization during EBUS-guided transbronchial needle aspiration (EBUS-TBNA), improved lesion localization, and reduction in operator-dependent variability. Additionally, AI-assisted simulation platforms and decision-support tools are reshaping training paradigms. Nevertheless, most studies remain retrospective or single-center, with limited external validation, dataset heterogeneity, insufficient model explainability, and incomplete integration into clinical workflows. Conclusions: AI has the potential to support lesion detection, navigation, and training in pulmonary endoscopy. However, rigorous prospective validation, standardized datasets, transparent model reporting, robust data governance, multidisciplinary collaboration, and careful integration into clinical practice are required before widespread adoption.
(This article belongs to the Section AI in Imaging)
36 pages, 1657 KB  
Review
The Current Status of Contaminated Site Remediation and Application Prospects of Artificial Intelligence—A Review
by Guodong Zheng, Shengcheng Mei, Yiping Wu and Pengyi Cui
Environments 2026, 13(4), 212; https://doi.org/10.3390/environments13040212 - 12 Apr 2026
Abstract
Industrialization has led to the substantial release of heavy metals and organic pollutants into soil and groundwater, resulting in severe contaminated-site problems that pose significant threats to ecosystems and human health. This review systematically examines the current development status and challenges of contaminated site remediation technologies and explores the potential of artificial intelligence (AI) applications in site remediation, providing a theoretical reference for advancing intelligent remediation. Conventional remediation technologies mainly include physical methods (e.g., solidification/stabilization (S/S), soil vapor extraction (SVE), thermal desorption, pump and treat (P&T), and groundwater circulation wells (GCWs)), chemical methods (e.g., chemical oxidation/reduction, electrokinetic remediation (EKR), and soil washing), and biological methods (phytoremediation and microbial remediation), along with combined strategies that integrate multiple approaches. Although these technologies have achieved certain successes in engineering practice, they still face common challenges such as risks of secondary pollution, long remediation periods, high costs, poor adaptability to complex hydrogeological conditions, and insufficient long-term stability, making it difficult to fully meet the remediation demands of complex contaminated sites. Subsequently, the potential of emerging technologies—including nanomaterial-based remediation, bioelectrochemical systems, and molecular biology-assisted remediation—is introduced. On this basis, the forefront applications of AI in contaminated site remediation are discussed, covering site monitoring and characterization, risk assessment, remedial strategy selection, process prediction and parameter optimization, material design, and post-remediation intelligent stewardship. Machine learning (ML), explainable AI (XAI), and hybrid modeling approaches have markedly improved remediation efficiency and decision-making. Looking forward, with advancements in XAI, mechanism-data fusion models, and environmental foundation models, AI is poised to drive a paradigm shift toward intelligent and precision remediation. However, challenges related to data quality, model interpretability, and interdisciplinary expertise remain key barriers to overcome.
22 pages, 908 KB  
Review
Exploring Recent Maritime Research on AIS-Based Ship Behavior Analysis and Modeling
by Anila Duka, Houxiang Zhang, Pero Vidan and Guoyuan Li
J. Mar. Sci. Eng. 2026, 14(8), 712; https://doi.org/10.3390/jmse14080712 - 11 Apr 2026
Abstract
Automatic Identification System (AIS) data provide valuable insights into ship behavior, supporting maritime safety, situational awareness, and operational efficiency, capabilities that are increasingly required for autonomous ship functions and harbor maneuvering assistance. This review synthesizes recent research on AIS-based ship behavior analysis and modeling published between 2022 and 2024, using a structured literature search and screening process informed by PRISMA principles. The review presents a five-stage workflow spanning data processing, data analysis, knowledge extraction, modeling, and runtime applications, with emphasis on how these stages contribute to perception, prediction, and decision support in automated navigation. Four dimensions are considered in data analysis: statistical analysis, safety indicators, situational awareness, and anomaly detection. The modeling approaches are categorized into classification, regression, and optimization, highlighting current limitations such as data quality, algorithmic transparency, and real-time performance, while also assessing runtime feasibility for onboard or edge deployment. Three runtime application directions are identified: autonomous vessel functions, remote monitoring and control operations, and onboard decision-support tools, with numerous studies focusing on constrained waterways and port-approach scenarios. Future directions suggest integrating multi-source data and advancing machine learning models to improve robustness in complex traffic and harbor environments. By linking theoretical insights with practical onboard needs, this study provides guidance for developing intelligent, adaptive, and safety-enhancing maritime systems.
(This article belongs to the Special Issue Autonomous Ship and Harbor Maneuvering: Modeling and Control)
25 pages, 1862 KB  
Article
Machine Learning-Assisted Modal Sensitivity and Parameter Ranking in Systems with Viscoelastic Damping
by Jakub Porysek and Magdalena Łasecka-Plura
Appl. Sci. 2026, 16(8), 3749; https://doi.org/10.3390/app16083749 - 11 Apr 2026
Abstract
This paper proposes a machine-learning-assisted framework for modal sensitivity analysis of systems with viscoelastic damping elements, including both classical and fractional rheological models. Surrogate models are trained to approximate natural frequencies over a prescribed parameter space using two sampling strategies (grid and Latin hypercube) and two regression approaches: multi-layer perceptron (MLP) and Gaussian process regression (GPR). Sensitivities are obtained from the surrogates by finite differences and complemented by model-interpretability measures, namely permutation feature importance (PFI) and Shapley Additive Explanations (SHAP). The surrogate-based results are compared with analytically obtained sensitivities: local first- and second-order sensitivities of natural frequencies are derived analytically using the direct differentiation method (DDM) for a nonlinear eigenvalue problem formulated in the Laplace domain and further transformed into dimensionless sensitivity measures. The methodology is demonstrated for a single-degree-of-freedom oscillator with classical and fractional Kelvin damper models and for a two-story frame equipped with a fractional Kelvin damper. The results show very good agreement between analytical and surrogate-based sensitivities. Feature-importance rankings obtained by PFI and SHAP are consistent with the dimensionless sensitivities and capture changes in parameter influence under varying damping levels. Dispersion studies indicate only minor ranking variations.
(This article belongs to the Section Civil Engineering)
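The surrogate-plus-finite-difference workflow this abstract describes can be sketched compactly. The snippet below is an illustrative outline only: the natural_frequency stand-in, the (stiffness, damping) parameter ranges, and all hyperparameters are assumptions, not the authors' nonlinear eigenproblem or their trained models.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy stand-in for a natural-frequency solver: omega(k, c) for a damped
# oscillator (illustrative only; the paper solves a nonlinear eigenproblem).
def natural_frequency(k, c, m=1.0):
    omega0 = np.sqrt(k / m)
    zeta = c / (2.0 * np.sqrt(k * m))
    return omega0 * np.sqrt(max(1.0 - zeta**2, 0.0))

# Latin hypercube sampling of the (stiffness, damping) parameter space.
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(n=100), [50.0, 0.1], [200.0, 2.0])
y = np.array([natural_frequency(k, c) for k, c in X])

# GPR surrogate trained on the sampled solver evaluations.
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=[50.0, 0.5]),
                               normalize_y=True).fit(X, y)

# Central finite-difference sensitivity of the surrogate at a nominal point.
def fd_sensitivity(model, x0, h=1e-3):
    grads = []
    for j in range(len(x0)):
        xp, xm = x0.copy(), x0.copy()
        xp[j] += h
        xm[j] -= h
        grads.append((model.predict([xp])[0] - model.predict([xm])[0]) / (2 * h))
    return np.array(grads)

x0 = np.array([100.0, 1.0])
print("d(omega)/d(k, c) ~", fd_sensitivity(gpr, x0))
```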
21 pages, 1354 KB  
Article
Chaos Theory with AI Analysis in IoT Network Scenarios
by Antonio Francesco Gentile and Maria Cilione
Cryptography 2026, 10(2), 25; https://doi.org/10.3390/cryptography10020025 - 10 Apr 2026
Abstract
While general network dynamics have been extensively modeled using stochastic methods, the emergence of dense Internet of Things (IoT) ecosystems demands a more specialized analytical framework. IoT environments are characterized by extreme non-linearity and sensitivity to initial conditions, where traditional models often fail to account for chaotic latency and packet loss. This paper introduces a specialized approach that integrates Chaos Theory with the innovative paradigm of Vibe Coding—an AI-assisted development and analysis methodology that allows for the 'encoding' and interpretation of the dynamic 'vibe', or signature, of network fluctuations in real time. By categorizing network behavior into four distinct scenarios (quiescent, perturbed, attacked, and perturbed-attacked), the proposed framework utilizes deep learning to transform chaotic signals into actionable intelligence. Our findings demonstrate that this specialized synergy between chaos analysis and Vibe Coding provides superior classification of adversarial threats, such as DoS and injection attacks, fostering intelligent native security for next-generation IoT infrastructures.
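The chaos-theoretic side of this framework rests on quantities such as the largest Lyapunov exponent of a network time series (e.g., per-packet latency). As a rough, self-contained illustration, here is a simplified Rosenstein-style estimator in NumPy; the embedding parameters, the brute-force neighbour search, and the logistic-map test signal are demonstration assumptions, not the paper's pipeline.

```python
import numpy as np

def largest_lyapunov(x, m=5, tau=2, min_sep=10, horizon=20):
    """Rosenstein-style estimate of the largest Lyapunov exponent of a
    scalar series x (e.g., latency samples), in units of 1/step."""
    n = len(x) - (m - 1) * tau                    # number of embedded points
    emb = np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n)])
    usable = n - horizon                          # leave room to follow pairs
    d = np.linalg.norm(emb[:usable, None] - emb[None, :usable], axis=2)
    idx = np.arange(usable)
    d[np.abs(idx[:, None] - idx[None, :]) < min_sep] = np.inf  # skip near-in-time pairs
    nn = d.argmin(axis=1)                         # nearest neighbour of each point
    div = np.empty(horizon)
    for k in range(horizon):                      # mean log-divergence after k steps
        sep = np.linalg.norm(emb[idx + k] - emb[nn + k], axis=1)
        div[k] = np.log(sep[sep > 0]).mean()
    return np.polyfit(np.arange(horizon), div, 1)[0]  # slope ~ lambda_max

# Logistic map in its chaotic regime: the estimate should be clearly positive.
x = np.empty(1200); x[0] = 0.4
for t in range(1199):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])
print(f"lambda_max ~ {largest_lyapunov(x):.3f}")
```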
30 pages, 20938 KB  
Review
Remote Sensing of Water: The Observation-to-Inference Arc Across Six Decades and Toward an AI-Native Future
by Daniel P. Ames
Remote Sens. 2026, 18(8), 1127; https://doi.org/10.3390/rs18081127 - 10 Apr 2026
Abstract
Over six decades, satellite remote sensing of water resources has evolved from manual interpretation of weather photographs to AI systems that learn hydrologic predictions directly from satellite imagery. This review traces that evolution through the observation-to-inference arc—a framework for the progressively tightening coupling between what satellites observe and what hydrologists infer. Using illustrative applications in precipitation, evapotranspiration, soil moisture, snow, surface water, and groundwater, we show how early observations (1960–1985) remained disconnected from operational hydrology; how calibrated retrieval algorithms (1985–2000) established a one-way pipeline from satellites to models; how operational infrastructure (2000–2015), anchored by MODIS, GRACE, GPM, and Sentinel, achieved assimilative coupling through computational feedback between models and observations; and how deep learning (2015–present) is beginning to collapse this pipeline. Multi-source data fusion has been a recurring enabler at each stage. We articulate a four-level AI vision and research trajectory, from AI-assisted interpretation through AI-native retrieval and AI-driven inference to autonomous Earth observation intelligence. Persistent challenges in mission continuity, calibration, equity of access, and translating satellite-derived information into operational water management decisions provide essential context for evaluating both the promise and limits of this trajectory.
(This article belongs to the Special Issue Mapping the Blue: Remote Sensing in Water Resource Management)
15 pages, 2413 KB  
Article
A Motion Intention Recognition Method for Lower-Limb Exoskeleton Assistance in Ultra-High-Voltage Transmission Tower Climbing
by Haoyuan Chen, Yalun Liu, Ming Li, Zhan Yang, Hongwei Hu, Xingqi Wu, Xingchao Wang, Hanhong Shi and Zhao Guo
Sensors 2026, 26(8), 2346; https://doi.org/10.3390/s26082346 - 10 Apr 2026
Abstract
Transmission tower climbing is a critical specialized operation in ultra-high-voltage power maintenance and communication infrastructure servicing. However, existing lower-limb exoskeletons used for tower climbing still suffer from insufficient motion intention recognition accuracy in complex operational environments. To address this issue, this study proposes an inertial measurement unit (IMU)-based bidirectional temporal deep learning method for motion intention recognition. First, a one-dimensional convolutional neural network (1D-CNN) is employed to extract local temporal features from multi-channel IMU signals. Subsequently, a bidirectional long short-term memory network (Bi-LSTM) is introduced to model the forward and backward temporal dependencies of motion sequences. Furthermore, a temporal attention mechanism is incorporated to emphasize discriminative features at critical movement phases, enabling the precise recognition of short-duration and transitional motions. Experimental results demonstrate that the proposed method outperforms traditional machine learning approaches and unidirectional temporal models in terms of accuracy, F1-score, and other evaluation metrics, with particularly clear advantages in identifying flexion/extension phases and transitional states. This study provides an offline method for analyzing movement intentions in lower-limb exoskeleton control for transmission tower climbing scenarios and offers a reference for developing control strategies for assisted climbing tasks in this context.
(This article belongs to the Section Electronic Sensors)
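The abstract's 1D-CNN, Bi-LSTM, and temporal-attention pipeline maps onto a short PyTorch module. The sketch below uses placeholder channel counts, window length, class count, and attention form; it shows the general architecture, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class IntentionNet(nn.Module):
    """1D-CNN front end, Bi-LSTM temporal model, additive temporal attention.
    Shapes and hyperparameters are illustrative placeholders."""
    def __init__(self, n_channels=12, n_classes=6, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                        # local temporal features
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)             # per-step attention score
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                                # x: (batch, time, channels)
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)  # -> (batch, time, 64)
        h, _ = self.lstm(h)                              # -> (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h).squeeze(-1), dim=1)   # (batch, time)
        ctx = (w.unsqueeze(-1) * h).sum(dim=1)           # attention-weighted context
        return self.head(ctx)                            # class logits

logits = IntentionNet()(torch.randn(8, 200, 12))  # 8 windows, 200 samples, 12 IMU channels
print(logits.shape)                               # torch.Size([8, 6])
```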
27 pages, 1616 KB  
Systematic Review
Applications of Machine Learning in Early Stage Rolling Bearing Simulations—A Systematic Literature Review
by Felix Pfister, Sandro Wartzack and Benedict Rothammer
Lubricants 2026, 14(4), 163; https://doi.org/10.3390/lubricants14040163 - 10 Apr 2026
Abstract
Rolling bearing simulations are often too computationally expensive for early design decisions, because a large design of experiments requires many simulation runs. The aim of this systematic literature review is therefore to provide an overview of how machine learning (ML) is used to incorporate engineering knowledge early in the design process when simulations are the primary data source for supervised learning. In the 11 included studies, ML is mainly applied as regression models trained on simulation data to replace repeated solver calls. The applications fall into three domains, contact mechanics, lubrication, and dynamics, mostly linked to their domain-specific outputs. In most cases, ML models replace the simulation once trained and validated, followed by optimization, which is often performed on the surrogate using evolutionary algorithms. Surrogates have the potential to enable design-space exploration, sensitivity analysis, and uncertainty propagation, but this capability is not yet fully exploited in current practice. The purpose of this review is to provide a summary of methodological building blocks and practical guidelines to assist researchers and engineers in selecting appropriate ML workflows for simulation-based analysis of rolling bearings in the areas of tribology, dynamics, service life, load capacity, and system-level investigations.
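The workflow the review identifies, surrogate training on simulation samples followed by evolutionary optimization on the surrogate, can be sketched as follows. The simulated_life function and the design-variable ranges are invented stand-ins for an expensive bearing simulation, not material from any reviewed study.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.neural_network import MLPRegressor

# Toy stand-in for an expensive bearing simulation: basic L10 rating life
# L10 = (C/P)^3 * 1e6 / (60 n) hours, with a fictitious capacity C(d).
def simulated_life(x):
    d, P = x                                   # bore diameter [mm], load [N] (illustrative)
    C = 80.0 * d**1.4                          # fictitious capacity-vs-size relation
    return (C / P) ** 3 * 1e6 / (60 * 3000)    # hours at 3000 rpm

rng = np.random.default_rng(0)
X = rng.uniform([20, 2000], [80, 12000], size=(300, 2))   # sampled designs
y = np.array([simulated_life(x) for x in X])

# The surrogate replaces the solver once trained (log-life for stability)...
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                         random_state=0).fit(X, np.log(y))

# ...and an evolutionary algorithm then searches the design space cheaply.
res = differential_evolution(lambda x: -surrogate.predict([x])[0],
                             bounds=[(20, 80), (2000, 12000)], seed=0)
print("best design (d, P):", res.x, " predicted log-life:", -res.fun)
```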
33 pages, 2387 KB  
Article
Energy-Aware Adaptive Communication Topology with Edge-AI Navigation for UAV Swarms in GNSS-Denied Environments
by Alizhan Tulembayev, Alexandr Dolya, Ainur Kuttybayeva, Timur Jussupbekov and Kalmukhamed Tazhen
Drones 2026, 10(4), 273; https://doi.org/10.3390/drones10040273 - 9 Apr 2026
Abstract
Energy-efficient and resilient decentralized unmanned aerial vehicle (UAV) swarm operation in global navigation satellite system (GNSS)-denied environments remains challenging because propulsion demand, communication load, and onboard inference are tightly coupled at the mission level. Although prior studies have examined some of these components separately, their joint evaluation within adaptive decentralized swarms remains limited under degraded navigation conditions. This study proposes an energy-aware adaptive communication-topology framework integrated with lightweight edge artificial intelligence (AI)-assisted navigation for decentralized UAV swarms operating without reliable GNSS support. The approach combines a unified mission-level energy-accounting structure for propulsion, communication, and onboard inference; a residual-energy-aware topology adaptation mechanism for preserving swarm connectivity; and a convolutional neural network–long short-term memory (CNN–LSTM) based edge-AI navigation module for improving localization robustness. The framework was evaluated in 1200 s Robot Operating System 2 (ROS 2)–Gazebo–PX4 simulation scenarios against fixed-topology and extended Kalman filter (EKF)-based baselines. Under the adopted simulation assumptions, the proposed configuration achieved a 22.7% reduction in total energy consumption, with the largest decrease observed in the communication-energy component, while preserving positive algebraic connectivity across all evaluated runs. The edge-AI module yielded a 4.8% root mean square error (RMSE) reduction relative to the EKF baseline, indicating a modest but meaningful improvement in localization performance. These results support the feasibility of integrated energy-aware swarm coordination in GNSS-denied environments; however, they should be interpreted as simulation-based evidence under the adopted modeling assumptions, and further high-fidelity propagation modeling, broader learning validation, and hardware-in-the-loop studies remain necessary.
(This article belongs to the Section Artificial Intelligence in Drones (AID))
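The connectivity guarantee the abstract reports, positive algebraic connectivity, is the second-smallest eigenvalue (Fiedler value) of the swarm's graph Laplacian. The sketch below checks that an energy-aware link-pruning rule never disconnects the graph; the pruning rule, radio range, and energy thresholds are illustrative assumptions, not the paper's adaptation mechanism.

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian (Fiedler value).
    Positive iff the communication graph is connected."""
    L = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(L))[1]

rng = np.random.default_rng(1)
pos = rng.uniform(0, 100, size=(10, 2))          # 10 UAVs in a 100 m square
energy = rng.uniform(0.2, 1.0, size=10)          # normalized residual energy

# Baseline topology: link whenever two UAVs are within radio range.
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
adj = ((dist < 50) & (dist > 0)).astype(float)

# Energy-aware adaptation (illustrative rule): drop links whose endpoints
# are both low on energy, unless dropping would disconnect the swarm.
for i in range(10):
    for j in range(i + 1, 10):
        if adj[i, j] and energy[i] < 0.4 and energy[j] < 0.4:
            adj[i, j] = adj[j, i] = 0
            if algebraic_connectivity(adj) <= 1e-9:   # would disconnect: restore
                adj[i, j] = adj[j, i] = 1

print(f"Fiedler value after adaptation: {algebraic_connectivity(adj):.3f}")
```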
24 pages, 997 KB  
Article
Teaching Strategies and Methods in a Complex Education Process: Use Case of Multi-Level Computer-Assisted Exercises on Constructive Simulation Systems
by Miro Čolić and Mirko Sužnjević
Appl. Sci. 2026, 16(8), 3692; https://doi.org/10.3390/app16083692 - 9 Apr 2026
Abstract
This study develops a new concept of computer-assisted exercises (CAX) on constructive simulation systems and examines how the proposed concept affects teaching strategies and methods. The current state of affairs in the field of defense and security, both in Europe and worldwide, requires the acquisition of competencies (European Qualifications Framework (EQF): knowledge, skills, independence, and responsibility), i.e., the education and training of a significantly larger number of personnel in the field of defense and security than has been the case in the last 70 years. An important specificity of today is that students must also acquire competencies that were almost unknown until recently, most of which result from the rapid development of technology that has significantly changed human life in all areas. To respond to the modern requirements of conducting operations, where the transfer of information both horizontally and vertically is exponentially accelerated, current concepts for preparing and implementing education and training, of which exercises are often the most important part, need to be replaced with new ones; one such concept is developed in this paper. The new content mostly concerns newly introduced weapons (unmanned systems, hypersonic missiles, microwave- and laser-based weapons, etc.), all of which force changes to the traditional approach to conducting war, i.e., to tactics, techniques, and procedures (TTP). The novel exercise concept allows training to be conducted simultaneously for up to three or four hierarchical levels (e.g., TF Div, brigade, battalion, and company) in one exercise, whereas in most countries, including the NATO alliance, such exercises are still conducted according to a concept that is over 20 years old and, as a rule, focused on one or two hierarchical levels. This approach allows key personnel from the headquarters of units at four hierarchical levels to be simulated in real time, which current concepts for preparing and conducting exercises do not provide. The new concept was applied as a multi-level computer-assisted exercise (CAX) on constructive simulation systems. Significant further advantages relate to the flexibility and adaptability of the proposed concept, which can be applied not only in operational units but also in training institutions such as academies and higher education institutions, and to the shorter planning period and smaller total resources needed to prepare and implement an exercise. The management, organizational, and technological components of the proposed exercise concept are implemented in the CAX model. The hypotheses in this paper were tested in an applied study assessed by an external evaluation body: the implemented CAX model was tested through exercises at the Croatian Defense Academy in Croatia.
(This article belongs to the Special Issue Applications of Smart Learning in Education)
27 pages, 729 KB  
Article
RSMA-Assisted Fluid Antenna ISAC via Hierarchical Deep Reinforcement Learning
by Muhammad Sheraz, Teong Chee Chuah and It Ee Lee
Telecom 2026, 7(2), 41; https://doi.org/10.3390/telecom7020041 - 9 Apr 2026
Abstract
Integrated sensing and communications (ISAC) requires tight coordination between spatial signal design and multiple-access strategies to balance communication throughput and sensing accuracy under shared spectral and hardware constraints. However, existing ISAC frameworks with rate-splitting multiple access (RSMA) typically rely on fixed antenna arrays and decoupled optimization, which fundamentally limit their ability to adapt to fast channel variations and dynamic sensing requirements. This paper introduces a fluid antenna-enabled RSMA-assisted ISAC architecture, in which movable antenna ports are exploited as a new spatial degree of freedom to enhance adaptability in both communication and sensing operations. Fluid antenna systems (FAS) are deployed at both the base station and user terminals, allowing dynamic port selection that reshapes the effective channel and sensing beampattern in real time. We formulate a joint sum-rate maximization problem subject to explicit sensing-quality constraints, capturing the coupled impact of antenna port selection, RSMA rate allocation, and multi-beam transmit design. The proposed framework maximizes the communication sum-rate while ensuring that the sensing functionality satisfies a predefined sensing quality constraint. This constraint-based ISAC formulation guarantees that sufficient sensing power is directed toward the target while optimizing communication performance. The resulting optimization involves strongly coupled discrete and continuous decision variables, rendering conventional optimization methods ineffective. To address this challenge, a hierarchical deep reinforcement learning (HDRL) framework is developed, where an upper-layer deep Q-network (DQN) determines discrete antenna port selection and a lower-layer twin delayed deep deterministic policy gradient (TD3) algorithm optimizes continuous beamforming and rate-splitting parameters. Numerical results demonstrate that the proposed approach significantly improves system performance, achieving a higher communication sum-rate while satisfying sensing requirements under dynamic propagation conditions.
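The two-level decision structure, a discrete DQN over antenna ports and a continuous TD3-style actor for beamforming and rate splitting, can be outlined in a few lines of PyTorch. This is only the action-selection skeleton under assumed dimensions; the critics, replay buffers, training loops, and the ISAC environment itself are omitted.

```python
import torch
import torch.nn as nn

# Assumed placeholder dimensions, not taken from the paper.
N_PORTS, STATE_DIM, ACTION_DIM = 16, 32, 8

q_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                      nn.Linear(128, N_PORTS))                # upper layer: Q(s, port)

actor = nn.Sequential(nn.Linear(STATE_DIM + N_PORTS, 128), nn.ReLU(),
                      nn.Linear(128, ACTION_DIM), nn.Tanh())  # lower layer: TD3 actor

def hierarchical_action(state, epsilon=0.1):
    """One decision step: discrete port via epsilon-greedy DQN, then
    continuous parameters from the actor conditioned on the chosen port."""
    with torch.no_grad():
        if torch.rand(1).item() < epsilon:
            port = torch.randint(N_PORTS, (1,)).item()        # explore
        else:
            port = q_net(state).argmax().item()               # exploit
        one_hot = torch.zeros(N_PORTS)
        one_hot[port] = 1.0
        theta = actor(torch.cat([state, one_hot]))            # params scaled to [-1, 1]
    return port, theta

port, theta = hierarchical_action(torch.randn(STATE_DIM))
print(port, theta.shape)   # e.g. 5, torch.Size([8])
```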
38 pages, 5187 KB  
Article
Human-Assisted Deep Reinforcement Learning (HADRL) for Multi-Objective Tram Optimisation Problem
by Moneeb Ashraf, Stuart Hillmansen and Ning Zhao
Appl. Sci. 2026, 16(8), 3683; https://doi.org/10.3390/app16083683 - 9 Apr 2026
Abstract
Reducing traction energy in urban rail systems while preserving safety, punctuality, and passenger comfort remains challenging. Additionally, route-level tram studies that train deep reinforcement learning (DRL) policies using Operational Train Monitoring Recorder (OTMR) logs and benchmark them across multiple objectives remain limited. This study develops and evaluates a Human-Assisted Deep Reinforcement Learning (HADRL) framework for multi-objective tram control in an OTMR-grounded simulation. Two HADRL agents were trained using a human-assistance action mapping: a standard Proximal Policy Optimisation (PPO) baseline and a recurrent, history-augmented PPO. Their performance was compared against that of four human drivers using indices for speed-limit compliance, schedule deviation, traction energy, jerk-based comfort, and stopping accuracy. These performance measures were aggregated using the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) with both equal and entropy-derived weights. Both HADRL agents reproduce the characteristic accelerate–coast–brake driving pattern, reduce traction energy relative to all human baselines, and achieve near-complete speed-limit compliance, all while remaining within the specified schedule-deviation and comfort thresholds. TOPSIS yields identical rankings under both weighting schemes, with Multi-Objective Tram Operation Non-Stationary Proximal Policy Optimisation (MOTO-NSPPO, a recurrent, history-augmented PPO) ranked first and PPO second.
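TOPSIS with equal and entropy-derived weights, as used for the driver/agent comparison here, is short enough to show in full. The score matrix below is hypothetical; only the method (vector normalization, ideal/nadir distances, entropy weighting) follows the standard formulation.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """TOPSIS ranking: closeness to the ideal solution.
    matrix: alternatives x criteria; benefit[j] True if larger-is-better."""
    norm = matrix / np.linalg.norm(matrix, axis=0)        # vector normalization
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    nadir = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - nadir, axis=1)
    return d_neg / (d_pos + d_neg)                        # higher = better

def entropy_weights(matrix):
    """Objective weights from the Shannon entropy of normalized columns."""
    p = matrix / matrix.sum(axis=0)
    e = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(len(matrix))
    return (1 - e) / (1 - e).sum()

# Hypothetical scores (rows: agents/drivers) on [compliance, punctuality,
# energy, comfort, stopping accuracy]; energy is cost-type (smaller is better).
scores = np.array([[0.99, 0.95, 11.2, 0.90, 0.97],    # recurrent agent
                   [0.98, 0.93, 11.8, 0.88, 0.96],    # PPO baseline
                   [0.91, 0.96, 13.5, 0.85, 0.95]])   # human driver
benefit = np.array([True, True, False, True, True])

for w in (np.full(5, 0.2), entropy_weights(scores)):  # equal vs entropy weights
    print(np.round(topsis(scores, w, benefit), 3))
```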
30 pages, 14814 KB  
Article
The Intelligent Row-Following Method and System for Corn Harvesters Driven by “Visual-Gateway” Collaboration
by Shengjie Zhou, Songling Du, Xinping Zhang, Cheng Yang, Guoying Li, Qingyang Wang and Liqing Zhao
Agriculture 2026, 16(8), 832; https://doi.org/10.3390/agriculture16080832 - 9 Apr 2026
Abstract
To address the reliance of corn harvester field operations on driver visual guidance for row alignment, with its high labor intensity and unstable operation accuracy, this study innovatively proposes a “vision-dominant, gateway-enhanced” dual-mode collaborative row-alignment assistance architecture and independently develops the R2DC-Mask R-CNN instance segmentation network and the MCC-KF robust filtering algorithm to form a deeply coupled hardware–software-assisted driving system. The R2DC-Mask R-CNN network is designed specifically for corn row-detection scenarios, achieving accurate perception in complex field environments; the MCC-KF algorithm solves the state-estimation divergence problem during transient vision failures through a multi-criteria constraint mechanism, ensuring continuous navigation capability; and the intelligent gateway and vision system form a confidence-driven master–slave switching mechanism that adaptively enhances system robustness when vision is restricted. Field experiments demonstrate that within the speed range of 0.5–5.0 km/h, the average lateral deviation of system-assisted row alignment is 3.82–5.30 cm, the proportion of deviations under 10 cm exceeds 96%, and all sample deviations remain within 20 cm; at a speed of 3.5 km/h, the system reduces the average grain loss rate from 3.76% under manual operation to 2.65%, a decrease of 29.5%. The system effectively improves row-alignment accuracy and harvest quality, providing a practical human–machine collaborative solution for intelligent harvester operations.
(This article belongs to the Section Agricultural Technology)
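The MCC-KF's role, keeping the state estimate from diverging during transient vision failures, can be illustrated with a correntropy-weighted Kalman update: a Gaussian kernel on the innovation inflates the effective measurement noise for outliers. The sketch below is a simplified member of the MCC-KF family with invented dynamics and noise levels, not the authors' multi-criteria algorithm.

```python
import numpy as np

def mcc_kf_step(x, P, z, F, H, Q, R, sigma=2.0):
    """One predict/update step of a correntropy-weighted Kalman filter
    (simplified): outlier measurements are down-weighted, so a transient
    vision glitch cannot drag the state estimate away."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Correntropy weight: ~1 for small innovations, -> 0 for gross outliers
    innov = z - H @ x
    w = np.exp(-(innov @ innov) / (2 * sigma**2))
    # Update with measurement noise inflated by 1/w (robust gain)
    S = H @ P @ H.T + R / max(w, 1e-6)
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ innov
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Track lateral deviation + its rate; inject one vision glitch at t = 25.
F = np.array([[1.0, 0.1], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2); R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(0)
for t in range(50):
    z = np.array([0.05 * t + rng.normal(0, 0.5)])
    if t == 25:
        z += 30.0                      # simulated vision failure / outlier
    x, P = mcc_kf_step(x, P, z, F, H, Q, R)
print("final state estimate:", np.round(x, 3))
```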
50 pages, 2682 KB  
Systematic Review
Transforming Beekeeping Through Technology: A Systematic Review of Precision Beekeeping
by Ashan Milinda Bandara Ratnayake, Hazwani Suhaimi and Pg Emeroylariffion Abas
Sci 2026, 8(4), 87; https://doi.org/10.3390/sci8040087 - 9 Apr 2026
Abstract
Beekeeping is a profitable and relaxing practice; however, monitoring beehives poses significant challenges: it is time-consuming and can disturb hive equilibrium, which may lead to colony collapse. Developing precision beekeeping (PB) systems is crucial to assist beekeepers in decision-making, automate repetitive hive maintenance, and enhance the security and comfort of bee life. This review systematically explores research on PB systems, based on a keyword-driven search of the Scopus and Web of Science databases yielding 46 relevant publications. The analysis highlights a notable increase in research activity in the field since 2016. The integration of advanced technologies, including machine learning, cloud computing, IoT, and scenario-based communication methods, has proven instrumental in predicting hive states such as queen status, enemy attacks, readiness for harvest, swarming events, and population decline. Commonly measured parameters include hive weight, temperature, and relative humidity, with various sensors employed to ensure precision while minimizing bee disturbance. Additionally, bee traffic monitoring has emerged as a critical approach to assessing hive health. Most studies focus on honeybees rather than stingless bees and, in the context of enemy identification, Varroa destructor is the primary target. This review underscores the potential of novel technologies to revolutionize apiculture and enhance hive management practices.
(This article belongs to the Special Issue Feature Papers—Multidisciplinary Sciences 2025)
19 pages, 1466 KB  
Article
D2MNet: Difference-Aware Decoupling and Multi-Prompt Learning for Medical Difference Visual Question Answering
by Lingge Lai, Weihua Ou, Jianping Gou and Zhonghua Liu
J. Imaging 2026, 12(4), 162; https://doi.org/10.3390/jimaging12040162 - 9 Apr 2026
Abstract
Difference visual question answering (Diff-VQA) aims to answer questions by identifying and reasoning about differences between medical images. Existing methods often rely on simple feature subtraction or fusion to model image differences, while overlooking the asymmetric descriptive requirements of changed and unchanged cases and providing limited task-specific guidance to pretrained language decoders. To address these limitations, we propose D2MNet (Difference-aware Decoupling and Multi-prompt Network), a framework for medical Diff-VQA that combines change-aware reasoning with prompt-guided answer generation. Specifically, a Change Analysis Module (CAM) predicts whether a change is present and produces a binary change-aware prompt; a Difference-Aware Module (DAM) uses dual attention to capture fine-grained difference features; and a multi-prompt learning mechanism (MLM) injects question-aware, change-aware, and learnable prompts into the language decoder to improve contextual alignment and response generation. Experiments on the MIMIC-DiffVQA benchmark show that D2MNet achieves a CIDEr score of 2.907 ± 0.040, outperforming the strongest baseline, ReAl (2.409), under the same evaluation setting. These results demonstrate the effectiveness of the proposed design on benchmark medical Diff-VQA and suggest its potential for assisting difference-aware medical answer generation.
(This article belongs to the Section Medical Imaging)