Search Results (1,335)

Search Parameters:
Keywords = machine vision system

21 pages, 10040 KB  
Article
Design of Monitoring System for River Crab Feeding Platform Based on Machine Vision
by Yueping Sun, Ziqiang Li, Zewei Yang, Bikang Yuan, De’an Zhao, Ni Ren and Yawen Cheng
Fishes 2026, 11(2), 88; https://doi.org/10.3390/fishes11020088 - 1 Feb 2026
Abstract
Bait costs constitute 40–50% of the total expenditure in river crab aquaculture, highlighting the critical need for accurately assessing crab growth and scientifically determining optimal feeding regimes across different farming stages. Traditional methods rely on periodic manual sampling to monitor growth status and artificial feeding platforms to observe consumption and adjust bait input. These approaches are inefficient, disruptive to crab growth, and fail to provide comprehensive growth data. Therefore, this study proposes a machine vision-based monitoring system for river crab feeding platforms. Firstly, the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm is applied to enhance underwater images of river crabs. Subsequently, an improved YOLOv11 (You Only Look Once) model is introduced and applied for multi-target detection and counting in crab ponds, enabling the extraction of information related to both river crabs and bait. Concurrently, underwater environmental parameters are monitored in real time via an integrated environmental information sensing system. Finally, an information processing platform is established to facilitate data sharing under a “detection–processing–distribution” workflow. Experimental results from a real crab farm show that the river crab quality estimation error rate was below 9.57%, while the detection rates for both corn and pellet baits consistently exceeded 90% across varying conditions. These results indicate that the proposed system significantly enhances farming efficiency, elevates the level of automation, and provides technological support for the river crab aquaculture industry. Full article
(This article belongs to the Section Fishery Facilities, Equipment, and Information Technology)
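
The abstract names CLAHE as the enhancement step but gives no parameters; OpenCV ships the algorithm directly, so a minimal sketch of the usual underwater recipe (equalizing only the lightness channel of a LAB conversion) looks like the following. The clip limit, tile grid, and file name are illustrative assumptions, not the paper's values.

```python
import cv2

def enhance_underwater(path, clip_limit=2.0, tile_grid=(8, 8)):
    """CLAHE on the L channel; parameters are illustrative defaults."""
    bgr = cv2.imread(path)                      # underwater frame
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)  # equalize lightness only
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

enhanced = enhance_underwater("crab_pond_frame.jpg")  # hypothetical file
```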

32 pages, 27435 KB  
Review
Artificial Intelligence in Adult Cardiovascular Medicine and Surgery: Real-World Deployments and Outcomes
by Dimitrios E. Magouliotis, Noah Sicouri, Laura Ramlawi, Massimo Baudo, Vasiliki Androutsopoulou and Serge Sicouri
J. Pers. Med. 2026, 16(2), 69; https://doi.org/10.3390/jpm16020069 - 30 Jan 2026
Abstract
Artificial intelligence (AI) is rapidly reshaping adult cardiac surgery, enabling more accurate diagnostics, personalized risk assessment, advanced surgical planning, and proactive postoperative care. Preoperatively, deep-learning interpretation of ECGs, automated CT/MRI segmentation, and video-based echocardiography improve early disease detection and refine risk stratification beyond conventional tools such as EuroSCORE II and the STS calculator. AI-driven 3D reconstruction, virtual simulation, and augmented-reality platforms enhance planning for structural heart and aortic procedures by optimizing device selection and anticipating complications. Intraoperatively, AI augments robotic precision, stabilizes instrument motion, identifies anatomy through computer vision, and predicts hemodynamic instability via real-time waveform analytics. Integration of the Hypotension Prediction Index into perioperative pathways has already demonstrated reductions in ventilation duration and improved hemodynamic control. Postoperatively, machine-learning early-warning systems and physiologic waveform models predict acute kidney injury, low-cardiac-output syndrome, respiratory failure, and sepsis hours before clinical deterioration, while emerging closed-loop control and remote monitoring tools extend individualized management into the recovery phase. Despite these advances, current evidence is limited by retrospective study designs, heterogeneous datasets, variable transparency, and regulatory and workflow barriers. Nonetheless, rapid progress in multimodal foundation models, digital twins, hybrid OR ecosystems, and semi-autonomous robotics signals a transition toward increasingly precise, predictive, and personalized cardiac surgical care. With rigorous validation and thoughtful implementation, AI has the potential to substantially improve safety, decision-making, and outcomes across the entire cardiac surgical continuum. Full article

36 pages, 5431 KB  
Article
Explainable AI-Driven Quality and Condition Monitoring in Smart Manufacturing
by M. Nadeem Ahangar, Z. A. Farhat, Aparajithan Sivanathan, N. Ketheesram and S. Kaur
Sensors 2026, 26(3), 911; https://doi.org/10.3390/s26030911 - 30 Jan 2026
Viewed by 25
Abstract
Artificial intelligence (AI) is increasingly adopted in manufacturing for tasks such as automated inspection, predictive maintenance, and condition monitoring. However, the opaque, black-box nature of many AI models remains a major barrier to industrial trust, acceptance, and regulatory compliance. This study investigates how explainable artificial intelligence (XAI) techniques can be used to systematically open and interpret the internal reasoning of AI systems commonly deployed in manufacturing, rather than to optimise or compare model performance. A unified explainability-centred framework is proposed and applied across three representative manufacturing use cases encompassing heterogeneous data modalities and learning paradigms: vision-based classification of casting defects, vision-based localisation of metal surface defects, and unsupervised acoustic anomaly detection for machine condition monitoring. Diverse models are intentionally employed as representative black-box decision-makers to evaluate whether XAI methods can provide consistent, physically meaningful explanations independent of model architecture, task formulation, or supervision strategy. A range of established XAI techniques, including Grad-CAM, Integrated Gradients, Saliency Maps, Occlusion Sensitivity, and SHAP, are applied to expose model attention, feature relevance, and decision drivers across visual and acoustic domains. The results demonstrate that XAI enables alignment between model behaviour and physically interpretable defect and fault mechanisms, supporting transparent, auditable, and human-interpretable decision-making. By positioning explainability as a core operational requirement rather than a post hoc visual aid, this work contributes a cross-modal framework for trustworthy AI in manufacturing, aligned with Industry 5.0 principles, human-in-the-loop oversight, and emerging expectations for transparent and accountable industrial AI systems. Full article
(This article belongs to the Section Intelligent Sensors)
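
The abstract lists Grad-CAM among the applied XAI techniques but not the exact models or settings. A generic, self-contained PyTorch sketch of the core mechanism, gradient-weighted activation maps, is shown below; the ResNet-18 backbone and random input are stand-ins, not the paper's defect classifiers.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Any CNN classifier stands in for the surveyed defect models (illustrative).
model = models.resnet18(weights=None).eval()
feats, grads = {}, {}

def fwd_hook(module, inputs, output):
    feats["a"] = output.detach()          # activations of last conv stage

def bwd_hook(module, grad_in, grad_out):
    grads["a"] = grad_out[0].detach()     # gradients w.r.t. those activations

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)           # stand-in image
score = model(x)[0].max()                 # score of the top predicted class
score.backward()

weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # GAP over gradients
cam = F.relu((weights * feats["a"]).sum(dim=1))      # weighted feature sum
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0,1]
```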

21 pages, 1914 KB  
Review
Memristor Synapse—A Device-Level Critical Review
by Sridhar Chandrasekaran, Yao-Feng Chang and Firman Mangasa Simanjuntak
Nanomaterials 2026, 16(3), 179; https://doi.org/10.3390/nano16030179 - 28 Jan 2026
Viewed by 135
Abstract
The memristor has long been known as a nonvolatile memory technology alternative and has recently been explored for neuromorphic computing, owing to its capability to mimic the synaptic plasticity of the human brain. The architecture of a memristor synapse device allows ultra-high-density integration by internetworking with crossbar arrays, which benefits large-scale training and learning using advanced machine-learning algorithms. In this review, we present a statistical analysis of neuromorphic computing device publications from 2018 to 2025, focusing on various memristive systems. Furthermore, we provide a device-level perspective on biomimetic properties in hardware neural networks such as short-term plasticity (STP), long-term plasticity (LTP), spike timing-dependent plasticity (STDP), and spike rate-dependent plasticity (SRDP). Herein, we highlight the utilization of optoelectronic synapses based on 2D materials driven by a sequence of optical stimuli to mimic the plasticity of the human brain, further broadening the scope of memristor controllability by optical stimulation. We also highlight practical applications ranging from MNIST dataset recognition to hardware-based pattern recognition and explore future directions for memristor synapses in healthcare, including artificial cognitive retinal implants, vital organ interfaces, artificial vision systems, and physiological signal anomaly detection. Full article
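
Of the plasticity behaviours listed, STDP has a standard pair-based behavioural model: the weight change decays exponentially with the pre/post spike-time difference. A NumPy sketch of that textbook rule follows; the amplitudes and time constants are illustrative and not tied to any device in the review.

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).

    Pre-before-post (dt > 0) potentiates; post-before-pre depresses.
    Amplitudes and time constants are illustrative, not device-specific.
    """
    dt = np.asarray(dt_ms, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

print(stdp_dw([-40, -10, 10, 40]))  # depression-to-potentiation window
```
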
27 pages, 8004 KB  
Article
A Grid-Enabled Vision and Machine Learning Framework for Safer and Smarter Intersections: Enhancing Real-Time Roadway Intelligence and Vehicle Coordination
by Manoj K. Jha, Pranav K. Jha and Rupesh K. Yadav
Infrastructures 2026, 11(2), 41; https://doi.org/10.3390/infrastructures11020041 - 27 Jan 2026
Viewed by 86
Abstract
Urban intersections are critical nodes for roadway safety, congestion management, and autonomous vehicle coordination. Traditional traffic control systems based on fixed-time signals and static sensors lack adaptability to real-time risks such as red-light violations, near-miss incidents, and multimodal conflicts. This study presents a grid-enabled framework integrating computer vision and machine learning to enhance real-time intersection intelligence and road safety. The system overlays a computational grid on the roadway, processes live video feeds, and extracts dynamic parameters including vehicle trajectories, deceleration patterns, and queue evolution. A novel active learning module improves detection accuracy under low visibility and occlusion, reducing false alarms in collision and violation detection. Designed for edge-computing environments, the framework interfaces with signal controllers to enable adaptive signal timing, proactive collision avoidance, and emergency vehicle prioritization. Case studies from multiple intersections typical of US cities show improved phase utilization, reduced intersection conflicts, and enhanced throughput. A grid-based heatmap visualization highlights spatial risk zones, supporting data-driven decision-making. The proposed framework bridges static infrastructure and intelligent mobility systems, advancing safer, smarter, and more connected roadway operations. Full article
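
The abstract describes overlaying a computational grid on the roadway and deriving a heatmap of spatial risk zones; the exact implementation is not given, but the core bookkeeping reduces to binning event coordinates into grid cells. A toy sketch, with frame size, cell size, and event positions all hypothetical:

```python
import numpy as np

H, W = 720, 1280          # frame size in pixels (illustrative)
cell = 40                 # grid cell size in pixels (illustrative)
grid = np.zeros((H // cell, W // cell))

# (x, y) centroids of detected conflict events, e.g. hard-braking points.
events = [(300, 410), (310, 402), (950, 120)]
for x, y in events:
    grid[y // cell, x // cell] += 1   # accumulate events per cell

risk = grid / max(grid.max(), 1)          # normalised spatial risk heatmap
print(np.argwhere(risk == risk.max()))    # highest-risk cell indices
```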

21 pages, 3516 KB  
Article
Visual Navigation Using Depth Estimation Based on Hybrid Deep Learning in Sparsely Connected Path Networks for Robustness and Low Complexity
by Huda Al-Saedi, Pedram Salehpour and Seyyed Hadi Aghdasi
Appl. Syst. Innov. 2026, 9(2), 29; https://doi.org/10.3390/asi9020029 - 27 Jan 2026
Viewed by 174
Abstract
Robot navigation refers to a robot’s ability to determine its position within a reference frame and plan a path to a target location. Visual navigation, which relies on visual sensors such as cameras, is one approach to this problem. Among visual navigation methods, Visual Teach and Repeat (VT&R) techniques are commonly used. To develop an effective robot navigation framework based on the VT&R method, accurate and fast depth estimation of the scene is essential. In recent years, event cameras have garnered significant interest from machine vision researchers due to their numerous advantages and applicability in various environments, including robotics and drones. However, a key open question is how these cameras can be integrated into a navigation system. The present work uses an attention-based U-Net neural network to estimate scene depth from an event camera; the attention mechanism leads to accurate depth estimation of the scene. This depth information is then used, together with a hybrid deep neural network consisting of a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM), for robot navigation. Simulation results on the DENSE dataset yield an RMSE of 8.15, comparable to other similar methods. This method not only provides good accuracy but also operates at high speed, making it suitable for real-time applications and visual navigation methods based on VT&R. Full article
(This article belongs to the Special Issue AI-Driven Decision Support for Systemic Innovation)
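
The navigation stage combines a CNN with an LSTM over depth estimates; the paper's architecture details are not in the abstract, so the following PyTorch sketch only illustrates the general pattern of per-frame convolutional features feeding a recurrent layer. All layer sizes, the input resolution, and the three-way action head are assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTMNav(nn.Module):
    """Toy CNN+LSTM policy over a sequence of depth maps (shapes illustrative)."""

    def __init__(self, hidden=128, n_actions=3):
        super().__init__()
        self.cnn = nn.Sequential(                        # per-frame features
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),       # -> 32 * 4 * 4 = 512
        )
        self.lstm = nn.LSTM(512, hidden, batch_first=True)  # temporal model
        self.head = nn.Linear(hidden, n_actions)            # steering command

    def forward(self, depth_seq):                        # (B, T, 1, H, W)
        b, t = depth_seq.shape[:2]
        f = self.cnn(depth_seq.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(f)
        return self.head(out[:, -1])                     # act on last step

logits = CNNLSTMNav()(torch.randn(2, 8, 1, 64, 64))      # 8-frame sequences
```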

33 pages, 1798 KB  
Review
Animals as Communication Partners: Ethics and Challenges in Interspecies Language Research
by Hanna Mamzer, Maria Kuchtar and Waldemar Grzegorzewski
Animals 2026, 16(3), 375; https://doi.org/10.3390/ani16030375 - 24 Jan 2026
Viewed by 262
Abstract
Interspecies communication is increasingly recognized as an affective–cognitive process co-created between humans and animals rather than a one-directional transmission of signals. This review integrates findings from ethology, neuroscience, welfare science, behavioral studies, and posthumanist ethics to examine how emotional expression, communicative intentionality, and relational engagement shape understanding across species. Research on primates, dogs, elephants, and marine mammals demonstrates that empathy, consolation, cooperative signaling, and multimodal perception rely on evolutionarily conserved mechanisms, including mirror systems, affective contagion, and oxytocin-mediated bonding. These biological insights intersect with ethical considerations concerning animal agency, methodological responsibility, and the interpretation of non-human communication. Emerging technological tools—bioacoustics, machine vision, and AI-assisted modeling—offer new opportunities to analyze complex vocal and behavioral patterns, yet they require careful contextualization to avoid anthropocentric misclassification. Synthesizing these perspectives, the review proposes a relational framework in which meaning arises through shared emotional engagement, embodied interaction, and ethically grounded interpretation. This approach highlights the importance of welfare-oriented, minimally invasive methodologies and supports a broader shift toward recognizing animals as communicative partners whose emotional lives contribute to scientific knowledge. This review primarily synthesizes empirical and theoretical research on primates and dogs, complemented by selected examples from elephants and marine mammals, which provide the most developed evidence base for the affective–cognitive and relational mechanisms discussed. Full article
(This article belongs to the Section Human-Animal Interactions, Animal Behaviour and Emotion)

19 pages, 1007 KB  
Review
Machine Learning-Powered Vision for Robotic Inspection in Manufacturing: A Review
by David Yevgeniy Patrashko and Vladimir Gurau
Sensors 2026, 26(3), 788; https://doi.org/10.3390/s26030788 - 24 Jan 2026
Viewed by 347
Abstract
Machine learning (ML)-powered vision for robotic inspection has accelerated with smart manufacturing, enabling automated defect detection and classification and real-time process optimization. This review provides insight into the current landscape and state-of-the-art practices in smart manufacturing quality control (QC). More than 50 studies spanning the automotive, aerospace, assembly, and general manufacturing sectors demonstrate that ML-powered vision is technically viable for robotic inspection in manufacturing. The accuracy of defect detection and classification frequently exceeds 95%, with some vision systems achieving 98–100% accuracy in controlled environments. The vision systems predominantly use self-designed convolutional neural network (CNN) architectures, YOLO variants, or traditional ML vision models. However, 77% of implementations remain at the prototype or pilot scale, revealing systematic deployment barriers. A discussion is provided to address the specifics of the vision systems and the challenges that these technologies continue to face. Finally, recommendations for future directions in ML-powered vision for robotic inspection in manufacturing are provided. Full article
(This article belongs to the Section Intelligent Sensors)
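
Since YOLO variants are among the most common detectors the review reports, a minimal inference sketch with the ultralytics package shows the typical usage pattern. The weights file, image name, and confidence threshold are illustrative and not drawn from any surveyed system.

```python
from ultralytics import YOLO

# A generic pretrained model stands in for the custom defect detectors
# surveyed; "part.jpg" and the weights file are hypothetical.
model = YOLO("yolov8n.pt")
results = model("part.jpg", conf=0.5)       # confidence threshold
for box in results[0].boxes:
    cls = results[0].names[int(box.cls)]    # predicted class label
    print(cls, float(box.conf), box.xyxy.tolist())  # score and bounding box
```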

45 pages, 2614 KB  
Systematic Review
Machine Learning, Neural Networks, and Computer Vision in Addressing Railroad Accidents, Railroad Tracks, and Railway Safety: An Artificial Intelligence Review
by Damian Frej, Lukasz Pawlik and Jacek Lukasz Wilk-Jakubowski
Appl. Sci. 2026, 16(3), 1184; https://doi.org/10.3390/app16031184 - 23 Jan 2026
Viewed by 173
Abstract
Ensuring robust railway safety is paramount for efficient and reliable transportation systems, a challenge increasingly addressed through advancements in artificial intelligence (AI). This review paper comprehensively explores the burgeoning role of AI in enhancing the safety of railway operations, focusing on key contributions from machine learning, neural networks, and computer vision. We synthesize current research that leverages these sophisticated AI methodologies to mitigate risks associated with railroad accidents and optimize railroad track management. The scope of this review encompasses diverse applications, including real-time monitoring of track conditions, predictive maintenance for infrastructure components, automated defect detection, and intelligent systems for obstacle and intrusion detection. Furthermore, it delves into the use of AI in assessing human factors, improving signaling systems, and analyzing accident/incident reports for proactive risk management. By examining the integration of advanced analytical techniques into various facets of railway operations, this paper highlights how AI is transforming traditional safety paradigms, paving the way for more resilient, efficient, and secure railway networks worldwide. Full article

26 pages, 4329 KB  
Review
Advanced Sensor Technologies in Cutting Applications: A Review
by Motaz Hassan, Roan Kirwin, Chandra Sekhar Rakurty and Ajay Mahajan
Sensors 2026, 26(3), 762; https://doi.org/10.3390/s26030762 - 23 Jan 2026
Viewed by 321
Abstract
Advances in sensing technologies are increasingly transforming cutting operations by enabling data-driven condition monitoring, predictive maintenance, and process optimization. This review surveys recent developments in sensing modalities for cutting systems, including vibration sensors, acoustic emission sensors, optical and vision-based systems, eddy-current sensors, force sensors, and emerging hybrid/multi-modal sensing frameworks. Each sensing approach offers unique advantages in capturing mechanical, acoustic, geometric, or electromagnetic signatures related to tool wear, process instability, and fault development, while also showing modality-specific limitations such as noise sensitivity, environmental robustness, and integration complexity. Recent trends show a growing shift toward hybrid and multi-modal sensor fusion, where data from multiple sensors are combined using advanced data analytics and machine learning to improve diagnostic accuracy and reliability under changing cutting conditions. The review also discusses how artificial intelligence, Internet of Things connectivity, and edge computing enable scalable, real-time monitoring solutions, along with the challenges related to data needs, computational costs, and system integration. Future directions highlight the importance of robust fusion architectures, physics-informed and explainable models, digital twin integration, and cost-effective sensor deployment to accelerate adoption across various manufacturing environments. Overall, these advancements position advanced sensing and hybrid monitoring strategies as key drivers of intelligent, Industry 4.0-oriented cutting processes. Full article
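
The review's central trend is feature-level fusion of multiple sensing modalities followed by an ML classifier. A toy scikit-learn sketch of that early-fusion pattern follows; all feature names, array shapes, and labels are synthetic stand-ins rather than data from any surveyed system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-window features from two modalities (synthetic data):
vib = rng.normal(size=(200, 8))    # e.g. vibration RMS, kurtosis, band energy
ae = rng.normal(size=(200, 6))     # e.g. acoustic-emission counts, energy
wear = rng.integers(0, 2, 200)     # tool state label: 0 = sharp, 1 = worn

X = np.hstack([vib, ae])           # feature-level (early) fusion
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, wear)
print(clf.score(X, wear))          # training accuracy on toy data
```

A real pipeline would of course evaluate on held-out windows rather than the training set, and would extract the features from actual sensor signals.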

22 pages, 4982 KB  
Article
Real-Time Analysis of Concrete Placement Progress Using Semantic Segmentation
by Zifan Ye, Linpeng Zhang, Yu Hu, Fengxu Hou, Rui Ma, Danni Luo and Wenqian Geng
Buildings 2026, 16(2), 434; https://doi.org/10.3390/buildings16020434 - 20 Jan 2026
Viewed by 122
Abstract
Concrete arch dams represent a predominant dam type in water conservancy and hydropower projects in China. The control of concrete placement progress during construction directly impacts project quality and construction efficiency. Traditional manual monitoring methods, characterized by delayed response and strong subjectivity, struggle to meet the demands of modern intelligent construction management. This study introduces machine vision technology to monitor the concrete placement process and establishes an intelligent analysis system for construction scenes based on deep learning. By comparing the performance of U-Net and DeepLabV3+ semantic segmentation models in complex construction environments, the U-Net model, achieving an IoU of 89%, was selected to identify vibrated and non-vibrated concrete areas, thereby optimizing the concrete image segmentation algorithm. A comprehensive real-time analysis method for placement progress was developed, enabling automatic ternary classification and progress calculation for key construction stages, including concrete unloading, spreading, and vibration. In a continuous placement case study of Monolith No. 3 at a project site, the model’s segmentation results showed only an 8.2% error compared with manual annotations, confirming the method’s real-time capability and reliability. The research outcomes provide robust data support for intelligent construction management and hold significant practical value for enhancing the quality and efficiency of hydraulic engineering construction. Full article
(This article belongs to the Section Building Structures)
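
The reported 89% IoU is the standard intersection-over-union metric for segmentation masks. For reference, a minimal NumPy implementation on toy binary masks (the mask shapes and regions are illustrative):

```python
import numpy as np

def iou(pred, truth):
    """Intersection-over-Union for binary masks (e.g. vibrated vs. not)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0  # two empty masks count as a match

a = np.zeros((64, 64))
a[8:40, 8:40] = 1        # toy predicted region
b = np.zeros((64, 64))
b[12:44, 12:44] = 1      # toy ground-truth region
print(round(iou(a, b), 3))
```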

34 pages, 7495 KB  
Article
Advanced Consumer Behaviour Analysis: Integrating Eye Tracking, Machine Learning, and Facial Recognition
by José Augusto Rodrigues, António Vieira de Castro and Martín Llamas-Nistal
J. Eye Mov. Res. 2026, 19(1), 9; https://doi.org/10.3390/jemr19010009 - 19 Jan 2026
Viewed by 193
Abstract
This study presents DeepVisionAnalytics, an integrated framework that combines eye tracking, OpenCV-based computer vision (CV), and machine learning (ML) to support objective analysis of consumer behaviour in visually driven tasks. Unlike conventional self-reported surveys, which are prone to cognitive bias, recall errors, and social desirability effects, the proposed approach relies on direct behavioural measurements of visual attention. The system captures gaze distribution and fixation dynamics during interaction with products or interfaces. It uses AOI-level eye tracking metrics as the sole behavioural signal to infer candidate choice under constrained experimental conditions. In parallel, OpenCV and ML perform facial analysis to estimate demographic attributes (age, gender, and ethnicity). These attributes are collected independently and linked post hoc to gaze-derived outcomes. Demographics are not used as predictive features for choice inference. Instead, they are used as contextual metadata to support stratified, segment-level interpretation. Empirical results show that gaze-based inference closely reproduces observed choice distributions in short-horizon, visually driven tasks. Demographic estimates enable meaningful post hoc segmentation without affecting the decision mechanism. Together, these results show that multimodal integration can move beyond descriptive heatmaps. The platform produces reproducible decision-support artefacts, including AOI rankings, heatmaps, and segment-level summaries, grounded in objective behavioural data. By separating the decision signal (gaze) from contextual descriptors (demographics), this work contributes a reusable end-to-end platform for marketing and UX research. It supports choice inference under constrained conditions and segment-level interpretation without demographic priors in the decision mechanism. Full article
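
The abstract describes choice inference from AOI-level gaze metrics alone, but the concrete metrics and decision rule are not specified. The pandas sketch below therefore assumes a simple dwell-time rule; the fixation log and the longest-dwell heuristic are hypothetical illustrations, not the paper's algorithm.

```python
import pandas as pd

# Hypothetical fixation log: one row per fixation, mapped to an AOI (product).
fix = pd.DataFrame({
    "aoi": ["A", "B", "A", "A", "C", "B", "A"],
    "duration_ms": [220, 180, 340, 260, 150, 200, 310],
})

metrics = fix.groupby("aoi")["duration_ms"].agg(
    dwell_ms="sum", fixations="count", mean_fix_ms="mean")
print(metrics.sort_values("dwell_ms", ascending=False))  # AOI ranking

# Toy decision rule: the AOI with the longest total dwell time.
print("inferred choice:", metrics["dwell_ms"].idxmax())
```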

24 pages, 39327 KB  
Article
Forest Surveying with Robotics and AI: SLAM-Based Mapping, Terrain-Aware Navigation, and Tree Parameter Estimation
by Lorenzo Scalera, Eleonora Maset, Diego Tiozzo Fasiolo, Khalid Bourr, Simone Cottiga, Andrea De Lorenzo, Giovanni Carabin, Giorgio Alberti, Alessandro Gasparetto, Fabrizio Mazzetto and Stefano Seriani
Machines 2026, 14(1), 99; https://doi.org/10.3390/machines14010099 - 14 Jan 2026
Viewed by 204
Abstract
Forest surveying and inspection face significant challenges due to unstructured environments, variable terrain conditions, and the high costs of manual data collection. Although mobile robotics and artificial intelligence offer promising solutions, reliable autonomous navigation in forests, terrain-aware path planning, and tree parameter estimation remain open challenges. In this paper, we present the results of the AI4FOREST project, which addresses these issues through three main contributions. First, we develop an autonomous mobile robot, integrating SLAM-based navigation, 3D point cloud reconstruction, and a vision-based deep learning architecture to enable tree detection and diameter estimation. This system demonstrates the feasibility of generating a digital twin of the forest while operating autonomously. Second, to overcome the limitations of classical navigation approaches in heterogeneous natural terrains, we introduce a machine learning-based surrogate model of wheel–soil interaction, trained on a large synthetic dataset derived from classical terramechanics. Compared to purely geometric planners, the proposed model enables realistic dynamics simulation and improves navigation robustness by accounting for terrain–vehicle interactions. Finally, we investigate the impact of point cloud density on the accuracy of forest parameter estimation, identifying the minimum sampling requirements needed to extract tree diameters and heights. This analysis provides guidance for balancing sensor performance, robot speed, and operational costs. Overall, the AI4FOREST project advances the state of the art in autonomous forest monitoring by jointly addressing SLAM-based mapping, terrain-aware navigation, and tree parameter estimation. Full article
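
The paper's diameter estimation is vision-based, but a common point-cloud baseline for extracting diameter at breast height (DBH) is a least-squares circle fit to a trunk cross-section slice. The following sketch of the algebraic (Kåsa) fit on synthetic points is offered only as a hedged illustration of the task, not as the paper's method.

```python
import numpy as np

def fit_circle(xy):
    """Kasa least-squares circle fit; returns (cx, cy, r)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])   # linearised circle model
    b = x**2 + y**2
    (cx2, cy2, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = cx2 / 2, cy2 / 2
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

# Toy breast-height slice of a trunk point cloud (radius 0.18 m plus noise).
t = np.linspace(0, 2 * np.pi, 60)
pts = np.column_stack([0.18 * np.cos(t), 0.18 * np.sin(t)])
pts += np.random.default_rng(1).normal(scale=0.004, size=pts.shape)
cx, cy, r = fit_circle(pts)
print("estimated DBH (m):", round(2 * r, 3))
```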

24 pages, 4100 KB  
Article
Design and Error Calibration of a Machine Vision-Based Laser 2D Tracking System
by Dabao Lao, Xiaojian Wang and Tianqi Chen
Sensors 2026, 26(2), 570; https://doi.org/10.3390/s26020570 - 14 Jan 2026
Viewed by 312
Abstract
A laser tracker is an essential tool in the field of precise geometric measurement. Its fundamental operating principle is a dual-axis rotating device that steers the laser beam to continuously align with and measure the attitude of a cooperative target. Such systems provide numerous benefits, including a broad measuring range, high precision, outstanding real-time performance, and ease of use. To solve the issue of low beam recovery efficiency in typical laser trackers, this research presents a two-dimensional laser tracking system that incorporates a machine vision module. The system uses a unique off-axis optical design in which the distance-measuring and laser-tracking paths are independent, decreasing the system’s dependency on optical coaxiality and mechanical machining precision. A tracking head error calibration method based on singular value decomposition (SVD) is introduced, using optical axis point cloud data obtained from experiments on various components for geometric fitting. A complete prototype system was constructed and subjected to accuracy testing. Experimental results show that the proposed system achieves a relative positioning accuracy of less than 0.2 mm (spatial root mean square error (RMSE) = 0.189 mm) at the maximum working distance of 1.5 m, providing an effective solution for the design of high-precision laser tracking systems. Full article
(This article belongs to the Section Physical Sensors)
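
The calibration fits geometry to optical-axis point clouds via singular value decomposition. The paper's full procedure is not given in the abstract, but the core SVD step, fitting a best-fit 3D line whose direction is the principal right singular vector of the centred points, can be sketched as follows (the synthetic data are illustrative):

```python
import numpy as np

def fit_axis(points):
    """Least-squares 3D line through a point cloud via SVD.

    Returns (centroid, unit direction); the first right singular vector
    of the centred points gives the best-fit axis direction.
    """
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[0]

# Toy optical-axis samples: points along z with small transverse noise.
rng = np.random.default_rng(2)
z = np.linspace(0, 1.5, 50)[:, None]           # out to the 1.5 m range
pts = np.hstack([rng.normal(scale=1e-4, size=(50, 2)), z])
centroid, direction = fit_axis(pts)
print(direction)   # approximately [0, 0, 1]
```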

16 pages, 8302 KB  
Article
A Smart Vision-Aided RICH (Robotic Interface Control and Handling) System for VULCAN
by Albert P. Song, Alice Tang, Dunji Yu and Ke An
Hardware 2026, 4(1), 1; https://doi.org/10.3390/hardware4010001 - 14 Jan 2026
Viewed by 137
Abstract
High-flux neutron beams and high-efficiency detectors enable rapid neutron diffraction measurements at the Engineering Materials Diffractometer (VULCAN) at the Spallation Neutron Source (SNS), Oak Ridge National Laboratory (ORNL). To optimize beam time utilization, efficient sample exchange, alignment, and automated measurements are essential. Recent advances in artificial intelligence (AI) have expanded the capabilities of robotic systems. Here, we report the development of a Robotic Interface Control and Handling (RICH) system for sample handling at VULCAN, designed to support high-throughput experiments and reduce overhead time. The RICH system employs a six-axis desktop robot integrated with AI-based computer vision models capable of recognizing and localizing samples in real time from instrument and depth-resolving cameras. Vision algorithms combine these detections to align samples with designated measurement positions or place them within complex sample environments such as furnaces. This integration of machine learning-assisted vision with robotic handling demonstrates the feasibility of autonomous sample detection and preparation, offering a pathway toward fully unmanned neutron scattering experiments. Full article
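
The abstract mentions combining detections from instrument and depth-resolving cameras to localize samples; one standard building block for that is pinhole back-projection of a detected pixel plus its depth into a 3D camera-frame point. A sketch with hypothetical intrinsics (not VULCAN's actual camera parameters):

```python
import numpy as np

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection of a pixel + depth to a 3D camera-frame point."""
    x = (u - cx) / fx * depth_m
    y = (v - cy) / fy * depth_m
    return np.array([x, y, depth_m])

# Hypothetical intrinsics and a detected sample centroid at 0.42 m depth.
fx = fy = 615.0
cx, cy = 320.0, 240.0
p_cam = deproject(352, 268, 0.42, fx, fy, cx, cy)
print(p_cam)  # metres in the camera frame; a hand-eye calibration transform
              # would then map this point into the robot's coordinate system
```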
