Search Results (1,224)

Search Parameters:
Keywords = simultaneous localization and mapping

23 pages, 6668 KB  
Article
Development of a Visual SLAM-Based Autonomous UAV System for Greenhouse Plant Monitoring
by Jing-Heng Lin and Ta-Te Lin
Drones 2026, 10(3), 205; https://doi.org/10.3390/drones10030205 - 15 Mar 2026
Abstract
Autonomous monitoring is essential for precision agriculture in greenhouses, yet deploying unmanned aerial vehicles (UAVs) in confined, GPS-denied environments remains limited by payload, power, and cost constraints. This study developed and validated an autonomous UAV system for reliable, low-cost operation in such conditions. The proposed system employs a dual-link edge-computing architecture: a lightweight onboard controller handles flight control and sensor acquisition, while visual simultaneous localization and mapping (V-SLAM) is offloaded to an edge computer via the FPV video link. Phenotyping (flower detection and tracking/counting) is performed offline from the side-view RGB stream and does not participate in the flight control loop. Using muskmelon (Cucumis melo L.) flower development as a case study, the UAV autonomously executed daily missions for 27 days in a commercial greenhouse, performing flower detection and tracking to monitor phenological dynamics. Localization and control accuracy were evaluated against a validated UWB reference system, achieving 5.4–8.0 cm 2D RMSE for trajectory tracking and 12.7 cm translation RMSE for greenhouse mapping. This work demonstrates a practical architecture for autonomous monitoring in GPS-denied agricultural environments, with operational boundaries characterized through the sustained field deployment. The system’s design principles may extend to other indoor or communication-limited scenarios requiring lightweight, intelligent robotic operation.
(This article belongs to the Section Drones in Agriculture and Forestry)
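The 2D trajectory-tracking accuracy quoted above is a root-mean-square error against a UWB reference. A minimal sketch of that metric, assuming time-aligned x-y position samples (the arrays and names here are illustrative, not the authors' code):

```python
import numpy as np

def rmse_2d(est, ref):
    """RMSE between estimated and reference 2D positions.

    est, ref: (N, 2) arrays of time-aligned x-y positions in metres.
    """
    est = np.asarray(est, dtype=float)
    ref = np.asarray(ref, dtype=float)
    # Per-sample Euclidean error in the horizontal plane.
    err = np.linalg.norm(est - ref, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

# Illustrative check: a constant 5 cm offset yields a 5 cm RMSE.
ref = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
est = ref + np.array([0.05, 0.0])
```

In practice the estimated and reference trajectories must first be time-synchronized and expressed in a common frame before this metric is meaningful.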

21 pages, 5612 KB  
Article
A Single-Beacon Underwater Positioning Method with Sensor Trajectory Systematic Error Calibration
by Yun Ye, Hongyang He, Feng Zha, Hongqiong Tang, Jingshu Li, Kaihui Xu and Yangzi Chen
J. Mar. Sci. Eng. 2026, 14(6), 545; https://doi.org/10.3390/jmse14060545 - 14 Mar 2026
Abstract
Underwater acoustic single-beacon positioning technology achieves localization by integrating vehicle motion with range measurements acquired from acoustic ranging devices, offering advantages such as system simplicity, flexible deployment, and high cost-effectiveness. However, its accuracy is limited by weak initial observability and degraded observation geometry. To address this, a sensor data correction and collaborative optimization framework is proposed. A hybrid outlier rejection strategy first suppresses acoustic multipath and sensor noise. To compensate for systematic sensor errors ignored in conventional Virtual Long Baseline methods, an affine transformation maps the true trajectory to the sensor-indicated one, reformulating error compensation as a correction to virtual beacon coordinates. To further mitigate the accuracy degradation caused by degenerated geometric configurations, this paper proposes a collaborative algorithm that integrates Chan initialization with affine transformation optimization. This approach formulates the positioning problem as an optimization task, simultaneously estimating the position information and affine transformation parameters through iterative refinement to achieve high-precision localization. The process begins with Chan’s algorithm, which provides an initial estimate from the virtual sensor array. This estimate is then refined under affine constraints to achieve high-precision localization. Experimental results show the method improves positioning accuracy by 36.30% compared to baseline algorithms, demonstrating significant performance enhancement.
(This article belongs to the Section Ocean Engineering)
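The virtual-array idea underlying such single-beacon methods — one beacon ranged from several dead-reckoned poses acts like a baseline of virtual sensors — reduces, in its simplest noise-free form, to a nonlinear least-squares fix. A Gauss-Newton sketch under those simplifying assumptions (no affine correction, names illustrative):

```python
import numpy as np

def locate_beacon(poses, ranges, x0, iters=20):
    """Gauss-Newton fit of a fixed beacon position from ranges taken
    at several dead-reckoned vehicle poses (the 'virtual array' idea).

    poses: (N, 2) sensor positions; ranges: (N,) measured distances.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(poses - x, axis=1)   # predicted ranges
        J = (x - poses) / d[:, None]            # Jacobian of each range w.r.t. x
        r = ranges - d                          # range residuals
        x = x + np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# Synthetic example: beacon at (3, 4), noise-free ranges from 4 poses.
poses = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
beacon = np.array([3.0, 4.0])
ranges = np.linalg.norm(poses - beacon, axis=1)
est = locate_beacon(poses, ranges, x0=[5.0, 5.0])
```

The paper's contribution lies precisely where this sketch stops: systematic trajectory errors make `poses` inconsistent with the true geometry, which the affine-transformation optimization is designed to absorb.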

17 pages, 1708 KB  
Article
Robust Visual–Inertial SLAM and Biomass Assessment for AUVs in Marine Ranching
by Yangyang Wang, Ziyu Liu, Tianzhu Gao and Xijun Du
Symmetry 2026, 18(3), 495; https://doi.org/10.3390/sym18030495 - 13 Mar 2026
Abstract
Environmental perception is a cornerstone for autonomous underwater vehicles (AUVs) to achieve robust self-localization and scene understanding, which are pivotal for the intelligent management of marine ranching. However, underwater image degradation and weak-textured scenes significantly hinder reliable self-localization and fine-grained environmental perception. To address the perceptual asymmetry arising from these challenges, this paper proposes a robust visual–inertial simultaneous localization and mapping (SLAM) and biomass assessment scheme for marine ranching. Specifically, we first propose a robust tightly coupled underwater visual–inertial localization scheme, which leverages a multi-sensor fusion strategy to solve the image degradation problem of localization in complex underwater environments. Furthermore, we propose a novel underwater scene perception method, which enables the simultaneous visual reconstruction of aquaculture species and the quantitative mapping of their spatial distribution in marine ranching. Finally, we develop a low-cost, agile, and portable multisensor-integrated system that consolidates autonomous localization and aquaculture biomass assessment modules, with its performance validated through extensive real-world underwater experiments. The experimental results demonstrate that the proposed methods can effectively overcome the interference of complex underwater environments and provide high-precision perception support for both AUV state estimation and aquaculture asset management.
(This article belongs to the Special Issue Symmetry in Next-Generation Intelligent Information Technologies)

20 pages, 14849 KB  
Article
MCViM-YOLO: Remote Sensing Vehicle Detection for Sustainable Intelligent Transportation
by Kairui Zhang, Ningning Zhu, Fuqing Zhao and Qiuyu Zhang
Sustainability 2026, 18(6), 2836; https://doi.org/10.3390/su18062836 - 13 Mar 2026
Abstract
Vehicle detection is a core task in smart city perception management and an important technical support for sustainable urban development and intelligent transportation optimization. In high-resolution unmanned aerial vehicle (UAV) remote sensing images, it faces challenges such as variable target scales, severe occlusion, and difficulty in modeling long-range dependencies. To address these issues, this study proposes the MCViM-YOLO algorithm, which integrates the local perception advantage of convolution with the global modeling capability of the state space model (Mamba). Based on YOLOv12, the algorithm reconstructs the neck network: it introduces the Mix-Mamba module (parallel multi-scale convolution and selective state space model) to simultaneously capture local details and global spatial dependencies, adopts the dual-factor calibration fusion module (DCFM) to adaptively fuse heterogeneous features, and employs a dual-branch attention detection head (DADH) to optimize the prediction of difficult samples (e.g., occluded, small-scale vehicles). Experiments on the VEBAI dataset demonstrate that our proposed model achieves an mAP@0.5 of 92.391% and a recall rate of 86.070%, with a computational complexity of 10.41 GFLOPs. The results show that the proposed method effectively improves the accuracy and efficiency of vehicle detection in complex remote sensing scenarios, provides technical support for traffic flow monitoring, low-carbon urban planning, and other sustainable applications, and offers an innovative paradigm for the deep integration of CNN and state space models with both theoretical research value and engineering application prospects.
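The mAP@0.5 figure reported above counts a detection as correct when its box overlaps a ground-truth box with intersection-over-union of at least 0.5. A minimal IoU sketch (axis-aligned boxes, illustrative values):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2);
    the overlap criterion behind detection metrics such as mAP@0.5."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# Two unit boxes offset by half a width: intersection 0.5, union 1.5.
v = iou((0, 0, 1, 1), (0.5, 0, 1.5, 1))
```

For small, occluded vehicles — the difficult samples the DADH head targets — even modest localization error can push IoU below the 0.5 threshold, which is why detection heads for such targets matter.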

20 pages, 24767 KB  
Article
VINA-SLAM: A Voxel-Based Inertial and Normal-Aligned LiDAR–IMU SLAM
by Ruyang Zhang and Bingyu Sun
Sensors 2026, 26(6), 1810; https://doi.org/10.3390/s26061810 - 13 Mar 2026
Abstract
Environments with sparse or repetitive geometric structures, such as long corridors and narrow stairwells, remain challenging for LiDAR–inertial simultaneous localization and mapping (LiDAR–IMU SLAM) due to insufficient geometric observability and unreliable data associations. To address these issues, we propose VINA-SLAM, a novel LiDAR–IMU SLAM framework that constructs a unified global voxel map to explicitly exploit structural consistency. VINA-SLAM continuously tracks surface normals stored in the global voxel map using a normal-guided correspondence strategy, enabling stable scan-to-map alignment in degenerate scenes. Furthermore, a tangent-space metric is introduced to supplement missing rotational constraints around planar regions, providing reliable initial pose estimates for local optimization. A tightly coupled sliding-window bundle adjustment is then formulated by jointly incorporating IMU factors, voxel normal consistency factors, and planar regularization terms. In particular, the minimum eigenvalue of each voxel’s covariance is used as a statistically principled planar constraint, improving the Hessian conditioning and cross-view geometric consistency. The proposed system directly aligns raw LiDAR scans to the voxelized map without explicit feature extraction or loop closure. Experiments on 25 sequences from the HILTI and MARS-LVIG datasets show that VINA-SLAM reduces ATE by 25–40% on average while maintaining real-time performance at 10 Hz in the evaluated geometrically degenerate environments.
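The minimum-eigenvalue planarity idea mentioned above is standard in voxel-map SLAM: the smallest eigenvalue of a voxel's point covariance measures how flat the points are, and its eigenvector estimates the surface normal. A short sketch of that computation (illustrative, not the VINA-SLAM code):

```python
import numpy as np

def voxel_plane_stats(points):
    """Surface statistics for one voxel's points: the covariance's
    smallest eigenvalue measures planarity (near zero for a plane),
    and the corresponding eigenvector estimates the surface normal.
    """
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)          # 3x3 covariance of the voxel's points
    w, v = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = v[:, 0]             # eigenvector of the smallest eigenvalue
    return w[0], normal

# Points sampled on the z = 0 plane: min eigenvalue ~ 0, normal ~ +/-z.
rng = np.random.default_rng(0)
flat = np.column_stack([rng.uniform(-1, 1, 100),
                        rng.uniform(-1, 1, 100),
                        np.zeros(100)])
lam_min, n = voxel_plane_stats(flat)
```

Using `lam_min` directly as a constraint weight, as the abstract describes, ties the strength of each planar factor to how statistically plane-like the voxel actually is.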

30 pages, 3812 KB  
Review
Video-Based 3D Reconstruction: A Review of Photogrammetry and Visual SLAM Approaches
by Ali Javadi Moghadam, Abbas Kiani, Reza Naeimaei, Shirin Malihi and Ioannis Brilakis
J. Imaging 2026, 12(3), 128; https://doi.org/10.3390/jimaging12030128 - 13 Mar 2026
Abstract
Three-dimensional (3D) reconstruction using images is one of the most significant topics in computer vision and photogrammetry, with wide-ranging applications in robotics, augmented reality, and mapping. This study investigates methods of 3D reconstruction using video (especially monocular video) data and focuses on techniques such as Structure from Motion (SfM), Multi-View Stereo (MVS), Visual Simultaneous Localization and Mapping (V-SLAM), and videogrammetry. Based on a statistical analysis of SCOPUS records, these methods collectively account for approximately 6863 journal publications up to the end of 2024. Among these, about 80 studies are analyzed in greater detail to identify trends and advancements in the field. The study also shows that the use of video data for real-time 3D reconstruction is commonly addressed through two main approaches: photogrammetry-based methods, which rely on precise geometric principles and offer high accuracy at the cost of greater computational demand; and V-SLAM methods, which emphasize real-time processing and provide higher speed. Furthermore, the application of IMU data and other indicators, such as color quality and keypoint detection, for selecting suitable frames for 3D reconstruction is investigated. Overall, this study compiles and categorizes video-based reconstruction methods, emphasizing the critical step of keyframe extraction. By summarizing and illustrating the general approaches, the study aims to clarify and facilitate the entry path for researchers interested in this area. Finally, the paper offers targeted recommendations for improving keyframe extraction methods to enhance the accuracy and efficiency of real-time video-based 3D reconstruction, while also outlining future research directions in addressing challenges like dynamic scenes, reducing computational costs, and integrating advanced learning-based techniques.
(This article belongs to the Section Computer Vision and Pattern Recognition)
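The keyframe-extraction step the review emphasizes often combines an image-quality score (e.g. a blur measure) with a spacing rule so keyframes spread along the video. A toy sketch of one common convention — variance of a 4-neighbour Laplacian as the sharpness score — with all names and thresholds illustrative:

```python
import numpy as np

def sharpness(gray):
    """Variance of a 4-neighbour Laplacian: a common blur score for
    ranking candidate keyframes (higher = sharper)."""
    g = np.asarray(gray, dtype=float)
    lap = (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def select_keyframes(frames, min_gap=5, top_k=3):
    """Greedy keyframe pick: rank frames by sharpness, then enforce a
    minimum temporal gap between chosen indices."""
    order = sorted(range(len(frames)), key=lambda i: -sharpness(frames[i]))
    chosen = []
    for i in order:
        if all(abs(i - j) >= min_gap for j in chosen):
            chosen.append(i)
        if len(chosen) == top_k:
            break
    return sorted(chosen)

# Ten flat (blurred) frames with two sharp, textured ones at 2 and 8.
rng = np.random.default_rng(0)
frames = [np.zeros((16, 16)) for _ in range(10)]
frames[2] = rng.uniform(0, 1, (16, 16))
frames[8] = rng.uniform(0, 1, (16, 16))
picked = select_keyframes(frames, min_gap=5, top_k=2)
```

Real pipelines typically add the other indicators the review mentions — keypoint counts, color quality, and IMU-derived motion — to the ranking score.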

20 pages, 3093 KB  
Article
Predominantly Independent Genetic Control Between Growth and Visceral White Nodules Disease Resistance Revealed by High-Density Linkage Map and QTL Mapping in Larimichthys crocea
by Ting Ye, Dandan Guo, Yilian Zhou, Bao Lou and Feng Liu
Int. J. Mol. Sci. 2026, 27(6), 2531; https://doi.org/10.3390/ijms27062531 - 10 Mar 2026
Abstract
The large yellow croaker (Larimichthys crocea) is a key mariculture species in China; however, its industry is threatened by visceral white nodules disease (VWND) caused by the bacterium Pseudomonas plecoglossicida. A significant challenge in breeding is the potential genetic trade-off between growth and disease resistance. To investigate their genetic relationship, we constructed a high-density SNP-based genetic linkage map for L. crocea using an F1 full-sib family (n = 150). The map comprised 24 linkage groups with 32,429 bin markers and an average interval of 0.051 cM. Based on this map, we conducted QTL mapping for one yield trait (body weight), eight morphological traits, and three VWND-resistance traits (survival time, AT; spleen and liver pathogen loads). Phenotypic analysis revealed strong integration among growth traits and a moderate positive correlation between growth traits and AT. QTL mapping identified 53 QTLs for growth (PVE = 0.14–5.83%) and 20 for resistance (PVE = 0.78–8.93%). Notably, only two genomic intervals exhibited co-localization between a morphological trait (AL or BL) and AT, each explaining a modest phenotypic variance (0.66–5.99%). The largest-effect QTLs for growth and resistance were mapped to distinct linkage groups, and candidate genes within the co-localized intervals (Unc5d, SCN5A, HUS1) are involved in fundamental cellular processes rather than core growth or immune pathways. These results suggest that yield, morphological, and VWND-resistance traits in L. crocea are largely under independent genetic control within the studied family, indicating that simultaneous improvement of growth and disease resistance is feasible. This study provides a molecular basis for breeding strategies aimed at overcoming the trait trade-off bottleneck in this economically vital species.
(This article belongs to the Special Issue Genomic, Transcriptomic, and Epigenetic Approaches in Fish Research)

36 pages, 15804 KB  
Article
An RGB-D SLAM Algorithm Based on a Multi-Layer Refraction Model for Underwater Scenarios
by Xianshuai Sun, Yabiao Wang, Yuming Zhao, Zhigang Li, Zhen He and Xiaohui Wang
J. Mar. Sci. Eng. 2026, 14(5), 485; https://doi.org/10.3390/jmse14050485 - 3 Mar 2026
Abstract
The use of depth cameras in low-texture environments is crucial for ensuring the feasibility of visual simultaneous localization and mapping (SLAM) algorithms. Nevertheless, in underwater scenarios, light propagation through multi-layered media gives rise to refractive distortion. Directly utilizing distorted images acquired by depth cameras for visual SLAM computations inevitably introduces substantial errors in localization and mapping. Additionally, the waterproof glass mounted in front of the depth camera renders traditional air-based camera calibration ineffective, thereby introducing calibration inaccuracies. To mitigate these challenges, we propose a comprehensive SLAM algorithm framework for underwater multi-layered media refraction correction based on RGB-D cameras. Firstly, a multi-layer refraction calibration module is developed to calibrate the depth camera in air. Subsequently, the calibrated parameters are leveraged to construct an underwater multi-layer refraction correction module, which retrieves undistorted color images and aligned depth images. Finally, the corrected color images and depth images are fed into the front-end of the visual SLAM algorithm to generate dense point cloud maps. Both simulation and real-world experiments are conducted to validate the accuracy of the multi-layer refraction calibration results and the precision of the dense point clouds obtained via multi-layer refraction correction. Furthermore, the superiority of the proposed method is demonstrated through both qualitative and quantitative evaluations. Full article
(This article belongs to the Section Ocean Engineering)
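The multi-layer refraction the abstract describes is governed by Snell's law applied at each flat interface (air–glass, then glass–water for a flat port). A vector-form sketch of that ray bending, with illustrative refractive indices (not the paper's calibration model):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Snell refraction of unit direction d at a flat interface with
    unit normal n (pointing against d), from index n1 into index n2."""
    d = d / np.linalg.norm(d)
    cos_i = -np.dot(n, d)
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                      # total internal reflection
    return r * d + (r * cos_i - np.sqrt(k)) * n

# Air -> glass -> water through a flat port; ray travels in -z at 30 deg.
n_air, n_glass, n_water = 1.000, 1.517, 1.333   # illustrative indices
d0 = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
nrm = np.array([0.0, 0.0, 1.0])
d1 = refract(d0, nrm, n_air, n_glass)    # into the port glass
d2 = refract(d1, nrm, n_glass, n_water)  # into the water
```

For parallel flat interfaces the glass index cancels out of the final direction (sin of the exit angle is n_air/n_water times sin of the entry angle), but the glass thickness still shifts the ray laterally — one reason per-pixel correction, rather than a single global rescaling, is needed.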

30 pages, 8087 KB  
Article
A Novel SLAM Approach for Trajectory Generation of a Dual-Arm Mobile Robot (DAMR) Using Sensor Fusion
by Narendra Kumar Kolla and Pandu Ranga Vundavilli
Automation 2026, 7(2), 42; https://doi.org/10.3390/automation7020042 - 3 Mar 2026
Abstract
Simultaneous Localization and Mapping (SLAM) is essential for autonomous movement in intelligent robotic systems. Traditional SLAM using a single sensor, such as an Inertial Measurement Unit (IMU), faces challenges including noise and drift. This paper introduces a novel Cartographer-based SLAM approach for DAMR trajectory generation in indoor environments to reduce drift errors and improve localization accuracy. This SLAM approach integrates multi-sensor data with extended Kalman filter (EKF) fusion from wheel odometry, an RGB-D camera (RTAB-Map), and an IMU for precise mapping with DAMR trajectory generation and is compared with the heading reference trajectory generated by robot pose estimation and frame transformation. This system is implemented in the Robot Operating System (ROS 2) for coordinated data acquisition, processing, and visualization. Experimental verification showed that the generated DAMR trajectories track the reference trajectory closely and that drift errors are reduced. The experimental results revealed that the DAMR trajectory with multi-sensor data integration using the EKF effectively improved the positioning accuracy and robustness of the system. The proposed approach shows improved alignment with the reference trajectory, yielding a mean displacement error of 0.352% and an absolute trajectory error of 0.007 m, highlighting the effectiveness of the fusion approach for accurate indoor robot navigation.
(This article belongs to the Section Robotics and Autonomous Systems)
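The EKF fusion step described above follows the standard predict-update cycle: propagate the state with a motion input (wheel odometry), then correct it with a measurement (e.g. an RGB-D- or IMU-derived pose). A minimal linear sketch of that cycle — the core of the EKF, with the models and values purely illustrative, not the paper's configuration:

```python
import numpy as np

def kf_step(x, P, u, z, F, B, H, Q, R):
    """One predict-update cycle of a Kalman filter fusing a motion
    input u (e.g. wheel odometry) with a measurement z (e.g. a camera
    pose fix). Linear sketch of the EKF idea."""
    # Predict with the motion model.
    x = F @ x + B @ u
    P = F @ P @ F.T + Q
    # Update with the measurement.
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# 1-D position state, stationary robot, three noisy fixes around 1.0 m.
F = np.eye(1); B = np.eye(1); H = np.eye(1)
Q = np.array([[1e-4]]); R = np.array([[1e-2]])
x, P = np.zeros(1), np.eye(1)
for z in [1.02, 0.98, 1.01]:
    x, P = kf_step(x, P, u=np.array([0.0]), z=np.array([z]),
                   F=F, B=B, H=H, Q=Q, R=R)
```

In a full EKF, `F` and `H` become Jacobians of nonlinear motion and measurement models, but the gain computation and covariance bookkeeping are unchanged.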

16 pages, 2080 KB  
Article
Lidar–Vision Depth Fusion for Robust Loop Closure Detection in SLAM Systems
by Bingzhuo Liu, Panlong Wu, Rongting Chen, Yidan Zheng and Mengyu Li
Machines 2026, 14(3), 282; https://doi.org/10.3390/machines14030282 - 3 Mar 2026
Abstract
Loop Closure Detection (LCD) is a key component of Simultaneous Localization and Mapping (SLAM) systems, responsible for correcting odometric drift and maintaining global consistency in localization and mapping. However, single-modality LCD methods suffer from inherent limitations: LiDAR-based approaches are affected by point cloud sparsity, limiting feature representation in unstructured environments, while vision-based methods are sensitive to illumination and weather variations, reducing robustness. To address these issues, this paper presents a LiDAR–vision multimodal fusion LCD algorithm. Spatiotemporal alignment between LiDAR point clouds and images is achieved through extrinsic calibration and timestamp interpolation to ensure cross-modal consistency. Harris corner detection and BRIEF descriptors are employed to extract visual features, and a LiDAR-projected sparse depth map is used to complete depth information, mapping 2D features into 3D space. A hybrid feature representation is then constructed by fusing LiDAR geometric triangle descriptors with visual BRIEF descriptors, enabling efficient loop candidate retrieval via hash indexing. Finally, an improved RANSAC algorithm performs geometric verification to enhance the robustness of relative pose estimation. Experiments on the KITTI and NCLT datasets show that the proposed method achieves average F1 scores of 85.28% and 77.63%, respectively, outperforming both unimodal and existing multimodal approaches. When integrated into a SLAM framework, it reduces the Absolute Trajectory Error (ATE) RMSE by 11.2–16.4% compared with LiDAR-only methods, demonstrating improved loop detection accuracy and overall system robustness in complex environments.
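The F1 scores quoted above combine precision (fraction of reported loops that are real) and recall (fraction of real loops that are found). A short sketch of that metric for loop-closure evaluation, treating loops as unordered frame pairs (toy data, not the paper's evaluation protocol):

```python
def loop_closure_f1(detected, ground_truth):
    """Precision, recall and F1 for loop-closure detection, with loops
    treated as unordered frame-index pairs."""
    det = {frozenset(p) for p in detected}
    gt = {frozenset(p) for p in ground_truth}
    tp = len(det & gt)
    precision = tp / len(det) if det else 0.0
    recall = tp / len(gt) if gt else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: 3 true loops; detector finds 2 of them plus 1 false alarm.
gt = [(10, 120), (40, 300), (55, 410)]
det = [(10, 120), (300, 40), (77, 500)]
p, r, f1 = loop_closure_f1(det, gt)
```

Published evaluations usually also tolerate small frame-index offsets when matching detections to ground truth; the exact matching rule matters when comparing scores across papers.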

17 pages, 761 KB  
Article
Obstacle Avoidance in Mobile Robotics: A CNN-Based Approach Using CMYD Fusion of RGB and Depth Images
by Chaymae El Mechal, Mostefa Mesbah and Najiba El Amrani El Idrissi
Digital 2026, 6(1), 20; https://doi.org/10.3390/digital6010020 - 2 Mar 2026
Abstract
Over the last few years, deep neural networks have achieved outstanding results in computer vision, and have been widely integrated into mobile robot obstacle avoidance systems, where perception-driven classification supports navigation decisions. Most existing approaches rely on either color images (RGB) or depth images (D) as the primary source of information, which limits their ability to jointly exploit appearance and geometric cues. This paper proposes a deep learning-based classification approach that simultaneously exploits RGB and depth information for mobile robot obstacle avoidance. The method adopts an early-stage fusion strategy in which RGB images are first converted into the CMYK color space, after which the K (black) channel is replaced by a normalized depth map to form a four-channel CMYD representation. This representation preserves chromatic information while embedding geometric structure in an intensity-consistent channel and is used as input to a convolutional neural network (CNN). The proposed method is evaluated using locally acquired data under different training options and hyperparameter settings. Experimental results show that, when using the baseline CNN architecture, the proposed fusion strategy achieves an overall classification accuracy of 93.3%, outperforming depth-only inputs (86.5%) and RGB-only images (92.9%). When the refined CNN architecture is employed, classification accuracy is further improved across all tested input representations, reaching approximately 93.9% for RGB images, 91.0% for depth-only inputs, 94.6% for the CMYK color space, and 96.2% for the proposed CMYD fusion. These results demonstrate that combining appearance and depth information through CMYD fusion is beneficial regardless of the network variant, while the refined CNN architecture further enhances the effectiveness of the fused representation for robust obstacle avoidance.
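The CMYD fusion the abstract describes — RGB converted to CMYK, with the K (black) channel swapped for a normalized depth map — can be sketched in a few lines. This uses the standard RGB-to-CMYK conversion; the function name and min-max depth normalization are illustrative choices, not necessarily the authors':

```python
import numpy as np

def cmyd(rgb, depth):
    """Early fusion: convert an 8-bit RGB image to CMYK, then replace
    the K channel with a min-max-normalized depth map, giving a
    four-channel CMYD tensor for the CNN."""
    rgb = np.asarray(rgb, dtype=float) / 255.0
    k = 1.0 - rgb.max(axis=-1)                        # CMYK black channel
    denom = np.where(k < 1.0, 1.0 - k, 1.0)           # avoid 0/0 on pure black
    cmy = (1.0 - rgb - k[..., None]) / denom[..., None]
    d = np.asarray(depth, dtype=float)
    d = (d - d.min()) / max(d.max() - d.min(), 1e-9)  # depth to [0, 1]
    return np.concatenate([cmy, d[..., None]], axis=-1)

# 2x2 toy image: one pure-red pixel, the rest black; toy depth in metres.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = [255, 0, 0]
depth = np.array([[0.5, 1.0], [1.5, 2.0]])
out = cmyd(img, depth)
```

The appeal of this design is that depth occupies the channel that, in CMYK, already carries intensity-like information, so a stock CNN first layer can consume the fused tensor without architectural changes.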

14 pages, 3050 KB  
Article
Lateralization of FDG-PET Hypometabolism Using Resting-State fMRI in Temporal Lobe Epilepsy: A Simultaneous PET-MRI Study
by Daniel Uher, Gerhard S. Drenthen, Tineke van de Weijer, Jochem van der Pol, Christianne M. Hoeberigs, Paul A. M. Hofman, Sam Springer, Rob P. W. Rouhl, Albert J. Colon, Olaf E. M. G. Schijns, Walter H. Backes and Jacobus F. A. Jansen
Tomography 2026, 12(3), 30; https://doi.org/10.3390/tomography12030030 - 2 Mar 2026
Abstract
Background: In temporal lobe epilepsy (TLE), locally reduced glucose metabolism (i.e., hypometabolism) is indicative of the epileptogenic onset zone (EZ). Here, we investigate the potential value of resting-state fMRI (rs-fMRI) for localizing the EZ with fluorodeoxyglucose positron emission tomography (FDG-PET) as ground truth. Methods: Twelve PET-positive patients (34.1 ± 13.1 y; 5 females) with unilateral drug-resistant TLE were included. FDG-PET and rs-fMRI were acquired simultaneously on a hybrid 3T PET-MR scanner. Hypometabolic regions were identified on the FDG-PET images by a nuclear medicine expert. The FDG-PET images were compared with a clinical FDG-PET control dataset with normal glucose uptake distribution. The output z-score maps were thresholded at z < −2 to produce a binary mask of the significantly hypometabolic regions. The hypometabolism masks were mirrored onto the contralateral hemisphere for the asymmetry comparison. Regional homogeneity (ReHo), amplitude of low-frequency fluctuations (ALFF), and fractional ALFF (fALFF) were calculated from the rs-fMRI in conventional (0.01–0.1 Hz) and slow-3 (0.073–0.198 Hz) frequency bands. Asymmetry indices (AIs) were calculated using the ipsilateral and contralateral hypometabolic masks in the PET-positive subjects and assessed via the one-sample Wilcoxon test and Spearman correlation coefficients. Results: The AIs of conventional fALFF were significantly lower in the hypometabolic zone (p < 0.05). A significant negative correlation was found between the AIs of FDG-PET and fALFF in the slow-3 band (r = −0.62; p < 0.05). Conclusions: Conventional and slow-3 band fALFF showed a potential to mimic the FDG-PET findings in terms of EZ localization. Further research with extended cohorts and histopathological validation is required to determine the clinical value. Full article
(This article belongs to the Section Neuroimaging)
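The asymmetry indices above compare a measure inside the hypometabolic mask with the same measure in its mirrored contralateral mask. A common normalized-difference convention is shown below as an illustration; the abstract does not state the exact formula used, so treat this as an assumption:

```python
def asymmetry_index(ipsi, contra):
    """Illustrative asymmetry index AI = (ipsi - contra) / (ipsi + contra);
    the study's exact normalization is not given in the abstract, so this
    is an assumed, common convention. AI < 0 means the ipsilateral
    (hypometabolic-side) value is lower than the contralateral one."""
    return (ipsi - contra) / (ipsi + contra)

# fALFF lower on the hypometabolic side gives a negative AI.
ai = asymmetry_index(ipsi=0.8, contra=1.0)
```

Under this convention, "AIs of conventional fALFF were significantly lower in the hypometabolic zone" corresponds to a group of AI values shifted below zero.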

22 pages, 6376 KB  
Article
Simulator-Based Digital Twin of a Robotics Laboratory
by Lluís Ribas-Xirgo
Machines 2026, 14(3), 273; https://doi.org/10.3390/machines14030273 - 1 Mar 2026
Abstract
Simulator-based digital twins are widely used in robotics education and industrial development to accelerate prototyping and enable safe experimentation. However, they often hide implementation details that are essential for understanding, diagnosing, and correcting system failures. This paper introduces a technology-independent model-based design framework that provides students with full visibility of the computational mechanisms underlying robotic controllers while remaining feasible within a 150-h undergraduate course. The approach relies on representing controller behavior using networks of Extended Finite State Machines (EFSMs) and their stacked extension (EFS2M), which unify all abstraction levels of the control architecture—from low-level reactive behaviors to high-level deliberation—under a single formal model. A structured programming template ensures traceable, optimization-free software synthesis, facilitating debugging and enabling self-diagnosis of design flaws. The framework includes real-time synchronized simulation, transparent switching between virtual and physical robots, and a smart data logger that captures meaningful events for model updating and error detection. Integrated into the Intelligent Robots course, the system supports topics such as kinematics, control, perception, and simultaneous localization and mapping (SLAM) while avoiding dependency on specific middleware such as Robot Operating System (ROS) 2. Over three academic years, students reported positive hands-on experiences, strong adaptability to diverse modeling approaches, and consistently high survey ratings reflecting the course’s overall quality. The proposed environment thus offers an effective methodology for teaching end-to-end robot controller design through transparent, simulation-driven digital twins.
(This article belongs to the Section Automation and Control Systems)
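The abstract's core formal model, the Extended Finite State Machine, augments a plain state machine with extended variables that guards read and actions update. A minimal sketch of that idea follows; the class names, the deterministic first-enabled firing rule, and the toy obstacle-avoidance behavior are illustrative assumptions, not taken from the paper or its EFS2M extension.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Transition:
    src: str                        # source state
    dst: str                        # destination state
    guard: Callable[[Dict], bool]   # predicate over the extended variables
    action: Callable[[Dict], None]  # update applied to the extended variables


@dataclass
class EFSM:
    state: str
    variables: Dict
    transitions: List[Transition] = field(default_factory=list)

    def step(self) -> str:
        """Fire the first enabled transition, in declaration (priority) order."""
        for t in self.transitions:
            if t.src == self.state and t.guard(self.variables):
                t.action(self.variables)
                self.state = t.dst
                break
        return self.state


# Toy reactive behavior: drive forward until an obstacle is closer than
# 0.5 m, then turn until the path is clear again.
robot = EFSM(
    state="FORWARD",
    variables={"dist": 2.0},
    transitions=[
        Transition("FORWARD", "TURN", lambda v: v["dist"] < 0.5, lambda v: None),
        Transition("TURN", "FORWARD", lambda v: v["dist"] >= 0.5, lambda v: None),
    ],
)
robot.variables["dist"] = 0.3
print(robot.step())  # TURN
robot.variables["dist"] = 1.0
print(robot.step())  # FORWARD
```

Because every transition is an explicit, inspectable object rather than compiled control flow, a logger can record which guard fired at each step — which is the kind of traceability the abstract's "optimization-free software synthesis" and smart data logger rely on.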

33 pages, 3858 KB  
Systematic Review
Quadruped Robots in Construction Automation: A Comprehensive Review of Applications, Localization, and Site-Level Operations
by Azizbek Kakhkharov, Jong-Wook Kim and Jae-ho Choi
Buildings 2026, 16(5), 962; https://doi.org/10.3390/buildings16050962 - 1 Mar 2026
Viewed by 527
Abstract
This paper presents a comprehensive review of quadruped robots in the construction industry, focusing on their applications, technological capabilities, and integration with digital construction workflows. Quadruped robots have emerged as promising mobile platforms due to their ability to traverse uneven terrain, operate autonomously, and support multimodal sensing, enabling tasks such as site inspection, 3D reality capture, safety monitoring, logistics support, and integration with Building Information Modeling (BIM) and digital-twin systems. Despite these advantages, real-world deployment remains constrained by limitations in battery endurance, payload capacity, communication reliability, perception robustness, and system interoperability. This review synthesizes findings from 20 studies published between 2015 and 2025 and incorporates a quantitative bibliometric analysis using both SciVal and Scopus. While SciVal provides performance-based indicators and global research trends, Scopus offers complementary publication coverage, improving analytical reliability. Unlike general robotics surveys, this review adopts a construction-centric perspective by explicitly linking quadruped robot capabilities to construction engineering objectives under practical site conditions. The findings highlight current application domains, technological gaps, and adoption barriers, and outline future research directions to support the effective integration of quadruped robots into construction practice. This review provides actionable insights for researchers, engineers, and practitioners assessing the readiness and limitations of quadruped robots in construction environments. Full article
(This article belongs to the Special Issue Robotics, Automation and Digitization in Construction)

17 pages, 12829 KB  
Article
Stereo Gaussian Splatting with Adaptive Scene Depth Estimation for Semantic Mapping
by Chenhui Fu and Jiangang Lu
J. Imaging 2026, 12(3), 105; https://doi.org/10.3390/jimaging12030105 - 28 Feb 2026
Viewed by 218
Abstract
Simultaneous Localization and Mapping (SLAM) is a fundamental capability in robotics and augmented reality. However, achieving accurate geometric reconstruction and consistent semantic understanding in complex environments remains challenging. Although recent neural implicit representations have improved reconstruction quality, they often suffer from high computational cost and the forgetting phenomenon during online mapping. In this paper, we propose StereoGS-SLAM, a stereo semantic SLAM framework based on 3D Gaussian Splatting (3DGS) for explicit scene representation. Unlike existing approaches, StereoGS-SLAM operates on passive RGB stereo inputs without requiring active depth sensors. An adaptive depth estimation strategy is introduced to dynamically refine Gaussian scales based on real-time stereo depth estimates, ensuring robust and scale-consistent reconstruction. In addition, we propose a hybrid keyframe selection strategy that integrates motion-aware selection with lightweight random sampling to improve keyframe diversity and maintain stable, real-time optimization. Experimental evaluations demonstrate that StereoGS-SLAM achieves consistent and competitive localization, rendering, and semantic reconstruction performance compared with recent 3DGS-based SLAM systems. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
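The hybrid keyframe strategy the abstract describes combines a motion-aware test (promote a frame when the camera has moved far enough) with lightweight random sampling of older keyframes for the optimization window. A minimal sketch under assumed thresholds follows; the function names, the 2D pose convention `(x, y, yaw)`, and the window sizes are hypothetical, not taken from StereoGS-SLAM.

```python
import math
import random


def is_new_keyframe(last_kf_pose, cur_pose,
                    trans_thresh=0.2, rot_thresh=math.radians(15)):
    """Motion-aware test: promote the current frame to a keyframe once the
    camera has translated or rotated enough since the last keyframe."""
    dx = cur_pose[0] - last_kf_pose[0]
    dy = cur_pose[1] - last_kf_pose[1]
    dyaw = abs(cur_pose[2] - last_kf_pose[2])
    return math.hypot(dx, dy) > trans_thresh or dyaw > rot_thresh


def optimization_window(keyframes, n_recent=5, n_random=3, seed=None):
    """Hybrid window: the most recent keyframes plus a random sample of older
    ones, so the optimizer keeps revisiting past regions of the map (a common
    way to limit the forgetting phenomenon in online mapping)."""
    recent = keyframes[-n_recent:]
    older = keyframes[:-n_recent]
    rng = random.Random(seed)
    sampled = rng.sample(older, min(n_random, len(older)))
    return recent + sampled


kfs = list(range(20))                  # keyframe ids 0..19
window = optimization_window(kfs, seed=0)
print(window)                          # 5 recent ids plus 3 random older ones
```

The random component is what distinguishes this from a pure sliding window: a sliding window only re-optimizes the newest Gaussians, while the sampled older keyframes keep earlier scene geometry constrained during online updates.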
