Search Results (1,441)

Search Parameters:
Keywords = indoor positioning system

19 pages, 3887 KB  
Article
A Cost-Effective and Rapidly Manufacturable Infrared–Visible High-Contrast Calibration Board Based on Structural Parametrization
by Yuandong Shao and Aleksandr S. Vasilev
J. Imaging 2026, 12(5), 199; https://doi.org/10.3390/jimaging12050199 (registering DOI) - 2 May 2026
Abstract
The infrared (IR)–visible light (VIS) dual-camera system provides complementary cues for image fusion, but issues such as geometric mismatch caused by different imaging methods, inconsistent resolution/field-of-view, and installation offsets often lead to ghosting and artifacts. This study aims to develop a fast-deployable and repeatable calibration workflow based on a cost-effective calibration board. We designed an infrared–visible high-contrast checkerboard plate that can be generated through structural parameterization and efficiently manufactured using Python/OpenSCAD. We also established a corner-based registration pipeline that estimates a global homography to align the visible-light images onto the infrared pixel grid for fusion and quantitative evaluation. Experiments conducted in a controlled indoor environment demonstrated stable sub-pixel performance within a range of 1.5–2.5 m, with an average re-projection error of 0.47–0.50 pixels per frame and a 95th percentile lower than 0.51 pixels. The corner position re-projection error test further confirmed stability near image boundaries, with a median value of 0.53–0.63 pixels and a 95th percentile of 0.54–0.64 pixels. Overall, the proposed target design and workflow achieve practical infrared–visible calibration under typical deployment constraints with repeatable accuracy, providing geometrically consistent input for subsequent fusion and dataset construction. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
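The global-homography step described in this abstract can be sketched with a minimal direct linear transform (DLT) fit; the corner correspondences below are invented for illustration and are not the paper's data.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst via the DLT algorithm."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)          # nullspace vector = homography entries
    return H / H[2, 2]

def reproject(H, pts):
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Hypothetical checkerboard corners detected in the VIS and IR images.
vis = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], float)
ir = np.array([[10, 5], [90, 8], [88, 95], [12, 92]], float)

H = fit_homography(vis, ir)
err = np.linalg.norm(reproject(H, vis) - ir, axis=1)  # per-corner re-projection error
```

With four exact correspondences the fit is exact, so `err` is numerically zero; with more (noisy) corners, its mean plays the role of the paper's per-frame re-projection error metric.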
27 pages, 13167 KB  
Article
Chasing Ghosts: A Simulation-to-Real Olfactory Navigation Stack with Optional Vision Augmentation
by Kordel K. France, Ovidiu Daescu, Latifur Khan and Rohith Peddi
Sensors 2026, 26(9), 2849; https://doi.org/10.3390/s26092849 (registering DOI) - 2 May 2026
Abstract
Autonomous odor source localization remains a challenging problem for aerial robots due to turbulent airflow, sparse and delayed sensory signals, and strict payload and computation constraints. While prior unmanned aerial vehicle (UAV)-based olfaction systems have demonstrated gas distribution mapping or reactive plume tracing, they rely on predefined coverage patterns, external infrastructure, or extensive sensing and coordination. In this work, we present a complete, open-source UAV system for online odor source localization using a minimal sensor suite. The system integrates custom olfaction hardware, onboard sensing, and a learning-based navigation policy that we train in simulation and deploy on a real quadrotor. Through our minimal framework, the UAV is able to navigate directly toward an odor source without constructing an explicit gas distribution map or relying on external positioning systems. We incorporate vision as an optional complementary modality to accelerate navigation under certain conditions. We validate the proposed system through real-world flight experiments in a large indoor environment using an ethanol source, demonstrating consistent source-finding behavior under realistic airflow conditions. The primary contribution of this work is a reproducible system and methodological framework for UAV-based olfactory navigation and source finding under minimal sensing assumptions. We elaborate on our hardware design and open-source our UAV firmware, simulation code, olfaction–vision dataset, and circuit board to the community. Full article
(This article belongs to the Special Issue Intelligent Robots: Control and Sensing)
29 pages, 1373 KB  
Review
Effect of Environment on the Cognition of Older Adults: A Narrative Review
by José Miguel Sánchez-Nieto, Beatriz Hernández-Monjaraz and Víctor Manuel Mendoza-Núñez
Brain Sci. 2026, 16(5), 502; https://doi.org/10.3390/brainsci16050502 (registering DOI) - 2 May 2026
Abstract
Cognition in older adults may be influenced by environmental factors; however, the pathways linking environmental exposures and cognition remain unclear. The aim of this narrative review is to synthesize evidence on the association between the environment and cognition in older adults, integrating biological, environmental, and behavioral elements. Systematic reviews and original studies addressing this topic were identified in Web of Science, PubMed, and Scopus. The primary neural processes associated with maintaining cognition during aging are neuronal plasticity and compensatory scaffolding. Participation in intellectually stimulating activities, physical exercise, and a healthy diet; mitigation of chronic stress; reduction in the severity of depressive symptoms; and buffering against the adverse effects of air pollution are proposed as plausible pathways that may mediate the relationship between neural processes and the environment. In this context, environmental factors that affect cognition can be classified at three levels: (i) micro-level (family and home): social interaction with family members and indoor pollution; (ii) meso-level (community and services): social interaction, land-use diversity, transportation systems, environmental design, and urban green spaces; and (iii) macro-level (society in general and public policies): social representations of old age and aging (positive aging vs. ageism) and public policies aimed at improving pathways related to cognitive maintenance. Overall, the environment may influence cognition in older adults; however, the available studies show methodological and conceptual heterogeneity, inconsistent findings, and important gaps in knowledge. Full article
22 pages, 55201 KB  
Article
A Distributed and Reconfigurable Architecture for Unified Multimodal Indoor Localization of a Mobile Edge Node in a Cyber-Physical Context
by Theodoros Papafotiou, Emmanouil Tsardoulias and Andreas Symeonidis
Robotics 2026, 15(5), 91; https://doi.org/10.3390/robotics15050091 - 30 Apr 2026
Abstract
Precise 3D positioning in GPS-denied environments is a critical enabler of autonomous robotics, industrial automation, and smart logistics within the emerging cyber-physical landscape. This paper presents a distributed and reconfigurable architecture designed to benchmark and provide unified multimodal indoor localization for mobile edge nodes. Unlike rigid commercial solutions, our architecture employs a distributed, reconfigurable framework that allows the rapid interchange of Absolute Localization Methods (UWB, External RGB-D Vision) and Relative Localization Methods (Inertial Odometry, Visual Odometry). We evaluate these modalities individually and in hybrid configurations using a custom low-cost mobile edge node. Experimental results in a controlled environment demonstrate that while all-optical systems offer high precision, a cost-effective fusion of Ultra-Wideband (UWB) and Inertial Measurement Unit (IMU) data provides a robust balance of accuracy and reliability. Conversely, we identify significant limitations in monocular visual odometry within feature-poor indoor spaces. The developed platform serves as a reproducible foundation for researchers to prototype hybrid localization algorithms and assess the trade-offs between hardware cost and operational accuracy within complex cyber-physical ecosystems. Full article
(This article belongs to the Special Issue Localization and 3D Mapping of Intelligent Robotics)
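The UWB+IMU fusion this abstract identifies as a robust cost/accuracy balance can be illustrated with a minimal 1-D constant-velocity Kalman filter; the gains, noise levels, and measurements below are made-up stand-ins, not the paper's configuration.

```python
import numpy as np

def kalman_uwb_imu(accels, uwb_fixes, dt=0.1, q=0.5, r=0.04):
    """Fuse IMU-propagated motion with sparse absolute UWB position fixes."""
    x = np.zeros(2)                         # state: [position, velocity]
    P = np.eye(2)
    F = np.array([[1, dt], [0, 1]])
    Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
    H = np.array([[1.0, 0.0]])              # UWB observes position only
    track = []
    for k, a in enumerate(accels):
        x = F @ x + np.array([0.5 * a * dt**2, a * dt])   # IMU predict
        P = F @ P @ F.T + Q
        z = uwb_fixes.get(k)                # absolute UWB fix, if any this step
        if z is not None:
            S = H @ P @ H.T + r
            K = P @ H.T / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return track

# 5 s of constant forward acceleration with two sparse UWB fixes.
track = kalman_uwb_imu(accels=[0.2] * 50, uwb_fixes={10: 0.55, 30: 2.1})
```

The relative sensor (IMU) bridges the gaps between absolute fixes, which is the same division of labor as in the paper's Absolute/Relative Localization Method taxonomy.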
23 pages, 3606 KB  
Article
Wireless Communication-Based Indoor Localization with Optical Initialization and Sensor Fusion
by Marcin Leplawy, Piotr Lipiński, Barbara Morawska and Ewa Korzeniewska
Sensors 2026, 26(9), 2653; https://doi.org/10.3390/s26092653 - 24 Apr 2026
Viewed by 573
Abstract
Indoor localization in GNSS-denied environments remains a significant challenge due to the low sampling frequency and high variability of wireless signal measurements. This paper presents a wireless communication-based indoor localization method that integrates Wi-Fi received signal strength indication (RSSI) measurements with optical initialization and inertial sensor fusion. The proposed approach eliminates the need for labor-intensive fingerprinting and specialized infrastructure by leveraging existing Wi-Fi networks. Optical pose estimation using ArUco markers provides accurate initial position and orientation, enabling alignment between sensor coordinate systems and reducing inertial drift. During tracking, inertial measurements compensate for motion between sparse Wi-Fi observations by virtually translating historical RSSI samples, allowing statistically consistent averaging and improved distance estimation. A simplified factor graph framework is employed to fuse heterogeneous measurements while maintaining computational efficiency suitable for real-time operation on mobile devices. Experimental validation using a robot-based ground-truth reference system demonstrates sub-meter localization accuracy with an average positioning error of approximately 0.40 m. The proposed method provides a low-cost and scalable solution for indoor positioning and navigation applications such as access-controlled environments, exhibitions, and large public venues. Full article
(This article belongs to the Special Issue Positioning and Navigation Techniques Based on Wireless Communication)
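The RSSI-to-distance step that the averaging in this abstract improves is commonly the log-distance path-loss model; the reference power `p0_dbm` and exponent `n` below are illustrative guesses, not values measured in the paper.

```python
# Log-distance path-loss model: RSSI = P0 - 10*n*log10(d).
# Inverting it gives a distance estimate from a (noisy) RSSI reading.
def rssi_to_distance(rssi_dbm, p0_dbm=-40.0, n=2.2):
    """Distance (m) implied by an RSSI value under the path-loss model."""
    return 10 ** ((p0_dbm - rssi_dbm) / (10 * n))

# Averaging several motion-compensated samples before inversion reduces
# variance, mirroring the paper's virtual translation of historical samples.
samples = [-52.0, -54.5, -53.0, -55.5]
d = rssi_to_distance(sum(samples) / len(samples))
```

Because the model is exponential in RSSI, averaging in the dB domain before inverting (rather than averaging distances) keeps the estimate better behaved.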
28 pages, 33079 KB  
Article
Pedestrian Localization Using Smartphone LiDAR in Indoor Environments
by Kwangjae Sung and Jaehun Kim
Electronics 2026, 15(9), 1810; https://doi.org/10.3390/electronics15091810 - 24 Apr 2026
Viewed by 149
Abstract
Many place recognition approaches, which identify previously visited places or locations by matching current sensory data, such as 2D RGB images and 3D point clouds, have been proposed to achieve accurate and robust localization and loop closure detection in global positioning system (GPS)-denied environments. Since visual place recognition (VPR) methods that rely on images captured by camera sensors are highly sensitive to variations in appearance, including changes in lighting, surface color, and shadows, they can lead to poor place recognition accuracy. In contrast, light detection and ranging (LiDAR)-based place recognition (LPR) approaches based on 3D point cloud data that captures the shape and geometric structure of the environment are robust to changes in place appearance and can therefore provide more reliable place recognition results than VPR methods. This work presents an indoor LPR method called PointNetVLAD-based indoor pedestrian localization (PIPL). PIPL is a deep network model that uses PointNetVLAD to learn to extract global descriptors from 3D LiDAR point cloud data. PIPL can recognize places previously visited by a pedestrian using point clouds captured by a low-cost LiDAR sensor on a smartphone in small-scale indoor environments, whereas PointNetVLAD performs place recognition for vehicles using high-cost LiDAR, GPS, and inertial measurement unit (IMU) sensors in large-scale outdoor areas. For place recognition on 3D point cloud reference maps generated from LiDAR scans, PointNetVLAD exploits the Universal Transverse Mercator (UTM) coordinate system based on GPS and IMU measurements, whereas PIPL uses a virtual coordinate system designed in this study due to the unavailability of GPS indoors. In experiments conducted in campus buildings, PIPL shows significant advantages over NetVLAD, a convolutional neural network (CNN)-based VPR method. Particularly in indoor environments with repetitive scenes, where geometric structures are preserved and image-based appearance features are sparse or unclear, PIPL achieved 39% higher top-1 accuracy and 10% higher top-3 accuracy than NetVLAD. Furthermore, PIPL achieved place recognition accuracy comparable to NetVLAD even with a small number of points in a 3D point cloud and outperformed NetVLAD even with a smaller model training dataset. The experimental results also indicate that PIPL requires over 76% less place retrieval time than NetVLAD while maintaining robust place classification performance. Full article
(This article belongs to the Special Issue Advanced Indoor Localization Technologies: From Theory to Application)
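The place-retrieval stage shared by PIPL- and PointNetVLAD-style pipelines is a nearest-neighbour search over global descriptors; the random vectors below stand in for network outputs purely for illustration.

```python
import numpy as np

# Reference-map descriptors (one unit vector per mapped place); in the real
# system these would come from the trained descriptor network.
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 256))
db /= np.linalg.norm(db, axis=1, keepdims=True)

# A noisy revisit of place 42: same descriptor plus small perturbation.
query = db[42] + 0.05 * rng.normal(size=256)
query /= np.linalg.norm(query)

scores = db @ query                   # cosine similarity against the map
top3 = np.argsort(scores)[::-1][:3]   # top-3 place candidates
```

Top-1/top-3 accuracy, as reported in the abstract, simply asks whether the true place index appears first (or among the first three) in this ranking.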
24 pages, 550 KB  
Review
ISO 16000-8 and Ventilation Performance: A Critical Review
by Sascha Nehr and Julia Hurraß
Standards 2026, 6(2), 16; https://doi.org/10.3390/standards6020016 - 20 Apr 2026
Viewed by 250
Abstract
Standard 16000-8 of the International Organization for Standardization (ISO 16000-8) specifies the assessment of ventilation performance using age-of-air concepts and tracer gas techniques. Since its publication in 2007, ventilation systems and assessment practices have evolved considerably, driven by increased use of mixed-mode and decentralized ventilation and advances in modeling and measurement technologies. This review examines how ISO 16000-8 can be modernized to harmonize with adjacent ventilation and indoor air quality standards while remaining applicable to contemporary systems and emerging approaches. A structured literature search of Web of Science and Google Scholar identified 76 studies (2007–2026) that engage with ISO 16000-8, age-of-air metrics, or tracer gas-based assessment. The literature was synthesized qualitatively using the framework of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), classifying studies into performance assessment, measurement–simulation convergence, and standardization discourse. The synthesis shows that while the conceptual foundations of ISO 16000-8 remain valid, assumptions of homogeneous mixing and steady-state conditions are often violated in real buildings, leading to inconsistent application of age-of-air indicators. Field and laboratory studies under point-source conditions demonstrate reduced ventilation effectiveness of 0.73–0.82 in classrooms and 0.5–1.4 in various indoor environments, instead of ≈1 for perfect mixing. Spatial heterogeneity is also observed in mixed-mode systems, with an efficiency around 0.5. In decentralized and façade-integrated systems, air exchange effectiveness deviates from theoretical expectations, indicating inhomogeneous air renewal and short-circuiting. Field measurements show configuration-dependent discrepancies in air exchange rates (e.g., carbon dioxide vs. perfluorocarbon tracer methods under varying door positions), while wind induces time-varying infiltration. Collectively, the literature demonstrates systematic violations of the well-mixed and steady-state assumptions underpinning ISO 16000-8. Fragmentation between ventilation performance standards and indoor air quality regulation limits practical uptake. Emerging experimental, numerical, and data-driven methods complement ISO 16000-8, provided applicability domains and uncertainties are addressed. The review concludes that ISO 16000-8 should be modernized toward a harmonized, performance-based framework integrating diverse ventilation systems and assessment technologies. Full article
(This article belongs to the Section Building Standards)
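The age-of-air concept at the core of ISO 16000-8 can be made concrete with a tracer-gas decay (step-down) calculation: the local mean age of air is the time integral of the normalized concentration. The exponential decay and 1800 s time constant below are synthetic stand-ins for measured data.

```python
import numpy as np

# Tracer-gas step-down test: local mean age of air tau = integral of C(t)/C(0) dt.
t = np.arange(0.0, 4 * 3600.0, 60.0)    # 4 h of 1-min samples
c = np.exp(-t / 1800.0)                  # normalised concentration C(t)/C(0)
tau = float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))   # trapezoidal rule

# For a perfectly mixed zone, tau equals the nominal time constant, so the
# ratio below is ~1; the sub-unity values cited in the review indicate
# departures from the well-mixed assumption.
effectiveness = 1800.0 / tau
```

Real measurements truncate the decay and carry sensor noise, which is part of why the review finds field values scattered around the ideal rather than pinned at 1.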
9 pages, 4519 KB  
Proceeding Paper
UAV Position Tracking with Ground Cameras
by Andrea Masiero, Paolo Dabove, Vincenzo Di Pietra, Marco Piragnolo, Alberto Guarnieri, Charles Toth, Wioleta Blaszczak-Bak, Jelena Gabela and Kai-Wei Chiang
Eng. Proc. 2026, 126(1), 50; https://doi.org/10.3390/engproc2026126050 - 15 Apr 2026
Viewed by 203
Abstract
The use of Unmanned Aerial Vehicles (UAVs) has become quite popular in several applications during the last few years. Their spread is motivated by the flexibility of UAVs and by their ability to automatically execute several tasks, mostly thanks to the availability of Global Navigation Satellite Systems (GNSSs), which usually allow reliable outdoor localization of aerial vehicles. However, extending automatic task execution indoors, and to other working conditions that are challenging for GNSS, requires an alternative positioning system able to compensate for the unreliability or unavailability of GNSS in those cases. To this end, additional sensors are usually considered, among which cameras are probably the most popular. The most common vision-based positioning system is a camera mounted on a moving platform and used to determine its ego-motion in a dead-reckoning approach, i.e., visual odometry. Although this solution is affordable and does not require the installation of any infrastructure, it enables absolute positioning of the camera, i.e., of the UAV, only if certain landmarks with known positions are visible in the flying area. In contrast, this work considers the use of external cameras installed in the flying area to track the UAV's movements. This approach is similar to that implemented in motion capture systems, where a set of static, calibrated cameras is used to triangulate target positions. Instead, this work investigates the use of vision and machine learning tools to (i) extract the UAV position from each video frame and (ii) estimate its 3D position. Estimation of the 3D UAV position is performed with a single camera, exploiting machine learning tools in order to avoid the need for camera calibration. Performance analysis is provided for a dataset collected at the Agripolis campus of the University of Padua. Full article
(This article belongs to the Proceedings of European Navigation Conference 2025)
21 pages, 3061 KB  
Article
A Machine Learning-Assisted Recognition and Compensation Method for UWB Ranging Errors in Complex Indoor Environments
by Jiayuan Zhang, Guangxu Zhang, Ying Xu, Zeyu Li and Hao Wu
Sensors 2026, 26(8), 2434; https://doi.org/10.3390/s26082434 - 15 Apr 2026
Viewed by 406
Abstract
Ultra-wideband (UWB) technology has been widely adopted for indoor positioning due to its high temporal resolution. However, the accuracy of UWB-based indoor positioning is fundamentally limited by ranging measurement errors, particularly under non-line-of-sight (NLOS) conditions, where systematic bias and uncertainty are introduced into the measured distances. In this paper, a measurement error mitigation method is proposed to improve UWB ranging reliability in complex indoor environments. The method first identifies NLOS measurements using low-dimensional physical features and a lightweight machine learning classifier. Subsequently, an error compensation strategy is applied to correct biased ranging observations, which are then incorporated into a nonlinear least squares positioning model. Experimental results obtained in typical indoor environments demonstrate that the proposed method significantly reduces ranging errors and improves positioning accuracy compared with conventional approaches. The results indicate that the proposed framework effectively enhances measurement robustness without increasing system complexity. Full article
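The nonlinear least squares positioning step this abstract feeds with compensated ranges can be sketched with a few Gauss-Newton iterations; the anchor layout, true position, and range biases below are synthetic.

```python
import numpy as np

# Range-based positioning: minimise sum_i (||x - a_i|| - d_i)^2 over x.
anchors = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 6.0], [8.0, 6.0]])
truth = np.array([3.0, 2.0])
# Measured ranges = true ranges plus small residual errors (post-compensation).
d = np.linalg.norm(anchors - truth, axis=1) + np.array([0.02, -0.03, 0.01, 0.02])

x = anchors.mean(axis=0)                 # initial guess at the anchor centroid
for _ in range(20):                      # Gauss-Newton iterations
    diff = x - anchors
    r = np.linalg.norm(diff, axis=1)
    J = diff / r[:, None]                # Jacobian of the range model
    res = r - d
    x = x - np.linalg.lstsq(J, res, rcond=None)[0]
```

Uncorrected NLOS bias enters `d` directly, which is why the paper's identify-then-compensate stage matters before this solver runs.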
36 pages, 2125 KB  
Article
Hybrid Neural Network-Based PDR with Multi-Layer Heading Correction Across Smartphone Carrying Modes
by Junhua Ye, Anzhe Ye, Ahmed Mansour, Shusu Qiu, Zhenzhen Li and Xuanyu Qu
Sensors 2026, 26(8), 2421; https://doi.org/10.3390/s26082421 - 15 Apr 2026
Viewed by 215
Abstract
Traditional pedestrian dead reckoning (PDR) algorithms usually assume that the carrying mode of a smartphone is fixed and remains horizontal, ignoring the significant impact that dynamic changes in carrying mode have on heading estimation, the core element of PDR algorithms. In practical scenarios, pedestrians often change how they carry their smart terminal (e.g., to make a call), and each carrying mode calls for a different heading estimation method; mode switches in particular cause sudden heading changes that, if not corrected in time, lead to a significant increase in localization error. Existing carrying-mode recognition methods that rely on traditional machine learning or fixed thresholds have poor robustness and limited generality, are especially weak at detecting abrupt changes, and cannot effectively reduce the heading error. To address these problems, this paper proposes a PDR framework that aims to overcome these limitations. First, four common carrying modes are classified based on practical applications, and a CNN-LSTM hybrid model is designed that classifies them in near real time with a recognition accuracy of 99.68%. Second, based on the mode recognition results, a multi-layer heading correction strategy is introduced: (1) a versatile quaternion-based filter (VQF) algorithm is introduced for accurate estimation of the initial heading; (2) an algorithm is designed to accurately detect the mode switching point, and an adaptive offset correction algorithm dynamically compensates the heading during mode switching to reduce the impact of sudden changes; and (3) exploiting the fact that lateral displacement tends toward zero when pedestrians walk along a straight-line segment, a heading optimization method with lateral displacement constraints is designed to further suppress heading drift caused by slight swaying of the smart terminal. Two validation experiments were carried out in two different environments (an indoor corridor and a tree shelter), and the results show that with the proposed multi-layer heading optimization strategy, the average heading error of the system is below 1.5°, the cumulative positioning error is below 1% of the walking distance, and the root mean square error at the checkpoints is below 2 m, which significantly reduces the positioning error and demonstrates the effectiveness of the framework in complex environments. Full article
(This article belongs to the Section Navigation and Positioning)
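The step-and-heading update underlying any PDR pipeline, including the one in this abstract, advances the position by one step length along the current heading; the step length and heading sequence below are illustrative.

```python
import math

def pdr_update(pos, heading_deg, step_len=0.7):
    """Advance one detected step along the current heading (0° = north)."""
    h = math.radians(heading_deg)
    return (pos[0] + step_len * math.sin(h), pos[1] + step_len * math.cos(h))

# Three steps north, then two east; an uncorrected heading jump at the
# carrying-mode switch would bend this whole trajectory, which is why the
# paper corrects heading before integrating steps.
pos = (0.0, 0.0)
for heading in [0, 0, 0, 90, 90]:
    pos = pdr_update(pos, heading)
```

Because each step's error is integrated into all later positions, even a brief heading offset at a mode switch grows into a cumulative position error, matching the abstract's motivation.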
34 pages, 6632 KB  
Article
SPICD-Net: A Siamese PointNet Framework for Autonomous Indoor Change Detection in 3D LiDAR Point Clouds
by Dalibor Šeljmeši, Vladimir Brtka, Velibor Ilić, Dalibor Dobrilović, Eleonora Brtka and Višnja Ognjenović
AI 2026, 7(4), 141; https://doi.org/10.3390/ai7040141 - 15 Apr 2026
Viewed by 594
Abstract
Reliable change detection in indoor environments remains a challenge for autonomous robotic systems using 3D LiDAR. Existing methods often require manual annotation, computationally intensive architectures, or focus on outdoor scenes. This paper presents SPICD-Net, a lightweight Siamese PointNet framework for indoor 3D change detection trained exclusively on synthetically generated anomalies, eliminating manual labeling. The framework offers three deployment-oriented contributions: a three-class Siamese formulation separating no-change, changed, and geometrically inconsistent tile pairs; a pre-FPS anomaly injection strategy that aligns synthetic training with inference-time preprocessing; and a stochastic-gated Chamfer-statistics branch that complements learned embeddings with explicit geometric cues under consumer-grade hardware constraints. Evaluated on 14 controlled simulation experiments in an indoor corridor dataset, SPICD-Net achieved aggregated Precision = 0.86, Recall = 0.82, F1-score = 0.84, and Accuracy = 0.96, with zero false positives in the no-change baseline and mean inference time of 22.4 s for a 172-tile map on a single consumer GPU. Additional robustness experiments identified registration accuracy as the main operational prerequisite. A limited real-world validation in one unseen room (four scans, 67 tiles) achieved Precision = 0.583, Recall = 1.000, and F1 = 0.737. Full article
(This article belongs to the Special Issue Artificial Intelligence for Robotic Perception and Planning)
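The explicit geometric cue in SPICD-Net's Chamfer-statistics branch can be sketched as the symmetric Chamfer distance between two point-cloud tiles; the clouds below are synthetic stand-ins for LiDAR tiles.

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a and b (N x 3 each)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

rng = np.random.default_rng(1)
tile = rng.uniform(size=(128, 3))                 # reference-scan tile
same = tile + 0.001 * rng.normal(size=tile.shape) # unchanged revisit (sensor noise)
moved = tile + np.array([0.5, 0.0, 0.0])          # an inserted/shifted object

low, high = chamfer(tile, same), chamfer(tile, moved)
```

An unchanged tile pair yields a near-zero distance while a changed one does not, which is the signal the paper's stochastic-gated branch contributes alongside learned embeddings.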
13 pages, 4062 KB  
Article
Robotic Harvesting of Apples Using ROS2
by Connor Ruybalid, Christian Salisbury and Duke M. Bulanon
Machines 2026, 14(4), 433; https://doi.org/10.3390/machines14040433 - 14 Apr 2026
Viewed by 434
Abstract
Rising global food demand, increasing labor costs, and farm labor shortages have created significant challenges for specialty crop production, particularly in labor-intensive tasks such as fruit harvesting. Robotic harvesting offers a promising long-term solution, yet its adoption in orchard environments remains limited due to unstructured conditions, variable lighting, and difficulties in fruit recognition and manipulation. This study presents an improved robotic fruit harvesting system, Orchard roBot (OrBot), developed by the Robotics Vision Lab at Northwest Nazarene University, with the goal of advancing autonomous apple harvesting applications. The updated OrBot platform integrates a dual-camera vision system consisting of an eye-to-hand stereo camera with a wide field of view for fruit detection and an eye-in-hand RGB-D camera for precise manipulation. The control architecture was redesigned using Robot Operating System 2 (ROS2) and Python, enabling modular subsystem development and coordination. Fruit detection was performed using a YOLOv5 deep learning model, and visual servoing was employed to guide the robotic manipulator toward the target fruit. System performance was evaluated through laboratory experiments using artificial trees and field tests conducted in a commercial apple orchard in Idaho. OrBot achieved a 100% harvesting success rate in indoor tests and a 75–80% success rate in outdoor orchard conditions. Experimental results demonstrate that the dual-camera approach significantly enhances fruit search efficiency and harvesting efficiency. Identified limitations include sensitivity to lighting conditions, end effector performance with varying fruit sizes, and depth estimation errors. Overall, the results indicate a positive potential toward effective robotic fruit harvesting and highlight key areas for future improvement in vision, manipulation, and system robustness. Full article
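The visual servoing mentioned in this abstract can be sketched as a proportional law on the pixel error between the detected fruit and the image centre; the image size, gain, and velocity mapping are assumptions for illustration, not OrBot's actual parameters.

```python
def servo_step(bbox_center, img_center=(320, 240), gain=0.002):
    """Proportional image-based visual servoing step.

    Returns commanded (lateral, vertical) velocities that drive the detected
    fruit's pixel offset from the image centre toward zero.
    """
    ex = bbox_center[0] - img_center[0]   # horizontal pixel error
    ey = bbox_center[1] - img_center[1]   # vertical pixel error
    return (-gain * ex, -gain * ey)

# A fruit detected 100 px right of centre yields a leftward correction.
vx, vy = servo_step((420, 240))
```

In the full system, a detector such as YOLOv5 supplies `bbox_center` each frame, and the loop repeats until the fruit is centred for the grasp.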
26 pages, 17521 KB  
Article
Multi-Objective Optimization of Façade and Roof Opening Configurations for Sustainable Industrial Heritage Retrofit: Enhancing Daylight Availability, Non-Visual Potential, and Energy Performance
by Jian Ma, Zhenxiang Cao, Jie Jian, Kunming Li and Jinyue Wu
Sustainability 2026, 18(7), 3644; https://doi.org/10.3390/su18073644 - 7 Apr 2026
Viewed by 384
Abstract
During the adaptive reuse of industrial heritage buildings, existing opening systems and envelope performance often pose major constraints. These restrictions make it difficult for the building to meet the requirements of the updated indoor environment, resulting in insufficient daylight and increased energy consumption. Therefore, optimizing lighting and energy performance has become the primary goal of the retrofit design. However, with only limited interventions permitted, achieving significant overall performance improvements in heritage-building retrofits remains a challenge. From a sustainability perspective, improving daylight utilization and reducing energy demand are essential strategies for achieving low-carbon and resource-efficient building retrofit. This study proposes a grid-based parametric multi-objective optimization approach to optimize the window openings of the building envelope. The approach defines the position, size, and material properties of the roof and façade openings as design variables. Implemented via the Honeybee and Octopus platforms, it integrates a genetic algorithm with EnergyPlus and Radiance simulations to co-optimize daylight performance, circadian frequency, and energy use intensity. Taking a typical single-story industrial heritage building in China's cold climate zone as a case study, it is shown that coordinated multi-objective constraints significantly improve the overall performance across various evaluation metrics. The optimization results also provide interpretable window configuration strategies and recommended parameter ranges, which fully consider the climate adaptability of the surrounding environment. These findings offer useful guidance for sustainable retrofit design decision-making in similar single-story industrial heritage buildings. Full article
(This article belongs to the Section Green Building)

30 pages, 4563 KB  
Article
Neural Network-Based LoRa Received Signal Strength Indicator Fingerprint Identification for Indoor Localization of Mobile Robots
by Chandan Barai, Meem Sarkar, Ushnish Sarkar, Subhabrata Mazumder, Abhijit Chandra, Tapas Samanta and Hemendra Kumar Pandey
Sensors 2026, 26(7), 2127; https://doi.org/10.3390/s26072127 - 30 Mar 2026
Viewed by 614
Abstract
This paper presents an indoor self-localization framework for mobile robots, an essential component for automation in Industry 4.0 and smart environments. We evaluate a Received Signal Strength Indicator (RSSI) fingerprinting technique utilizing Long-Range (LoRa) technology to overcome the challenges of congested indoor settings. To optimize communication parameters, the Structural Similarity Index Measure (SSIM) was employed to select the most effective spreading factor, while the entropy of the RSSI database was calculated to verify fingerprint stability. For positional prediction, a Multi-layer Perceptron (MLP) neural network was developed to classify the location of the target within a grid-based experimental setup, featuring cells spaced 60 cm apart. The MLP achieved a validation accuracy of 91.8% during training and demonstrated high precision in classifying grid regions within a signal-dense environment. For scenarios involving slow-moving robots (5 cm/s), such as radiation mapping, this method provides highly accurate high-level localization data. These results suggest that the proposed LoRa-MLP integration provides a robust, low-power solution for high-accuracy indoor positioning systems (IPSs) in modern industrial infrastructure. Full article
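The fingerprinting principle behind this approach can be shown with a toy example: an offline database maps each grid cell (60 cm spacing, as in the paper) to its RSSI signature across gateways, and an online reading is matched against it. The paper classifies with an MLP; here a nearest-fingerprint lookup stands in to illustrate the matching step, and all RSSI values are hypothetical:

```python
import math

# Offline phase: hypothetical RSSI fingerprints (dBm) recorded at each
# grid cell for three LoRa gateways G1..G3.
FINGERPRINT_DB = {
    (0, 0): [-58.0, -72.0, -80.0],
    (0, 1): [-61.0, -69.0, -77.0],
    (1, 0): [-65.0, -74.0, -71.0],
    (1, 1): [-68.0, -70.0, -68.0],
}

def locate(rssi_sample):
    """Online phase: return the grid cell whose stored fingerprint is
    closest (Euclidean distance) to the observed RSSI vector."""
    return min(FINGERPRINT_DB,
               key=lambda cell: math.dist(FINGERPRINT_DB[cell], rssi_sample))

# A noisy reading taken near cell (1, 0):
cell = locate([-64.0, -75.0, -70.0])  # resolves to (1, 0)
```

An MLP replaces the distance lookup with a learned mapping from RSSI vectors to cell labels, which is what lets the paper's system generalize to noisy, signal-dense conditions.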
(This article belongs to the Section Sensor Networks)

22 pages, 2044 KB  
Article
Vertex: A Semantic Graph-Based Indoor Navigation System with Vision-Language Landmark Verification
by Isabel Ferri-Molla, Dena Bazazian, Marius N. Varga, Jordi Linares-Pellicer and Joan Albert Silvestre-Cerdà
Sensors 2026, 26(7), 2031; https://doi.org/10.3390/s26072031 - 24 Mar 2026
Viewed by 387
Abstract
Older adults often need guidance when visiting new buildings for the first time. However, indoor navigation remains challenging due to the lack of Global Positioning System (GPS) availability, visually repetitive corridors, and frequent localization failures. This article presents a multimodal indoor navigation assistant that combines graph-based route planning with visual landmark verification to provide step-by-step guidance. The environment is modelled as a directed graph whose nodes are annotated with semantic landmarks, and the graph is constructed primarily from a video of the building, reducing the need for 3D scanners, beacons, or other specialised instruments. Routes are calculated using Dijkstra’s shortest-path algorithm over the semantic graph. During navigation, camera frames are analysed using a restricted vision-language recognition strategy that considers only candidate landmarks from the current and next nodes, reducing false detections and improving interpretability. To increase robustness, a temporal voting mechanism confirms node transitions, and a hierarchical redirection strategy provides local and global recovery. The system is implemented in two modes: a handheld mode with visual cues (augmented-reality arrows and a mini-map) and voice instructions, and a hands-free mode using the front camera with voice instructions and keyword commands. Evaluation involved preliminary technical testing in the United Kingdom followed by formal user validation in Spain. During these trials, participants reported high usability, strong confidence and safety, and increased perceived independence. Full article
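The route-planning and restricted-recognition steps described above can be sketched together: Dijkstra's algorithm runs over a landmark-annotated graph, and at each step the vision-language check is limited to the landmarks of the current and next nodes. The graph, landmarks, and edge costs below are hypothetical:

```python
import heapq

# Hypothetical semantic graph: nodes carry landmark annotations,
# edges carry walking costs in metres.
LANDMARKS = {"entrance": "red door", "hall": "notice board",
             "lift": "lift sign", "clinic": "room 12 plaque"}
GRAPH = {
    "entrance": {"hall": 10.0},
    "hall": {"entrance": 10.0, "lift": 6.0, "clinic": 20.0},
    "lift": {"hall": 6.0, "clinic": 8.0},
    "clinic": {},
}

def shortest_route(start, goal):
    """Dijkstra's shortest path; returns the node sequence from start to goal."""
    dist, prev, heap = {start: 0.0}, {}, [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in GRAPH[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

def candidate_landmarks(route, step):
    """Restricted recognition: only landmarks of the current and next node
    are offered to the vision-language check."""
    return [LANDMARKS[n] for n in route[step:step + 2]]

route = shortest_route("entrance", "clinic")  # via the lift (24 m vs 30 m direct)
```

The restriction implemented by `candidate_landmarks` is what keeps false detections down: the recognizer never has to distinguish between all landmarks in the building, only between at most two.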
