Search Results (54)

Search Parameters:
Keywords = urban and indoor navigation

26 pages, 6416 KiB  
Article
Advanced Monocular Outdoor Pose Estimation in Autonomous Systems: Leveraging Optical Flow, Depth Estimation, and Semantic Segmentation with Dynamic Object Removal
by Alireza Ghasemieh and Rasha Kashef
Sensors 2024, 24(24), 8040; https://doi.org/10.3390/s24248040 - 17 Dec 2024
Cited by 2 | Viewed by 1644
Abstract
Autonomous technologies have revolutionized transportation, military operations, and space exploration, necessitating precise localization in environments where traditional GPS-based systems are unreliable or unavailable. While widespread for outdoor localization, GPS systems face limitations in obstructed environments such as dense urban areas, forests, and indoor spaces. Moreover, GPS reliance introduces vulnerabilities to signal disruptions, which can lead to significant operational failures. Hence, developing alternative localization techniques that do not depend on external signals is essential, showing a critical need for robust, GPS-independent localization solutions adaptable to different applications, ranging from Earth-based autonomous vehicles to robotic missions on Mars. This paper addresses these challenges using Visual odometry (VO) to estimate a camera’s pose by analyzing captured image sequences in GPS-denied areas tailored for autonomous vehicles (AVs), where safety and real-time decision-making are paramount. Extensive research has been dedicated to pose estimation using LiDAR or stereo cameras, which, despite their accuracy, are constrained by weight, cost, and complexity. In contrast, monocular vision is practical and cost-effective, making it a popular choice for drones, cars, and autonomous vehicles. However, robust and reliable monocular pose estimation models remain underexplored. This research aims to fill this gap by developing a novel adaptive framework for outdoor pose estimation and safe navigation using enhanced visual odometry systems with monocular cameras, especially for applications where deploying additional sensors is not feasible due to cost or physical constraints. This framework is designed to be adaptable across different vehicles and platforms, ensuring accurate and reliable pose estimation. 
We integrate advanced control theory to provide safety guarantees for motion control, ensuring that the AV can react safely to the imminent hazards and unknown trajectories of nearby traffic agents. The focus is on creating an AI-driven model(s) that meets the performance standards of multi-sensor systems while leveraging the inherent advantages of monocular vision. This research uses state-of-the-art machine learning techniques to advance visual odometry’s technical capabilities and ensure its adaptability across different platforms, cameras, and environments. By merging cutting-edge visual odometry techniques with robust control theory, our approach enhances both the safety and performance of AVs in complex traffic situations, directly addressing the challenge of safe and adaptive navigation. Experimental results on the KITTI odometry dataset demonstrate a significant improvement in pose estimation accuracy, offering a cost-effective and robust solution for real-world applications. Full article
(This article belongs to the Special Issue Sensors for Object Detection, Pose Estimation, and 3D Reconstruction)
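The monocular VO pipeline described above ultimately emits frame-to-frame relative poses that are chained into a global trajectory. A minimal 2D sketch of that accumulation step (all values invented for illustration, not taken from the paper):

```python
import numpy as np

def chain_poses(relative_poses):
    """Compose 2D relative poses (dx, dy, dtheta), each expressed in the
    previous camera frame, into global (x, y, theta) poses."""
    x, y, theta = 0.0, 0.0, 0.0
    trajectory = [(x, y, theta)]
    for dx, dy, dtheta in relative_poses:
        # Rotate the body-frame increment into the world frame, then accumulate.
        x += dx * np.cos(theta) - dy * np.sin(theta)
        y += dx * np.sin(theta) + dy * np.cos(theta)
        theta += dtheta
        trajectory.append((x, y, theta))
    return trajectory

# Drive 1 m forward four times, turning 90 degrees after each step:
# the vehicle traces a square and returns to the origin.
traj = chain_poses([(1.0, 0.0, np.pi / 2)] * 4)
```

Real VO replaces the invented increments with poses recovered from image correspondences; the chaining step is the same.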

28 pages, 6709 KiB  
Article
A 3D Model-Based Framework for Real-Time Emergency Evacuation Using GIS and IoT Devices
by Noopur Tyagi, Jaiteg Singh, Saravjeet Singh and Sukhjit Singh Sehra
ISPRS Int. J. Geo-Inf. 2024, 13(12), 445; https://doi.org/10.3390/ijgi13120445 - 9 Dec 2024
Cited by 2 | Viewed by 2098
Abstract
Advancements in 3D modelling technology have facilitated more immersive and efficient solutions in spatial planning and user-centred design. In healthcare systems, 3D modelling is beneficial in various applications, such as emergency evacuation, pathfinding, and localization. These models support the fast and efficient planning of evacuation routes, ensuring the safety of patients, staff, and visitors, and guiding them in cases of emergency. To improve urban modelling and planning, 3D representation and analysis are used. Considering the advantages of 3D modelling, this study proposes a framework for 3D indoor navigation and employs a multiphase methodology to enhance spatial planning and user experience. Our approach combines state-of-the-art GIS technology with a 3D hybrid model. The proposed framework incorporates federated learning (FL) along with edge computing and Internet of Things (IoT) devices to achieve accurate floor-level localization and navigation. In the first phase of the methodology, Quantum Geographic Information System (QGIS) software was used to create a 3D model of the building’s architectural details, which are required for efficient indoor navigation during emergency evacuations in healthcare systems. In the second phase, the 3D model and an FL-based recurrent neural network (RNN) technique were utilized to achieve real-time indoor positioning. This method resulted in highly precise outcomes, attaining an accuracy rate of over 99% at distances of no less than 10 metres. Continuous monitoring and effective pathfinding ensure that users can navigate safely and effectively during emergencies. IoT devices were connected with the building’s navigation software in Phase 3. As per the performed analysis, the proposed framework provided 98.7% routing accuracy between different locations during emergency situations.
By improving safety, building accessibility, and energy efficiency, this research addresses the health and environmental impacts of modern technologies. Full article
(This article belongs to the Special Issue HealthScape: Intersections of Health, Environment, and GIS&T)
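The federated learning component mentioned above can be pictured as FedAvg-style parameter averaging across IoT/edge clients. A toy numpy sketch (client parameters and sample counts are invented, and a real deployment would average full RNN weight tensors):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: average client model parameters, weighted by the number
    of local training samples each client holds."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical edge clients with differing amounts of local data.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 30, 60]
global_w = federated_average(clients, sizes)  # data-weighted global model
```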

28 pages, 1509 KiB  
Article
A Precise and Scalable Indoor Positioning System Using Cross-Modal Knowledge Distillation
by Hamada Rizk, Ahmed Elmogy, Mohamed Rihan and Hirozumi Yamaguchi
Sensors 2024, 24(22), 7322; https://doi.org/10.3390/s24227322 - 16 Nov 2024
Cited by 4 | Viewed by 2025
Abstract
User location has emerged as a pivotal factor in human-centered environments, driving applications like tracking, navigation, healthcare, and emergency response that align with Sustainable Development Goals (SDGs). However, accurate indoor localization remains challenging due to the limitations of GPS in indoor settings, where signal interference and reflections disrupt satellite connections. While Received Signal Strength Indicator (RSSI) methods are commonly employed, they are affected by environmental noise, multipath fading, and signal interference. Round-Trip Time (RTT)-based localization techniques provide a more resilient alternative but are not universally supported across access points due to infrastructure limitations. To address these challenges, we introduce DistilLoc: a cross-knowledge distillation framework that transfers knowledge from an RTT-based teacher model to an RSSI-based student model. By applying a teacher–student architecture, where the RTT model (teacher) trains the RSSI model (student), DistilLoc enhances RSSI-based localization with the accuracy and robustness of RTT without requiring RTT data during deployment. At the core of DistilLoc, the FNet architecture is employed for its computational efficiency and capacity to capture complex relationships among RSSI signals from multiple access points. This enables the student model to learn a robust mapping from RSSI measurements to precise location estimates, reducing computational demands while improving scalability. Evaluated in two cluttered indoor environments of varying sizes using Android devices and Google WiFi access points, DistilLoc achieved sub-meter localization accuracy, with median errors of 0.42 m and 0.32 m, respectively, demonstrating improvements of 267% over conventional RSSI methods and 496% over multilateration-based approaches.
These results validate DistilLoc as a scalable, accurate solution for indoor localization, enabling intelligent, resource-efficient urban environments that contribute to SDG 9 (Industry, Innovation, and Infrastructure) and SDG 11 (Sustainable Cities and Communities). Full article
(This article belongs to the Section Navigation and Positioning)
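The teacher–student idea at the core of DistilLoc can be sketched as a combined loss: the student fits surveyed ground-truth locations while also imitating the (RTT-trained) teacher's predictions. A toy numpy version with invented estimates, not the authors' actual loss:

```python
import numpy as np

def distillation_loss(student_pred, teacher_pred, target, alpha=0.5):
    """Blend a supervised term (vs. ground truth) with an imitation term
    (vs. teacher predictions); alpha balances the two."""
    supervised = np.mean((student_pred - target) ** 2)
    imitation = np.mean((student_pred - teacher_pred) ** 2)
    return alpha * supervised + (1.0 - alpha) * imitation

# Hypothetical 2D position estimates (metres).
target = np.array([3.0, 4.0])    # surveyed ground-truth location
teacher = np.array([3.1, 3.9])   # RTT-based teacher estimate
student = np.array([3.5, 4.5])   # RSSI-based student estimate
loss = distillation_loss(student, teacher, target)
```

Minimizing this loss over many fingerprints is what lets the RSSI-only student inherit RTT-level behaviour at deployment time.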

38 pages, 3275 KiB  
Review
Comprehensive Review: High-Performance Positioning Systems for Navigation and Wayfinding for Visually Impaired People
by Jean Marc Feghali, Cheng Feng, Arnab Majumdar and Washington Yotto Ochieng
Sensors 2024, 24(21), 7020; https://doi.org/10.3390/s24217020 - 31 Oct 2024
Cited by 5 | Viewed by 3301
Abstract
The global increase in the population of Visually Impaired People (VIPs) underscores the rapidly growing demand for a robust navigation system to provide safe navigation in diverse environments. State-of-the-art VIP navigation systems cannot achieve the required performance (accuracy, integrity, availability, and continuity) because of insufficient positioning capabilities and unreliable investigations of transition areas and complex environments (indoor, outdoor, and urban). The primary reason for these challenges lies in the segregation of Visual Impairment (VI) research within medical and engineering disciplines, impeding technology developers’ access to comprehensive user requirements. To bridge this gap, this paper conducts a comprehensive review covering global classifications of VI, international and regional standards for VIP navigation, fundamental VIP requirements, experimentation on VIP behavior, an evaluation of state-of-the-art positioning systems for VIP navigation and wayfinding, and ways to overcome difficulties during exceptional times such as COVID-19. This review identifies current research gaps, offering insights into areas requiring advancements. Future work and recommendations are presented to enhance VIP mobility, enable daily activities, and promote societal integration. This paper addresses the urgent need for high-performance navigation systems for the growing population of VIPs, highlighting the limitations of current technologies in complex environments. Through a comprehensive review of VI classifications, VIPs’ navigation standards, user requirements, and positioning systems, this paper identifies research gaps and offers recommendations to improve VIP mobility and societal integration. Full article
(This article belongs to the Special Issue Feature Review Papers in Intelligent Sensors)

19 pages, 7149 KiB  
Article
Continuous High-Precision Positioning in Smartphones by FGO-Based Fusion of GNSS–PPK and PDR
by Amjad Hussain Magsi, Luis Enrique Díez and Stefan Knauth
Micromachines 2024, 15(9), 1141; https://doi.org/10.3390/mi15091141 - 11 Sep 2024
Cited by 3 | Viewed by 4413
Abstract
The availability of raw Global Navigation Satellite System (GNSS) measurements in Android smartphones fosters advancements in high-precision positioning for mass-market devices. However, challenges like inconsistent pseudo-range and carrier phase observations, limited dual-frequency data integrity, and unidentified hardware biases on the receiver side prevent the ambiguity resolution of smartphone GNSS. Consequently, relying solely on GNSS for high-precision positioning may result in frequent cycle slips in complex conditions such as deep urban canyons, underpasses, forests, and indoor areas due to non-line-of-sight (NLOS) and multipath conditions. Inertial/GNSS fusion is the traditional solution to tackle these challenges because of the sensors’ complementary capabilities. For pedestrians and smartphones with low-cost inertial sensors, the usual architecture is Pedestrian Dead Reckoning (PDR) + GNSS. In addition, different GNSS processing techniques like Precise Point Positioning (PPP) and Real-Time Kinematic (RTK) have also been integrated with INS. However, integration with PDR has been limited, with the Kalman Filter (KF) and its variants as the main fusion techniques. Recently, Factor Graph Optimization (FGO) has started to be used as a fusion technique due to its superior accuracy. To the best of our knowledge, no work has tested the fusion of GNSS Post-Processed Kinematics (PPK) and PDR on smartphones; moreover, the works that have evaluated the fusion of GNSS and PDR employing FGO have always performed it using the GNSS Single-Point Positioning (SPP) technique. Therefore, this work combines the GNSS PPK technique and the FGO fusion technique to evaluate the improvement in accuracy that can be obtained on a smartphone compared with the usual GNSS SPP and KF fusion strategies.
We improved the Google Pixel 4 smartphone GNSS using Post-Processed Kinematics (PPK) with the open-source RTKLIB 2.4.3 software, then fused it with PDR via KF and FGO for comparison in offline mode. Our findings indicate that FGO-based PDR+GNSS–PPK improves accuracy by 22.5% compared with FGO-based PDR+GNSS–SPP, which shows smartphones obtain high-precision positioning with the implementation of GNSS–PPK via FGO. Full article
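In the linear case, factor graph optimization over PDR and GNSS factors reduces to a sparse least-squares problem. A deliberately tiny 1D illustration (step lengths and fixes invented; real FGO is nonlinear and iterative), solved with numpy:

```python
import numpy as np

# Unknowns: positions x0..x3 along a corridor (1D for clarity).
# PDR (odometry) factors constrain consecutive differences;
# GNSS factors anchor absolute positions where a fix is available.
steps = [1.0, 1.0, 1.0]      # hypothetical PDR step lengths (m)
gnss = {0: 0.1, 3: 2.9}      # hypothetical sparse absolute fixes (m)

rows, rhs = [], []
for i, s in enumerate(steps):          # factor: x_{i+1} - x_i = s
    r = np.zeros(4)
    r[i + 1], r[i] = 1.0, -1.0
    rows.append(r); rhs.append(s)
for i, z in gnss.items():              # factor: x_i = z
    r = np.zeros(4)
    r[i] = 1.0
    rows.append(r); rhs.append(z)

A, b = np.array(rows), np.array(rhs)
x, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares estimate
```

The solver balances the slightly inconsistent odometry and fixes instead of trusting either alone, which is the essential advantage over a purely sequential filter.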

21 pages, 5998 KiB  
Article
VE-LIOM: A Versatile and Efficient LiDAR-Inertial Odometry and Mapping System
by Yuhang Gao and Long Zhao
Remote Sens. 2024, 16(15), 2772; https://doi.org/10.3390/rs16152772 - 29 Jul 2024
Cited by 2 | Viewed by 4223
Abstract
LiDAR has emerged as one of the most pivotal sensors in the field of navigation, owing to its expansive measurement range, high resolution, and adeptness in capturing intricate scene details. This significance is particularly pronounced in challenging navigation scenarios where GNSS signals encounter interference, such as within urban canyons and indoor environments. However, the copious volume of point cloud data poses a challenge, rendering traditional iterative closest point (ICP) methods inadequate in meeting real-time odometry requirements. Consequently, many algorithms have turned to feature extraction approaches. Nonetheless, with the advent of diverse scanning mode LiDARs, there arises a necessity to devise unique methods tailored to these sensors to facilitate algorithm migration. To address this challenge, we propose a weighted point-to-plane matching strategy that focuses on local details without relying on feature extraction. This improved approach mitigates the impact of imperfect plane fitting on localization accuracy. Moreover, we present a classification optimization method based on the normal vectors of planes to further refine algorithmic efficiency. Finally, we devise a tightly coupled LiDAR-inertial odometry system founded upon optimization schemes. Notably, we pioneer the derivation of an online gravity estimation method from the perspective of S2 manifold optimization, effectively minimizing the influence of gravity estimation errors introduced during the initialization phase on localization accuracy. The efficacy of the proposed method was validated through experimentation employing various LiDAR sensors. The outcomes of indoor and outdoor experiments substantiate its capability to furnish real-time and precise localization and mapping results. Full article
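The weighted point-to-plane idea can be shown in its simplest form: solve for the translation t minimizing Σ wᵢ((pᵢ + t − qᵢ)·nᵢ)². With rotation held fixed this is a small weighted linear system (geometry invented; the paper's full method also estimates rotation and its plane-quality weights):

```python
import numpy as np

def point_to_plane_translation(p, q, n, w):
    """Translation-only point-to-plane alignment: find t minimizing
    sum_i w_i * ((p_i + t - q_i) . n_i)^2 via weighted least squares."""
    sw = np.sqrt(w)[:, None]
    A = sw * n                              # rows: sqrt(w_i) * n_i
    b = (sw * n * (q - p)).sum(axis=1)      # sqrt(w_i) * (q_i - p_i).n_i
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t

# Source points offset from targets by a known translation (invented).
rng = np.random.default_rng(0)
q = rng.normal(size=(30, 3))
t_true = np.array([0.5, -0.2, 0.1])
p = q - t_true
n = rng.normal(size=(30, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)  # unit plane normals
w = np.ones(30)                                # equal weights here
t_est = point_to_plane_translation(p, q, n, w)
```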

16 pages, 3615 KiB  
Article
High-Precision BEV-Based Road Recognition Method for Warehouse AMR Based on IndoorPathNet and Transfer Learning
by Tianwei Zhang, Ci He, Shiwen Li, Rong Lai, Zili Wang, Lemiao Qiu and Shuyou Zhang
Appl. Sci. 2024, 14(11), 4587; https://doi.org/10.3390/app14114587 - 27 May 2024
Viewed by 1411
Abstract
The rapid development and application of autonomous mobile robots (AMRs) are important for Industry 4.0 and smart logistics. For large-scale dynamic flat warehouses, vision-based road recognition amidst complex obstacles is paramount for improving navigation efficiency and flexibility, while avoiding frequent manual settings. However, current mainstream road recognition methods face significant challenges of unsatisfactory accuracy and efficiency, as well as the lack of a large-scale, high-quality dataset. To address these issues, this paper introduces IndoorPathNet, a transfer-learning-based Bird’s Eye View (BEV) indoor path segmentation network that furnishes directional guidance to AMRs through real-time segmented indoor pathway maps. IndoorPathNet employs a lightweight U-shaped architecture integrated with spatial self-attention mechanisms to augment the speed and accuracy of indoor pathway segmentation. Moreover, it surmounts the training challenge posed by the scarcity of publicly available semantic datasets for warehouses through the strategic employment of transfer learning. Comparative experiments conducted between IndoorPathNet and four other lightweight models on the Urban Aerial Vehicle Image Dataset (UAVID) yielded a maximum Intersection Over Union (IOU) of 82.2%. On the Warehouse Indoor Path Dataset, the maximum IOU attained was 98.4% while achieving a processing speed of 9.81 frames per second (FPS) with a 1024 × 1024 input on a single 3060 GPU. Full article
(This article belongs to the Special Issue Deep Learning for Object Detection)
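The IOU metric quoted above takes only a few lines to compute; a sketch of Intersection over Union for segmentation masks (the 4 × 4 masks are invented):

```python
import numpy as np

def mask_iou(pred, target):
    """Intersection over Union between two boolean segmentation masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0  # both empty: define IoU = 1

# Toy 4x4 path masks: the prediction overlaps the target partially.
target = np.zeros((4, 4), bool); target[:, 1:3] = True  # 8 pixels
pred = np.zeros((4, 4), bool); pred[:, 2:4] = True      # 8 pixels
iou = mask_iou(pred, target)  # intersection 4, union 12 -> 1/3
```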

19 pages, 8286 KiB  
Article
GNSS/5G Joint Position Based on Weighted Robust Iterative Kalman Filter
by Hongjian Jiao, Xiaoxuan Tao, Liang Chen, Xin Zhou and Zhanghai Ju
Remote Sens. 2024, 16(6), 1009; https://doi.org/10.3390/rs16061009 - 13 Mar 2024
Cited by 7 | Viewed by 2613
Abstract
The Global Navigation Satellite System (GNSS) is widely used for its high accuracy, wide coverage, and strong real-time performance. However, limited by the navigation signal mechanism, satellite signals in urban canyons, bridges, tunnels, and other environments are seriously affected by non-line-of-sight and multipath effects, which greatly reduce positioning accuracy and positioning continuity. In order to meet the positioning requirements of human and vehicle navigation in complex environments, it was necessary to carry out this research on the integration of multiple signal sources. The Fifth Generation (5G) signal possesses key attributes, such as low latency, high bandwidth, and substantial capacity. Simultaneously, 5G Base Stations (BSs), serving as a fundamental mobile communication infrastructure, extend their coverage into areas traditionally challenging for GNSS technology, including indoor environments, tunnels, and urban canyons. Based on the actual needs, this paper proposes a system algorithm based on 5G and GNSS joint positioning, aiming at the situation that the User Equipment (UE) only establishes the connection with the 5G base station with the strongest signal. Considering the inherent nonlinear problem of user position and angle measurements in 5G observation, an angle cosine solution is proposed. Furthermore, enhancements to the Sage–Husa Adaptive Kalman Filter (SHAKF) algorithm are introduced to tackle issues related to observation weight distribution and adaptive updates of observation noise in multi-system joint positioning, particularly when there is a lack of prior information. This paper also introduces dual gross error detection adaptive correction of the forgetting factor based on innovation in the iterative Kalman filter to enhance accuracy and robustness. Finally, a series of simulation experiments and semi-physical experiments were conducted. 
The numerical results show that compared with the traditional method, the angle cosine method reduces the average number of iterations from 9.17 to 3 with higher accuracy, which greatly improves the efficiency of the algorithm. Meanwhile, compared with the standard Extended Kalman Filter (EKF), the proposed algorithm improved accuracy by 48.66%, 35.17%, and 38.23% at the 1σ/2σ/3σ levels, respectively. Full article
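The innovation-driven noise adaptation described above can be pictured with a scalar Kalman filter whose measurement-noise estimate R is re-weighted from innovations with a forgetting factor, loosely in the Sage–Husa spirit. This is a toy sketch with invented measurements, not the authors' algorithm:

```python
import numpy as np

def adaptive_kf(zs, q=1e-3, r0=1.0, forget=0.98):
    """Scalar random-walk KF; R is re-estimated from the innovation
    sequence with a forgetting factor (Sage-Husa-style adaptation)."""
    x, p, r = zs[0], 1.0, r0
    estimates = []
    for z in zs:
        p = p + q                       # predict (random-walk model)
        innov = z - x                   # innovation
        # Adapt the measurement-noise estimate from the innovation.
        r = forget * r + (1 - forget) * max(innov**2 - p, 1e-6)
        k = p / (p + r)                 # Kalman gain
        x = x + k * innov               # state update
        p = (1 - k) * p                 # covariance update
        estimates.append(x)
    return np.array(estimates)

# Noisy pseudorange-like measurements of a constant position (invented).
rng = np.random.default_rng(1)
zs = 5.0 + rng.normal(0, 0.5, 200)
est = adaptive_kf(zs)
```

The adaptation lets the filter de-weight measurements when innovations grow, which is the behaviour the paper's robust iterative variant refines with gross-error detection.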

17 pages, 12823 KiB  
Article
Towards Fully Autonomous UAV: Damaged Building-Opening Detection for Outdoor-Indoor Transition in Urban Search and Rescue
by Ali Surojaya, Ning Zhang, John Ray Bergado and Francesco Nex
Electronics 2024, 13(3), 558; https://doi.org/10.3390/electronics13030558 - 30 Jan 2024
Cited by 5 | Viewed by 1910
Abstract
Autonomous unmanned aerial vehicle (UAV) technology is a promising technology for minimizing human involvement in dangerous activities like urban search and rescue (USAR) missions, both indoors and outdoors. Automated navigation from outdoor to indoor environments is not trivial, as it encompasses the ability of a UAV to automatically map and locate the openings in a damaged building. This study focuses on developing a deep learning model for the detection of damaged building openings in real time. A novel damaged building-opening dataset containing images and mask annotations is presented, along with a comparison between single- and multi-task learning-based detectors. The deep learning-based detector used in this study is based on YOLOv5. First, this study compared the capacity of the different YOLOv5 versions (i.e., small, medium, and large) to detect damaged building openings. Second, a multitask learning YOLOv5 was trained on the same dataset and compared with the single-task detector. The multitask learning (MTL) model was developed based on the YOLOv5 object detection architecture, adding a segmentation branch jointly with the detection head. This study found that the MTL-based YOLOv5 can improve detection performance by combining detection and segmentation losses. The YOLOv5s-MTL trained on the damaged building-opening dataset obtained 0.648 mAP, an increase of 0.167 over the single-task network, while its inference speed was 73 frames per second on the tested platform. Full article
(This article belongs to the Special Issue Control and Applications of Intelligent Unmanned Aerial Vehicle)
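The mAP comparison above rests on box overlap: a detection typically counts as a true positive when its IoU with a ground-truth box passes a threshold. A minimal axis-aligned box IoU (boxes invented):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# PASCAL-style evaluation commonly uses an IoU >= 0.5 threshold.
iou = box_iou((0, 0, 2, 2), (1, 1, 3, 3))  # intersection 1, union 7
```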

5 pages, 1049 KiB  
Proceeding Paper
Performance of Assisted-Global Navigation Satellite System from Network Mobile to Precise Positioning on Smartphones
by Mónica Zabala Haro, Ángel Martín, Ana Anquela and María Jesús Jiménez
Environ. Sci. Proc. 2023, 28(1), 23; https://doi.org/10.3390/environsciproc2023028023 - 15 Jan 2024
Viewed by 860
Abstract
Indoor navigation is the most challenging environment for precise positioning on a smartphone, owing to the device’s hardware limitations and to interference from high buildings, trees, and multipath fading in the received GNSS signal. GPS by itself cannot offer a solution; A-GNSS from the mobile network, delivered through the telecommunication infrastructure, provides information that helps counteract these issues. A smartphone has full connectivity to the mobile network 24/7 and access to the GNSS database when required; the assistance data are sent over the Internet Protocol (IP) and processed by the GNSS chip, increasing accuracy, TTFF, and data availability even in harsh environments. Outdoor, light indoor, and urban canyon scenarios encountered while driving through parts of the city are recorded with Geo++ and processed with RTKLIB, using a single frequency, on a standalone, multi-constellation, dual-frequency smartphone (Xiaomi Mi 8) with A-GNSS. The results show accuracy in the SPS of over 10 m and in assisted positioning of over 50 m; the TTFF in assisted positioning is consistently 5 s, whereas in the SPS it reaches 20 s. Finally, along the trajectory, only assisted positioning can compute the position continuously, owing to the data availability from the mobile network. Full article
(This article belongs to the Proceedings of IV Conference on Geomatics Engineering)

17 pages, 3584 KiB  
Article
Enhancing Indoor Navigation in Intelligent Transportation Systems with 3D RIF and Quantum GIS
by Jaiteg Singh, Noopur Tyagi, Saravjeet Singh, Ahmad Ali AlZubi, Firas Ibrahim AlZubi, Sukhjit Singh Sehra and Farman Ali
Sustainability 2023, 15(22), 15833; https://doi.org/10.3390/su152215833 - 10 Nov 2023
Cited by 4 | Viewed by 2749
Abstract
Innovative technologies have been incorporated into intelligent transportation systems (ITS) to improve sustainability, safety, and efficiency, hence revolutionising traditional transportation. The combination of three-dimensional (3D) indoor building mapping and navigation is a groundbreaking development in the field of ITS. A novel methodology, the “Three-Dimensional Routing Information Framework” (3D RIF), is designed to improve indoor navigation systems in the field of ITS. By leveraging the Quantum Geographic Information System (QGIS), this framework can produce three-dimensional routing data and incorporate sophisticated routing algorithms to handle the complexities associated with indoor navigation. The paper provides a detailed examination of how the framework can be implemented in transport systems in urban environments, with a specific focus on optimising indoor navigation for various applications, including emergency services, tourism, and logistics. The framework includes real-time updates and point-of-interest information, thereby enhancing the overall indoor navigation experience. The 3D RIF boosts the efficiency and effectiveness of intelligent transportation services by optimising the utilisation of internal resources. The research outcomes demonstrate a mean enhancement of around 25.51% in travel efficiency. This measurable enhancement emphasises the beneficial influence of ITS on the efficiency of travel, hence underscoring the significance of the ongoing progress in this field. Full article
(This article belongs to the Special Issue Intelligent Transportation Systems towards Sustainable Transportation)
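Routing over a 3D indoor graph like the one 3D RIF builds is classically done with Dijkstra's algorithm. A stdlib-only sketch on an invented two-floor graph (node names and edge costs are hypothetical, not from the paper):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path over a weighted adjacency dict {node: [(nbr, cost)]}."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

# Invented two-floor graph: a stair edge links floor 1 and floor 2.
g = {
    "lobby":     [("corridor1", 5), ("stairs1", 8)],
    "corridor1": [("stairs1", 2)],
    "stairs1":   [("stairs2", 4)],   # vertical edge between floors
    "stairs2":   [("ward2", 3)],
}
cost, path = dijkstra(g, "lobby", "ward2")
```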

28 pages, 13987 KiB  
Article
Keypoint Detection and Description through Deep Learning in Unstructured Environments
by Georgios Petrakis and Panagiotis Partsinevelos
Robotics 2023, 12(5), 137; https://doi.org/10.3390/robotics12050137 - 30 Sep 2023
Cited by 6 | Viewed by 5492
Abstract
Feature extraction plays a crucial role in computer vision and autonomous navigation, offering valuable information for real-time localization and scene understanding. However, although multiple studies investigate keypoint detection and description algorithms in urban and indoor environments, far fewer studies concentrate on unstructured environments. In this study, a multi-task deep learning architecture is developed for keypoint detection and description, focused on poor-featured unstructured and planetary scenes with low or changing illumination. The proposed architecture was trained and evaluated using a training and benchmark dataset with earthy and planetary scenes. Moreover, the trained model was integrated in a visual SLAM (Simultaneous Localization and Mapping) system as a feature extraction module, and tested in two feature-poor unstructured areas. Regarding the results, the proposed architecture provides a mAP (mean Average Precision) of 0.95 for keypoint description, outperforming well-known handcrafted algorithms, while the proposed SLAM achieved an RMSE two times lower than that of ORB-SLAM2 in a poor-featured area with low illumination. To the best of the authors’ knowledge, this is the first study that investigates the potential of keypoint detection and description through deep learning in unstructured and planetary environments. Full article
(This article belongs to the Special Issue Autonomous Navigation of Mobile Robots in Unstructured Environments)
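One core building block of keypoint description evaluation mentioned in the abstract above is matching learned descriptors between two images. Below is a minimal NumPy sketch of nearest-neighbour descriptor matching with Lowe's ratio test; this is a generic illustration, not the paper's architecture, and the function name, descriptor dimensionality, and ratio threshold are all assumptions.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test.

    desc_a, desc_b: (N, D) and (M, D) float arrays of keypoint descriptors
    (e.g. the fixed-length outputs of a learned description head).
    Returns a list of (i, j) index pairs that pass the ratio test.
    """
    # Pairwise Euclidean distances between every descriptor pair.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(dists):
        order = np.argsort(row)
        best, second = row[order[0]], row[order[1]]
        # Accept only if the best match is clearly better than the runner-up,
        # which suppresses ambiguous matches in repetitive, feature-poor scenes.
        if best < ratio * second:
            matches.append((i, int(order[0])))
    return matches

# Toy usage with 2-D descriptors: the first descriptor in desc_a has a close,
# unambiguous counterpart in desc_b, and so does the second.
desc_a = np.array([[1.0, 0.0], [0.0, 1.0]])
desc_b = np.array([[0.95, 0.05], [0.0, 1.0], [5.0, 5.0]])
print(match_descriptors(desc_a, desc_b))
```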
25 pages, 5643 KiB  
Article
Autonomous Multi-Floor Localization Based on Smartphone-Integrated Sensors and Pedestrian Indoor Network
by Chaoyang Shi, Wenxin Teng, Yi Zhang, Yue Yu, Liang Chen, Ruizhi Chen and Qingquan Li
Remote Sens. 2023, 15(11), 2933; https://doi.org/10.3390/rs15112933 - 4 Jun 2023
Cited by 7 | Viewed by 2607
Abstract
Autonomous localization without local wireless facilities has proven to be an efficient way of realizing location-based services in complex urban environments. The precision of current map-matching algorithms is limited by the poor accuracy of integrated sensor-based trajectory estimation and by the difficulty of efficiently combining pedestrian motion information with the pedestrian indoor network. This paper proposes an autonomous multi-floor localization framework based on smartphone-integrated sensors and pedestrian network matching (ML-ISNM). A robust data and model dual-driven pedestrian trajectory estimator is proposed for accurate integrated sensor-based positioning under different handheld modes and in disturbed environments. A bi-directional long short-term memory (Bi-LSTM) network is applied for floor identification using extracted environmental and pedestrian motion features, and is further combined with the indoor network matching algorithm to acquire accurate location and floor observations. In the multi-source fusion procedure, an error-ellipse-enhanced unscented Kalman filter is developed for the intelligent combination of the trajectory estimator, human motion constraints, and the extracted pedestrian network. Comprehensive experiments indicate that the presented ML-ISNM achieves autonomous and accurate multi-floor positioning in complex, large-scale urban buildings. The final average localization error was lower than 1.13 m without the assistance of wireless facilities or a navigation database.
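The trajectory estimation and fusion ideas in the abstract above rest on two standard primitives: pedestrian dead reckoning (propagating position from step length and heading) and a Kalman-style correction when a map or landmark observation arrives. The sketch below illustrates both in their simplest form; the per-axis scalar gain is a deliberate simplification and not the paper's error-ellipse-enhanced unscented Kalman filter, and all names and values are assumptions.

```python
import math

def pdr_step(pos, heading_rad, step_length):
    """Propagate a 2-D pedestrian position by one detected step.

    heading_rad is measured clockwise from north (+y axis), so heading 0
    moves the pedestrian straight north.
    """
    x, y = pos
    return (x + step_length * math.sin(heading_rad),
            y + step_length * math.cos(heading_rad))

def fuse_with_landmark(pred, landmark, pred_var, obs_var):
    """Blend a predicted position with a landmark observation.

    A scalar Kalman gain is applied independently per axis: the larger the
    prediction variance relative to the observation variance, the more the
    estimate is pulled toward the landmark.
    """
    k = pred_var / (pred_var + obs_var)  # Kalman gain in [0, 1)
    return tuple(p + k * (l - p) for p, l in zip(pred, landmark))

# One step north, then a correction from a landmark fix of equal confidence.
pred = pdr_step((0.0, 0.0), 0.0, 0.7)
est = fuse_with_landmark(pred, (0.2, 1.0), pred_var=1.0, obs_var=1.0)
print(pred, est)
```

With equal variances the gain is 0.5, so the fused estimate lands halfway between the dead-reckoned prediction and the landmark fix.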
25 pages, 4453 KiB  
Article
Intelligent Fusion Structure for Wi-Fi/BLE/QR/MEMS Sensor-Based Indoor Localization
by Yue Yu, Yi Zhang, Liang Chen and Ruizhi Chen
Remote Sens. 2023, 15(5), 1202; https://doi.org/10.3390/rs15051202 - 22 Feb 2023
Cited by 27 | Viewed by 3674
Abstract
Due to the complexity of urban environments, localizing pedestrians indoors using mobile terminals is an urgent task in many emerging areas. Multi-source fusion-based localization is considered an effective way to provide location-based services in large-scale indoor areas. This paper presents an intelligent 3D indoor localization framework that integrates Wi-Fi, Bluetooth Low Energy (BLE), quick response (QR) codes, and micro-electro-mechanical system (MEMS) sensors (the 3D-WBQM framework). An enhanced inertial odometry was developed for accurate pedestrian localization and trajectory optimization in indoor spaces containing magnetic interference and external acceleration, and Wi-Fi Fine Time Measurement (FTM) stations, BLE nodes, and QR codes were applied for landmark detection to provide an absolute reference for trajectory optimization and crowdsourced navigation database construction. In addition, a robust unscented Kalman filter (RUKF) was applied as a generic integration model to combine the location estimates from inertial odometry, BLE, QR codes, Wi-Fi FTM, and crowdsourced Wi-Fi fingerprinting for large-scale indoor positioning. The experimental results indicated that the proposed 3D-WBQM framework achieves autonomous and accurate positioning in large-scale indoor areas using different location sources, with meter-level accuracy in areas supported by Wi-Fi FTM.
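A common first step when using BLE nodes as location sources, as in the framework above, is converting a received signal strength (RSSI) reading into an approximate range. The sketch below uses the standard log-distance path-loss model; the calibrated 1 m power and the path-loss exponent are illustrative assumptions, not parameters from the paper.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Estimate range to a BLE beacon with the log-distance path-loss model.

    tx_power_dbm: calibrated RSSI at 1 m, a per-beacon constant (assumed).
    path_loss_exp: ~2.0 in free space; typically 2.5-4.0 indoors, where
    walls and multipath attenuate the signal faster.
    Returns an estimated distance in metres.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

# A reading equal to the 1 m calibration power implies ~1 m; 20 dB weaker
# implies ~10 m under a free-space exponent of 2.
print(rssi_to_distance(-59.0))  # ~1 m
print(rssi_to_distance(-79.0))  # ~10 m
```

Ranges derived this way are noisy, which is one reason frameworks such as the one above fuse them with inertial odometry and other landmarks rather than using them alone.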
13 pages, 10409 KiB  
Technical Note
Towards Digital Twinning on the Web: Heterogeneous 3D Data Fusion Based on Open-Source Structure
by Marcello La Guardia and Mila Koeva
Remote Sens. 2023, 15(3), 721; https://doi.org/10.3390/rs15030721 - 26 Jan 2023
Cited by 20 | Viewed by 4058
Abstract
Recent advances in computer science and the spread of internet connectivity have allowed specialists to virtualize complex environments on the web and offer further information through realistic exploration experiences. At the same time, using complex geospatial datasets (point clouds, Building Information Modelling (BIM) models, 2D and 3D models) on the web is still a challenge, because it usually involves different proprietary software solutions, and the input data need simplification to reduce computational effort. Moreover, integrating geospatial datasets acquired in different ways with various sensors remains a challenge. An interesting question, in that respect, is how to integrate 3D information in a 3D GIS (Geographic Information System) environment and manage different scales of information in the same application. Integrating multiple scales of information is the first step towards digital twinning, and it is needed to properly manage complex urban datasets in digital twins for building management (cadastral management, prevention of natural and anthropogenic hazards, structure monitoring, etc.). Therefore, the current research presents the development of a freely accessible 3D web navigation model based on open-source technology that allows the visualization of heterogeneous complex geospatial datasets in the same virtual environment. This solution employs JavaScript libraries based on WebGL technology; the model is accessible through web browsers and requires no software installation on the user side. The case study is the new building of the University of Twente—Faculty of Geo-Information (ITC), located in Enschede (the Netherlands). The developed solution allows switching between heterogeneous datasets (point clouds, BIM, 2D and 3D models) at different scales and between visualization modes (indoor first-person navigation, outdoor navigation, urban navigation).
This solution could be employed by governmental stakeholders or the private sector to remotely visualize complex datasets on the web in a single view and to make decisions based solely on open-source solutions. Furthermore, the system can incorporate underground data or real-time sensor data from the IoT (Internet of Things) for digital twinning tasks.
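Fusing georeferenced datasets (point clouds, BIM, city models) into a single WebGL scene, as described above, usually starts by projecting geodetic coordinates into a local metric frame around a scene origin. The sketch below shows that preprocessing step in Python rather than the JavaScript/WebGL stack the article uses; the equirectangular approximation and all names are assumptions, valid only for campus-scale scenes.

```python
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS84 equatorial radius

def geodetic_to_local(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg):
    """Project lat/lon to local east/north metres around a scene origin.

    Equirectangular approximation: longitude differences are scaled by
    cos(origin latitude). Adequate for a building- or campus-scale scene,
    not for continent-scale data.
    """
    lat0 = math.radians(origin_lat_deg)
    east = math.radians(lon_deg - origin_lon_deg) * EARTH_RADIUS_M * math.cos(lat0)
    north = math.radians(lat_deg - origin_lat_deg) * EARTH_RADIUS_M
    return east, north

# A point 0.001 degrees north of an equatorial origin sits ~111 m north
# in scene coordinates.
print(geodetic_to_local(0.001, 0.0, 0.0, 0.0))
```

Once every dataset is expressed in the same local frame, switching between point-cloud, BIM, and city-model layers is a matter of toggling scene nodes rather than re-projecting geometry.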
(This article belongs to the Section Urban Remote Sensing)