Search Results (423)

Search Parameters:
Keywords = vision navigation system

21 pages, 9379 KiB  
Article
UDirEar: Heading Direction Tracking with Commercial UWB Earbud by Interaural Distance Calibration
by Minseok Kim, Younho Nam, Jinyou Kim and Young-Joo Suh
Electronics 2025, 14(15), 2940; https://doi.org/10.3390/electronics14152940 - 23 Jul 2025
Abstract
Accurate heading direction tracking is essential for immersive VR/AR, spatial audio rendering, and robotic navigation. Existing IMU-based methods suffer from drift and vibration artifacts, vision-based approaches require LoS and raise privacy concerns, and RF techniques often need dedicated infrastructure. We propose UDirEar, a COTS UWB device-based system that estimates user heading using solely high-level UWB information like distance and unit direction. By initializing an EKF with each user’s constant interaural distance, UDirEar compensates for the earbuds’ roto-translational motion without additional sensors. We evaluate UDirEar on a step-motor-driven dummy head against an IMU-only baseline (MAE 30.8°), examining robustness across dummy head–initiator distances, elapsed time, EKF calibration conditions, and NLoS scenarios. UDirEar achieves a mean absolute error of 3.84° and maintains stable performance under all tested conditions. Full article
(This article belongs to the Special Issue Wireless Sensor Network: Latest Advances and Prospects)
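
The abstract above describes initializing an EKF with the user's interaural distance so that the two earbud-to-initiator UWB ranges constrain the heading estimate. The following is a minimal sketch of that idea only, not the authors' implementation: a scalar-heading EKF with a fixed initiator at the origin, an assumed head position, and assumed noise levels.

```python
import numpy as np

# Minimal illustrative EKF for heading-from-two-UWB-ranges (hypothetical parameters,
# not the UDirEar implementation). State: heading angle theta [rad].
# Assumptions: a fixed UWB initiator at the origin, a known quasi-static head
# centre, and a per-user interaural distance placing the two earbud antennas.

INTERAURAL = 0.17                        # m, calibrated per user (assumed value)
HEAD_CENTRE = np.array([1.5, 0.8])       # m, assumed known for this sketch

def earbud_positions(theta):
    """Left/right earbud positions for a given heading (ear axis is perpendicular to heading)."""
    ear_axis = np.array([-np.sin(theta), np.cos(theta)])
    half = 0.5 * INTERAURAL * ear_axis
    return HEAD_CENTRE + half, HEAD_CENTRE - half

def h(theta):
    """Predicted UWB ranges from the initiator (origin) to both earbuds."""
    left, right = earbud_positions(theta)
    return np.array([np.linalg.norm(left), np.linalg.norm(right)])

def ekf_step(theta, P, z, q=1e-3, r=1e-4):
    """One predict/update cycle; q and r are assumed process/measurement variances."""
    P = P + q                                         # predict: heading as a random walk
    eps = 1e-6                                        # numerical Jacobian of h (2x1)
    H = ((h(theta + eps) - h(theta - eps)) / (2 * eps)).reshape(2, 1)
    S = H @ H.T * P + r * np.eye(2)                   # innovation covariance
    K = P * H.T @ np.linalg.inv(S)                    # 1x2 Kalman gain
    theta = theta + (K @ (z - h(theta)))[0]
    P = (1.0 - (K @ H)[0, 0]) * P
    return theta, P

# Usage: track a true heading of 30 deg from noisy range measurements.
rng = np.random.default_rng(0)
true_theta = np.deg2rad(30.0)
theta, P = 0.0, 1.0
for _ in range(200):
    z = h(true_theta) + rng.normal(0, 0.01, size=2)   # 1 cm range noise (assumed)
    theta, P = ekf_step(theta, P, z)
print(f"estimated heading: {np.degrees(theta):.1f} deg")
```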

19 pages, 5755 KiB  
Article
A Context-Aware Doorway Alignment and Depth Estimation Algorithm for Assistive Wheelchairs
by Shanelle Tennekoon, Nushara Wedasingha, Anuradhi Welhenge, Nimsiri Abhayasinghe and Iain Murray
Computers 2025, 14(7), 284; https://doi.org/10.3390/computers14070284 - 17 Jul 2025
Viewed by 199
Abstract
Navigating through doorways remains a daily challenge for wheelchair users, often leading to frustration, collisions, or dependence on assistance. These challenges highlight a pressing need for intelligent doorway detection algorithms for assistive wheelchairs that go beyond traditional object detection. This study presents the algorithmic development of a lightweight, vision-based doorway detection and alignment module with contextual awareness. It integrates channel and spatial attention, semantic feature fusion, unsupervised depth estimation, and doorway alignment that offers real-time navigational guidance to the wheelchair’s control system. The model achieved a mean average precision of 95.8% and an F1 score of 93%, while maintaining low computational demands suitable for future deployment on embedded systems. By eliminating the need for depth sensors and enabling contextual awareness, this study offers a robust solution to improve indoor mobility and deliver actionable feedback to support safe and independent doorway traversal for wheelchair users. Full article
(This article belongs to the Special Issue AI for Humans and Humans for AI (AI4HnH4AI))
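
The alignment step the abstract describes, turning a detected doorway into guidance for the wheelchair's control system, can be pictured with a small geometric sketch. The detector itself (attention-based, with fused features) is out of scope here; the bounding box, field of view, and deadband below are hypothetical.

```python
from dataclasses import dataclass

# Toy post-detection step: convert a detected doorway bounding box into an
# alignment cue for a wheelchair controller. The detection below is hypothetical.

@dataclass
class BBox:
    x_min: float
    y_min: float
    x_max: float
    y_max: float

def alignment_guidance(box: BBox, image_width: int, hfov_deg: float = 70.0):
    """Return (heading_offset_deg, advice) for centring the doorway in view.

    Assumes a simple pinhole-style mapping from pixel offset to bearing using
    the camera's horizontal field of view (hfov_deg is an assumed parameter).
    """
    door_centre_px = 0.5 * (box.x_min + box.x_max)
    offset_px = door_centre_px - image_width / 2.0
    heading_offset = (offset_px / image_width) * hfov_deg
    if abs(heading_offset) < 3.0:          # deadband (assumed threshold)
        advice = "aligned: proceed straight"
    elif heading_offset > 0:
        advice = "steer right"
    else:
        advice = "steer left"
    return heading_offset, advice

# Usage with a hypothetical detection in a 640-pixel-wide frame.
offset, advice = alignment_guidance(BBox(380, 120, 560, 460), image_width=640)
print(f"{offset:+.1f} deg -> {advice}")
```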

22 pages, 3768 KiB  
Article
A Collaborative Navigation Model Based on Multi-Sensor Fusion of Beidou and Binocular Vision for Complex Environments
by Yongxiang Yang and Zhilong Yu
Appl. Sci. 2025, 15(14), 7912; https://doi.org/10.3390/app15147912 - 16 Jul 2025
Viewed by 256
Abstract
This paper addresses the issues of Beidou navigation signal interference and blockage in complex substation environments by proposing an intelligent collaborative navigation model based on Beidou high-precision navigation and binocular vision recognition. The model is designed with Beidou navigation providing global positioning references and binocular vision enabling local environmental perception through a collaborative fusion strategy. The Unscented Kalman Filter (UKF) is used to integrate data from multiple sensors to ensure high-precision positioning and dynamic obstacle avoidance capabilities for robots in complex environments. Simulation results show that the Beidou–Binocular Cooperative Navigation (BBCN) model achieves a global positioning error of less than 5 cm in non-interference scenarios, and an error of only 6.2 cm under high-intensity electromagnetic interference, significantly outperforming the single Beidou model’s error of 40.2 cm. The path planning efficiency is close to optimal (with an efficiency factor within 1.05), and the obstacle avoidance success rate reaches 95%, while the system delay remains within 80 ms, meeting the real-time requirements of industrial scenarios. The innovative fusion approach enables unprecedented reliability for autonomous robot inspection in high-voltage environments, offering significant practical value in reducing human risk exposure, lowering maintenance costs, and improving inspection efficiency in power industry applications. This technology enables continuous monitoring of critical power infrastructure that was previously difficult to automate due to navigation challenges in electromagnetically complex environments. Full article
(This article belongs to the Special Issue Advanced Robotics, Mechatronics, and Automation)
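
The abstract above fuses Beidou fixes with binocular-vision estimates through a UKF. As a rough illustration of the weighting involved, the sketch below uses a plain one-dimensional linear Kalman update as a stand-in for the UKF, with assumed noise variances; under electromagnetic interference the Beidou variance would simply be inflated so the vision measurement dominates.

```python
# Stand-in for the paper's UKF fusion: a plain linear Kalman update that blends a
# Beidou (global) position fix with a binocular-vision (local) position estimate.
# All noise figures are assumed for illustration only.

def fuse_position(x, P, z_beidou, z_vision, R_beidou=0.05**2, R_vision=0.02**2):
    """Sequentially apply two position measurements (1-D, in metres)."""
    for z, R in ((z_beidou, R_beidou), (z_vision, R_vision)):
        K = P / (P + R)            # Kalman gain
        x = x + K * (z - x)        # corrected state
        P = (1 - K) * P            # corrected covariance
    return x, P

# Usage: prior with 1 m^2 uncertainty, Beidou reports 10.04 m, vision reports 10.01 m.
x, P = fuse_position(x=9.8, P=1.0, z_beidou=10.04, z_vision=10.01)
print(f"fused position: {x:.3f} m, std: {P**0.5:.3f} m")
```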

21 pages, 1118 KiB  
Review
Integrating Large Language Models into Robotic Autonomy: A Review of Motion, Voice, and Training Pipelines
by Yutong Liu, Qingquan Sun and Dhruvi Rajeshkumar Kapadia
AI 2025, 6(7), 158; https://doi.org/10.3390/ai6070158 - 15 Jul 2025
Viewed by 780
Abstract
This survey provides a comprehensive review of the integration of large language models (LLMs) into autonomous robotic systems, organized around four key pillars: locomotion, navigation, manipulation, and voice-based interaction. We examine how LLMs enhance robotic autonomy by translating high-level natural language commands into low-level control signals, supporting semantic planning and enabling adaptive execution. Systems like SayTap improve gait stability through LLM-generated contact patterns, while TrustNavGPT achieves a 5.7% word error rate (WER) under noisy voice-guided conditions by modeling user uncertainty. Frameworks such as MapGPT, LLM-Planner, and 3D-LOTUS++ integrate multi-modal data—including vision, speech, and proprioception—for robust planning and real-time recovery. We also highlight the use of physics-informed neural networks (PINNs) to model object deformation and support precision in contact-rich manipulation tasks. To bridge the gap between simulation and real-world deployment, we synthesize best practices from benchmark datasets (e.g., RH20T, Open X-Embodiment) and training pipelines designed for one-shot imitation learning and cross-embodiment generalization. Additionally, we analyze deployment trade-offs across cloud, edge, and hybrid architectures, emphasizing latency, scalability, and privacy. The survey concludes with a multi-dimensional taxonomy and cross-domain synthesis, offering design insights and future directions for building intelligent, human-aligned robotic systems powered by LLMs. Full article
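
A recurring pattern in the systems surveyed above is an LLM that maps a natural-language command to a constrained, machine-checkable action before anything reaches the low-level controller. The sketch below shows that pattern schematically; call_llm is a stub rather than any particular provider's API, and the action schema is invented purely for illustration.

```python
import json

# Schematic command-to-action layer: ask the model for a constrained JSON action,
# then validate it before it is forwarded to the robot's controller.

ACTION_SCHEMA = {"move": {"vx", "vy", "yaw_rate"}, "stop": set()}

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned response here."""
    return '{"action": "move", "vx": 0.3, "vy": 0.0, "yaw_rate": 0.1}'

def command_to_action(command: str) -> dict:
    prompt = (
        "Translate the user command into JSON with key 'action' "
        f"(one of {sorted(ACTION_SCHEMA)}) and numeric parameters.\n"
        f"Command: {command}"
    )
    action = json.loads(call_llm(prompt))
    kind = action.get("action")
    if kind not in ACTION_SCHEMA:                       # reject unknown verbs
        raise ValueError(f"unknown action: {kind!r}")
    missing = ACTION_SCHEMA[kind] - action.keys()       # reject incomplete actions
    if missing:
        raise ValueError(f"missing parameters: {missing}")
    return action

print(command_to_action("move forward slowly while turning a little"))
```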

25 pages, 315 KiB  
Review
Motion Capture Technologies for Athletic Performance Enhancement and Injury Risk Assessment: A Review for Multi-Sport Organizations
by Bahman Adlou, Christopher Wilburn and Wendi Weimar
Sensors 2025, 25(14), 4384; https://doi.org/10.3390/s25144384 - 13 Jul 2025
Viewed by 542
Abstract
Background: Motion capture (MoCap) technologies have transformed athlete monitoring, yet athletic departments face complex decisions when selecting systems for multiple sports. Methods: We conducted a narrative review of peer-reviewed studies (2015–2025) examining optical marker-based, inertial measurement unit (IMU) systems, including Global Navigation Satellite System (GNSS)-integrated systems, and markerless computer vision systems. Studies were evaluated for validated accuracy metrics across indoor court, aquatic, and outdoor field environments. Results: Optical systems maintain sub-millimeter accuracy in controlled environments but face field limitations. IMU systems demonstrate an angular accuracy of 2–8° depending on movement complexity. Markerless systems show variable accuracy (sagittal: 3–15°, transverse: 3–57°). Environmental factors substantially impact system performance, with aquatic settings introducing an additional orientation error of 2° versus terrestrial applications. Outdoor environments challenge GNSS-based tracking (±0.3–3 m positional accuracy). Critical gaps include limited gender-specific validation and insufficient long-term reliability data. Conclusions: This review proposes a tiered implementation framework combining foundation-level team monitoring with specialized assessment tools. This evidence-based approach guides the selection of technology aligned with organizational priorities, sport-specific requirements, and resource constraints. Full article
(This article belongs to the Special Issue Sensors Technology for Sports Biomechanics Applications)
23 pages, 16886 KiB  
Article
SAVL: Scene-Adaptive UAV Visual Localization Using Sparse Feature Extraction and Incremental Descriptor Mapping
by Ganchao Liu, Zhengxi Li, Qiang Gao and Yuan Yuan
Remote Sens. 2025, 17(14), 2408; https://doi.org/10.3390/rs17142408 - 12 Jul 2025
Viewed by 309
Abstract
In recent years, the use of UAVs has become widespread. Long distance flight of UAVs requires obtaining precise geographic coordinates. Global Navigation Satellite Systems (GNSS) are the most common positioning models, but their signals are susceptible to interference from obstacles and complex electromagnetic environments. In this case, vision-based technology can serve as an alternative solution to ensure the self-positioning capability of UAVs. Therefore, a scene adaptive UAV visual localization framework (SAVL) is proposed. In the proposed framework, UAV images are mapped to satellite images with geographic coordinates through pixel-level matching to locate UAVs. Firstly, to tackle the challenge of inaccurate localization resulting from sparse terrain features, this work proposes a novel feature extraction network grounded in a general visual model, leveraging the robust zero-shot generalization capability of the pre-trained model and extracting sparse features from UAV and satellite imagery. Secondly, in order to overcome the problem of weak generalization ability in unknown scenarios, a descriptor incremental mapping module was designed, which reduces multi-source image differences at the semantic level through UAV satellite image descriptor mapping and constructs a confidence-based incremental strategy to dynamically adapt to the scene. Finally, due to the lack of annotated public datasets, a scene-rich UAV dataset (RealUAV) was constructed to study UAV visual localization in real-world environments. In order to evaluate the localization performance of the proposed framework, several related methods were compared and analyzed in detail. The results on the dataset indicate that the proposed method achieves excellent positioning accuracy, with an average error of only 8.71 m. Full article
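
As a rough classical analogue of the pixel-level UAV-to-satellite matching described above (the paper uses learned sparse features and incremental descriptor mapping, not hand-crafted features), the sketch below matches ORB descriptors and maps a UAV pixel into the georeferenced satellite frame via a RANSAC homography. The file names in the usage comment are hypothetical.

```python
import cv2
import numpy as np

# Classical stand-in for pixel-level UAV-to-satellite matching: ORB features,
# brute-force matching, and a RANSAC homography. Not the SAVL pipeline itself.

def locate_uav_pixel(uav_img, sat_img, uav_pixel):
    """Map a pixel in the UAV image into the georeferenced satellite image."""
    orb = cv2.ORB_create(nfeatures=4000)
    kp1, des1 = orb.detectAndCompute(uav_img, None)
    kp2, des2 = orb.detectAndCompute(sat_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    pt = np.float32([[uav_pixel]]).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pt, H).reshape(2)

# Usage (hypothetical files); the satellite pixel would then be converted to
# lat/lon via the satellite image's geotransform.
# uav = cv2.imread("uav_frame.png", cv2.IMREAD_GRAYSCALE)
# sat = cv2.imread("sat_tile.png", cv2.IMREAD_GRAYSCALE)
# print(locate_uav_pixel(uav, sat, uav_pixel=(320, 240)))
```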

13 pages, 1574 KiB  
Article
SnapStick: Merging AI and Accessibility to Enhance Navigation for Blind Users
by Shehzaib Shafique, Gian Luca Bailo, Silvia Zanchi, Mattia Barbieri, Walter Setti, Giulio Sciortino, Carlos Beltran, Alice De Luca, Alessio Del Bue and Monica Gori
Technologies 2025, 13(7), 297; https://doi.org/10.3390/technologies13070297 - 11 Jul 2025
Viewed by 292
Abstract
Navigational aids play a vital role in enhancing the mobility and independence of blind and visually impaired (VI) individuals. However, existing solutions often present challenges related to discomfort, complexity, and limited ability to provide detailed environmental awareness. To address these limitations, we introduce SnapStick, an innovative assistive technology designed to improve spatial perception and navigation. SnapStick integrates a Bluetooth-enabled smart cane, bone-conduction headphones, and a smartphone application powered by the Florence-2 Vision Language Model (VLM) to deliver real-time object recognition, text reading, bus route detection, and detailed scene descriptions. To assess the system’s effectiveness and user experience, eleven blind participants evaluated SnapStick, and usability was measured using the System Usability Scale (SUS). In addition to the 94% accuracy, the device received an SUS score of 84.7%, indicating high user satisfaction, ease of use, and comfort. Participants reported that SnapStick significantly improved their ability to navigate, recognize objects, identify text, and detect landmarks with greater confidence. The system’s ability to provide accurate and accessible auditory feedback proved essential for real-world applications, making it a practical and user-friendly solution. These findings highlight SnapStick’s potential to serve as an effective assistive device for blind individuals, enhancing autonomy, safety, and navigation capabilities in daily life. Future work will explore further refinements to optimize user experience and adaptability across different environments. Full article
(This article belongs to the Section Assistive Technologies)

40 pages, 2250 KiB  
Review
Comprehensive Comparative Analysis of Lower Limb Exoskeleton Research: Control, Design, and Application
by Sk Hasan and Nafizul Alam
Actuators 2025, 14(7), 342; https://doi.org/10.3390/act14070342 - 9 Jul 2025
Viewed by 406
Abstract
This review provides a comprehensive analysis of recent advancements in lower limb exoskeleton systems, focusing on applications, control strategies, hardware architecture, sensing modalities, human-robot interaction, evaluation methods, and technical innovations. The study spans systems developed for gait rehabilitation, mobility assistance, terrain adaptation, pediatric use, and industrial support. Applications range from sit-to-stand transitions and post-stroke therapy to balance support and real-world navigation. Control approaches vary from traditional impedance and fuzzy logic models to advanced data-driven frameworks, including reinforcement learning, recurrent neural networks, and digital twin-based optimization. These controllers support personalized and adaptive interaction, enabling real-time intent recognition, torque modulation, and gait phase synchronization across different users and tasks. Hardware platforms include powered multi-degree-of-freedom exoskeletons, passive assistive devices, compliant joint systems, and pediatric-specific configurations. Innovations in actuator design, modular architecture, and lightweight materials support increased usability and energy efficiency. Sensor systems integrate EMG, EEG, IMU, vision, and force feedback, supporting multimodal perception for motion prediction, terrain classification, and user monitoring. Human–robot interaction strategies emphasize safe, intuitive, and cooperative engagement. Controllers are increasingly user-specific, leveraging biosignals and gait metrics to tailor assistance. Evaluation methodologies include simulation, phantom testing, and human–subject trials across clinical and real-world environments, with performance measured through joint tracking accuracy, stability indices, and functional mobility scores. Overall, the review highlights the field’s evolution toward intelligent, adaptable, and user-centered systems, offering promising solutions for rehabilitation, mobility enhancement, and assistive autonomy in diverse populations. Following a detailed review of current developments, strategic recommendations are made to enhance and evolve existing exoskeleton technologies. Full article
(This article belongs to the Section Actuators for Robotics)

10 pages, 4530 KiB  
Article
A Switchable-Mode Full-Color Imaging System with Wide Field of View for All Time Periods
by Shubin Liu, Linwei Guo, Kai Hu and Chunbo Zou
Photonics 2025, 12(7), 689; https://doi.org/10.3390/photonics12070689 - 8 Jul 2025
Viewed by 232
Abstract
Continuous, single-mode imaging systems fail to deliver true-color high-resolution imagery around the clock under extreme lighting. High-fidelity color and signal-to-noise ratio imaging across the full day–night cycle remains a critical challenge for surveillance, navigation, and environmental monitoring. We present a competitive dual-mode imaging platform that integrates a 155 mm f/6 telephoto daytime camera with a 52 mm f/1.5 large-aperture low-light full-color night-vision camera into a single, co-registered 26 cm housing. By employing a sixth-order aspheric surface to reduce the element count and weight, our system achieves near-diffraction-limited MTF (>0.5 at 90.9 lp/mm) in daylight and sub-pixel RMS blur < 7 μm at 38.5 lp/mm under low-light conditions. Field validation at 0.0009 lux confirms high-SNR, full-color capture from bright noon to the darkest nights, enabling seamless switching between long-range, high-resolution surveillance and sensitive, low-light color imaging. This compact, robust design promises to elevate applications in security monitoring, autonomous navigation, wildlife observation, and disaster response by providing uninterrupted, color-faithful vision in all lighting regimes. Full article
(This article belongs to the Special Issue Research on Optical Materials and Components for 3D Displays)
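
The "near-diffraction-limited MTF (>0.5 at 90.9 lp/mm)" claim can be sanity-checked against the ideal MTF of an aberration-free circular aperture. The short script below does so assuming 550 nm light (the wavelength is an assumption; the f-number and spatial frequency are quoted from the abstract): the ideal value comes out near 0.62, so a measured value above 0.5 is consistent with a well-corrected f/6 lens.

```python
import numpy as np

# Ideal (diffraction-limited) MTF of a circular aperture at a given spatial frequency.

def diffraction_mtf(freq_lp_mm, f_number, wavelength_mm=550e-6):
    cutoff = 1.0 / (wavelength_mm * f_number)          # cutoff frequency, lp/mm
    x = np.clip(freq_lp_mm / cutoff, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x**2))

print(f"cutoff frequency at f/6: {1.0 / (550e-6 * 6):.0f} lp/mm")
print(f"ideal MTF at 90.9 lp/mm, f/6: {diffraction_mtf(90.9, 6):.2f}")   # ~0.62
```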

32 pages, 2740 KiB  
Article
Vision-Based Navigation and Perception for Autonomous Robots: Sensors, SLAM, Control Strategies, and Cross-Domain Applications—A Review
by Eder A. Rodríguez-Martínez, Wendy Flores-Fuentes, Farouk Achakir, Oleg Sergiyenko and Fabian N. Murrieta-Rico
Eng 2025, 6(7), 153; https://doi.org/10.3390/eng6070153 - 7 Jul 2025
Viewed by 930
Abstract
Camera-centric perception has matured into a cornerstone of modern autonomy, from self-driving cars and factory cobots to underwater and planetary exploration. This review synthesizes more than a decade of progress in vision-based robotic navigation through an engineering lens, charting the full pipeline from sensing to deployment. We first examine the expanding sensor palette—monocular and multi-camera rigs, stereo and RGB-D devices, LiDAR–camera hybrids, event cameras, and infrared systems—highlighting the complementary operating envelopes and the rise of learning-based depth inference. The advances in visual localization and mapping are then analyzed, contrasting sparse and dense SLAM approaches, as well as monocular, stereo, and visual–inertial formulations. Additional topics include loop closure, semantic mapping, and LiDAR–visual–inertial fusion, which enables drift-free operation in dynamic environments. Building on these foundations, we review the navigation and control strategies, spanning classical planning, reinforcement and imitation learning, hybrid topological–metric memories, and emerging visual language guidance. Application case studies—autonomous driving, industrial manipulation, autonomous underwater vehicles, planetary rovers, aerial drones, and humanoids—demonstrate how tailored sensor suites and algorithms meet domain-specific constraints. Finally, the future research trajectories are distilled: generative AI for synthetic training data and scene completion; high-density 3D perception with solid-state LiDAR and neural implicit representations; event-based vision for ultra-fast control; and human-centric autonomy in next-generation robots. By providing a unified taxonomy, a comparative analysis, and engineering guidelines, this review aims to inform researchers and practitioners designing robust, scalable, vision-driven robotic systems. Full article
(This article belongs to the Special Issue Interdisciplinary Insights in Engineering Research)

17 pages, 2430 KiB  
Article
Multimodal Navigation and Virtual Companion System: A Wearable Device Assisting Blind People in Independent Travel
by Jingjing Xu, Caiyi Wang, Yancheng Li, Xuantuo Huang, Meina Zhao, Zhuoqun Shen, Yiding Liu, Yuxin Wan, Fengrong Sun, Jianhua Zhang and Shengyong Xu
Sensors 2025, 25(13), 4223; https://doi.org/10.3390/s25134223 - 6 Jul 2025
Viewed by 359
Abstract
Visual impairment or loss of vision seriously affects quality of life. Aided by the rapid development of sound/laser detection, Global Positioning System (GPS)/Beidou positioning, machine vision, and other technologies, the quality of life of blind people can be improved through visual substitution technology. Existing visual substitution devices still have limitations in terms of safety, robustness, and ease of operation. The remote companion system developed here combines multimodal navigation and remote communication technologies with the positioning and interaction functions of commercial mobile phones. Together with the accumulated judgment of backend personnel, it provides real-time, safe, and reliable navigation services for blind people, helping them complete daily activities such as independent travel, getting around, and shopping. Practical trials show that the system is easy to operate and provides users with a strong sense of security and companionship, making it suitable for wider adoption. In the future, the system could also be extended to other vulnerable groups, such as the elderly. Full article
(This article belongs to the Section Wearables)

24 pages, 41430 KiB  
Article
An Optimal Viewpoint-Guided Visual Indexing Method for UAV Autonomous Localization
by Zhiyang Ye, Yukun Zheng, Zheng Ji and Wei Liu
Remote Sens. 2025, 17(13), 2194; https://doi.org/10.3390/rs17132194 - 25 Jun 2025
Viewed by 493
Abstract
The autonomous positioning of drone-based remote sensing plays an important role in navigation in urban environments. Due to GNSS (Global Navigation Satellite System) signal occlusion, obtaining precise drone locations is still a challenging issue. Inspired by vision-based positioning methods, we propose an autonomous positioning method based on multi-view reference images rendered from the scene’s 3D geometric mesh and apply a bag-of-words (BoW) image retrieval pipeline to achieve efficient and scalable positioning, without utilizing deep learning-based retrieval or 3D point cloud registration. To minimize the number of reference images, scene coverage quantification and optimization are employed to generate the optimal viewpoints. The proposed method jointly exploits a visual-bag-of-words tree to accelerate reference image retrieval and improve retrieval accuracy, and the Perspective-n-Point (PnP) algorithm is utilized to obtain the drone’s pose. Experiments are conducted in urban real-world scenarios and the results show that positioning errors are reduced, with accuracy ranging from sub-meter to 5 m and an average latency of 0.7–1.3 s; this indicates that our method significantly improves accuracy and latency, offering robust, real-time performance over extensive areas without relying on GNSS or dense point clouds. Full article
(This article belongs to the Section Engineering Remote Sensing)
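
The final step the abstract names, recovering the drone pose from 2D–3D correspondences with PnP, is standard and can be shown directly. The correspondences below are synthetic (projected from a known pose) purely to make the snippet self-checking; the paper obtains them from the retrieved reference views, and the intrinsics here are assumed values.

```python
import cv2
import numpy as np

# Pose recovery with PnP from synthetic 2D-3D correspondences (assumed intrinsics).

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
object_pts = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [10, 10, 5],
                       [5, 2, 8], [2, 7, 3]], dtype=np.float64)

# Ground-truth pose used only to generate the image points for this sketch.
rvec_true = np.array([[0.1], [-0.2], [0.05]])
tvec_true = np.array([[1.0], [-0.5], [30.0]])
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
print("recovered translation:", tvec.ravel())   # ~[1.0, -0.5, 30.0]
```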

28 pages, 4256 KiB  
Article
Accessible IoT Dashboard Design with AI-Enhanced Descriptions for Visually Impaired Users
by George Alex Stelea, Livia Sangeorzan and Nicoleta Enache-David
Future Internet 2025, 17(7), 274; https://doi.org/10.3390/fi17070274 - 21 Jun 2025
Viewed by 963
Abstract
The proliferation of the Internet of Things (IoT) has led to an abundance of data streams and real-time dashboards in domains such as smart cities, healthcare, manufacturing, and agriculture. However, many current IoT dashboards emphasize complex visualizations with minimal textual cues, posing significant barriers to users with visual impairments who rely on screen readers or other assistive technologies. This paper presents AccessiDashboard, a web-based IoT dashboard platform that prioritizes accessible design from the ground up. The system uses semantic HTML5 and WAI-ARIA compliance to ensure that screen readers can accurately interpret and navigate the interface. In addition to standard chart presentations, AccessiDashboard automatically generates long descriptions of graphs and visual elements, offering a text-first alternative interface for non-visual data exploration. The platform supports multi-modal data consumption (visual charts, bullet lists, tables, and narrative descriptions) and leverages Large Language Models (LLMs) to produce context-aware textual representations of sensor data. A privacy-by-design approach is adopted for the AI integration to address ethical and regulatory concerns. Early evaluation suggests that AccessiDashboard reduces cognitive and navigational load for users with vision disabilities, demonstrating its potential as a blueprint for future inclusive IoT monitoring solutions. Full article
(This article belongs to the Special Issue Human-Centered Artificial Intelligence)
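
The long descriptions the abstract mentions are generated with an LLM in AccessiDashboard; a template-based stand-in is enough to show the text-first idea. The function below turns a sensor series into a narrative a screen reader can announce; the field names, units, and wording are illustrative assumptions, not the platform's output.

```python
from statistics import mean

# Template-based stand-in for the LLM description step: a narrative, screen-reader-
# friendly summary of a sensor series (illustrative wording only).

def long_description(sensor: str, unit: str, values: list[float]) -> str:
    lo, hi, avg = min(values), max(values), mean(values)
    trend = "rising" if values[-1] > values[0] else "falling or stable"
    return (
        f"{sensor}: {len(values)} readings, ranging from {lo:.1f} to {hi:.1f} {unit}, "
        f"average {avg:.1f} {unit}; the most recent readings are {trend}."
    )

print(long_description("Office temperature", "°C", [21.4, 21.9, 22.3, 23.1, 23.8]))
```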

19 pages, 8609 KiB  
Article
A Microwave Vision-Enhanced Environmental Perception Method for the Visual Navigation of UAVs
by Rui Li, Dewei Wu, Peiran Li, Chenhao Zhao, Jingyi Zhang and Jing He
Remote Sens. 2025, 17(12), 2107; https://doi.org/10.3390/rs17122107 - 19 Jun 2025
Viewed by 299
Abstract
Visual navigation technology holds significant potential for applications involving unmanned aerial vehicles (UAVs). However, the inherent spectral limitations of optical-dependent navigation systems prove particularly inadequate for high-altitude long-endurance (HALE) UAV operations, as they are fundamentally constrained in maintaining reliable environment perception under conditions of fluctuating illumination and persistent cloud cover. To address this challenge, this paper introduces microwave vision to assist optical vision for environmental measurement and proposes a novel microwave vision-enhanced environmental perception method. In particular, the richness of perceived environmental information can be enhanced by SAR and optical image fusion processing in the case of sufficient light and clear weather. In order to simultaneously mitigate inherent SAR speckle noise and address existing fusion algorithms’ inadequate consideration of UAV navigation-specific environmental perception requirements, this paper designs a SAR Target-Augmented Fusion (STAF) algorithm based on the target detection of SAR images. On the basis of image preprocessing, this algorithm utilizes constant false alarm rate (CFAR) detection along with morphological operations to extract critical target information from SAR images. Subsequently, the intensity–hue–saturation (IHS) transform is employed to integrate this extracted information into the optical image. The experimental results show that the proposed microwave vision-enhanced environmental perception method effectively utilizes microwave vision to shape target information perception in the electromagnetic spectrum and enhance the information content of environmental measurement results. The unique information extracted by the STAF algorithm from SAR images can effectively enhance the optical images while retaining their main attributes. This method can effectively enhance the environmental measurement robustness and information acquisition ability of the visual navigation system. Full article
(This article belongs to the Section Remote Sensing Image Processing)
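
The STAF pipeline described above, CFAR detection plus morphology on the SAR image followed by IHS-based injection into the optical image, can be approximated in a few lines. The sketch below uses a crude cell-averaging threshold (no guard cells) and an HSV transform in place of IHS; all constants and the synthetic data are illustrative, not the paper's algorithm.

```python
import cv2
import numpy as np

# Simplified take on the STAF idea: detect bright SAR returns with a crude
# cell-averaging threshold, clean them morphologically, and inject them into the
# optical image's intensity channel (HSV standing in for IHS).

def fuse_sar_targets(optical_bgr, sar, scale=3.0, kernel=5):
    local_mean = cv2.blur(sar.astype(np.float32), (31, 31))          # background estimate
    mask = (sar.astype(np.float32) > scale * local_mean).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((kernel, kernel), np.uint8))

    hsv = cv2.cvtColor(optical_bgr, cv2.COLOR_BGR2HSV)
    hsv[:, :, 2] = np.where(mask > 0, 255, hsv[:, :, 2])             # highlight targets
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR), mask

# Usage with synthetic data: a flat optical scene and one bright SAR "target".
optical = np.full((128, 128, 3), 90, np.uint8)
sar = np.random.default_rng(1).integers(0, 40, (128, 128)).astype(np.uint8)
sar[60:70, 60:70] = 255
fused, mask = fuse_sar_targets(optical, sar)
print("target pixels kept:", int((mask > 0).sum()))
```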

18 pages, 509 KiB  
Article
Service Quality Barriers Encountered in Urban Public Transport by People with Disability in South Africa
by Babra Duri and Rose Luke
Soc. Sci. 2025, 14(6), 366; https://doi.org/10.3390/socsci14060366 - 10 Jun 2025
Viewed by 692
Abstract
With rapid urbanisation and population growth, transport equity has become a critical issue, especially when considering the mobility gap among people with disability. Understanding the dynamics between the quality of public transport services and the mobility of people with disability is critical to fostering transport equity and inclusivity. This research investigated service quality barriers encountered by people with disability in the City of Tshwane while navigating the city’s public transport system. A quantitative research method was employed, using a structured questionnaire to collect primary data from people with mobility, vision, and hearing disability. The responses were analysed using descriptive statistics, exploratory factor analysis (EFA), and multiple comparison tests to uncover trends and differences among the groups. The findings reveal that people with all types of disability experience considerable service quality challenges. Long travel and waiting times are major concerns amongst people with mobility disability, which lead to heightened inconvenience. The research also found a pervasive lack of transport information, which aggravates the difficulties faced by people with disability. Lastly, the absence of announcements of stops further compounds the challenges experienced by people with a vision disability. The study emphasises the need for high quality public transport services that prioritise accessible and inclusive public transport that caters to all. Addressing service quality barriers in public transport promotes participation in socio-economic life among people with disability. This study contributes to the broader goal of transport equity and highlights the importance of inclusive transport policies and the priority areas that require consideration in a typical developing country. Full article
(This article belongs to the Section Community and Urban Sociology)
