Search Results (242)

Search Parameters:
Keywords = visually impaired assistance

58 pages, 1238 KiB  
Review
The Collapse of Brain Clearance: Glymphatic-Venous Failure, Aquaporin-4 Breakdown, and AI-Empowered Precision Neurotherapeutics in Intracranial Hypertension
by Matei Șerban, Corneliu Toader and Răzvan-Adrian Covache-Busuioc
Int. J. Mol. Sci. 2025, 26(15), 7223; https://doi.org/10.3390/ijms26157223 - 25 Jul 2025
Viewed by 328
Abstract
Although intracranial hypertension (ICH) has traditionally been framed as simply a numerical escalation of intracranial pressure (ICP) and usually dealt with in its clinical form and not in terms of its complex underlying pathophysiology, an emerging body of evidence indicates that ICH is not simply an elevated ICP process but a complex process of molecular dysregulation, glymphatic dysfunction, and neurovascular insufficiency. Our aim in this paper is to provide a complete synthesis of the new thinking in this space, primarily on the intersection of glymphatic dysfunction and cerebral venous physiology. The aspiration is to review how glymphatic dysfunction, largely secondary to aquaporin-4 (AQP4) dysfunction, can lead to delayed cerebrospinal fluid (CSF) clearance and thus the accumulation of extravascular fluid resulting in elevated ICP. A range of other factors such as oxidative stress, endothelin-1, and neuroinflammation seem to significantly impair cerebral autoregulation, making ICH challenging to manage. Combining recent studies, we intend to provide a revised conceptualization of ICH that recognizes the nuance and complexity understated by previous models. We also address novel diagnostics aimed at better capturing the dynamic nature of ICH. Recent advances in non-invasive imaging (e.g., 4D flow MRI and dynamic contrast-enhanced MRI [DCE-MRI]) allow for better visualization of dynamic changes to the glymphatic and cerebral blood flow (CBF) systems. Finally, wearable ICP monitors and AI-assisted diagnostics will create opportunities for continuous, real-time assessments, especially in resource-limited settings. Our goal is to provide examples of opportunities that might augment early recognition and improve personalized care while acknowledging practical challenges and limitations. We also consider what may be therapeutically possible now and in the future. Therapeutic opportunities discussed include CRISPR-based gene editing aimed at restoring AQP4 function, nano-robotics aimed at drug targeting, and bioelectronic devices purposed for ICP modulation. These proposals are innovative in nature but will require ethically responsible confirmation of long-term safety and availability, particularly in low- and middle-income countries (LMICs), where the burden of secondary ICH remains preeminent. Throughout the review, we maintain a balance between innovative ideas and the ethical considerations needed to attain global health equity. It is not our intent to provide unequivocal answers, but instead to encourage informed discussions at the intersections of research, clinical practice, and public health. We hope this review stimulates further discussion about ICH and highlights opportunities to conduct translational research in modern neuroscience with real, approachable, patient-centered care. Full article
(This article belongs to the Special Issue Latest Review Papers in Molecular Neurobiology 2025)

24 pages, 8344 KiB  
Article
Research and Implementation of Travel Aids for Blind and Visually Impaired People
by Jun Xu, Shilong Xu, Mingyu Ma, Jing Ma and Chuanlong Li
Sensors 2025, 25(14), 4518; https://doi.org/10.3390/s25144518 - 21 Jul 2025
Viewed by 341
Abstract
Blind and visually impaired (BVI) people face significant challenges in perception, navigation, and safety during travel. Existing infrastructure (e.g., blind lanes) and traditional aids (e.g., walking sticks, basic audio feedback) provide limited flexibility and interactivity for complex environments. To solve this problem, we propose a real-time travel assistance system based on deep learning. The hardware comprises an NVIDIA Jetson Nano controller, an Intel D435i depth camera for environmental sensing, and SG90 servo motors for feedback. To address embedded device computational constraints, we developed a lightweight object detection and segmentation algorithm. Key innovations include a multi-scale attention feature extraction backbone, a dual-stream fusion module incorporating the Mamba architecture, and adaptive context-aware detection/segmentation heads. This design ensures high computational efficiency and real-time performance. The system workflow is as follows: (1) the D435i captures real-time environmental data; (2) the processor analyzes this data, converting obstacle distances and path deviations into electrical signals; (3) servo motors deliver vibratory feedback for guidance and alerts. Preliminary tests confirm that the system can effectively detect obstacles and correct path deviations in real time, suggesting its potential to assist BVI users. However, as this is a work in progress, comprehensive field trials with BVI participants are required to fully validate its efficacy. Full article
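Steps (2) and (3) of the workflow above, converting an obstacle distance into a vibration signal for the servo motors, can be sketched as a simple mapping. The thresholds and the linear ramp below are illustrative assumptions, not the paper's calibration:

```python
def vibration_duty_cycle(distance_m, min_d=0.3, max_d=2.0):
    """Map an obstacle distance (metres) from the depth camera to a
    vibration duty cycle in [0, 1]; closer obstacles vibrate harder.
    min_d and max_d are hypothetical thresholds, not the paper's values."""
    if distance_m >= max_d:   # obstacle far away: no feedback
        return 0.0
    if distance_m <= min_d:   # obstacle very close: maximum alert
        return 1.0
    # linear ramp between the two thresholds
    return (max_d - distance_m) / (max_d - min_d)
```

A real controller would additionally smooth the signal over consecutive frames so that depth-sensor noise does not produce vibration flicker.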
(This article belongs to the Section Intelligent Sensors)

20 pages, 1012 KiB  
Article
Interaction with Tactile Paving in a Virtual Reality Environment: Simulation of an Urban Environment for People with Visual Impairments
by Nikolaos Tzimos, Iordanis Kyriazidis, George Voutsakelis, Sotirios Kontogiannis and George Kokkonis
Multimodal Technol. Interact. 2025, 9(7), 71; https://doi.org/10.3390/mti9070071 - 14 Jul 2025
Viewed by 399
Abstract
Blindness and low vision are increasingly serious public health issues that affect a significant percentage of the population worldwide. Vision plays a crucial role in spatial navigation and daily activities. Its reduction or loss creates numerous challenges for an individual. Assistive technology can enhance mobility and navigation in outdoor environments. In the field of orientation and mobility training, technologies with haptic interaction can assist individuals with visual impairments in learning how to navigate safely and effectively using the sense of touch. This paper presents a virtual reality platform designed to support the development of navigation techniques within a safe yet realistic environment, expanding upon existing research in the field. Following extensive optimization, we present a visual representation that accurately simulates various 3D tile textures using graphics replicating real tactile surfaces. We conducted a user interaction study in a virtual environment consisting of 3D navigation tiles enhanced with tactile textures, placed appropriately for a real-world scenario, to assess user performance and experience. This study also assesses the usability and user experience of the platform. We hope that the findings will contribute to the development of new universal navigation techniques for people with visual impairments. Full article

17 pages, 5189 KiB  
Article
YOLO-Extreme: Obstacle Detection for Visually Impaired Navigation Under Foggy Weather
by Wei Wang, Bin Jing, Xiaoru Yu, Wei Zhang, Shengyu Wang, Ziqi Tang and Liping Yang
Sensors 2025, 25(14), 4338; https://doi.org/10.3390/s25144338 - 11 Jul 2025
Viewed by 541
Abstract
Visually impaired individuals face significant challenges in navigating safely and independently, particularly under adverse weather conditions such as fog. To address this issue, we propose YOLO-Extreme, an enhanced object detection framework based on YOLOv12, specifically designed for robust navigation assistance in foggy environments. The proposed architecture incorporates three novel modules: the Dual-Branch Bottleneck Block (DBB) for capturing both local spatial and global semantic features, the Multi-Dimensional Collaborative Attention Module (MCAM) for joint spatial-channel attention modeling to enhance salient obstacle features and reduce background interference in foggy conditions, and the Channel-Selective Fusion Block (CSFB) for robust multi-scale feature integration. Comprehensive experiments conducted on the Real-world Task-driven Traffic Scene (RTTS) foggy dataset demonstrate that YOLO-Extreme achieves state-of-the-art detection accuracy and maintains high inference speed, outperforming existing dehazing-and-detect and mainstream object detection methods. To further verify the generalization capability of the proposed framework, we also performed cross-dataset experiments on the Foggy Cityscapes dataset, where YOLO-Extreme consistently demonstrated superior detection performance across diverse foggy urban scenes. The proposed framework significantly improves the reliability and safety of assistive navigation for visually impaired individuals under challenging weather conditions, offering practical value for real-world deployment. Full article
(This article belongs to the Section Navigation and Positioning)

13 pages, 1574 KiB  
Article
SnapStick: Merging AI and Accessibility to Enhance Navigation for Blind Users
by Shehzaib Shafique, Gian Luca Bailo, Silvia Zanchi, Mattia Barbieri, Walter Setti, Giulio Sciortino, Carlos Beltran, Alice De Luca, Alessio Del Bue and Monica Gori
Technologies 2025, 13(7), 297; https://doi.org/10.3390/technologies13070297 - 11 Jul 2025
Viewed by 399
Abstract
Navigational aids play a vital role in enhancing the mobility and independence of blind and visually impaired (VI) individuals. However, existing solutions often present challenges related to discomfort, complexity, and limited ability to provide detailed environmental awareness. To address these limitations, we introduce SnapStick, an innovative assistive technology designed to improve spatial perception and navigation. SnapStick integrates a Bluetooth-enabled smart cane, bone-conduction headphones, and a smartphone application powered by the Florence-2 Vision Language Model (VLM) to deliver real-time object recognition, text reading, bus route detection, and detailed scene descriptions. To assess the system’s effectiveness and user experience, eleven blind participants evaluated SnapStick, and usability was measured using the System Usability Scale (SUS). In addition to the 94% accuracy, the device received an SUS score of 84.7%, indicating high user satisfaction, ease of use, and comfort. Participants reported that SnapStick significantly improved their ability to navigate, recognize objects, identify text, and detect landmarks with greater confidence. The system’s ability to provide accurate and accessible auditory feedback proved essential for real-world applications, making it a practical and user-friendly solution. These findings highlight SnapStick’s potential to serve as an effective assistive device for blind individuals, enhancing autonomy, safety, and navigation capabilities in daily life. Future work will explore further refinements to optimize user experience and adaptability across different environments. Full article
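For context, the System Usability Scale score reported above is derived from ten 1-to-5 Likert items: odd (positively worded) items contribute (response - 1), even (negatively worded) items contribute (5 - response), and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch of the standard scoring rule:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    1-5 Likert responses, item 1 first. Odd-numbered items are
    positively worded, even-numbered items negatively worded."""
    assert len(responses) == 10, "SUS has exactly ten items"
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum to 0-100
```

A mean SUS of 84.7, as reported for SnapStick, sits well above the commonly cited average of 68.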
(This article belongs to the Section Assistive Technologies)

31 pages, 9881 KiB  
Article
Guide Robot Based on Image Processing and Path Planning
by Chen-Hsien Yang and Jih-Gau Juang
Machines 2025, 13(7), 560; https://doi.org/10.3390/machines13070560 - 27 Jun 2025
Viewed by 293
Abstract
While guide dogs remain the primary aid for visually impaired individuals, robotic guides continue to be an important area of research. This study introduces an indoor guide robot designed to physically assist a blind person by holding their hand with a robotic arm and guiding them to a specified destination. To enable hand-holding, we employed a camera combined with object detection to identify the human hand and a closed-loop control system to manage the robotic arm’s movements. For path planning, we implemented a Dueling Double Deep Q Network (D3QN) enhanced with a genetic algorithm. To address dynamic obstacles, the robot utilizes a depth camera alongside fuzzy logic to control its wheels and navigate around them. A 3D point cloud map is generated to determine the start and end points accurately. The D3QN algorithm, supplemented by variables defined using the genetic algorithm, is then used to plan the robot’s path. As a result, the robot can safely guide blind individuals to their destinations without collisions. Full article
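The fuzzy-logic wheel control mentioned above can be illustrated with a toy differential-drive controller: fuzzify obstacle distance and bearing, then slow one wheel to veer away. The membership functions and the single steering rule below are hypothetical simplifications, not the paper's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steer(obstacle_dist, obstacle_angle):
    """Toy fuzzy controller returning (left, right) wheel speeds.
    Distances in metres, angles in degrees (negative = left side)."""
    near = tri(obstacle_dist, -1.5, 0.0, 1.5)   # degree the obstacle is "near"
    on_left = tri(obstacle_angle, -90, -45, 0)  # obstacle on the left
    on_right = tri(obstacle_angle, 0, 45, 90)   # obstacle on the right
    base = 1.0
    turn = near * (on_right - on_left)          # positive means veer left
    # slow the wheel on the side we want to turn toward
    return base - max(turn, 0.0), base - max(-turn, 0.0)
```

With an obstacle near and to the right, the left wheel slows and the robot veers left; with no nearby obstacle, both wheels run at base speed.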
(This article belongs to the Special Issue Autonomous Navigation of Mobile Robots and UAVs, 2nd Edition)

18 pages, 668 KiB  
Article
The Experiences of Living with a Visual Impairment in Peru: Personal, Medical, and Educational Perspectives
by Jorge Luis Cueva-Vargas, Claire Laballestrier and Joseph Paul Nemargut
Int. J. Environ. Res. Public Health 2025, 22(7), 984; https://doi.org/10.3390/ijerph22070984 - 23 Jun 2025
Viewed by 455
Abstract
Background: Nearly 5 million people in Peru live with visual impairments, many of which are irreversible. In addition to eye care services, these individuals could benefit from government services and rehabilitation to improve their quality of life and promote equitable, inclusive social participation. Although numerous government policies address this, little is known about their perception and implementation. Methods: Semi-structured individual online interviews were conducted with 29 people (7 with low vision, 12 blind, 6 educators/rehabilitators, 4 medical doctors) in Peru between July and November 2024. Each participant was asked to respond to the same 16 open-ended questions. Their transcripts were coded into themes in 5 domains: assistive devices, vision rehabilitation services, government assistance programs, accessibility for people with visual impairments, and eye care services. The themes were compared among members of each group. Results: Themes from educators/rehabilitators aligned well with those of participants with blindness but much less with ophthalmologists and those with low vision. Participants mentioned that assistive devices are not traditionally provided by the government. There was little mention of vision rehabilitation services, particularly from low vision participants. Additionally, participants with visual impairments mentioned a lack of sensitivity from teachers, employers, and transport drivers. Interestingly, none of the participants with visual impairments benefitted from financial assistance. Conclusions: Many of the barriers are societal, referring to the lack of understanding from the public in relation to employment, education, transportation, or the use of assistive devices. People with visual impairments and educators should be included in any policy decisions to promote equality for Peruvians with vision loss. Full article
(This article belongs to the Section Global Health)

28 pages, 4256 KiB  
Article
Accessible IoT Dashboard Design with AI-Enhanced Descriptions for Visually Impaired Users
by George Alex Stelea, Livia Sangeorzan and Nicoleta Enache-David
Future Internet 2025, 17(7), 274; https://doi.org/10.3390/fi17070274 - 21 Jun 2025
Viewed by 1056
Abstract
The proliferation of the Internet of Things (IoT) has led to an abundance of data streams and real-time dashboards in domains such as smart cities, healthcare, manufacturing, and agriculture. However, many current IoT dashboards emphasize complex visualizations with minimal textual cues, posing significant barriers to users with visual impairments who rely on screen readers or other assistive technologies. This paper presents AccessiDashboard, a web-based IoT dashboard platform that prioritizes accessible design from the ground up. The system uses semantic HTML5 and WAI-ARIA compliance to ensure that screen readers can accurately interpret and navigate the interface. In addition to standard chart presentations, AccessiDashboard automatically generates long descriptions of graphs and visual elements, offering a text-first alternative interface for non-visual data exploration. The platform supports multi-modal data consumption (visual charts, bullet lists, tables, and narrative descriptions) and leverages Large Language Models (LLMs) to produce context-aware textual representations of sensor data. A privacy-by-design approach is adopted for the AI integration to address ethical and regulatory concerns. Early evaluation suggests that AccessiDashboard reduces cognitive and navigational load for users with vision disabilities, demonstrating its potential as a blueprint for future inclusive IoT monitoring solutions. Full article
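A template-based fallback for the long descriptions the platform generates might look like the following. This is a hand-rolled sketch of the text-first idea, not AccessiDashboard's actual LLM pipeline, and the function name is invented for illustration:

```python
def describe_series(name, unit, values):
    """Produce a screen-reader-friendly narrative summary of a sensor
    series: range, latest reading, and trend. Purely template-based."""
    lo, hi = min(values), max(values)
    trend = ("rising" if values[-1] > values[0]
             else "falling" if values[-1] < values[0] else "stable")
    return (f"{name}: {len(values)} readings between {lo} and {hi} {unit}, "
            f"currently {values[-1]} {unit} and {trend}.")
```

Such deterministic summaries can serve as a privacy-preserving default, with LLM-generated context-aware descriptions layered on top where permitted.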
(This article belongs to the Special Issue Human-Centered Artificial Intelligence)

18 pages, 579 KiB  
Article
Sustainable AI Solutions for Empowering Visually Impaired Students: The Role of Assistive Technologies in Academic Success
by Ibrahim A. Elshaer, Sameer M. AlNajdi and Mostafa A. Salem
Sustainability 2025, 17(12), 5609; https://doi.org/10.3390/su17125609 - 18 Jun 2025
Cited by 3 | Viewed by 701
Abstract
This paper examines the impacts of AI-powered assistive technologies (AIATs) on the academic success of higher education university students with visual impairments. As digital learning contexts become progressively more prevalent in higher education institutions, it is critical to understand how these technologies foster the academic success of university students with blindness or low vision. Based on the Unified Theory of Acceptance and Use of Technology (UTAUT) model, the study conducted a quantitative research approach and collected data from 390 visually impaired students who were enrolled in different universities across Saudi Arabia (SA). Employing Partial Least Squares Structural Equation Modeling (PLS-SEM), the paper tested the influences of four UTAUT dimensions—Performance Expectancy (PE), Effort Expectancy (EE), Social Influence (SI), and Facilitating Conditions (FC)—on Academic Performance (AP), while also evaluating the mediating role of Behavioral Intention (BI). The results revealed a significant positive relationship between the implementation of AI-based assistive tools and students’ academic success. Particularly, BI emerged as a key mediator in these intersections. The results indicated that PE (β = 0.137, R2 = 0.745), SI (β = 0.070, R2 = 0.745), and BI (β = 0.792, R2 = 0.745) significantly affected AP. In contrast, EE (β = −0.041, R2 = 0.745) and FC (β = −0.004, R2 = 0.745) did not have a significant effect on AP. Concerning predictors of BI, PE (β = 0.412, R2 = 0.317), SI (β = 0.462, R2 = 0.317), and EE (β = 0.139, R2 = 0.317) were all positively associated with BI. However, FC had a significant negative association with BI (β = −0.194, R2 = 0.317). Additionally, the analysis revealed that EE, SI, and PE can all indirectly enhance Academic Performance by influencing BI. The findings provide practical insights for higher education policymakers, higher education administrators, and AI designers, emphasizing the need to improve the accessibility and usability of sustainable and long-term assistive technologies to better accommodate learners with visual impairments in higher education contexts. Full article
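The indirect effects mentioned at the end of the abstract follow standard mediation arithmetic: an indirect effect is the product of the two path coefficients along the mediated route, and a total effect adds the direct path. Using the betas reported above (a sketch of the arithmetic only, not a re-analysis):

```python
# Path coefficients as reported in the abstract
beta = {("PE", "BI"): 0.412, ("SI", "BI"): 0.462, ("EE", "BI"): 0.139,
        ("BI", "AP"): 0.792, ("PE", "AP"): 0.137}

def indirect(pred, mediator, outcome):
    """Mediated (indirect) effect = product of the two path coefficients."""
    return beta[(pred, mediator)] * beta[(mediator, outcome)]

pe_indirect = indirect("PE", "BI", "AP")     # 0.412 * 0.792
pe_total = beta[("PE", "AP")] + pe_indirect  # direct plus indirect
```

This makes concrete why BI mediates: PE's indirect contribution to AP (about 0.33) is more than double its direct path (0.137).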
(This article belongs to the Special Issue Artificial Intelligence in Education and Sustainable Development)

26 pages, 478 KiB  
Article
Physical Disabilities and Impediments to the Priesthood According to Orthodox Canon Law, with a Case Study of the Romanian Orthodox Church
by Răzvan Perșa
Religions 2025, 16(6), 789; https://doi.org/10.3390/rel16060789 - 17 Jun 2025
Viewed by 772
Abstract
This study examines, within the broader context of historical and cultural influences from Byzantine and Western canonical traditions, the canonical and theological treatment of physical disabilities as impediments to the priesthood within modern Orthodox Canon Law. It shows how traditional Orthodox Canon Law, particularly influenced by medieval Roman Catholic canonical understanding, has historically emphasised physical integrity as a requirement for ordination. The study critically examines historical and contemporary canonical attitudes towards candidates with hearing, speech, or visual impairments or with locomotor disability through the analysis of Apostolic canons, Canons of Ecumenical Councils, and later canonical sources. The methods include a critical canonical and historical analysis of primary sources such as the Canons, patristic writings, and synodal legislation, with particular reference to the initiatives of the Romanian Orthodox Church in the modern cultural and pastoral context. The study observes that, although such impairments continue to be recognised as canonical impediments according to traditional Orthodox law, contemporary ecclesial practice increasingly reflects a pastoral sensitivity that allows, in certain contexts, for the inclusion of persons with disabilities in ordained ministry. This is typically achieved through adaptations that preserve the integrity of liturgical function, such as assistance from co-ministers or specialised training. These developments, while not amounting to a formal canonical revision, signal a broader pastoral and ecclesiological openness toward the integration of persons with disabilities within the life of the Church. Full article
18 pages, 1981 KiB  
Article
Overcoming Challenges in Learning Prerequisites for Adaptive Functioning: Tele-Rehabilitation for Young Girls with Rett Syndrome
by Rosa Angela Fabio, Samantha Giannatiempo and Michela Perina
J. Pers. Med. 2025, 15(6), 250; https://doi.org/10.3390/jpm15060250 - 14 Jun 2025
Cited by 1 | Viewed by 500
Abstract
Background/Objectives: Rett Syndrome (RTT) is a rare neurodevelopmental disorder that affects girls and is characterized by severe motor and cognitive impairments, the loss of purposeful hand use, and communication difficulties. Children with RTT, especially those aged 5 to 9 years, often struggle to develop the foundational skills necessary for adaptive functioning, such as eye contact, object tracking, functional gestures, turn-taking, and basic communication. These abilities are essential for cognitive, social, and motor development and contribute to greater autonomy in daily life. This study aimed to explore the feasibility of a structured telerehabilitation program and to provide preliminary observations of its potential utility for young girls with RTT, addressing the presumed challenge of engaging this population in video-based interactive training. Methods: The intervention consisted of 30 remotely delivered sessions (each lasting 90 min), with assessments at baseline (A), after 5 weeks (B1), and after 10 weeks (B2). Quantitative outcome measures focused on changes in eye contact, object tracking, functional gestures, social engagement, and responsiveness to visual stimulus. Results: The findings indicate that the program was feasible and well-tolerated. Improvements were observed across all measured domains, and participants showed high levels of engagement and participation throughout the intervention. While these results are preliminary, they suggest that interactive digital formats may be promising for supporting foundational learning processes in children with RTT. Conclusions: This study provides initial evidence that telerehabilitation is a feasible approach for engaging young girls with RTT and supporting adaptive skill development. These findings may inform future research and the design of controlled studies to evaluate the efficacy of technology-assisted interventions in this population. Full article
(This article belongs to the Special Issue Ehealth, Telemedicine, and AI in the Precision Medicine Era)

13 pages, 4169 KiB  
Article
Application of Multimodal AI to Aid Scene Perception for the Visually Impaired
by Piotr Skulimowski
Appl. Sci. 2025, 15(12), 6442; https://doi.org/10.3390/app15126442 - 7 Jun 2025
Viewed by 821
Abstract
In this paper, the use of generative multimodal models for image analysis is proposed, with the goal of determining the selection of parameters for 3D scene segmentation algorithms in systems designed to assist blind individuals in navigation. AI algorithms enable scene type detection, lighting condition assessment, and determination of whether a scene can be used to obtain parameters necessary for system initialization, such as the orientation of imaging sensors relative to the ground. Additionally, the effectiveness of extracting selected scene parameters using four multimodal models is evaluated, and the results are compared to annotations made by a human. The obtained results highlight the potential of utilizing such models to enhance the functionality of systems belonging to the Electronic Travel Aid (ETA) group, particularly in terms of parameter selection for scene segmentation algorithms and scene presentation to visually impaired individuals. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

28 pages, 6479 KiB  
Article
Deep-Learning-Based Cognitive Assistance Embedded Systems for People with Visual Impairment
by Huu-Huy Ngo, Hung Linh Le and Feng-Cheng Lin
Appl. Sci. 2025, 15(11), 5887; https://doi.org/10.3390/app15115887 - 23 May 2025
Viewed by 619
Abstract
For people with vision impairment, various daily tasks, such as independent navigation, information access, and context awareness, may be challenging. Although several smart devices have been developed to assist blind people, most of these devices focus exclusively on navigation assistance and obstacle avoidance. In this study, we developed a portable system for not only obstacle avoidance but also identifying people and their emotions. The core of the developed system is a powerful and portable edge computing device that implements various deep learning algorithms for images captured from a webcam. The user can easily select a function by using a remote control device, and the system vocally reports the results to the user. The developed system has three primary functions: detecting the names and emotions of known people; detecting the age, gender, and emotion of unknown people; and detecting objects. To validate the performance of the developed system, a prototype was constructed and tested. The results reveal that the developed system has high accuracy and responsiveness and is therefore suitable for practical applications as a navigation and social assistive device for people with visual impairment. Full article
(This article belongs to the Special Issue Improving Healthcare with Artificial Intelligence)

13 pages, 2080 KiB  
Article
From Barriers to Breakthroughs: Rethinking Autonomous Vehicle Design for Visually Impaired Users
by Myungbin Choi, Taehun Kim, Seungjae Kim, Taejin Kim and Wonjoon Kim
Appl. Sci. 2025, 15(10), 5659; https://doi.org/10.3390/app15105659 - 19 May 2025
Viewed by 634
Abstract
The movement of visually impaired people is still limited, and they often require assistance from others. In this study, along with the development of autonomous driving technology, a future mobility design that will help visually impaired people move around conveniently was proposed. The Double-Diamond model, a representative UX evaluation method, was revised and used for the evaluation. After identifying the mobility problems of the visually impaired, we developed the problem into an idea and designed future mobility based on the idea. Then, it was delivered to visually impaired people, and a utility test was performed on the new concept and functions. Six functions were proposed in scenarios for each moving process, and the evaluation results showed that drop-off notification using multiple senses showed the highest utilization. It is hoped that the expansion of self-driving vehicles will increase the mobility of visually impaired people who have difficulty driving. Full article
(This article belongs to the Special Issue Current Status and Perspectives in Human–Computer Interaction)

29 pages, 1306 KiB  
Review
Artificial Vision Systems for Mobility Impairment Detection: Integrating Synthetic Data, Ethical Considerations, and Real-World Applications
by Santiago Felipe Luna-Romero, Mauren Abreu de Souza and Luis Serpa Andrade
Technologies 2025, 13(5), 198; https://doi.org/10.3390/technologies13050198 - 13 May 2025
Viewed by 1068
Abstract
Global estimates suggest that over a billion people worldwide—more than 15% of the global population—live with some form of mobility disability, underscoring the pressing need for innovative technological solutions. Recent advancements in artificial vision systems, driven by deep learning and image processing techniques, offer promising avenues for detecting mobility aids and monitoring gait or posture anomalies. This paper presents a systematic review conducted in accordance with ProKnow-C guidelines, examining key methodologies, datasets, and ethical considerations in mobility impairment detection from 2015 to 2025. Our analysis reveals that convolutional neural network (CNN) approaches, such as YOLO and Faster R-CNN, frequently outperform traditional computer vision methods in accuracy and real-time efficiency, though their success depends on the availability of large, high-quality datasets that capture real-world variability. While synthetic data generation helps mitigate dataset limitations, models trained predominantly on simulated images often exhibit reduced performance in uncontrolled environments due to the domain gap. Moreover, ethical and privacy concerns related to the handling of sensitive visual data remain insufficiently addressed, highlighting the need for robust privacy safeguards, transparent data governance, and effective bias mitigation protocols. Overall, this review emphasizes the potential of artificial vision systems to transform assistive technologies for mobility impairments and calls for multidisciplinary efforts to ensure these systems are technically robust, ethically sound, and widely adoptable. Full article
