1. Introduction
Blind students encounter notable extrinsic challenges in the university environment that impact their academic and social integration. In particular, they find it difficult to navigate spatially complex campus buildings and layouts, access learning materials in alternative/accessible formats, and engage socially with peers and lecturers. Traditional aids such as the white cane and guide dog, while helpful, often prove inadequate in the dynamic and at times cluttered university environment, which limits obstacle avoidance and orientation.
To address these extrinsic challenges and support the blind student’s transition towards independence, this research develops a smart glasses system and a complementary mobile app. The smart glasses combine Internet of Things (IoT) technologies, including ultrasonic and RFID sensors, to provide real-time environmental information, obstacle avoidance, and indoor navigation capability. The mobile app provides further support by offering lesson reminders, timetable access, and emergency contact features.
The main goal of the smart glasses system is to enhance independence, increase access, and enrich the overall university experience for students with visual impairments. By improving navigation and accessibility to information, the smart glasses system aims to promote inclusivity and a supportive learning experience.
This paper describes the design, implementation, and a thorough technical evaluation of the proposed smart glasses system. The primary objective of this evaluation is to rigorously validate the system’s foundational technical capabilities and performance against existing assistive technologies. We will show that our multi-sensor fusion, optimized algorithms, and integrated functionalities provide significant advancements over fragmented or less robust prior approaches. The subsequent sections provide a comprehensive overview of the system’s architecture, core functionalities, and the results obtained from its evaluation.
2. Literature Review
Over the last decade, considerable advances have been achieved in the field of assistive technologies for visually impaired people, particularly in the realm of smart glasses, whose potential for improving independence and accessibility has been recognized. This section critically examines existing smart glasses projects, pinpoints their inherent limitations in responding to the multiple needs of visually impaired university students, and consequently delineates the specific research gaps that our ‘smart glasses for blind students’ project aims to address.
2.1. Existing Assistive Smart Glasses Projects and Their Limitations
Many smart glasses projects have been designed to assist people with visual impairments, leveraging diverse technologies. While these initiatives present exciting functionalities, a critical investigation reveals common limitations in fully meeting the complex needs of users in dynamic environments.
2.1.1. Navigation and Obstacle Detection Systems
Navigating effectively and safely in dynamic and complex environments, such as university campuses, remains a major challenge for visually impaired people. Traditional aids such as white canes or guide dogs provide basic support but do not enable real-time global spatial awareness. Technological aids have endeavoured to compensate for this deficiency. Early prototypes of smart glasses relied on ultrasonic sensors to detect and evade obstacles (Meliones [1]).
Although simple and cost-effective, these systems have low accuracy in cluttered environments and have difficulty identifying individual objects, often behaving merely as thresholds [2]. Their susceptibility to missing transparent or dynamic obstacles poses a significant safety risk. More evolved systems integrate computer vision cameras or LiDAR to further improve obstacle detection [3,4]. Nevertheless, camera-based systems can be computationally expensive and prone to latency due to their reliance on cloud computing, and they may not cope well with new environments. Ramani et al. [5] and Baig et al. [6] emphasize the need for robust testing, while accuracy is discussed by Ashiq et al. [2] and in [7].
LiDAR systems, while offering high precision, can be affected by adverse weather conditions and are often associated with high sensor costs. For indoor navigation, GPS is ineffective, requiring a variety of indoor positioning methods, the most common of which are Bluetooth beacons [8] and Wi-Fi triangulation. Yet achieving seamless indoor–outdoor transitions and high-precision, detailed guidance in complicated indoor layouts such as university buildings remains a considerable challenge [9,10], which underscores the lack of resilience in current systems.
2.1.2. Object Recognition and Information Access Systems
Visual information typically comes in the form of signs, printed documents, and environmental cues, and access to it is critical to autonomy beyond mobility alone. Developments in object recognition and scene interpretation using artificial intelligence have produced early accessible solutions that provide audio feedback from visual input. These methods are relatively new, still in development, and computationally costly; they typically rely on cloud-based processing, which introduces latency and limits real-time use [5]. Another challenge is exploring new objects and environments without a time-consuming relearning process. Smart glasses coupled with text-to-speech (TTS) and image captioning can deliver audible output of text or image descriptions. That said, reading and recognizing text across numerous typefaces and layouts [11], and describing more complex images with context [12], remain very real challenges.
2.1.3. Holistic Support and User-Centric Design Considerations
Most existing solutions focus on a limited number of isolated tasks, often ignoring the broad, customized needs of users, particularly in contexts such as higher education. The importance of creating user-friendly, usable, and accessible features in smart glasses for blind people has been noted consistently [13].
Table 1 provides a comparative overview of several representative smart glasses systems, highlighting their key features and inherent limitations that our proposed system aims to overcome.
2.2. Specific Needs of Visually Impaired Students in University Settings
Visually impaired students face unique and pronounced challenges within the university environment that go beyond general navigation. These specific needs demand tailored assistive technologies. One primary area is navigation and wayfinding, which involves not just getting from building to building but precisely finding specific classrooms, lecture halls, labs, or even seats within a large, often changing, building layout. Another critical need is access to information and learning materials, as students require immediate access to traditional learning materials (textbooks, handouts, whiteboard content, and presentation slides) in accessible formats, often in real time. Furthermore, social interaction and inclusion present significant hurdles, as recognizing faces, interpreting social cues, and initiating spontaneous conversations or group activities can be challenging, impacting individuals’ ability to fully participate in university social life and collaborative learning. Finally, time management and organization can be difficult without visual cues, as keeping track of diverse course schedules, deadlines, and extracurricular activities creates administrative and learning challenges.
2.3. Identified Research Gaps and Our Contributions
As the previous literature review indicates, there is no comprehensive and integrated assistive technology platform with which to address the unique and multifaceted challenges of visually impaired students in the complex environment of a university, despite the progress that continues to be made in some areas. The fragmented approaches that do exist do not address the holistic needs of this unique population.
This project proposes a novel smart glasses system that directly addresses these critical gaps through the following key contributions.
Holistic spatial awareness using multiple sensors: Our platform integrates ultrasonic, LiDAR, and RFID sensors to provide real-time 360° obstacle detection and accurate, continuous indoor localization, overcoming the limited context awareness and inaccurate indoor navigation that undermine safety and confidence in complex university layouts.
Optimized real-time dynamic navigation: We present an optimized Dijkstra’s algorithm designed for use in dynamic indoor environments, allowing for real-time navigation through continuous rerouting around moving obstacles or changing conditions. This represents a significant upgrade over static navigation solutions and is essential for safe movement in changing environments.
AI-powered environmental perception and accessible information synthesis: The system utilizes image recognition and audio conversion optimized for processing speed, so that environmental perception delivers contextually appropriate information in real time from signs, books, and objects, addressing the latency and accuracy issues of previous systems.
Gamified platform for enhanced social and academic engagement: Going beyond mere physical assistance, we integrate a unique accompanying gamified platform that offers lesson/timetable reminders, emergency support, and features designed to aid social interaction. This approach encourages greater independence, social inclusion and better time management, thereby meeting the wider needs of university life.
By leveraging these technologies and focusing on the specific context of university students, we seek to provide a genuinely comprehensive solution to many of the challenges faced in higher education, thereby transforming the university experience and improving independence for visually impaired people.
3. System Methodology: Hardware Platform and Core Sensing
This section presents a comprehensive description of the proposed smart glasses system, including overall architecture, hardware and software components used, and how they are typically integrated and function in the system. It provides the technical information to understand the functionality and implementation of the system.
3.1. Smart Glasses Architecture and Communication
The architectural design of the smart glasses is built to enhance access and autonomy for visually impaired students. The system’s hardware and software components work in tandem to provide timely assistance. The overarching system architecture, illustrating the entire platform ecosystem, is presented in Figure 1.
Smart glasses act as the main hub to capture data from various onboard sensors, including the GPS module (for location data), multiple ultrasonic sensors (for object distance), and the camera (for visual input). This raw data is then initially processed and analyzed by the onboard Raspberry Pi.
For advanced processing, decision making, and access to comprehensive mapping information or user profiles, the partially processed data is sent to a remote application server (cloud-based or local). This communication primarily occurs via Wi-Fi [14] (operating on dual-band 2.4 GHz and 5 GHz frequencies), chosen for its high bandwidth, which is necessary for real-time video streaming, and its ubiquitous presence within university campus infrastructure. The server then sends the relevant information back to the smart glasses for presentation to the user or uses the text-to-speech engine to convert text into speech that is transmitted through bone conduction headphones. This bidirectional flow of information enables a continuous feedback loop. User commands, captured through the onboard microphone or a mobile app interface, further shape the system’s behaviour, allowing for greater information access and system customization. A Firebase database serves as system memory for the secure storage of user data and mapping information for navigation goals, as illustrated in Figure 2.
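To make this data exchange concrete, the minimal sketch below shows how a snapshot of partially processed sensor data might be posted from the onboard Raspberry Pi to the application server over Wi-Fi, with the server’s reply carrying the text to be spoken. The endpoint URL, JSON field names, and the push_sensor_snapshot() helper are illustrative assumptions rather than the system’s actual interface.

```python
# A hedged sketch of the glasses-to-server exchange; the endpoint URL and
# payload schema are assumptions made for illustration only.
import requests

SERVER_URL = "https://assist-platform.example.edu/api/v1/frame"   # hypothetical

def push_sensor_snapshot(distances_cm, gps_fix, rfid_tag=None):
    payload = {
        "ultrasonic_cm": distances_cm,   # e.g. {"front": 120, "left": 45, ...}
        "gps": gps_fix,                  # {"lat": ..., "lon": ...} or None indoors
        "rfid_tag": rfid_tag,            # last tag ID read, if any
    }
    # Partially processed data is sent over campus Wi-Fi; the server replies
    # with text to be spoken through the bone conduction headphones.
    resp = requests.post(SERVER_URL, json=payload, timeout=2)
    resp.raise_for_status()
    return resp.json().get("speech_text", "")
```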
3.2. Hardware Components
The core of our assistive technology is a custom-designed smart glasses prototype that integrates multiple sensor modalities and processing capabilities. The selection of each hardware component was carefully guided by considerations of real-time performance, accuracy, power efficiency, compactness, and wearer comfort.
This device incorporates a carefully selected array of integrated hardware modules, whose detailed specifications and primary functions are summarized in Table 2 and whose physical arrangement and integration on the prototype are visually represented in Figure 3.
Table 2 presents a detailed overview of the main hardware components incorporated into the smart glasses, as well as their main specifications and functions.
3.3. Sensor Integration and Fusion
To achieve comprehensive 360° obstacle detection and robust environment perception, our smart glasses system employs a synergistic combination of distinct sensor types, as detailed in Table 2 (Section 3.2). The data streams from these heterogeneous sensors (ultrasonic for rapid proximity sensing, LiDAR for detailed environmental mapping, and RFID for precise indoor localization) are continuously collected and strategically fused. This multimodal fusion is critical for constructing a highly reliable, real-time spatial model of the user’s immediate surroundings, effectively compensating for the individual limitations and inherent noise of each modality.
Sensor Fusion Methodology: Extended Kalman Filter (EKF)
The raw, often asynchronous, data from the integrated ultrasonic, LiDAR, and RFID sensors are continuously transmitted to the onboard processing unit. An Extended Kalman Filter (EKF) is utilized as the primary sensor fusion technique to merge this multimodal information. The EKF is vital in robustly estimating the user’s dynamic state (e.g., position and orientation) in real time by optimally combining noisy data from multiple sources of information.
This intelligent fusion process leverages the complementary strengths of each sensor to overcome individual weaknesses: RFID data provides highly accurate, absolute positional anchors to correct drift; LiDAR contributes a dense and precise understanding of the environment’s geometry for obstacle mapping; and ultrasonic sensors deliver rapid, close-range proximity alerts essential for immediate collision avoidance, especially with dynamic obstacles. The EKF synthesises various rates and types of input data to produce a highly reliable, real-time obstacle map and accurate user location. Fully fused environmental data is thus the most reliable input for subsequent navigation and perception modules.
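The short sketch below illustrates the prediction/update cycle of such a filter for a simple 2D constant-velocity state, with UWB fixes treated as noisier position measurements and RFID anchor reads as more precise corrections. The state layout, noise values, and measurement handling are assumptions for illustration; because the position measurements here are modelled linearly, the update reduces to a standard Kalman step, whereas the deployed EKF also linearizes nonlinear range/TDOA measurements.

```python
# A minimal sketch of the fusion step under the stated assumptions; not the
# system's actual firmware.
import numpy as np

class SimpleFusionFilter:
    def __init__(self, dt=0.1):
        self.x = np.zeros(4)                      # state: [px, py, vx, vy]
        self.P = np.eye(4)                        # state covariance
        self.F = np.eye(4)                        # constant-velocity transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.Q = np.eye(4) * 0.01                 # process noise (assumed)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)  # we observe position only
        self.R_rfid = np.eye(2) * 0.05            # RFID anchors: low noise
        self.R_uwb = np.eye(2) * 0.20             # UWB fixes: higher noise

    def predict(self):
        # Propagate the state with the motion model (dead-reckoning step).
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, R):
        # Correct the prediction with a position measurement z = [px, py].
        y = z - self.H @ self.x                   # innovation
        S = self.H @ self.P @ self.H.T + R        # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

ekf = SimpleFusionFilter()
ekf.predict()
ekf.update(np.array([2.0, 3.5]), ekf.R_uwb)       # continuous UWB fix
ekf.update(np.array([2.1, 3.4]), ekf.R_rfid)      # RFID anchor corrects drift
print(ekf.x[:2])                                  # fused position estimate
```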
4. Software Components and Algorithmic Design
This section complements the hardware description by detailing the intelligence layers of the system, including navigation algorithms, perception capabilities, user interaction, and the remote platform functionalities.
4.1. Navigation and Localization
This module is the core intelligence responsible for accurately determining the user’s position within the university environment and calculating dynamic, safe pathways. It uses the rich environmental data provided by the sensor fusion module (Section 3.3) along with accurate localization techniques.
4.1.1. Precise Indoor Localization
As GPS is ineffective indoors, our system integrates several technologies to achieve high-precision indoor tracking. RFID-based localization is utilised, using the built-in RFID reader to detect proximity to and read data from passive RFID tags placed at strategic, known, and precise locations on the university campus. These tags (e.g., at room entrances, corridor junctions, and key points of interest) serve as highly accurate localization anchors. This is further augmented by UWB (ultra-wideband) technology for real-time, continuous positioning between RFID tag points. UWB receivers are strategically deployed across university building ceilings and hallways, and the smart glasses transmit UWB signals, with receivers recording Time Difference of Arrival (TDOA) or Time of Flight (ToF) data. A fingerprinting approach for RFID/UWB, augmented with IMU data for dead reckoning between beacon points, processes these multimodal signals. This blending of RFID pinpoint accuracy with continuous UWB tracking and IMU data provides an accurate and robust localization solution, determining the user’s location anywhere on campus at room-level or point-of-interest accuracy.
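As a simplified illustration of this blending, the sketch below snaps the position estimate to a known anchor whenever a passive RFID tag is read and dead-reckons with IMU step data between reads. The tag identifiers, anchor coordinates, and step model are hypothetical; the full system additionally fuses UWB TDOA/ToF fixes as described above.

```python
# A minimal sketch of RFID anchoring plus IMU dead reckoning, under assumed
# tag positions; not the system's actual localization pipeline.
import math

RFID_ANCHORS = {
    "TAG_ROOM_205": (12.0, 4.5),     # known tag coordinates in metres (assumed)
    "TAG_CORRIDOR_B": (20.0, 4.5),
}

class IndoorLocalizer:
    def __init__(self):
        self.position = None                     # (x, y) in the building frame

    def on_rfid_read(self, tag_id):
        # An RFID read provides an absolute anchor: reset the estimate to remove drift.
        if tag_id in RFID_ANCHORS:
            self.position = RFID_ANCHORS[tag_id]

    def on_imu_step(self, step_length_m, heading_rad):
        # Between anchors, dead-reckon using IMU-derived step length and heading.
        if self.position is None:
            return
        x, y = self.position
        self.position = (x + step_length_m * math.cos(heading_rad),
                         y + step_length_m * math.sin(heading_rad))

loc = IndoorLocalizer()
loc.on_rfid_read("TAG_ROOM_205")                 # absolute fix at a doorway tag
loc.on_imu_step(0.7, 0.0)                        # one step heading along +x
print(loc.position)                              # drift accumulates until the next tag
```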
4.1.2. Dynamic Pathfinding and Guidance
The system’s fundamental pathfinding features are activated to deliver dynamic, safe, and optimal routes once the user’s exact location has been determined. Pathways, rooms, and areas of interest within the university environment are represented as a graph, with nodes for locations and weighted edges for traversable paths (e.g., weighted by distance or accessibility factors). As detailed in Section 4.1.3 (Dijkstra’s Algorithm Optimisation), the system applies a specifically optimised Dijkstra’s algorithm for pathfinding. The algorithm was designed to address the issues that arise with standard static pathfinding: because it receives real-time information on the environment through sensor integration and fusion (Section 3.3), the weights of the graph edges can change in real time. For example, if an obstacle is found on a path and temporarily blocks a segment, that segment’s weight is instantly set to a very high value (or infinity) to mark it as impassable. Leveraging optimizations such as a priority queue and graph simplification (as explained in Section 4.1.3), the algorithm ensures extremely low-latency path recalculations. This rapid response is essential for instantly navigating around newly appeared obstacles or changing environmental circumstances. The system is designed to provide a degree of predictive navigation by continually integrating live sensor data, allowing the navigation path to be adjusted before the user physically reaches a newly formed obstacle, which dramatically increases safety.
4.1.3. Optimized Dijkstra’s Algorithm (Detailed Implementation)
To achieve the necessary real-time performance, the system employs an enhanced version of Dijkstra’s algorithm with the following key optimizations. The pseudocode for this optimized algorithm is presented in Figure 4.
First, the system uses a priority queue (more precisely, a min-heap) to manage the unvisited nodes rather than an ordinary list or set as in the standard Dijkstra’s algorithm (Optimisation 1). By allowing the algorithm to retrieve the node with the smallest distance in O(log n) time—where ‘n’ is the number of nodes in the graph—this improvement offers a notable performance advantage. Depending on the size and architectural complexity of the building, ‘n’ for a typical university building floor can vary from 100 to 500 nodes. To guarantee instant obstacle avoidance and fluid movement, navigation instructions in real-time assistive technology must be delivered with a very low latency, ideally within tens of milliseconds. Thus, in dynamic, safety-critical scenarios, this improvement in time complexity is essential for ensuring safety and preserving responsiveness.
Second, the system reduces the complexity of the environment representation by using a graph simplification technique (Optimisation 2). In contrast to the standard Dijkstra algorithm, which works on the full, detailed graph, this system pre-processes the graph to remove unnecessary or irrelevant information. For instance, edges that represent non-navigable areas (like solid walls) may be eliminated, or sequences of nodes that represent a long, straight hallway may be compressed into a single edge. The IsEssential(edge) function identifies whether an edge represents a crucial path or connection point that must be conserved for accurate and safe navigation. This function significantly reduces the number of nodes and edges that the algorithm has to process, speeding up route calculation. For very fine-grained maps, this simplification may result in a very small distance approximation, but the computational efficiency gain is crucial for real-time navigation, and the approximation is insignificant when weighed against the advantages for user responsiveness.
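A minimal sketch of these two ideas is given below: a min-heap priority queue drives the search, and a dynamically blocked edge is modelled by setting its weight to infinity so that the next query reroutes around it. The node names and edge weights mirror the illustrative scenario of Figure 5 but are otherwise hypothetical; this is a sketch, not the system’s production map or code.

```python
# Priority-queue Dijkstra with dynamic edge weights, under assumed graph data.
import heapq
import math

def dijkstra(graph, start, goal):
    # graph: {node: {neighbour: weight}}; blocked edges carry weight = inf.
    dist = {node: math.inf for node in graph}
    prev = {}
    dist[start] = 0.0
    pq = [(0.0, start)]                            # min-heap keyed by distance
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist[u]:
            continue                               # stale queue entry, skip
        for v, w in graph[u].items():
            if math.isinf(w):
                continue                           # edge blocked by an obstacle
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path by walking predecessors back from the goal.
    path, node = [], goal
    while node in prev or node == start:
        path.append(node)
        if node == start:
            break
        node = prev[node]
    return list(reversed(path)), dist[goal]

graph = {
    "Start": {"O1": 2}, "O1": {"O3": 3},
    "O3": {"O4": 2, "O6": 4}, "O4": {"O5": 2},
    "O5": {"End": 1}, "O6": {"O7": 2}, "O7": {"End": 2}, "End": {},
}
print(dijkstra(graph, "Start", "End"))             # initial optimal path
graph["O4"]["O5"] = math.inf                       # obstacle X blocks O4 -> O5
print(dijkstra(graph, "Start", "End"))             # re-route via O6 and O7
```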
Figure 5 further illustrates how the system recalculates and provides an alternate path in response to an unexpected obstacle, highlighting the efficacy of this dynamic rerouting capability in response to real-time environmental changes.
This diagram illustrates the practical advantage of our modified Dijkstra’s algorithm in real-time navigation. The system initially calculates an optimal path (shown as a solid blue line in the figure). Our algorithm quickly calculates a new, safe and optimal path (represented by a dotted red line) when a dynamic obstacle X (such as a cleaning trolley or a group of students) is encountered on that path, providing continuous, safe guidance for the visually impaired user.
Explanation of the Scenario this Diagram Illustrates
Initial Optimal Path (Before Obstacle): To start, the system finds the shortest path from (Start) to (End) to be (Start) -> (O1) -> (O3) -> (O4) -> (O5) -> (End), shown with blue arrows along those segments in Figure 5.
Dynamic Obstacle Appears: Suddenly, a dynamic obstacle X appears on the path segment between (O4) and (O5), making it temporarily impassable.
Re-routing by Modified Dijkstra’s Algorithm: The smart glasses system, through its real-time sensor fusion, immediately detects this new obstacle. The modified Dijkstra’s algorithm quickly recalculates a new optimal and safe path to the destination, for example (Start) -> (O1) -> (O3) -> (O6) -> (O7) -> (End) (in Figure 5, the new path is drawn with red dashed arrows, showing that it avoids the blocked segment).
The dynamic rerouting process described (Figure 6) shows how smart glasses can be used to provide adaptive and intelligent navigation assistance. The system can safely and effectively guide users around dynamic obstacles by combining real-time sensor fusion with a modified version of the Dijkstra algorithm. The navigation experience in various settings, such as shopping centres, hospitals and university campuses, could be greatly enhanced by this technology.
4.2. Environment Perception
This module extends the system’s awareness by enabling the smart glasses to understand and interpret visual information from the surroundings and convert it into accessible formats for the user, augmenting the data from the direct spatial awareness sensors (Figure 7). A high-resolution miniature camera (as specified in Section 3.2) captures real-time video streams. These streams are processed by a convolutional neural network (CNN) based on the YOLOv5 architecture for efficient inference. The model recognises a broad range of objects, signs, and environmental features typical of a university, having been trained on a variety of datasets, including the COCO dataset supplemented with university-specific objects such as lecture hall signs, cafeteria items, and campus landmarks. While initial object detection for low-latency alerts occurs on the onboard Raspberry Pi, more complex scene understanding and detailed image captioning are offloaded to the remote platform server via Wi-Fi, balancing real-time needs with computational resources. Recognized objects, text deciphered from signs, and generated image captions are then converted into natural-sounding audio feedback using the Google Cloud text-to-speech API and delivered via bone conduction headphones for non-obtrusive information access.
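The sketch below outlines the onboard detection-to-audio path in this style: a pretrained YOLOv5 model (loaded via torch.hub) detects objects in a camera frame, and detected labels above a confidence threshold are spoken aloud. The confidence threshold, the describe_frame() helper, and the use of the offline pyttsx3 engine as a stand-in for the Google Cloud text-to-speech service are assumptions made for a self-contained example.

```python
# A hedged sketch of onboard detection plus spoken feedback; thresholds and
# the TTS backend are assumptions, not the deployed configuration.
import torch
import pyttsx3

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
tts = pyttsx3.init()

def describe_frame(frame):
    # frame: an RGB/BGR numpy image captured from the glasses camera.
    results = model(frame)
    detections = results.pandas().xyxy[0]        # one row per detected object
    spoken = set()
    for _, det in detections.iterrows():
        if det["confidence"] >= 0.5 and det["name"] not in spoken:
            spoken.add(det["name"])
            tts.say(f"{det['name']} ahead")      # brief audio cue per object class
    tts.runAndWait()

# Example usage (requires OpenCV on the device):
# describe_frame(cv2.VideoCapture(0).read()[1])
```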
4.3. User Interaction and Haptic Feedback
The system provides users with intuitive control and multimodal feedback that goes beyond audio alone, allowing for an accessible and responsive interaction experience. Voice commands, captured through microphones embedded in the glasses frame, serve as the primary method of interaction.
The user’s voice commands are transcribed into text by a speech-to-text (STT) engine, and an intent recognition module interprets the request using natural language processing (NLP) to trigger the appropriate system functionality (for example, “Navigate to library”, “What is this object?”, or “Call emergency contact”). To optimize information delivery, the system supports audio personalization, allowing configuration of parameters such as speech rate, volume, pitch, and sound effects based on individual user settings. In addition to audio cues, two vibration motors embedded in the temples of the glasses frame provide essential haptic feedback. Distinct vibration patterns and intensities convey specific non-auditory information, such as directional cues (a prolonged vibration signals a turn), proximity warnings (vibration frequency increases as an obstacle gets closer), or confirmation of a voice command, communicating key information and supporting user safety.
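As an illustration of how spoken commands and haptic patterns could be tied together, the sketch below maps a few recognized phrases to intents and drives the two temple-mounted vibration motors through GPIO pulses. The pin numbers, pulse patterns, and keyword matching are hypothetical simplifications of the STT/NLP pipeline described above.

```python
# A minimal sketch of intent-to-haptics mapping on the Raspberry Pi; pins and
# patterns are assumed values for illustration only.
import time
import RPi.GPIO as GPIO

LEFT_MOTOR, RIGHT_MOTOR = 17, 27            # assumed BCM pins, one per temple
GPIO.setmode(GPIO.BCM)
GPIO.setup([LEFT_MOTOR, RIGHT_MOTOR], GPIO.OUT)

def vibrate(pin, pulses, on_s=0.15, off_s=0.1):
    # Pulse a vibration motor a given number of times.
    for _ in range(pulses):
        GPIO.output(pin, GPIO.HIGH); time.sleep(on_s)
        GPIO.output(pin, GPIO.LOW);  time.sleep(off_s)

def handle_transcript(text):
    # Very small keyword-based stand-in for the NLP intent recognition module.
    text = text.lower()
    if "navigate to" in text:
        vibrate(RIGHT_MOTOR, 1, on_s=0.5)   # long pulse = command acknowledged
        return ("NAVIGATE", text.split("navigate to", 1)[1].strip())
    if "what is this" in text:
        return ("DESCRIBE", None)
    if "call emergency" in text:
        vibrate(LEFT_MOTOR, 3)              # triple pulse = emergency confirmed
        return ("EMERGENCY", None)
    return ("UNKNOWN", None)

# Example: handle_transcript("navigate to library") -> ("NAVIGATE", "library")
```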
4.4. Holistic Assistance Platform
Beyond the glasses’ immediate onboard capabilities, the companion platform offers expanded functionalities by utilising remote computing power to improve support and handle data in a comprehensive manner. This platform, which is an essential part of the “IoT platform” shown in Figure 1, runs on a Google Cloud-based server architecture and securely connects to the smart glasses via Wi-Fi.
The platform incorporates a Gamification Module (explained in more detail in Section 5) that applies the concepts of challenge and exploration. Its goal is to encourage users to explore unfamiliar areas or identify objects by rewarding them with points or achievements. Timetables, lesson reminders, and assignment due dates are all managed by its educational functionality, which can also integrate with the university’s Learning Management System (LMS) to automatically retrieve schedules. It pushes contextual, location-aware information to the glasses based on the user’s schedule and proximity (e.g., “You are near the physics lab, your next class is in room 205 in 10 min”). A critical emergency support feature enables users to quickly trigger emergency calls or send their precise location to predefined contacts via voice command or a dedicated physical button on the glasses. Furthermore, the platform securely manages user data (anonymized for privacy), including navigation patterns, which are used to continuously refine and adapt navigation routes and educational content, enabling the system to evolve based on individual usage.
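A minimal sketch of this location-aware reminder logic is shown below; the schedule format, the 15-minute lookahead, the proximity radius, and the contextual_reminder() helper are illustrative assumptions rather than the platform’s actual implementation.

```python
# A hedged sketch of contextual reminders under assumed data formats.
from datetime import datetime, timedelta

def contextual_reminder(user_location, landmarks, schedule, now=None, radius_m=15):
    # landmarks: {name: (x, y)}; schedule: [{"room": ..., "start": datetime, "near": name}]
    now = now or datetime.now()
    for entry in schedule:
        minutes_left = (entry["start"] - now) / timedelta(minutes=1)
        lx, ly = landmarks[entry["near"]]
        ux, uy = user_location
        close_enough = ((ux - lx) ** 2 + (uy - ly) ** 2) ** 0.5 <= radius_m
        if 0 < minutes_left <= 15 and close_enough:
            return (f"You are near the {entry['near']}, your next class is in "
                    f"room {entry['room']} in {int(minutes_left)} min")
    return None
```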
5. Gamification Principles and User-Centric Design Philosophy
The project is inspired by the interactive features of contemporary video games. Just as gamification encourages players to discover uncharted territory and overcome environmental adversity using technology, the smart glasses are designed to enable visually impaired students to explore university sites with more confidence and independence, as shown in Figure 8.
5.1. Exploration
Many video games allow players to discover expansive territories and uncover hidden pathways and possibilities. Similarly, visually impaired individuals often face challenges in discovering their immediate surroundings, such as a house or university campus. Our smart glasses system aids in wayfinding by providing real-time information and directional assistance, allowing for exploration of environments with greater technical support for confidence and freedom.
5.2. Problem Solving
Just as video games challenge players to solve problems, overcome obstacles, and develop strategic thinking through various tools, our smart glasses system is designed to enable visually impaired individuals to solve problems in a university setting. The system promotes autonomous decision making by providing instant environmental information, obstacle recognition, and orientation assistance. This feature leverages artificial intelligence (AI) algorithms for object recognition, scene analysis, and informed decision making, thereby technically enhancing orientation and interaction capabilities.
5.3. Technology Empowerment
Video games demonstrate the power of technology to transform and enhance players’ skills. Gamers commonly use a variety of tools, gadgets, and devices to accomplish objectives, solve puzzles, and enable engaging interactivity. Similarly, our smart glasses system integrates comparable technologies to technically augment the abilities of visually impaired individuals. These smart glasses are designed to improve interaction with their surrounding environment through sensors and AI-based algorithms. The concepts of assistive technology and accessibility are in line with this use of technology to increase user capabilities.
6. System Evaluation and Findings
This section presents the technical performance evaluation of the proposed smart glasses system, covering the features essential for assisting people with visual impairments. The methodology for this evaluation is comprehensively described in Section 6.1. The results demonstrate early, promising gains on several key technical attributes relative to traditional methods and existing technology solutions.
6.1. Evaluation Methodology
The evaluation of the system was performed through a series of rigorous technical tests within a controlled testbed environment. The testbed allowed us to evaluate the system safely, repeatably, and under controlled conditions, to exercise all core functionalities systematically, and to collect precise performance metrics. The evaluation plan comprised the following specific steps.
Defined Test Scenarios: In the controlled testbed, we carried out predetermined obstacle detection tasks, information retrieval tests, and navigation activities. To mimic actual university settings, these scenarios were created to methodically change specific features. We changed both the speed and trajectory of dynamic obstacles and the density of static barriers (such as chairs and tables) for navigation. We introduced items with varying textures and reflectivity, as well as changed the lighting conditions for the vision testing. To evaluate the accuracy of our RFID/UWB system, we assessed orientation and distance for localisation.
Data Acquisition: We logged all sensor readings, algorithm outputs, processing times, and system responses automatically. Specifically, for navigation, the system logged the calculated path, real-time rerouting decisions, and the time taken to reach a destination. For obstacle detection, it logged the sensor data and the system’s response (e.g., a haptic or audio alert). All these data points were time-stamped to allow for a precise analysis of latency.
Performance Metrics: We collected quantitative metrics such as accuracy rates for detection, recognition, and localization, as well as latency for rerouting and information retrieval, and error rates (e.g., false positives/negatives). To ensure a direct connection between our methods and results, the metrics in Table 3 directly correspond to these measurements. For instance, the “Obstacle detection accuracy” was measured by comparing the system’s reported obstacles to the known obstacle locations in the testbed. “Navigation time” was measured algorithmically, logging the duration from a starting node to a destination node, allowing for a direct comparison of our optimized Dijkstra algorithm’s efficiency against standard pathfinding. A minimal sketch of how such metrics can be computed from the logs follows this list.
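For illustration, the small sketch below shows how two of these metrics could be derived from the time-stamped logs; the log record fields are assumptions, not the actual logging schema.

```python
# A hedged sketch of metric computation from assumed log record formats.
def detection_accuracy(logs, ground_truth):
    # logs: [{"obstacle_id": ..., "detected": bool}]; ground_truth: set of obstacle IDs.
    hits = sum(1 for rec in logs
               if rec["detected"] and rec["obstacle_id"] in ground_truth)
    return hits / len(ground_truth) if ground_truth else 0.0

def mean_reroute_latency_ms(logs):
    # logs: [{"obstacle_ts": t0, "new_path_ts": t1}] with timestamps in seconds.
    delays = [(rec["new_path_ts"] - rec["obstacle_ts"]) * 1000 for rec in logs]
    return sum(delays) / len(delays) if delays else 0.0
```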
To provide context for the system’s performance, the results were compared against published benchmarks for traditional assistive methods and existing technological solutions from the literature. This comparison highlights the advancements provided by our multi-sensor fusion, optimized algorithms, and integrated functionalities over prior approaches.
While we acknowledge that a comprehensive user study is a crucial next step to validate the system’s ease of use and long-term impact, this technical evaluation provides the foundational evidence needed to prove its feasibility and superiority. The logistical and ethical challenges of conducting a user study prevent it from being included in this paper, but it remains a primary goal for future work.
6.2. System Performance and Comparative Analysis
The smart glasses system’s technical performance was rigorously analyzed, yielding quantitative data across key functionalities.
Table 3 summarizes these evaluation results, providing a direct comparison with the objective performance metrics of traditional methods and existing technological solutions discussed in the literature. Figure 9 visually depicts the comparative technical performance of our smart glasses against these benchmarks across selected features.
Through a systematic technical evaluation of the smart glasses system, we collected a wealth of quantitative information across all key functions, demonstrating a clear technical advantage over traditional methods and existing technological solutions. These performance advantages are substantiated by a multi-tiered testing regimen; the results of our unit, component, and system testing are comprehensively presented in Table 3.
In obstacle detection and avoidance, our system achieved 90% accuracy, significantly higher than the estimated below 50% for traditional methods and 60–80% for existing camera/ultrasonic smart glasses. This improved detection yielded a 20% decrease in testbed collisions and a 15% decrease in algorithm-measured navigation time in simulated crowded and cluttered environments, showing a direct avenue toward improved safety on busy campuses.
For navigation and wayfinding, the system’s optimized pathfinding capabilities resulted in a 40% reduction in algorithm-calculated navigation time compared to traditional methods using pre-recorded routes. Faster, more reliable travel translates into greater independence and a more rewarding university experience. More accurate location data also reduced re-routing errors by 60%, demonstrating the effectiveness of our dynamic navigation and wayfinding capabilities in real time.
The object recognition and information retrieval features proved to be very promising, with 85% accuracy in recognising common objects in universities, which is significantly higher than the generally lower accuracy of 70% reported for many existing solutions. This was reflected in a 25% reduction in information search latency, indicating efficient data processing.
The text-to-speech feature proved to be 95% accurate in converting text from given sources (measured on a test corpus), being comparable to or exceeding typical speech synthesizers (around 90% accuracy). The system also demonstrated a 10% improvement in text processing speed and achieved a high objective quality score for synthesized speech, indicating enhanced technical potential for accessing printed and digital learning materials.
The system’s facial recognition accuracy was 70% on a controlled dataset. Although there is not a direct comparison with current solutions in the literature, this technical viability shows that the system can handle a variety of user needs beyond navigation, which lays a solid basis for future advancements in social interaction aid.
In conclusion, this technical evaluation offers concrete evidence of the substantial improvements our smart glasses system delivers. In addition to providing a solid basis for increased independence, the significant gains in navigation performance, object detection accuracy, and information retrieval demonstrate the technical superiority of our integrated, multi-sensor approach over conventional and currently available fragmented solutions. These findings suggest that the system has great potential to address the varied challenges that visually impaired people encounter.
The bar chart below illustrates the comparative technical data from Table 3, highlighting in particular the performance of the proposed smart glasses compared with traditional methods and existing solutions for key technical indicators such as accuracy and reduction percentages.
6.3. Limitations
While the technical findings of this study are promising, some areas require further investigation, particularly in real-world contexts. The current evaluation examined operational performance, providing a detailed account of algorithmic efficiency and technical functioning, and yielded encouraging foundational results. This work represents an important step in laying the groundwork for future studies in diverse real-world settings.
An in-depth and comprehensive user study would be considered as part of future work. The objective is to validate the system’s ease of use in real-life scenarios, its long-term effects on users’ independence and quality of life, and its generalisation to different user populations and environments.
Future research should include extensive field testing in more diverse contexts to assess more broadly the adaptability and generalisability of the overall system. Future work will also focus on physical iterations of the smart glasses prototype to improve battery life and miniaturisation, and to mature functional features such as social interaction assistance, which, while technically feasible (e.g., the accuracy of the facial recognition module), still requires substantial refinement. The design, architecture, and functional components of the system are grounded in the existing literature and our understanding of the needs of visually impaired individuals. Further research is therefore needed to understand how custom adaptations can be implemented and to test the system with broader populations, enabling iterative improvement of the system’s overall functionality and experience for different user groups.
7. Conclusions
The proposed smart glasses system represents an important advancement in accessibility and independence for visually impaired individuals in an educational environment. By combining Internet of Things technology, computer vision, and a user-centred design process, the project tackles complex technical challenges related to navigation, information access, and social interaction. A thorough technical evaluation indicates the strong performance of the system and its ability to foster a more independent and enriched university experience.
The integrated technological system is characterized by a holistic, user-centred design, guiding personalization and accessibility in its operation. In this study, we illustrated how concepts from gaming principles, such as exploration, challenge and technology, can contribute to the design of assistive technologies, improving their intrinsic engagement, environmental awareness, and accessibility features. Our research provides a significant technical advancement that lays the essential foundation for greater independence.
Future work will focus on integrating the smart glasses with learning management systems (LMSs) and on continuing studies with larger user populations to accurately assess the potential broader impact of this system in the real world. The proposed smart glasses illustrate the significant potential of contemporary assistive technologies to facilitate participation and independence for visually impaired individuals. This research will help to advance assistive technologies and build a more accessible and inclusive society.