Article

Towards Spatial Awareness: Real-Time Sensory Augmentation with Smart Glasses for Visually Impaired Individuals

Nadia Aloui
Department of Software Engineering, College of Computer Sciences and Engineering, University of Jeddah, Jeddah 22445, Saudi Arabia
Electronics 2025, 14(17), 3365; https://doi.org/10.3390/electronics14173365
Submission received: 21 June 2025 / Revised: 10 August 2025 / Accepted: 13 August 2025 / Published: 25 August 2025
(This article belongs to the Section Computer Science & Engineering)

Abstract

This research presents an innovative Internet of Things (IoT) and artificial intelligence (AI) platform designed to provide holistic assistance and foster autonomy for visually impaired individuals within the university environment. Its main novelty is real-time sensory augmentation and spatial awareness, integrating ultrasonic, LiDAR, and RFID sensors for robust 360° obstacle detection, environmental perception, and precise indoor localization. A novel, optimized Dijkstra algorithm calculates optimal routes; speech and intent recognition enable intuitive voice control. The wearable smart glasses are complemented by a platform providing essential educational functionalities, including lesson reminders, timetables, and emergency assistance. Based on gamified principles of exploration and challenge, the platform includes immersive technology settings, intelligent image recognition, auditory conversion, haptic feedback, and rapid contextual awareness, delivering a sophisticated, effective navigational experience. A thorough technical evaluation reveals notable improvements in navigation performance, object detection accuracy, and the technical capabilities underpinning social interaction features, enabling a more autonomous and fulfilling university experience.

1. Introduction

Blind students encounter notable challenges in the university environment that impact their academic and social integration. They find it difficult to navigate spatially complex campus buildings and layouts, to access learning materials in accessible formats, and to engage socially with peers and lecturers. Traditional aids such as the white cane and guide dogs, while helpful, often prove inadequate in the dynamic and at times cluttered university environment, limiting obstacle avoidance and orientation.
To address these extrinsic challenges and enhance the blind student’s transition into independence, this research develops a smart glasses system and a complementary mobile app. The smart glasses combine Internet of Things (IoT) technologies, including ultrasonic and RFID sensors, to provide real-time environmental information, obstacle avoidance, and indoor navigation capability. The mobile app further provides support by offering features such as lesson reminders, timetable access, and emergency contact support features.
The main goal of the smart glasses system is to enhance independence, increase access, and enrich the overall university experience for students with visual impairments. By improving navigation and accessibility to information, the smart glasses system aims to promote inclusivity and a supportive learning experience.
This paper describes the design, implementation, and a thorough technical evaluation of the proposed smart glasses system. The primary objective of this evaluation is to rigorously validate the system’s foundational technical capabilities and performance against existing assistive technologies. We will show that our multi-sensor fusion, optimized algorithms, and integrated functionalities provide significant advancements over fragmented or less robust prior approaches. The subsequent sections provide a comprehensive overview of the system’s architecture, core functionalities, and the results obtained from its evaluation.

2. Literature Review

Over the last decade, considerable advances have been achieved in the field of assistive technologies for visually impaired people, particularly in the realm of smart glasses, whose potential for improving independence and accessibility has been recognized. This section critically examines existing smart glasses projects, pinpoints their inherent limitations in responding to the multiple needs of visually impaired university students, and consequently delineates the specific research gaps that our ‘smart glasses for blind students’ project aims to address.

2.1. Existing Assistive Smart Glasses Projects and Their Limitations

Many smart glasses projects have been designed to assist people with visual impairments, leveraging diverse technologies. While these initiatives present exciting functionalities, a critical investigation reveals common limitations in fully meeting the complex needs of users in dynamic environments.

2.1.1. Navigation and Obstacle Detection Systems

Navigating effectively and safely in dynamic and complex environments, such as university campuses, remains a major challenge for visually impaired people. Traditional aids such as white canes or guide dogs provide basic support but do not enable real-time, global spatial awareness. Technological aids have sought to compensate for this deficiency. Early prototypes of smart glasses relied on ultrasonic sensors to detect and avoid obstacles (Meliones et al. [1]).
Although simple and cost-effective, these systems have low accuracy in cluttered environments and struggle to identify individual objects, often acting as little more than proximity thresholds [2]. Their tendency to miss transparent or dynamic obstacles poses a significant safety risk. More advanced systems integrate computer-vision cameras or LiDAR to further improve obstacle detection [3,4]. Nevertheless, camera-based systems can be computationally expensive and prone to latency because of their reliance on cloud computing, and they may not cope well with new environments. Ramani et al. [5] and Baig et al. [6] emphasize the need for robust testing, while Ashiq et al. [7] discuss recognition accuracy.
LiDAR systems, while offering high precision, can be affected by adverse weather conditions and often entail high sensor costs. GPS is ineffective indoors, so a variety of indoor positioning methods are required, the most common being Bluetooth beacons [8] and Wi-Fi triangulation. Yet achieving seamless indoor–outdoor transitions and high-precision, detailed guidance in complicated indoor layouts such as university buildings remains a considerable challenge [9,10], underscoring the limited resilience of current systems.

2.1.2. Object Recognition and Information Access Systems

Visual information, in the form of signs, printed documents, and everyday environmental cues, is critical to autonomy well beyond mobility alone. Advances in AI-based object recognition and scene interpretation have produced early accessible solutions that provide audio feedback from visual input. These methods are relatively new, still in development, and computationally costly; their reliance on cloud-based processing introduces latency, which limits real-time use [5]. Another challenge is exploring new objects and environments without a time-consuming relearning process. Smart glasses coupled with text-to-speech (TTS) and image captioning allow text or image descriptions to be delivered as audible output.
That said, reading, recognizing, and encoding text in numerous fonts and layouts [11], as well as describing more complex images in context [12], remain very real challenges.

2.1.3. Holistic Support and User-Centric Design Considerations

Most existing solutions focus on a limited number of isolated tasks, often ignoring the broad, customized needs of users, particularly in contexts such as higher education. The importance of creating user-friendly, usable, and accessible features in smart glasses for blind people has been noted consistently [13].
Table 1 provides a comparative overview of several representative smart glasses systems, highlighting their key features and inherent limitations that our proposed system aims to overcome.

2.2. Specific Needs of Visually Impaired Students in University Settings

Visually impaired students face unique and pronounced challenges within the university environment that go beyond general navigation. These specific needs demand tailored assistive technologies. One primary area is navigation and wayfinding, which involves not just getting from building to building but precisely finding specific classrooms, lecture halls, labs, or even seats within a large, often changing, building layout. Another critical need is access to information and learning materials, as students require immediate access to traditional learning materials (textbooks, handouts, whiteboard content, and presentation slides) in accessible formats, often in real time. Furthermore, social interaction and inclusion present significant hurdles, as recognizing faces, interpreting social cues, and initiating spontaneous conversations or group activities can be challenging, impacting individuals’ ability to fully participate in university social life and collaborative learning. Finally, time management and organization can be difficult without visual cues, as keeping track of diverse course schedules, deadlines, and extracurricular activities creates administrative and learning challenges.

2.3. Identified Research Gaps and Our Contributions

As the preceding literature review indicates, despite continuing progress in individual areas, there is no comprehensive, integrated assistive technology platform that addresses the unique and multifaceted challenges faced by visually impaired students in the complex environment of a university. The fragmented approaches that do exist do not meet the holistic needs of this population.
This project proposes a novel smart glasses system that directly addresses these critical gaps through the following key contributions.
  • Holistic spatial awareness using multiple sensors: Our platform integrates ultrasonic, LiDAR, and RFID sensors to provide real-time 360° obstacle detection and accurate, continuous indoor localization, overcoming the limited context awareness and inaccurate indoor navigation that undermine safety and confidence in complex university layouts.
  • Optimized real-time dynamic navigation: We present an optimized Dijkstra’s algorithm designed for dynamic indoor environments, enabling real-time navigation with continuous rerouting around moving obstacles or changing conditions. This is a significant upgrade over static navigation solutions, which cannot cope with dynamic movement.
  • AI-powered environmental perception and accessible information synthesis: The system uses image recognition and auditory conversion with optimized processing speed, so that environmental perception delivers contextually relevant information in real time from signs, books, and other objects, addressing the latency and accuracy issues of previous systems.
  • Gamified platform for enhanced social and academic engagement: Going beyond mere physical assistance, we integrate a unique accompanying gamified platform that offers lesson/timetable reminders, emergency support, and features designed to aid social interaction. This approach encourages greater independence, social inclusion and better time management, thereby meeting the wider needs of university life.
By leveraging these technologies and focusing on the specific context of university students, we aim to provide a genuinely comprehensive solution to many of the challenges faced in higher education, improving the university experience and independence of visually impaired people.

3. System Methodology: Hardware Platform and Core Sensing

This section presents a comprehensive description of the proposed smart glasses system, covering the overall architecture, the hardware and software components used, and how they are integrated and function together. It provides the technical information needed to understand the system’s functionality and implementation.

3.1. Smart Glasses Architecture and Communication

The architectural design of the smart glasses is built to enhance access and autonomy for visually impaired students. The system’s hardware and software components work in tandem to provide timely assistance. The overarching system architecture, illustrating the entire platform ecosystem, is presented in Figure 1.
Smart glasses act as the main hub to capture data from various onboard sensors, including the GPS module (for location data), multiple ultrasonic sensors (for object distance), and the camera (for visual input). This raw data is then initially processed and analyzed by the onboard Raspberry Pi.
For advanced processing, decision making, and access to comprehensive mapping information or user profiles, the partially processed data is sent to a remote application server (cloud-based or local). This communication primarily occurs via Wi-Fi [14] (operating on dual-band 2.4 GHz and 5 GHz frequencies), chosen for its high bandwidth, which is necessary for real-time video streaming, and its ubiquitous presence within university campus infrastructure. The server can then send the correct information back to the smart glasses for display to the user or utilize the text-to-speech engine to convert text into speech that will be transmitted through bone conduction headphones. This bidirectional flow of information enables a continuous feedback loop. User commands, captured through the onboard microphone or a mobile app interface, further shape the system’s behaviour, allowing for greater information access and system customization. A Firebase database serves as system memory for the secure storage of user data and mapping information for navigation goals, as illustrated in Figure 2.
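To make this bidirectional data flow concrete, the following minimal sketch shows how the onboard Raspberry Pi might batch partially processed sensor readings, post them to the remote application server over Wi-Fi, and hand any returned text to a text-to-speech step. The endpoint URL, payload fields, and the `speak()` helper are hypothetical placeholders for illustration, not the system’s actual API.

```python
import time
import requests  # assumes the requests library is installed on the Raspberry Pi

SERVER_URL = "https://assist-server.example.edu/api/v1/frames"  # hypothetical endpoint

def speak(text: str) -> None:
    """Placeholder for the TTS pipeline feeding the bone conduction headphones."""
    print(f"[TTS] {text}")

def upload_cycle(fused_reading: dict) -> None:
    """Send one batch of partially processed sensor data and act on the reply."""
    payload = {
        "timestamp": time.time(),
        "gps": fused_reading.get("gps"),              # outdoor fix, if available
        "obstacles": fused_reading.get("obstacles"),  # fused ultrasonic/LiDAR summary
        "rfid_tag": fused_reading.get("rfid_tag"),    # last localization anchor seen
    }
    try:
        resp = requests.post(SERVER_URL, json=payload, timeout=0.5)
        resp.raise_for_status()
        reply = resp.json()
        if "announcement" in reply:   # e.g., navigation cue or contextual reminder
            speak(reply["announcement"])
    except requests.RequestException:
        # Fall back to onboard-only alerts if the Wi-Fi link drops.
        speak("Server unreachable, local guidance only.")

if __name__ == "__main__":
    upload_cycle({"gps": None,
                  "obstacles": [{"bearing_deg": 30, "range_m": 1.2}],
                  "rfid_tag": "ROOM-205-ENTRANCE"})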

3.2. Hardware Components

The core of our assistive technology is a custom-designed smart glasses prototype that integrates multiple sensor modalities and processing capabilities. The selection of each hardware component was carefully guided by considerations of real-time performance, accuracy, power efficiency, compactness, and wearer comfort.
This device incorporates a carefully selected array of integrated hardware modules, whose detailed specifications and primary functions are summarized in Table 2 and whose physical arrangement and integration on the prototype are visually represented in Figure 3.
Table 2 presents a detailed overview of the main hardware components incorporated into the smart glasses, as well as their main specifications and functions.

3.3. Sensor Integration and Fusion

To achieve comprehensive 360° obstacle detection and robust environment perception, our smart glasses system employs a synergistic combination of distinct sensor types, as detailed in Table 2 (Section 3.2). The data streams from these heterogeneous sensors—including ultrasonic for rapid proximity sensing, LiDAR for detailed environmental mapping, and RFID for precise indoor localization—are continuously collected and strategically fused. This multimodal fusion is critical for constructing a highly reliable, real-time spatial model of the user’s immediate surroundings, effectively compensating for the individual limitations and inherent noise of each modality.
Sensor Fusion Methodology: Extended Kalman Filter (EKF)
The raw, often asynchronous, data from the integrated ultrasonic, LiDAR, and RFID sensors are continuously transmitted to the onboard processing unit. An Extended Kalman Filter (EKF) is utilized as the primary sensor fusion technique to merge this multimodal information. The EKF is vital in robustly estimating the user’s dynamic state (e.g., position and orientation) in real time by optimally combining noisy data from multiple sources of information.
This intelligent fusion process leverages the complementary strengths of each sensor to overcome individual weaknesses: RFID data provides highly accurate, absolute positional anchors to correct drift; LiDAR contributes a dense and precise understanding of the environment’s geometry for obstacle mapping; and ultrasonic sensors deliver rapid, close-range proximity alerts essential for immediate collision avoidance, especially with dynamic obstacles. The EKF synthesises various rates and types of input data to produce a highly reliable, real-time obstacle map and accurate user location. Fully fused environmental data is thus the most reliable input for subsequent navigation and perception modules.
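As a concrete illustration of the fusion structure described above, the following minimal sketch implements a planar EKF with a constant-velocity motion model that fuses range-style measurements from known anchors (a UWB receiver and, with a much tighter noise term, an RFID tag read). The state layout, noise covariances, and anchor coordinates are illustrative assumptions, not the deployed filter’s parameters.

```python
import numpy as np

class SimpleEKF:
    """Minimal planar EKF: constant-velocity motion model, range measurements to known anchors."""

    def __init__(self):
        self.x = np.zeros(4)                        # state: [px, py, vx, vy]
        self.P = np.eye(4)                          # state covariance
        self.Q = np.diag([0.01, 0.01, 0.1, 0.1])    # process noise (illustrative values)

    def predict(self, dt: float) -> None:
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q

    def update_range(self, anchor: np.ndarray, measured_range: float, r_var: float) -> None:
        """Fuse one range measurement (e.g., UWB ToF to a ceiling receiver)."""
        dx, dy = self.x[0] - anchor[0], self.x[1] - anchor[1]
        predicted = np.hypot(dx, dy) + 1e-9                         # h(x): nonlinear range model
        H = np.array([[dx / predicted, dy / predicted, 0.0, 0.0]])  # Jacobian of h(x)
        S = H @ self.P @ H.T + r_var                                # innovation covariance (1x1)
        K = self.P @ H.T / S                                        # Kalman gain (4x1)
        self.x = self.x + (K * (measured_range - predicted)).ravel()
        self.P = (np.eye(4) - K @ H) @ self.P

# Example: predict 100 ms ahead, then correct with a UWB range and a near-zero-range RFID read
ekf = SimpleEKF()
ekf.predict(dt=0.1)
ekf.update_range(anchor=np.array([5.0, 2.0]), measured_range=5.3, r_var=0.05)    # UWB receiver
ekf.update_range(anchor=np.array([0.0, 0.0]), measured_range=0.2, r_var=0.0004)  # RFID tag anchor
print(ekf.x[:2])  # fused position estimate
```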

4. Software Components and Algorithmic Design

This section details the intelligence layers of the system, including the navigation algorithms, perception capabilities, user interaction, and remote platform functionalities.

4.1. Navigation and Localization

This module is the core intelligence responsible for accurately determining the user’s position within the university environment and calculating dynamic, safe pathways. It uses the rich environmental data provided by the sensor fusion module (Section 3.3) along with accurate localization techniques.

4.1.1. Precise Indoor Localization

As GPS is ineffective indoors, our system integrates several technologies to achieve high-precision indoor tracking. RFID-based localization uses the built-in RFID reader to detect proximity to, and read data from, passive RFID tags placed at strategic, known locations across the university campus. These tags (e.g., at room entrances, corridor junctions, and key points of interest) serve as highly accurate localization anchors. This is augmented by ultra-wideband (UWB) technology for real-time, continuous positioning between RFID tag points: UWB receivers are deployed across building ceilings and hallways, the smart glasses transmit UWB signals, and the receivers record Time Difference of Arrival (TDOA) or Time of Flight (ToF) data. A fingerprinting approach for RFID/UWB, augmented with IMU data for dead reckoning between beacon points, processes these multimodal signals. Blending the pinpoint accuracy of RFID with continuous UWB tracking and IMU data yields an accurate and robust localization solution that determines the user’s location anywhere on campus at room-level or point-of-interest accuracy.
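The sketch below illustrates one way ToF ranges from several UWB receivers can be converted into a position fix between RFID anchors: the range equations are linearized against a reference receiver and solved by least squares. Receiver coordinates and measured distances are made-up example values.

```python
import numpy as np

def multilaterate(anchors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """
    Estimate a 2D position from Time-of-Flight ranges to known UWB receivers.
    Subtracting the first range equation from the others removes the quadratic
    |p|^2 term, leaving a linear system A p = b.
    """
    a0, d0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2)
         - ranges[1:] ** 2 + d0 ** 2)
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Example: four ceiling-mounted receivers (metres) and noisy ToF-derived ranges
receivers = np.array([[0.0, 0.0], [8.0, 0.0], [8.0, 6.0], [0.0, 6.0]])
true_pos = np.array([3.0, 2.0])
measured = np.linalg.norm(receivers - true_pos, axis=1) + np.random.normal(0, 0.05, 4)
print(multilaterate(receivers, measured))  # approximately [3.0, 2.0]
```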

4.1.2. Dynamic Pathfinding and Guidance

Once the user’s exact location has been determined, the system’s pathfinding features are activated to deliver dynamic, safe, and optimal routes. The university environment, comprising pathways, rooms, and areas of interest, is modelled as a graph with nodes for locations and weighted edges for traversable paths (weighted, for example, by distance and accessibility factors). As detailed in Section 4.1.3, the system applies a uniquely optimized Dijkstra’s algorithm for pathfinding. The algorithm was designed to overcome the limitations of standard static pathfinding: it receives real-time information about the environment through sensor integration and fusion (Section 3.3), allowing the weights of graph edges to change in real time. For example, if an obstacle temporarily blocks a path segment, that segment’s weight is instantly set to a very high value (or infinity) to mark it as impassable. Leveraging optimizations such as a priority queue and graph simplification (explained in Section 4.1.3), the algorithm ensures very low-latency path recalculations. This rapid response is essential for navigating newly appeared obstacles or changing environmental circumstances. By continually integrating live sensor data, the system provides a degree of predictive navigation, adjusting the route before the user physically reaches a newly formed obstacle, which substantially increases safety.

4.1.3. Optimized Dijkstra’s Algorithm (Detailed Implementation)

To achieve the necessary real-time performance, the system employs an enhanced version of Dijkstra’s algorithm with the following key optimizations. The pseudocode for this optimized algorithm is presented in Figure 4.
First, the system uses a priority queue (more precisely, a min-heap) to manage the unvisited nodes rather than an ordinary list or set as in the standard Dijkstra’s algorithm (Optimisation 1). By allowing the algorithm to retrieve the node with the smallest distance in O(log n) time—where ‘n’ is the number of nodes in the graph—this improvement offers a notable performance advantage. Depending on the size and architectural complexity of the building, ‘n’ for a typical university building floor can vary from 100 to 500 nodes. To guarantee instant obstacle avoidance and fluid movement, navigation instructions in real-time assistive technology must be delivered with a very low latency, ideally within tens of milliseconds. Thus, in dynamic, safety-critical scenarios, this improvement in time complexity is essential for ensuring safety and preserving responsiveness.
Second, the system reduces the complexity of the environment representation by using a graph simplification technique (Optimisation 2). In contrast to the standard Dijkstra algorithm, which works on the full, detailed graph, this system pre-processes the graph to remove unnecessary or irrelevant information. For instance, edges that represent non-navigable areas (like solid walls) may be eliminated, or sequences of nodes that represent a long, straight hallway may be compressed into a single edge. The IsEssential(edge) function identifies whether an edge represents a crucial path or connection point that must be conserved for accurate and safe navigation. This function significantly reduces the number of nodes and edges that the algorithm has to process, speeding up route calculation. For very fine-grained maps, this simplification may result in a very small distance approximation, but the computational efficiency gain is crucial for real-time navigation, and the approximation is insignificant when weighed against the advantages for user responsiveness.
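To make these two optimizations concrete, here is a minimal Python sketch (the system’s actual pseudocode is given in Figure 4): a min-heap-based Dijkstra, a corridor-compression pass standing in for the IsEssential-based graph simplification (applied offline as a pre-processing step), and a dynamic edge-blocking helper. Node names and weights are illustrative and mirror the Figure 5 scenario discussed below.

```python
import heapq
import math

def dijkstra(graph: dict, start: str, goal: str):
    """Optimisation 1: a min-heap retrieves the closest unvisited node in O(log n)."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, math.inf):
            continue                                  # stale heap entry
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, math.inf):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [], goal                             # reconstruct the route
    while node in prev or node == start:
        path.append(node)
        if node == start:
            return list(reversed(path)), dist.get(goal, math.inf)
        node = prev[node]
    return None, math.inf                             # goal unreachable

def simplify(graph: dict, is_essential) -> dict:
    """Optimisation 2 (sketch): merge non-essential degree-2 corridor nodes so that
    long straight hallways collapse into a single weighted edge."""
    g = {n: dict(e) for n, e in graph.items()}
    for node in list(g):
        if node not in g:
            continue
        nbrs = list(g[node].items())
        if len(nbrs) == 2 and not is_essential(node):
            (a, wa), (b, wb) = nbrs
            g[a].pop(node, None); g[b].pop(node, None)
            g[a][b] = min(g[a].get(b, math.inf), wa + wb)
            g[b][a] = g[a][b]
            del g[node]
    return g

def block_edge(graph: dict, u: str, v: str) -> None:
    """Dynamic update: a detected obstacle makes a segment impassable."""
    graph[u][v] = graph[v][u] = math.inf

# Illustrative floor graph based on the Figure 5 scenario
floor = {
    "Start": {"O1": 2}, "O1": {"Start": 2, "O3": 3},
    "O3": {"O1": 3, "O4": 2, "O6": 4}, "O4": {"O3": 2, "O5": 2},
    "O5": {"O4": 2, "End": 1}, "O6": {"O3": 4, "O7": 2},
    "O7": {"O6": 2, "End": 3}, "End": {"O5": 1, "O7": 3},
}
print(dijkstra(floor, "Start", "End"))   # original path via O4-O5
block_edge(floor, "O4", "O5")            # cleaning trolley detected on that segment
print(dijkstra(floor, "Start", "End"))   # re-routed path via O6-O7
```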
Figure 5 further illustrates how the system recalculates and provides an alternate path in response to an unexpected obstacle, highlighting the efficacy of this dynamic rerouting capability in response to real-time environmental changes.
This diagram illustrates the practical advantage of our modified Dijkstra’s algorithm in real-time navigation. The system initially calculates an optimal path (shown as a solid blue line in the figure). Our algorithm quickly calculates a new, safe and optimal path (represented by a dotted red line) when a dynamic obstacle X (such as a cleaning trolley or a group of students) is encountered on that path, providing continuous, safe guidance for the visually impaired user.
Explanation of the Scenario this Diagram Illustrates
Initial Optimal Path (Before Obstacle): To start, the system finds the shortest path from (Start) to (End), to be (Start) -> (O1) -> (O3) -> (O4) -> (O5) -> (End) (this will be shown with arrows along those segments, as shown in blue in the above example (Figure 5)).
Dynamic Obstacle Appears: Suddenly, a dynamic obstacle X appears on the path segment between (O4) and (O5), making it temporarily impassable.
Re-routing by Modified Dijkstra’s Algorithm: The smart glasses system, through its real-time sensor fusion, immediately detects this new obstacle. The modified Dijkstra’s algorithm quickly recalculates a new optimal and safe path to the destination. The new path might be, for example (Start) -> (O1) -> (O3) -> (O6) -> (O7) -> (End) (in the above actual diagram, the new path is drawn with different arrows, in red and dashed lines, showing that it avoids the blocked segment).
The dynamic rerouting process described (Figure 6) shows how smart glasses can be used to provide adaptive and intelligent navigation assistance. The system can safely and effectively guide users around dynamic obstacles by combining real-time sensor fusion with a modified version of the Dijkstra algorithm. The navigation experience in various settings, such as shopping centres, hospitals and university campuses, could be greatly enhanced by this technology.

4.2. Environment Perception

This module extends the system’s awareness by enabling the smart glasses to understand and interpret visual information from the surroundings, converting it into accessible formats for the user and augmenting the data from the direct spatial awareness sensors (Figure 7). A high-resolution miniature camera (as specified in Section 3.2) captures real-time video streams, which are processed by a convolutional neural network (CNN) based on the YOLOv5 architecture for efficient inference. The model recognises a broad range of objects, signs, and environmental features typical of a university, having been trained on a variety of datasets, including the COCO dataset supplemented with university-specific objects such as lecture hall signs, cafeteria items, and campus landmarks. While initial object detection for low-latency alerts occurs on the onboard Raspberry Pi, more complex scene understanding and detailed image captioning are offloaded to the remote platform server via Wi-Fi, balancing real-time needs with computational resources. Recognized objects, text deciphered from signs, and generated image captions are then converted into natural-sounding audio feedback using the Google Cloud text-to-speech API and delivered via bone conduction headphones for non-obtrusive information access.
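A hedged sketch of the onboard detection loop is given below: it loads a stock YOLOv5 model via torch.hub and turns nearby detections into short phrases for the audio pipeline. The confidence threshold, camera index, and use of the generic pretrained weights are illustrative assumptions; the deployed model is fine-tuned as described above.

```python
import cv2
import torch

# Stock pretrained YOLOv5s from the Ultralytics hub; the deployed system would load
# weights fine-tuned on COCO plus university-specific classes as described above.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.5  # confidence threshold (illustrative)

def describe_frame(frame) -> list[str]:
    """Run one inference pass and return human-readable labels for detected objects."""
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # OpenCV delivers BGR frames
    results = model(rgb)
    detections = results.pandas().xyxy[0]          # DataFrame: xmin..ymax, confidence, name
    return [f"{row['name']} ahead" for _, row in detections.iterrows()]

cap = cv2.VideoCapture(0)                          # camera index is an assumption
ok, frame = cap.read()
if ok:
    for phrase in describe_frame(frame):
        print(phrase)                              # in the glasses, this feeds the TTS engine
cap.release()
```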

4.3. User Interaction and Haptic Feedback

The system provides users with intuitive control that goes beyond audio and uses multiple feedback modalities, allowing for an accessible and responsive interaction experience. Voice commands, captured through microphones embedded in the glasses frame, are the primary method of interaction.
The user’s voice commands are transcribed into text by a speech-to-text (STT) engine, and an intent recognition module interprets the request using natural language processing (NLP) to trigger the appropriate system functionality (for example, “Navigate to library”, “What is this object?”, and “Call emergency contact”). To optimize information delivery, the system supports audio personalization, allowing speech rate, volume, pitch, and sound effects to be configured per user. In addition to audio cues, two vibration motors embedded in the temples of the glasses frame provide essential haptic feedback. Each motor uses distinct vibration patterns and intensities to convey specific non-auditory information, such as directional cues (a prolonged vibration signals a turn), proximity warnings (vibration frequency increases as an obstacle gets closer), or confirmation of a voice command, complementing the audio channel and enhancing safety.
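The sketch below illustrates this interaction pipeline at its simplest: a keyword-based matcher standing in for the NLP intent module, and two GPIO-driven vibration motors producing the directional and proximity patterns described above. The GPIO pin numbers and the keyword table are assumptions made for illustration; a full intent recognizer would replace the keyword match.

```python
import time
import RPi.GPIO as GPIO  # available on the Raspberry Pi

LEFT_MOTOR, RIGHT_MOTOR = 17, 27   # GPIO pins are illustrative assumptions
GPIO.setmode(GPIO.BCM)
GPIO.setup([LEFT_MOTOR, RIGHT_MOTOR], GPIO.OUT, initial=GPIO.LOW)

INTENTS = {                        # toy keyword table standing in for the NLP module
    "navigate": "NAVIGATE",
    "what is": "DESCRIBE_OBJECT",
    "emergency": "CALL_EMERGENCY",
}

def recognize_intent(transcript: str) -> str:
    """Map an STT transcript to an intent label (keyword matching as a stand-in)."""
    text = transcript.lower()
    for keyword, intent in INTENTS.items():
        if keyword in text:
            return intent
    return "UNKNOWN"

def vibrate(pin: int, pulses: int, on_s: float, off_s: float) -> None:
    """Emit a haptic pattern: e.g., one long pulse = turn, fast pulses = obstacle close."""
    for _ in range(pulses):
        GPIO.output(pin, GPIO.HIGH); time.sleep(on_s)
        GPIO.output(pin, GPIO.LOW); time.sleep(off_s)

# Directional cue: prolonged vibration on the right temple signals a right turn
vibrate(RIGHT_MOTOR, pulses=1, on_s=0.8, off_s=0.1)
# Proximity warning: pulse rate increases as the obstacle gets closer
vibrate(LEFT_MOTOR, pulses=5, on_s=0.1, off_s=0.1)
print(recognize_intent("Navigate to library"))  # -> NAVIGATE
GPIO.cleanup()
```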

4.4. Holistic Assistance Platform

Beyond the glasses’ immediate onboard capabilities, the companion platform offers expanded functionalities by utilising remote computing power to improve support and handle data in a comprehensive manner. This platform, which is an essential part of the “IoT platform” shown in Figure 1, runs on a Google Cloud-based server architecture and securely connects to the smart glasses via Wi-Fi.
The platform incorporates a Gamification Module (explained in more detail in Section 5) that applies the concepts of challenge and exploration. Its goal is to encourage people to explore unfamiliar areas or identify objects by rewarding them with points or achievements. Timetables, lesson reminders, and assignment due dates are all managed by its educational functionality, which also has the ability to integrate with the university’s Learning Management System (LMS) to automatically retrieve schedules. It pushes contextual, location-aware information to the glasses based on the user’s schedule and proximity (e.g., “You are near the physics lab, your next class is in room 205 in 10 min”). A critical emergency support feature enables individuals to quickly trigger emergency calls or send their precise location to predefined contacts via voice command or a dedicated physical button on the glasses. Furthermore, the platform securely manages user data (anonymized for privacy), including patterns of navigation, which are utilized to continuously refine and adapt navigation routes and educational content, enabling the system’s evolution based on individual usage patterns.
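To illustrate how the platform might combine the user’s current RFID/UWB location with the LMS timetable to push a contextual prompt (such as the physics-lab example above), here is a small sketch. The timetable structure, anchor-to-landmark map, and message wording are assumptions for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical timetable pulled from the university LMS
timetable = [
    {"course": "Physics", "room": "205", "start": datetime.now() + timedelta(minutes=10)},
    {"course": "Algorithms", "room": "310", "start": datetime.now() + timedelta(hours=2)},
]

# Hypothetical mapping from localization anchors to nearby landmarks
NEARBY = {"ROOM-201-ENTRANCE": "the physics lab"}

def contextual_reminder(current_anchor: str, now: datetime) -> str | None:
    """Build a location-aware reminder for the next class starting within 15 minutes."""
    upcoming = [c for c in timetable
                if timedelta(0) <= c["start"] - now <= timedelta(minutes=15)]
    if not upcoming:
        return None
    nxt = min(upcoming, key=lambda c: c["start"])
    minutes = int((nxt["start"] - now).total_seconds() // 60)
    place = NEARBY.get(current_anchor, "your current location")
    return f"You are near {place}, your next class is in room {nxt['room']} in {minutes} min."

print(contextual_reminder("ROOM-201-ENTRANCE", datetime.now()))
```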

5. Gamification Principles and User-Centric Design Philosophy

The project is inspired by the interactive features of contemporary video games. Just like gamification, which encourages players to discover uncharted territory and overcome adversity in the environment using technology, smart glasses are designed to enable visually impaired students to explore university sites with more confidence and independence, as shown in Figure 8.

5.1. Exploration

Many video games allow players to discover expansive territories and uncover hidden pathways and possibilities. Similarly, visually impaired individuals often face challenges in discovering their immediate surroundings, such as a house or university campus. Our smart glasses system aids in wayfinding by providing real-time information and directional assistance, allowing for exploration of environments with greater technical support for confidence and freedom.

5.2. Problem Solving

Just as video games challenge players to solve problems, overcome obstacles, and develop strategic thinking through various tools, our smart glasses system is designed to enable visually impaired individuals to solve problems in a university setting. The system promotes autonomous decision making by providing instant environmental information, obstacle recognition, and orientation assistance. This feature leverages artificial intelligence (AI) algorithms for object recognition, scene analysis, and informed decision making, thereby technically enhancing orientation and interaction capabilities.

5.3. Technology Empowerment

Video games demonstrate the power of technology to transform and enhance players’ skills. Gamers commonly use a variety of tools, gadgets, and devices to accomplish objectives, solve puzzles, and enable engaging interactivity. Similarly, our smart glasses system integrates comparable technologies to technically augment the abilities of visually impaired individuals. These smart glasses are designed to improve interaction with their surrounding environment through sensors and AI-based algorithms. The concepts of assistive technology and accessibility are in line with this use of technology to increase user capabilities.

6. System Evaluation and Findings

This section presents the technical performance evaluation of the proposed smart glasses system, outlining all its features that are essential for helping people with visual impairments. The methodology for this evaluation is comprehensively described in Section 6.1. The results demonstrate early promising gains in some key technical attributes against traditional methods and existing technology solutions.

6.1. Evaluation Methodology

The evaluation of the system was performed through a series of rigorous technical tests within a controlled testbed environment. The testbed allowed us to evaluate the system in a safe, repeatable, and controlled setting, to exercise all core functionalities systematically, and to collect precise performance metrics. The evaluation plan comprised the following specific steps.
Defined Test Scenarios: In the controlled testbed, we carried out predetermined obstacle detection tasks, information retrieval tests, and navigation activities. To mimic actual university settings, these scenarios were created to methodically change specific features. We changed both the speed and trajectory of dynamic obstacles and the density of static barriers (such as chairs and tables) for navigation. We introduced items with varying textures and reflectivity, as well as changed the lighting conditions for the vision testing. To evaluate the accuracy of our RFID/UWB system, we assessed orientation and distance for localisation.
Data Acquisition: We logged all sensor readings, algorithm outputs, processing times, and system responses automatically. Specifically, for navigation, the system logged the calculated path, real-time rerouting decisions, and the time taken to reach a destination. For obstacle detection, it logged the sensor data and the system’s response (e.g., a haptic or audio alert). All these data points were time-stamped to allow for a precise analysis of latency.
Performance Metrics: We collected quantitative metrics such as accuracy rates for detection, recognition, and localization, as well as latency for rerouting and information retrieval, and error rates (e.g., false positives/negatives). To ensure a direct connection between our methods and results, the metrics in Table 3 directly correspond to these measurements. For instance, the “Obstacle detection accuracy” was measured by comparing the system’s reported obstacles to the known obstacle locations in the testbed. “Navigation time” was measured algorithmically, logging the duration from a starting node to a destination node, allowing for a direct comparison of our optimized Dijkstra algorithm’s efficiency against standard pathfinding.
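For transparency about how such headline metrics can be derived from the time-stamped logs, the following sketch computes detection accuracy against known testbed obstacle positions and summarizes alert latency. The log field names, matching tolerance, and accuracy formula are illustrative assumptions rather than the exact analysis scripts used.

```python
import statistics

# Illustrative log records: each detection event carries timestamps and an estimated position
known_obstacles = [(2.0, 1.0), (4.5, 3.0), (7.0, 0.5)]          # ground-truth testbed layout (m)
detections = [
    {"t_sensor": 10.00, "t_alert": 10.08, "pos": (2.1, 1.0)},
    {"t_sensor": 12.40, "t_alert": 12.47, "pos": (4.4, 3.1)},
    {"t_sensor": 15.10, "t_alert": 15.21, "pos": (9.9, 9.9)},   # false positive
]
TOLERANCE_M = 0.3                                               # matching radius (assumption)

def is_true_positive(pos) -> bool:
    """A detection counts as correct if it falls within the tolerance of a known obstacle."""
    return any((pos[0] - ox) ** 2 + (pos[1] - oy) ** 2 <= TOLERANCE_M ** 2
               for ox, oy in known_obstacles)

tp = sum(is_true_positive(d["pos"]) for d in detections)
fp = len(detections) - tp
fn = len(known_obstacles) - tp                                   # obstacles never reported
accuracy = tp / (tp + fp + fn)                                   # simple detection accuracy
latencies_ms = [(d["t_alert"] - d["t_sensor"]) * 1000 for d in detections]

print(f"detection accuracy: {accuracy:.0%}")
print(f"mean alert latency: {statistics.mean(latencies_ms):.0f} ms")
```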
To provide context for the system’s performance, the results were compared against published benchmarks for traditional assistive methods and existing technological solutions from the literature. This comparison highlights the advancements provided by our multi-sensor fusion, optimized algorithms, and integrated functionalities over prior approaches.
While we acknowledge that a comprehensive user study is a crucial next step to validate the system’s ease of use and long-term impact, this technical evaluation provides the foundational evidence needed to prove its feasibility and superiority. The logistical and ethical challenges of conducting a user study prevent it from being included in this paper, but it remains a primary goal for future work.

6.2. System Performance and Comparative Analysis

The smart glasses system’s technical performance was rigorously analyzed, yielding quantitative data across key functionalities. Table 3 summarizes these evaluation results, providing a direct comparison with the objective performance metrics of traditional methods and existing technological solutions discussed in the literature. Figure 9 visually depicts the comparative technical performance of our smart glasses against these benchmarks across selected features.
Through a systematic technical evaluation with the smart glasses system, we were able to collect a wealth of quantitative information across all key functions, demonstrating a clear technical advantage against traditional methods and existing technological solutions. These technical and performance advantages are substantiated by a multi-tiered testing regimen, as the results of our unit, component, and system testing are comprehensively presented in Table 3.
In obstacle detection and avoidance, our system achieved 90% accuracy, significantly higher than the estimated below 50% for traditional methods and 60–80% for existing camera/ultrasonic smart glasses. This improved detection yielded a 20% decrease in testbed collisions and a 15% decrease in algorithm-measured navigation time in simulated crowded and cluttered environments, showing a direct avenue toward improved safety on busy campuses.
For navigation and wayfinding, the system’s optimized pathfinding capabilities resulted in a 40% reduction in algorithm-calculated navigation time compared to traditional methods using pre-recorded routes. Having faster and reliable travel means more independence and a rewarding university experience. More accurate location data also reduced re-routing errors by 60%, proving the effectiveness of our dynamic navigation and wayfinding capabilities in real time.
The object recognition and information retrieval features proved to be very promising, with 85% accuracy in recognising common objects in universities, which is significantly higher than the generally lower accuracy of 70% reported for many existing solutions. This was reflected in a 25% reduction in information search latency, indicating efficient data processing.
The text-to-speech feature proved to be 95% accurate in converting text from given sources (measured on a test corpus), being comparable to or exceeding typical speech synthesizers (around 90% accuracy). The system also demonstrated a 10% improvement in text processing speed and achieved a high objective quality score for synthesized speech, indicating enhanced technical potential for accessing printed and digital learning materials.
The system’s facial recognition accuracy was 70% on a controlled dataset. Although there is not a direct comparison with current solutions in the literature, this technical viability shows that the system can handle a variety of user needs beyond navigation, which lays a solid basis for future advancements in social interaction aid.
In conclusion, this technical evaluation offers concrete evidence of the improvements our smart glasses system provides. In addition to providing a solid basis for increased independence, the significant gains in navigation performance, object detection accuracy, and information retrieval demonstrate the technical advantages of our integrated, multi-sensor approach over conventional and currently available fragmented solutions. These findings suggest that the system has considerable potential to address the various issues that visually impaired people encounter.
The bar chart below illustrates the comparative technical data from Table 3, highlighting in particular the performance of the proposed smart glasses compared to traditional methods and existing solutions for key technical indicators such as accuracy and reduction percentages.

6.3. Limitations

While the technical findings of this study are promising, several areas require further investigation, particularly in real-world contexts. The current evaluation examines operational performance in a controlled testbed, providing a detailed account of algorithmic efficiency and technical functioning; these encouraging foundational results lay the groundwork for future studies in diverse real-world settings.
An in-depth and comprehensive user study is planned as future work. The objective is to validate the system’s ease of use in real-life scenarios, its long-term effects on users’ independence and quality of life, and its generalisability to different user populations and environments.
Future research should also include extensive field testing in more diverse contexts to assess the adaptability and generalisability of the overall system. It will further address physical iterations of the smart glasses prototype to improve battery life and miniaturisation, and to mature features such as social interaction assistance, which, while technically feasible (e.g., in terms of facial recognition accuracy), still require refinement. The design, architecture, and functional components of the system are grounded in the existing literature and our understanding of the needs of visually impaired individuals; further research is needed to determine how custom adaptations can be implemented and how the system can be tested with broader populations to iteratively improve its overall functionality and experience for different user groups.

7. Conclusions

The proposed smart glasses system represents an important advancement in accessibility and independence for visually impaired individuals in an educational environment. By combining Internet of Things technology, computer vision, and a user-centred design process, the project tackles complex technical challenges related to navigation, information access, and social interaction. A thorough technical evaluation indicates the strong performance of the system and its ability to foster a more independent and enriched university experience.
The integrated technological system is characterized by a holistic, user-centred design, guiding personalization and accessibility in its operation. In this study, we illustrated how concepts from gaming, such as exploration, challenge, and technology empowerment, can contribute to the design of assistive technologies, improving their intrinsic engagement, environmental awareness, and accessibility features. Our research provides a significant technical advancement that lays the essential foundation for greater independence.
Future work will focus on integrating the smart glasses with learning management systems (LMSs) and on continuing studies with larger user populations to assess the broader real-world impact of this system. The proposed smart glasses illustrate the significant potential of contemporary assistive technologies to facilitate participation and independence for visually impaired individuals. This research will help to advance assistive technologies and build a more accessible and inclusive society.

Funding

The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project number MoE-IF-UJ-R2-22-04220794-1.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Meliones, A.; Filios, C.; Llorente, J. Reliable Ultrasonic Obstacle Recognition for Outdoor Blind Navigation. Technologies 2022, 10, 54. [Google Scholar] [CrossRef]
  2. Mueen, A.; Awedh, M.; Zafar, B. Multi-obstacle aware smart navigation system for visually impaired people in fog connected IoT-cloud environment. Meas. Control 2022, 55, 1014–1026. [Google Scholar] [CrossRef] [PubMed]
  3. Bhabad, D.; Kadam, S.; Malode, T.; Shinde, G.; Bage, D. Object Detection for Night Vision using Deep Learning Algorithms. Int. J. Comput. Trends Technol. 2023, 71, 87–92. [Google Scholar] [CrossRef]
  4. Gudavalli, S.; Balamurugan, R. LiDAR-Based Obstacle Detection and Distance Estimation in Navigation Assistance for Visually Impaired. In Proceedings of the 2022 6th International Conference on Devices, Circuits and Systems (ICDCS), Tamilnadu, India, 21–22 April 2022; pp. 126–130. [Google Scholar]
  5. Ramani, J.G.; Yamini, P.; Sajeni, G.; Vijesh, G.; Tilak, M. Gen-AI Powered Glasses for Visually Impaired. Int. J. Innov. Res. Technol. 2025, 11, 1–9. [Google Scholar]
  6. Baig, M.S.A.; Gillani, S.A.; Shah, S.M.; Aljawarneh, M.; Khan, A.A.; Siddiqui, M.H. AI-based Wearable Vision Assistance System for the Visually Impaired: Integrating Real-Time Object Recognition and Contextual Understanding Using Large Vision-Language Models. arXiv 2024, arXiv:2412.20059. [Google Scholar]
  7. Ashiq, F.; Asif, M.; Ahmad, M.B.; Zafar, S.; Masood, K.; Mahmood, T. CNN-Based Object Recognition and Tracking System to Assist Visually Impaired People. IEEE Access 2022, 10, 14819–14834. [Google Scholar] [CrossRef]
  8. Malatesh, S.H.; Seetaram Naik, P.; Kumar Singh, A.; Ms, H.; Yaadav, H.; Singh, S. Virtual Vision: Autonomous Indoor Navigation System for Visually Impaired. Int. J. Innov. Res. Technol. 2024, 6, 3022–3028. [Google Scholar]
  9. Croce, D.; Giarré, L.; Pascucci, F.; Tinnirello, I.; Galioto, G.E.; Garlisi, D. An Indoor and Outdoor Navigation System for Visually Impaired People. IEEE Access 2019, 7, 170406–170418. [Google Scholar] [CrossRef]
  10. Feghali, J.M.; Feng, C.; Majumdar, A.; Ochieng, W.Y. Comprehensive Review: High-Performance Positioning Systems for Navigation and Wayfinding for Visually Impaired People. Sensors 2024, 24, 7020. [Google Scholar] [CrossRef] [PubMed]
  11. Dhaliwal, M.K.; Sharma, R. Improving Accessibility and Independence for Blind/Visually Impaired Persons based on Speech Synthesis Technology. Int. J. Comput. Appl. 2024, 186, 12–17. [Google Scholar] [CrossRef]
  12. Wang, L.; Zhang, L.; Chen, X. Context-Aware Image Captioning for Visually Impaired People Using Smart Glasses. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 18–22 June 2023; pp. 17551–17560. [Google Scholar]
  13. Orynbay, L.; Razakhova, B.; Peer, P.; Meden, B.; Emeršič, Ž. Recent Advances in Synthesis and Interaction of Speech, Text, and Vision. Electronics 2024, 13, 1726. [Google Scholar] [CrossRef]
  14. IEEE 802.11ac-2013; IEEE Standard for Information technology—Local and Metropolitan Area Networks—Specific Requirements—Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications Amendment 4: Enhancements for Very High Throughput for Operation in Bands Below 6 GHz. IEEE: Piscataway, NJ, USA, 2013. Available online: https://standards.ieee.org/ieee/802.11ac/4473/ (accessed on 20 June 2025).
  15. La Grow, S.J.; Prosek, R.A. The long cane: A tool for independence. In Foundations of Orientation and Mobility, 3rd ed.; AFB Press: New York, NY, USA, 2008; pp. 445–485. [Google Scholar]
  16. Willis, S.L.; Ogden, V.E. Guide dog mobility: An overview. In Foundations of Orientation and Mobility, 2nd ed.; AFB Press: New York, NY, USA, 2002; pp. 547–584. [Google Scholar]
  17. Ulrich, I.; Borenstein, J. The GuideCane—Applying mobile robot technologies to assist the visually impaired. IEEE Trans. Syst. Man Cybern. Part A: Syst. Hum. 2001, 31, 131–136. [Google Scholar] [CrossRef]
  18. Leporini, B.; Rosellini, M.; Forgione, N. Haptic Wearable System to Assist Visually Impaired People in Obstacle Detection. In PETRA ’22: Proceedings of the 15th International Conference on Pervasive Technologies Related to Assistive Environments, Corfu, Greece, 29 June–1 July 2022; ACM: New York, NY, USA, 2022; pp. 269–272. [Google Scholar]
  19. Helal, A.S.; Moore, S.E.; Ramachandran, B. Drishti: An integrated navigation system for visually impaired and disabled. In Proceedings of the 5th International Symposium on Wearable Computers (ISWC ’01), Zurich, Switzerland, 8–9 October 2001; IEEE: Piscataway, NJ, USA, 2001; pp. 149–156. [Google Scholar] [CrossRef]
  20. American Printing House. Foundations for Tactile Literacy: A Reference Tool for the Tactile Journey, Early to Advanced Skills; APH Press: Louisville, KY, USA, 2023. [Google Scholar]
  21. Hong, J.; Kacorri, H. Understanding How Blind Users Handle Object Recognition Errors: Strategies and Challenges. In ASSETS ’24: Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility, St. John’s, NL, Canada, 27–30 October 2024; ACM: New York, NY, USA, 2024; pp. 1–13. [Google Scholar] [CrossRef]
  22. Ntoa, S.; Margetis, G.; Adami, I.; Balafa, K.; Antona, M.; Stephanidis, C. Digital Accessibility for Users with Disabilities. In Designing for Usability, Inclusion and Sustainability in Human-Computer Interaction, 1st ed.; Salvendy, G., et al., Eds.; CRC Press: Boca Raton, FL, USA, 2024; pp. 1–55. [Google Scholar]
  23. Kirboyun, S. How Screen Readers Impact the Academic Works of College and Graduate Students with Visual Impairments. Sak. Univ. J. Educ. 2023, 13, 416–434. [Google Scholar] [CrossRef]
  24. Gallace, A.; Spence, C. In Touch with the Future: The Sense of Touch from Cognitive Neuroscience to Virtual Reality; Oxford University Press: New York, NY, USA, 2014. [Google Scholar] [CrossRef]
  25. Zuidhof, N.; Smetsers, S.; Brinkman, W.-P. Defining Smart Glasses: A Rapid Review of State-of-the-Art Perspectives and Future Challenges from a Social Sciences Perspective. Augment. Hum. Res. 2021, 6, 15. [Google Scholar] [CrossRef]
Figure 1. Overall system architecture diagram.
Figure 2. Smart glasses’ working components and interactions.
Figure 3. Smart glasses’ hardware components.
Figure 4. Pseudocode of the optimized Dijkstra’s algorithm for smart glasses navigation.
Figure 5. Dynamic re-routing example with modified Dijkstra’s algorithm. Legend: O: node (intersection/room entrance). ---: navigable path segment. X: dynamic obstacle (e.g., person, cart). Solid blue line: original path (calculated before obstacle appeared). Dashed red line: re-routed path (calculated after obstacle appeared). S: Start Point. E: End Point.
Figure 6. Dynamic path re-routing process.
Figure 7. Information flow in the voice-activated system.
Figure 8. Smart glasses with user-centric design philosophy.
Figure 9. Performance comparison of smart glasses with traditional methods and existing solutions.
Table 1. Comparative table of similar smart glasses systems.

| Project | Key Features | Limitations | Reference |
|---|---|---|---|
| Ultrasonic-based Obstacle Detection and Avoidance | Uses ultrasonic sensors to detect obstacles and provide alerts | Limited accuracy in cluttered environments; may not detect transparent or dynamic obstacles | [1] |
| Camera Vision and Deep Learning for Visually Impaired | Employs camera vision and deep learning for object recognition and scene understanding | Computationally expensive, potential latency issues, and may not generalize well to new environments | [3] |
| LiDAR–Camera Fusion for Obstacle Detection | Combines LiDAR and camera data for improved obstacle detection and avoidance | May be affected by adverse weather conditions and the high cost of LiDAR sensors | [4] |
| Hybrid Indoor–Outdoor Navigation System | Integrates GPS and indoor positioning technologies for seamless navigation | Challenges in achieving smooth indoor–outdoor transitions and accuracy limitations in complex indoor environments | [9] |
| Indoor Localization with Bluetooth Beacons | Uses Bluetooth beacons and smart glasses for accurate indoor localization | Requires deployment of Bluetooth beacon infrastructure; Bluetooth signals have a limited range | [8] |
| Text-to-Speech with Enhanced Naturalness | Focuses on improving the naturalness and prosody of text-to-speech synthesis for smart glasses | Challenges in accurately recognizing and processing text in different fonts and layouts | [11] |
| Context-Aware Image Captioning | Provides contextually relevant image descriptions for visually impaired users | Difficulties in accurately capturing and describing complex scenes; potential for misinterpretation | [12] |
Table 2. Key hardware components of the smart glasses device.

| Component | Model/Type | Key Specifications | Primary Function/Purpose |
|---|---|---|---|
| Onboard Processing Unit | Raspberry Pi 4 Model B | 8 GB RAM, Quad-core ARM Cortex-A72 CPU, Operating Freq: 1.5 GHz | Central processing for sensor data acquisition, algorithm execution, and communication management |
| High-Resolution Camera | Arducam 16 MP Autofocus Camera Module | 16 Megapixel, Autofocus, 1080p@60fps video, FoV: 60° | Captures visual input for object recognition, text detection, and scene analysis |
| Ultrasonic Sensors | HC-SR04 modules (×3) | Range: 2–400 cm, Accuracy: ±3 mm, Beam Angle: 15° | Real-time, short-range obstacle detection and immediate proximity alerts |
| LiDAR Sensor | Slamtec RPLIDAR A1M8 | 360° scan, Range: up to 12 m, Angular Resolution: 0.5°, Scan Freq: 10 Hz | Generates detailed point cloud data for comprehensive 360° obstacle mapping and environmental understanding |
| RFID Reader | MFRC522 RFID Module | Operating Freq: 13.56 MHz, Read Range: 5–10 cm, ISO/IEC 14443 Type A compliant | Enables pinpoint indoor localization by interacting with strategically placed passive RFID tags |
| GPS Module | NEO-6M GPS module | Accuracy: ±2.5 m CEP, 50-channel GPS receiver | Provides outdoor localization and navigation data for campus wayfinding |
| IMU | MPU-6050 | 3-axis accelerometer, 3-axis gyroscope, Digital Motion Processor | Provides data on the user’s orientation and head movements for navigation refinement and image stabilization |
| Audio Output | Bone conduction headphones | Frequency Response: 100 Hz–18,000 Hz, Output: 88 dB | Delivers audio feedback without blocking the ear canals, maintaining situational awareness |
| Microphones | MEMS microphones (×2) | Omnidirectional, Sensitivity: −42 dBFS, Frequency Response: 100–10,000 Hz | Captures user voice commands for system interaction |
| Rechargeable Battery | LiPo battery pack (5000 mAh) | Capacity: 5000 mAh, Voltage: 3.7 V, Operating time: 6–8 h, Weight: 50 g | Powers the complete system; optimized for battery life and minimal weight |
Table 3. Technical performance evaluation results of the smart glasses system.

| Feature | Proposed Smart Glasses (Average) | Traditional Methods/Existing Solutions (Average/Accuracy) | Reference for Traditional/Existing Solutions | Scenarios Tested (Technical) | Measures (Objective Technical) |
|---|---|---|---|---|---|
| Obstacle Detection and Avoidance | 90% accuracy, 20% reduction in collisions, 15% decrease in navigation time | Accuracy for ultrasonic-based systems: <50% [1]; camera-based systems: 60–80% [3,5]; limited range/accuracy for traditional methods [15,16,17,18] | [1,3,5,15,16,17,18] | Navigating cluttered environments; controlled detection of transparent/dynamic objects in test rigs | Obstacle detection accuracy (%); number of collisions; time taken to navigate a set distance |
| Navigation and Wayfinding | 40% reduction in algorithm-calculated navigation time, 60% decrease in re-routing errors, 85% localization accuracy (RMSE) | Up to 30% error rate in indoor localization for existing systems [9]; reliance on pre-learned routes/sighted assistance (not directly comparable technically) [19] | [9,19] | Pathfinding in complex building layouts; localization in pre-mapped indoor testbeds | Time taken to reach destination (algorithmically); number of re-routing errors; localization accuracy (RMSE in meters) |
| Object Recognition and Information Retrieval | 85% accuracy in object classification, 25% reduction in information retrieval latency | Accuracy varies, generally below 70% [5]; limited capabilities [20,21] | [5,20,21] | Identifying objects in a diverse dataset; retrieving associated information from a database | Accuracy of object classification (%); information retrieval latency (ms) |
| Text-to-Speech and Image Captioning | 95% accuracy in text conversion, 10% improvement in text processing speed, 90% naturalness score (BLEU/MOS equivalent on test set) | Accuracy generally above 90% for clear text [11]; basic capabilities in some existing glasses [8,22,23] | [8,11,22,23] | Converting diverse text fonts/layouts to speech; generating captions for test images | Accuracy of text-to-speech conversion (%); text processing speed (words/sec); objective speech quality metrics (e.g., MOS score on a test set) |
| Social Interaction | 70% accuracy in facial recognition | Limited research on social interaction features in existing smart glasses; accuracy estimates unavailable [24,25] | [24,25] | Facial recognition on controlled datasets with varying angles and lighting; identification of simple facial expressions | Accuracy of facial recognition (%) |
Note: 1. Bhabad, D., et al. (2023) [3]; 2. Meliones et al. (2022) [1]; 3. Ramani et al. (2025) [5]; 4. Croce et al. (2019) [9]; 5. Malatesh et al. (2024) [8]; 6. Dhaliwal and Sharma (2024) [11]; 7. La Grow, S. J., & Prosek, R. A. (2008) [15]; 8. Willis, S. L., & Ogden, V. E. (2002) [16]; 9. Ulrich, I., & Borenstein, J. (2001) [17]; 10. Leporini, et al. (2022) [18]; 11. Helal, A. S., et al. (2001) [19]; 12. American Printing House (2023) [20]; 13. Hong and Kacorri (2024) [21]; 14. Ntoa et al. (2024) [22]; 15. Kirboyun (2023) [23]; 16. Gallace, A., & Spence, C. (2014) [24]; 17. Zuidhof et al. (2021) [25].
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
