Next Article in Journal
A Soft-Error-Tolerant SAR ADC with Dual-Capacitor Sample-and-Hold Control for Sensor Systems
Previous Article in Journal
Assessment of Eutrophication and DOC Sources Tracing in the Sea Area around Dajin Island Using CASI and MODIS Images Coupled with CDOM Optical Properties
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Review

Inertial Measurement Unit Sensors in Assistive Technologies for Visually Impaired People, a Review

by
Karla Miriam Reyes Leiva
1,2,*,
Milagros Jaén-Vargas
1,
Benito Codina
3,4 and
José Javier Serrano Olmedo
1,5
1
Center for Biomedical Technology (CTB), Universidad Politécnica de Madrid, 28223 Madrid, Spain
2
Engineering Faculty, Universidad Tecnológica Centroamericana UNITEC, 211001 San Pedro Sula, Honduras
3
Didactic and Educational Research Department, Universidad de La Laguna, 38204 San Cristóbal de La Laguna, Spain
4
Spanish Blind Organization (ONCE), 38003 Santa Cruz de Tenerife, Spain
5
Networking Center of Biomedical Research for Bioengineering Biomaterials and Nanomedicine, Instituto de Salud Carlos III, 28029 Madrid, Spain
*
Author to whom correspondence should be addressed.
Sensors 2021, 21(14), 4767; https://doi.org/10.3390/s21144767
Submission received: 9 June 2021 / Revised: 10 July 2021 / Accepted: 11 July 2021 / Published: 13 July 2021
(This article belongs to the Section Wearables)

Abstract

:
A diverse array of assistive technologies have been developed to help Visually Impaired People (VIP) face many basic daily autonomy challenges. Inertial measurement unit sensors, on the other hand, have been used for navigation, guidance, and localization but especially for full body motion tracking due to their low cost and miniaturization, which have allowed the estimation of kinematic parameters and biomechanical analysis for different field of applications. The aim of this work was to present a comprehensive approach of assistive technologies for VIP that include inertial sensors as input, producing results on the comprehension of technical characteristics of the inertial sensors, the methodologies applied, and their specific role in each developed system. The results show that there are just a few inertial sensor-based systems. However, these sensors provide essential information when combined with optical sensors and radio signals for navigation and special application fields. The discussion includes new avenues of research, missing elements, and usability analysis, since a limitation evidenced in the selected articles is the lack of user-centered designs. Finally, regarding application fields, it has been highlighted that a gap exists in the literature regarding aids for rehabilitation and biomechanical analysis of VIP. Most of the findings are focused on navigation and obstacle detection, and this should be considered for future applications.

1. Introduction

According to the World Health Organization, about 28% of the global population (2.2 billion) is visually impaired or blind [1]. Vision impairments have different limitations, including distance and near-vision impairment. As technology advances, there is a need to develop high-quality assistive systems for the inclusion of visually impaired people (VIP) into a technological world to improve their Quality of Life (QoL) and to facilitate daily challenges such as finding and keeping a job, mobility, using public transport, and doing physical activity (PA) [2,3,4,5,6]. With the ongoing progress in computer science, such as deep learning, and in hardware development, such as sensor miniaturization, researchers have developed human activity recognition (HAR) algorithms that enable automatic feature extractions [7,8,9], for instance, by using inertial sensor’s acquisitions as input data [10,11,12,13,14].
In addition, there are great advances in miniaturized sensors capable of providing parameters of moving objects, such as position and velocity [15,16,17]. The fusion of the advances in both sensors and artificial intelligence has led to many projects that seek to support VIP in navigation [18,19,20,21,22,23,24,25,26,27], traveling [28,29,30,31], representation of the real world [32,33,34,35], obstacle detection on wayfinding [29,36,37,38], assistant robots [39,40], and other applications for general mobility, for monitoring and improving PA, and for sports participation of the VIP [41,42,43,44]. The application spectrum is extensive and may include even sensor fusion for monitoring the vital signs of guide dogs in training [45]. However, a large quantity of these systems are designed to provide VIP with information obtained from their surroundings. Electronic Travel Aid Systems (ETAS) [3] are one of the most studied assistive technologies and, according to the recent state of the art, wearable assistive devices for the visually impaired can be divided into two categories: Video camera-based ETAS and Sensorial network ETAS. Sensorial network ETAS are based primarily on GPS, BLE beacons, RFID, Ultrasound sensors, and Infrared sensors [3,46].
An important factor constraining the development of these assistive technologies is that there are limitations regarding the accuracy of these systems. Another important factor is poor acceptance by the blind community, which is a factor related to the limitation of the visual rehabilitation programs in which these systems should be included [47].
Technologies based on inertial measurement unit sensors (IMU) are used in a large and ever-growing number of applications such as intelligence guidance, mineral exploration, self-driving robots [15], full-body motion tracking [48,49,50,51,52,53], and navigation as well [54,55]. IMUs are widely used because they provide positioning information based on the dead-reckoning method, which determines the current position based on estimates of velocity and heading, departing from a known previous position. This type of navigation and tracking information is useful in areas where infrastructure-less positioning systems are required [56], including VIP applications.
There is a large amount of literature on the use of inertial sensors to estimate position and orientation. However, as mentioned before, sensor acquisition in ETAS and other VIP systems developed for navigation and assistance are not primarily IMU-based technologies, although their sensing includes IMUs data. This is due to integration drift, which is a known disadvantage of IMUs that can generate position errors in the dead-reckoning method. To face the drift error, many authors suggest incorporating inertial measurement as part of the acquired sensing and fusing the values with the sensing of optic sensors and global positioning systems (GPS) also to include the drift reduction algorithms, which greatly contribute to a more accurate positioning within IMU data [56,57,58]. As mentioned before, there is a wide range of applications for IMUs. The aims of this systematic review are as follows: (1) to provide an overview of current state of the art research and development on technology that implements IMU sensors in support of VIP; (2) to provide an understanding of how IMU sensors work and how they are used in current developments; (3) to review challenges currently faced in research focused on assisting VIP; (4) to explore application fields, besides those for navigation, in which these types of sensors can be used to support VIP. To enhance reproducibility, the details of the procedure are provided; this is a thematic review in which we pre-selected for content as described below and in which additional relevant findings are discussed.

2. Materials and Methods

2.1. Literature Search Method

Literature searches were performed in the IEEE Xplore (IE), Web of Science (WoS), and PubMed databases. Considering rapid advances in technology, we focused on articles published in the last 5 years (until December 2020) to give an overview of the most recent developments. Searches were performed in IE, WoS, and PubMed on 15 December 2020. Only articles written in English were considered. Our screening was filtered in two stages: a general search and a refinement in the three databases. Terms used in the general search were (IMU* OR accelerometer OR gyroscope OR magnetometer OR inertial measurement). For the refinement, the terms were (visually impaired OR blind OR visual impairment).

2.2. Eligibility Criteria

The following criteria were used to select the articles included in this review: (1) articles with proposed systems including IMU sensor technology with at least one kind of measurement (accelerometer, gyroscope), (2) in the implementation of IoT systems on the developments and (3) rehabilitation or physical monitoring of daily life activities and (4) in publications within experimental results of their developments, including the participation of VIP or blindfolded (BF) volunteers.

2.3. Inclusion Criteria

The initial search resulted in 637 articles (IEEE = 85, WoS = 324, PubMed = 264). To be included in this review, the aim of an article was required to be a development to aid VIP exclusively; articles regarding navigation, human motion, or PA monitoring not developed for VIP were excluded. After applying the selection criteria and removing duplicates, 40 articles remained to be reviewed (Figure 1).

3. Results

The articles were summarized and divided into four categories according to the inertial sensor used, “sensor input”. The first section discusses nine articles reporting to use accelerometer input, the second section considers four articles that used gyroscope input, the third and four sections discuss 13 articles using both accelerometer and gyroscope input and 14 articles reported that used accelerometer, gyroscope, and magnetometer input.
Then, the usability, the application trends, and the artificial intelligence incorporation in the reviewed articles are discussed. Note that in the extension of this section, the role of the inertial measurement unit in each development is highlighted; however, most of the selected articles integrate sensor fusion in which other types of sensors (non-inertial) are used.

3.1. Accelerometer

As shown in Table 1, 23% of the reviewed articles reported the use of an accelerometer. An accelerometer measures the external specific force acting on the sensor, which consists of both the sensor’s acceleration and the acceleration due to the earth’s gravity. The accelerometer input served several purposes, including position estimation, monitoring of physical movement, and vibration detection. The most frequently used accelerometers included ActiGraph from Actigraph Corp, ADXL from Analog Devices, and KXR from Kionincs.
The Actigraph Corp wearable accelerometers were used in many clinical trials found in the systematic review. The accelerometers were used to measure PA within the VI community, with a major experimental focus on kids and older adults [60,61,62]. Since the blind spend more time in sedentary activities, trials attempted to determine relationships between falls and levels of PA [63]. These wearable sensors present output data of the three-accelerometer axis independently and provide activity counts as a composite vector magnitude of the axis. For instance, familial trials were conducted to correlate PA between VI, their parents, and siblings [66]. While in [65], the authors studied the PA of children with VI during different segments of the school day from the special school for VI in Xingqing Districts in Yinchuan, China. In this trial, a total of 600 min acceleration was recorded per day, and this input was analyzed within the ActiLife Lifestyle Monitoring System from Actigraph.
Nkechinyere et al. [67] developed software to identify specific daily activities performed by VI and elderly persons. The system uses a wearable accelerometer sensor to collect data that is then submitted to neural network regression (NNR) to characterize each activity as standing, sitting, bending, lying down, or walking. Falls and critical falls are also identified. In this work, the velocity–acceleration measurements are converted to gravity–acceleration by multiplying velocity by sensitivity on each axis. Then, gravity acceleration (G) is calculated using the sum of the squares of the X, Y, and Z-axis in order to remove negative values, and a G target value is representative for individual neural network training.
Vibration or shocks could also be determined by thresholds fixed for each axis. Case in point, within the system of Hirano et al., the vibration of a KXR94-2050 3-axis accelerometer sensor was used to allow blind runners to synchronize and match their running tempo with sighted guides (see Figure 2). The algorithm in this system was designed so that when the blind runner’s foot touches the floor, a vibration signal was induced in the guide runner’s ankle, allowing for synchronization of the race pace [64]. The algorithm was tasked with identifying foot strikes using a low-pass filter applied to samples. The peak acceleration values caused by foot strikes were detected and sent as vibrotactile feedback through a transducer. An acceleration threshold value (4.74 m/s2) was previously settled upon by trial.
The work of [59] presents the design and usage of two assistive technologies for VIP that use an ADXL345 accelerometer: a vibrotactile belt and a stereovision system. The vibrotactile belt (NavBelt modified [68]) was connected to ultrasonic sensors (LV-MaxSonar-EZ0) and to the accelerometer. The acceleration values were used to detect and eradicate unnecessary vibrations when detecting user motion, returning user movement and velocity information. The motion and velocity of a user can be recognized using acceleration by detecting stationary periods with the Zero Velocity Update algorithm [18].

3.2. Gyroscope

A gyroscope measures angular velocity: the rate of change of the sensor’s orientation. Thus, the integration of gyroscope measurements provides information about the orientation of the sensor. Four articles reported using gyroscope input for different roles within sensor fusion, as shown in Table 2.
In what can be considered a “rehabilitation” application, the authors of [72] used the gyroscope information to characterize long white cane usage in VI volunteers. In their system, the velocity of the cane’s sweeping movement was obtained by analyzing the gyroscope’s Z-axis signal. The sweeping frequency corresponded to the number of complete sweep cycles performed per second. The sweeping speed was defined as the angular velocity during the sweeping period. In addition, the authors experimented with grasping characteristics based on the positions of the thumb and index finger. They added an optoelectronic motion tracking system (QTM/Oqus, Qualisys AB) to obtain accurate cane orientation angles related to tilt, grip rotation, and sweeping movement.
On a different device proposed to help blind people detect stairs [69], the gyroscope output was used to determine if a change in the distance between the user’s head and the ground was due to head tilt or to the user stepping down or up. The method used an ultrasound sensor to measure the distance between the user’s head and the ground. The measured distance was compared to a reference distance that corresponded to the floor. So, to establish the reference distance, the gyroscope measured the α angle (corresponding to the head tilt); therefore, the reference distance was A/(sin α + cos α), where A corresponds to the distance from the head tilt to the floor by trigonometry.
With the gyroscope values, the buzzers also could provide information about the distance from the step as well as the height and depth of the step; in this system, a MPU-6050 IMU was used to obtain the tilt angle. The same sensor was used in [70], where the approach was detecting rotation and movements in an automated smart cane. In this system, a high-frequency sound wave was emitted, and its return was used to calculate the distance to an object. Since the device moves continuously while walking, the bottom sonar sensor was fixed in its initial place to detect high surfaces. The gyroscope values were used to control the servo motor; when the sensor value deviated from the fixed value, the servo rotated and returned to the initial fixed value.
Oommen et al. [71] developed a prototype to aid VI swimmers to train with more independence. The device consisted of a smartphone attached to the waist of the swimmer and Voice Recognition Technology (VRT) as the interface, so the swimmer could manipulate the application communicating to the VRT with waterproof Bluetooth earphones. With the help of the camera facing the bottom of the pool and the gyroscope data from the smartphone, the algorithm alerted the swimmer when the end of the lane was near and when the swimmer drifted sideways. The gyroscope was sampled at a minimum rate of 20 times per second and was used to correct the camera images for swimmer roll during strokes. The inertial measurement was essential to determine the orientation of the device and of the gravity vector in the navigation frame. This meant that camera frames were processed only when the device was parallel to the bottom lines, providing a correct estimation.

3.3. Accelerometer and Gyroscope Fusion

Accelerometers and gyroscopes are frequently used together in navigation situations when the position and orientation (i.e., attitude) of a device or person are of interest. Articles that reported using input from both accelerometers and gyroscopes sensors represented 33% of those reviewed. As expected, all dealt with navigation aid applications. The most frequently cited IMU was the MPU-6050 from TDK InvenSense, which has a Digital Motion Processor (DMP) to the fusion of the three-axis accelerometer and three-axis gyroscope; more details are described in Table 3.
Croce et al. [73] designed a system where a smartphone camera was the main sensor, which was used to detect special paths such as colored tapes or a painted line. It was also a tracking system based on the integration of a MPU6500 IMU. The authors used the accelerometer values for “Activity recognition”; the accelerometer covariance along the three axes was analyzed to determine if the user is standing still or walking. For heading estimation (direction of the user), the gyroscope data were used to identify the smartphone reference frame with respect to the navigation frame. To estimate the user position with respect to the navigation frame, the Z-axis of acceleration (vertical acceleration) is analyzed to identify steps, while the minimum and maximum vertical acceleration signals are retrieved for peak detection and zero crossings. These features helped to evaluate cardinality so the algorithm could be reliable with different users and different walking speeds. Displacement (s) was evaluated using the algorithm proposed by Weinberg for MEMs in 2002 [74]: Δ s = β α ( k ) M α ( k ) m 4 , where a ( k ) M is the actual time (k) maximum and α ( k ) m is the actual time (k) minimum of vertical accelerations, and β is the average length of a step. Finally, sensor fusion with the Computer Vision and PDR algorithms was done by implementing an Extended Kalman Filter. In this model, other parameters, such as angular velocity and direction of the user velocity, were provided by the IMU for the discrete time state model.
Table 3. Summary reviewed articles in the accelerometer and gyroscope fusion section.
Table 3. Summary reviewed articles in the accelerometer and gyroscope fusion section.
RoleIMUSensor FusionRERef.
Pedestrian dead reckoningMPU-6050RGB-D camera, GPS0.41 m[73]
Motion detectionMPU-6050RGB camera, ultrasonic sensor-[75]
Position estimation and orientationMPU-6050CMOS camera, line laser0.4–1 m[76]
Fall detection and attitude estimationNot specifiedRGB-D camera, GPS, velocity sensor-[77]
Fall detectionSmartphone IMUUltrasonic sensor, GPS10–20 m[78]
OrientationMPU-6050GPS, ultrasonic, and wet floor sensors-[79]
Attitude estimationNot specifiedRGB-D camera-[80]
Step countingNot specifiedUltrasonic sensor, GPS-[81]
Orientation and Height estimationNot specifiedRGB-D camera-[82]
Heading estimationNot specifiedGPS, compass2.9–1.7 m[83]
Angular velocity and accelerationSmartphone IMUStrain gauges-[84]
Pose estimationLSM9DS1RGB-D camera-[85]
Orientation of the head and handBMI055 BoschUWB FMCW radar sensor-[86]
Silva & Wimalaratne [75] used the MPU-6050 inbuilt motion fusion for image deblur processing in an optical obstacle detection belt. This was necessary because the camera’s continuous movement on the body causes image blurring. The deblur process was done by providing the approximate trajectory of the camera motion. In addition, Ref. [78] used it to create fall alarms in a proposed prototype of an intelligent walking stick for VI and elderly people. The hardware sensing was composed of ultrasonic sensors and GPS. The IMU module monitors the posture of the crutches in real time and verifies whether the horizontal angle and the acceleration direction of the crutches are normal so that a fall can be detected.
Chen et al. [77] reported another way to detect falls using inertial measurement sensors. Among other characteristics, fall detection using altitude estimation is based on an algorithm that processes values from inertial sensors in real time. The pitch angle represents the Y-axis rotation (body’s backward pitch); the roll, the X-axis rotation (body side-slip angle from left to right); and the yaw, the rotation angle around the X-axis (rotation angle of the body from left to right). The authors used a Kalman filter algorithm to suppress noise and improve the reliability. Also, they collected additional information (angular velocity and acceleration) to improve the method for user safety.
An assistive device called NavCane [79] was developed to aid VIP finding obstacle-free paths. The NavCane can detect wet floors and obstacles at different levels, and it provides simplified feedback. The inertial sensor input was used to determine the orientation of the cane. To determine the tilt angle, (inclination), they used the gravity vector and its projection on the axes of the accelerometer. Therefore, the tilt’s angle of the device was measured using the X and Y-axis inclination of the accelerometer. Then, the inverse sine of the X-axis and inverse cosine of the Y-axis were processed to determine the inclination angle from the measured acceleration. The authors did not indicate which type of IMU they used. In [80], which is an expansion of previous work [87,88], the authors proposed the adaptive ground segmentation method for obstacle avoidance. For this method, they used the camera’s attitude angle measured by an IMU, and a corresponding 3D point cloud in the world coordinate system was created and merged with GPS data.
Finally, Fan et al. [66] developed a virtual cane based on an FPGA device (Xilinx ZYNQ-7000). This device is designed to build a map of obstacles in front of the user based on signals emitted by the devices: the laser light provided from the line laser is captured by a CMOS camera. Then, images with a line laser stripe were transferred to the FPGA device. Using the camera’s internal and external parameters, FPGA can calculate the true distance between obstacles and the camera. The user must swing the device horizontally so that the vertical light can scan all the objects ahead. Therefore, the IMU tracks the system’s pointing angle and relative position with respect to the world coordinate frame. Having both pieces of information, they calculate the distance and shape of obstacles in real time.
In this section, all the authors reported sensor fusion with at least one optical sensor, including also lasers and ultrasonic sensors to obtain information about the surroundings for obstacle detection and navigation [75,76]. The reported navigation error with the lowest value was 0.41 m, as shown in Table 3 by [73]. In this system, PDR is provided by the MPU-6050 and the sensor input includes GPS and an RGB-D camera. This system is the ARIANNA (Path Recognition for Indoor Assisted Navigation with Augmented Perception), which was focused on navigation without obstacle detection.
Other authors proposed the improvement of the functionality of the existing smart vision canes by adding functions such as an emergency call to send a GPS address if the person gets lost, or a remote-control feature to find the stick in case the person loses the stick. The system has an indoor and outdoor guiding system. In indoor systems, the acceleration and gyroscope input is used to count the steps and to verify the directions in which the steps are being produced, so the system can ensure that the person took the exact predefined path to the desired place by using the Adafruit wave kit for feedback to the user [81].
Li et al. proposed a framework to avoid objects in indoor environments [82]. This framework is composed of an RGB-D camera and IMU to detect objects and make a collision-free patch in real time. The acceleration and angular velocity of the IMU are used to obtain the real orientation and height of the camera that are necessary to create a ground segmentation. Decomposing gravity from three-axis accelerations allowed obtaining the camera initial orientation (pitch and roll angles). On the other hand, real-time orientation was calculated by integrating the gyroscope measurements. The initial height estimation was based on the distance between the chest camera position to the ground. A pedestrian crossing mobile application was developed for the blind by [83]. This system, which could send crossing requests to the signal controller via The National Transportation Communication for Intelligent Transportation System Protocol (NTCIP) without the need for pushing the conventional actuation push button, was a proposal for the traditional accessible pedestrian signal systems. In this system, the inertial measurement united from the smartphone was used to estimate the heading.
An augmented reality system was developed based on radar technology and internal sensors. In this system, measured distances get translated into an interpretable sound rendered in a virtual audio space. The relative orientation of the devices in the head and the hand are computed using the transformation matrix output of both IMUs. At the initial point, the startup of the system initializes the orientation. The azimuth and elevation angles were computed from the resultant matrix in addition to the radar sensor measures. The collected input was the database for the convolution processing [86].
Gill, Seth, and Scheme evaluated the effectiveness of a multi-sensor cane in detecting changes in gait proposed for the elderly and visually impaired; the IOT multi-sensor system included strain gauges to measure load. Different walking conditions, including impaired vision and walking abnormalities due to incorrect cane lengths of the volunteers, were tested by simulating walking abnormalities. The inertial measurement values were used to classify the walking cycle events [84].
A device capable of detecting and locating objects as an object manipulation aid was proposed by [85]. The hand-worn device was composed of an RGB-D camera and an inertial sensor that was used for pose estimation by Depth Enhanced Visual-Inertial Odometry (DVIO). The system provides electro tactile and audio feedback to the user.

3.4. Accelerometer, Gyroscope, and Magnetometer Fusion

Magnetometers complement accelerometers by providing sensor heading (orientation around the gravity vector), which is information that accelerometers alone cannot provide. With the fusion of accelerometers, gyroscopes, and magnetometers, the orientation is estimated based on the direction of the magnetic field. In addition, other embedded models that estimate pose can be obtained, which in many cases are more accurate models. In this section, the sensor fusion reported by the authors includes technology such as optical and ultrasonic sensors, BLE beacons, and GPS.
The complementation of the IMU sensor fusion with the mentioned sensors resulted in improvements in navigation system precision. The accuracy obtained with IMU sensors integrated in smartphones is less precise than that obtained with external IMU sensors. For instance, errors on the order of meters (from 1.5 to 6 m) were registered with internal smartphone sensors, while, with external sensors, errors decreased in the order of centimeters (6.17 to 104 cm). This is also attributed to sensor fusion (see Table 4).
A Robotic Navigation Aid (RNA) system described by Zhang and Yen [89] used an RGB-Depth camera and inertial input from an VN-100 IMU/AHRS VectorNav IMU to acquire information about surroundings (Figure 3). In this prototype, the sensor fusion was used for the Visual Inertial Odometry (VIO), which estimates the RNA’s pose (orientation and position) so that the path planning node can use the position information to compute desired path and confirm the Desired Direction of Travel that would be used to control the active rolling tip. IMU measurements were used to calculate two of three components from the VIO: For floor detection, the gravity direction 𝑔⃗ and the inclination angle θ of RNA. The IMU state estimation calculates its own pose, velocity, and biases from its own measurements in conjunction with the extracted floor plane data, the tracked features, and the depth data. Then, these parameters are sent to the path planning node. The mode selection is performed automatically on the Human Intent Detection interface. The gyroscope indicates the actual turn angle in motion, which is compared to the expected turn angle from the encoder data (measuring the user’s compliance). If both turn angles are equal, this indicates to the system that it intends to use the RNA in its robocane mode instead of its with-cane mode (without robotic aid), expecting a motor-controlled motion of the tip.
The continuation of the previously developed work was also included in this review [85]. This article presented a new method to achieve more accuracy during the navigation. This method is mentioned above, DVIO, which integrates the geometric feature and the visual features of the scene with the inertial data for more acute estimation of the RNA’s pose.
Another navigation system was designed to aid VI mobility in places where there is no access to GPS, Bluetooth, or Wi-Fi [90]. The proposed method requires previously recorded paths using the inertial sensors of a smartphone for further guidance. The values obtained from the IMU were used to count steps and determine orientation to calculate route segments, distances, and turns. Vertical acceleration was used to estimate distance because it has a bigger amplitude than does horizontal acceleration. The average step length was calculated from a 20 m walk, and this information served as an input parameter to the program. The average azimuth of the segments was also calculated using the magnetic field perturbation from all three axes of the magnetometer compass. The system is similar to that of [91], who designed a smartphone-based indoor navigation application to aid the VI when using public transport. The system calculates pedestrian dead reckoning based on graphics, and it uses the existing tactile paths for positioning and navigation. Determining the attitude of the smartphone, relative to the user, was crucial to construct the pedestrian dead reckoning algorithm (PDR). The algorithm consisted of computing the orientation quaternion of the local-level frame relative to the body frame. The tilt components, such as roll and pitch (computed with acceleration data) and the heading component, were determined separately (computed with magnetometer data). The quaternion estimation involves a prediction step in which the angular rates (obtained from the gyroscope) are applied. With the step absolute orientation angle, if the step length is known, the change of position can be estimated; as the sensors from smartphones are generally low cost and not specifically designed for navigation, a PDR trajectory needs to be created by developing a map matching algorithm with a graph created with tactile paths (BLE beacons).
An indoor navigation system was developed based on the same sensor input as in [90] (inertial sensor from smartphone and Bluetooth beacons) [30]. A framework was proposed that combines relative position-based learning techniques from IMUs and absolute position based on iBeacons (Figure 4). In this framework, gyroscope data were used to detect relative turns and magnetometer axis fusion with accelerometer data provided heading detection. In addition, an accelerometer provided the adaptive relative position detection. With these three components, it is possible to obtain relative positioning, which, combined with absolute positioning, can provide a final estimated position. Features, including step size, orientation, standby position, roll pitch and yaw, relative turn, movement, and heading angle were all extracted from IMU sensors.
Considering the ability of a VIP to use auditory information to locate sound sources, a Real-Time Localization System (RTLS) for indoor and outdoor sports was designed by Ferrand et al. [92]. The system consists of an accurate 2D position system, a head tracker, and a sound renderer to simulate a virtual sound source. For this system, they used a BNO055 Bosch IMU sensor to determine the orientation of the body, in fusion with an Ultra-Wide Band sensor to provide precise distance measurement and an Optical Flow sensor to determine the velocity of the person when walking. They used the localization of the user to create an augmented reality scene with virtual sound. The software spatializes a sound depending on the position of the user, so the user could identify its own position in the track based on the sound that was being produced. In these systems, the Euler angles from IMU as the head tracker was essential for precision.
Ciabanou et al. [101] developed a system to detect indoor staircases with the help of an RGB-D camera. The algorithm is based on clustering patches from the normal map. To support information provided by the images, an IMU sensor was used to obtain absolute orientation to correct the normal orientation with respect to the depth sensor movement. No additional information about the characteristics of the input or orientation algorithm was provided by the authors, as there are several models to obtain absolute orientation and will be discussed in the next section. In the final algorithm, an important input, “Tangle”, is referred to as a threshold for filtering by orientation. According to the authors, the mean rotation angle of every region was computed.
Simoes et al. [93] created an indoor wearable navigation system for users with visual impairments using computer vision, ultrasonic sensors, and an accelerometer, gyroscope, and magnetometer. The authors didn’t provide specific information related to the use of the inertial sensors but declared that during navigation, the path to the next marker was calculated based on the origin point or location point information, and on the orientation. When the system detects a marker, the user’s position, direction to other markers, arrival time, and others, are updated and enhanced. They improved the testing OpenCV algorithms to recognize static objects such as doors, walls, etc.
Dang et al. [94] created an assistance system in which the camera, IMU, and world coordinate frames were combined (Figure 5). Since the system combines multiple sensors, a calibration step was done first to estimate the relative position and orientation of each sensor. The height of the system was estimated based on the orientation of the IMU and the laser stripe distance. The motion of the hand was tracked using a Kalman filter. There is one moving interval and two stationary intervals with respect to the person’s body. The stationary interval of the IMU sensor was detected when gyroscope values for all three coordinates were approximately zero. The inertial measure was needed for the Kalman filter-based motion tracking algorithm, since the sensors provide the acceleration and angular rate of the system while moving. This algorithm uses the system’s initial orientation as the initial value to track the orientation of the system in each swing. Once the orientation of the system was tracked, it was used to determine the pointing angle of the system. For this, the pitch angle was estimated using gravity data. The height also was used to estimate the distance between the person and any obstacle detected.
Using a sensory substitution device (SSD) for VI, Botezatu et al. [95] proposed a 3D representation on the space conveyed by means of the hearing and tactile sensors. The IMU sensor (LPMS-CURS2) is used to track the head and body movement. The data acquired from the stereo and structure modules are synchronized with the data provided by the IMU sensor in order to make corrections in the stereo and depth frames. This fusion allows the system to identify the ground plane, doors, and other objects. The inertial measurement was essential to determine device orientation and track gravity orientation so that the camera frames are processed only when the device was parallel to the bottom lines for a correct estimation.
“blindBike” is an application that uses IMU sensor data to assist VIP who bike [96]. The “Road Following” module of this Android application uses 2D computer vision and statistical techniques to create a turn-by-turn route based on GPS map indications. Sensing in this application consists of a smartphone camera, location (GPS) services, microphone, audio output, accelerometer, gyroscope, and compass. The authors do not provide much information about how the inertial measurement units are used in this system. However, they do explain that sensing units are used to detect the right edge of the road and to direct the biker to maintain their route along the right edge as needed. The measured distance of the biker from the estimated right edge of the road determines if the user is on course or not; compass data are analyzed to determine if the bike’s heading conforms to what it should be.
Mahida et al. [99] proposed to map the smartphone IMU measurements into 2D local coordinates using the regression-based training of Multi-Layer Perceptron (MLP). Within the three-axis values of the accelerometer, gyroscope, and magnetometer, plus the roll, pitch, and azimuth values from IMU, they trained the algorithm to predict the position of the VI user when holding the phone. They used a previous database that included two types of rooms dividing the spaces into microcells (x,y); the resultant output, predicts an x,y position for indoor location through an smartphone app, as shown in Figure 6.
Finally, a wearable low-cost system was developed by [97]. In this system, the authors calculate the magnitude of the acceleration, remove the gravity of the acceleration, and filter the resulting magnitude. The peaks of the magnitude that are above the standard deviation of one were counted as steps. The magnetometer is used to calculate the heading angle. Both are complemented with the obstacle detection system and provide real-time voice command instructions to the user to avoid obstacles. The physical obstacles and the people were constantly being detected by the Pi camera and ultrasonic sensors, respectively.

4. Discussion

4.1. Technical Analysis of the IMU’s Roles

4.1.1. Motion Measurement, Angular Velocity, and Fall Detection

In the first section, most of the authors reported using ActiGraph wearable accelerometers to measure PA. These sensors have been used by the medical community for a long time as activity monitors in clinical trials such as analysis of health psychology and sleeping disorders [102,103], but most of them uses this sensor to measure physical activity [104]. The sensor provides raw three-axis acceleration with a sample ranging from 30 to 100 Hz, which is recorded and then processed. In the processing, these raw data can provide information about the motion activity, such as step counting and positional data (standing, sitting, or lying down). It has been proven that this type of accelerometer may not be sensitive enough to measure very low motion or low physical activity (which is sometimes the case of VIP), suggesting that for more accurate measurement, new classification of activity counts should be developed [105]. Motion detection by accelerometer readings, to establish if a person is moving or not in order to eradicate false vibration, was also applied by Trivedi et al. [59]. Motion readings can also be used for fall detection. Two different methods for fall detection excel in this revision due to the simplicity of the algorithms. Chen et al. [77] propose a threshold-based method where they calculate the acceleration vector sum (AVS) and the angular velocity sum (AVVS) to detect the movement of the user using the equations:
AVS = ( a x 2 + a y 2 + a z   2 )   ,   AVSS = ( v x   2 + v y 2 + v z 2 )
The data are collected consecutively while the system is active; it provides a fall alarm if both thresholds exceed the previously settled parameter, and a similar method is reported by Nkechinyere [67]. On the other hand, Ref. [78] used the orientation angles of a placed inertial sensor in order to measure the angle between the crutches (and canes) and the vertical of the world coordinate system, so that when a crutch or cane falls on the ground, an alarm is sent to the main developed system. Another simple method regarding angular velocity was reported by [72]. The sweeping velocity of the long cane was obtained using a gyroscope Z-axis signal (due to configuration of the placed sensor). They used this signal to calculate the frequency and speed of the sweeping. By establishing the quotient between the number of zero-crossings (determine a change of direction in the sweeping cycle) and the product of two times the diagonal duration in seconds, since a complete cycle was calculated from the initial position at the right most point and back from the left most point to the initial position of the subjects, in which the angular velocity is zero.

4.1.2. Orientation/Attitude Estimation and Heading

The relative orientation of two devices can be estimated by computing the transformation matrix output of the IMUs placed in these devices. It starts by initializing the absolute orientation (orientation with respect to the world coordinate system). From this point, the orientation of one device is transformed into the device’s coordinate systems, which results in relative orientation. The azimuth and elevation angles can be computed from this matrix representation. Bai et al. [88] used the attitude angles to create an adaptative ground segmentation using a rotation matrix. For this goal, they compute a similar algorithm to that mentioned above, with the difference that the authors harness a depth image to create a 3D point cloud in the world coordinate system, where the reconstructed points xw, yw, and zw are calculated by:
x w ,   y w ,   z w = z EK 1   [ u v 1 ] ,   where   E = C o s   γ S i n   γ 0 S i n   γ C o s   γ 0 0 0 1 1 0 0 0 C o s   α S i n   α 0 S i n   α C o s   α
The pixel value of point p(u,v) in the depth image represents the distance between the final point and the camera, which is equal to z. K is the camera intrinsic parameter matrix. The rotation angles of interest are the pitch α and roll γ angles corresponding to the X and Z-axis of the camera.
Absolute orientation can be directly obtained by sensor fusion in 9DOF sensors through Euler output angles [92,98] or quaternion estimation using inertial and magnetic observations, as in the case of [91], where the tilt or inclination components (roll and pitch) were determined separately from the heading component (yaw), so that there would be magnetic disturbances only in the heading value.
The gravity direction estimation has an important role in most of the developments that use inertial sensors for PDR or for absolute orientation. The direction of gravity can be considered as the unit vector perpendicular to the local horizontal in a typically northeast plane, pointing vertically downwards. This vertical direction vector is generally time-varying due to the rotation motion when expressed in the device’s coordinate system [106]. When the gravity direction is estimated in the device’s frame, it can be utilized to decompose any vector. In the case of [70], the gravity direction was used to determine if the device placed in the abdomen of the swimmer was orthogonal to the floor of the pool; in these intervals, the user’s position is estimated in frames captured by camera.
On the other hand, in the case of [82], the initial orientation of an object was calculated by decomposing gravity from three-axis accelerations, which was represented as:
Pitch = tan 1 ( A X , O U T 2 + A Y , O U T 2 A Z , O U T )   Roll = tan 1 ( A X , O U T A Y , O U T 2 + A Z , O U T 2 )
A X , Y , Z   O U T are the accelerations along the X, Y, and Z-axis, subsequently. Since, in theory for real-time orientation, the estimation can be obtained by integrating the output of the gyroscope, as mentioned before, the estimation suffers from the integration of drift over time. The authors used a complementary filter to mitigate the noise and the horizontal acceleration dependency in real-time orientation.
The heading estimation (yaw angle) can be obtained either from inertial sensors only (accelerometer and gyroscope) or accelerometer fusion with magnetometer. In any method, a more precise measure of the roll and pitch angles is easier to obtain than a precise heading measure while calculating orientation angles, which is due to the magnetic disturbance when using a magnetometer or accumulated drift error when using gyroscope values [57,107,108]. These effects can be limited with magnetic perturbation compensation algorithms or with the domain of specific assumptions when treating the sources of error [109].
Although both methods are used according to the hardware selection of the authors, a more precise heading can be obtained when using magnetometer, since the orientation can be estimated based on the direction of the magnetic field [108]. In [97], this method is applied using the next equation for yaw (heading) estimation:
Angle = 180   x   t a n 1   ( m y m x ) / π
where my and mx are the magnetometer reading of the Y and X-axis, respectively; to capture the magnetic energy around the surface of a sensor, mechanical calibration is needed.

4.1.3. Positioning and Tracking

Pose estimation is referred to the estimation of both orientation and position by modeling the accelerometer and gyroscope measurements to the dynamics [108]. In dynamic models, for the estimation of states from multiple sensors, the most widely used technique is the Kalman filter, which uses an optimization method for estate estimation [110]. The difference of this method with the Extended Kalman filter is that the Extended method computes filtering estimates in terms of the conditional probability distribution, while the other method can be interpreted as Gauss–Newton optimization of the filtering, using normal distribution in the processed and measured noises of the sensors. The Kalman (KF) and Extended Kalman filtering (EKF) algorithms are used to compute pose and tracking on linear and non-linear models [57,108,110,111].
A Kalman filter-based algorithm was implemented by [76] using the angular rate and accelerations in order to estimate the pose of the system, with the assumption that an accelerometer measures only the gravity when three coordinates of gyroscope values are all near zero, since according to the authors, the acceleration of movement of the visually impaired is small during normal walking. With this presupposition, the pose of the system with respect to the temporary world coordinate frame can be calculated. The Kalman filter-based motion tracking algorithm was also applied by [94]. On the other hand, Croce et al. [73] implemented the Extended Kalman filter. For the state model, the position estimation is calculated on the IMU-based PDR algorithm, using only accelerometer and gyroscope measurements. The user heading, absolute velocity, and coordinates were considered in the measurement model.
This KF-based algorithms may not apply to all positioning estimation scenarios; some other non-linear models that are derivatives of the EKF are also suggested by authors to face the estimation accuracy problems of the sensor states [112]. However, the KF-based algorithm is the most frequent method used by the authors of the selected articles discussed in this review.
The positioning models can be improved with hardware implementation when fused with local markers as in the graph-based PDR model presented by [91] or the proximity based on visual pattern model by [93]. It can also be improved by using local markers and the implementation of deep learning models [30,99].

4.2. Usability

As mentioned in the introduction, an important factor constraining the development of systems to aid the VI is the acceptance of these technologies by the visually impaired community. The following section summarizes information from the research articles presented in this systematic review that pertains to the participation of VIP in tests of proposed systems.
Apart from the six articles focused on the measurement of PA, which are based on the participation of VI; only eleven of the 34 remaining articles summarized in the review tested proposed systems with VI volunteers or include VI during their experimental phase [64,67,72,73,79,80,91,92,93,95,98]. Three reported tests used blindfolded (BF) volunteers [59,89,97], and one used both VIP and BF [90]. There is elevated participation of VI in the Bai et al. [80] and Meshram et al. [79] papers. However, most of the authors did not include (or mention) usability questionnaires or user feedback after the usability testing; comments about the user-centered approach are shown in Table 5.
Concerning usability questionnaires after prototypes testing, only three of the 40 articles reported questionnaires of experience or qualitative feedback after VI participation in the tests, and two reported grading performance or obtaining qualitative feedback from BF participants. The relatively few questionnaires mentioned solicited feedback information such as ease of use or wearability usefulness; response time; independence and localization; sense of safety; and advice for future modifications [93]. Nor did they mention the extent to which the system was helpful and general usability [80].
The paucity of user feedback reported is alarming. VIP and volunteer feedback are important to the development of such systems; targeted users must be considered during the development process or usability may be compromised.

4.3. Field of Applications

“Navigation and Object recognition” was the most prevalent VIP application and was cited in 38% of systematic review articles (15 articles), as shown in Figure 7. A high percentage of articles (40%) in this application did not specify which kind of IMU was used as sensor input; this was likely because algorithms for identifying obstacles were based on camera sensing (67%) and/or ultrasonic or laser sensing (67%), so the authors provided more details about the specification of these sensors. However, IMU fusion sensors (accelerometers and gyroscopes or accelerometers, gyroscopes, and magnetometers) played essential roles in the articles selected in these field of application. The roles vary from motion detection and tracking to pose, attitude, or orientation estimation and step counting or fall detection (see Table 3 and Table 4).
“Navigation only” was the second most cited field of application and was represented in 23% of the reviewed articles. Researchers tended to use IMUs for several tracking purposes in the systems described: primarily PDR, pose estimation, and step detection. Smartphone IMUs were used as input to algorithms in more than half of the articles to provide “simple architecture” solutions. Only one article reported the use of a single IMU measurement (gyroscope) for input, while 78% developed devices or systems that used three measurement inputs: accelerometer, gyroscope, and magnetometer. The authors of one article did not declare the type of sensor input but referred only to “IMU” input. However, given the nature of the extracted data (absolute orientation), it can be assumed input was from at least two sensors. Although the IMUs listed (ADXL, MPU, LPMS-CURS2, and Xsense) are reported to be more accurate when compared to smartphone IMUs, one article reported the use of smartphone inertial sensing for “Navigation and Object Detection” as compared to five articles in the “Navigation only” application field.
About 33% of the articles were categorized as dealing with “Sports/daily activity” application. We included articles that described or reported research focused on measures of physical activity of VIP, monitoring or improving QoL activities, and systems developed to aid athletes with visual impairments. Research that focuses on monitoring physiological and behavioral patterns with wearable devices, such as inertial sensors, has become more prevalent due to the increasing availability of wearable small devices [113]. Wearable accelerometers are widely used in adapted PA research, since physical inactivity is a serious health issue in VIP [61]. Six articles regarding the measurement of PA among VIP are included, which represent 46% of the articles cited in this section. On the other hand, participation in sports is proven to benefit the VI, not only physically but also personally and socially [114]. The International Blind Sport Federation includes athletics, chess, goalball, soccer, judo, bowling, powerlifting, shooting, showdown, swimming, and Torball as sports. However, climbing, baseball, cricket, golf, sailing, and rowing are also practiced by VIP [115], but only four aid systems related to running, swimming, biking, and roller skating were found in the systematic review.
The IMUs in this application field section were used basically to measure PA by wrist-worn accelerometers to monitor daily life activity by using accelerometer sensor input only. For sports, the roles of the inertial sensors depend on the four identified sports: running (sense the foot movements), biking (detect the right edge of the road), swimming (track the direction of gravity), and roller skating (body orientation and head tracking). Fifteen percent of the articles in this section reported the use of smartphone IMU sensing; 46% reported the use of ActiGraph accelerometers. The Bosch BNO055 and BMI055, the KXR94-2050, and the TDK Inversense MPU-9250 were used in different systems in this application field. For input, 61% of the review articles in this application field used accelerometer data exclusively; one reported using a gyroscope, while four used sensor fusions of acceleration and gyroscope.
Rehabilitation is the systematic process in which VIP are provided with tools to help them deal with their visual impairments with greater independence and self-confidence. These tools include activities such as learning Braille, learning how to use a cane, sightless feeding of themselves, and optimizing the use of residual vision and teaching skills in order to improve visual functioning in daily life [116], as well as other daily activities such as orientation and mobility trained by specialists in rehabilitation [2,4,117]. Only two of the reviewed articles seem to have a rehabilitation approach, although many other designs included in the review can be used in rehabilitation [6,118,119]. However, advances in inertial sensor technology have been critical in assisting in the rehabilitation processes of other physical disabilities such as orthopedic [7,120,121,122,123,124,125]; no further developments have been found to have the specific approach of this important stage in the life of a visually impaired person, even though the importance of this stage has been proven [117,126]. Since the loss of vision leads to functional disabilities and restrains the participation in everyday activities, it limits the individual’s autonomy and QoL [127]. Only 13% of the selected articles featured systems to aid the visually impaired people when practicing sports; this is a fact that deserves attention, because as mentioned before, there is a need to promote PA with visual impairments, since inactivity is an alarming and common health issue along them. One last article was considered as “Other” according to the application field division; the authors proposed a system to aid pedestrian signals through a “Virtual Guide Dog” app. Sixty percent of the reviewed articles reported having an indoor and outdoor focus, which relates to the number of articles in the navigation and obstacle detection and the navigation-only applications, while 33% of the articles were focused on developments for indoor only and 8% were focused on outdoor only.

4.4. New Avenues of Research and Missing Elements

4.4.1. Artificial Intelligence Integration

The integration of AI architectures was found in twelve of the reviewed assistive technologies; nevertheless, only 17% of these articles (2) used the IMU sensor as input (data feeder). The remaining authors used optical sensors from cameras as the algorithm’s input, because most of the developments focused on object detection, object recognition, and obstacle avoidance. In these cases, several machine learning approaches were tested, such as decision trees [100] and class labeling in computer vision [93]. Deep learning, which consists of networks that extract features automatically, was applied in the rest of the articles cited in this section, with Convolutional Neural Networks (CNN) being the most tested architecture [73,77,82,87]. In this review, the authors who implemented inertial sensor-based AI applied the architectures to predict a position in 2D local coordinates (x, y) [99] and to recognize human activities [67]. For position prediction, a regression-based MLP neural network was tested, reaching an accuracy of 94.51%, which represented a 0.65 m positioning error using accelerometer, gyroscope, and magnetometer input. The authors proposed this deep neural network model as a complement to their previous indoor navigation framework, which is also discussed in this review [30]. For human activity recognition, the authors validated a neural network regression method, achieving 100% accuracy in activity classification using accelerometer input only. Processing IMU data as time series is a preprocessing method for raw inertial sensor data that has emerged lately and that helps improve the accuracy of human activity recognition predictions [128,129,130,131]. On the other hand, AI based on inertial sensing can be used to improve the parameter estimation of geometric motion models and to replace complex filtering models in nonlinear scenarios, for instance, for pose estimation and tracking [132]. This would mitigate the accuracy problems in navigation, on which most of the assistive devices in this review are focused. Consequently, artificial intelligence-based inertial sensing, for instance for navigation applications, constitutes a new avenue of research.
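As an illustration of the inertial sensor-based position prediction reported in [99], the sketch below trains a regression MLP that maps window-level IMU features to local (x, y) coordinates and evaluates the mean positioning error. The original feature set, network architecture, and dataset are not detailed here, so the synthetic data, layer sizes, and scikit-learn pipeline are assumptions for demonstration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: each row is a feature vector extracted from a short window
# of accelerometer, gyroscope, and magnetometer samples (e.g., mean, std, min,
# max per axis); the target is the 2D position (x, y) in metres.
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 9 * 4))
y = rng.uniform(0.0, 20.0, size=(2000, 2))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Regression MLP from IMU features to local (x, y) coordinates.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)

pred = model.predict(X_test)
error_m = np.linalg.norm(pred - y_test, axis=1)
print(f"mean positioning error: {error_m.mean():.2f} m")
```

With real labeled walking trajectories instead of the random placeholders, the same pipeline produces a positioning error metric comparable to the 0.65 m figure reported in [99].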

4.4.2. Biomechanical Analysis

The biomechanical research of visually impaired people is an important field for injury prevention and for evidence-based rehabilitation methodology. Most of the medical research found in this review focused on evaluating the physical activity of people with visual impairment; however, visual impairment is usually accompanied by age-related degenerative diseases. For this reason, part of the literature on visual impairment and blindness is dedicated to biomechanical analysis, such as gait and posture parameters [133,134,135,136].
Analysis of gait parameters using inertial sensors, on the other hand, has been an important topic in the literature for more than a decade [137]. This literature focuses on developing algorithms to calculate stride length and gait velocity, as well as on analyzing gait cycles to identify abnormalities, diseases, or changes over time [138,139,140,141] for different medical applications. However, no studies linking this technical area to visual impairment were found in this review. In the general literature, one research paper by Flores and Manduchi [130] provides a dataset of inertial sensor time series collected from blind walkers. The authors offer these data to researchers interested in personal mobility since, as they claim, the gait of visually impaired walkers has peculiar characteristics. For this reason, we consider this topic a missing element in the literature.
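As a minimal example of the gait analysis that such inertial datasets enable, the following sketch detects steps in a body-worn accelerometer signal and derives cadence and an approximate walking speed. The filter cut-off, peak thresholds, and the Weinberg-style step-length constant are placeholder assumptions rather than parameters taken from the cited studies.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def gait_parameters(acc_xyz, fs=100.0):
    """Estimate step count, cadence (steps/min), and a rough walking speed
    (m/s) from body-worn accelerometer samples given in g. Sketch only."""
    # Gravity-free magnitude signal, low-pass filtered to keep gait dynamics.
    mag = np.linalg.norm(acc_xyz, axis=1) - 1.0
    b, a = butter(4, 3.0 / (fs / 2.0), btype="low")
    mag_f = filtfilt(b, a, mag)

    # Steps appear as periodic peaks; enforce a minimum inter-step interval.
    peaks, _ = find_peaks(mag_f, height=0.05, distance=int(0.3 * fs))
    n_steps = len(peaks)
    duration = len(mag_f) / fs
    cadence = 60.0 * n_steps / duration

    # Weinberg-style empirical step length from the peak-to-valley amplitude
    # of the whole recording; K must be calibrated per subject (0.45 is a
    # placeholder value).
    K = 0.45
    step_length = K * (mag_f.max() - mag_f.min()) ** 0.25
    speed = step_length * n_steps / duration

    return n_steps, cadence, speed
```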
In addition, the biomechanics and quantification of long cane motion parameters and performance using inertial sensors can also be considered a new avenue of research and a missing element. In the few studies found in the literature on this topic, sophisticated 3D motion analysis equipment was required to conduct the motion acquisitions for further analysis [72,142]. However, thanks to advances in sensor fusion, motion analysis and quantification of long cane characteristics can be obtained with inertial sensors instead of 3D motion systems, given the right preprocessing and proper interpretation.
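A hedged sketch of how a cane-mounted gyroscope alone might quantify long cane sweeping, instead of the 3D motion capture systems used in [72,142], is given below. The filter settings, peak-detection parameters, and the simple linear drift removal are illustrative assumptions; a practical implementation would need calibration against a reference motion-analysis system.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def cane_sweep_metrics(gyro_yaw_dps, fs=100.0):
    """Estimate sweep frequency (Hz), tip-to-tip sweep amplitude (deg), and
    peak angular velocity (deg/s) from the yaw-rate channel of a gyroscope
    mounted on a long cane. Illustrative sketch only."""
    # Low-pass filter: typical cane sweeping occurs well below ~5 Hz.
    b, a = butter(2, 5.0 / (fs / 2.0), btype="low")
    yaw_rate = filtfilt(b, a, gyro_yaw_dps)

    # Integrate the angular rate to recover the sweep angle, then remove a
    # linear trend as a crude correction for gyroscope drift.
    angle = np.cumsum(yaw_rate) / fs
    angle -= np.linspace(angle[0], angle[-1], len(angle))

    # Each left/right extreme of the sweep appears as a peak in |angle|;
    # a full sweep contains two extremes.
    peaks, _ = find_peaks(np.abs(angle), distance=int(0.2 * fs))
    duration = len(angle) / fs
    sweep_frequency = 0.5 * len(peaks) / duration
    sweep_amplitude = 2.0 * np.percentile(np.abs(angle), 95)
    peak_velocity = float(np.max(np.abs(yaw_rate)))

    return sweep_frequency, sweep_amplitude, peak_velocity
```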

4.5. Limitations

A limitation of the present review may be the fact that only three databases were included in the systematic article search. We selected three databases that we consider relevant to the research topic; other articles addressing the use of inertial sensors in assistive technologies for visually impaired people may exist in the literature but, because they are indexed in other databases, did not meet the eligibility criteria and are therefore not included.
Another limitation lies in the fact that no qualitative assessment of the selected articles was performed; instead, all articles that met the criteria, including conference and short papers, were included for in-depth analysis. As a result, this work provides a comprehensive systematic review of the recent literature, focusing the discussion on the use of inertial sensors.

5. Conclusions

A systematic review of research articles was conducted to find system developments that use inertial measurement unit sensors in assistive technologies for visually impaired people. The reviewed articles were categorized according to the type of IMU input used and the role of these IMUs, including pose estimation (position and orientation of a body, an object, or a white cane) and identification of human motion, i.e., PA and falling, as well as roles specific to each application field, such as measuring the sweeping velocity of a white cane, detecting the right edge of the road for blind bikers, or tracking the direction of gravity when swimming. The most common approach among the findings was sensor fusion of accelerometers, gyroscopes, and magnetometers (35%), while the least common was the use of gyroscope input alone (10%). In addition to the IMU data, sensor fusion in most of the articles included GPS (20%), optical sensors such as RGB-D (16%) and RGB (15%) cameras, and ultrasonic sensors (16%). Better precision in navigation and positioning estimation can be achieved when UWB, line lasers, and velocity sensors are fused in hardware and when local markers and deep learning architectures are implemented in software. The smallest navigation errors reported were due to IMU fusion with an RGB camera and the use of external inertial sensors, for instance, the MPU-6050 and the VN-100 IMU/AHRS, instead of the less precise smartphone inertial sensors. Many of the summarized systems for aiding athletes exhibit a rather simple architecture, suggesting that inertial sensors are readily applicable in this area and that mainly deeper knowledge of the specific activity is needed to create an assistive system. Among accelerometer-only systems, ActiGraph IMUs were the most commonly used, owing to their function of measuring PA. The results indicate that new avenues of research lie in the integration of AI with inertial sensors as data feeders to improve the accuracy of assistive devices developed for navigation assistance. There are also missing elements in the literature, such as technological developments to aid the rehabilitation process, the use of inertial sensors for biomechanical analysis of gait and posture parameters, and the biomechanics of long cane usage among VIP; the results have shown that it is necessary to promote the inclusion of technology in these biomedical research areas as well. Finally, a significant limitation evidenced by this review is that the designed aids for the visually impaired lack user-centered designs. Most authors used blindfolded persons instead of actual blind persons to validate their developments, and only 8% of the reviewed developments included a usability questionnaire for visually impaired users, which should be considered in future research.

Author Contributions

Conceptualization, K.M.R.L. and B.C.; methodology, K.M.R.L. and J.J.S.O.; validation, K.M.R.L. and J.J.S.O.; formal analysis, K.M.R.L., M.J.-V., B.C. and J.J.S.O.; investigation, K.M.R.L.; resources, J.J.S.O.; data curation, K.M.R.L. and M.J.-V.; writing—original draft preparation, K.M.R.L.; writing—review and editing, M.J.-V., B.C. and J.J.S.O.; visualization, K.M.R.L. and M.J.-V.; supervision, J.J.S.O.; funding acquisition, J.J.S.O. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially financed by the Ministerio de Ciencia, Innovación y Universidades, Ref.: PGC2018-097531-B-I00.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

English Language editing services provided by Robert Swett are greatly appreciated. The author Karla Miriam Reyes Leiva acknowledges scholarship support from the Fundación Carolina FC and the Universidad Tecnológica Centroamericana UNITEC. The author Milagros Jaén-Vargas would like to thank the Secretaria Nacional de Ciencia y Tecnología SENACYT for her scholarship in the IFARHU-SENACYT program.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. World Health Organization. World Report on Vision; World Health Organization: Geneva, Switzerland, 2019; Volume 214, ISBN 9789241516570. [Google Scholar]
  2. Brady, E.; Morris, M.R.; Zhong, Y.; White, S.; Bigham, J.P. Visual challenges in the everyday lives of blind people. Conf. Hum. Factors Comput. Syst. Proc. 2013, 2117–2126. [Google Scholar] [CrossRef]
  3. Real, S.; Araujo, A. Navigation systems for the blind and visually impaired: Past work, challenges, and open problems. Sensors 2019, 19, 3404. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Aciem, T.M.; da Silveira Mazzotta, M.J. Personal and social autonomy of visually impaired people who were assisted by rehabilitation services. Rev. Bras. Oftalmol. 2013, 72, 261–267. [Google Scholar] [CrossRef] [Green Version]
  5. Kacorri, H.; Kitani, K.M.; Bigham, J.P.; Asakawa, C. People with visual impairment training personal object recognizers: Feasibility and challenges. Conf. Hum. Factors Comput. Syst. Proc. 2017, 5839–5849. [Google Scholar] [CrossRef]
  6. Pigeon, C.; Li, T.; Moreau, F.; Pradel, G.; Marin-Lamellet, C. Cognitive load of walking in people who are blind: Subjective and objective measures for assessment. Gait Posture 2019. [Google Scholar] [CrossRef]
  7. Nweke, H.F.; Teh, Y.W.; Al-garadi, M.A.; Alo, U.R. Deep learning algorithms for human activity recognition using mobile and wearable sensor networks: State of the art and research challenges. Expert Syst. Appl. 2018, 105, 233–261. [Google Scholar] [CrossRef]
  8. Duman, S.; Elewi, A.; Yetgin, Z. In Design and Implementation of an Embedded Real-Time System for Guiding Visually Impaired Individuals. In Proceedings of the 2019 International Conference on Artificial Intelligence and Data Processing Symposium, IDAP 2019, Malatya, Turkey, 21–22 September 2019. [Google Scholar] [CrossRef]
  9. Borelli, E.; Paolini, G.; Antoniazzi, F.; Barbiroli, M.; Benassi, F.; Chesani, F.; Chiari, L.; Fantini, M.; Fuschini, F.; Galassi, A.; et al. HABITAT: An IoT solution for independent elderly. Sensors 2019, 19, 1258. [Google Scholar] [CrossRef] [Green Version]
  10. Kale, H.; Mandke, P.; Mahajan, H.; Deshpande, V. Human posture recognition using artificial neural networks. In Proceedings of the 2018 IEEE 8th International Advance Computing Conference (IACC), Greater Noida, India, 14–15 December 2018; pp. 272–278. [Google Scholar] [CrossRef]
  11. Syed, S.; Morseth, B.; Hopstock, L.; Horsch, A. A novel algorithm to detect non-wear time from raw accelerometer data using convolutional neural networks. Sci. Rep. 2020. [Google Scholar] [CrossRef]
  12. Murad, A.; Pyun, J.Y. Deep recurrent neural networks for human activity recognition. Sensors 2017, 17, 2556. [Google Scholar] [CrossRef] [Green Version]
  13. Zheng, X.; Wang, M.; Ordieres-Meré, J. Comparison of data preprocessing approaches for applying deep learning to human activity recognition in the context of industry 4.0. Sensors 2018, 18, 2146. [Google Scholar] [CrossRef] [Green Version]
  14. Niemann, F.; Reining, C.; Rueda, F.M.; Nair, N.R.; Steffens, J.A.; Fink, G.A.; Hompel, M. Ten Lara: Creating a dataset for human activity recognition in logistics using semantic attributes. Sensors 2020, 20, 4083. [Google Scholar] [CrossRef] [PubMed]
  15. Zheng, Y. Miniature inertial measurement unit. In Space Microsystems and Micro/Nano Satellites; Butterworth Heinemann—Elsevier: Oxford, UK, 2018; pp. 233–293. ISBN 9780128126721. [Google Scholar]
  16. Zhou, H.; Hu, H. Inertial sensors for motion detection of human upper limbs. Sens. Rev. 2007. [Google Scholar] [CrossRef] [Green Version]
  17. Langfelder, G.; Tocchio, A. Microelectromechanical Systems Integrating Motion and Displacement Sensors; Elsevier Ltd.: Amsterdam, The Netherlands, 2018; ISBN 9780081020562. [Google Scholar]
  18. Bernieri, G.; Faramondi, L.; Pascucci, F. Augmenting white cane reliability using smart glove for visually impaired people. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 8046–8049. [Google Scholar] [CrossRef]
  19. Chaccour, K.; Eid, J.; Darazi, R.; El Hassani, A.H.; Andres, E. Multisensor guided walker for visually impaired elderly people. In Proceedings of the 2015 International Conference on Advances in Biomedical Engineering (ICABME), Beirut, Lebanon, 16–18 September 2015; pp. 158–161. [Google Scholar] [CrossRef]
  20. Basso, S.; Frigo, G.; Giorgi, G. A smartphone-based indoor localization system for visually impaired people. In Proceedings of the 2015 IEEE International Symposium on Medical Measurements and Applications (MeMeA) Proceedings, Turin, Italy, 7–9 May 2015; pp. 543–548. [Google Scholar] [CrossRef]
  21. Li, B.; Pablo Muñoz, J.; Rong, X.; Xiao, J.; Tian, Y.; Arditi, A. ISANA: Wearable context-aware indoor assistive navigation with obstacle avoidance for the blind. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2016; Volume 9914. [Google Scholar]
  22. Yang, G.; Saniie, J. Indoor navigation for visually impaired using AR markers. In Proceedings of the IEEE International Conference on Electro Information Technology, Lincoln, NE, USA, 14–17 May 2017. [Google Scholar]
  23. Al-Khalifa, S.; Al-Razgan, M. Ebsar: Indoor guidance for the visually impaired. Comput. Electr. Eng. 2016, 54. [Google Scholar] [CrossRef]
  24. Ahmetovic, D.; Mascetti, S.; Oh, U.; Asakawa, C. Turn right: Analysis of rotation errors in turn-by-turn navigation for individuals with visual impairments. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, Galway, Ireland, 22–24 October 2018; pp. 333–339. [Google Scholar] [CrossRef]
  25. Ahmetovic, D.; Mascetti, S.; Bernareggi, C.; Guerreiro, J.; Oh, U.; Asakawa, C. Deep learning compensation of rotation errors during navigation assistance for people with visual impairments or blindness. ACM Trans. Access. Comput. 2019, 12. [Google Scholar] [CrossRef] [Green Version]
  26. Sato, D.; Oh, U.; Guerreiro, J.; Ahmetovic, D.; Naito, K.; Takagi, H.; Kitani, K.M.; Asakawa, C. Navcog3 in the wild: Large-scale Blind Indoor Navigation Assistant with Semantic Features. ACM Trans. Access. Comput. 2019, 12. [Google Scholar] [CrossRef]
  27. Ahmetovic, D.; Gleason, C.; Ruan, C.; Kitani, K.; Takagi, H.; Asakawa, C. NavCog: A navigational cognitive assistant for the blind. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services, Florence, Italy, 6–9 September 2016; pp. 90–99. [Google Scholar] [CrossRef]
  28. Kayukawa, S.; Ishihara, T.; Takagi, H.; Morishima, S.; Asakawa, C. Guiding Blind Pedestrians in Public Spaces by Understanding Walking Behavior of Nearby Pedestrians. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2020, 4, 1–22. [Google Scholar] [CrossRef]
  29. Kayukawa, S.; Higuchi, K.; Guerreiro, J.; Morishima, S.; Sato, Y.; Kitani, K.; Asakawa, C. BBEEP: A sonic collision avoidance system for blind travellers and nearby pedestrians. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019. [Google Scholar] [CrossRef]
  30. Mahida, P.T.; Shahrestani, S.; Cheung, H. Indoor positioning framework for visually impaired people using Internet of Things. In Proceedings of the 2019 13th International Conference on Sensing Technology (ICST), Sydney, NSW, Australia, 2–4 December 2019; pp. 1–6. [Google Scholar] [CrossRef]
  31. Amirgholy, M.; Golshani, N.; Schneider, C.; Gonzales, E.J.; Gao, H.O. An advanced traveler navigation system adapted to route choice preferences of the individual users. Int. J. Transp. Sci. Technol. 2017, 6, 240–254. [Google Scholar] [CrossRef]
  32. Asakawa, S.; Guerreiro, J.; Sato, D.; Takagi, H.; Ahmetovic, D.; Gonzalez, D.; Kitani, K.M.; Asakawa, C. An independent and interactive museum experience for blind people. In Proceedings of the 16th International Web for All Conference, San Francisco, CA, USA, 13–15 May 2019. [Google Scholar] [CrossRef]
  33. Guerreiro, J.; Ahmetovic, D.; Kitani, K.M.; Asakawa, C. Virtual navigation for blind people: Building sequential representations of the real-world. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility, Baltimore, MD, USA, 20 October–1 November 2017; pp. 280–289. [Google Scholar] [CrossRef]
  34. Cobo, A.; Guerrón, N.E.; Martín, C.; del Pozo, F.; Serrano, J.J. Differences between blind people’s cognitive maps after proximity and distant exploration of virtual environments. Comput. Hum. Behav. 2017, 77, 294–308. [Google Scholar] [CrossRef]
  35. Real, S.; Araujo, A. VES: A mixed-reality system to assist multisensory spatial perception and cognition for blind and visually impaired people. Appl. Sci. 2020, 10, 523. [Google Scholar] [CrossRef] [Green Version]
  36. Elmannai, W.M.; Elleithy, K.M. A Highly Accurate and Reliable Data Fusion Framework for Guiding the Visually Impaired. IEEE Access 2018, 6. [Google Scholar] [CrossRef]
  37. Cheraghi, S.A.; Namboodiri, V.; Walker, L. GuideBeacon: Beacon-based indoor wayfinding for the blind, visually impaired, and disoriented. In Proceedings of the 2017 IEEE International Conference on Pervasive Computing and Communications (PerCom), Kona, HI, USA, 13–17 March 2017; pp. 121–130. [Google Scholar] [CrossRef] [Green Version]
  38. Mekhalfi, M.L.; Melgani, F.; Zeggada, A.; De Natale, F.G.B.; Salem, M.A.M.; Khamis, A. Recovering the sight to blind people in indoor environments with smart technologies. Expert Syst. Appl. 2016. [Google Scholar] [CrossRef] [Green Version]
  39. Martinez, M.; Roitberg, A.; Koester, D.; Stiefelhagen, R.; Schauerte, B. Using Technology Developed for Autonomous Cars to Help Navigate Blind People. In Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops, ICCVW, Venice, Italy, 22–29 October 2017. [Google Scholar] [CrossRef]
  40. Guerreiro, J.; Sato, D.; Asakawa, S.; Dong, H.; Kitani, K.M.; Asakawa, C. Cabot: Designing and evaluating an autonomous navigation robot for blind people. In Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility, Pittsburgh, PA, USA, 28–30 October 2019; pp. 68–82. [Google Scholar] [CrossRef] [Green Version]
  41. Adebiyi, A.; Sorrentino, P.; Bohlool, S.; Zhang, C.; Arditti, M.; Goodrich, G.; Weiland, J.D. Assessment of feedback modalities for wearable visual AIDS in blind mobility. PLoS ONE 2017, 12, e0170531. [Google Scholar] [CrossRef] [PubMed]
  42. Li, B.; Munoz, J.P.; Rong, X.; Chen, Q.; Xiao, J.; Tian, Y.; Arditi, A.; Yousuf, M. Vision-Based Mobile Indoor Assistive Navigation Aid for Blind People. IEEE Trans. Mob. Comput. 2019, 18. [Google Scholar] [CrossRef] [PubMed]
  43. Katzschmann, R.K.; Araki, B.; Rus, D. Safe local navigation for visually impaired users with a time-of-flight and haptic feedback device. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26. [Google Scholar] [CrossRef]
  44. Yang, Z.; Ganz, A. A Sensing Framework for Indoor Spatial Awareness for Blind and Visually Impaired Users. IEEE Access 2019, 7. [Google Scholar] [CrossRef]
  45. Foster, M.; Brugarolas, R.; Walker, K.; Mealin, S.; Cleghern, Z.; Yuschak, S.; Clark, J.C.; Adin, D.; Russenberger, J.; Gruen, M.; et al. Preliminary Evaluation of a Wearable Sensor System for Heart Rate Assessment in Guide Dog Puppies. IEEE Sens. J. 2020, 20, 9449–9459. [Google Scholar] [CrossRef]
  46. Islam, M.M.; Sadi, M.S.; Zamli, K.Z.; Ahmed, M.M. Developing Walking Assistants for Visually Impaired People: A Review. IEEE Sens. J. 2019, 19, 2814–2828. [Google Scholar] [CrossRef]
  47. Tapu, R.; Mocanu, B.; Zaharia, T. Wearable assistive devices for visually impaired: A state of the art survey. Pattern Recognit. Lett. 2018. [Google Scholar] [CrossRef]
  48. Filippeschi, A.; Schmitz, N.; Miezal, M.; Bleser, G.; Ruffaldi, E.; Stricker, D. Survey of motion tracking methods based on inertial sensors: A focus on upper limb human motion. Sensors 2017, 17, 1257. [Google Scholar] [CrossRef] [Green Version]
  49. Qi, J.; Yang, P.; Waraich, A.; Deng, Z.; Zhao, Y.; Yang, Y. Examining sensor-based physical activity recognition and monitoring for healthcare using Internet of Things: A systematic review. J. Biomed. Inform. 2018, 87, 138–153. [Google Scholar] [CrossRef]
  50. Bet, P.; Castro, P.C.; Ponti, M.A. Fall detection and fall risk assessment in older person using wearable sensors: A systematic review. Int. J. Med. Inform. 2019, 130, 103946. [Google Scholar] [CrossRef]
  51. Heinrich, S.; Springstübe, P.; Knöppler, T.; Kerzel, M.; Wermter, S. Continuous convolutional object tracking in developmental robot scenarios. Neurocomputing 2019, 342, 137–144. [Google Scholar] [CrossRef]
  52. Roetenberg, D.; Luinge, H.; Slycke, P. Xsens MVN: Full 6DOF human motion tracking using miniature inertial sensors. Xsens Motion Technol. BV. Tech. Rep. 2013, 3, 1–9. [Google Scholar]
  53. Hamzaid, N.A.; Mohd Yusof, N.H.; Jasni, F. Sensory Systems in Micro-Processor Controlled Prosthetic Leg: A Review. IEEE Sens. J. 2020, 20, 4544–4554. [Google Scholar] [CrossRef]
  54. Shaeffer, D.K. MEMS inertial sensors: A tutorial overview. IEEE Commun. Mag. 2013, 51, 100–109. [Google Scholar] [CrossRef]
  55. Simdiankin, A.; Byshov, N.; Uspensky, I. A method of vehicle positioning using a non-satellite navigation system. In Proceedings of the Transportation Research Procedia; Elsevier: Amsterdam, The Netherlands, 2018; Volume 36, pp. 732–740. [Google Scholar]
  56. Munoz Diaz, E.; Bousdar Ahmed, D.; Kaiser, S. A Review of Indoor Localization Methods Based on Inertial Sensors; Elsevier Inc.: Amsterdam, The Netherlands, 2019; ISBN 9780128131893. [Google Scholar]
  57. Yuan, Q.; Asadi, E.; Lu, Q.; Yang, G.; Chen, I.M. Uncertainty-Based IMU Orientation Tracking Algorithm for Dynamic Motions. IEEE/ASME Trans. Mechatron. 2019, 24, 872–882. [Google Scholar] [CrossRef]
  58. Shelke, S.; Aksanli, B. Static and dynamic activity detection with ambient sensors in smart spaces. Sensors 2019, 19, 804. [Google Scholar] [CrossRef] [Green Version]
  59. Trivedi, U.; Mcdonnough, J.; Shamsi, M.; Ochoa, A.I.; Braynen, A.; Krukauskas, C.; Alqasemi, R.; Dubey, R. A wearable device for assisting persons with vision impairment. In Proceedings of the ASME 2017 International Mechanical Engineering Congress and Exposition IMECE2017, Tampa, FL, USA, 3–9 November 2017; pp. 1–8. [Google Scholar]
  60. Zhu, X.; Haegele, J.A. Reactivity to accelerometer measurement of children with visual impairments and their family members. Adapt. Phys. Act. Q. 2019, 36, 492–500. [Google Scholar] [CrossRef]
  61. Da Silva, R.B.P.; Marques, A.C.; Reichert, F.F. Objectively measured physical activity in brazilians with visual impairment: Description and associated factors. Disabil. Rehabil. 2018, 40, 2131–2137. [Google Scholar] [CrossRef]
  62. Brian, A.; Pennell, A.; Haibach-Beach, P.; Foley, J.; Taunton, S.; Lieberman, L.J. Correlates of physical activity among children with visual impairments. Disabil. Health J. 2019, 12, 328–333. [Google Scholar] [CrossRef]
  63. Keay, L.; Dillon, L.; Clemson, L.; Tiedemann, A.; Sherrington, C.; McCluskey, P.; Ramulu, P.; Jan, S.; Rogers, K.; Martin, J.; et al. PrevenTing Falls in a high-risk, vision-impaired population through specialist ORientation and Mobility services: Protocol for the PlaTFORM randomised trial. Inj. Prev. 2017, 1–8. [Google Scholar] [CrossRef]
  64. Hirano, T.; Kanebako, J.; Saraiji, M.H.D.Y.; Peiris, R.L.; Minamizawa, K. Synchronized Running: Running Support System for Guide Runners by Haptic Sharing in Blind Marathon. In Proceedings of the 2019 IEEE World Haptics Conference (WHC), Tokyo, Japan, 9–12 July 2019; pp. 25–30. [Google Scholar] [CrossRef]
  65. Qi, J.; Xu, J.W.; De Shao, W. Physical activity of children with visual impairments during different segments of the school day. Int. J. Environ. Res. Public Health 2020, 17, 6897. [Google Scholar] [CrossRef]
  66. Haegele, J.A.; Zhu, X.; Kirk, T.N. Physical Activity among Children with Visual Impairments, Siblings, and Parents: Exploring Familial Factors. Matern. Child Health J. 2020. [Google Scholar] [CrossRef]
  67. Nkechinyere, N.M.; Washington, M.; Uche, O.R.; Gerald, N.I. Monitoring of the Aged and Visually Impaired for Ambulation and Activities of Daily Living. In Proceedings of the 2017 IEEE 3rd International Conference on Electro-Technology for National Development (NIGERCON) Monitoring, Owerri, Nigeria, 7–10 November 2017; pp. 634–638. [Google Scholar]
  68. Borenstein, J. The navbelt-a computerized multi-sensor travel aid for active guidance of the blind. In Proceedings of the CSUN’s Fifth Annual Conference on Technology and Persons with Disabilities, Los Angeles, CA, USA, 21–24 March 1990; pp. 107–116. [Google Scholar]
  69. Razavi, J.; Shinta, T. A novel method of detecting stairs for the blind. In Proceedings of the 2017 IEEE Conference on Wireless Sensors (ICWiSe), Miri, Malaysia, 13–14 November 2017; pp. 18–22. [Google Scholar] [CrossRef]
  70. Dastider, A.; Basak, B.; Safayatullah, M.; Shahnaz, C.; Fattah, S.A. Cost efficient autonomous navigation system (e-cane) for visually impaired human beings. In Proceedings of the 2017 IEEE Region 10 Humanitarian Technology Conference (R10-HTC), Dhaka, Bangladesh, 21–23 December 2017; pp. 650–653. [Google Scholar] [CrossRef]
  71. Oommen, J.; Bews, D.; Hassani, M.S.; Ono, Y.; Green, J.R. A wearable electronic swim coach for blind athletes. In Proceedings of the 2018 IEEE Life Sciences Conference (LSC), Montreal, QC, Canada, 28–30 October 2018; pp. 219–222. [Google Scholar] [CrossRef]
  72. Kim, Y.; Moncada-Torres, A.; Furrer, J.; Riesch, M.; Gassert, R. Quantification of long cane usage characteristics with the constant contact technique. Appl. Ergon. 2016, 55, 216–225. [Google Scholar] [CrossRef] [Green Version]
  73. Croce, D.; Giarré, L.; Pascucci, F.; Tinnirello, I.; Galioto, G.E.; Garlisi, D.; Lo Valvo, A. An indoor and outdoor navigation system for visually impaired people. IEEE Access 2019, 7, 170406–170418. [Google Scholar] [CrossRef]
  74. Weinberg, H. Using the ADXL202 in Pedometer and Personal Navigation Applications; Analog Devices: Norwood, MA, USA, 2002; Available online: https://www.analog.com/media/en/technical-documentation/application-notes/513772624AN602.pdf (accessed on 12 July 2021).
  75. Silva, C.S.; Wimalaratne, P. Towards a grid based sensor fusion for visually impaired navigation using sonar and vision measurements. In Proceedings of the 2017 IEEE Region 10 Humanitarian Technology Conference (R10-HTC), Dhaka, Bangladesh, 21–23 December 2017; pp. 784–787. [Google Scholar] [CrossRef]
  76. Fan, K.; Lyu, C.; Liu, Y.; Zhou, W.; Jiang, X.; Li, P.; Chen, H. Hardware implementation of a virtual blind cane on FPGA. In Proceedings of the 2017 IEEE International Conference on Real-time Computing and Robotics (RCAR), Okinawa, Japan, 14–18 July 2017; pp. 344–348. [Google Scholar] [CrossRef]
  77. Chen, R.; Tian, Z.; Liu, H.; Zhao, F.; Zhang, S.; Liu, H. Construction of a voice driven life assistant system for visually impaired people. In Proceedings of the 2018 International Conference on Artificial Intelligence and Big Data (ICAIBD), Chengdu, China, 26–28 May 2018; pp. 87–92. [Google Scholar] [CrossRef]
  78. Wang, B.; Xiang, W.; Ma, K.; Mu, Y.Q.; Wu, Z. Design and implementation of intelligent walking stick based on OneNET Internet of things development platform. In Proceedings of the 2019 28th Wireless and Optical Communications Conference (WOCC), Beijing, China, 9–10 May 2019; pp. 1–4. [Google Scholar] [CrossRef]
  79. Meshram, V.V.; Patil, K.; Meshram, V.A.; Shu, F.C. An astute assistive device for mobility and object recognition for visually impaired people. IEEE Trans. Hum. Mach. Syst. 2019, 49, 449–460. [Google Scholar] [CrossRef]
  80. Bai, J.; Liu, Z.; Lin, Y.; Li, Y.; Lian, S.; Liu, D. Wearable travel aid for environment perception and navigation of visually impaired people. Electronics 2019, 8, 697. [Google Scholar] [CrossRef] [Green Version]
  81. Bastaki, M.M.; Sobuh, A.A.; Suhaiban, N.F.; Almajali, E.R. Design and implementation of a vision stick with outdoor/indoor guiding systems and smart detection and emergency features. In Proceedings of the 2020 Advances in Science and Engineering Technology International Conferences (ASET), Dubai, United Arab Emirates, 4 February–9 April 2020; pp. 15–18. [Google Scholar] [CrossRef]
  82. Li, Z.; Song, F.; Clark, B.C.; Grooms, D.R.; Liu, C. A Wearable Device for Indoor Imminent Danger Detection and Avoidance with Region-Based Ground Segmentation. IEEE Access 2020, 8, 184808–184821. [Google Scholar] [CrossRef]
  83. Zhong, Z.; Lee, J. Virtual Guide Dog: Next-generation pedestrian signal for the visually impaired. Adv. Mech. Eng. 2020, 12, 1–9. [Google Scholar] [CrossRef]
  84. Gill, S.; Seth, N.; Scheme, E. A multi-sensor cane can detect changes in gait caused by simulated gait abnormalities and walking terrains. Sensors 2020, 20, 631. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  85. Jin, L.; Zhang, H.; Shen, Y.; Ye, C. Human-Robot Interaction for Assisted Object Grasping by a Wearable Robotic Object Manipulation Aid for the Blind. In Proceedings of the 2020 IEEE International Conference on Human-Machine Systems (ICHMS), Rome, Italy, 7–9 September 2020; pp. 15–20. [Google Scholar] [CrossRef]
  86. Orth, A.; Kwiatkowski, P.; Pohl, N. A Radar-Based Hand-Held Guidance Aid for the Visually Impaired. In Proceedings of the 2020 German Microwave Conference (GeMiC), Cottbus, Germany, 9–11 March 2020; pp. 180–183. [Google Scholar]
  87. Bai, J.; Lian, S.; Liu, Z.; Wang, K.; Liu, D. Virtual-Blind-Road Following-Based Wearable Navigation Device for Blind People. IEEE Trans. Consum. Electron. 2018, 64, 136–143. [Google Scholar] [CrossRef] [Green Version]
  88. Bai, J.; Lian, S.; Liu, Z.; Wang, K.; Liu, D. Smart guiding glasses for visually impaired people in indoor environment. IEEE Trans. Consum. Electron. 2017, 63, 258–266. [Google Scholar] [CrossRef] [Green Version]
  89. Zhang, H.; Ye, C. Human-Robot Interaction for Assisted Wayfinding of a Robotic Navigation Aid for the Blind. In Proceedings of the 2019 12th International Conference on Human System Interaction (HSI), Richmond, VA, USA, 25–27 June 2019; pp. 137–142. [Google Scholar] [CrossRef]
  90. Zegarra Flores, J.V.; Rasseneur, L.; Galani, R.; Rakitic, F.; Farcy, R. Indoor navigation with smart phone IMU for the visually impaired in university buildings. J. Assist. Technol. 2016, 10, 133–139. [Google Scholar] [CrossRef]
  91. Moder, T.; Reitbauer, C.R.; Wisiol, K.M.D.; Wilfinger, R.; Wieser, M. An Indoor Positioning and Navigation Application for Visually Impaired People Using Public Transport. In Proceedings of the 2018 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Nantes, France, 24–27 September 2018; pp. 1–7. [Google Scholar] [CrossRef]
  92. Ferrand, S.; Alouges, F.; Aussal, M. An Augmented Reality Audio Device Helping Blind People Navigation; Springer International Publishing: Berlin/Heidelberg, Germany, 2018; Volume 10897, ISBN 9783319942735. [Google Scholar]
  93. Simoes, W.C.S.S.; De Lucena, V.F. Blind user wearable audio assistance for indoor navigation based on visual markers and ultrasonic obstacle detection. In Proceedings of the 2016 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 7–11 January 2016; pp. 60–63. [Google Scholar] [CrossRef]
  94. Dang, Q.K.; Chee, Y.; Pham, D.D.; Suh, Y.S. A virtual blind cane using a line laser-based vision system and an inertial measurement unit. Sensors 2016, 16, 95. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  95. Botezatu, N.; Caraiman, S.; Rzeszotarski, D.; Strumillo, P. Development of a versatile assistive system for the visually impaired based on sensor fusion. In Proceedings of the 2017 21st International Conference on System Theory, Control and Computing (ICSTCC), Sinaia, Romania, 19–21 October 2017; pp. 540–547. [Google Scholar] [CrossRef]
  96. Grewe, L.; Overell, W. Road following for blindBike: An assistive bike navigation system for low vision persons. Signal Process. Sens. Inf. Fusion Target Recognit. XXVI 2017, 10200, 1020011. [Google Scholar] [CrossRef]
  97. Biswas, M.; Dhoom, T.; Pathan, R.K.; Sen Chaiti, M. Shortest Path Based Trained Indoor Smart Jacket Navigation System for Visually Impaired Person. In Proceedings of the 2020 IEEE International Conference on Smart Internet of Things (SmartIoT), Beijing, China, 14–16 August 2020; pp. 228–235. [Google Scholar] [CrossRef]
  98. Ferrand, S.; Alouges, F.; Aussal, M. An electronic travel aid device to help blind people playing sport. IEEE Instrum. Meas. Mag. 2020, 23, 14–21. [Google Scholar] [CrossRef]
  99. Mahida, P.; Shahrestani, S.; Cheung, H. Deep learning-based positioning of visually impaired people in indoor environments. Sensors 2020, 20, 6238. [Google Scholar] [CrossRef]
  100. Zhang, H.; Ye, C. A visual positioning system for indoor blind navigation. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 9079–9085. [Google Scholar] [CrossRef]
  101. Ciobanu, A.; Morar, A.; Moldoveanu, F.; Petrescu, L.; Ferche, O.; Moldoveanu, A. Real-time indoor staircase detection on mobile devices. In Proceedings of the 2017 21st International Conference on Control Systems and Computer Science (CSCS), Bucharest, Romania, 29–31 May 2017; pp. 287–293. [Google Scholar] [CrossRef]
  102. Ong, J.C.; Arnedt, J.T.; Gehrman, P.R. Insomnia diagnosis, assessment, and evaluation. In Principles and Practice of Sleep Medicine; Elsevier: Amsterdam, The Netherlands, 2017; pp. 785–793. [Google Scholar]
  103. Manber, R.; Bootzin, R.R.; Loewy, D. Sleep Disorders. In Comprehensive Clinical Psychology; Elsevier: Amsterdam, The Netherlands, 1998; pp. 505–527. [Google Scholar]
  104. Ong, S.R.; Crowston, J.G.; Loprinzi, P.D.; Ramulu, P.Y. Physical activity, visual impairment, and eye disease. Eye 2018, 32, 1296–1303. [Google Scholar] [CrossRef] [Green Version]
  105. Khemthong, S.; Packer, T.L.; Dhaliwal, S.S. Using the Actigraph to measure physical activity of people with disabilities: An investigation into measurement issues. Int. J. Rehabil. Res. 2006, 29, 315–318. [Google Scholar] [CrossRef]
  106. Manos, A.; Klein, I.; Hazan, T. Gravity-based methods for heading computation in pedestrian dead reckoning. Sensors 2019, 19, 1170. [Google Scholar] [CrossRef] [Green Version]
  107. Ricci, L.; Taffoni, F.; Formica, D. On the orientation error of IMU: Investigating static and dynamic accuracy targeting human motion. PLoS ONE 2016, 11, e0161940. [Google Scholar] [CrossRef] [Green Version]
  108. Kok, M.; Hol, J.D.; Schön, T.B. Using Inertial Sensors for Position and Orientation Estimation. Found. Trends Signal Process. 2017, 11, 1–153. [Google Scholar] [CrossRef] [Green Version]
  109. Fernandes, H.; Costa, P.; Filipe, V.; Paredes, H.; Barroso, J. A review of assistive spatial orientation and navigation technologies for the visually impaired. Univers. Access Inf. Soc. 2019, 18, 155–168. [Google Scholar] [CrossRef]
  110. Yoon, P.K.; Zihajehzadeh, S.; Kang, B.S.; Park, E.J. Robust Biomechanical Model-Based 3-D Indoor Localization and Tracking Method Using UWB and IMU. IEEE Sens. J. 2017, 17, 1084–1096. [Google Scholar] [CrossRef]
  111. Huang, X.; Wang, F.; Zhang, J.; Hu, Z.; Jin, J. A posture recognition method based on indoor positioning technology. Sensors 2019, 19, 1464. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  112. Gong, X.; Chen, L. A conditional cubature Kalman filter and its application to transfer alignment of distributed position and orientation system. Aerosp. Sci. Technol. 2019, 95, 105405. [Google Scholar] [CrossRef]
  113. Ramazi, R.; Perndorfer, C.; Soriano, E.; Laurenceau, J.P.; Beheshti, R. Multi-modal predictive models of diabetes progression. In Proceedings of the 10th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, Niagara Falls, NY, USA, 7–10 September 2019; pp. 253–258. [Google Scholar] [CrossRef] [Green Version]
  114. Movahedia, A.; Mojtahedia, H.; Farazyanib, F. Differences in socialization between visually impaired student-athletes and non-athletes. Res. Dev. Disabil. 2011, 32, 58–62. [Google Scholar] [CrossRef]
  115. International Blind Sports Federation IBSA. Available online: https://www.ibsasport.org/ (accessed on 12 July 2021).
  116. Stelmack, J. Quality of life of low-vision patients and outcomes of low-vision rehabilitation. Optom. Vis. Sci. 2001, 78, 335–342. [Google Scholar] [CrossRef]
  117. Lopera, G.; Aguirre, Á.; Parada, P.; Baquet, J. Manual Tecnico De Servicios De Rehabilitacion Integral Para Personas Ciegas O Con Baja Vision En America Latina; Unión Latinoamericana De Ciegos-Ulac: Montevideo, Uruguay, 2010. [Google Scholar]
  118. Organización Nacional de Ciegos Españoles. Discapacidad Visual y Autonomía Personal. Enfoque Práctico de la Rehabilitación; Organización Nacional de Ciegos Españoles: Madrid, Spain, 2011; ISBN 978-84-484-0277-8. [Google Scholar]
  119. Health Vet VistA, Blind rehabilitation user manual, Version 5.0.29, Department of Veterans Affairs, USA. August 2011. Available online: https://www.va.gov/vdl/documents/Clinical/Blind_Rehabilitation/br_user_manual.pdf (accessed on 12 July 2021).
  120. Muzny, M.; Henriksen, A.; Giordanengo, A.; Muzik, J.; Grøttland, A.; Blixgård, H.; Hartvigsen, G.; Årsand, E. Wearable sensors with possibilities for data exchange: Analyzing status and needs of different actors in mobile health monitoring systems. Int. J. Med. Inform. 2020, 133, 104017. [Google Scholar] [CrossRef]
  121. Tamura, T. Wearable Inertial Sensors and Their Applications; Elsevier Inc.: Amsterdam, The Netherlands, 2014; ISBN 9780124186668. [Google Scholar]
  122. Lu, Y.S.; Wang, H.W.; Liu, S.H. An integrated accelerometer for dynamic motion systems. Meas. J. Int. Meas. Confed. 2018. [Google Scholar] [CrossRef]
  123. Chen, Y.; Abel, K.T.; Janecek, J.T.; Chen, Y.; Zheng, K.; Cramer, S.C. Home-based technologies for stroke rehabilitation: A systematic review. Int. J. Med. Inform. 2019, 123, 11–22. [Google Scholar] [CrossRef]
  124. Porciuncula, F.; Roto, A.V.; Kumar, D.; Davis, I.; Roy, S.; Walsh, C.J.; Awad, L.N. Wearable movement sensors for rehabilitation: A focused review of technological and clinical advances. PM R 2018, 10, S220–S232. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  125. Vienne-Jumeau, A.; Quijoux, F.; Vidal, P.P.; Ricard, D. Wearable inertial sensors provide reliable biomarkers of disease severity in multiple sclerosis: A systematic review and meta-analysis. Ann. Phys. Rehabil. Med. 2019. [Google Scholar] [CrossRef]
  126. European Blind Union, Rehabilitation for blind and partially sighted people in Europe. 3 EBU Position Pap. Rehabil. Jt. 2015. Available online: http://www.euroblind.org/sites/default/files/media/position-papers/EBU-joint-position-paper-on-Rehabilitation.pdf (accessed on 12 July 2021).
  127. da Silva, M.R.; de Souza Nobre, M.I.R.; de Carvalho, K.M.; de Cássisa Letto Montilha, R. Visual impairment, rehabilitation and International Classification of Functioning, Disability and Health. Rev. Bras. Oftalmol. 2014, 73. [Google Scholar] [CrossRef]
  128. Xu, C.; Chai, D.; He, J.; Zhang, X.; Duan, S. InnoHAR: A deep neural network for complex human activity recognition. IEEE Access 2019, 7, 9893–9902. [Google Scholar] [CrossRef]
  129. Yang, C.; Chen, Z.; Yang, C. Classification Using Convolutional Neural Network by Encoding Multivariate Time Series as Two-Dimensional Colored Images. Sensors 2020, 20, 168. [Google Scholar] [CrossRef] [Green Version]
  130. Flores, G.H.; Manduchi, R. WeAllWalk: An Annotated Data Set of Inertial Sensor Time Series from Blind Walkers. ACM Trans. Access. Comput. 2018, 11, 1–28. [Google Scholar] [CrossRef]
  131. Ordóñez, F.J.; Roggen, D. Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition. Sensors 2016, 16, 115. [Google Scholar] [CrossRef] [Green Version]
  132. Li, Y.; Chen, R.; Niu, X.; Zhuang, Y.; Gao, Z.; Hu, X.; El-Sheimy, N. Inertial Sensing Meets Artificial Intelligence: Opportunity or Challenge? arXiv 2020, arXiv:2007.06727, 1–14. [Google Scholar]
  133. Gazzellini, S.; Lispi, M.L.; Castelli, E.; Trombetti, A.; Carniel, S.; Vasco, G.; Napolitano, A.; Petrarca, M. The impact of vision on the dynamic characteristics of the gait: Strategies in children with blindness. Exp. Brain Res. 2016, 234, 2619–2627. [Google Scholar] [CrossRef] [PubMed]
  134. Morriën, F.; Taylor, M.J.D.; Hettinga, F.J. Biomechanics in paralympics: Implications for performance. Int. J. Sports Physiol. Perform. 2017, 12, 578–589. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  135. Mihailovic, A.; Swenor, B.K.; Friedman, D.S.; West, S.K.; Gitlin, L.N.; Ramulu, P.Y. Gait implications of visual field damage from glaucoma. Transl. Vis. Sci. Technol. 2017, 6, 23. [Google Scholar] [CrossRef]
  136. da Silva, E.S.; Fischer, G.; da Rosa, R.G.; Schons, P.; Teixeira, L.B.T.; Hoogkamer, W.; Peyré-Tartaruga, L.A. Gait and functionality of individuals with visual impairment who participate in sports. Gait Posture 2018, 62, 355–358. [Google Scholar] [CrossRef] [PubMed]
  137. Yang, S.; Li, Q. Inertial sensor-based methods in walking speed estimation: A systematic review. Sensors 2012, 12, 6102–6116. [Google Scholar] [CrossRef] [Green Version]
  138. Zrenner, M.; Gradl, S.; Jensen, U.; Ullrich, M.; Eskofier, B.M. Comparison of different algorithms for calculating velocity and stride length in running using inertial measurement units. Sensors 2018, 18, 4194. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  139. Gill, S.; Seth, N.; Scheme, E. A multi-sensor matched filter approach to robust segmentation of assisted gait. Sensors 2018, 18, 2970. [Google Scholar] [CrossRef] [Green Version]
  140. Gill, S.; Hearn, J.; Powell, G.; Scheme, E. Design of a multi-sensor IoT-enabled assistive device for discrete and deployable gait monitoring. In Proceedings of the 2017 IEEE Healthcare Innovations and Point of Care Technologies (HI-POCT), Bethesda, MD, USA, 6–8 November 2017; pp. 216–220. [Google Scholar] [CrossRef]
  141. Mannini, A.; Sabatini, A.M. Walking speed estimation using foot-mounted inertial sensors: Comparing machine learning and strap-down integration methods. Med. Eng. Phys. 2014, 36, 1312–1321. [Google Scholar] [CrossRef]
  142. Emerson, R.W.; Kim, D.S.; Naghshineh, K.; Myers, K.R. Biomechanics of Long Cane Use. J. Vis. Impair. Blind. 2019, 113, 235–247. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Deep analysis article selection process after database searching.
Figure 2. Representation of the algorithm for synchronized running proposed by [64].
Figure 3. Adaptation diagram from the Robotic Navigation Aid system proposed by [89].
Figure 4. Framework of an indoor navigation system for VIP proposed by [30].
Figure 5. Adaptation diagram from the assistance system solution prototype and user’s swing motion by [94].
Figure 6. MLP network structure for the prediction of x and y position for indoor navigation by [99].
Figure 7. Distribution of the articles selected in the systematic review according to the fields of application.
Table 1. Summary of reviewed articles in the accelerometer section.

Role | IMU | Sensor Fusion | Ref.
Identify human movement | ADXL345 | RGB camera, ultrasonic sensor | [59]
Measures physical activity | ActiGraph GT3x | N/A | [60]
Measures physical activity | ActiGraph wGT3X-BT | N/A | [61]
Measures physical activity | ActiGraph wGT3X-BT | N/A | [62]
Measures physical activity | ActiGraph wGT3x+ | N/A | [63]
Sense the foot movements | KXR94-2050 | N/A | [64]
Measures physical activity | ActiGraph GT3x | N/A | [65]
Measures physical activity | ActiGraph GT3x | N/A | [66]
Monitoring of human activities | Not specified | N/A | [67]
Table 2. Summary of reviewed articles in the gyroscope section.

Role | IMU | Sensor Fusion | RE | Ref.
Measure the tilt angle | MPU-6050 | Ultrasonic sensor, GPS | 0.2–0.5 FA/m | [69]
Detect rotation and movements | MPU-6050 | Ultrasonic sensor, GPS | 1–7 cm | [70]
Track the direction of gravity, orientation | Smartphone IMU | Camera | - | [71]
Sweeping velocity | Re Sense | Camera | - | [72]

RE = Reported Errors.
Table 4. Summary of reviewed articles in the accelerometer, gyroscope, and magnetometer fusion section.

Role | IMU | Sensor Fusion | RE | Ref.
Device and pose estimation | VN-100 IMU/AHRS | RGB-D camera | 0.2 m | [89]
Step counting, body orientation | Smartphone IMU | Barometer | - | [90]
Attitude estimation and orientation | Smartphone IMU | Beacons, GPS | 5–6 m | [91]
Pedestrian dead reckoning | Smartphone IMU | Beacons | 1.5–2 m | [30]
Body orientation | BNO-055 | Optical flow sensors, UWB | 0.5 m | [92]
Path calculation | Not specified | RGB camera, ultrasonic sensor | - | [93]
Acceleration and angular rate | Xsens IMU | RGB camera, line laser | 6.17 cm | [94]
Tracking the head and body movement | LPMS-CURS2 | RGB camera, structure sensor PS1080 | 25–104 cm | [95]
Detect the right edge of the road | Smartphone IMU | RGB-D camera, GPS | - | [96]
Step counting and heading estimation | MPU-9250 | Ultrasonic sensor, Pi Camera | - | [97]
Head tracking | MPU-9250 | GPS/GLONASS and UWB | 10–20 cm | [98]
Positioning and step size estimation | Smartphone IMU | N/A | - | [99]
Pose estimation | VN-100 IMU/AHRS | RGB-D camera | 1.5 m | [100]
Absolute orientation | Smartphone IMU | RGB-D camera | - | [101]

RE = Reported Errors.
Table 5. Summary of the visually impaired participation and usability discussion.

Visually Impaired Subjects | Usability Test | Usability Questionnaire | Commentary | Ref.
7 | Yes | No | Subjects were participants in the blind marathon sponsored by the Japan Blind Marathon Association. | [64]
1 | No | No | - | [67]
10 | N/A | No | Subjects recruited through the foundation Access for All (a Swiss nonprofit organization). | [72]
Not described | Yes | No | - | [73]
60 | Yes | No | Thirty subjects were totally blind and the others had low vision. In addition, the authors involved physiotherapists, rehabilitation workers, and social workers in the development of the usability test. | [79]
20 | Yes | Yes | Ten subjects were totally blind and the others were partially sighted. The authors followed the protocol approved by the Beijing Fangshan District Disabled Persons’ Federation for recruitment and experiments. | [80]
3 | Yes | No | The subjects were student volunteers from the university. A mobility and orientation instructor evaluated their traveling techniques with a long cane to use the application. | [90]
11 | Yes | No | The system was implemented and tested at the railway station in Graz, Austria. | [91]
2 | Yes | No | - | [92]
10 | Yes | Yes | The navigation profile of the users was considered (height, walking speed, and step distance). | [93]
Not described | Yes | Yes | - | [95]
2 | Yes | No | - | [98]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

