Article

Blindness and the Reliability of Downwards Sensors to Avoid Obstacles: A Study with the EyeCane

1 École d'optométrie, Université de Montréal, Montréal, QC H3T 1P1, Canada
2 Visual and Cognitive Neuroscience Laboratory (VCN Lab.), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ari'el 40700, Israel
3 Navigation and Accessibility Research Center of Ariel University (NARCA), Ari'el 40700, Israel
4 Department of Neuroscience, University of Copenhagen, 2200 Copenhagen, Denmark
* Author to whom correspondence should be addressed.
Sensors 2021, 21(8), 2700; https://doi.org/10.3390/s21082700
Submission received: 26 March 2021 / Revised: 7 April 2021 / Accepted: 9 April 2021 / Published: 12 April 2021
(This article belongs to the Special Issue Spatial Perception and Navigation in the Absence of Vision)

Abstract:
Vision loss has dramatic repercussions on the quality of life of affected people, particularly with respect to their orientation and mobility. Many devices are available to help blind people navigate their environment. The EyeCane is a recently developed electronic travel aid (ETA) that is inexpensive and easy to use, allowing for the detection of obstacles lying ahead within a 2 m range. The goal of this study was to investigate the potential of the EyeCane as a primary aid for spatial navigation. Three groups of participants were recruited: early blind, late blind, and sighted. They were first trained with the EyeCane and then tested in a life-size obstacle course with four obstacle types: cube, door, post, and step. Subjects were asked to cross the corridor while detecting, identifying, and avoiding the obstacles. Each participant performed 12 runs with 12 different obstacle configurations. All participants were able to learn to use the EyeCane quickly and successfully completed all trials. Among the various obstacles, the step proved the hardest to detect and resulted in the most collisions. Although the EyeCane was effective for detecting obstacles lying ahead, its downward sensor did not reliably detect those on the ground, rendering downward obstacles more hazardous for navigation.

1. Introduction

While navigating in an environment, humans rely on visual information to identify obstacles, evaluate distances, and create a mental map of their surroundings. Thus, the loss of visual input, as occurs in blindness, has a greater effect on navigational abilities and independence than the loss of any other sense [1]. To overcome this deficit in safe locomotion, blind individuals must learn to use many aids and tools to obtain the environmental information necessary for diverse everyday tasks such as wayfinding and circumventing obstacles [2]. As the most widespread of these mobility aids, the white cane functions as an extension of the hand and arm to enable obstacle detection and to furnish information about ground textures and level changes (i.e., drop-offs, steps, and curbs) encountered during locomotion. However, the white cane does not generally provide information about obstacles positioned higher than the user's pelvis, thus leaving the upper body at risk of collisions and injuries [3,4] (Figure 1A). This significant limitation considerably impedes safety, which can be discouraging to blind individuals and can ultimately lead to social isolation [5]. There is therefore a pressing need to develop and explore new technologies that could potentially improve their safety and autonomy in daily travels. One promising area of research is sensory substitution, which aims to convey visual information to blind individuals through touch and sound with sensory substitution devices (SSD) [6]. Many different SSDs have been designed to substitute for vision, while others provide guidance that is more task-based, meaning that they offer a specific aspect of visual information (e.g., color, shape, distance, or letters) strictly pertinent to performing a certain task (e.g., wayfinding, obstacle circumvention, or reading).
For instance, the tongue display unit (TDU) is a tactile-to-vision SSD that, in a laboratory environment, allows blind individuals to discriminate shapes [7,8], movement [9], pathways [10], and letters [11,12], while enabling users to detect and avoid obstacles in a life-size obstacle course [13]. However, the visually impaired community has not adopted the TDU, since the abundant information given by the device is complex and requires constant attention from the user to maintain an adequate level of performance, which quickly leads to exhaustion (see the cognitive load problem in [6,14,15]). Therefore, devices like minimalist SSDs and electronic travel aids (ETA) that convey a simpler signal could maintain safety while limiting fatigue [2,6].
ETA devices are equipped with sensors (ultrasonic, infrared, or electromagnetic), radars, or cameras to capture information about the environment and convey encoded signals to blind users by tactile (vibration) or auditory stimulation [2,16,17,18]. While ETAs fall within the general category of SSDs, they mostly provide simple feedback that is strictly relevant for navigation, such as the presence of obstacles in the path of travel, and their distance and location. Since the 20th century, researchers have developed multiple ETAs and other SSDs as mobility aids to assist blind individuals [2]. These devices generally qualify as either primary tools (that can be used independently from the white cane) [19] or secondary tools (that must be used in conjunction with the white cane) [20,21]. However, the broader blind population has not adopted such aids due to many factors such as training requirements, high cost, and low portability [6,22].
To circumvent these shortcomings, the EyeCane was developed as a torch-like ETA equipped with narrow field of view (FOV) infrared sensors that capture distance information about obstacles and convey it to the user's hand through vibrations. This allows users to "feel" the immediate environment without touching it directly. Several studies evaluating the EyeCane's potential in rehabilitation concluded that users were easily able to estimate distances, navigate, and detect and avoid obstacles in real [23] and virtual [24,25] environments after only brief training.
In a recent study [26], the EyeCane was adapted with two narrow FOV infrared sensors with a 1.5 m sensing range (Figure 1B) to test its reliability as a primary and secondary aid to tackle the safety challenge posed by waist-up obstacles. The authors of that study hypothesized that the EyeCane might suffice as a reliable primary mobility aid if it were equipped with a third sensor pointed toward the ground (Figure 1D), thus replacing the physical cane for detecting obstacles at foot level. The goal of the present study was to investigate the reliability of such a downward sensor. For this purpose, we used a single narrow FOV infrared sensor with a 2 m sensing range pointed toward the ground (Figure 1C), and we assessed its reliability in a life-size obstacle course presenting risks of collision, tripping, and falling. Furthermore, we compared the EyeCane navigational capacities of late (LB)- and early-blind individuals (EB) to blindfolded sighted controls (SC). We hypothesized that smaller obstacles at ground level would prove more difficult to detect for the three groups of subjects, and that EB would be more adept than the two other groups at learning to navigate the obstacle course.
Figure 1. Schematics of (A) an individual with the white cane facing daily-life obstacles; the cane protects the user from low obstacles, but leaves him at risk of head injury from hanging obstacles like tree branches and signage; (B) an individual handling the EyeCane design used in [26]; (C) our experimental design, with the single-sensor EyeCane directed downward; and (D) an individual handling the three-sensor EyeCane design suggested in [26].

2. Materials and Methods

2.1. Participants and Ethics

Participants were recruited in Montreal (Canada) through the database of the Harland Sanders Research Chair in Vision Neuroscience, and in Denmark through the database of the BRAINlab at the University of Copenhagen. The experiment took place at the School of Optometry at the Université de Montréal and at the University of Copenhagen. Ten EB (mean age: 44 ± 12 years; 2 females and 8 males) and nine LB (mean age: 48 ± 14 years; 4 females and 5 males) were recruited for this study (Table 1).
We also included 10 sighted controls (SC) with normal vision (mean age: 44 ± 13 years; 4 females and 4 males), who were age- and sex-matched to the blind participants. SC were blindfolded throughout the experiment. All blind participants were expert users of the white cane, and two had a guide dog. To evaluate the influence of experience-dependent plasticity in the LB group, we calculated the blindness duration index (BDI) according to the formula "(age − age at onset of blindness)/age" (as described in [27]). The BDI score can range from 0 to 1, expressing the proportion of their life that a person has been blind, with low scores indicating recent onset of blindness and high scores a longer duration of blindness. The average BDI was 0.42 ± 0.20 (range: 0.03 to 0.64), while the mean onset of blindness was 28.2 ± 13.7 years. Participants had no associated neuropathy that could affect their navigational performance or mental representation. Before starting the experiment, participants completed a questionnaire regarding their blindness and spatialization abilities and signed a consent form. The protocol was approved by the Clinical Research Ethics Committee of the University of Montreal (CERC-19-097-P) and by the local ethics committee of the University of Copenhagen (Region Hovedstaden; Protocol nr: H-6-2013-004) and was conducted in accordance with the Declaration of Helsinki.
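The BDI described above is a simple ratio; a minimal Python sketch of the computation (the function name is ours, not from [27]):

```python
def blindness_duration_index(age: float, age_at_onset: float) -> float:
    """Blindness duration index (BDI) from [27]: (age - age at onset) / age.

    Ranges from 0 to 1: low scores indicate recent onset of blindness,
    high scores a longer duration of blindness.
    """
    if age <= 0 or not 0 <= age_at_onset <= age:
        raise ValueError("onset age must lie between 0 and the current age")
    return (age - age_at_onset) / age

# Example: a 48-year-old participant blind since age 28 has a BDI of
# (48 - 28) / 48, i.e., roughly 0.42 (close to the group average reported above).
```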

2.2. Apparatus

The EyeCane is a small (4 × 5 × 13 cm) and lightweight (∼100 g) hand-held mobility device with a form similar to a flashlight (Figure 1 and Figure 2) and a long battery life (up to 24 h of use, simple to charge) [23]. Equipped with an infrared emitter and sensor (Sharp GP2Y0A02YK0F), it emits a narrow light beam (<5°) in the direction in which it is aimed and detects the reflected signal. The EyeCane then determines the distance to the hit object and translates the information into vibration, encoded with varying intensity levels. Therefore, when an obstruction is detected at a range of 20 to 150 cm, the device vibrates in the user's palm, with an intensity inversely proportional to the distance of the obstacle: the closer the obstacle, the stronger the vibration. The full specification details have been described previously [23], and the sensors were previously shown to work on different materials and in various lighting conditions [23,28,29].
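The encoding is described only as inversely proportional within the 20–150 cm detection window; the sketch below illustrates one plausible linear version of that mapping. The linear ramp and the saturation behavior at the near limit are our assumptions, not the device's actual firmware:

```python
MIN_RANGE_CM = 20.0   # closest distance at which an obstruction is reported
MAX_RANGE_CM = 150.0  # farthest distance at which an obstruction is reported

def vibration_intensity(distance_cm: float) -> float:
    """Map an obstacle distance to a normalized vibration intensity in [0, 1].

    Closer obstacles produce stronger vibration; beyond the sensing range
    the device stays silent. The linear ramp is an assumption: the paper
    only states that intensity is inversely proportional to distance.
    """
    if distance_cm > MAX_RANGE_CM:
        return 0.0  # nothing detected: no vibration
    if distance_cm <= MIN_RANGE_CM:
        return 1.0  # at or inside the near limit: full intensity (assumed)
    return (MAX_RANGE_CM - distance_cm) / (MAX_RANGE_CM - MIN_RANGE_CM)
```

Under this sketch, an obstacle at 85 cm (mid-range) would yield half intensity, which matches the "closer is stronger" behavior users are trained to interpret.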

2.3. Experimental Procedure

The experiment consisted of three parts: the training phase, the test phase, and the post-test phase, in which the participants' experience and level of satisfaction were assessed.

2.3.1. Obstacle Course

The experiments were conducted in a life-size obstacle course simulating daily-life situations encountered during outdoor and indoor travel. This obstacle course consisted of a hallway measuring 21 m long and 2.4 m wide, equipped with differently sized obstacles to evaluate detection, avoidance, and identification performances with the use of the EyeCane as a standalone aid (Figure 2). To minimize the risk of injury, all obstacles were constructed from cardboard and foam.
Four types of obstacles were designed for the experiment. These included "cube" (height: 0.61 m; width: 0.45 m; depth: 0.45 m), "door frame" (height: 1.88 m; depth: 0.45 m; door width: 0.71 m; total width: 2.4 m), "step" (height: 0.15 m; depth: 0.15 m; width: 2.4 m), and "post" (height: 1.45 m; diameter: 0.10 m) (Figure 2). These four types of obstacles were chosen to represent a range of scenarios with differences in floor surfaces and other obstacles in the path of travel. The "cube" represented large knee-high obstacles; the "door frame" represented narrow passages such as door frames and any space between two obstacles or between two other pedestrians; the "step" represented possible changes in floor level (i.e., steps, curbs, and sidewalks); and the "post" represented thin obstacles often found on the sidewalk, such as signage.

2.3.2. Training Phase

All participants underwent the same training procedures. They were first verbally introduced to the EyeCane's principle of operation and then familiarized with the device in an otherwise empty room containing a single obstacle, i.e., a cardboard tower (0.4 × 0.4 × 2 m). This familiarization phase served to introduce the concepts of obstacle recognition, size, and distance estimation, both guided and unguided by the experimenters. Participants were taught that object recognition is possible by scanning the object with the device (side-to-side and up–down) and detecting its edges to gain information about its shape. The participants then underwent a simulated detection and avoidance task in the experimental walkway (21 × 2.4 m), with three cardboard towers placed 3 m apart along the longitudinal axis and randomly positioned along the horizontal axis.
The participants were then introduced to the scanning technique used for the experiment. They were taught how to scan the environment with the device pointed toward the ground in front of them such that they felt a constant, low-intensity vibration emitted from the device. This technique ensures that users always detect the ground and are thus alerted to any changes in the floor surface or the space in front of them. In fact, the arm movements required for this technique closely resemble the "two-point touch technique" that blind individuals learn in the context of white cane orientation and mobility (O&M) lessons [30,31]. The goal of this specific technique was to simulate the use of a third sensor (pointed toward the ground) while isolating it from the two other sensors' signals. Therefore, our experimental design allowed us to assess the reliability of this specific sensor without any added complexity and, thus, to assess the suggested three-sensor EyeCane's suitability as a primary mobility aid.
Finally, the participants were familiarized with the four types of obstacles used in the experiment. During this phase, the participants used the scanning technique and were guided by the experimenter toward each obstacle. They were taught to detect and identify each of the four types of obstacles. The complete training phase (device familiarization, scanning technique, and obstacle familiarization) lasted approximately 15 min for most participants.

2.3.3. Test Phase

All participants navigated the same 12 configurations of the obstacle test in random order. For each configuration, six obstacles of random type were placed as in Figure 2, yielding a total of 18 encounters with each obstacle type.
The assigned task was to cross the corridor as quickly as possible while detecting, identifying, and avoiding obstacles. The participants were monitored by two experimenters for safety and data collection. Object detection, identification, and avoidance are distinct processes that follow one another sequentially and serve the same goal: ensuring the individual's safety during navigation. Detection is the first step toward attaining the navigation goal, as it allows the individual to become aware of an obstacle in the path of travel, to adjust his/her pace for safety, and to anticipate contact or avoidance, thus decreasing the risk of a dangerous collision. The identification process occurs when the individual gains information about the nature and dimensions of an object. Avoidance is the culmination of both processes, occurring when the individual has to plan a deviation in his/her path of travel according to the obstacle's position and dimensions, successfully execute this new path, and finally regain the initial track. Therefore, to serve effectively as a primary mobility aid, the EyeCane must prove reliable for obstacle detection, identification, and avoidance for each type of obstacle, especially those at ground level (i.e., the task's "step" obstacle, potholes, and curbs). Another important factor for efficiency and reliability is the amount of time needed to detect, identify, and avoid obstacles and to complete the course; thus, we also measured crossing time.

2.4. Statistical Analysis

The collected data consisted of the average crossing time and indices of three types of performance: obstacle detection, avoidance, and identification. Since the avoidance performance also counted obstacles that participants did not encounter (being too small or too peripheral to their track), we separately evaluated participants' performance in avoiding detected obstacles. Collision data were also calculated as the complement of the avoidance performance.
Data were analyzed using JASP, an open-source graphical program for statistical analysis developed by the University of Amsterdam [32]. Two-way ANCOVA tests corrected for age and sex, or the nonparametric equivalent Kruskal–Wallis test, were used to determine the effect of group (EB, LB, and SC) on time and performance data. We then verified the effects of obstacle type ("cube", "door frame", "step", and "post") and group (EB, LB, and SC) on detection, as well as their interaction on collision rates, using two-way ANCOVA corrected for age and sex. Post hoc t-tests (or Mann–Whitney tests) with Bonferroni correction were performed to identify any significant differences. Results are presented as mean ± SD.
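As an illustration of the nonparametric fallback named above, the Kruskal–Wallis H statistic can be computed from pooled average ranks. The following is a minimal pure-Python sketch for intuition only (it omits the tie correction that JASP and standard statistics packages apply when ties are present):

```python
def kruskal_h(groups):
    """Kruskal-Wallis H statistic for a list of samples (one list per group).

    Pools all observations, assigns average ranks (ties share the mean
    rank of their run), and computes H from the per-group rank sums.
    No tie correction is applied, so H is slightly conservative with ties.
    """
    # Pool observations, remembering which group each came from.
    pooled = sorted((value, gi) for gi, g in enumerate(groups) for value in g)
    n_total = len(pooled)

    # Assign average ranks: a run of equal values shares the mean rank.
    ranks = [0.0] * n_total
    i = 0
    while i < n_total:
        j = i
        while j < n_total and pooled[j][0] == pooled[i][0]:
            j += 1
        avg_rank = ((i + 1) + j) / 2  # mean of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg_rank
        i = j

    # Sum the ranks within each group.
    rank_sums = [0.0] * len(groups)
    for (_, gi), r in zip(pooled, ranks):
        rank_sums[gi] += r

    counts = [len(g) for g in groups]
    return 12.0 / (n_total * (n_total + 1)) * sum(
        rs * rs / n for rs, n in zip(rank_sums, counts)
    ) - 3 * (n_total + 1)
```

For three perfectly separated groups of three observations each, H reaches its maximum for that design; identical group distributions drive H toward zero.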

3. Results

Crossing time: In terms of the average time to cross the obstacle course over the 12 trials, EB were faster at 154.5 ± 39.6 s than LB at 240.7 ± 117.2 s and SC at 304.9 ± 143.8 s (Figure 3A). The ANCOVA corrected for age and sex indicated a statistically significant difference in crossing times between groups (F(2,22) = 7.290, p = 0.004, η2 = 0.384) but failed to show an effect of age or sex (p > 0.05). Post hoc comparison with Bonferroni correction revealed that EB were significantly faster than SC (p = 0.003), but no differences were found between EB and LB or between LB and SC (p > 0.05).
Obstacle detection, identification, and avoidance: The EB group detected 54.9 ± 8.3% of the obstacles, whereas LB detected 45.2 ± 11.3% and SC 51.1 ± 14.1% of the obstacles. A two-way ANCOVA with correction for age and sex did not indicate any significant differences between groups in detection rate (F(2,24) = 1.972, p = 0.428, η2 = 0.070). For obstacles that were successfully detected, the analyses did not find significant group differences in identification rate (F(2,22) = 0.452, p = 0.642, η2 = 0.039). EB identified 88.4 ± 11.3% of the detected obstacles, LB 81.4 ± 10.6%, and SC 81.6 ± 16.1%. Moreover, EB avoided 86.3 ± 11.8% of detected obstacles, LB 87.7 ± 7.2%, and SC 68.3 ± 15.4%. Since the Shapiro–Wilk test signaled a departure from normality in avoidance scores, we applied the nonparametric Kruskal–Wallis test for this contrast. The analysis indicated a statistically significant difference between the groups (H(2) = 6.844, p = 0.033, η2 = 0.336). Mann–Whitney post hoc tests with Bonferroni correction showed a significant advantage in obstacle avoidance for the EB (p < 0.01) and LB groups (p < 0.01) compared to SC, but no difference between EB and LB (p > 0.05) (Figure 3B). No effect of age or sex was found in any of the analyses. For the LB group, there was no significant correlation with BDI in any of the measures (p > 0.05).
Detection according to the nature of the obstacles: The mean frequencies of cube detection were 55.7 ± 15.3% (EB), 43.0 ± 18.0% (LB), and 50.5 ± 21.8% (SC). The corresponding mean frequencies for post detection were 32.1 ± 13.2% for EB, 25.9 ± 14.5% for LB, and 33.3 ± 7.5% for SC. EB detected 35.1 ± 24.4% of steps, while LB and SC detected 16.0 ± 18.3% and 28.9 ± 19.8%, respectively. Doors were the most easily detected obstacles, with mean performances of 97.2 ± 13.2% for EB, 95.0 ± 7.5% for LB, and 91.6 ± 10.6% for SC. The ANCOVA corrected for age and sex indicated a significant effect of obstacle type (F(3, 94) = 94.141, p < 0.01, η2 = 0.730), but no effect of group, age, or sex, nor of the interaction between group and obstacle type (p > 0.05). Post hoc tests with Bonferroni correction revealed that doors and cubes were significantly better detected than steps and posts (p < 0.001), but cubes were less well detected than doors (p < 0.001). No significant differences were found between detection of steps and posts (p > 0.05) (Figure 4A).
Collisions according to the nature of the obstacles: The collision rates were about 20% per obstacle type in each group, except for steps, which had collision rates of up to 80%. Indeed, the EB group collided with 68.3 ± 22.8% of steps, but only 16.9 ± 21.5% of cubes, 14.6 ± 14.1% of posts, and 6.3 ± 4.0% of door obstacles. For LB, results were similar, with 80.2 ± 17.2% collisions with steps, but only 23.4 ± 16.9% with cubes, 18.6 ± 8.6% with posts, and 8.7 ± 7.9% with doors. In SC, there were 77.5 ± 22.7% collisions with steps, but only 22.8 ± 17.3% with cubes, 17.9 ± 10.9% with posts, and 34.0 ± 22.6% with doors. Much like the detection performances, the ANCOVA corrected for age and sex revealed a significant effect of obstacle type (F(2, 94) = 79.811, p < 0.001, η2 = 0.680), but no effect of group or age, nor of the interaction between group and obstacle type (p > 0.05). Post hoc tests with Bonferroni correction indicated that collision with steps was significantly more frequent than with the other obstacles (p < 0.001). There was no significant difference in collision frequencies for cubes, posts, and doors. Moreover, the ANCOVA indicated a significant effect of sex on collision rates (F(1, 94) = 6.132, p = 0.015). However, the effect size (η2 = 0.017) was very low, and the sex difference did not survive the post hoc Mann–Whitney test (H(106) = 1140, p > 0.05) (Figure 4B).

4. Discussion

The goal of the present study was to investigate the reliability of the suggested EyeCane downward sensor by testing a single-sensor EyeCane pointed toward the ground in the presence of obstacles that could induce tripping and falling in daily-life situations. Participants' performance in our obstacle course should serve as a reasonable indicator of the potential of the device as a primary mobility aid in various circumstances (i.e., indoor and outdoor travel). Furthermore, we wanted to investigate the differences in capacities and strategies between late- and early-blind individuals, to determine their implications for improving the design of the EyeCane and other mobility aids. We hypothesized that participants would have particular difficulty in detecting and avoiding the "step" obstacle, which would thus present a major limitation for reliance on the EyeCane as a primary mobility aid. We also hypothesized that EB individuals, given their superior abilities with SSDs [13], would show better navigation capacities with the device than LB and SC participants.
The experiment was centered around three navigational tasks—obstacle detection, identification, and avoidance—which require distinct but interlinked processes to sustain safe navigation. In fact, these three processes all require an ability to attribute the device signal to specific objects in the external environment. This phenomenon, called distal attribution, is mandatory for navigation using devices such as ETAs and SSDs, and is mainly achieved during the training phase, when participants calibrate their internal representation by touching obstacles [33]. To successfully execute our navigation task with the EyeCane, the user must analyze the vibration intensity to extract information about the obstacle's presence and position and then analyze the vibration pattern generated by the scanning movement to extract the obstacle's identity (i.e., form and dimensions) and plan an avoidance path.

4.1. Influence of Visual Experience

Overall detection, avoidance, and identification performances were not statistically different between groups. This result might be surprising given the established literature on sensory substitution showing superior abilities of EB [34], but some studies on minimalist SSDs (Guidance-SSD; Sound of Vision) have also observed equivalent performances of EB, LB, and SC [24,35,36]. One might thus suppose that our finding of efficient performance in every group despite only brief training is an indicator of the device's simplicity and ease of use [37]. However, both blind groups were significantly better at avoiding the obstacles they detected, while EB were significantly faster than LB and SC at performing the task. A possible explanation for this finding is that the EyeCane provides information on the relative distance between the user and the obstacle, which necessarily places the user at the center of his/her perceived space. This favors the use of an egocentric (body-centered) spatial representation, which is known to predominate in EB, whereas LB and SC individuals are more used to working with allocentric (object-centered) strategies [38]. Indeed, normative visuocentric development biases spatial navigation behavior toward the use of an allocentric frame of reference [39]. We cannot exclude another possible explanation for the faster task performance of EB, namely the use of passive echolocation. Indeed, in O&M training, blind individuals are taught to use environmental sounds to obtain spatial information and detect objects [30]. Although we attempted to control for active echolocation by administering the test in a quiet, relatively non-echoing environment, we did not ask participants to wear earplugs. In addition, our blindfolding of sighted participants certainly placed them at a disadvantage, since eliminating their usual visual inputs impairs their spatial abilities and postural control, even while using an SSD [6]. Therefore, these factors could have influenced the observed group differences in transit times for the obstacle course.
How the lack of visual experience developmentally affects the organization of the visual system and spatial representation could also be a factor in the present findings. Indeed, when deprived of vision early in life, the brain undergoes massive structural and functional reorganization that allows the visual cortex and its associative areas to recruit the other senses, often resulting in superior skills in these modalities (mainly touch and audition) [34]. Superior tactile acuity due to the rerouting of tactile input to the visual cortex in EB [40,41,42] could have facilitated their perception of vibration changes from the device, resulting in greater ease in obstacle detection and avoidance and leading to faster crossing times compared to the LB group. Due to these plastic phenomena, we expected EB to be more efficient in detecting the "step" obstacle and its faint vibration changes. However, their performance on this task did not differ significantly from that of the other groups, perhaps due to limitations in subject recruitment and the number of trials. Furthermore, several studies have shown that blind individuals can use the same structures of the navigational network that are normally involved in visually guided navigation (i.e., hippocampal formation, parahippocampal gyrus, posterior parietal cortex, and occipital areas) [10,43,44,45]. This circuit of brain regions seems to be vision dependent in sighted individuals, since it is not recruited when they navigate while blindfolded [10]. Moreover, a recent study with a tactile SSD reported that EB recruit a sensorimotor circuit (e.g., inferior parietal cortex and areas 3a and 4p) when learning obstacle detection, while the sighted use mainly the medial temporal lobe (e.g., hippocampus and entorhinal cortex) [44]. Such crossmodal reorganization can also be present in the LB brain, albeit to a lesser extent, and enhanced tactile acuity has also been observed in such individuals [46]. However, these processes are training dependent and, given that the cortex of LB is initially shaped by vision, the functional reallocation may not occur to the same extent as in EB [47,48]. Although we found no significant effect of BDI on any of the results, this could reflect the considerable heterogeneity of the group with respect to blindness onset, duration, and cause.

4.2. The EyeCane as a Primary Mobility Aid

The results obtained in this study are in line with previous studies on the EyeCane [23,24,25]: our participants were able to detect and avoid a variety of obstacles with good reliability (overall detection and avoidance performance) after little training (15 min). However, this study differed in the way subjects were instructed to scan with the device. Our participants scanned toward the ground with a technique resembling their daily use of the white cane, with the goal of detecting small obstacles or level changes, such as curbs, on which they might trip and fall. While this technique allowed the efficient detection of obstacles such as walls and door frames, thin posts, and knee-high obstacles, and sustained good identification and avoidance performance for such obstacles, participants were less successful in detecting and avoiding the 0.15 m high "step" obstacle. In line with our hypothesis, this can be explained by the very faint difference in vibration amplitude between the ground and the top of the "step" obstacle, which is difficult to perceive. A previous study with the EyeCane, also using differently sized obstacles, likewise demonstrated that bigger obstacles occupying the entire width of the hallway (similar to our "door frame" obstacle) were easier to detect than smaller and lower obstacles (like our "cube", "step", and "post") [26].
Present evidence shows that the EyeCane is an effective tool for detecting high obstacles by providing simple and relevant feedback to the user during mobility, even when the device is pointed toward the ground. However, this instrumentation seems insufficient for properly detecting those obstacles located at ground level (“step” obstacle), which is a mandatory function for safe navigation in both indoor (i.e., stairs) and outdoor (i.e., curbs and potholes) environments. Therefore, our results suggest that the EyeCane in its present state does not suffice as a primary mobility aid but rather can serve as a secondary aid used as an attachment to the white cane. The tactile signal given by the device in addition to the information obtained with the white cane would greatly augment the spatial information the users can acquire, increase their understanding of their surroundings, and thus increase their safety and independence. Indeed, our results show that the EyeCane would likely complement the white cane as an added protection against obstacles higher than the waist but also as a tool to identify (i.e., size, shape, etc.) these looming obstacles, which is crucial information to devise an avoidance strategy.
However, we note that this study took place within an experimental period of only three hours. We can thus hypothesize that with more training and usage time, participants would get better at detecting and avoiding ground-level obstacles. Nonetheless, given the absence of physical contact with the ground (normally provided by the tip of the white cane), it is unlikely that a long-term user of the EyeCane alone would reach the proficiency of a white cane user. Indeed, the sensors used in the device have limitations in accuracy and in their sensitivity to environmental conditions.

4.3. Implications for ETA Design

This study tested the reliability of a single-sensor EyeCane pointed toward the ground. Since our experimental design allowed us to test its reliability in detecting ground obstacles as well as higher obstacles, our results show that this device as a sole aid would not be sufficient to support safe navigation. Here, we list three significant limitations, not only for the single-sensor EyeCane but also for the suggested three-sensor EyeCane (Figure 1D) [26], with general implications for the design of ETAs:
(1)
Loss of physical contact with the ground. The white cane’s contact with the ground ensures highly reliable detection of ground obstacles and drop-offs such as curbs [49]; replacing it with the present downward sensor reduces confidence in detecting drop-offs and ground obstacles.
(2)
Added complexity. While the EyeCane’s main advantage has always been its simplicity, ease of operation, and intuitive feedback, adding multiple sensors would add complexity and significantly increase the required training time. Indeed, a three-sensor design would necessitate feedback of a different nature (e.g., different modalities or frequency coding) for each sensor, as each sensor provides different spatial information that must be discriminated by the user.
(3)
User identification. Whereas the white cane allows pedestrians and drivers to identify the blind user and thus helps ensure their safety [3], a three-sensor EyeCane might lead to the loss of this identification and thereby compromise safety.
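The feedback-discrimination concern in point (2) can be sketched as a scheme in which each sensor channel is assigned its own pulse-frequency band. The band limits, channel names, and linear coding below are purely illustrative assumptions, not a specification of the EyeCane or any actual device:

```python
# Hypothetical feedback scheme for a three-sensor ETA: each sensor channel
# gets a distinct, non-overlapping pulse-frequency band so the user can
# tell channels apart. Band values are illustrative assumptions.
BANDS_HZ = {
    "downward": (4.0, 9.0),
    "forward": (10.0, 25.0),
    "upward": (26.0, 40.0),
}

def pulse_rate_hz(sensor, distance_m, range_m=2.0):
    """Map distance within range to a pulse rate inside the sensor's band:
    closer obstacles pulse faster (simple linear coding)."""
    low, high = BANDS_HZ[sensor]
    if distance_m >= range_m:
        return 0.0  # no feedback beyond the detection range
    closeness = 1.0 - distance_m / range_m
    return low + closeness * (high - low)

# An obstacle 0.5 m away produces a clearly different rate per channel:
print(pulse_rate_hz("downward", 0.5))  # 7.75
print(pulse_rate_hz("forward", 0.5))   # 21.25
print(pulse_rate_hz("upward", 0.5))    # 36.5
```

Because the bands are disjoint, a trained user could in principle keep the channels apart, but each coding must be learned separately, which is precisely the added training cost that the single-sensor design avoids.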
Despite these design limitations, the EyeCane (with a single infrared sensor) has proven its reliability in obstacle detection and avoidance in multiple studies [23,26,50]. Given its advantages in ease of use, simple feedback, light weight, and long battery life [23], the EyeCane may prove most useful as an attachment to the white cane to improve the individual’s safety and reduce the risk of head injury [6]. In fact, all primary ETAs incorporate sensors attached to a physical cane (e.g., the UltraCane and WeWALK cane) [51,52], since blind individuals habitually rely heavily on the white cane’s physical contact with the environment, as shown in previous studies on mobility aids [20,36]. However, these devices are often cumbersome and expensive [53]. Thus, the low-cost, ergonomic, and lightweight design of the EyeCane makes it an affordable alternative for those who cannot afford an “all-in-one” electronic cane.
Further work should investigate the reliability and user acceptance [54] of a slim, ergonomic EyeCane fitted to the grip of a modified white cane and equipped with an upward sensor. According to our findings, such an attachment would result in better obstacle identification and would substantially improve safety and route planning in environments encumbered by obstacles. We expect that such a hybrid design would have great potential for adoption in the blind and visually impaired community worldwide.

5. Conclusions

This study aimed to investigate the EyeCane’s potential as a primary mobility aid to avoid obstacles and diminish the risk of injuries and falls. Our results showed that the EyeCane enabled users to efficiently detect, identify, and avoid large and high obstacles but failed to provide adequate coverage for obstacles at ground level. Indeed, removing the white cane and its physical contact with the ground led to inconsistent detection of ground obstacles (e.g., steps, curbs, and holes) and higher risks of tripping and falling. Thus, the EyeCane is a potentially beneficial and low-cost attachment to the white cane that can significantly improve the individual’s safety during mobility by providing coverage of obstacles above the waist.

Author Contributions

Conceptualization, M.B., S.P., I.D., R.K., and M.P.; methodology, M.B., S.P., and I.D.; validation, M.P. and R.K.; formal analysis, M.B. and S.P.; investigation, M.B. and S.P.; resources, D.R.C.; writing—original draft preparation, M.B. and S.P.; writing—review and editing, I.D., R.K., D.R.C., and M.P.; supervision, R.K. and M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Harland Sanders Chair in Vision Science (Université de Montréal, to M.P.) and the Canadian Institutes of Health Research (PJT-9175018 to M.P.). M.B. was supported by the Vision Health Research Network (VHRN, Quebec, Canada).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Comité d’éthique de la recherche clinique of Université de Montréal (CERC-19-097-P, 05-02-2020).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are available at [email protected].

Acknowledgments

The authors wish to thank Tomer Behor from RenewSenses Inc. for developing the EyeCane; Amir Amedi from the Baruch Ivcher Institute for Brain, Cognition, & Technology, Israel, for making the instrument available; Paul Cumming from Inglewood Biomedical Editing (608 Millwood Rd, Toronto, ON, Canada) for excellent revision of the text; and all the participants.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study, in the collection, analyses, or interpretation of data, in the writing of the manuscript, or in the decision to publish the results.

References

  1. Schinazi, V.R.; Thrash, T.; Chebat, D.R. Spatial navigation by congenitally blind individuals. Wires Cogn. Sci. 2016, 7, 37–58. [Google Scholar] [CrossRef] [Green Version]
  2. Ptito, M.; Bleau, M.; Djerourou, I.; Paré, S.; Schneider, F.C.; Chebat, D.-R. Brain-Machine Interfaces to Assist the Blind. Front. Hum. Neurosci. 2021, 15, 638887. [Google Scholar] [CrossRef]
  3. Suterko, S. Long cane training: Its advantages and problems. In Proceedings of the Conference for Mobility Trainers and Technologists; Massachusetts Institute of Technology: Cambridge, MA, USA, 1967; pp. 13–18. [Google Scholar]
  4. Manduchi, R.; Kurniawan, S. Mobility-related accidents experienced by people with visual impairment. AER J. Res. Pract. Vis. Impair. Blind. 2011, 4, 44–54. [Google Scholar]
  5. Beggs, W. Coping, adjustment, and mobility-related feelings of newly visually impaired young adults. J. Vis. Impair. Blind. 1992, 86, 136–140. [Google Scholar] [CrossRef]
  6. Chebat, D.-R.; Harrar, V.; Kupers, R.; Maidenbaum, S.; Amedi, A.; Ptito, M. Sensory substitution and the neural correlates of navigation in blindness. In Mobility of Visually Impaired People; Springer: Berlin/Heidelberg, Germany, 2018; pp. 167–200. [Google Scholar]
  7. Kaczmarek, K.A. The tongue display unit (TDU) for electrotactile spatiotemporal pattern presentation. Sci. Iran. 2011, 18, 1476–1485. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Ptito, M.; Matteau, I.; Zhi Wang, A.; Paulson, O.B.; Siebner, H.R.; Kupers, R. Crossmodal Recruitment of the Ventral Visual Stream in Congenital Blindness. Neural Plast. 2012, 2012, 304045. [Google Scholar] [CrossRef] [PubMed]
  9. Matteau, I.; Kupers, R.; Ricciardi, E.; Pietrini, P.; Ptito, M. Beyond visual, aural and haptic movement perception: hMT+ is activated by electrotactile motion stimulation of the tongue in sighted and in congenitally blind individuals. Brain Res. Bull. 2010, 82, 264–270. [Google Scholar] [CrossRef] [PubMed]
  10. Kupers, R.; Chebat, D.R.; Madsen, K.H.; Paulson, O.B.; Ptito, M. Neural correlates of virtual route recognition in congenital blindness. Proc. Natl. Acad. Sci. USA 2010, 107, 12716–12721. [Google Scholar] [CrossRef] [Green Version]
  11. Pamir, Z.; Canoluk, M.U.; Jung, J.-H.; Peli, E. Poor resolution at the back of the tongue is the bottleneck for spatial pattern recognition. Sci. Rep. 2020, 10, 1–13. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Richardson, M.L.; Lloyd-Esenkaya, T.; Petrini, K.; Proulx, M.J. Reading with the Tongue: Individual Differences Affect the Perception of Ambiguous Stimuli with the BrainPort. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–10. [Google Scholar]
  13. Chebat, D.R.; Schneider, F.C.; Kupers, R.; Ptito, M. Navigation with a sensory substitution device in congenitally blind individuals. Neuroreport 2011, 22, 342–347. [Google Scholar] [CrossRef] [PubMed]
  14. Elli, G.V.; Benetti, S.; Collignon, O. Is there a future for sensory substitution outside academic laboratories? Multisens. Res. 2014, 27, 271–291. [Google Scholar] [CrossRef]
  15. Pissaloux, E.; Velázquez, R. On spatial cognition and mobility strategies. In Mobility of Visually Impaired People; Springer: Berlin/Heidelberg, Germany, 2018; pp. 137–166. [Google Scholar]
  16. Islam, M.M.; Sadi, M.S.; Zamli, K.Z.; Ahmed, M.M. Developing walking assistants for visually impaired people: A review. IEEE Sens. J. 2019, 19, 2814–2828. [Google Scholar] [CrossRef]
  17. Cardillo, E.; Caddemi, A. Insight on electronic travel aids for visually impaired people: A review on the electromagnetic technology. Electronics 2019, 8, 1281. [Google Scholar] [CrossRef] [Green Version]
  18. Mattia, V.D.; Manfredi, G.; Leo, A.D.; Russo, P.; Scalise, L.; Cerri, G.; Caddemi, A.; Cardillo, E. A feasibility study of a compact radar system for autonomous walking of blind people. In Proceedings of the 2016 IEEE 2nd International Forum on Research and Technologies for Society and Industry Leveraging a Better Tomorrow (RTSI), Bologna, Italy, 7–9 September 2016; pp. 1–5. [Google Scholar]
  19. Islam, M.M.; Sadi, M.S.; Bräunl, T. Automated Walking Guide to Enhance the Mobility of Visually Impaired People. IEEE Trans. Med. Robot. Bionics 2020, 2, 485–496. [Google Scholar] [CrossRef]
  20. Hersh, M.; Johnson, M.A. Assistive Technology for Visually Impaired and Blind People; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  21. Cardillo, E.; Mattia, V.D.; Manfredi, G.; Russo, P.; Leo, A.D.; Caddemi, A.; Cerri, G. An Electromagnetic Sensor Prototype to Assist Visually Impaired and Blind People in Autonomous Walking. IEEE Sens. J. 2018, 18, 2568–2576. [Google Scholar] [CrossRef]
  22. Maidenbaum, S.; Abboud, S.; Amedi, A. Sensory substitution: Closing the gap between basic research and widespread practical visual rehabilitation. Neurosci. Biobehav. Rev. 2014, 41, 3–15. [Google Scholar] [CrossRef] [Green Version]
  23. Maidenbaum, S.; Hanassy, S.; Abboud, S.; Buchs, G.; Chebat, D.R.; Levy-Tzedek, S.; Amedi, A. The “EyeCane”, a new electronic travel aid for the blind: Technology, behavior & swift learning. Restor. Neurol. Neurosci. 2014, 32, 813–824. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Chebat, D.R.; Maidenbaum, S.; Amedi, A. Navigation using sensory substitution in real and virtual mazes. PLoS ONE 2015, 10, e126307. [Google Scholar] [CrossRef] [Green Version]
  25. Maidenbaum, S.; Levy-Tzedek, S.; Chebat, D.-R.; Amedi, A. Increasing Accessibility to the Blind of Virtual Environments, Using a Virtual Mobility Aid Based On the “EyeCane”: Feasibility Study. PLoS ONE 2013, 8, e72555. [Google Scholar] [CrossRef]
  26. Buchs, G.; Simon, N.; Maidenbaum, S.; Amedi, A. Waist-up protection for blind individuals using the EyeCane as a primary and secondary mobility aid. Restor. Neurol. Neurosci. 2017, 35, 225–235. [Google Scholar] [CrossRef] [Green Version]
  27. Meaidi, A.; Jennum, P.; Ptito, M.; Kupers, R. The sensory construction of dreams and nightmare frequency in congenitally blind and late blind individuals. Sleep Med. 2014, 15, 586–595. [Google Scholar] [CrossRef]
  28. Innet, S.; Ritnoom, N. An application of infrared sensors for electronic white stick. In Proceedings of the 2008 International Symposium on Intelligent Signal Processing and Communications Systems, Bangkok, Thailand, 8–11 February 2009; pp. 1–4. [Google Scholar]
  29. Chebat, D.-R.; Maidenbaum, S.; Amedi, A. The transfer of non-visual spatial knowledge between real and virtual mazes via sensory substitution. In Proceedings of the 2017 International Conference on Virtual Rehabilitation (ICVR), Montreal, QC, Canada, 19–22 June 2017; pp. 1–7. [Google Scholar]
  30. LaGrow, S.J. Improving Perception for Orientation and Mobility. In Foundations of Orientation and Mobility, 3rd ed.; Wiener, W.R., Welsh, R.L., Blasch, B.B., Eds.; American Foundation for the Blind Press: New York, NY, USA, 2010; Volume 2, pp. 3–26. [Google Scholar]
  31. La Grow, S.J.; Long, R. Orientation and Mobility: Techniques for Independence; Association for Education and Rehabilitation of the Blind and Visually Impaired: Alexandria, VA, USA, 2011. [Google Scholar]
  32. JASP Team. JASP (Version 0.14.1) [Computer Software]; JASP Team: Amsterdam, The Netherlands, 2020. [Google Scholar]
  33. Hartcher-O’Brien, J.; Auvray, M. The Process of Distal Attribution Illuminated Through Studies of Sensory Substitution. Multisens. Res. 2014, 27, 421–441. [Google Scholar] [CrossRef]
  34. Kupers, R.; Ptito, M. Compensatory plasticity and cross-modal reorganization following early visual deprivation. Neurosci. Biobehav. Rev. 2014, 41, 36–52. [Google Scholar] [CrossRef]
  35. Paré, S.; Bleau, M.; Djerourou, I.; Malotaux, V.; Kupers, R.; Ptito, M. Spatial navigation with horizontally spatialized sounds in early and late blind individuals. PLoS ONE 2021, 16, e0247448. [Google Scholar] [CrossRef]
  36. Hoffmann, R.; Spagnol, S.; Kristjánsson, Á.; Unnthorsson, R. Evaluation of an audio-haptic sensory substitution device for enhancing spatial awareness for the visually impaired. Optom. Vis. Sci. 2018, 95, 757. [Google Scholar] [CrossRef]
  37. Stronks, H.C.; Nau, A.C.; Ibbotson, M.R.; Barnes, N. The role of visual deprivation and experience on the performance of sensory substitution devices. Brain Res. 2015, 1624, 140–152. [Google Scholar] [CrossRef]
  38. Iachini, T.; Ruggiero, G.; Ruotolo, F. Does blindness affect egocentric and allocentric frames of reference in small and large scale spaces? Behav. Brain Res. 2014, 273, 73–81. [Google Scholar] [CrossRef]
  39. Pasqualotto, A.; Spiller, M.J.; Jansari, A.S.; Proulx, M.J. Visual experience facilitates allocentric spatial representation. Behav. Brain Res. 2013, 236, 175–179. [Google Scholar] [CrossRef] [Green Version]
  40. Ptito, M.; Fumal, A.; de Noordhout, A.M.; Schoenen, J.; Gjedde, A.; Kupers, R. TMS of the occipital cortex induces tactile sensations in the fingers of blind Braille readers. Exp. Brain Res. 2008, 184, 193–200. [Google Scholar] [CrossRef]
  41. Goldreich, D.; Kanics, I.M. Performance of blind and sighted humans on a tactile grating detection task. Percept. Psychophys. 2006, 68, 1363–1371. [Google Scholar] [CrossRef] [Green Version]
  42. Stevens, J.C.; Foulke, E.; Patterson, M.Q. Tactile acuity, aging, and braille reading in long-term blindness. J. Exp. Psychol. Appl. 1996, 2, 91–106. [Google Scholar] [CrossRef]
  43. Gagnon, L.; Schneider, F.C.; Siebner, H.R.; Paulson, O.B.; Kupers, R.; Ptito, M. Activation of the hippocampal complex during tactile maze solving in congenitally blind subjects. Neuropsychologia 2012, 50, 1663–1671. [Google Scholar] [CrossRef]
  44. Chebat, D.R.; Schneider, F.C.; Ptito, M. Neural Networks Mediating Perceptual Learning in Congenital Blindness. Sci. Rep. 2020, 10, 495. [Google Scholar] [CrossRef] [Green Version]
  45. Chan, C.C.H.; Wong, A.W.K.; Ting, K.-H.; Whitfield-Gabrieli, S.; He, J.; Lee, T.M.C. Cross auditory-spatial learning in early-blind individuals. Hum. Brain Mapp. 2012, 33, 2714–2727. [Google Scholar] [CrossRef]
  46. Reislev, N.H.; Dyrby, T.B.; Siebner, H.R.; Lundell, H.; Ptito, M.; Kupers, R. Thalamocortical Connectivity and Microstructural Changes in Congenital and Late Blindness. Neural Plast. 2017, 2017, 9807512. [Google Scholar] [CrossRef] [Green Version]
  47. Sadato, N.; Pascual-Leone, A.; Grafman, J.; Ibañez, V.; Deiber, M.-P.; Dold, G.; Hallett, M. Activation of the primary visual cortex by Braille reading in blind subjects. Nature 1996, 380, 526–528. [Google Scholar] [CrossRef] [PubMed]
  48. Ptito, M.; Moesgaard, S.M.; Gjedde, A.; Kupers, R. Cross-modal plasticity revealed by electrotactile stimulation of the tongue in the congenitally blind. Brain 2005, 128, 606–614. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  49. Blasch, B.; LaGrow, S.J.; De l’Aune, W. Three aspects of coverage provided by the long cane: Object, surface, and foot-placement preview. J. Vis. Impair. Blind. 1996, 90, 295–301. [Google Scholar] [CrossRef]
  50. Buchs, G.; Maidenbaum, S.; Amedi, A. Obstacle identification and avoidance using the ‘EyeCane’: A tactile sensory substitution device for blind individuals. In Proceedings of the International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, Versailles, France, 24–26 June 2014; pp. 96–103. [Google Scholar]
  51. Hoyle, B.; Waters, D. Mobility AT: The Batcane (UltraCane). In Assistive Technology for Visually Impaired and Blind People; Hersh, M.A., Johnson, M.A., Eds.; Springer London: London, UK, 2008; pp. 209–229. [Google Scholar]
  52. WeWALK. WeWALK Smart Cane. Available online: https://wewalk.io/en/ (accessed on 11 April 2021).
  53. Li, K. Electronic Travel Aids for Blind Guidance—An Industry Landscape Study; ECS: Berkeley, CA, USA, 2015. [Google Scholar]
  54. Phillips, B.; Zhao, H. Predictors of assistive technology abandonment. Assist. Technol. 1993, 5, 36–45. [Google Scholar] [CrossRef] [PubMed]
Figure 2. The experimental setup and procedure. (A) A participant handling the EyeCane in a downward-directed manner; four types of obstacles: D = door, P = post, C = cube, and S = step. (B) At the top, a photograph of the empty obstacle course and, at the bottom, an example of obstacle placement. (C) Three schematic representations of the twelve different trial configurations.
Figure 3. The average crossing time (A), and the mean rates of detection, identification, and avoidance by the three subject groups (B). Significant differences are indicated by asterisks (* = p < 0.05; ** = p < 0.01). EB = early blind; LB = late blind; and SC = sighted control.
Figure 4. The average rates of obstacle detection (A) and collision (B). Significant differences are indicated by asterisks (*** = p < 0.001). EB = early blind; LB = late blind; and SC = sighted control.
Table 1. Blind participants’ characteristics.
| Participant | Age | Sex | Age of Onset of Blindness | Blindness Duration Index | Cause of Blindness | Residual Perception |
|---|---|---|---|---|---|---|
| LB1 | 55 | F ¹ | 24 | 0.56 | Retinitis pigmentosa | yes |
| LB2 | 25 | M ² | 17 | 0.32 | Retinitis pigmentosa | - |
| LB3 | 70 | M | 38 | 0.46 | Meningitis | - |
| LB4 | 38 | F | 20 | 0.64 | Retinal cancer | - |
| LB5 | 46 | M | 40 | 0.13 | Meningitis | - |
| LB6 | 56 | F | 20 | 0.47 | Retinal cancer | - |
| LB7 | 47 | F | 22 | 0.53 | Diabetic retinopathy | - |
| LB8 | 44 | F | 17 | 0.61 | Glaucoma | - |
| LB9 | 59 | F | 57 | 0.03 | Retinitis pigmentosa | yes |
| EB1 | 48 | M | Perinatal | - | Retinopathy of prematurity | - |
| EB2 | 33 | M | Perinatal | - | Retinopathy of prematurity | - |
| EB3 | 63 | M | Perinatal | - | Retinopathy of prematurity | - |
| EB4 | 54 | M | Perinatal | - | Retinopathy of prematurity | - |
| EB5 | 56 | M | Perinatal | - | Retinopathy of prematurity | - |
| EB6 | 36 | M | Perinatal | - | Retinopathy of prematurity | - |
| EB7 | 31 | M | Perinatal | - | Retinopathy of prematurity | - |
| EB8 | 40 | M | Perinatal | - | Retinopathy of prematurity | - |
| EB9 | 33 | F | Perinatal | - | Retinopathy of prematurity | - |
| EB10 | 51 | M | Perinatal | - | Meningitis | - |

¹ Female, ² Male.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Bleau, M.; Paré, S.; Djerourou, I.; Chebat, D.R.; Kupers, R.; Ptito, M. Blindness and the Reliability of Downwards Sensors to Avoid Obstacles: A Study with the EyeCane. Sensors 2021, 21, 2700. https://doi.org/10.3390/s21082700


