Search Results (9)

Search Parameters:
Keywords = Braille reading

13 pages, 4555 KB  
Article
Discrimination Accuracy of Sequential Versus Simultaneous Vibrotactile Stimulation on the Forearm
by Nashmin Yeganeh, Ivan Makarov, Árni Kristjánsson and Runar Unnthorsson
Appl. Sci. 2024, 14(1), 43; https://doi.org/10.3390/app14010043 - 20 Dec 2023
Cited by 5 | Viewed by 2994
Abstract
We examined discrimination accuracy of vibrotactile patterns on the upper forearm using a 2 × 3 array of voice coil actuators to generate 100 Hz vibrotactile stimulation. We evaluated participants’ ability to recognize distinct vibrotactile patterns presented both simultaneously (1000 ms) and sequentially (500 ms with a 450 ms interval). Recognition accuracy was significantly higher for sequential (93.24%) than for simultaneous presentation (26.15%). Patterns using 2–3 actuators were recognized more accurately than those using 4–5 actuators. During sequential presentation, there were primacy and recency effects; accuracy was higher for the initial and final stimulations in a sequence. Over time, participants also demonstrated a learning effect, becoming more adept at recognizing and interpreting vibrotactile patterns. This underscores the potential for skill development and emphasizes the value of training for wearable vibrotactile devices. We discuss the implications of these findings for the design of tactile communication devices and wearable technology.
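The two presentation modes can be sketched as a simple schedule generator. The timings (1000 ms simultaneous; 500 ms pulses with 450 ms gaps sequential) come from the abstract, while the actuator indexing and the event format are illustrative assumptions, not the authors' implementation.

```python
# Stimulus schedules for a 2x3 vibrotactile array (actuators 0..5).
SIMULTANEOUS_MS = 1000   # all actuators in the pattern vibrate together
SEQ_PULSE_MS = 500       # each actuator vibrates in turn
SEQ_GAP_MS = 450         # silent interval between successive pulses

def simultaneous_schedule(pattern):
    """One event: every actuator in `pattern` on for 1000 ms."""
    return [(0, SIMULTANEOUS_MS, sorted(pattern))]

def sequential_schedule(pattern):
    """One (start_ms, end_ms, [actuator]) event per actuator, in order."""
    events, t = [], 0
    for actuator in pattern:
        events.append((t, t + SEQ_PULSE_MS, [actuator]))
        t += SEQ_PULSE_MS + SEQ_GAP_MS
    return events

# A 3-actuator pattern on the grid.
pattern = [0, 3, 5]
print(simultaneous_schedule(pattern))
print(sequential_schedule(pattern))
```

Note how the sequential schedule spreads the same pattern over nearly 2.5 s, which is the trade-off behind its much higher recognition accuracy.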

16 pages, 5040 KB  
Article
Design and Implementation of a Semantic Information Expression Device Based on Vibrotactile Coding
by Zhiyu Shao, Xin Mei, Yanjv Wu, Jiatong Bao and Hongru Tang
Appl. Sci. 2023, 13(21), 11756; https://doi.org/10.3390/app132111756 - 27 Oct 2023
Cited by 1 | Viewed by 1847
Abstract
In recent years, new technologies for expressing and exchanging information through tactile vibration have been a major research focus. In this paper, by choosing a suitable coding scheme and a vibrating-motor arrangement, we designed a device to express semantic information through vibrotactile stimulation. Three types of experiments were designed to test the usability of the encoding scheme and the device. First, vibration intensity was tested while designing the encoding scheme, and the results showed that the scheme performed best with Braille units at vibration intensities of 0.2 and 0.3. In addition, a learning experiment and a sentence-recognition accuracy experiment were carried out to verify the usability of the device. The learning experiment showed that subjects were able to memorize Braille characters with an accuracy of more than 90%, and to recognize a Chinese character (consisting of two Braille cells) with an average accuracy of 90.8%. The sentence-recognition test showed that the average recognition rate for the three poems used was 93.33%. The device can be used for semantic information expression and touch-reading of Braille, and can reproduce the reading experience of paper Braille.
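The encoding idea described above, driving a six-motor Braille cell at a low vibration intensity for raised dots, can be sketched as follows. The three-letter dot table is standard Grade-1 Braille; the per-motor intensity list is a hypothetical stand-in for the authors' motor interface.

```python
# Standard Braille dot numbering:  1 4
#                                  2 5
#                                  3 6
BRAILLE_DOTS = {"a": {1}, "b": {1, 2}, "c": {1, 4}}  # excerpt of the Grade-1 table

def cell_intensities(char, level=0.2):
    """Per-motor intensities (dots 1..6) for one Braille cell.

    Raised dots vibrate at `level` (the study found 0.2 and 0.3
    worked best); flat dots stay off.
    """
    dots = BRAILLE_DOTS[char]
    return [level if d in dots else 0.0 for d in range(1, 7)]

print(cell_intensities("c"))  # motors 1 and 4 on
```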
(This article belongs to the Section Acoustics and Vibrations)

19 pages, 6173 KB  
Article
Preparation of 3D Models of Cultural Heritage Objects to Be Recognised by Touch by the Blind—Case Studies
by Jerzy Montusiewicz, Marcin Barszcz and Sylwester Korga
Appl. Sci. 2022, 12(23), 11910; https://doi.org/10.3390/app122311910 - 22 Nov 2022
Cited by 33 | Viewed by 5575
Abstract
Providing access to and the protection of cultural goods—intangible and tangible heritage—is carried out primarily by institutions such as museums, galleries or local cultural centres where temporary exhibitions are shown. The international community also attempts to protect architectural objects or entire urban layouts, raising their status by inscribing them on the UNESCO World Heritage List. Contemporary museums, however, are not properly prepared to make museum exhibits available to the blind and visually impaired, which is confirmed both by literature studies on the subject and by the occasional solutions that are put in place. The development of various computer graphics technologies allows for the digitisation of cultural heritage objects by 3D scanning. Such a record, after processing, can be used to create virtual museums accessible via computer networks, as well as to make copies of objects by 3D printing. This article presents an example of the use of scanning, modelling and 3D printing to prepare prototype copies of museum objects from the Silk Road area, dedicated to blind people and intended to be recognised by touch. Before the copy-making process is initiated, information about the object is written on its surface in Braille. The results of pilot studies carried out on a group of people with simulated visual impairment and on a person blind from birth indicate that models printed on 3D replicators with fused filament fabrication technology are useful for sharing cultural heritage objects. The models are light, so they can be freely manipulated, and have the appropriate smoothness, which enables the recognition of decorative details present on them as well as the reading of texts in Braille. Integrating a copy of an exhibit with a Braille description of it into one 3D object is an innovative solution that should contribute to better access to cultural goods for the blind.

5 pages, 1141 KB  
Proceeding Paper
An IoT Braille Display towards Assisting Visually Impaired Students in Mexico
by Oscar I. Ramos-García, Anuar A. Vuelvas-Alvarado, Néstor A. Osorio-Pérez, Miguel Á. Ruiz-Torres, Fermín Estrada-González, Laura S. Gaytan-Lugo, Silvia B. Fajardo-Flores and Pedro C. Santana-Mancilla
Eng. Proc. 2022, 27(1), 11; https://doi.org/10.3390/ecsa-9-13194 - 1 Nov 2022
Cited by 7 | Viewed by 4634
Abstract
According to the World Health Organization, 2.2 billion people globally have some vision impairment. Blind and visually impaired children can experience poor motor, language, and cognitive development, leading to lower levels of educational success. Our proposal aims to design and develop a one-character refreshable braille display that is affordable and easy to use through Internet of Things (IoT) technology. Reading is essential for acquiring knowledge; by enabling an affordable form of braille-based reading, the device can serve as a handy tool for teaching and training blind and visually impaired people.

22 pages, 3603 KB  
Article
Deep Learning Reader for Visually Impaired
by Jothi Ganesan, Ahmad Taher Azar, Shrooq Alsenan, Nashwa Ahmad Kamal, Basit Qureshi and Aboul Ella Hassanien
Electronics 2022, 11(20), 3335; https://doi.org/10.3390/electronics11203335 - 16 Oct 2022
Cited by 51 | Viewed by 10225
Abstract
Recent advances in machine and deep learning algorithms and enhanced computational capabilities have revolutionized healthcare and medicine. Research on assistive technology has benefited from such advances in creating visual substitutes for visual impairment. People with visual impairment face several obstacles in reading printed text, which is normally substituted with a pattern-based writing system known as Braille. Over the past decade, many wearable and embedded assistive devices and solutions have been created to facilitate the reading of texts by people with visual impairment. However, assistive tools for comprehending the meaning embedded in images or objects are still limited. In this paper, we present a deep learning approach for people with visual impairment that addresses this issue with a voice-based form to represent and illustrate images embedded in printed texts. The proposed system is divided into three phases: collecting input images, extracting features for training the deep learning model, and evaluating performance. The approach leverages two deep learning algorithms, a convolutional neural network (CNN) and a long short-term memory (LSTM) network, for extracting salient features, captioning images, and converting written text to speech. The CNN detects features from the printed image and its associated caption, while the LSTM network serves as a captioning tool to describe the detected text from images. The identified captions and detected text are converted into a voice message for the user via a text-to-speech API. The proposed CNN-LSTM model is investigated using various network architectures, namely GoogleNet, AlexNet, ResNet, SqueezeNet, and VGG16. The empirical results show that the CNN-LSTM training model with the ResNet architecture achieved the highest image-caption prediction accuracy, at 83%.
(This article belongs to the Special Issue Deep Learning Algorithm Generalization for Complex Industrial Systems)

23 pages, 3437 KB  
Article
Characterization of English Braille Patterns Using Automated Tools and RICA Based Feature Extraction Methods
by Sana Shokat, Rabia Riaz, Sanam Shahla Rizvi, Inayat Khan and Anand Paul
Sensors 2022, 22(5), 1836; https://doi.org/10.3390/s22051836 - 25 Feb 2022
Cited by 12 | Viewed by 5395
Abstract
Braille is used as a mode of communication all over the world, and technological advancements are transforming the way Braille is read and written. This study developed an English Braille pattern identification system using robust machine learning techniques on the English Braille Grade-1 dataset, which was collected using a touchscreen device from visually impaired students of the National Special Education School Muzaffarabad. For better visualization, the 26 Braille English characters were divided into two classes: class 1 (1–13, a–m) and class 2 (14–26, n–z). A position-free braille text entry method was used to generate synthetic data, and N = 2512 cases were included in the final dataset. Support Vector Machine (SVM), Decision Tree (DT) and K-Nearest Neighbor (KNN) classifiers with Reconstruction Independent Component Analysis (RICA)- and PCA-based feature extraction methods were used for Braille-to-English character recognition. The RICA-based feature extraction method achieved better results than PCA, the Random Forest (RF) algorithm, and sequential methods. The evaluation metrics used were the True Positive Rate (TPR), True Negative Rate (TNR), Positive Predictive Value (PPV), Negative Predictive Value (NPV), False Positive Rate (FPR), total accuracy, Area Under the Receiver Operating Characteristic Curve (AUC) and F1-score. A statistical test was also performed to confirm the significance of the results.
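As a rough illustration of the PCA-based feature extraction the study compares against (RICA itself is more involved), a minimal projection onto the top principal components can be written in plain NumPy. The dimensions below are arbitrary stand-ins for the touchscreen stroke features, not the study's actual data.

```python
import numpy as np

def pca_features(X, k):
    """Project samples (rows of X) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                    # centre each feature
    # Principal directions via SVD of the centred data matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                       # n_samples x k feature matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                 # stand-in for raw input features
Z = pca_features(X, k=5)
print(Z.shape)  # (100, 5)
```

The reduced matrix `Z` is what would then be fed to an SVM, DT, or KNN classifier in a pipeline like the one the abstract describes.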
(This article belongs to the Special Issue Big Data Analytics in Internet of Things Environment)

6 pages, 702 KB  
Article
Age-Related Changes in the Response of Finger Skin Blood Flow during a Braille Character Discrimination Task
by Jun Murata, Shin Murata, Takayuki Kodama, Hideki Nakano, Masayuki Soma, Hideyuki Nakae, Yousuke Satoh, Haruki Kogo and Naho Umeki
Healthcare 2021, 9(2), 143; https://doi.org/10.3390/healthcare9020143 - 1 Feb 2021
Cited by 2 | Viewed by 3057
Abstract
We hypothesized that age-related changes in sensory function might be reflected by a modulation of the blood flow response associated with tactile sensation. The aim of the present study was to clarify how the blood flow response of the fingers during concentrated finger perception is affected by aging. We measured the tactile-pressure threshold of the distal palmar pad of the index finger and skin blood flow in the finger (SBF) during Braille reading performed under blind conditions in young (n = 27) and older (n = 37) subjects. As a result, the tactile-pressure threshold was higher in older subjects (2.99 ± 0.37 log10 0.1 mg) than in young subjects (2.76 ± 0.24 log10 0.1 mg) (p < 0.01). On the other hand, the SBF response was markedly smaller in older subjects (−4.9 ± 7.0%) than in young subjects (−25.8 ± 15.4%) (p < 0.01). Moreover, the peak response arrival times to Braille reading in older and young subjects were 12.5 ± 3.1 s and 8.8 ± 3.6 s, respectively (p < 0.01). A decline in tactile sensitivity occurs with aging. Blood flow responses associated with tactile sensation are also affected by aging, as represented by a decrease in blood flow and a delay in the reaction time.
(This article belongs to the Collection Health Care and Services for Elderly Population)

15 pages, 1144 KB  
Article
Braille Recognition for Reducing Asymmetric Communication between the Blind and Non-Blind
by Bi-Min Hsu
Symmetry 2020, 12(7), 1069; https://doi.org/10.3390/sym12071069 - 30 Jun 2020
Cited by 23 | Viewed by 9808
Abstract
Assistive braille technology has existed for many years with the purpose of aiding the blind in performing common tasks such as reading, writing, and communicating with others. Such technologies are aimed towards helping those who are visually impaired to better adapt to the visual world. However, an obvious gap exists in current technology when it comes to symmetric two-way communication between the blind and non-blind, as little technology allows non-blind individuals to understand the braille system. This research presents a novel approach to convert images of braille into English text by employing a convolutional neural network (CNN) model and a ratio character segmentation algorithm (RCSA). Further, a new dataset was constructed, containing a total of 26,724 labeled braille images, which consists of 37 braille symbols that correspond to 71 different English characters, including the alphabet, punctuation, and numbers. The performance of the CNN model yielded a prediction accuracy of 98.73% on the test set. The functionality performance of this artificial intelligence (AI) based recognition system could be tested through accessible user interfaces in the future.
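The abstract does not detail the RCSA, but a ratio-based character segmentation plausibly exploits the fixed width ratio of braille cells in an image; the sketch below splits a text-line image into equal-width character crops under that assumption, which would then be classified one by one by the CNN.

```python
import numpy as np

def ratio_segment(line_img, n_chars):
    """Split a text-line image into n_chars equal-width character crops.

    Assumes braille cells occupy equal horizontal ratios of the line,
    an illustrative simplification of the paper's RCSA.
    """
    h, w = line_img.shape
    bounds = [round(i * w / n_chars) for i in range(n_chars + 1)]
    return [line_img[:, bounds[i]:bounds[i + 1]] for i in range(n_chars)]

line = np.zeros((30, 120), dtype=np.uint8)  # dummy line of 6 braille cells
cells = ratio_segment(line, 6)
print([c.shape for c in cells])  # six 30x20 crops
```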
(This article belongs to the Special Issue Symmetry in Artificial Visual Perception and Its Application)

24 pages, 11899 KB  
Article
Multimedia Vision for the Visually Impaired through 2D Multiarray Braille Display
by Seondae Kim, Eun-Soo Park and Eun-Seok Ryu
Appl. Sci. 2019, 9(5), 878; https://doi.org/10.3390/app9050878 - 1 Mar 2019
Cited by 7 | Viewed by 4952
Abstract
Visual impairments cause very limited and low vision, leading to difficulties in processing information such as obstacles, objects, multimedia contents (e.g., video, photographs, and paintings), and reading in outdoor and indoor environments. Therefore, assistive devices and aids exist for visually impaired (VI) people. In general, such devices provide guidance or supportive information that can be used along with guide dogs, walking canes, and braille devices. However, these devices have functional limitations; for example, they cannot help in processing multimedia contents such as images and videos. Additionally, most of the available braille displays for the VI represent text as a single line of several braille cells. Although these devices are sufficient for reading and understanding text, they have difficulty converting multimedia contents or massive text contents to braille. This paper describes a methodology to effectively convert multimedia contents to braille using a 2D braille display. Furthermore, this research also proposes the transformation of Digital Accessible Information SYstem (DAISY) and electronic publication (EPUB) formats for 2D braille display. In addition, it introduces related research on efficient communication for the VI. Thus, this study proposes an eBook reader application for DAISY and EPUB formats, which can correctly render and display text, images, audio, and video on a 2D multiarray braille display. This approach is expected to provide a better braille service for the VI when implemented and verified in real time.
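The core idea, wrapping braille cells across a 2D multiarray display rather than a single line, can be sketched as follows. The dot-pattern table excerpt uses standard Unicode braille characters, but the display dimensions and the layout policy are illustrative assumptions, not the paper's design.

```python
# Tiny excerpt of a Grade-1 text-to-braille table (Unicode braille patterns).
BRAILLE = {"h": "⠓", "i": "⠊", " ": "⠀"}

def layout_2d(text, rows, cols):
    """Wrap braille cells onto a rows x cols display, padding with blanks.

    Unknown characters render as the blank cell; overflow is truncated,
    as a single refreshable frame can only hold rows * cols cells.
    """
    cells = [BRAILLE.get(ch, "⠀") for ch in text][: rows * cols]
    cells += ["⠀"] * (rows * cols - len(cells))
    return [cells[r * cols:(r + 1) * cols] for r in range(rows)]

for row in layout_2d("hi", rows=2, cols=4):
    print("".join(row))
```

A real reader application would page through frames of this shape as the user advances through the DAISY or EPUB content.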