Editorial

Multi-Sensory Interaction for Blind and Visually Impaired People

Department of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon 16419, Korea
Electronics 2021, 10(24), 3170; https://doi.org/10.3390/electronics10243170
Submission received: 15 December 2021 / Accepted: 17 December 2021 / Published: 20 December 2021
(This article belongs to the Special Issue Multi-Sensory Interaction for Blind and Visually Impaired People)

1. Introduction

Multi-sensory interaction aids learning, inclusion, and collaboration because it accommodates diverse cognitive and perceptual needs. Multi-sensory integration is an essential part of information processing, by which various forms of sensory information, such as sight, hearing, touch, and proprioception (also called kinesthesia, the sense of self-movement and body position), are combined into a single experience. Information is typically integrated across sensory modalities when the sensory inputs share certain common features. Cross-modality refers to the interaction between two different sensory channels, and cross-modal correspondence denotes the sometimes surprising associations that people experience between seemingly unrelated features, attributes, or dimensions of experience in different sensory modalities. For visually impaired people, conventional human–computer interaction devices are inconvenient because they rely heavily on visual information. Although many studies have introduced other sensory modalities, such as haptics, sound, and scent, into user interfaces to compensate for the absence of vision, these substitutes still fall far short of what vision provides. The topics of interest include, but are not limited to, the following:
  • Universal access in human–computer interaction;
  • Haptic interfaces for accessibility;
  • Tactile artworks and interactions;
  • Flexible haptic displays;
  • Ambient assistive intelligence;
  • Human-centered user accessibility for people with visual impairments;
  • Assistive technology;
  • Multi-sensory color coding.
This Special Issue contains ten research papers [1,2,3,5,6,7,8,9,10,11] and two review papers [12,13].

2. Research Papers

The development of assistive technologies for art appreciation can enhance the cultural and perceptual experiences of visually impaired people. Such technologies improve comprehension and accessibility at museums and exhibitions and in everyday life. Multi-sensory interactions may also offer enhanced usability and understanding and can serve as educational tools that exploit synesthetic associations to promote creative thinking. Color associations with symbolism, culture, and personal preference play an influential role, so color comprehension should also be promoted in the daily lives of visually impaired people. Despite the availability of tactile graphics and audio guides, the visually impaired still face challenges in experiencing and understanding visual artworks.
Visually impaired people can take advantage of multimodal systems in which visual information is communicated through different modes of interaction and types of feedback. Among these interaction modes, thermal interaction has received little research attention in the context of assistive devices for visually impaired people, despite its potential.
Bartolomé et al. [1] proposed a temperature–depth mapping algorithm and a thermal display system to convey the depth and color depth of artwork features during tactile exploration by visually impaired people. Tests were performed both during the mapping algorithm's design and after a tactile temperature prototype artwork model had been developed, in order to assess the potential of thermal interaction for recognizing depth and color depth in tactile art appreciation. The first test showed that warm and cold temperatures can serve as cues that communicate how near or far the features of an artwork are, and that a mapping based on this correlation is appropriate for conveying depth during tactile exploration. Based on these results, a complete thermal display prototype was designed and developed, and a relief artwork was installed on top of it for the final test. The final test confirmed that thermal interaction is a suitable way of conveying artwork depth information to visually impaired people. This complements current technologies, which communicate the depth of artwork features either through audio or by extruding the features of a tactile model. Adding thermal interaction as a way of communicating depth can open the door to many new ways of experiencing art for visually impaired people. Moreover, the developed thermal display system can add thermal interaction to any type of paper-based relief artwork, using thermal cues not only as a substitute for depth but also in other roles, such as expressing color warmness and coolness, or making hot objects (such as the sun) feel warm and cold objects (such as water) feel cold.
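As a rough illustration of this idea, the sketch below maps a normalized feature depth to a Peltier target temperature. The linear interpolation, the temperature range, and the function names are illustrative assumptions made for this editorial, not the algorithm published in [1]:

```python
# Illustrative depth-to-temperature mapping for a thermal display.
# The linear interpolation and the safe skin-contact temperature
# range are assumed values, not those derived in [1].

T_COLD_C = 20.0   # assumed: far features feel cool
T_WARM_C = 38.0   # assumed: near features feel warm

def depth_to_temperature(depth: float, max_depth: float) -> float:
    """Map a feature depth (0 = nearest, max_depth = farthest)
    to a Peltier target temperature in degrees Celsius."""
    if not 0.0 <= depth <= max_depth:
        raise ValueError("depth out of range")
    nearness = 1.0 - depth / max_depth   # 1.0 = nearest, 0.0 = farthest
    return T_COLD_C + nearness * (T_WARM_C - T_COLD_C)

# Example: three artwork features at increasing depth.
for d in (0.0, 0.5, 1.0):
    print(f"depth {d:.1f} -> {depth_to_temperature(d, 1.0):.1f} C")
```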
There are many ways in which this work can be continued or improved on, such as the following:
(1) Increasing the number of Peltier elements, adding the possibility of creating more complex temperature regions on the artwork;
(2) Finding a way to make the system smaller and more portable;
(3) Changing the use of temperature cues from depth representation to semantic mapping of artwork features, such as making water feel cold. A necessary addition for this semantic mapping would be making the prototype work with 2.5D relief artworks, which present depth by extruding features in the z direction, and not only with tactile paper artworks. In this way, visually impaired people could perceive depth through tactile exploration while also feeling the temperature of the different artwork features. For that, a method is needed to make the Peltier temperature reach all the way to the surface of the 2.5D relief model.
The recent development of color coding in tactile pictograms helps visually impaired people appreciate the visual arts. The auditory sense, in conjunction with (or possibly as an alternative to) the tactile sense, would allow people with visual impairments to perceive colors in a way that would be difficult to achieve with a tactile stimulus alone. Sound color coding [2] can replicate three characteristics of color (hue, chroma, and value) by matching them with three characteristics of sound (timbre, intensity, and pitch). This paper examines the relationships between sound (melody) and color and provides sound-based color coding of hue, chroma, and value to deepen their connection with visual art. Two proposed sets of methods for coding colors with melodies improve upon the method currently in use by adding more colors (18 colors in 6 hues). User experience and identification tests were conducted with 12 visually impaired and 8 sighted adults, and the results suggest that the sound color coding was helpful for the participants.
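The following minimal sketch illustrates this style of mapping, assigning one timbre per hue and varying pitch register and loudness across vivid, light, and dark variants. The specific instruments and numeric values are invented for illustration and are not the coding published in [2]:

```python
# Hedged sketch of a hue/chroma/value -> timbre/intensity/pitch code.
# Instruments, octave shifts, and velocities are assumptions.

# One timbre (instrument) per hue -- assumed assignment.
HUE_TO_TIMBRE = {
    "red": "trumpet", "orange": "saxophone", "yellow": "glockenspiel",
    "green": "flute", "blue": "piano", "purple": "cello",
}

# Three variants per hue (18 colors total): value changes the melody's
# pitch register, chroma changes its loudness (MIDI-style velocity).
VARIANTS = {
    "vivid": {"octave_shift": 0, "velocity": 110},
    "light": {"octave_shift": +1, "velocity": 80},
    "dark":  {"octave_shift": -1, "velocity": 80},
}

def encode_color(hue: str, variant: str) -> dict:
    """Return sound parameters for one of the 18 coded colors."""
    return {"timbre": HUE_TO_TIMBRE[hue], **VARIANTS[variant]}

print(encode_color("blue", "light"))
# {'timbre': 'piano', 'octave_shift': 1, 'velocity': 80}
```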
Despite the use of tactile graphics and audio guides, blind and visually impaired people still face challenges in experiencing and understanding visual artworks independently at art exhibitions. Art museums and other art venues are increasingly exploring interactive guides to make their collections more accessible. In [3], Cavazos Quero et al. presented an interactive multimodal guide prototype that uses audio and tactile modalities to improve autonomous access to information and the experience of visual artworks. The prototype is composed of a touch-sensitive 2.5D artwork relief model that can be freely explored by touch. Users access localized verbal descriptions and audio by performing touch gestures on the surface while listening to themed background music. Eighteen participants evaluated and compared the multimodal and tactile graphic accessible exhibits. The results of a usability survey indicated that the presented multimodal approach is simple and easy to use and improves confidence and independence when exploring visual artworks.
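A highly simplified sketch of such a gesture-to-audio dispatch appears below. The region names, gesture vocabulary, and return convention are hypothetical, intended only to convey the interaction pattern rather than the prototype's actual implementation in [3]:

```python
# Hypothetical touch-triggered audio guide dispatch; region ids,
# gestures, and audio paths are invented for illustration.

AUDIO_DESCRIPTIONS = {
    # region id -> (short label, path to recorded description)
    "sky":    ("Sky", "audio/sky_description.mp3"),
    "figure": ("Main figure", "audio/figure_description.mp3"),
}

def on_touch_gesture(region_id: str, gesture: str):
    """Dispatch a touch gesture on the relief surface to an audio cue.
    A single tap speaks the region label; a double tap plays the full
    localized description while background music keeps playing."""
    if region_id not in AUDIO_DESCRIPTIONS:
        return None
    label, clip = AUDIO_DESCRIPTIONS[region_id]
    if gesture == "tap":
        return f"speak:{label}"   # short verbal label
    if gesture == "double_tap":
        return f"play:{clip}"     # full localized description
    return None

print(on_touch_gesture("figure", "double_tap"))
# play:audio/figure_description.mp3
```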
Feedback collected during the multiple exhibitions points in new directions for this work. The interactive multimodal guide was sometimes used as a collaboration tool for interacting socially with art. Moreover, although the prototype was designed for use in an exhibition environment, art educators at schools have expressed interest in using the guide as an educational tool in class. The current prototypes only make use of tactile and audio modalities; as future work, other modalities such as smell should be explored to determine how they might improve visual artwork exploration.
Tactile perception enables people with visual impairments to engage with artworks and real-life objects at a deeper level of abstraction. The development of tactile and multi-sensory assistive technologies has expanded their opportunities to appreciate the visual arts. Tactile color pictograms [4] convey color sensation through touch along with other physical properties of an artwork, such as contour, size, texture, geometry, and orientation. However, the tactile color patterns developed so far [4] are limited to fixed tactile color interpretations, which require ongoing resources. Jabbar et al. [5] proposed ColorWatch, a concept design that integrates colors from Goethe's color triangle and the Munsell color system with an analog wristwatch, allowing spatial color-to-tactile interpretation. They developed a tactile interface based on this concept, taking into account tactile actuation, color perception, and learnability for people with visual impairments. The proposed interface automatically translates reference colors into spatial tactile patterns in the manner of an analog wristwatch. A range of achromatic colors and six prominent basic colors, each with three levels of chroma and value, are considered for the cross-modal association. In addition, a simplified and affordable tactile color watch design was proposed. This scheme enables people with visual impairments to explore the colors of artworks or real-life objects by identifying reference colors through a color sensor and translating them to the tactile interface. Achromatic and monochromatic colors with chroma and value levels are associated with the cross-modal tactile interface, which manifests tactile patterns at angular positions; these patterns are transformed automatically to correspond to the reference color. The arrangement of the tactile patterns supports intuitive learning, since it is conveyed through the familiar layout of an analog wristwatch, and this integrated approach offers ease of learnability while conveying the essence of emotional or psychological states. Color identification tests on the developed prototype exhibited good recognition accuracy, and a usability evaluation based on the System Usability Scale together with a NASA-TLX workload assessment suggests that ColorWatch can help people with visual impairments identify colors, improving both museum accessibility and real-life color perception. The function of ColorWatch may be expanded to represent a gamut of 42 colors with 12 hues, based on the RYB color wheel originally described by Isaac Newton, with the 6 additional hues placed at uniform 30° angular distances, alternating with the existing chromatic hues.
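The sketch below illustrates the watch-face idea: six basic hues placed at uniform angular positions that a wearer can verbalize as clock positions. The hue ordering and 60° spacing are assumptions for illustration, not the published ColorWatch layout [5]:

```python
# Illustrative ColorWatch-style hue-to-angle mapping; the hue order
# and uniform 60-degree spacing are assumed, not taken from [5].

HUES = ["red", "orange", "yellow", "green", "blue", "purple"]

def hue_to_angle(hue: str) -> float:
    """Place the six basic hues at uniform angular positions around
    the watch face, like hour marks (60 degrees apart)."""
    return HUES.index(hue) * 360.0 / len(HUES)

def angle_to_clock_position(angle: float) -> int:
    """Express the angle as the nearest of 12 clock positions, which
    is how a wearer might verbalize the tactile pattern's position."""
    return (round(angle / 30.0) % 12) or 12

for h in HUES:
    a = hue_to_angle(h)
    print(f"{h:>6}: {a:5.1f} deg -> {angle_to_clock_position(a)} o'clock")
```

Doubling the hue count to 12, as the proposed extension suggests, would simply halve the angular spacing to 30°, with the new hues falling on the odd clock positions.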
Playing board games is important for people with visual impairments, as it promotes interactive socialization and communication skills, yet many board games are currently inaccessible to them. In [6], Miyakawa et al. proposed an auditory card game system that presents a card's contents to all players through auditory stimuli, so that everyone can play on equal terms regardless of visual impairment. The purpose of the paper is to determine whether the game allows fair competition for people with visual impairments and to clarify how the system's adjustable parameters affect the players. The effectiveness of the proposed system was verified by having the experiment's participants play "Auditory Uta-Karuta". The results suggest that the proposed system offers a viable design for accessible board games regardless of visual impairment. In a follow-up experiment, the impact of each system parameter on the players' perception of the board game was investigated to clarify an appropriate audio cue design method.
Although this proposal cannot improve the accessibility of all board games, it contributes to expanding the range of inclusive board games available to the visually impaired. Furthermore, clarifying the impact of each element of the system on the players will greatly assist in designing appropriate board games with the proposed system.
Contemporary art is evolving beyond simply looking at works, and the development of various sensory technologies has had a great influence on culture and art. Accordingly, opportunities for the visually impaired to appreciate visual artworks through other senses, such as hearing and touch, are expanding; however, limited sound expressiveness and a lack of portability reduce understandability and accessibility. Lee et al. [7] proposed a color and depth coding scheme for the visually impaired based on alternative sensory modalities: hearing (encoding color and depth information as 3D sound for audio description) and touch (used to trigger information such as color and depth through the interface). The proposed color coding scheme represents light, saturated, and dark variants of red, orange, yellow, yellow-green, green, blue-green, blue, and purple, and the system can be used both on mobile platforms and with 2.5D (relief) models. The authors presented a methodology for 3D sound color coding using head-related transfer functions (HRTFs): color hue is represented by simulating a sound source at the corresponding position on the color wheel, and the lightness of the color is reflected in the pitch of the sound. Experiments on the correlation between sound variables and depth showed that loudness correlates with perceived depth, so depth is represented by changing the sound's loudness and increasing the reverberation on top of the original sound codes. An identification test and a system usability test were also conducted: an identification accuracy of 97.88% showed that the system supports excellent recognition, and the results of the NASA-TLX test and a user experience test likewise indicated good usability. Experiments with visually impaired subjects will be conducted in future studies. This is a new attempt to express color: although there are many ways to use sound for this purpose, few use changes in a sound's position to express color precisely, and sound position is a variable that is very familiar to the visually impaired. The method thus opens a new direction for how art can be experienced by the visually impaired, though there is still room for improvement; further refinements and better sound processing will increase accuracy, usability, and ease of recognition. Since neither sighted people nor people with visual impairments had experienced the proposed 3D sound color coding before, no significant differences in perception ratings were observed between the two groups; however, extended testing will be necessary to analyze differences in the speed of perception between them.
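The following sketch illustrates the shape of such a mapping: hue becomes a virtual source azimuth on a circle (to be rendered with HRTFs), lightness becomes a pitch shift, and depth scales loudness and reverberation. All numeric choices and the hue ordering are illustrative assumptions, not the parameters used in [7]:

```python
# Hedged sketch of 3D sound color coding: hue -> azimuth on a color
# wheel, lightness -> pitch, depth -> loudness and reverberation.
# The hue order and all numbers are assumed, not taken from [7].

HUES = ["red", "orange", "yellow", "yellow-green",
        "green", "blue-green", "blue", "purple"]

def color_to_sound(hue: str, lightness: str, depth: float) -> dict:
    """Map one coded color plus a depth value in [0, 1] (0 = near)
    to the parameters a spatial-audio renderer would need."""
    azimuth = HUES.index(hue) * 360.0 / len(HUES)   # position on the wheel
    pitch_shift = {"light": +4, "saturated": 0, "dark": -4}[lightness]
    gain_db = -12.0 * depth          # farther features sound quieter...
    reverb_mix = 0.2 + 0.6 * depth   # ...and more reverberant
    return {"azimuth_deg": azimuth,
            "pitch_shift_semitones": pitch_shift,
            "gain_db": gain_db,
            "reverb_mix": reverb_mix}

print(color_to_sound("blue-green", "light", depth=0.75))
```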
Visually impaired visitors experience many limitations when visiting museum exhibits, such as a lack of cognitive and sensory access to exhibits or replicas. As noted above, opportunities for people with visual impairments to appreciate visual artworks through senses such as hearing, touch, and smell are expanding; however, it remains uncommon to provide a multi-sensory interactive interface for color recognition that integrates patterns, sounds, temperatures, and scents. Cho et al. [8] attempted to convey color cognition to the visually impaired by taking advantage of multi-sensory color coding. In previous works, musical melodies with different combinations of pitch, timbre, velocity, and tempo were used to distinguish vivid (i.e., saturated), light, and dark colors; however, it proved difficult to distinguish among warm, cool, light, and dark colors using sound cues alone. Presenting poems alongside artworks has the advantage of enhancing a work's expressiveness. Although several researchers have previously matched art with poetry, to the best of the author's knowledge, no prior art–poetry matching conveys different color dimensions such as warmth, coolness, lightness, and darkness, nor is there previous work suggesting a method of providing poems suited to the various senses (sight, touch, hearing, and smell) engaged by an artwork. This motivated the authors to develop a systematic algorithm for generating poetry that can be applied consistently to artworks, especially to help visually impaired users perceive the colors in them. The paper therefore builds a multi-sensory color coding system combining sound and poetry, where the poem represents additional color dimensions, including warm and cool variants of red, orange, yellow, green, blue, and purple. To this end, an implicit association test was performed to identify the most suitable poem among candidate poems for representing the colors in an artwork, by finding common semantic directivity between a candidate poem (with voice modulation) and the artwork in terms of the light, dark, and warm color dimensions. Finally, a system usability test confirmed that poems can be an effective supplement for distinguishing vivid, light, and dark colors across color appearance dimensions such as warm and cold.
Kim et al. [9] attempted to improve the user experience of appreciating visual artworks with soundscape music chosen by a deep neural network trained with weakly supervised learning. The baseline was improved from several perspectives, including the modeling, learning, and domain adaptation methods. They also proposed a multi-faceted approach to measuring ambiguous concepts such as subjective fitness, implicit senses, immersion, and availability, and showed improvements in the appreciation experience, including metaphorical and psychological transferability, time distortion, and cognitive absorption, through in-depth experiments involving 70 participants. The proposed method can also help spread soundscape-based media art by supporting traditional soundscape design, and it may help people with visual impairments appreciate artworks through its application to a multi-modal media art guide platform. However, the research has three major limitations. First, the system does not represent state-of-the-art deep neural network performance; in particular, models with low accuracy from the feature extraction perspective were selected for mutual learning, so models with improved audio feature representation are required. Second, the current database comprises only 2000 music pieces, which are also limited in genre; a wider spectrum of music from various cultures and eras should be used in future studies. Third, although the experiments involved 70 individuals, each experiment was conducted on a group of only 10 people, so the results cannot be generalized; larger-scale experiments are needed.
For years, HCI research has focused on the senses of hearing and sight. Recently, however, there has been increased interest in other senses, such as smell and touch, accompanied by growing research on sensory substitution techniques and multi-sensory systems. Contemporary art has been influenced by the same trend, and the number of artists creating novel multi-sensory works has increased substantially. As a result, the opportunities for visually impaired people to experience artworks in different ways are also expanding. Despite all this, research on multimodal systems for experiencing the visual arts is not extensive, and user tests comparing different modalities and senses, particularly in the field of art, are insufficient. Bartolome et al. [10] designed a multi-sensory mapping to convey color to visually impaired people using musical sounds and temperature cues. Through user tests and surveys with 18 participants, a multi-sensory system was designed that allows the user to distinguish and experience 24 colors (6 hues across the 4 color dimensions of dark, bright, warm, and cold). The authors sought the best way to combine previous unisensory temperature–color and sound–color codes into a satisfactory multi-sensory cross-modal code, and a semantic study of the musical sound cues and temperatures used in those methods was conducted. The tests consisted of several semantic, correlational, adjective-based surveys comparing the different modalities to find the best way to express colors through musical sounds and temperature cues, based on previously established sound–color and temperature–color coding algorithms; the resulting final algorithm was then tested with 12 more users. The results showed that musical sounds and temperatures can substitute for color hues and color dimensions, and the data guided the authors in designing an optimal multi-sensory temperature–sound method. This work can encourage researchers to consider thermal and sound multi-sensory interaction both as a substitute for color and as a way to improve the accessibility of visual artworks and color experiences for visually impaired people.
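A compact sketch of pairing the two modalities follows. As with the earlier sound-coding sketch, the instrument assignments and temperatures are assumed values used only to show how 6 hues and 4 dimensions combine into 24 cue pairs, not the mapping derived in [10]:

```python
# Pairing a per-hue musical sound with a temperature cue to span
# 6 hues x 4 dimensions = 24 colors, in the spirit of [10].
# Instruments and temperatures are assumed, not from the paper.

HUE_TO_SOUND = {"red": "trumpet", "orange": "saxophone",
                "yellow": "glockenspiel", "green": "flute",
                "blue": "piano", "purple": "cello"}

DIMENSION_CUES = {  # thermal cue + sound modifier per color dimension
    "warm":   {"peltier_c": 36.0, "octave_shift": 0},
    "cold":   {"peltier_c": 22.0, "octave_shift": 0},
    "bright": {"peltier_c": 29.0, "octave_shift": +1},
    "dark":   {"peltier_c": 29.0, "octave_shift": -1},
}

def encode(hue: str, dimension: str) -> dict:
    """Return the multi-sensory cue pair for one of the 24 colors."""
    return {"sound": HUE_TO_SOUND[hue], **DIMENSION_CUES[dimension]}

print(encode("green", "warm"))
# {'sound': 'flute', 'peltier_c': 36.0, 'octave_shift': 0}
```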
The development of assistive technologies is improving independent access to visual artworks for blind and visually impaired people through non-visual channels. Current single-modality tactile and auditory approaches to communicating color content must compromise between conveying a broad color palette and ease of learning, and they suffer from limited expressiveness. Cavazos Quero et al. [11] proposed a multi-sensory color code system that uses sounds and scents to represent up to 30 colors: melodies express each color's hue, while scents express the saturation, lightness (light or dark), and temperature (warm or cool) color dimensions.
In collaboration with 18 participants, the color identification rate of the multi-sensory approach was evaluated: seven participants (39%) improved their identification rate, five (28%) remained the same, and six (33%) performed worse compared with an audio-only color code alternative. Overall, the multi-sensory color code system improved the participants' convenience and confidence. The scent selection and pairing were made through a semantic correspondence survey, also conducted with 18 participants. The color code system was evaluated to determine whether the multi-sensory approach reduced the effort required to recognize the encoded colors and improved color identification compared with the commonly used unisensory method. The results suggest that the multi-sensory approach did improve color identification, but not for everyone; the cause appeared to be the extra cognitive effort and sensory overload experienced by some participants. In addition, the authors integrated the color code into a sensory substitution device prototype to determine whether it could be more suitable and expressive for exploring the color content of visual art than a tactile graphics alternative. The results indicate that the multi-sensory prototype is more convenient and improves confidence when exploring the color content of visual artworks; they also suggest suitability for artwork exploration, since the multi-sensory stimuli improve the experience of and reaction to the artwork. In the future, the authors would like to expand the color code with more color audio–scent pairs to study its applicability across different styles of visual artwork. In this work, two non-visual sensory reproductions of artworks were used to evaluate the proposed color coding system; future experiments could use more diverse styles of non-visual sensory reproduction to further support the results. It is also relevant to explore the semantic incoherence that could arise from color code usage and its influence on the experience and interpretation of an artwork. While experiencing color is just a fraction of the art appreciation process, this work may contribute toward designing and studying accessible art appreciation frameworks for all.

3. Review Papers

As discussed above, it remains uncommon to provide an interactive interface for color recognition that applies patterns, sounds, temperatures, or scents. Cho [12] reviewed how the visual elements of an artwork can be conveyed to the visually impaired through various sensory elements. In addition, to open a new perspective on the appreciation of artworks, techniques for expressing color codes by integrating patterns, temperatures, scents, music, and vibrations were explored, and future research topics were presented.
This review describes how a holistic, synesthesia-like experience can convey the meaning and contents of an artwork to people with visual impairment through rich multi-sensory appreciation. Pictograms, temperatures, scents, music, and new forms incorporating them were explored as ways of conveying the colors in artworks to the visually impaired. Methods that allow people with visual impairments to engage with artwork through a variety of senses, including touch and sound, help them appreciate artwork at a deeper level than hearing or touch alone can achieve. The development of such art appreciation aids will ultimately improve their cultural enjoyment and strengthen their access to culture and the arts. This new concept of post-visual art appreciation aids expands opportunities for the non-visually impaired as well as the visually impaired to enjoy works of art at the level of weak synesthesia, and continuous efforts to enhance accessibility can break down the boundaries between the disabled and the non-disabled in the field of culture and the arts. In addition, the multi-sensory expression and delivery tools developed here can be used as educational tools to increase the accessibility and usability of products and artworks through multi-modal interaction. Training with the multi-sensory experiences introduced in this review may lead to more vivid visual imagery, or "seeing with the mind's eye".
Several studies have sought to improve the accessibility of images on touchscreen devices for screen reader users. Oh et al. [13] conducted a systematic review of 33 papers to gain a holistic understanding of existing approaches and to suggest a research road map based on the identified gaps. The authors identified the types of images, visual information, input devices, and feedback modalities that have been studied for improving image accessibility on touchscreens. The findings revealed that there is little research on automating the generation of image-related information; moreover, the involvement of screen reader users is mostly limited to evaluations, although input from target users during the design process is particularly important for developing assistive technologies. Two recent studies on the accessibility of artwork and comics (AccessArt and AccessComics) are also presented. Based on the key challenges identified, the authors suggested a research agenda for improving image accessibility for screen reader users. The review showed that image types other than maps, graphs, and geometric shapes, such as artwork and comics, have rarely been studied, and that only about one third of the papers provided audio and haptic multi-modal feedback. Ways to collect image descriptions were outside the scope of most studies, suggesting that the automatic retrieval of image-related information is one of the bottlenecks to making images accessible at scale. Finally, while most previous studies did not involve people who are blind or have low vision during the system design process, future studies should invite target users early and reflect their comments in design decisions.

4. Conclusions and Future Works

Synesthesia appears in all forms of art and provides a multisensory form of knowledge and communication. It is not subordinate to the individual senses; rather, through science and technology it can expand the aesthetic, functioning as a truly multidisciplinary fusion that extends the practical possibilities of theory through art. Synesthesia is commonly divided into strong and weak forms. Strong synesthesia is characterized by a vivid image in one sensory modality in response to stimulation of another sense. Weak synesthesia, on the other hand, is characterized by cross-sensory correspondences expressed through language or by perceptual similarities and interactions. Weak synesthesia is common, easily identified and remembered, and can be developed through learning; it could therefore serve as the basis of a new educational method using multisensory techniques. Synesthetic experience is the result of a unified mind, so all experiences are synesthetic to some extent, and the most prevalent form is the conversion of sound into color. In art, synesthesia and metaphor are combined, and through art the co-sensory experience becomes communicative; its origins can be found in painting, poetry, and music (the visual, literary, and musical arts), and to some extent all forms of art are co-sensory. The core of an artwork is its spirit, but grasping that spirit requires a medium that can be perceived not only by the single sense intended but also through various senses. In other words, the human brain creates an image by integrating multiple non-visual senses and matching it against previously stored images, finding and storing new things through association; so-called intuition thus appears mostly in synesthesia. To understand reality as fully as possible, one must experience it in as many forms as possible. Synesthesia therefore offers a richer experience of reality than the separate senses and can generate unusually strong memories. An intensive review of multi-sensory experiences and color recognition in visual arts appreciation by persons with visual impairment, along with promising future work, can be found in Cho's work [12].

Funding

This research was funded by the Science Technology and Humanity Converging Research Program of the National Research Foundation of Korea, grant number 2018M3C1B6061353.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Iranzo Bartolomé, J.; Cho, J.-D.; Cavazos Quero, L.; Jo, S.; Cho, G. Thermal Interaction for Improving Tactile Artwork Depth and Color-Depth Appreciation for Visually Impaired People. Electronics 2020, 9, 1939.
  2. Cho, J.-D.; Jeong, J.; Kim, J.H.; Lee, H. Sound Coding Color to Improve Artwork Appreciation by People with Visual Impairments. Electronics 2020, 9, 1981.
  3. Cavazos Quero, L.; Iranzo Bartolomé, J.; Cho, J. Accessible Visual Artworks for Blind and Visually Impaired People: Comparing a Multimodal Approach with Tactile Graphics. Electronics 2021, 10, 297.
  4. Cho, J.-D.; Quero, L.C.; Bartolomé, J.I.; Lee, D.W.; Oh, U.; Lee, I. Tactile Colour Pictogram to Improve Artwork Appreciation of People with Visual Impairments. Color Res. Appl. 2021, 46, 103–116.
  5. Jabbar, M.S.; Lee, C.-H.; Cho, J.-D. ColorWatch: Color Perceptual Spatial Tactile Interface for People with Visual Impairments. Electronics 2021, 10, 596.
  6. Miyakawa, H.; Kuratomo, N.; Salih, H.E.B.; Zempo, K. Auditory Uta-Karuta: Development and Evaluation of an Accessible Card Game System Using Audible Cards for the Visually Impaired. Electronics 2021, 10, 750.
  7. Lee, Y.; Lee, C.-H.; Cho, J.-D. 3D Sound Coding Color for the Visually Impaired. Electronics 2021, 10, 1037.
  8. Cho, J.-D.; Lee, Y. ColorPoetry: Multi-Sensory Experience of Color with Poetry in Visual Arts Appreciation of Persons with Visual Impairment. Electronics 2021, 10, 1064.
  9. Kim, Y.; Jeong, H.; Cho, J.-D.; Shin, J. Construction of a Soundscape-Based Media Art Exhibition to Improve User Appreciation Experience by Using Deep Neural Networks. Electronics 2021, 10, 1170.
  10. Bartolome, J.I.; Cho, G.; Cho, J.-D. Multi-Sensory Color Expression with Sound and Temperature in Visual Arts Appreciation for People with Visual Impairment. Electronics 2021, 10, 1336.
  11. Cavazos Quero, L.; Lee, C.-H.; Cho, J.-D. Multi-Sensory Color Code Based on Sound and Scent for Visual Art Appreciation. Electronics 2021, 10, 1696.
  12. Cho, J.-D. A Study of Multi-Sensory Experience and Color Recognition in Visual Arts Appreciation of People with Visual Impairment. Electronics 2021, 10, 470.
  13. Oh, U.; Joh, H.; Lee, Y. Image Accessibility for Screen Reader Users: A Systematic Review and a Road Map. Electronics 2021, 10, 953.