User Experience for Advanced Human-Computer Interaction II

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (31 March 2023) | Viewed by 43004
Related Special Issue: User Experience for Advanced Human–Computer Interaction

Special Issue Editors

Guest Editor
Division of Future Convergence (HCI Science Major), Dongduk Women's University, Seoul 02748, Republic of Korea
Interests: HCI (human–computer interaction); UX (user experience); UCD (user-centered design); ergonomic design
Special Issues, Collections and Topics in MDPI journals

Guest Editor
Division of Future Convergence (HCI Science Major), Dongduk Women's University, Seoul 02748, Republic of Korea
Interests: deep learning; unstructured data analysis; affective engineering; human factors; UX (user experience)

Guest Editor
Department of Industrial Engineering, Seoul National University, Seoul 08826, Republic of Korea
Interests: ergonomics; human factors; user-centered design; human interface design; affective engineering; Kansei Engineering

Special Issue Information

Dear Colleagues,

Recent advances in technology have led to new high-technology products that interact with users in various ways (e.g., tactile, gesture, voice, and motion-recognition interfaces), such as smart products, artificial intelligence speakers, and virtual and augmented reality devices. Moreover, with the wide adoption of the Internet of Things (IoT), a technology paradigm envisioned as a global network of devices capable of interacting with each other, consumers can now control multiple devices at the same time. However, the sheer variety of these interactions can make products uncomfortable to use. In particular, products developed without considering users' needs and behavior are difficult for consumers to get used to and leave them dissatisfied. Furthermore, emotions are essential to communication, cognition, learning, and rational decision making, yet there is still a lack of studies on human–computer interactions that understand emotions and react to humans accordingly. Therefore, we anticipate growing research on the user experience (UX) and user-centered design (UCD) of interactions between new products, devices, and services and their users, based on advanced technologies that understand human emotions.

This Special Issue welcomes original, unpublished research contributions including, but not limited to, methodological, quantitative, qualitative, and mixed-methods studies focusing on issues around consumer interaction with new technologies.

Prof. Ilsun Rhiu
Prof. Wonjoon Kim
Prof. Myung Hwan Yun
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • user experience
  • user-centered design
  • usability
  • human–computer interaction
  • smart product
  • virtual reality
  • augmented reality
  • Internet of Things
  • affective computing/engineering
  • Kansei design
  • sentiment analysis

Published Papers (9 papers)


Research

19 pages, 2356 KiB  
Article
Comparative Analysis of Usability and Accessibility of Kiosks for People with Disabilities
by Yuryeon Lee, Sunyoung Park, Jaehyun Park and Hyun K. Kim
Appl. Sci. 2023, 13(5), 3058; https://doi.org/10.3390/app13053058 - 27 Feb 2023
Cited by 2 | Viewed by 3334
Abstract
Owing to technological advancements, kiosks have become more prevalent in public places. When using such kiosks, elderly persons and people with disabilities face problems related to accessibility and usability, such as difficulties in kiosk operations (e.g., menu selection) and in accessing the kiosk space. Previous studies have usually included accessibility as a subset of usability. However, in this study, we aim to redefine the relationship between these two concepts with a focus on newly emerging kiosk devices. First, we performed a literature review to thoroughly analyze these concepts. Then, we conducted a focus group interview (FGI) targeting people with visual, hearing, and physical impairments to learn about the difficulties that these people face when using kiosks. Finally, we analyzed the characteristics of accessibility and usability related to kiosks and designed a diagram that illustrated the relationship between them. While accessibility and usability shared similarities regarding consistency and user control, they differed markedly in their subcategory items; many opinions on accessibility were related to essential functions, whereas many on usability were related to psychological factors such as additional functions or personal preferences. These results can be useful when creating laws and guidelines regarding the accessibility and usability of kiosks or when developing kiosk functions. Full article
(This article belongs to the Special Issue User Experience for Advanced Human-Computer Interaction II)

19 pages, 10928 KiB  
Article
Communication in Human–AI Co-Creation: Perceptual Analysis of Paintings Generated by Text-to-Image System
by Yanru Lyu, Xinxin Wang, Rungtai Lin and Jun Wu
Appl. Sci. 2022, 12(22), 11312; https://doi.org/10.3390/app122211312 - 8 Nov 2022
Cited by 21 | Viewed by 10340
Abstract
In recent years, art creation using artificial intelligence (AI) has started to become a mainstream phenomenon. One of the latest applications of AI is to generate visual artwork from natural language descriptions, where anyone can interact with it to create thousands of artistic images with minimal effort, which provokes the questions: what is the essence of artistic creation, and who can create art in this era? Considering that, in this study, a theoretical communication framework was adopted to investigate the difference in interaction with the text-to-image system between artists and nonartists. In this experiment, ten artists and ten nonartists were invited to co-create with Midjourney. Their actions and reflections were recorded, and two sets of generated images were collected for the visual question-answering task, with a painting created by the artist as a reference sample. A total of forty-two subjects with artistic backgrounds participated in the evaluation experiment. The results indicated differences between the two groups in their creation actions and their attitude toward AI, while the technology blurred the difference in the perception of the results caused by the creator's artistic experience. In addition, attention should be paid to communication on the effectiveness level for a better perception of the artistic value. Full article
(This article belongs to the Special Issue User Experience for Advanced Human-Computer Interaction II)

17 pages, 2585 KiB  
Article
Can Gestural Filler Reduce User-Perceived Latency in Conversation with Digital Humans?
by Junyeong Kum and Myungho Lee
Appl. Sci. 2022, 12(21), 10972; https://doi.org/10.3390/app122110972 - 29 Oct 2022
Viewed by 1798
Abstract
The demand for conversational systems with digital humans has increased with the development of artificial intelligence. Latency can occur in such conversational systems because of natural language processing and network issues, which can deteriorate the user's performance and the availability of the systems. There have been attempts to mitigate user-perceived latency by using conversational fillers in human–agent interaction and human–robot interaction. However, non-verbal cues, such as gestures, have received less attention in such attempts, despite their essential roles in communication. Therefore, we designed gestural fillers for digital humans. This study examined the effects of matching (or mismatching) the gestural filler to the conversation type, and also compared the gestural fillers with conversational fillers. The results showed that gestural fillers mitigate user-perceived latency and affect users' willingness, impression, perceived competence, and discomfort in conversations with digital humans. Full article
(This article belongs to the Special Issue User Experience for Advanced Human-Computer Interaction II)

20 pages, 4763 KiB  
Article
Research on Sound Imagery of Electric Shavers Based on Kansei Engineering and Multiple Artificial Neural Networks
by Zhe-Hui Lin, Jeng-Chung Woo, Feng Luo and Yu-Tong Chen
Appl. Sci. 2022, 12(20), 10329; https://doi.org/10.3390/app122010329 - 13 Oct 2022
Cited by 5 | Viewed by 1377
Abstract
The electric shaver market in China reached 26.3 billion RMB by 2021. Nowadays, in addition to functional satisfaction, consumers are increasingly focused on the emotional imagery conveyed by products through multiple senses: electric shavers are not only shaped to attract consumers, but their product sound also conveys a unique emotional imagery. Based on Kansei engineering and artificial neural networks, this research explored the emotional imagery conveyed by the sound of electric shavers. First, we collected a wide sample of electric shavers on the market (230 types) and obtained consumers' perceptual vocabulary (85,710 items) through a web crawler. The multidimensional scaling method and cluster analysis were used to condense the sample into 34 representative samples and 3 groups of representative Kansei words; then, the semantic differential method was used to assess the users' emotional evaluation values. The sound design elements (including item and category) of the samples were collected and classified using Heardrec Devices and ArtemiS 13.6 software, and, finally, multiple linear and non-linear correlation prediction models (four types) between the sound design elements of the electric shaver and the users' emotional evaluation values were established by quantification theory type I, a general regression neural network, a back-propagation neural network (BPNN), and a genetic algorithm-based BPNN. The models were validated by paired-sample t-tests, and all of them had good reliability, with the genetic algorithm-based BPNN having the best accuracy. In this research, four linear and non-linear Kansei prediction models were constructed. The aim was to apply higher-accuracy prediction models to the prediction of electric shaver sound imagery, while giving specific and accurate sound design metrics and references. Full article
(This article belongs to the Special Issue User Experience for Advanced Human-Computer Interaction II)
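The prediction pipeline this abstract describes (sound design elements mapped to users' emotional evaluation values by a back-propagation neural network) can be sketched with scikit-learn. The feature names, data, and network size below are invented placeholders, and the paper's GA-based hyperparameter search is omitted:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical sound design elements for 34 shaver samples:
# [loudness (sone), sharpness (acum), roughness (asper), tonality]
X = rng.uniform([10, 1.0, 0.5, 0.0], [40, 3.0, 2.5, 0.4], size=(34, 4))

# Hypothetical semantic-differential scores for one Kansei word
# pair (e.g., "harsh-gentle") on a 7-point scale.
y = 4.0 - 0.08 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, 34)

# A small back-propagation network (one hidden layer) standing in
# for the paper's BPNN; inputs are standardized first.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, y)
pred = model.predict(X)
print(pred.shape)  # (34,)
```

In the paper's setup, a genetic algorithm would additionally search over the network's hyperparameters; here a fixed architecture keeps the sketch short.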

16 pages, 3151 KiB  
Article
Comparison of Cognitive Differences of Artworks between Artist and Artistic Style Transfer
by Yikang Sun, Yanru Lyu, Po-Hsien Lin and Rungtai Lin
Appl. Sci. 2022, 12(11), 5525; https://doi.org/10.3390/app12115525 - 29 May 2022
Cited by 2 | Viewed by 2397
Abstract
This study explores how audiences responded to perceiving and distinguishing the paintings created by AI or human artists. The stimuli were six paintings which were completed by AI and human artists. A total of 750 subjects participated to identify which ones were completed by human artists or by AI. Results revealed that most participants could correctly distinguish between paintings made by AI or human artists and that accuracy was higher for those who used “intuition” as the criterion for judgment. The participants preferred the paintings created by human artists. Furthermore, there were big differences in the perception of the denotation and connotation of paintings between audiences of different backgrounds. The reasons for this will be analyzed in subsequent research. Full article
(This article belongs to the Special Issue User Experience for Advanced Human-Computer Interaction II)

15 pages, 6141 KiB  
Article
From Pigments to Pixels: A Comparison of Human and AI Painting
by Yikang Sun, Cheng-Hsiang Yang, Yanru Lyu and Rungtai Lin
Appl. Sci. 2022, 12(8), 3724; https://doi.org/10.3390/app12083724 - 7 Apr 2022
Cited by 9 | Viewed by 7337
Abstract
From entertainment to medicine and engineering, artificial intelligence (AI) is now being used in a wide range of fields, yet the extent to which AI can be effectively applied to the creative arts remains to be seen. In this research, a neural algorithm of artistic style was used to generate six AI paintings and these were compared with six paintings on the same theme by an amateur painter. Two sets of paintings were compared by 380 participants, 70 percent of whom had previous painting experience. Results indicate that color and line are the key elements of aesthetic appreciation. Additionally, the style transfer had a marked effect on the viewer when there was a close correspondence between the painting and the style transfer but not when there was little correspondence, indicating that AI is of limited effectiveness in modifying an existing style. Although the use of neural networks simulating human learning has come a long way in narrowing the gap between paintings produced by AI and those produced in the traditional fashion, there remains a fundamental difference in terms of aesthetic appreciation since paintings generated by AI are based on technology, while those produced by humans are based on emotion. Full article
(This article belongs to the Special Issue User Experience for Advanced Human-Computer Interaction II)

18 pages, 5407 KiB  
Article
What Does the Ideal Built-In Car Navigation System Look Like?—An Investigation in the Central European Region
by Fanni Vörös, Georg Gartner, Michael P. Peterson and Béla Kovács
Appl. Sci. 2022, 12(8), 3716; https://doi.org/10.3390/app12083716 - 7 Apr 2022
Cited by 4 | Viewed by 2280
Abstract
Driving is based on effective navigation. When using a navigation device, the user interface, the amount and quality of the underlying data, and its representation all affect the quality of navigation. This study evaluates whether drivers in three different countries consider these devices to be useful and what functionality they would prefer. An online questionnaire was used to assess built-in navigation systems. The findings from 213 respondents show that current car GPSs are overloaded with features. Regardless of country, drivers simply require more basic functionality in the interface. It was also noted that the embedded functions in these devices are not fully utilized. In addition, many people use the navigation service to enter a new address while the car is moving. It may be worth examining how this option can be better implemented. Full article
(This article belongs to the Special Issue User Experience for Advanced Human-Computer Interaction II)

37 pages, 21822 KiB  
Article
Comparing the Effectiveness of Speech and Physiological Features in Explaining Emotional Responses during Voice User Interface Interactions
by Danya Swoboda, Jared Boasen, Pierre-Majorique Léger, Romain Pourchon and Sylvain Sénécal
Appl. Sci. 2022, 12(3), 1269; https://doi.org/10.3390/app12031269 - 25 Jan 2022
Cited by 7 | Viewed by 3113
Abstract
The rapid rise of voice user interface technology has changed the way users traditionally interact with interfaces, as tasks requiring gestural or visual attention are replaced by vocal commands. This shift has equally affected designers, required to disregard common digital interface guidelines in order to adapt to non-visual user interaction (No-UI) methods. The guidelines regarding voice user interface evaluation are far from the maturity of those surrounding digital interface evaluation, resulting in a lack of consensus and clarity. Thus, we sought to contribute to the emerging literature regarding voice user interface evaluation and, consequently, assist user experience professionals in their quest to create optimal vocal experiences. To do so, we compared the effectiveness of physiological features (e.g., phasic electrodermal activity amplitude) and speech features (e.g., spectral slope amplitude) to predict the intensity of users' emotional responses during voice user interface interactions. We performed a within-subjects experiment in which the speech, facial expression, and electrodermal activity responses of 16 participants were recorded during voice user interface interactions that were purposely designed to elicit frustration and shock, resulting in 188 analyzed interactions. Our results suggest that the physiological measure of facial expression and its extracted feature, automatic facial expression-based valence, is most informative of emotional events lived through voice user interface interactions. By comparing the unique effectiveness of each feature, theoretical and practical contributions may be noted, as the results contribute to voice user interface literature while providing key insights favoring efficient voice user interface evaluation. Full article
(This article belongs to the Special Issue User Experience for Advanced Human-Computer Interaction II)
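The core comparison in this study, asking which extracted feature best explains emotional-response intensity, can be sketched as a cross-validated single-feature regression contest. The data below are synthetic stand-ins, not the study's physiological or speech measurements:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 188  # same number of interactions as analyzed in the study

# Synthetic stand-ins: a facial-expression valence feature that truly
# drives the response, and a weakly related speech feature.
valence = rng.normal(0, 1, n)
spectral_slope = 0.3 * valence + rng.normal(0, 1, n)
intensity = 2.0 * valence + rng.normal(0, 0.5, n)  # emotional response

def explained(feature):
    """Mean cross-validated R^2 of a single-feature linear model."""
    X = feature.reshape(-1, 1)
    return cross_val_score(
        LinearRegression(), X, intensity, cv=5, scoring="r2"
    ).mean()

r2_valence = explained(valence)
r2_slope = explained(spectral_slope)
print(r2_valence > r2_slope)  # the stronger feature wins here
```

Ranking features by out-of-sample explained variance, rather than in-sample fit, mirrors the idea of comparing each feature's "unique effectiveness" without rewarding overfitting.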

23 pages, 35469 KiB  
Article
Visual Sentiment Analysis Using Deep Learning Models with Social Media Data
by Ganesh Chandrasekaran, Naaji Antoanela, Gabor Andrei, Ciobanu Monica and Jude Hemanth
Appl. Sci. 2022, 12(3), 1030; https://doi.org/10.3390/app12031030 - 19 Jan 2022
Cited by 25 | Viewed by 8998
Abstract
Analyzing the sentiments of people from social media content through text, speech, and images is becoming vital in a variety of applications. Many existing research studies on sentiment analysis rely on textual data, yet users of social media increasingly share photographs and videos in addition to text. Compared to text, images are said to convey sentiments much more effectively, so there is a pressing need to build a sentiment analysis model based on images from social media. In our work, we employed different transfer learning models, including the VGG-19, ResNet50V2, and DenseNet-121 models, to perform sentiment analysis based on images. They were fine-tuned by freezing and unfreezing some of the layers, and their performance was boosted by applying regularization techniques. We used the Twitter-based images available in the Crowdflower dataset, which contains URLs of images with their sentiment polarities. Our work also presents a comparative analysis of these pre-trained models in the prediction of image sentiments on our dataset. The accuracies of our fine-tuned transfer learning models involving VGG-19, ResNet50V2, and DenseNet-121 are 0.73, 0.75, and 0.89, respectively. Compared to previous attempts at visual sentiment analysis, which used a variety of machine and deep learning techniques, our model improved accuracy by about 5% to 10%. According to the findings, the fine-tuned DenseNet-121 model outperformed the VGG-19 and ResNet50V2 models in image sentiment prediction. Full article
(This article belongs to the Special Issue User Experience for Advanced Human-Computer Interaction II)
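The fine-tuning strategy this abstract mentions, freezing a pretrained backbone and training only a new classification head, can be illustrated with a toy NumPy network. The "pretrained" weights here are random placeholders rather than an actual VGG-19, ResNet50V2, or DenseNet-121:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup: 64-dim "image features" with binary sentiment labels.
X = rng.normal(0, 1, (200, 64))
w_true = rng.normal(0, 1, 64)
y = (X @ w_true > 0).astype(float)

# "Pretrained backbone": a fixed linear layer + ReLU. Freezing means
# these weights are never updated during fine-tuning.
W_backbone = rng.normal(0, 0.1, (64, 32))

def backbone(X):
    return np.maximum(X @ W_backbone, 0.0)  # frozen feature extractor

# New classification head, trained with plain gradient descent on the
# logistic loss (the analogue of training only the unfrozen top layers).
w_head = np.zeros(32)
b_head = 0.0
H = backbone(X)  # features computed once; backbone never changes
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(H @ w_head + b_head)))  # sigmoid output
    grad_w = H.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w_head -= 0.5 * grad_w  # only head parameters change;
    b_head -= 0.5 * grad_b  # W_backbone stays frozen throughout

acc = np.mean((1.0 / (1.0 + np.exp(-(H @ w_head + b_head))) > 0.5) == y)
print(acc)
```

In a real pipeline one would then unfreeze some top backbone layers and continue training at a lower learning rate; the sketch stops at the frozen-backbone stage to keep the idea visible.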
