Vision Science and Technology in Human Computer Interaction Systems

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 July 2024) | Viewed by 24301

Special Issue Editors


Guest Editor
Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, 16145 Genoa, Italy
Interests: natural and artificial vision perception; neuromorphic computing; perception and action; translational vision science

Guest Editor
Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, 16145 Genoa, Italy
Interests: neuromorphic computing; neurosensory engineering; VR neural rehabilitation; cognitive neuroscience

Special Issue Information

Dear Colleagues,

Vision is the predominant human sense in terms of accuracy and reliability. Through it we extract contactless information from the outer world, build cognitive models, develop spatial relationships, learn situations, and even exploit our eyes to communicate intentions or goals to our interaction counterparts in shared workspaces. On the artificial side, theoretical and computational advances in vision technology have reached levels of reliability that enable fast and efficient human–machine interactions and natural, intuitive bidirectional communication. Current interaction systems have greatly benefitted from the inclusion of advanced sensing and computing capabilities that are adapted to how humans perceive and interact with the real world. This has challenged the traditional device-centric view of human–computer interaction by overcoming the traditional concepts of input and output. Human–machine interaction has now reached a level of maturity that allows it to go one step further: to extend the boundaries of human–machine engagement from the focus of attention to the periphery of our senses, fulfilling the true interaction potential in a shared space.

The goal of this Special Issue is to collect contributions that demonstrate how evidence from vision science coupled with vision-based technology can extend, adapt, and smooth users’ experience of HCI (e.g., by including multiuser or social interaction, covert attention, and prediction).

Contributions on real-world applications, novel and nonconventional vision sensors, and HCI based on human perception models are encouraged.

Topics relevant to this Special Issue include, but are not limited to, the following:

  • RGB-D cameras and time-of-flight (ToF) systems.
  • Nonconventional vision sensors, such as event-based cameras.
  • Virtual and augmented physical reality.
  • Algorithms and techniques for scene understanding, action understanding, and situation awareness.
  • Gesture recognition.
  • Ambient intelligence.
  • Emotion detection and simulation.
  • Models of human perception.
  • Anticipatory user interfaces based on human models.
  • Affective and social signaling.
  • Human–technology symbiosis.

Dr. Silvio P. Sabatini
Dr. Andrea Canessa
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, authors can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • vision-based sensing
  • perceptual rendering
  • human-centered design
  • cognitive ergonomics
  • evaluating interactive systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research


23 pages, 10301 KiB  
Article
The Effects of Layout Order on Interface Complexity: An Eye-Tracking Study for Dashboard Design
by Nuowen Zhang, Jing Zhang, Shangsong Jiang and Weijia Ge
Sensors 2024, 24(18), 5966; https://doi.org/10.3390/s24185966 - 14 Sep 2024
Cited by 5 | Viewed by 1994
Abstract
This study investigated the effect of layout order on the complexity of the dashboard interface based on screen-based eye trackers. By simplifying and abstracting dashboard interfaces and incorporating subjective ratings (symmetry and unity calculations), we successfully manipulated the levels of complexity and layout order of the interface materials. Using four types of eye movement data (total fixation count, total gaze duration, scanning paths, and hotspot maps) and behavioral data, we compared participants’ visual search behavior on interfaces with different layout orders and complexity levels. Experiment 1 revealed a significant interaction between layout order and interface complexity, with participants performing significantly better in the high-level layout order condition. Experiment 2 confirmed that the position of the core chart plays a crucial role in users’ visual search behavior and that the optimal layout order for the dashboard is to place the core chart on the left side of the interface’s horizontal axis, with partial symmetry in the no-core chart areas. This study highlights the effectiveness of eye-tracking techniques in user interface design research and provides valuable insights into optimizing dashboard interface design. Designers should adopt the design principle of “order is more” in addition to “less is more” and consider designing the core chart in the left-center position.
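The fixation metrics used in this study (total fixation count and total gaze duration) can be derived directly from an eye tracker's fixation events. A minimal sketch, assuming fixations are exported as (start_ms, end_ms) tuples per trial; the function and data names are illustrative, not taken from the study:

```python
def fixation_metrics(fixations):
    """Compute total fixation count and total gaze duration (ms)
    from a list of (start_ms, end_ms) fixation events."""
    total_count = len(fixations)
    total_duration_ms = sum(end - start for start, end in fixations)
    return total_count, total_duration_ms

# Illustrative fixation events for one participant on one interface
fixations = [(0, 180), (250, 600), (720, 900)]
count, duration = fixation_metrics(fixations)  # 3 fixations, 710 ms total
```

Longer total durations and higher counts on an interface are commonly read as signs of less efficient visual search, which is how such metrics support complexity comparisons like the one above.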
(This article belongs to the Special Issue Vision Science and Technology in Human Computer Interaction Systems)

22 pages, 2263 KiB  
Article
The Effect of Ambient Illumination and Text Color on Visual Fatigue under Negative Polarity
by Qiangqiang Fan, Jinhan Xie, Zhaoyang Dong and Yang Wang
Sensors 2024, 24(11), 3516; https://doi.org/10.3390/s24113516 - 30 May 2024
Cited by 3 | Viewed by 2923
Abstract
This study investigates the effects of ambient illumination and negatively polarized text color on visual fatigue, exploring the issue of visual fatigue when using visual display terminals in low-illumination environments. The research methodology utilizes an experimental design to collect data on changes in pupil accommodation and blink rate through an eye tracker. Participants completed a reading task while exposed to various text colors and ambient light conditions to evaluate visual fatigue and cognitive performance. The study’s findings suggest that text color significantly affects visual fatigue, with red text causing the highest level of visual fatigue and yellow text causing the lowest level of visual fatigue. Improvements in ambient lighting reduce visual fatigue, but the degree of improvement varies depending on the text color. Additionally, cognitive performance is better when using yellow and white text but worse when using red text. Yellow text is the most effective choice for reducing visual fatigue under negative polarity. Increasing ambient lighting can also improve visual fatigue in low-illumination conditions. These findings will offer valuable guidance for designing visual terminal device interfaces, especially for low-illumination or night environments, to minimize visual fatigue and improve user experience.

28 pages, 16347 KiB  
Article
Visual Attention and Emotion Analysis Based on Qualitative Assessment and Eyetracking Metrics—The Perception of a Video Game Trailer
by Eva Villegas, Elisabet Fonts, Marta Fernández and Sara Fernández-Guinea
Sensors 2023, 23(23), 9573; https://doi.org/10.3390/s23239573 - 2 Dec 2023
Cited by 3 | Viewed by 3214
Abstract
Video game trailers are very useful tools for attracting potential players. This research focuses on analyzing the emotions that arise while viewing video game trailers and the link between these emotions and storytelling and visual attention. The methodology consisted of a three-step task test with potential users: the first step was to identify the perception of indie games; the second step was to use the eyetracking device (gaze plot, heat map, and fixation points) and link them to fixation points (attention), viewing patterns, and non-visible areas; the third step was to interview users to understand impressions and questionnaires of emotions related to the trailer’s storytelling and expectations. The results show an effective assessment of visual attention together with visualization patterns, non-visible areas that may affect game expectations, fixation points linked to very specific emotions, and perceived narratives based on the gaze plot. The innovation in the mixed methodological approach has made it possible to obtain relevant data regarding the link between the emotions perceived by the user and the areas of attention collected with the device. The proposed methodology enables developers to understand the strengths and weaknesses of the information being conveyed so that they can tailor the trailer to the expectations of potential players.

25 pages, 3079 KiB  
Article
Binocular Rivalry Impact on Macroblock-Loss Error Concealment for Stereoscopic 3D Video Transmission
by Md Mehedi Hasan, Md. Azam Hossain, Naif Alotaibi, John F. Arnold and AKM Azad
Sensors 2023, 23(7), 3604; https://doi.org/10.3390/s23073604 - 30 Mar 2023
Cited by 1 | Viewed by 2218
Abstract
Three-dimensional video services delivered through wireless communication channels have to deal with numerous challenges due to the limitations of both the transmission channel’s bandwidth and receiving devices. Adverse channel conditions, delays, or jitters can result in bit errors and packet losses, which can alter the appearance of stereoscopic 3D (S3D) video. When the two human eyes perceive dissimilar patterns, these patterns cannot be fused into a stable composite in the brain, and each eye tries to dominate by suppressing the other. Thus, a psychovisual sensation called binocular rivalry occurs. As a result, undetectable changes causing irritating flickering effects are seen, leading to visual discomforts such as eye strain, headache, nausea, and weariness. This study addresses the observer’s quality of experience (QoE) by analyzing the binocular rivalry impact on macroblock (MB) losses in a frame and their error propagation due to predictive frame encoding in stereoscopic video transmission systems. To simulate the processing of experimental videos, the Joint Test Model (JM) reference software was used, as recommended by the International Telecommunication Union (ITU). Existing error concealment techniques were then applied to the contiguous lost MBs for a variety of transmission impairments. In order to validate the authenticity of the simulated packet loss environment, several objective evaluations were carried out. Standard numbers of subjects were then engaged in the subjective testing of common 3D video sequences. The results were then statistically examined using a standard Student’s t-test, allowing the impact of binocular rivalry to be compared to that of a non-rivalry error condition. The major goal is to assure error-free video communication by minimizing the negative impacts of binocular rivalry and boosting the ability to efficiently integrate 3D video material to improve viewers’ overall QoE.
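The statistical comparison described in this abstract, a two-sample Student's t-test between rivalry and non-rivalry conditions, can be sketched as follows. The data values and function name are illustrative placeholders, not the study's actual subjective scores:

```python
import math
from statistics import mean, stdev

def students_t(sample_a, sample_b):
    """Two-sample Student's t statistic with pooled variance,
    assuming equal variances (the classic form of the test)."""
    n1, n2 = len(sample_a), len(sample_b)
    s1, s2 = stdev(sample_a), stdev(sample_b)  # sample standard deviations
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (mean(sample_a) - mean(sample_b)) / math.sqrt(pooled_var * (1/n1 + 1/n2))

# Illustrative mean opinion scores (1-5) under the two error conditions
rivalry    = [2.1, 2.4, 1.9, 2.6, 2.2]
no_rivalry = [3.5, 3.8, 3.2, 3.6, 3.4]
t = students_t(rivalry, no_rivalry)  # large |t| suggests rivalry degrades QoE
```

The resulting t value, compared against the t distribution with n1 + n2 - 2 degrees of freedom, indicates whether the rivalry condition's scores differ significantly from the non-rivalry condition's.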

15 pages, 46397 KiB  
Article
An Augmented Reality Serious Game for Learning Intelligent Wheelchair Control: Comparing Configuration and Tracking Methods
by Rafael Maio, Bernardo Marques, João Alves, Beatriz Sousa Santos, Paulo Dias and Nuno Lau
Sensors 2022, 22(20), 7788; https://doi.org/10.3390/s22207788 - 13 Oct 2022
Cited by 5 | Viewed by 2510
Abstract
This work proposes an augmented reality serious game (ARSG) for supporting individuals with motor disabilities while controlling robotic wheelchairs. A racing track was used as the game narrative; this included restriction areas, static and dynamic virtual objects, as well as obstacles and signs. To experience the game, a prior configuration of the environment, made through a smartphone or a computer, was required. Furthermore, a visualization tool was developed to exhibit user performance while using the ARSG. Two user studies were conducted with 10 and 20 participants, respectively, to compare (1) how different devices enable configuring the ARSG, and (2) different tracking capabilities, i.e., methods used to place virtual content on the real-world environment while the user interacts with the game and controls the wheelchair in the physical space: C1—motion tracking using cloud anchors; C2—offline motion tracking. Results suggest that configuring the environment with the computer is more efficient and accurate, in contrast to the smartphone, which is characterized as more engaging. In addition, condition C1 stood out as more accurate and robust, while condition C2 appeared to be easier to use.

Review


14 pages, 1083 KiB  
Review
An Overview of Wearable Haptic Technologies and Their Performance in Virtual Object Exploration
by Myla van Wegen, Just L. Herder, Rolf Adelsberger, Manuela Pastore-Wapp, Erwin E. H. van Wegen, Stephan Bohlhalter, Tobias Nef, Paul Krack and Tim Vanbellingen
Sensors 2023, 23(3), 1563; https://doi.org/10.3390/s23031563 - 1 Feb 2023
Cited by 23 | Viewed by 10526
Abstract
We often interact with our environment through manual handling of objects and exploration of their properties. Object properties (OP), such as texture, stiffness, size, shape, temperature, weight, and orientation, provide necessary information to successfully perform interactions. The human haptic perception system plays a key role in this. As virtual reality (VR) has been a growing field of interest with many applications, adding haptic feedback to virtual experiences is another step towards more realistic virtual interactions. However, integrating haptics in a realistic manner requires complex technological solutions and actual user-testing in virtual environments (VEs) for verification. This review provides a comprehensive overview of recent wearable haptic devices (HDs) categorized by the OP exploration for which they have been verified in a VE. We found 13 studies which specifically addressed user-testing of wearable HDs in healthy subjects. We map and discuss the different technological solutions for different OP exploration which are useful for the design of future haptic object interactions in VR, and provide future recommendations.
