
New Technologies and Applications of Visual-Based Human-Computer Interactions

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 30 December 2025 | Viewed by 1787

Special Issue Editor


Prof. Dr. Kai Essig
Guest Editor
Faculty of Communication and Environment, Rhine-Waal University of Applied Sciences, Kamp-Lintfort, Germany
Interests: eye tracking; human computer interaction; cognitive assistive systems; human factors; usability engineering

Special Issue Information

Dear Colleagues,

Visual-based HCI is at a turning point. With advances in computer vision, multimodal AI, and vision-language-action (VLA) models, interactive systems can now infer what users see, intend, and are about to do, and translate those signals into meaningful system actions without explicit commands.

Real-time gaze tracking, micro-expression analysis, gesture recognition, intention prediction, and AI-driven personalization are moving visual sensing from passive perception to active interaction logic. In parallel, large language models (LLMs) and VLA models are creating unified representations that connect visual understanding, semantic interpretation, and action execution, enabling natural, controller-free interfaces across XR, robotics, and assistive systems. This Special Issue aims to collect the work that is driving this shift forward.

We especially encourage submissions that:

  • Integrate vision with VLA / LLMs, speech, or physiological signals
  • Translate visual understanding into actionable interaction (control, adaptation, automation)
  • Advance robustness, latency, trust, and deployability
  • Demonstrate high-impact applications in XR, HRI, learning, or accessibility

Example topic areas (non-exhaustive):

  • Gaze/gesture/intention prediction for real-time interaction
  • VLA models linking perception, semantics, and action in HCI
  • Emotion & stress inference for adaptive or affect-aware interfaces
  • Multimodal fusion (vision + speech/EEG/EDA/robotics/LLMs)
  • Benchmarks, datasets, and usability evaluations for visual-based HCI
  • Privacy, ethics and secure handling of visual and biometric data
  • Applications in XR, teleoperation, education, therapy, and assistive tech

If your work pushes visual sensing from recognition to interaction, this Special Issue is the right venue.

Authors unsure about topic relevance are welcome to contact the Guest Editor (Kai Essig).

Prof. Dr. Kai Essig
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • visual-based HCI
  • AI
  • camera stream processing
  • multi-modal interfaces
  • movement
  • intention
  • emotion and face recognition
  • activity
  • intention and gesture recognition
  • presentation of visual data
  • interactive data visualizations
  • multi-modal databases

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

25 pages, 2723 KB  
Article
A Human-Centric, Uncertainty-Aware Event-Fused AI Network for Robust Face Recognition in Adverse Conditions
by Akmalbek Abdusalomov, Sabina Umirzakova, Elbek Boymatov, Dilnoza Zaripova, Shukhrat Kamalov, Zavqiddin Temirov, Wonjun Jeong, Hyoungsun Choi and Taeg Keun Whangbo
Appl. Sci. 2025, 15(13), 7381; https://doi.org/10.3390/app15137381 - 30 Jun 2025
Cited by 3 | Viewed by 1097
Abstract
Face recognition systems often falter when deployed in uncontrolled settings, grappling with low light, unexpected occlusions, motion blur, and the degradation of sensor signals. Most contemporary algorithms chase raw accuracy yet overlook the pragmatic need for uncertainty estimation and multispectral reasoning rolled into a single framework. This study introduces HUE-Net, a Human-centric, Uncertainty-aware, Event-fused Network, designed specifically to thrive under severe environmental stress. HUE-Net marries the visible RGB band with near-infrared (NIR) imagery and high-temporal-event data through an early-fusion pipeline, proven more responsive than serial approaches. A custom hybrid backbone that couples convolutional networks with transformers keeps the model nimble enough for edge devices. Central to the architecture is the perturbed multi-branch variational module, which distills probabilistic identity embeddings while delivering calibrated confidence scores. Complementing this, an Adaptive Spectral Attention mechanism dynamically reweights each stream to amplify the most reliable facial features in real time. Unlike previous efforts that compartmentalize uncertainty handling, spectral blending, or computational thrift, HUE-Net unites all three in a lightweight package. Benchmarks on the IJB-C and N-SpectralFace datasets illustrate that the system not only secures state-of-the-art accuracy but also exhibits unmatched spectral robustness and reliable probability calibration. The results indicate that HUE-Net is well-positioned for forensic missions and humanitarian scenarios where trustworthy identification cannot be deferred.
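The general idea behind spectral attention fusion, reweighting each sensor stream by an estimated reliability so that the most trustworthy modality dominates, can be sketched in a few lines. This is an illustrative sketch only, not the HUE-Net implementation; the function name, inputs, and the use of a simple softmax over scalar reliability scores are all assumptions for the example.

```python
import numpy as np

def spectral_attention_fusion(streams, reliability):
    """Fuse per-stream feature vectors with softmax attention weights.

    streams:     dict mapping stream name -> 1-D feature vector (equal length)
    reliability: dict mapping stream name -> scalar reliability estimate
    Returns (fused_vector, weights) where weights sum to 1.
    """
    names = sorted(streams)
    scores = np.array([reliability[n] for n in names], dtype=float)
    # Softmax turns raw reliability estimates into normalized weights;
    # subtracting the max keeps the exponentials numerically stable.
    w = np.exp(scores - scores.max())
    w /= w.sum()
    fused = sum(wi * streams[n] for wi, n in zip(w, names))
    return fused, dict(zip(names, w))

# In a low-light scene the RGB stream's reliability drops, so the
# NIR stream receives most of the weight in the fused embedding.
fused, weights = spectral_attention_fusion(
    {"rgb": np.ones(4), "nir": np.zeros(4)},
    {"rgb": 0.0, "nir": 2.0},
)
```

In a full system the reliability scores would themselves be learned (for example, predicted by a small network from each stream's features) rather than supplied by hand as they are here.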
