AI Models for Human-Centered Computer Vision and Signal Analysis

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 August 2026 | Viewed by 2044

Special Issue Editor


Prof. Dr. Włodzimierz Kasprzak
Guest Editor
Faculty of Electronics and Information Technology, Institute of Control and Computation Engineering, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland
Interests: artificial intelligence; pattern recognition; signal, image, and video processing; machine learning; machine perception

Special Issue Information

Dear Colleagues,

Signal, image, and video analysis concerned with the detection, recognition, and identification/verification of humans and human-related events, such as gestures, actions, or speech, has over the years moved from a purely academic interest into industrial research and technological practice. This shift has been driven by steady improvements in sensors, the fast-growing power of computational hardware, and the maturing of computational techniques.

The aim of this Special Issue is to present AI-based techniques, spanning both classic machine learning and deep learning models, for the analysis of human-related digital data (e.g., processing, segmentation, feature extraction, and object/fake/fraud/health detection/classification/identification) acquired by various sensors (cameras, microphones, or touch interfaces), wearable sensors, or medical devices.

Original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  • Biometrics.
  • Human pose and action classification.
  • Human–machine interaction by gestures and voice.
  • Human sensor acquisition and data analysis.
  • Fake detection in image, speech, and video.
  • Surveillance video analysis.
  • Security-related data analysis.
  • Wearable signal analysis.
  • Health monitoring.

I look forward to receiving your contributions.

Prof. Dr. Włodzimierz Kasprzak
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biometrics
  • human pose estimation
  • human action classification
  • human–machine interaction
  • fake detection
  • data analysis

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

19 pages, 910 KB  
Article
USGaze: Temporal Gaze Estimation via a Unified State-Space Modeling Framework
by Gefan Sun, Zhao Wang and Qinghua Xia
Electronics 2026, 15(7), 1430; https://doi.org/10.3390/electronics15071430 - 30 Mar 2026
Viewed by 367
Abstract
Existing appearance-based and video-based gaze estimation methods mainly rely on frame-wise prediction or local-window temporal fusion, which limits their ability to model long-range dependencies and to explicitly suppress output-level jitter. This leaves a gap in unified temporal gaze estimation frameworks that jointly address contextual feature aggregation and prediction-level stabilization. To address this limitation, we propose a unified state-space temporal gaze estimation framework to improve both angular accuracy and temporal consistency. Specifically, consecutive eye image sequences are mapped into a shared latent state space, where spatial appearance cues and inter-frame dynamics are jointly modeled. A feature-level temporal aggregation module is further designed to adaptively reweight historical observations for the current estimate, and a prediction-level temporal correction module is introduced to suppress short-term fluctuations while preserving rapid gaze shifts. On the TEyeD dataset after quality screening, the proposed method achieves a 3D gaze MAE of 0.533°, compared with 0.96° for the Model-aware baseline and 3.18°–3.47° for the ResNet baselines reported in the original TEyeD paper, while maintaining manageable deployment overhead. These results indicate that the proposed framework provides a favorable balance between estimation accuracy, temporal stability, and practical efficiency.
(This article belongs to the Special Issue AI Models for Human-Centered Computer Vision and Signal Analysis)
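The prediction-level correction the abstract describes, suppressing short-term jitter while preserving rapid gaze shifts, can be sketched as an adaptive smoother. The sketch below is illustrative only and is not the authors' implementation; the function name, the jitter threshold, and the smoothing factor are assumptions chosen for clarity.

```python
import numpy as np

def stabilize_gaze(angles, jitter_thresh_deg=1.0, alpha=0.3):
    """Adaptive exponential smoothing of a gaze-angle sequence.

    Small frame-to-frame changes (below jitter_thresh_deg) are damped,
    while large, saccade-like shifts are passed through unchanged.
    angles: sequence of gaze vectors (e.g., yaw/pitch in degrees).
    """
    out = [np.asarray(angles[0], dtype=float)]
    for a in angles[1:]:
        a = np.asarray(a, dtype=float)
        prev = out[-1]
        if np.linalg.norm(a - prev) < jitter_thresh_deg:
            # small fluctuation: blend toward the new observation
            out.append((1 - alpha) * prev + alpha * a)
        else:
            # rapid gaze shift: keep the raw estimate
            out.append(a)
    return np.stack(out)
```

In this toy form, the threshold plays the role the paper assigns to its learned correction module: it decides per frame whether an output change is noise to be smoothed or a genuine gaze shift to be preserved.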

22 pages, 9837 KB  
Article
SSR-HMR: Skeleton-Aware Sparse Node-Based Real-Time Human Motion Reconstruction
by Linhai Li, Jiayi Lin and Wenhui Zhang
Electronics 2025, 14(18), 3664; https://doi.org/10.3390/electronics14183664 - 16 Sep 2025
Viewed by 1312
Abstract
The growing demand for real-time human motion reconstruction in Virtual Reality (VR), Augmented Reality (AR), and the Metaverse requires high accuracy with minimal hardware. This paper presents SSR-HMR, a skeleton-aware, sparse node-based method for full-body motion reconstruction from limited inputs. The approach incorporates a lightweight spatiotemporal graph convolutional module, a torso pose refinement design to mitigate orientation drift, and kinematic tree-based optimization to enhance end-effector positioning accuracy. Smooth motion transitions are achieved via a multi-scale velocity loss. Experiments demonstrate that SSR-HMR achieves high-accuracy reconstruction, with mean joint and end-effector position errors of 1.06 cm and 0.52 cm, respectively, while operating at 267 FPS on a CPU.
(This article belongs to the Special Issue AI Models for Human-Centered Computer Vision and Signal Analysis)
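The multi-scale velocity loss mentioned in the abstract can be sketched as a sum of velocity errors computed at several temporal strides. The snippet below is a minimal illustration under assumed conventions (function name, stride set, and L2 penalty are not taken from the paper):

```python
import numpy as np

def multiscale_velocity_loss(pred, target, scales=(1, 2, 4)):
    """Mean L2 error between joint velocities at several temporal strides.

    pred, target: arrays of shape (T, J, 3) holding T frames of
    J joint positions. Stride-s velocity is the displacement over
    s frames, so larger strides penalize low-frequency drift while
    stride 1 penalizes frame-to-frame jitter.
    """
    loss = 0.0
    for s in scales:
        v_pred = pred[s:] - pred[:-s]      # velocities at stride s
        v_tgt = target[s:] - target[:-s]
        loss += np.mean(np.linalg.norm(v_pred - v_tgt, axis=-1))
    return loss / len(scales)
```

Note that this loss depends only on displacements, so a reconstruction offset by a constant position error still incurs zero velocity loss; in practice it would be combined with a position term.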
