Special Issue "Computer Vision and Machine Learning in Human-Computer Interaction"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 30 June 2020.

Special Issue Editor

Prof. Dr. Włodzimierz Kasprzak
Guest Editor
Warsaw University of Technology, Faculty of Electronics and Information Technology, Institute of Control and Computation Engineering, Nowowiejska 15/19, 00-665 Warsaw, Poland
Interests: computational techniques in pattern recognition, artificial intelligence and machine learning, and their application to image and speech analysis; robot vision; biometric techniques

Special Issue Information

Dear Colleagues,

Rapid development in imaging sensor technology, among other factors, has been responsible for the recent improvement and technological readiness of various human-computer interaction (HCI) systems, especially those taking the form of human-machine interfaces and human assistance systems. HCI techniques have already found numerous application fields, such as car-driver assistance systems, service and social robots, medical and healthcare systems, sport training assistance, and special communication modes for handicapped and elderly people. The price, size, and power requirements of image sensors and digital cameras are steadily falling, presenting new opportunities for machine learning techniques applied in computer vision systems. The miniaturisation of vision sensors and the improved design of high-resolution and high-speed RGB-D cameras significantly stimulate the collection of huge volumes of digital image data. Computer vision algorithms benefit greatly from this process since, along with classic signal processing and pattern recognition techniques, machine learning techniques can now be realistically applied, leading to new, robust solutions to human-centred image analysis tasks.

In this Special Issue, we are particularly interested in system architectures and computational techniques applied to human-computer interaction that benefit from modern vision sensors and cameras. From the methodological point of view, the focus is on combining classical pattern recognition and deep learning techniques to create new computational paradigms for typical tasks in visual human-machine interaction, such as human pose detection, dynamic gesture recognition, hand and body sign recognition, eye attention tracking, and facial emotion recognition. On the practical side, we are looking for hardware and software components, prototypes, and demonstrators of smart human-computer interaction systems in various application fields. Topics of interest include but are not limited to the following:

  • Human-machine interfaces;
  • Human assistance;
  • Imaging sensors;
  • RGB-D cameras;
  • Image data collection and annotation;
  • Human pose detection;
  • Human gesture recognition;
  • Eye tracking;
  • Face emotion recognition;
  • Sign and body language recognition;
  • Vision-based human-computer interaction (VHCI);
  • Signal processing and pattern recognition in VHCI;
  • Deep learning techniques in VHCI;
  • Computational paradigms and system architectures for smart VHCI;
  • Hardware and software of smart VHCI;
  • Prototypes and demonstrators of smart VHCI;
  • Applications of smart VHCI.

Prof. Dr. Włodzimierz Kasprzak
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. The Special Issue runs on a continuous submission model: authors may submit their papers at any time. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers

This Special Issue is now open for submission; see below for planned papers.

Planned Papers

The list below represents only planned manuscripts. Some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.

Title: Embodied Agent Framework for Designing Smart Human-Machine Interaction: A Case Study in Cyberspace Events Visualisation Control

Authors: Wojciech Szynkiewicz, Cezary Zieliński, Włodzimierz Kasprzak, Wojciech Dudek, Maciej Stefanczyk and Maksym Figat

Affiliation: Warsaw University of Technology, Institute of Control and Computation Engineering, ul. Nowowiejska 15/19, 00-665 Warsaw, Poland

*Correspondence: [email protected]; Tel.: +48-22-234-7632, +48-22-234-7397

Abstract: Smart human-computer interaction (HCI) is a time-aware, dynamic process in which two parties communicate via different modalities, e.g., voice, gesture, or eye movement. The use of computer vision and machine intelligence techniques is essential when the human is carrying out an exhausting and concentration-demanding activity. Smart HCI is a typical requirement for robotic systems, especially for social robots that act autonomously while communicating with human users. Thus, similarities between robot control system design and smart HCI design can be sought. The goal of this paper is to apply the embodied agent framework to HCI system design. The system's structure is defined in terms of cooperating agents with well-defined internal components and behaviours. System activities are defined in terms of finite state machines and transition functions. In social robotics, this approach has proved very useful in the control system specification phase and has supported the system implementation stage well. The case study deals with a multimodal human-computer interface for cyberspace events visualisation control. The multimodal interface is a component of the Operational Centre, which is part of the National Cybersecurity Platform. Cyberspace and its underlying infrastructure are vulnerable to a broad range of risks stemming from diverse cyber threats. The main role of this interface is to support security analysts and operators controlling the visualisation of cyberspace events, such as incidents or cyber attacks, especially when manipulating graphical information. The main visualisation control modalities are vision-based gesture commands and voice commands. Thus, the design and implementation of the gesture recognition and speech recognition functions are presented. Security requirements of the Operational Centre allow particular commands to be issued only by trusted, registered users. Therefore, two additional functions for human identification are implemented: face recognition and speaker identification.

Keywords: embodied agent framework; gesture/face recognition; speech/speaker recognition; event visualisation control
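To make the finite-state-machine formulation mentioned in the abstract more concrete, the following minimal Python sketch illustrates, under our own assumptions, how an agent behaviour could be expressed as a set of states plus an explicit transition function. The state names, events, and transition table below are purely illustrative and are not taken from the planned paper.

    from enum import Enum, auto

    # Hypothetical behaviour states of a visualisation-control agent
    # (illustrative only; not the states defined in the planned paper).
    class State(Enum):
        IDLE = auto()
        LISTENING = auto()
        EXECUTING = auto()

    # Transition function: (current state, perceived event) -> next state.
    TRANSITIONS = {
        (State.IDLE, "wake_gesture"): State.LISTENING,
        (State.LISTENING, "voice_command"): State.EXECUTING,
        (State.EXECUTING, "done"): State.IDLE,
    }

    def step(state: State, event: str) -> State:
        """Advance the finite state machine by one perceived event;
        unknown events leave the state unchanged."""
        return TRANSITIONS.get((state, event), state)

    if __name__ == "__main__":
        state = State.IDLE
        for event in ["wake_gesture", "voice_command", "done"]:
            state = step(state, event)
            print(event, "->", state.name)

In an embodied-agent specification of this kind, each agent behaviour would be captured by such a state machine, with the transition function triggered by perceived events (e.g., a recognised gesture or voice command).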
