Special Issue "Human–Machine Interfaces: Design, Sensing and Stimulation"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Electronic Sensors".

Deadline for manuscript submissions: closed (30 June 2022) | Viewed by 2298

Special Issue Editors

Prof. Dr. Chern-Sheng Lin
Guest Editor
Department of Automatic Control Engineering, Feng Chia University, Taichung 40724, Taiwan
Interests: engineering physics; astronomy; computer science; materials science
Prof. Dr. Chih-Cheng Chen
Guest Editor
School of Information Engineering, Jimei University, Xiamen, Fujian 361021, China
Interests: Internet of Things; deep learning; big data; RFID; data mining; intelligent data hiding

Special Issue Information

Dear Colleagues,

The Special Issue, “Human–Machine Interfaces: Design, Sensing and Stimulation”, covers fundamental technology used in electronic, mechanical, and electrical engineering, including the synthesis and integration of Human–Machine Interfaces, the design of electronic devices, sensing technologies, the evaluation of various performance characteristics, and the exploration of their broad applications in industry, along with modeling and simulation analyses. We invite investigators to contribute original research articles and review articles that will stimulate the continuing efforts to develop electronic and mechanical devices and sensors for Human–Machine Interfaces. Potential topics include, but are not limited to:

  • Applications of electronic devices and mechanical sensors in Human–Machine Interfaces
  • Authority and cooperation between humans and machines; sensing and control technologies in Human–Machine Interface applications
  • Novel sensors for Human–Machine Interface applications
  • Medical and health applications of Human–Machine Interfaces; sensors in Human–Machine Interfaces
  • Analysis, modeling, and simulation of Human–Machine Interface applications
  • Human operators’ mental activities in Human–Machine Interface applications
  • Psychophysiology and performance of Human–Machine Interface applications
  • Designing Human–Machine Interfaces
  • Special Human–Machine Interfaces: eye-tracking and brainwave-controlled systems

Prof. Dr. Chern-Sheng Lin
Prof. Dr. Chih-Cheng Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Human–Machine Interfaces
  • sensing and controlling technologies
  • novel sensors
  • analysis, modeling, and simulation
  • designing eye-tracking and brainwave-controlled systems

Published Papers (3 papers)


Research

Article
Driving Mode Selection through SSVEP-Based BCI and Energy Consumption Analysis
Sensors 2022, 22(15), 5631; https://doi.org/10.3390/s22155631 - 28 Jul 2022
Viewed by 246
Abstract
Background: The brain–computer interface (BCI) is a highly cross-discipline technology, and its successful application in various domains has received increasing attention. However, the BCI-enabled automobile industry has been comparatively less investigated. In particular, there are currently no studies focusing on brain-controlled driving mode selection. Specifically, different driving modes indicate different driving styles, which can be selected according to the road condition or the preference of individual drivers. Methods: In this paper, a steady-state visual-evoked potential (SSVEP)-based driving mode selection system is proposed. Using this system, drivers can select the intended driving mode by only gazing at the corresponding SSVEP stimulus. A novel EEG processing algorithm named inter-trial distance minimization analysis (ITDMA) is proposed to enhance SSVEP detection. Both offline and real-time experiments were carried out to validate the effectiveness of the proposed system. Conclusion: The results show that a high selection accuracy of up to 92.3% can be realized, although this depends on the specific choice of flickering duration, the number of EEG channels, and the number of training signals. Additionally, energy consumption is investigated, in which respect the proposed brain-controlled system differs considerably from a traditional driving mode selection system; the main reason is shown to be the existence of detection error. Full article
(This article belongs to the Special Issue Human–Machine Interfaces: Design, Sensing and Stimulation)
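The core idea of SSVEP-based selection is that gazing at a stimulus flickering at frequency f induces EEG activity at f and its harmonics, so the gazed-at option can be recovered by testing which candidate frequency best explains the recorded signal. The sketch below shows the standard least-squares baseline for this (fitting sinusoidal references per candidate frequency), not the paper's ITDMA algorithm, whose details are not given here; all names and parameters are illustrative.

```python
import numpy as np

def detect_ssvep_frequency(eeg, fs, candidate_freqs, harmonics=2):
    """Pick the flicker frequency whose sin/cos references best explain
    a single-channel EEG trace, by least-squares variance explained."""
    t = np.arange(len(eeg)) / fs
    eeg = eeg - eeg.mean()
    scores = []
    for f in candidate_freqs:
        # Reference set: sin and cos at the frequency and its harmonics
        refs = np.column_stack(
            [fn(2 * np.pi * f * h * t)
             for h in range(1, harmonics + 1)
             for fn in (np.sin, np.cos)])
        coef, *_ = np.linalg.lstsq(refs, eeg, rcond=None)
        fitted = refs @ coef
        # Fraction of signal variance explained by this frequency's references
        scores.append(np.dot(fitted, fitted) / np.dot(eeg, eeg))
    return candidate_freqs[int(np.argmax(scores))]
```

In a driving-mode selector, each mode's on-screen stimulus would flicker at a distinct frequency from `candidate_freqs`, and the detected frequency maps back to the chosen mode.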

Article
Machine-Learning-Based Fine Tuning of Input Signals for Mechano-Tactile Display
Sensors 2022, 22(14), 5299; https://doi.org/10.3390/s22145299 - 15 Jul 2022
Viewed by 279
Abstract
Deducing the input signal for a tactile display to present the target surface (i.e., solving the inverse problem for tactile displays) is challenging. We proposed the encoding and presentation (EP) method in our prior work, where we encoded the target surface by scanning it using an array of piezoelectric devices (encoding) and then drove the piezoelectric devices using the obtained signals to display the surface (presentation). The EP method reproduced the target texture with an accuracy of over 80% for the five samples tested, which we refer to as replicability. Machine learning is a promising method for solving inverse problems. In this study, we designed a neural network to connect the subjective evaluation of tactile sensation and the input signals to a display; these signals are described as time-domain waveforms. First, participants were asked to touch the surface presented by the mechano-tactile display based on the encoded data from the EP method. Then, the participants recorded the similarity of the surface compared to five material samples, which were used as the input. The encoded data for the material samples were used as the output to create a dataset of 500 vectors. By training a multilayer perceptron with the dataset, we deduced new inputs for the display. The results indicate that using machine learning for fine tuning leads to significantly better accuracy in deducing the input compared to that achieved using the EP method alone. The proposed method is therefore considered a good solution for the inverse problem for tactile displays. Full article
(This article belongs to the Special Issue Human–Machine Interfaces: Design, Sensing and Stimulation)

Article
Multi-Sensor Context-Aware Based Chatbot Model: An Application of Humanoid Companion Robot
Sensors 2021, 21(15), 5132; https://doi.org/10.3390/s21155132 - 29 Jul 2021
Cited by 2 | Viewed by 895
Abstract
In the field of natural language processing, previous studies have generally analyzed sound signals and provided related responses. However, in various conversation scenarios, image information is still vital: without it, misunderstandings may occur and lead to incorrect responses. In order to address this problem, this study proposes a recurrent neural network (RNN)-based multi-sensor context-aware chatbot technology. The proposed chatbot model incorporates image information with sound signals and gives appropriate responses to the user. In order to improve the performance of the proposed model, the long short-term memory (LSTM) structure is replaced by a gated recurrent unit (GRU). Moreover, a VGG16 model is also chosen as the feature extractor for the image information. The experimental results demonstrate that the integrative technology of sound and image information, which are obtained by the image sensor and sound sensor in a companion robot, is helpful for the chatbot model proposed in this study. The feasibility of the proposed technology was also confirmed in the experiment. Full article
(This article belongs to the Special Issue Human–Machine Interfaces: Design, Sensing and Stimulation)
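At the architectural level, the abstract describes a GRU encoding the audio stream and a VGG16-style vector representing the image, fused into one context for response generation. The sketch below shows just that skeleton: a single GRU cell written out gate by gate, run over random stand-in audio frames, then concatenated with a stand-in image feature vector. All dimensions and weights are illustrative assumptions; this is not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(a):
    return 1 / (1 + np.exp(-a))

def gru_cell(x, h, P):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    z = sigmoid(x @ P["Wz"] + h @ P["Uz"])
    r = sigmoid(x @ P["Wr"] + h @ P["Ur"])
    h_tilde = np.tanh(x @ P["Wh"] + (r * h) @ P["Uh"])
    return (1 - z) * h + z * h_tilde

D_AUDIO, D_IMG, D_H = 40, 512, 64   # hypothetical feature sizes
P = {k: rng.normal(0, 0.1, (D_AUDIO if k[0] == "W" else D_H, D_H))
     for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}

# Encode an utterance: run the audio feature frames through the GRU
frames = rng.normal(size=(20, D_AUDIO))   # 20 stand-in audio frames
h = np.zeros(D_H)
for x in frames:
    h = gru_cell(x, h, P)

# Fuse with a (stand-in) VGG16-style image feature vector
img_feat = rng.normal(size=D_IMG)
context = np.concatenate([h, img_feat])   # joint context for response selection
```

The fused `context` vector is what a downstream response module would condition on, which is how image information can disambiguate an otherwise ambiguous utterance.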
