Special Issue "Machine Learning Techniques for Assistive Robotics"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (15 March 2020).

Special Issue Editors

Prof. Miguel Angel Cazorla Quevedo
Guest Editor
University of Alicante
Interests: robotics; computer vision; deep learning
Dr. Sergio Orts-Escolano
Guest Editor
RoViT, University of Alicante, 03690 San Vicente del Raspeig (Alicante), Spain
Interests: high-performance computer vision; GPU programming; 3D data processing (point cloud processing, stereo vision, etc.); artificial neural networks
Dr. Ester Martinez-Martin
Guest Editor
RoViT, University of Alicante, 03690 San Vicente del Raspeig (Alicante), Spain
Interests: object detection and action recognition

Special Issue Information

Dear Colleagues,

Assistive robots share their workspace with humans and interact with them; their main objective is to help people, especially those with disabilities. To achieve this goal, these robots must possess a series of capabilities: perceiving their environment through sensors and acting accordingly, interacting with people in a multimodal manner, and navigating and making decisions autonomously. This complexity demands computationally expensive algorithms that run in real time. With the advent of high-end embedded processors, several such algorithms can now be executed concurrently and in real time.

All these capabilities involve, to a greater or lesser extent, the use of machine learning techniques, and new deep learning techniques in particular have enabled a major qualitative leap in several areas of perception.

Novel theoretical approaches and practical applications covering all aspects of assistive robotics are welcome, as are reviews, datasets, benchmarks, and surveys of the state of the art. Topics of interest to this Special Issue include, but are not limited to, the following:

  • Emotion recognition models and systems
  • Object recognition & pose estimation for assistive robotics
  • Activity recognition
  • Navigation, localization, and mapping
  • Ambient assistive living
  • Robot vision
  • Applications for people with disabilities
  • Scene understanding & description
  • Human-robot interaction
  • Embedded systems for assistive robotics

Prof. Dr. Miguel Cazorla
Dr. Sergio Orts-Escolano
Dr. Ester Martinez-Martin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (12 papers)


Editorial


Open Access Editorial
Machine Learning Techniques for Assistive Robotics
Electronics 2020, 9(5), 821; https://doi.org/10.3390/electronics9050821 - 16 May 2020
Abstract
Assistive robots are a category of robots that share their area of work and interact with humans [...] Full article
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)

Research


Open Access Article
Pattern Recognition Techniques for the Identification of Activities of Daily Living Using a Mobile Device Accelerometer
Electronics 2020, 9(3), 509; https://doi.org/10.3390/electronics9030509 - 19 Mar 2020
Cited by 1
Abstract
The application of pattern recognition techniques to data collected from accelerometers available in off-the-shelf devices, such as smartphones, allows for the automatic recognition of activities of daily living (ADLs). This data can be used later to create systems that monitor the behaviors of their users. The main contribution of this paper is to use artificial neural networks (ANN) for the recognition of ADLs with the data acquired from the sensors available in mobile devices. Firstly, before ANN training, the mobile device is used for data collection. After training, mobile devices are used to apply an ANN previously trained for the ADLs’ identification on a less restrictive computational platform. The motivation is to verify whether the overfitting problem can be solved using only the accelerometer data, which also requires less computational resources and reduces the energy expenditure of the mobile device when compared with the use of multiple sensors. This paper presents a method based on ANN for the recognition of a defined set of ADLs. It provides a comparative study of different implementations of ANN to choose the most appropriate method for ADLs identification. The results show the accuracy of 85.89% using deep neural networks (DNN). Full article
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)
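
The preprocessing such a pipeline relies on (segmenting the raw accelerometer stream into overlapping windows and extracting per-window features to feed an ANN) can be sketched in a few lines of numpy. The window length, step, and feature set below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def window_signal(acc, window=50, step=25):
    """Split an (N, 3) accelerometer stream into overlapping windows."""
    starts = range(0, len(acc) - window + 1, step)
    return np.stack([acc[s:s + window] for s in starts])  # (num_windows, window, 3)

def window_features(win):
    """Per-axis mean and standard deviation plus signal magnitude area."""
    mean = win.mean(axis=0)                    # (3,)
    std = win.std(axis=0)                      # (3,)
    sma = np.abs(win).sum() / len(win)         # scalar
    return np.concatenate([mean, std, [sma]])  # 7-dim feature vector

# 200 samples of a synthetic 3-axis stream (device at rest, gravity on z)
rng = np.random.default_rng(0)
acc = np.tile([0.0, 0.0, 9.81], (200, 1)) + rng.normal(0.0, 0.05, (200, 3))
windows = window_signal(acc)
X = np.array([window_features(w) for w in windows])  # candidate ANN input matrix
```

In a full system, a matrix like X would be the input to the ANN/DNN training the abstract describes.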

Open Access Feature Paper Article
A Low-Cost Cognitive Assistant
Electronics 2020, 9(2), 310; https://doi.org/10.3390/electronics9020310 - 11 Feb 2020
Cited by 1
Abstract
In this paper, we present in depth the hardware components of a low-cost cognitive assistant. The aim is to detect the performance and the emotional state that elderly people present when performing exercises. Physical and cognitive exercises are a proven way of keeping elderly people active, healthy, and happy. Our goal is to bring to people that are at their homes (or in unsupervised places) an assistant that motivates them to perform exercises and, concurrently, monitor them, observing their physical and emotional responses. We focus on the hardware parts and the deep learning models so that they can be reproduced by others. The platform is being tested at an elderly people care facility, and validation is in process. Full article
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)

Open Access Article
Activities of Daily Living and Environment Recognition Using Mobile Devices: A Comparative Study
Electronics 2020, 9(1), 180; https://doi.org/10.3390/electronics9010180 - 18 Jan 2020
Cited by 1
Abstract
The recognition of Activities of Daily Living (ADL) using the sensors available in off-the-shelf mobile devices with high accuracy is significant for the development of their framework. Previously, a framework that comprehends data acquisition, data processing, data cleaning, feature extraction, data fusion, and data classification was proposed. However, the results may be improved with the implementation of other methods. Similar to the initial proposal of the framework, this paper proposes the recognition of eight ADL, e.g., walking, running, standing, going upstairs, going downstairs, driving, sleeping, and watching television, and nine environments, e.g., bar, hall, kitchen, library, street, bedroom, living room, gym, and classroom, but using the Instance Based k-nearest neighbour (IBk) and AdaBoost methods as well. The primary purpose of this paper is to find the best machine learning method for ADL and environment recognition. The results obtained show that IBk and AdaBoost reported better results, with complex data than the deep neural network methods. Full article
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)

Open Access Article
Recognition of Activities of Daily Living and Environments Using Acoustic Sensors Embedded on Mobile Devices
Electronics 2019, 8(12), 1499; https://doi.org/10.3390/electronics8121499 - 7 Dec 2019
Cited by 6
Abstract
The identification of Activities of Daily Living (ADL) is intrinsic with the user’s environment recognition. This detection can be executed through standard sensors present in every-day mobile devices. On the one hand, the main proposal is to recognize users’ environment and standing activities. On the other hand, these features are included in a framework for the ADL and environment identification. Therefore, this paper is divided into two parts—firstly, acoustic sensors are used for the collection of data towards the recognition of the environment and, secondly, the information of the environment recognized is fused with the information gathered by motion and magnetic sensors. The environment and ADL recognition are performed by pattern recognition techniques that aim for the development of a system, including data collection, processing, fusion and classification procedures. These classification techniques include distinctive types of Artificial Neural Networks (ANN), analyzing various implementations of ANN and choosing the most suitable for further inclusion in the following different stages of the developed system. The results present 85.89% accuracy using Deep Neural Networks (DNN) with normalized data for the ADL recognition and 86.50% accuracy using Feedforward Neural Networks (FNN) with non-normalized data for environment recognition. Furthermore, the tests conducted present 100% accuracy for standing activities recognition using DNN with normalized data, which is the most suited for the intended purpose. Full article
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)

Open Access Article
Robot Motion Control via an EEG-Based Brain–Computer Interface by Using Neural Networks and Alpha Brainwaves
Electronics 2019, 8(12), 1387; https://doi.org/10.3390/electronics8121387 - 21 Nov 2019
Cited by 1
Abstract
Modern achievements accomplished in both cognitive neuroscience and human–machine interaction technologies have enhanced the ability to control devices with the human brain by using Brain–Computer Interface systems. Particularly, the development of brain-controlled mobile robots is very important because systems of this kind can assist people, suffering from devastating neuromuscular disorders, move and thus improve their quality of life. The research work presented in this paper, concerns the development of a system which performs motion control in a mobile robot in accordance to the eyes’ blinking of a human operator via a synchronous and endogenous Electroencephalography-based Brain–Computer Interface, which uses alpha brain waveforms. The received signals are filtered in order to extract suitable features. These features are fed as inputs to a neural network, which is properly trained in order to properly guide the robotic vehicle. Experimental tests executed on 12 healthy subjects of various gender and age, proved that the system developed is able to perform movements of the robotic vehicle, under control, in forward, left, backward, and right direction according to the alpha brainwaves of its operator, with an overall accuracy equal to 92.1%. Full article
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)
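
The abstract describes filtering the EEG to extract alpha-band features for a neural network. A common way to quantify alpha activity is the relative spectral power in the 8-13 Hz band; the numpy-only sketch below is a generic illustration, not the paper's actual filtering stage (the sampling rate, band limits, and synthetic epoch are assumptions):

```python
import numpy as np

def alpha_band_power(epoch, fs, band=(8.0, 13.0)):
    """Relative spectral power of the alpha band in a single EEG epoch."""
    epoch = epoch - epoch.mean()               # remove the DC offset
    power = np.abs(np.fft.rfft(epoch)) ** 2    # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum() / power.sum()

# Synthetic 2 s epoch at 256 Hz dominated by a 10 Hz alpha rhythm
fs = 256
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(1)
epoch = np.sin(2 * np.pi * 10.0 * t) + 0.1 * rng.normal(size=t.size)
rel_power = alpha_band_power(epoch, fs)
```

A feature such as rel_power, computed per epoch, is the kind of input a classifier can map to robot motion commands.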

Open Access Article
Fallen People Detection Capabilities Using Assistive Robot
Electronics 2019, 8(9), 915; https://doi.org/10.3390/electronics8090915 - 21 Aug 2019
Cited by 2
Abstract
One of the main problems in the elderly population and for people with functional disabilities is falling when they are not supervised. Therefore, there is a need for monitoring systems with fall detection functionality. Mobile robots are a good solution for keeping the person in sight when compared to static-view sensors. Mobile-patrol robots can be used for a group of people and systems are less intrusive than ones based on mobile robots. In this paper, we propose a novel vision-based solution for fall detection based on a mobile-patrol robot that can correct its position in case of doubt. The overall approach can be formulated as an end-to-end solution based on two stages: person detection and fall classification. Deep learning-based computer vision is used for person detection and fall classification is done by using a learning-based Support Vector Machine (SVM) classifier. This approach mainly fulfills the following design requirements—simple to apply, adaptable, high performance, independent of person size, clothes, or the environment, low cost and real-time computing. Important to highlight is the ability to distinguish between a simple resting position and a real fall scene. One of the main contributions of this paper is the input feature vector to the SVM-based classifier. We evaluated the robustness of the approach using a realistic public dataset proposed in this paper called the Fallen Person Dataset (FPDS), with 2062 images and 1072 falls. The results obtained from different experiments indicate that the system has a high success rate in fall classification (precision of 100% and recall of 99.74%). Training the algorithm using our Fallen Person Dataset (FPDS) and testing it with other datasets showed that the algorithm is independent of the camera setup. Full article
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)

Open Access Article
Online Learned Siamese Network with Auto-Encoding Constraints for Robust Multi-Object Tracking
Electronics 2019, 8(6), 595; https://doi.org/10.3390/electronics8060595 - 28 May 2019
Cited by 4
Abstract
Multi-object tracking aims to estimate the complete trajectories of objects in a scene. Distinguishing among objects efficiently and correctly in complex environments is a challenging problem. In this paper, a Siamese network with an auto-encoding constraint is proposed to extract discriminative features from detection responses in a tracking-by-detection framework. Different from recent deep learning methods, the simple two layers stacked auto-encoder structure enables the Siamese network to operate efficiently only with small-scale online sample data. The auto-encoding constraint reduces the possibility of overfitting during small-scale sample training. Then, the proposed Siamese network is improved to extract the previous-appearance-next vector from tracklet for better association. The new feature integrates the appearance, previous, and next stage motions of an element in a tracklet. With the new features, an online incremental learned tracking framework is established. It contains reliable tracklet generation, data association to generate complete object trajectories, and tracklet growth to deal with missing detections and to enhance the new feature for tracklet. Benefiting from discriminative features, the final trajectories of objects can be achieved by an efficient iterative greedy algorithm. Feature experiments show that the proposed Siamese network has advantages in terms of both discrimination and correctness. The system experiments show the improved tracking performance of the proposed method. Full article
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)

Open Access Article
Automatic Scene Recognition through Acoustic Classification for Behavioral Robotics
Electronics 2019, 8(5), 483; https://doi.org/10.3390/electronics8050483 - 30 Apr 2019
Cited by 7
Abstract
Classification of complex acoustic scenes under real-time scenarios is an active domain which has lately engaged several researchers from the machine learning community. A variety of techniques have been proposed for acoustic pattern or scene classification, including natural soundscapes such as rain/thunder and urban soundscapes such as restaurants/streets. In this work, we present a framework for automatic acoustic classification for behavioral robotics. Motivated by several texture classification algorithms used in computer vision, a modified feature descriptor for sound is proposed which incorporates a combination of 1-D local ternary patterns (1D-LTP) and the baseline Mel-frequency cepstral coefficients (MFCC). The extracted feature vector is then classified using a multi-class support vector machine (SVM), which is selected as the base classifier. The proposed method is validated on two standard benchmark datasets, i.e., DCASE and RWCP, and achieves accuracies of 97.38% and 94.10%, respectively. A comparative analysis demonstrates that the proposed scheme performs exceptionally well compared to other feature descriptors. Full article
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)
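
The descriptor proposed in this article combines 1D-LTP with MFCC features. The core ternary-coding idea behind a 1-D local ternary pattern can be sketched as follows; the radius, threshold, and binary weighting are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def ltp_1d(signal, radius=1, threshold=0.1):
    """1-D local ternary patterns, split into upper/lower binary codes.

    Each neighbour of a centre sample is coded +1 if it exceeds
    centre + threshold, -1 if it is below centre - threshold, else 0;
    the ternary code is then split into two binary patterns.
    """
    upper, lower = [], []
    weights = 2 ** np.arange(2 * radius)  # binary weighting of the neighbours
    for i in range(radius, len(signal) - radius):
        centre = signal[i]
        neigh = np.concatenate([signal[i - radius:i], signal[i + 1:i + 1 + radius]])
        tern = np.where(neigh > centre + threshold, 1,
                        np.where(neigh < centre - threshold, -1, 0))
        upper.append(int(((tern == 1) * weights).sum()))
        lower.append(int(((tern == -1) * weights).sum()))
    return np.array(upper), np.array(lower)

sig = np.array([0.0, 1.0, 0.0, 1.0, 0.0])
up, lo = ltp_1d(sig)  # codes for the three interior samples
```

Histograms of such codes, concatenated with MFCCs, form the kind of feature vector an SVM can then classify.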

Open Access Article
Three-Stream Convolutional Neural Network with Squeeze-and-Excitation Block for Near-Infrared Facial Expression Recognition
Electronics 2019, 8(4), 385; https://doi.org/10.3390/electronics8040385 - 29 Mar 2019
Cited by 2
Abstract
Near-infrared (NIR) facial expression recognition is resistant to illumination change. In this paper, we propose a three-stream three-dimensional convolution neural network with a squeeze-and-excitation (SE) block for NIR facial expression recognition. We fed each stream with different local regions, namely the eyes, nose, and mouth. By using an SE block, the network automatically allocated weights to different local features to further improve recognition accuracy. The experimental results on the Oulu-CASIA NIR facial expression database showed that the proposed method has a higher recognition rate than some state-of-the-art algorithms. Full article
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)

Review


Open Access Feature Paper Review
Socially Assistive Robots for Older Adults and People with Autism: An Overview
Electronics 2020, 9(2), 367; https://doi.org/10.3390/electronics9020367 - 21 Feb 2020
Cited by 3
Abstract
Over one billion people in the world suffer from some form of disability. Nevertheless, according to the World Health Organization, people with disabilities are particularly vulnerable to deficiencies in services, such as health care, rehabilitation, support, and assistance. In this sense, recent technological developments can mitigate these deficiencies, offering less-expensive assistive systems to meet users’ needs. This paper reviews and summarizes the research efforts toward the development of these kinds of systems, focusing on two social groups: older adults and children with autism. Full article
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)

Open Access Review
Identification of Daily Activities and Environments Based on the AdaBoost Method Using Mobile Device Data: A Systematic Review
Electronics 2020, 9(1), 192; https://doi.org/10.3390/electronics9010192 - 20 Jan 2020
Cited by 3
Abstract
Using the AdaBoost method may increase the accuracy and reliability of a framework for daily activities and environment recognition. Mobile devices have several types of sensors, including motion, magnetic, and location sensors, that allow accurate identification of daily activities and environment. This paper focuses on the review of the studies that use the AdaBoost method with the sensors available in mobile devices. This research identified the research works written in English about the recognition of daily activities and environment recognition using the AdaBoost method with the data obtained from the sensors available in mobile devices that were published between 2012 and 2018. Thus, 13 studies were selected and analysed from 151 identified records in the searched databases. The results proved the reliability of the method for daily activities and environment recognition, highlighting the use of several features, including the mean, standard deviation, pitch, roll, azimuth, and median absolute deviation of the signal of motion sensors, and the mean of the signal of magnetic sensors. When reported, the analysed studies presented an accuracy higher than 80% in recognition of daily activities and environments with the Adaboost method. Full article
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)
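
The features this review highlights (mean, standard deviation, pitch, roll, azimuth, and median absolute deviation) are straightforward to compute from a window of accelerometer samples. The sketch below uses the standard static-tilt formulas for pitch and roll and omits azimuth, which requires a magnetometer; the axis conventions are an assumption, as they vary between devices:

```python
import numpy as np

def orientation_features(acc):
    """Mean, std, median absolute deviation, pitch and roll from an
    (N, 3) accelerometer window, with gravity expressed in m/s^2."""
    ax, ay, az = acc.mean(axis=0)
    pitch = np.degrees(np.arctan2(-ax, np.hypot(ay, az)))  # static-tilt formula
    roll = np.degrees(np.arctan2(ay, az))
    mad = np.median(np.abs(acc - np.median(acc, axis=0)), axis=0)
    return {"mean": acc.mean(axis=0), "std": acc.std(axis=0),
            "mad": mad, "pitch": float(pitch), "roll": float(roll)}

# Device lying flat: gravity entirely on the z axis, so pitch = roll = 0
flat = np.tile([0.0, 0.0, 9.81], (100, 1))
feats = orientation_features(flat)
```

Vectors built from features like these are what the reviewed studies feed to AdaBoost for activity and environment recognition.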
