Assistive Robots for Healthcare and Human–Robot Interaction

Assistive robots are still mostly prototypes that only remotely resemble human interactive dynamics [...].

The studies collected in this Special Issue contribute to advancing knowledge on assistive robots and HRI, considering these technologies as enablers that support the process of caregiving, potentially enhancing patients' well-being and reducing caregiver workload.
In the first study, Kim and colleagues [1] discussed studies on care robots and the human-centered artificial intelligence framework, presented an ethical design for the sensing services of care robots, and reported the development of a care robot for frail older adult users.
In the second study, Aygun et al. [2] analyzed and modeled data from a multi-modal simulated driving study specifically designed to evaluate different levels of cognitive workload induced by various secondary tasks, such as dialogue interactions and braking events, in addition to the primary driving task. They performed statistical analyses of various physiological signals, including eye gaze, electroencephalography, and arterial blood pressure, from healthy volunteers, and utilized several machine learning methodologies, including k-nearest neighbor, naive Bayes, random forest, support-vector machine, and neural network-based models, to infer human cognitive workload levels.
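The general approach described above can be illustrated with a minimal sketch: training and comparing the named classifier families on multi-class workload labels. The data here are synthetic stand-ins for the physiological features, and all parameter choices are illustrative assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch of multi-class workload classification: the features
# stand in for eye-gaze/EEG/blood-pressure measurements, and the three
# classes stand in for workload levels (low / medium / high).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic data: 600 samples, 12 features, 3 workload levels.
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# The five model families mentioned in the study.
models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "naive Bayes": GaussianNB(),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                    random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: accuracy = {acc:.2f}")
```

In practice, each family would be tuned and evaluated with cross-validation rather than a single split, but the comparison loop has the same shape.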
In the third study, Michel et al. [3] presented a prototype robot capable of supporting a surgeon during otological surgery. On the basis of the observation that patients may move or wake up during an operation, and that the surgeon must regularly clean the endoscope optics, a new robot architecture was presented.
In the fourth study, Julia Arias-Rodríguez and colleagues [4] discussed the feasibility and validation of a microwave antenna-based imaging system for intra-operative surgical navigation. The authors reported that the experimental assessment of the proposed system showed accuracies and errors consistent with those of approaches based on other technologies in the literature, thus motivating further studies.
In the fifth study, Grazia D'Onofrio et al. [5] investigated whether traditional machine learning algorithms could assess each user's emotions separately, compared emotion recognition across two robotic modalities (a static and a moving robot), and evaluated the acceptability and usability of an assistive robot from an end-user's point of view. The authors reported that the random forest algorithm outperformed the k-nearest neighbor algorithm in terms of both accuracy and execution time, and that the robot was not a disturbing factor in the arousal of emotions.
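A head-to-head comparison of this kind, measuring both accuracy and execution time, can be sketched as follows. The data are synthetic stand-ins for emotion-recognition features, and the class count and parameters are illustrative assumptions rather than the study's setup.

```python
# Illustrative sketch (not the study's data or pipeline): comparing
# random forest and k-nearest neighbors on synthetic features with six
# classes standing in for emotion categories, timing each model.
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=800, n_features=20, n_informative=8,
                           n_classes=6, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

results = {}
for name, clf in [
    ("random forest", RandomForestClassifier(n_estimators=100, random_state=1)),
    ("k-NN", KNeighborsClassifier(n_neighbors=5)),
]:
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    elapsed = time.perf_counter() - t0
    results[name] = (accuracy_score(y_te, pred), elapsed)

for name, (acc, secs) in results.items():
    print(f"{name}: accuracy={acc:.2f}, time={secs:.3f}s")
```

Which model wins on wall-clock time depends heavily on data size and hardware, which is why reporting both metrics together, as the authors did, is informative.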
In the sixth study, Slawomir Tobis et al. [6] focused on technology acceptance. They asked whether giving respondents the possibility of interacting with the technology had an impact on the scores they awarded in various domains of the needs and requirements for social robots to be deployed in the care of older adults. The authors concluded that pre-implementation studies and assessments should include the possibility of interacting with the robot, to provide its future users with a clear idea of the technology and facilitate the necessary customizations of the machine.
In the seventh study, Kazuyuki Matsumoto and colleagues [7] focused on interview dialogue systems, proposing a method based on a multi-task learning neural network that uses embedded representations of sentences to understand the context of the text and utilizes the intention of an utterance as a feature.
In the eighth study, Chris Lytridis et al. [8] presented novel tools for the analysis of human behavior data in robot-assisted special education for children with autism spectrum disorder (ASD). Their tools support an understanding of human behavior in response to an array of robot actions and an improved intervention design based on suitable mathematical instruments.
In the ninth study, Grazia D'Onofrio and colleagues [9] determined the needs and preferences of older people and their caregivers, to improve healthy and active aging and to guide the development of a technological system. Additionally, these authors highlighted the importance of pre-implementation studies for improving the acceptance of technological systems by end-users.
In the tenth study, Hsiao-Kuan Wu et al. [10] showed that their robot can follow the user in a designated position while the user performs forward, backward, and lateral movements, turns, and walks along a curve. In the eleventh study, Nan Liang and Goldie Nejat [11] presented the first comprehensive investigation and meta-analysis of two types of robotic presence, to determine how they influence HRI outcomes and impact user tasks.
In the final manuscript, Amal Alabdulkareem and colleagues [12] presented a systematic review and asserted that robot-assisted therapy is a promising field of application for intelligent social robots, especially in supporting children with ASD in achieving their therapeutic and educational objectives (social and emotional development, communication and interaction development, cognitive development, motor development, sensory development, and areas other than developmental ones).
In light of this Special Issue, as the area of social robotics and HRI grows, public demonstrations have the potential to provide insights into robot and system effectiveness in public settings and into people's reactions. One remaining challenge is that, although the dynamics of expressions and emotions have been extensively modeled in the literature, how to model personality in a time-continuous manner remains an open problem.

Conflicts of Interest:
The authors declare no conflict of interest.