Article

Feeling of Safety and Comfort towards a Socially Assistive Unmanned Aerial Vehicle That Monitors People in a Virtual Home

by Lidia M. Belmonte 1,2, Arturo S. García 2,3, Rafael Morales 1,2, Jose Luis de la Vara 2,3, Francisco López de la Rosa 2 and Antonio Fernández-Caballero 2,3,4,*
1 Departamento de Ingeniería Eléctrica, Electrónica, Automática y Comunicaciones, Universidad de Castilla-La Mancha, 02071 Albacete, Spain
2 Instituto de Investigación en Informática de Albacete, Universidad de Castilla-La Mancha, 02071 Albacete, Spain
3 Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071 Albacete, Spain
4 Biomedical Research Networking Center in Mental Health (CIBERSAM), 28016 Madrid, Spain
* Author to whom correspondence should be addressed.
Sensors 2021, 21(3), 908; https://doi.org/10.3390/s21030908
Submission received: 28 December 2020 / Revised: 21 January 2021 / Accepted: 23 January 2021 / Published: 29 January 2021
(This article belongs to the Section Intelligent Sensors)

Abstract: Unmanned aerial vehicles (UAVs) represent a new model of social robot for the home care of dependent persons. This article introduces a study on people's feelings of safety and comfort while watching the monitoring trajectory of a quadrotor dedicated to determining their condition. Three main parameters are evaluated: the relative monitoring altitude, the monitoring velocity and the shape of the monitoring path around the person (ellipsoidal or circular). For this purpose, a new trajectory generator based on a state machine, successfully implemented and simulated in MATLAB/Simulink®, is described. The study was carried out with 37 participants using a virtual reality (VR) platform based on two modules, a UAV simulator and a VR visualiser, communicating through the MQTT protocol. The participants preferred a high relative monitoring altitude, a high monitoring velocity and a circular path. These choices are a starting point for the design of trustworthy socially assistive UAVs flying in real homes.

1. Introduction

Over the last decade, there has been ever-growing interest in human–robot interaction (HRI), not only in traditional industrial fields but also in emerging areas such as homes [1]. Of note, personal aerial robotics is becoming more and more pervasive in our home environments. It therefore seems vital to understand how home drones are perceived and understood by inhabitants if they are to be fully accepted [2]. Within the broad class of personal robots, assistive robots for the elderly are usually grouped into rehabilitation robots and socially assistive robots [3]. The manner in which people accept socially assistive robots in their lives is still unknown [1]. Moreover, one-third of assistive technologies are abandoned within one year of use [4]. For this reason, acceptability is an essential aspect in overcoming resistance toward socially assistive robots, which is why acceptance tests for assistive robots caring for older adults are in high demand [3,4].
Social robot capabilities include approaching people in a natural manner [5] and employing affective elements close to those of human–human interaction [6]. In this sense, the concept of trust is very important in the adoption of technologies to assist older adults at home [7,8]. Trust can be defined as an attitudinal judgement of the degree to which a user (the ageing adult) can rely on an agent (the socially assistive robot) to achieve its goals under conditions of uncertainty [9]. People are more reluctant to engage with robots if negative consequences are more likely, and once confidence has been lost, people take longer to use the technology again [6,10]. Moreover, the safety and efficiency of HRI collaboration often depend on appropriately calibrating trust towards the robot [9] and on using a user-centred approach to understand what impacts the development of trust [11]. To date, trust regarding older adults' adoption of assistive technology has been determined in several ways, including whether the older adult feels safe and comfortable with the proposed solution [12,13,14,15]. The evaluation of these variables requires advanced physical prototyping or, as an alternative, virtual reality (VR) as a simulation tool that allows for fast, flexible and iterative testing processes [16,17,18].
This paper deals with the use of unmanned aerial vehicles (UAVs) as socially assistive robots for dependent people, including ageing adults [19,20]. The design and implementation of the UAV is being carried out step by step. At each stage of progress, the focus is on ascertaining the acceptance of the final beneficiary regarding the use of the UAV. Considering the characterisation of socially interactive robots [21], the UAV at this stage of design includes the perception of emotions, the consideration of natural cues and social competencies such as adaptation to the needs of the users. Other features will be implemented in the future, e.g., communication through high-level dialogue and the display of personality and distinctiveness. The purpose of our system is therapeutic help, and therefore assistance.
Since it is essential to build a relationship of trust between the person and the flying assistive robot in order to achieve full acceptance from the perspective of the assisted human, it is fundamental to consider the comfort and well-being of the end user. In this sense, the social UAV must carry out its mission of helping the person while interrupting their daily routine as little as possible. The UAV should be seen as a positive presence and not as a hindrance or even a danger to the person. More concretely, this paper introduces an assistive UAV for the home care of dependent people. The mission of the robot is to perform a monitoring flight from time to time to determine the person's condition [22] and the possible assistance needed. This monitoring flight basically consists of a series of manoeuvres to take off, get close to the person, fly around them to obtain facial images and then return to the base. In this way, the main interaction between the socially assistive UAV and the person resides in the central part of the monitoring process, the flight around the assisted person. A flying assistance robot could be an effective surveillance solution for a dependent person, since it can be programmed to search for and capture data without depending on the person to activate any emergency device. Since it can move to wherever the person is, it is likely to be more effective in its vigilance than multiple cameras installed in fixed places in the house. Finally, the social aspect of the robot can contribute to user acceptance.
The present study focuses solely on analysing three key parameters of the monitoring process: (i) the relative flight height; (ii) the speed of the UAV during the lap around the person; and (iii) the shape of the trajectory that the UAV follows around them, considering two main options, namely a circular path, which maintains a constant distance between the person and the UAV, and an ellipsoidal one, in which the distance changes along the way. It should be noted that implementing these options required the development of a new trajectory planning algorithm for a quadrotor model, whose mathematical basis is explained in detail in this paper. The article thus presents the results of a VR-based trust study considering feelings of safety and comfort regarding the trajectories followed by a socially assistive quadrotor while monitoring a person.
In order to evaluate the users' sense of safety and comfort, a survey was conducted in which the participants assessed the three parameters of the UAV's trajectory during the monitoring process in a VR environment. The conclusions of this study will allow us to continue our challenging research project on assistive UAVs to advance the home care of dependent people. The VR platform used consists of two modules interconnected by means of the message queuing telemetry transport (MQTT) protocol: (i) a complete UAV simulator that includes the trajectory generator for the monitoring process, the control scheme and the dynamic model of a quadrotor; and (ii) a VR visualiser in which a quadrotor's 3D model performs the monitoring process of an avatar in a virtual home. This platform makes it possible to experience the UAV's monitoring process in first person, i.e., to observe different monitoring paths in order to report which one feels more comfortable, as well as to evaluate other aspects regarding safety and privacy. In the simulated scene, the person is completely still, looking forward, and aware of the flight of the UAV around them. In future simulation versions, the level of realism in terms of daily life scenes will be increased. The person will no longer be standing still in a certain (and known) position, but will be allowed to move freely around their home. Neither does this version of the simulator consider obstacles (fixed or mobile) that the UAV may encounter during flight in the real world. This issue will soon be addressed using computer vision techniques [23,24].
The remainder of the article is structured as follows. Section 2 describes the main characteristics of the overall system including the VR platform. Section 3.1 details the mathematical development of the new ellipsoidal trajectory planner implemented to carry out the study with the participants. Section 4 details the safety and comfort evaluation study regarding the monitoring parameters studied. Section 5 offers the results of the study. Finally, Section 6 summarises the conclusions of the work and introduces some future research lines.

2. General Description of the System

The proposed socially assistive robot's main mission is to serve as a tool for the in-home monitoring of a dependent person, enabling the recording of images that can be analysed to determine their condition and the possible assistance needed. In this way, it is possible to provide the patient with help or support and improve their quality of life. This is a relatively new area of investigation. In this respect, we can highlight a project on a custom-made quadcopter for monitoring patients indoors using images, sonar and voice recognition techniques [25]. A recent paper has introduced some of the most recent and interesting applications that drones can find in creating ambient assisted living environments for the elderly [26]. Furthermore, a framework has been proposed for UAV navigation and people/object detection in cluttered indoor environments.
As mentioned above, the monitoring process basically consists of a flight around the person's position to obtain images of their face. Indeed, several very recent papers are dedicated to the still challenging task of designing robust flight controllers for small flying vehicles (e.g., [27,28,29]), especially indoors [30]. In our case, a visual interaction between the robot and the assisted person occurs during the flight around the person. The objective of this work is to analyse what this monitoring path should look like in order to elicit the highest degree of safety and comfort in the monitored person, which is an imperative issue for indoor drones [31]. For this reason, three key parameters, detailed below, are proposed.
The first parameter is the relative monitoring altitude ($z_r$), that is, the height with respect to the monitored person's head ($z_p$) at which the UAV performs the supervisory flight; i.e., the monitoring altitude is given by the expression $z_m = z_p + z_r$. As shown in Figure 1, the relative altitude parameter can be positive, negative or even null, giving rise to three possible options in the monitoring process: the UAV is located (i) above ($z_r > 0$), (ii) at ($z_r = 0$) or (iii) below ($z_r < 0$) the person's head height.
The second parameter is the monitoring velocity ($\omega_m$), that is, the speed at which the UAV moves during the flight around the assisted person. This velocity directly influences the monitoring time and is, therefore, a parameter that can greatly influence the feelings of comfort and safety of the monitored person. Finally, the third parameter is the monitoring radius, i.e., the distance at which the UAV is placed from the person's position during the monitoring flight. This radius can be constant, which leads the UAV to trace a circular path around the person being monitored, or variable, resulting in an elliptical path. For this reason, we also refer to this parameter as the monitoring trajectory (or the shape of the trajectory). Figure 2 shows the three possible options: (a) an elliptical trajectory with the UAV closer to the person's face, (b) a circular trajectory with a constant distance between the UAV and the person, and (c) an elliptical trajectory with the UAV farther from the person's face and closer to their side.
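For concreteness, the three parameters can be grouped into a single configuration object. The following minimal Python sketch is illustrative only; the names and the helper method are ours and not part of the paper's MATLAB/Simulink® simulator:

```python
from dataclasses import dataclass

@dataclass
class MonitoringParams:
    """Illustrative grouping of the three parameters under study
    (names are ours, not taken from the paper's simulator)."""
    z_r: float      # relative monitoring altitude [m]: >0 above, 0 at, <0 below head level
    omega_m: float  # monitoring velocity around the person [rad/s]
    r_x: float      # ellipse radius along the person's facing direction [m]
    r_y: float      # ellipse radius along the perpendicular direction [m]
    # A circular path is the special case r_x == r_y.

    def monitoring_altitude(self, z_p: float) -> float:
        """Absolute flight level z_m = z_p + z_r (person's head height plus z_r)."""
        return z_p + self.z_r
```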
As can be inferred from the above parameters, the monitoring process is closely related to the position and orientation of the assisted person. Therefore, the trajectory planner needs to know these two variables at all times. This information would be provided by a module in charge of locating the person, probably implemented via computer vision using the camera on board the UAV. However, even though it is paramount for the functioning of the system, this module is out of the scope of the paper and will be addressed in the future. For this paper, we assume that the current position and orientation of the person are available at all times. Based on this information, the planner determines the reference signals for the position and yaw angle of the UAV for the different manoeuvres that make up the entire monitoring process. These references are the input to the control algorithm, which finally calculates the control actions to be applied for the UAV to perform the flight correctly.
Figure 3 represents the general scheme of the socially assistive UAV simulator implemented in MATLAB/Simulink®. It shows the blocks that compose it: the dynamic model of a quadrotor UAV, the nonlinear control algorithm and the trajectory planner for the monitoring process. It should be mentioned that (i) this UAV simulator is part of a VR platform developed by our research group, whose operation is explained in Section 4, and (ii) the dynamics of the quadrotor and the generalised proportional integral (GPI) controller can be consulted in a previous work [32]. Regarding the trajectory planner, the initial versions [16,33] were designed for a circular monitoring path at a constant height defined by the person's head. However, to study the influence of the above-mentioned parameters on the safety and comfort of the monitored user, it has been necessary to implement a new trajectory planner algorithm, which is detailed next.
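To make the block structure concrete, the sketch below wires stand-in versions of the three blocks into a simple simulation loop. It is a minimal illustration only: the reference, control and dynamics functions are deliberately simplistic placeholders (a hover reference, a proportional law and first-order kinematics), not the paper's trajectory planner, GPI controller [32] or quadrotor model.

```python
# Illustrative loop over the three blocks in Figure 3: planner -> controller -> model.

def plan_reference(t, person_pose):
    # Stand-in planner: hover 0.5 m above the person's head (z_r = 0.5 m assumed)
    x_p, y_p, z_p, alpha_p = person_pose
    return (x_p, y_p, z_p + 0.5, 0.0)              # x, y, z, yaw references

def control(ref, state, gains=(1.0, 1.0, 2.0, 0.5)):
    # Stand-in proportional law; the actual simulator uses a GPI controller [32]
    return [k * (r - s) for k, r, s in zip(gains, ref, state)]

def quadrotor_step(state, u, dt):
    # Stand-in first-order kinematics in place of the full quadrotor dynamics
    return [s + ui * dt for s, ui in zip(state, u)]

state = [0.0, 0.0, 0.0, 0.0]                       # x, y, z, yaw
person = (2.0, 1.0, 1.6, 0.0)                      # hypothetical person pose
dt = 0.02
for k in range(500):                               # 10 s of simulated flight
    ref = plan_reference(k * dt, person)
    state = quadrotor_step(state, control(ref, state), dt)
print(state)
```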

3. Trajectory Planning

This section describes the new ellipsoidal trajectory planning algorithm for the monitoring flight of a quadrotor UAV. It consists of a state machine in which the reference signals for the UAV's position and yaw angle are defined for each of the manoeuvres necessary to complete the monitoring process. It should be recalled that the signals generated by the path planner are the (reference) inputs of the UAV's controller, which in turn calculates the (control) inputs to be applied to the aircraft to direct its flight correctly during the monitoring of the person. In this process, the information captured by the camera on board the UAV will be sent to a processing station for analysis using artificial intelligence techniques, such as emotion recognition [34,35,36], to determine the person's physical condition and possible assistance [37].
Before detailing the equations of the different states that make up the planner, we proceed to define the mathematical basis for plotting the ellipsoidal path around the person (an ellipse on the horizontal plane defined by the monitoring height, centred on the person’s position and rotated according to their orientation).

3.1. Equations of the Ellipse

Firstly, the main path of the monitoring process consists of an ellipse in the horizontal plane ($XY$), whose mathematical equation in the simplest case (centred at the origin) is the following:

$$\frac{x^2}{R_x^2} + \frac{y^2}{R_y^2} = 1 \tag{1}$$

where $R_x$ is the radius on the $X$ axis and $R_y$ is the radius on the $Y$ axis.
However, we must consider that the ellipse will be centred on the position of the person; therefore, its equation becomes:

$$\frac{(x - C_x)^2}{R_x^2} + \frac{(y - C_y)^2}{R_y^2} = 1 \tag{2}$$

where $R_x$ and $R_y$ are the radii on the $X$ and $Y$ axes, respectively, and $(C_x, C_y)$ are the coordinates of the centre of the ellipse, which will coincide with the person's horizontal position $(x_p, y_p)$.
Moreover, we must consider that the ellipse will also be rotated by an angle $\theta$ [rad] according to the orientation of the person ($\alpha_p$), and therefore the equation is expressed as follows:

$$\frac{\left((x - C_x)\cos(\theta) + (y - C_y)\sin(\theta)\right)^2}{R_x^2} + \frac{\left((x - C_x)\sin(\theta) - (y - C_y)\cos(\theta)\right)^2}{R_y^2} = 1 \tag{3}$$

where $R_x$ and $R_y$ are the radii on the $X$ and $Y$ axes, respectively, $(C_x, C_y)$ are the coordinates of the centre of the ellipse, and $\theta$ is the above-mentioned rotation angle.
To conclude, Equation (4) details the parametric formula, depending on the variable $\gamma$, of an ellipse centred at the position $(C_x, C_y)$ and rotated by an angle $\theta$. Figure 4 represents this parameterisation in the particular case of the monitoring process, i.e., when the ellipse is centred and rotated according to the person's position and orientation, respectively ($(C_x, C_y) = (x_p, y_p)$; $\theta = \alpha_p$). This parametric equation will be useful for the generation of the reference path for the UAV's $XY$ position, as will be explained below.

$$\begin{aligned} x(\gamma) &= C_x + R_x \cos(\gamma)\cos(\theta) - R_y \sin(\gamma)\sin(\theta) \\ y(\gamma) &= C_y + R_x \cos(\gamma)\sin(\theta) + R_y \sin(\gamma)\cos(\theta) \end{aligned} \tag{4}$$
where:
  • $C_x$ is the X-coordinate of the centre [m],
  • $C_y$ is the Y-coordinate of the centre [m],
  • $R_x$ is the radius on the X-axis [m],
  • $R_y$ is the radius on the Y-axis [m],
  • $\theta$ is the rotation angle [rad], and
  • $\gamma$ is the parameter, which ranges from 0 to $2\pi$.
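As an illustration, Equation (4) translates directly into code. The following Python sketch is ours, with hypothetical example values (the paper's implementation is in MATLAB/Simulink®), and generates points of the reference path around a person:

```python
import math

def ellipse_point(gamma, c_x, c_y, r_x, r_y, theta):
    """Point on an ellipse centred at (c_x, c_y) with radii (r_x, r_y),
    rotated by theta [rad], for a parameter gamma in [0, 2*pi) (Equation (4))."""
    x = c_x + r_x * math.cos(gamma) * math.cos(theta) - r_y * math.sin(gamma) * math.sin(theta)
    y = c_y + r_x * math.cos(gamma) * math.sin(theta) + r_y * math.sin(gamma) * math.cos(theta)
    return x, y

# Reference lap around a person at (x_p, y_p) facing alpha_p: sweep gamma over one turn.
x_p, y_p, alpha_p = 2.0, 1.0, math.pi / 4          # hypothetical person pose
path = [ellipse_point(2 * math.pi * k / 100, x_p, y_p, 1.0, 1.5, alpha_p)
        for k in range(101)]
```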

3.2. Determination of the Safety Position

The safety position is the point that the UAV approaches after take-off (to reach the person's immediate surroundings) and from which it begins to rotate around the person to take the images that will be sent for analysis. After completing the data capture lap, the UAV returns from this safety position to the base position for landing. The safety position is therefore an important waypoint in the monitoring process, and its calculation is explained next. When an ellipsoidal monitoring trajectory is considered, the $XY$ coordinates of the safety position are determined by the nearest intersection of the imaginary line connecting the centre of the person and the position of the UAV with the ellipse centred and rotated according to the person's position and orientation (see Figure 5). It should be recalled that the $Z$ coordinate of this safety position is given by the UAV's altitude in the monitoring process (the person's head height plus the relative altitude parameter).
The problem raised is thus twofold: (1) determine the points of intersection of the line passing through the centre of the ellipse and the position of the UAV with the ellipse rotated and centred on the person; and (2) determine which of these intersection points is closest to the position of the UAV. The mathematical resolution is detailed below.

3.2.1. Calculation of the Intersection Points with the Rotated Ellipse

The equation of a line defined by two points, $(x_a, y_a)$ and $(x_b, y_b)$, is detailed in Equation (5):

$$y = y_a + \frac{y_b - y_a}{x_b - x_a} \cdot (x - x_a) \tag{5}$$
The above can also be expressed in parametric form, giving rise to two individual expressions for each component of the horizontal plane ($XY$), dependent on the parameter $t$:

$$x(t) = x_a + (x_b - x_a) \cdot t; \qquad y(t) = y_a + (y_b - y_a) \cdot t \tag{6}$$
If the point $(x_a, y_a)$ coincides with the centre of the ellipse around the person, $(x_p, y_p)$, and the point $(x_b, y_b)$ is the UAV's position, $(x_d, y_d)$, the parametric Equation (6) can be written as follows:

$$x(t) = x_p + (x_d - x_p) \cdot t; \qquad y(t) = y_p + (y_d - y_p) \cdot t \tag{7}$$
In order to calculate the intersection between the line defined by Equation (6) and the ellipse defined by the person's pose, i.e., centred and rotated according to their position and orientation ($(C_x, C_y) = (x_p, y_p)$; $\theta = \alpha_p$), it is necessary to substitute Equation (7) into Equation (3), obtaining:

$$\frac{\left((x_p + (x_d - x_p)t - x_p)\cos(\alpha_p) + (y_p + (y_d - y_p)t - y_p)\sin(\alpha_p)\right)^2}{R_x^2} + \frac{\left((x_p + (x_d - x_p)t - x_p)\sin(\alpha_p) - (y_p + (y_d - y_p)t - y_p)\cos(\alpha_p)\right)^2}{R_y^2} = 1 \tag{8}$$

$$\frac{\left((x_d - x_p)t\cos(\alpha_p) + (y_d - y_p)t\sin(\alpha_p)\right)^2}{R_x^2} + \frac{\left((x_d - x_p)t\sin(\alpha_p) - (y_d - y_p)t\cos(\alpha_p)\right)^2}{R_y^2} = 1 \tag{9}$$
After some operations, Equation (9) is simplified; the result is detailed in Equation (10):

$$\frac{t^2 \cdot f_1}{R_x^2} + \frac{t^2 \cdot f_2}{R_y^2} = 1 \tag{10}$$

where

$$\begin{aligned} f_1 &= (x_d^2 + x_p^2 - 2 x_d x_p)\cos^2(\alpha_p) + (y_d^2 + y_p^2 - 2 y_d y_p)\sin^2(\alpha_p) + 2(x_d - x_p)(y_d - y_p)\cos(\alpha_p)\sin(\alpha_p) \\ f_2 &= (x_d^2 + x_p^2 - 2 x_d x_p)\sin^2(\alpha_p) + (y_d^2 + y_p^2 - 2 y_d y_p)\cos^2(\alpha_p) - 2(x_d - x_p)(y_d - y_p)\sin(\alpha_p)\cos(\alpha_p) \end{aligned}$$

Note that both coefficients can be factored as perfect squares, $f_1 = \left((x_d - x_p)\cos(\alpha_p) + (y_d - y_p)\sin(\alpha_p)\right)^2$ and $f_2 = \left((x_d - x_p)\sin(\alpha_p) - (y_d - y_p)\cos(\alpha_p)\right)^2$, so they are non-negative and the square root in the next step is always real.
Now, from Equation (10) it is possible to obtain the expression of the parameter $t$ (see Equation (11)), and from that determine two possible solutions, $t_1$ and $t_2$, given by Equations (12) and (13), respectively:

$$t = \pm\frac{R_x \cdot R_y}{\sqrt{f_1 \cdot R_y^2 + f_2 \cdot R_x^2}} \tag{11}$$

  • Solution 1:

$$t_1 = \frac{R_x \cdot R_y}{\sqrt{f_1 \cdot R_y^2 + f_2 \cdot R_x^2}} \tag{12}$$

  • Solution 2:

$$t_2 = -t_1 = -\frac{R_x \cdot R_y}{\sqrt{f_1 \cdot R_y^2 + f_2 \cdot R_x^2}} \tag{13}$$
Substituting the two solutions for the parameter $t$, $t_1$ and $t_2$, into the parametric Equation (7), we obtain the two points of intersection between the rotated ellipse and the line joining the position of the UAV with the centre of the ellipse (the position of the person):

  • Solution 1:

$$x_1 = x(t_1) = x_p + (x_d - x_p) \cdot t_1; \qquad y_1 = y(t_1) = y_p + (y_d - y_p) \cdot t_1 \tag{14}$$

  • Solution 2:

$$x_2 = x(t_2) = x_p + (x_d - x_p) \cdot t_2; \qquad y_2 = y(t_2) = y_p + (y_d - y_p) \cdot t_2 \tag{15}$$

3.2.2. Determination of the Nearest Intersection Point (Safety Position)

Finally, to determine the safety position $(x_{sp}, y_{sp})$, it is necessary to know which intersection point is the closest to the position of the UAV (see the example in Figure 5). To do this, we calculate the distance between the UAV and the two candidates calculated above:

$$d_1 = \sqrt{(x_1 - x_d)^2 + (y_1 - y_d)^2}; \qquad d_2 = \sqrt{(x_2 - x_d)^2 + (y_2 - y_d)^2} \tag{16}$$
Thus, if the distance $d_1$ is less than or equal to $d_2$, the first point will be the safety position in the monitoring process, $(x_{sp}, y_{sp}) = (x_1, y_1)$. Conversely, if the distance $d_2$ is less than $d_1$, the safety position will be the second point, $(x_{sp}, y_{sp}) = (x_2, y_2)$. Remember that the $Z$ coordinate of this safety position is given by the altitude of the UAV in the monitoring process, $z_m$ (the person's head height plus the relative altitude parameter).
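The whole derivation of Sections 3.2.1 and 3.2.2 condenses into a few lines of code. The Python sketch below (ours, not the paper's MATLAB/Simulink® code) uses the factored forms of $f_1$ and $f_2$ noted above and assumes the UAV is not exactly at the person's position:

```python
import math

def safety_position_xy(x_d, y_d, x_p, y_p, alpha_p, r_x, r_y):
    """Nearest intersection of the line through the person (x_p, y_p) and the
    UAV (x_d, y_d) with the rotated, person-centred ellipse (Equations (8)-(16)).
    Assumes (x_d, y_d) != (x_p, y_p); the Z coordinate (z_m) is handled separately."""
    dx, dy = x_d - x_p, y_d - y_p
    # Factored forms of f1 and f2 from Equation (10)
    f1 = (dx * math.cos(alpha_p) + dy * math.sin(alpha_p)) ** 2
    f2 = (dx * math.sin(alpha_p) - dy * math.cos(alpha_p)) ** 2
    t1 = (r_x * r_y) / math.sqrt(f1 * r_y**2 + f2 * r_x**2)      # Equation (12)
    t2 = -t1                                                     # Equation (13)
    # Equations (14) and (15): the two intersection points
    candidates = [(x_p + dx * t, y_p + dy * t) for t in (t1, t2)]
    # Equation (16): keep the candidate closest to the UAV
    return min(candidates, key=lambda p: math.hypot(p[0] - x_d, p[1] - y_d))
```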

3.3. Ellipsoidal Trajectory around the Person

To generate the reference ellipsoidal path around the person, it is necessary to convert the safety position, at which the UAV pauses before starting the lap, into the corresponding value of the parameter $\gamma$ of the parametric equation of the ellipse, Equation (4). To do this, we first calculate the position on the original ellipse, $(x_{eo}, y_{eo})$ (an ellipse without rotation and centred at the point $(0, 0)$), that corresponds to the safety position on the rotated and person-centred ellipse, $Pos_{ERot} = (x_{sp}, y_{sp})$. Using matrix notation, we obtain:

$$\underbrace{\begin{pmatrix} x_{sp} \\ y_{sp} \end{pmatrix}}_{Pos_{ERot}} = \underbrace{\begin{pmatrix} \cos(\alpha_p) & -\sin(\alpha_p) \\ \sin(\alpha_p) & \cos(\alpha_p) \end{pmatrix}}_{MAT_{ROT}} \cdot \underbrace{\begin{pmatrix} x_{eo} \\ y_{eo} \end{pmatrix}}_{Pos_{EOrig}} + \underbrace{\begin{pmatrix} x_p \\ y_p \end{pmatrix}}_{Centre} \tag{17}$$

$$\begin{pmatrix} x_{eo} \\ y_{eo} \end{pmatrix} = \begin{pmatrix} \cos(\alpha_p) & -\sin(\alpha_p) \\ \sin(\alpha_p) & \cos(\alpha_p) \end{pmatrix}^{-1} \cdot \begin{pmatrix} x_{sp} - x_p \\ y_{sp} - y_p \end{pmatrix} \tag{18}$$
Secondly, depending on the quadrant of the original ellipse in which the calculated position $(x_{eo}, y_{eo})$ is located, the value of the parameter $\gamma$ is determined using the inverse cosine or sine functions as follows:

  • Quadrant I ($x_{eo} > 0$, $y_{eo} > 0$):

$$\gamma = \arccos\left(\frac{x_{eo}}{R_x}\right) = \arcsin\left(\frac{y_{eo}}{R_y}\right) \tag{19}$$

  • Quadrant II ($x_{eo} < 0$, $y_{eo} > 0$):

$$\gamma = \arccos\left(\frac{x_{eo}}{R_x}\right) = \pi - \arcsin\left(\frac{y_{eo}}{R_y}\right) \tag{20}$$

  • Quadrant III ($x_{eo} < 0$, $y_{eo} < 0$):

$$\gamma = 2\pi - \arccos\left(\frac{x_{eo}}{R_x}\right) = \pi - \arcsin\left(\frac{y_{eo}}{R_y}\right) \tag{21}$$

  • Quadrant IV ($x_{eo} > 0$, $y_{eo} < 0$):

$$\gamma = 2\pi - \arccos\left(\frac{x_{eo}}{R_x}\right) = 2\pi + \arcsin\left(\frac{y_{eo}}{R_y}\right) \tag{22}$$
Once we know the value of the parameter $\gamma$ that corresponds to the initial position from which the ellipsoidal path around the person will start, i.e., the equivalent of the safety position ($\gamma_{initial} = \gamma_{sp}$), we can use the parametric formula of the ellipse, Equation (4), and gradually increase its parameter to complete one lap ($\gamma_{initial} + 2\pi$ rad). This is the procedure used to obtain the reference trajectories for the UAV's $XY$ position on the ellipse around the person, as detailed below.
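In code, the inversion of Equation (18) and the quadrant rules of Equations (19)–(22) can be collapsed into a single `atan2` call, since $\cos(\gamma) = x_{eo}/R_x$ and $\sin(\gamma) = y_{eo}/R_y$. A minimal Python sketch (ours, mathematically equivalent to the quadrant-by-quadrant formulation above):

```python
import math

def gamma_from_safety_position(x_sp, y_sp, x_p, y_p, alpha_p, r_x, r_y):
    """Parameter gamma of Equation (4) corresponding to the safety position."""
    # Equation (18): undo the translation, then rotate back by -alpha_p
    x_eo = (x_sp - x_p) * math.cos(alpha_p) + (y_sp - y_p) * math.sin(alpha_p)
    y_eo = -(x_sp - x_p) * math.sin(alpha_p) + (y_sp - y_p) * math.cos(alpha_p)
    # cos(gamma) = x_eo/r_x and sin(gamma) = y_eo/r_y, so one atan2 call replaces
    # the four quadrant cases of Equations (19)-(22); wrap into [0, 2*pi)
    return math.atan2(y_eo / r_y, x_eo / r_x) % (2 * math.pi)
```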

3.4. States of the Trajectory Planner

This section describes in detail the states that compose the new trajectory planner for the monitoring process, considering an ellipsoidal path around the monitored person. In the previous version of the planner [16], the path around the person was circular. It is worth mentioning that this particular case can also be implemented using the equations described below simply by taking $R_x = R_y$. Compared to the previous version, in which the monitoring flight was only performed at a constant height defined by the person's head, it is now also possible to vary this height through the above-mentioned relative monitoring altitude parameter ($z_r$). In this way, the UAV can fly at the same height as the person's head ($z_r = 0$), at a higher height ($z_r > 0$) or at a lower height ($z_r < 0$).
Before describing the states of the planner, the following assumptions have been considered in its development: (i) the UAV remains in its base position, $(x_b, y_b, z_b)$, before and after a monitoring process; (ii) the monitored person's position, $(x_p, y_p, z_p)$, and orientation, $\alpha_p$, are known at each moment; (iii) the reference trajectories are defined so that the UAV's camera always points in its forward direction or towards the person; (iv) there are no energy limitations, since the monitoring process is carried out in a short period of time; and (v) consequently, it is presumed that the person does not walk during this brief process. The trajectory planner comprises a total of twelve states, represented graphically in Figure 6, whose equations for the position and orientation of the UAV are summarised in Table 1 and Table 2. The details of each state are as follows (a minimal code sketch of the state machine is given after the list).
  • STATE 0—Home: Represents the UAV waiting in its base position, $(x_b, y_b, z_b)$, initially oriented with a yaw angle $\psi(0)$, which by default is equal to 0 rad. When it receives the instruction to start the monitoring process, it transits to state 1.
  • STATE 1—Takeoff: This is the first manoeuvre, raising the UAV at a constant speed $v_z$ from the base level, $z_b$, up to the monitoring altitude, $z_m$. This level is equal to the person's height (measured at the centre of their head, $z_p$) plus the relative altitude parameter ($z_r$), which can be positive, zero or negative to cover the three scenarios mentioned before (above the head, coinciding with it or below it). When the UAV reaches the monitoring level, $z_m = z_p + z_r$, the planner switches to state 2.
  • STATE 2—Orientation Towards the Person: After takeoff, the UAV is requested to turn, varying the yaw angle at a speed defined by $\omega_\psi$, in order to find the person with its on-board camera. Since the person's position is known at each moment, it is possible to calculate the final (target) yaw angle, $\psi_{f2}$, which is the difference between the angle $\alpha = \arctan\left(\frac{y_p - y_{i2}}{x_p - x_{i2}}\right)$ and the camera's angle $\alpha_{camera}$, as can be observed in Figure 6a. The yaw angle is therefore gradually modified until the UAV's camera is focused on the person. At this moment, the planner transits to state 3.
  • STATE 3—Approximation: The UAV must approach the person (already centred in the camera's view) in order to turn around them in the next states. As already detailed, the UAV must reach the safety position, whose $XY$ coordinates are given by the nearest intersection of the (imaginary) line joining the positions of the UAV and the person with the ellipse centred and rotated according to the person's position and orientation, respectively. The relative monitoring altitude is kept constant, i.e., the $Z$ coordinate of the safety position matches the monitoring level. The planner transits to state 4 once the UAV reaches the safety position $(x_{sp}, y_{sp}, z_{sp})$.
  • STATE 4—Waiting in Safety Position: Intermediate state in which the UAV waits for a short time so that it can later start the lap around the person more precisely. Once the programmed time ($t_{s4}$) has elapsed, it transits to state 5.
  • STATE 5—Orbit Around the Person: At this moment, the ellipsoidal rotation of the UAV around the person starts from the safety position (whose equivalence to the parameter $\gamma$ of the ellipse's parametric formula is determined according to Equations (17)–(22)), while the flight height is kept constant. The UAV's yaw angle is modified during the ellipsoidal trajectory so that its on-board camera always points towards the person. Once the UAV's camera finds the person's face (when the UAV is in front of the face), the planner switches to state 6. Since the person's information is known, and the ellipse is rotated according to their orientation, $\theta = \alpha_p$, the position in front of their face, labelled as the photo position in Figure 6d, $(x_{ph}, y_{ph}, z_{ph})$, can be calculated from the ellipse's parameterisation by taking $\gamma = 0$ rad.
  • STATE 6—Data Capture: In the general case, the UAV can remain at the position in front of the person's face, $(x_{ph}, y_{ph}, z_{ph})$, to capture images with better accuracy. Once the data capture timer ($t_{s6}$) elapses, the planner transits to state 7. On the contrary, if the UAV is programmed to take pictures of the person during the entire lap around them, the waiting time can be set to zero ($t_{s6} = 0$) and the planner transits directly to state 7, the UAV continuing the lap without stopping.
  • STATE 7—Motion to Safety Position: The UAV continues its ellipsoidal rotation around the monitored person to reach the safety position again. The height of the UAV is kept constant and the yaw angle is varied so that the on-board camera remains pointed at the person. Mathematically, this state is therefore equal to state 5 and, consequently, the reference trajectories are determined in a very similar way. Once the UAV reaches the safety position, $(x_{sp}, y_{sp}, z_{sp})$, the planner transits to state 8.
  • STATE 8—Orientation Towards the Base: Once the UAV has completed its ellipsoidal trajectory around the monitored person, it is commanded to rotate in place, gradually adjusting its yaw angle to find the base with its camera. A procedure similar to state 2 is followed, but the final yaw angle is now known in advance. As can be seen in Figure 6g, the UAV must turn $\pi$ rad so that its camera points directly towards the base. At this point, the planner switches to state 9.
  • STATE 9—Return to Base: The aircraft is ordered to return to a position above the base while keeping the flight height constant. The UAV's movement is the same as in state 3 but in the opposite direction, so the reference trajectories are calculated in a similar way (in this case even the final position is known, so no additional calculation is necessary, unlike state 3, in which the safety position had to be determined). Once the UAV is positioned over its base, at $(x_b, y_b, z_m)$, the planner transits to state 10.
  • STATE 10—Yaw Angle Adjustment: Before landing, the yaw angle of the UAV is adjusted to its initial value to prepare the aircraft for future monitoring processes, while the UAV's position is kept constant. The reference trajectories are therefore analogous to state 2. When the adjustment is completed, the planner shifts to state 11.
  • STATE 11—Landing: Finally, the UAV is commanded to land at its base, $(x_b, y_b, z_b)$. The aircraft descends vertically, at a constant speed defined by the parameter $v_z$, until reaching the base level, and the planner transits to the initial state (0) to be ready for the next monitoring process.
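The following Python sketch mirrors the transition structure of the twelve states described above. It is illustrative only: the actual planner is implemented in MATLAB/Simulink® and also generates the reference signals of Tables 1 and 2, which are omitted here; the condition flags stand for the triggers described in the text.

```python
from enum import IntEnum

class State(IntEnum):
    HOME = 0; TAKEOFF = 1; ORIENT_TO_PERSON = 2; APPROACH = 3
    WAIT_SAFETY = 4; ORBIT = 5; DATA_CAPTURE = 6; MOTION_TO_SAFETY = 7
    ORIENT_TO_BASE = 8; RETURN_TO_BASE = 9; YAW_ADJUST = 10; LANDING = 11

# (current state) -> (condition name that triggers the transition, next state)
TRANSITIONS = {
    State.HOME:             ("start_cmd",              State.TAKEOFF),
    State.TAKEOFF:          ("at_monitoring_altitude", State.ORIENT_TO_PERSON),
    State.ORIENT_TO_PERSON: ("camera_on_person",       State.APPROACH),
    State.APPROACH:         ("at_safety_position",     State.WAIT_SAFETY),
    State.WAIT_SAFETY:      ("timer_elapsed",          State.ORBIT),
    State.ORBIT:            ("at_photo_position",      State.DATA_CAPTURE),
    State.DATA_CAPTURE:     ("timer_elapsed",          State.MOTION_TO_SAFETY),
    State.MOTION_TO_SAFETY: ("at_safety_position",     State.ORIENT_TO_BASE),
    State.ORIENT_TO_BASE:   ("camera_on_base",         State.RETURN_TO_BASE),
    State.RETURN_TO_BASE:   ("over_base",              State.YAW_ADJUST),
    State.YAW_ADJUST:       ("yaw_adjusted",           State.LANDING),
    State.LANDING:          ("landed",                 State.HOME),
}

def next_state(state: State, **flags: bool) -> State:
    """Advance the planner one step: stay in `state` until its trigger flag is true."""
    condition, target = TRANSITIONS[state]
    return target if flags.get(condition, False) else state

# Example: the UAV reaches the monitoring altitude while taking off.
assert next_state(State.TAKEOFF, at_monitoring_altitude=True) == State.ORIENT_TO_PERSON
```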

3.5. Simulation Results

The results of the simulation tests carried out using the settings shown in Table 3 and Table 4 are summarised in Figure 7.
Each subfigure (a–g), corresponding to one of the tests performed, represents in 3D the reference path generated by the planner (in blue) and the actual trajectory performed by the UAV (in red) during the monitoring process of a person whose head position and orientation (in pink) are known. In addition, the orientation of the UAV's camera is represented by means of arrows whose colours change over time (the colour bar on the right side represents the time evolution during the complete 250 s simulation). The results show that the trajectory planner correctly generates the references for the position and yaw angle of the UAV in each case. The accuracy of the GPI controller [32], which calculates the (control) inputs in order to reduce the tracking error of the trajectories to zero, is also confirmed. This ensures that the quadrotor model performs the monitoring flight accurately and according to the planner's references.

4. Experimental Setup

The system described in this paper relies on the software and hardware platform described in [16]. It is a distributed architecture with two main modules: the UAV simulator, in charge of reproducing the flight of the UAV considering its dynamics and of generating the trajectories; and the VR Visualiser, in charge of rendering the virtual UAV and its behaviour, as well as the virtual environment in which the UAV flight takes place. These two modules communicate with each other using the MQTT protocol, exchanging the position and orientation of the user as well as of the UAV. The user's information, sent from the VR Visualiser to the UAV simulator, is used to calculate the UAV's trajectory, while the UAV's state, sent from the UAV simulator to the VR Visualiser, is used to update the visual representation of the UAV. Figure 8 depicts this exchange and shows the software tools used in the implementation of the architecture: MATLAB/Simulink® for the UAV simulation, Unity3D to recreate the virtual home environment and Mosquitto as the selected open-source MQTT broker.
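As an illustration of this exchange, the sketch below shows how the simulator side could publish the UAV state and subscribe to the user's pose with the `paho-mqtt` Python client connected to a Mosquitto broker. The topic names, payload format and host are our assumptions for illustration only; the paper's modules are implemented in MATLAB/Simulink® and Unity3D.

```python
# pip install paho-mqtt
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "localhost"        # Mosquitto broker; host/port are deployment-specific
TOPIC_PERSON = "vr/person/pose"  # hypothetical topic: avatar pose from the VR Visualiser
TOPIC_UAV = "sim/uav/state"      # hypothetical topic: UAV state from the simulator

def on_message(client, userdata, msg):
    # The simulator would feed this pose to the trajectory planner (Section 3)
    pose = json.loads(msg.payload)
    print("person pose received:", pose)

client = mqtt.Client()           # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.subscribe(TOPIC_PERSON)
client.loop_start()

# Publish the UAV state so the VR Visualiser can update the 3D model
client.publish(TOPIC_UAV, json.dumps({"x": 0.0, "y": 0.0, "z": 1.5, "yaw": 0.0}))
```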
As mentioned in Section 3, the UAV simulator described in [16] has been extended to include the different options for the relative monitoring altitude, the monitoring velocity and the monitoring radius. The virtual environment in which the action took place was a living room with a sofa in the centre and a TV in front of it, as can be seen on the right side of Figure 8. The avatar was sitting on the sofa and the UAV’s base station was located behind it. This configuration coincides with that described in the results of the simulations presented in Section 3.5. The task that the participants had to perform was simple; they only had to sit on the couch and watch TV. The UAV would then carry out various monitoring processes using the different variables considered. The virtual drone included a positional audio source that generated 3D spatial audio. The selected audio clip corresponded to the one produced by a real UAV during its flight, and was played in a loop during the monitoring process.

4.1. Procedure

Unlike in our previous work [16], and due to the restrictions imposed by the COVID-19 pandemic on the use of head-mounted equipment covering the face of the users, three videos were created showcasing the different alternatives considered for the relative monitoring altitude, the monitoring velocity and the monitoring radius. Thus, the participants did not immerse themselves in the virtual environment but watched videos recorded from the perspective of an avatar sitting on the couch while watching TV inside the designed virtual living room.
Two different studies were carried out: first with descendants of physically impaired elderly people, after which the experiment was repeated with elderly people. The participants received an email with a short description of the study and a link to an online questionnaire including the videos and some questions about them. The questionnaire was online from 1 May to 10 December 2020, and participants took an average of 14 min to complete it.

4.2. Questionnaire

A questionnaire was designed using Microsoft Forms. It was divided into two main parts: a demographic questionnaire and a questionnaire about the participants' preferences regarding the variables used in the study. This second questionnaire was designed specifically for this work, inspired by other approaches [38,39,40]. It was divided into three parts, one for each of the variables under study. For each part, a video was prepared showing each of the options for that variable (e.g., three different flight heights), after which the participants had to answer some questions regarding what they saw in the video and their preferences. The questionnaire was divided into three parts so that the participants could concentrate on one parameter at a time without considering the influence of the others. There were 36 questions in total, divided into preference questions, perceived safety, perceived supervision level, estimated distraction caused by the UAV flight and adequacy. A 5-point Likert scale ranging from strongly disagree to strongly agree was used to measure the responses.

4.3. Participants and Data Collection

For the first experiment, over 100 emails were sent to recruit participants. The potential participants were descendants of physically impaired elderly people who regularly attended a socio-cultural centre for older adults. We believed that the sons and daughters of these elderly people would be able to empathise with the problems of their parents and carry out the proposed experiment with great interest. A total of 37 questionnaires were received. Of the respondents, 13 were female (35%) and 24 male (65%), with a mean age of M = 41.41 (SD = 16.19, Max = 75, Min = 22). Regarding their use of technology, 70% of the participants stated that they are advanced users, 25% basic users, and 5% prefer not to use technology in their daily lives. As 70% of the participants were advanced users of technology, their responses to the questionnaire may have biased the results positively. However, the tendency is for more and more people to become accustomed to this kind of equipment.
The experiment was then repeated, this time with elderly people instead of their relatives. We performed the experiment with participants at the socio-cultural centre for older adults, obtaining data from 23 participants (52% female and 48% male) with a mean age of M = 74.48 (SD = 6.01, Max = 87, Min = 66). This time, 35% of the participants stated that they prefer not to use technology, 39% were basic users and 26% were advanced users.

4.4. Data Analysis

The data gathered through the questionnaires are described using descriptive statistics, including measures of central tendency (mean and median) and dispersion (standard deviation, SD, and interquartile range, IQR), as well as percentages of responses in a given range of answers. On the 5-point Likert scale used, responses in the range 4–5 (agree or strongly agree) have been considered positive, 3 has been considered neutral, and responses in the range 1–2 (disagree or strongly disagree) have been considered negative. The distribution of responses to the questionnaire is plotted as stacked horizontal histograms, and pie charts are used to show the users' preferences among the different alternatives for relative monitoring altitude, monitoring velocity and monitoring radius. Since the collected data did not meet the requirements for normality (according to the Shapiro–Wilk test), the non-parametric Kruskal–Wallis test was used for null hypothesis testing in the comparison of the different alternatives at a 95% significance level. In the cases in which the test found differences, they were studied using Dunn's post-hoc test together with a Bonferroni correction for pair-wise comparisons. IBM SPSS Statistics (version 24) and Microsoft Excel were used to conduct the statistical analyses.
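This analysis pipeline can also be reproduced with open-source tools. The sketch below uses SciPy for the Shapiro–Wilk and Kruskal–Wallis tests and the `scikit-posthocs` package for Dunn's test with Bonferroni correction; the response data shown are hypothetical placeholders, and the paper's analyses were actually run in IBM SPSS Statistics and Microsoft Excel.

```python
# pip install scipy scikit-posthocs
from scipy import stats
import scikit_posthocs as sp

# Hypothetical 5-point Likert responses for three alternatives of one variable
low    = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]
medium = [2, 2, 3, 1, 2, 2, 3, 2, 1, 2]
high   = [4, 5, 4, 4, 5, 3, 4, 4, 5, 4]

# Normality check per group (Shapiro-Wilk), as in the paper
for name, group in (("low", low), ("medium", medium), ("high", high)):
    w, p = stats.shapiro(group)
    print(f"Shapiro-Wilk {name}: W = {w:.3f}, p = {p:.4f}")

# Non-parametric omnibus comparison of the three alternatives
h, p = stats.kruskal(low, medium, high)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# If a difference is found, pair-wise Dunn's post-hoc test with Bonferroni correction
if p < 0.05:
    print(sp.posthoc_dunn([low, medium, high], p_adjust="bonferroni"))
```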

5. Results

This section presents the results of both experiments. The results of the initial experiment with younger adults are provided in Section 5.1, while the results of the second one with older adults can be found in Section 5.2.

5.1. Results of the Experiment with Younger Adults (Relatives)

The results of the first experiment are summarised in Table 5 and in Figure 9 and Figure 10. Figure 10 uses pie charts to plot the preferences of the users regarding the variables under study. The preferred relative monitoring altitude is the highest one, selected by 78% of the participants (14% for low and 8% for medium). The difference in preference for the monitoring velocity is smaller: 46% of the users selected high, 38% medium and 16% low. Finally, 54% of the participants selected the circular trajectory as their preferred one, followed by the elliptical trajectory farther from their face (30%) and the one closer to their face (16%). Thus, the preferences are high altitude, high velocity and a circular trajectory.

5.1.1. Results for Relative Monitoring Altitude with Younger Adults

The results for the safety, supervision and distraction questions are described in Table 5 and depicted in Figure 9. The results for safety show that users feel safer with the highest altitude, with a median value of 4 (IQR = 1.00) and 81% of the responses in the range 4–5 (agree or strongly agree). This contrasts with the medium (median 2) and low (median 3) altitudes. It is worth noting that the medium altitude had 51% of the responses in the range 1–2 (disagree or strongly disagree). This result can be observed in Figure 9, where A.Saf3 shows a much larger blue proportion. The Kruskal–Wallis test shows a significant difference in the results obtained for the different altitudes (χ²(2) = 31.20, p < 0.001); a post-hoc pairwise comparison revealed that this difference was between the highest altitude and the medium (p < 0.001) and the low (p = 0.001) altitudes, with the highest altitude perceived as safer.
Regarding supervision, the median value obtained for the three different altitudes is 3, differing only in the IQR: IQR = 2.00 for the high altitude and IQR = 1.00 for the medium and low altitudes. A deeper look at the results shows that the medium and high altitudes had larger percentages of positive answers (49% for medium and 35% for high) than the low altitude (24%), while all three had similar percentages of negative answers (27%, 22% and 30%, respectively, for high, medium and low). The Kruskal–Wallis test did not show any difference in the results for the three options (χ²(2) = 4.29, p = 0.117).
Finally, for distraction, the median values were 2 (IQR = 1.00), 5 (IQR = 1.00) and 3 (IQR = 2.00) for high, medium and low altitudes, respectively. The percentages of positive answers for each altitude (high, medium, low) were 19%, 89% and 68%, while the negative ones were 54%, 5% and 22%. This difference can be noticed with a quick look at the A.Dis section of Figure 9. The Kruskal–Wallis test found a difference in the values gathered for each altitude (χ²(2) = 41.92, p < 0.001), this difference being between the high altitude and the other two (p < 0.001 for both), with distraction lower for the high altitude.

5.1.2. Results for Monitoring Velocity with Younger Adults

The medians of the results for safety for the different monitoring velocities were 4 (IQR = 2.00), 4 (IQR = 1.00) and 4 (IQR = 1.00) for high, medium and low, respectively. The percentages of positive answers were 57%, 84% and 84%, while the percentages of negative answers were 27%, 3% and 5%. There was a significant difference in the data (χ²(2) = 9.97, p = 0.007), and a post-hoc test showed that this difference was between high and low velocity (p = 0.009), indicating that the users considered the low velocity safer.
The medians for the supervision data gathered for high, medium and low velocity were 3 (IQR = 0.00), 3 (IQR = 1.00) and 1 (IQR = 1.00), respectively. The percentages of positive answers were 16%, 35% and 57%, and the percentages of negative answers were 22%, 11% and 5%. The differences in the data are also apparent in Figure 9.
Regarding distraction, the median values were 4 (IQR = 2.00) for high velocity, 3 (IQR = 1.00) for medium velocity and 4 (IQR = 3.00) for low velocity. The percentages of positive answers were 51%, 38% and 54%, and the percentages of negative answers were 27%, 24% and 27%, respectively. In this case, the data look similar in Figure 9, and there were no statistically significant differences between the responses of the three groups (χ²(2) = 1.50, p = 0.473).
The median values obtained for the adequacy questions for velocity were 3 (IQR = 2.00) for high, 4 (IQR = 1.00) for medium and 3 (IQR = 2.00) for low velocity. The percentages of positive answers were 49%, 54% and 43%, while the negative ones were 32%, 11% and 38%, respectively. Once more, there were no statistically significant differences between the responses of the three groups (χ²(2) = 2.25, p = 0.324).

5.1.3. Results for Monitoring Radius with Younger Adults

The last group of questions concerned the monitoring radius followed by the UAV during the monitoring process. The median values obtained for safety were 4 (IQR = 1.00) for the elliptical trajectory farther from the user's face, 4 (IQR = 1.00) for the circular trajectory and 2 (IQR = 2.00) for the elliptical trajectory closer to the user's face. The percentages of positive answers were 70%, 73% and 27%, while the negative ones were 14%, 5% and 51%, respectively. The difference is noticeable this time with a quick look at the T.Saf block in Figure 9, and it was confirmed by the Kruskal–Wallis test (χ²(2) = 23.86, p < 0.001). The post-hoc test revealed that the difference was between the elliptical trajectory closer to the user's face and both the circular trajectory (p < 0.001) and the elliptical trajectory farther from the user's face (p < 0.001), the perceived safety being lower for the trajectory closer to the user's face.
The medians for supervision were 3 (IQR = 1.00) for the elliptical trajectory farther from the user's face, 3 (IQR = 1.00) for the circular trajectory and 3 (IQR = 1.00) for the elliptical trajectory closer to the user's face. The percentages of positive answers were 19%, 14% and 46%, while the negative ones were 30%, 27% and 19%, respectively. There was a significant difference (χ²(2) = 8.78, p < 0.01) between the elliptical trajectory closer to the user's face and both the circular trajectory (p = 0.028) and the elliptical trajectory farther from the user's face (p = 0.034). According to the post-hoc test, the perceived level of supervision is higher for the trajectory closer to the user's face.
Regarding distraction, the median values for the monitoring radius were 3 (IQR = 2.00) for the elliptical trajectory farther from the user's face, 3 (IQR = 2.00) for the circular trajectory and 4 (IQR = 1.00) for the elliptical trajectory closer to the user's face. The percentages of positive answers were 35%, 30% and 76%, while the negative ones were 32%, 43% and 14%, respectively. There was a significant difference in the responses (χ²(2) = 20.15, p < 0.001), the level of distraction being higher for the trajectory closer to the user's face compared to the circular trajectory (p = 0.001) and the elliptical trajectory farther from the user's face (p < 0.001).
Finally, the median values regarding adequacy were 4 (IQR = 1.00), 4 (IQR = 1.00) and 2 (IQR = 2.00), respectively. The percentages of positive answers were 54%, 73% and 27%, while the negative ones were 19%, 3% and 51%, respectively. The Kruskal–Wallis test found a difference in the values gathered for each monitoring radius (χ²(2) = 25.98, p < 0.001), this difference being between the elliptical trajectory closer to the user's face and the other two (p < 0.001 for both the circular trajectory and the elliptical trajectory farther from the user's face), with adequacy lower for the trajectory closer to the user's face.

5.2. Results of the Experiment with Older Adults (with Physical Impairments)

This section presents the results of the second experiment, summarised in Table 6, Figure 11 and Figure 12. The preferences regarding the three variables under study are similar to those of the first study with younger adults. Regarding the relative monitoring altitude, 78% preferred the highest altitude, 13% the medium and 9% the lowest. Again, the difference in preferences is smaller for the monitoring velocity: 48%, 39% and 13% for high, medium and low velocity, respectively. Regarding the monitoring trajectory, the preferences of the participants were 48% for the circular trajectory, 39% for the elliptical trajectory farther from their face and 13% for the one closer to their face.

5.2.1. Results for Relative Monitoring Altitude with Older Adults

Regarding safety for the relative monitoring altitude, the results show that the participants did not feel positive about any of the options: the percentages of positive answers (4 or 5) are 22%, 17% and 30% for low, medium and high altitudes, while the percentages of negative answers (1 or 2) are 57%, 52% and 30%, respectively. The participants showed a neutral attitude only towards the high altitude (39% of the answers); for the other two alternatives, the trend in the responses is towards a neutral or negative attitude. The median values support this, being 2 for the low and medium altitudes (IQR = 1.00 and 2.00, respectively) and 3 (IQR = 2.00) for the high altitude. Despite this, the Kruskal–Wallis test failed to find a significant difference in the results obtained for the different altitudes (χ²(2) = 3.50, p = 0.174).
For supervision, the median values for the three different altitudes are similar to those obtained for the younger adults (3 with IQR = 2.00 for low, 4 with IQR = 1.00 for medium and 3 with IQR = 1.50 for high altitude). The Kruskal–Wallis test did not show any difference in the results for the three options (χ²(2) = 5.71, p = 0.058). Nevertheless, a closer look at the data shows that the percentages of positive and negative answers are 43% and 17% for the low altitude, 52% and 17% for the medium altitude, and 26% and 48% for the high altitude. This is aligned with the results obtained for altitude preference, as the older adults selected high as the best altitude.
The median values obtained for distraction were 2 (IQR = 1.00), 5 (IQR = 1.00) and 4 (IQR = 2.50) for high, medium and low altitudes, respectively, which are similar to the results obtained for the younger adults. The percentages of positive and negative answers for each altitude (high, medium, low) are also similar to those obtained for the younger adults: 13%, 83% and 65% for positive answers and 57%, 9% and 26% for negative answers. There is a statistically significant difference in the values gathered for each altitude according to the Kruskal–Wallis test (χ²(2) = 24.55, p < 0.001), with differences between the high altitude and the other two (p = 0.001 for the difference with the low altitude and p < 0.001 for the difference with the medium altitude), distraction being lower for the high altitude.

5.2.2. Results for Monitoring Velocity with Older Adults

The different monitoring velocities had median values for safety of 2 (IQR = 2.00), 4 (IQR = 1.50) and 4 (IQR = 1.00) for high, medium and low, respectively. The percentages of positive responses were 35%, 65% and 65%, while 52%, 9% and 17% were negative answers. A statistically significant difference was found in the data (χ²(2) = 9.76, p = 0.008), between high and low velocity (p = 0.036) and between high and medium velocity (p = 0.012). Thus, the older adults considered the low and medium velocities safer.
The medians for the supervision data gathered for high, medium and low velocity were 4 (IQR = 2.00), 3 (IQR = 1.50) and 3 (IQR = 2.00), respectively. The percentages of positive answers were 52%, 48% and 30%, and those of negative answers 35%, 26% and 43%. Again, and similarly to the data obtained for the younger adults, the apparent differences in the data are not significant according to the Kruskal–Wallis test (χ²(2) = 2.70, p = 0.259).
The case of distraction was different. The median values were 3 (IQR = 0.50) for high velocity, 4 (IQR = 1.00) for medium velocity and 4 (IQR = 1.50) for low velocity. The percentages of positive answers were 13%, 52% and 61%, and the percentages of negative answers were 26%, 0% and 4%, respectively. There was a significant difference between them (χ²(2) = 15.18, p = 0.001), between the low and high velocities (p = 0.001) and between the medium and high velocities (p = 0.009), the level of perceived distraction being lower for the high velocity.
The last variable measured for monitoring velocity was adequacy, and the median values obtained were 4 (IQR = 1.00) for high, 3 (IQR = 1.00) for medium and 4 (IQR = 2.00) for low velocity. The percentages of positive answers were 52%, 48% and 61%, while the negative ones were 22%, 13% and 22%, respectively. No significant differences could be found between the responses of the three groups (χ²(2) = 1.35, p = 0.510).

5.2.3. Results for Monitoring Radius with Older Adults

Regarding safety, the median values for safety were 4 ( I Q R = 0.50 ) for the elliptical trajectory farther from the user’s face, 4 ( I Q R = 1.00 ) for the circular trajectory and 3 ( I Q R = 2.00 ) for the elliptical trajectory closer to the user’s face. The percentages of positive answers were 74%, 65% and 30%, while the negative ones were 13%, 9% and 48%, respectively. The difference was confirmed by the Kruskal–Wallis test ( χ ( 2 ) 2 = 11.99, p = 0.002). This difference was between the elliptical trajectory closer to the user’s face and both the circular (p = 0.011) and the elliptical trajectory farther from the user’s face (p = 0.006), being the perceived safety lower for the trajectory closer to the user’s face.
The medians for supervision were 4 ( I Q R = 1.00 ) for the elliptical trajectory farther from the user’s face, 4 ( I Q R = 0.50 ) for the circular trajectory and 2 ( I Q R = 1.50 ) for the elliptical trajectory closer to the user’s face. The percentages of positive answers were 65%, 74% and 26%, while they were 9%, 4% and 52% for the negative answers, respectively. The Kruskal–Wallis test found a significant difference ( χ ( 2 ) 2 = 14.27, p = 0.001). The post-hoc test revealed that this difference was between the elliptical trajectory closer to the user’s face and both the circular trajectory (p = 0.002) and the elliptical trajectory farther from the user’s face (p = 0.006). In both cases, the perceived level of supervision is higher for the trajectory closer to the user’s face.
No significant difference could be found for distraction ($\chi^2(2) = 1.93$, p = 0.381). The median values were similar for the different monitoring radii: 3 (IQR = 1.00) for the elliptical trajectory farther from the user’s face, 3 (IQR = 0.50) for the circular trajectory and 3 (IQR = 1.50) for the elliptical trajectory closer to the user’s face. The percentages of answers were also similar: 17%, 13% and 35% for the positive responses and 30%, 26% and 26% for the negative ones, respectively.
Adequacy was the last variable, and the median values obtained were 4 (IQR = 1.00), 3 (IQR = 2.00) and 3 (IQR = 2.00) for the elliptical trajectory farther from the user’s face, the circular trajectory and the elliptical trajectory closer to the user’s face, respectively. The percentages of positive answers were 61%, 30% and 39%, while those of negative answers were 9%, 43% and 48%, respectively. This time, a significant difference was found between the monitoring radii ($\chi^2(2) = 10.17$, p = 0.006). The post-hoc test showed that this difference was between the elliptical trajectory farther from the user’s face and the other two alternatives (p = 0.019 for the circular trajectory and p = 0.016 for the elliptical trajectory closer to the user’s face). In both cases, the adequacy is higher for the trajectory farther from the user’s face.
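Throughout this section, each alternative is summarised by its median, its interquartile range and the shares of positive and negative answers. A small helper showing how such summaries can be derived from raw 5-point Likert responses is sketched below; the cut-offs (4 and 5 counted as positive, 3 as neutral, 1 and 2 as negative) are our assumption, since the paper does not state them explicitly.

```python
import numpy as np

def summarise_likert(responses):
    """Median, IQR and answer shares for 5-point Likert responses.

    Cut-offs are assumed: 4-5 positive, 3 neutral, 1-2 negative.
    """
    r = np.asarray(responses)
    q1, med, q3 = np.percentile(r, [25, 50, 75])
    return {
        "median": med,
        "IQR": q3 - q1,
        "% positive": round(100 * np.mean(r >= 4)),
        "% neutral":  round(100 * np.mean(r == 3)),
        "% negative": round(100 * np.mean(r <= 2)),
    }

print(summarise_likert([4, 4, 5, 3, 2, 4, 1, 4, 3, 5]))
```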

6. Discussion

This paper belongs to a line of research on socially assistive UAVs with potential for dependent people, including ageing adults. It has presented an assistive UAV whose mission is to perform a monitoring flight from time to time to determine a person’s condition and whether assistance is needed. This monitoring flight basically consists of a series of manoeuvres: taking off, getting close to the person, flying around the person to obtain facial images and then returning to base. Moreover, a survey was conducted to evaluate the users’ sense of safety and comfort in a VR home environment.
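As detailed in Tables 1 and 2 and Figure 6, this manoeuvre sequence is implemented as a twelve-state machine. A minimal Python sketch of its skeleton is given below; the state names are paraphrased from Figure 6, and the sequential transition rule reflects the completion conditions listed in the tables.

```python
from enum import IntEnum

class PlannerState(IntEnum):
    """Trajectory planner states (cf. Figure 6 and Tables 1 and 2)."""
    HOME = 0
    TAKEOFF = 1
    ORIENT_TO_PERSON = 2
    APPROXIMATION = 3
    WAIT_AT_SAFETY_POSITION = 4
    ORBIT_AROUND_PERSON = 5
    DATA_CAPTURE = 6
    MOTION_TO_SAFETY_POSITION = 7
    ORIENT_TO_BASE = 8
    RETURN_TO_BASE = 9
    YAW_ADJUSTMENT = 10
    LANDING = 11

def next_state(state: PlannerState) -> PlannerState:
    """Advance once the current state's completion condition [C] holds;
    after LANDING the planner returns to HOME to await the next
    monitoring instruction."""
    return PlannerState((state + 1) % len(PlannerState))
```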
The 60 participants (37 younger adults and 23 older adults) evaluated several parameters of the UAV’s trajectory during the monitoring process. The main aim of this evaluation was to study the impact of different alternatives for three key parameters of the monitoring process of an assistive UAV: the relative monitoring altitude, the monitoring velocity and the monitoring radius. For each parameter, three alternatives were implemented in our simulation platform and tested using a VR environment. The user preferences were consistent with the answers they provided to the questions regarding perceived safety, supervision, distraction and adequacy. Table 7 summarises the main results presented in the previous section, which are used in this discussion.
High altitude was selected by most participants as the most appropriate altitude for the UAV monitoring process. At the same time, it was perceived as the safest one, especially by younger adults, followed by the low altitude. A possible explanation is that the participants felt that a higher altitude would avoid collisions with the UAV, which could be dangerous especially at the user’s head level (medium altitude). Even though there is no statistically significant difference in the perceived safety for older adults, their results (means and percentages) are lower than those of younger adults. This reduction is more noticeable for the high altitude (81% of positive answers for younger adults against 30% for older adults), for which most of the older adults’ responses were neutral (40%). Hence, older adults may not feel completely safe with this monitoring altitude, but, more importantly, they do not feel in danger, so it is very likely that they will tolerate and accept it. Regarding the perceived supervision level, no significant differences could be found, although the medium altitude was perceived slightly more negatively than the other two. For distraction, the high altitude was perceived to cause the lowest level of distraction in a statistically significant manner, probably because it was far from the participants’ line of sight. Therefore, it seems reasonable to think that the high altitude is the best one for monitoring processes.
High velocity was considered the most appropriate for monitoring, followed by medium velocity. However, younger adults perceived the high velocity as less safe than the low velocity, while it was the low velocity that conveyed a higher level of perceived supervision. That was not exactly the case for distraction, since no statistically significant difference was found in the data for younger adults; notice, however, that for older adults the low velocity distracts more than the high one. Regarding the adequacy of the monitoring velocities, the data gathered were very similar for all of them, so no statistically significant difference could be found. In this case, either the high or the medium velocity could be selected when designing the monitoring process of a UAV.
The circular trajectory was selected by most of the participants as their preferred one, while the elliptical trajectory passing closer to their face was the one they liked the least. This is in line with the results for safety and supervision, as the trajectory closer to the face was perceived as less safe and as involving too much supervision. The same applies to distraction for younger adults, but no significant difference was detected for older adults. Moreover, there is no statistically significant difference between the circular trajectory and the elliptical trajectory passing farther from the user’s face, only a slightly higher perceived supervision for the circular one, which could be related to the fact that it passes closer to the user’s face. Unlike what was observed for the other two parameters when comparing the means and percentages of younger and older adults, perceived safety is not reduced here. Thus, according to the data gathered, the circular trajectory would be preferred over the elliptical ones, but the elliptical trajectory passing farther from the user’s face could also be a good choice in terms of safety, supervision level and distraction.
The limitations of this evaluation are related to the difficulties of performing face-to-face experiments using VR facilities. Due to the COVID-19 pandemic, a series of videos was shown to the participants instead. This validation through videos, rather than in an immersive VR environment, may have distorted some of the results. The authors therefore intend to repeat the experiments in the near future. In addition, the sample size must be increased to reinforce the conclusions derived from the study.
Nonetheless, the authors believe that using VR as an alternative to physical prototyping saves time, provides high flexibility and enables iterative testing in the design of socially assistive technologies incorporating different aspects of user trust in automation. Our future work also aims to make progress on the safety issues associated with flying a UAV at home, with the consequent risk of domestic accidents. Since the instructions that come with current commercial UAVs advise against flying close to people, it is our intention to work with experts in miniaturisation in order to guarantee the person’s physical integrity.

Author Contributions

Conceptualization, L.M.B., A.S.G. and A.F.-C.; Formal analysis, R.M.; Funding acquisition, A.F.-C.; Investigation, L.M.B., A.S.G., R.M. and J.L.d.l.V.; Methodology, R.M. and J.L.d.l.V.; Project administration, A.F.-C.; Software, L.M.B. and A.S.G.; Supervision, R.M. and A.F.-C.; Validation, L.M.B., A.S.G., J.L.d.l.V. and A.F.-C.; Writing—original draft, L.M.B., A.S.G. and F.L.d.l.R.; Writing—review & editing, L.M.B., A.S.G., R.M., J.L.d.l.V. and A.F.-C. All authors have read and agreed to the published version of the manuscript.

Funding

The work leading to this paper has received funding from the iRel40 and VALU3S projects. iRel40 and VALU3S are European co-funded innovation projects granted by the ECSEL Joint Undertaking (JU) under grant agreements No. 876659 and No. 876852, respectively. The funding of the projects comes from the Horizon 2020 research programme and the participating countries. National funding is provided by Germany, including the Free States of Saxony and Thuringia, Austria, Belgium, Finland, France, Italy, the Netherlands, Slovakia, Spain, Sweden and Turkey. This work has also received national funding from Spain’s Agencia Estatal de Investigación (AEI) under grants PCI2020-112240 and PCI2020-112001, respectively. In addition, this work has received funding from the AEI/European Social Fund (ESF) (grants No. EQC2019-006063-P and PID2020-115220RB-C21), Junta de Comunidades de Castilla-La Mancha/ESF (grant No. SBPLY/19/180501/000270), the Ramón y Cajal Programme (AEI/ESF grant No. RYC-2017-22836) and CIBERSAM (Biomedical Research Networking Center in Mental Health).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and the ethical standards of the responsible committee on human experimentation of Complejo Hospitalario Universitario de Albacete.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest. The funding sources had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
GPI: Generalised Proportional Integral
HRI: Human–Robot Interaction
IQR: Interquartile Range
MQTT: Message Queue Telemetry Transport
SD: Standard Deviation
UAV: Unmanned Aerial Vehicle
VR: Virtual Reality

References

1. Nocentini, O.; Fiorini, L.; Acerbi, G.; Sorrentino, A.; Mancioppi, G.; Cavallo, F. A Survey of Behavioral Models for Social Robots. Robotics 2019, 8, 54.
2. Wojciechowska, A.; Frey, J.; Mandelblum, E.; Amichai-Hamburger, Y.; Cauchard, J.R. Designing Drones: Factors and Characteristics Influencing the Perception of Flying Robots. In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies; Association for Computing Machinery: New York, NY, USA, 2019; Volume 3, p. 111.
3. Martín Rico, F.; Rodríguez-Lera, F.; Clavero, J.; Guerrero-Higueras, A.; Matellán Olivera, V. An Acceptance Test for Assistive Robots. Sensors 2020, 20, 3912.
4. Cavallo, F.; Esposito, R.; Limosani, R.; Manzi, A.; Bevilacqua, R.; Felici, E.; Di Nuovo, A.; Cangelosi, A.; Lattanzio, F.; Dario, P. Robotic Services Acceptance in Smart Environments With Older Adults: User Satisfaction and Acceptability Study. J. Med. Internet Res. 2018, 20, e264.
5. Garcia-Salguero, M.; Gonzalez-Jimenez, J.; Moreno, F.A. Human 3D Pose Estimation with a Tilting Camera for Social Mobile Robot Interaction. Sensors 2019, 19, 4943.
6. Lewis, M.; Sycara, K.; Walker, P. The Role of Trust in Human-Robot Interaction. In Foundations of Trusted Autonomy; Springer: Cham, Switzerland, 2018; pp. 135–159.
7. McMurray, J.; Strudwick, G.; Forchuk, C.; Morse, A.; Lachance, J.; Baskaran, A.; Allison, L.; Booth, R. The Importance of Trust in the Adoption and Use of Intelligent Assistive Technology by Older Adults to Support Aging in Place: Scoping Review Protocol. JMIR Res. Protoc. 2017, 6, e218.
8. Yusif, S.; Soar, J.; Hafeez-Baig, A. Older people, assistive technologies, and the barriers to adoption: A systematic review. Int. J. Med. Inform. 2016, 94, 112–116.
9. Okamura, K.; Yamada, S. Adaptive trust calibration for human-AI collaboration. PLoS ONE 2020, 15, e0229132.
10. Hoffman, R.R.; Johnson, M.; Bradshaw, J.M.; Underbrink, A. Trust in Automation. IEEE Intell. Syst. 2013, 28, 84–88.
11. Langer, A.; Feingold-Polak, R.; Mueller, O.; Kellmeyer, P.; Levy-Tzedek, S. Trust in socially assistive robots: Considerations for use in rehabilitation. Neurosci. Biobehav. Rev. 2019, 104, 231–239.
12. Song, Y.; Luximon, Y. Trust in AI Agent: A Systematic Review of Facial Anthropomorphic Trustworthiness for Social Robot Design. Sensors 2020, 20, 5087.
13. Gompei, T.; Umemuro, H. Factors and Development of Cognitive and Affective Trust on Social Robots. In Social Robotics; Ge, S.S., Cabibihan, J.J., Salichs, M.A., Broadbent, E., He, H., Wagner, A.R., Castro-González, Á., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 45–54.
14. de Graaf, M.M.; Allouch, S.B.; Klamer, T. Sharing a life with Harvey: Exploring the acceptance of and relationship-building with a social robot. Comput. Hum. Behav. 2015, 43, 1–14.
15. Mcknight, D.H.; Carter, M.; Thatcher, J.B.; Clay, P.F. Trust in a Specific Technology: An Investigation of Its Components and Measures. ACM Trans. Manag. Inf. Syst. 2011, 2, 12.
16. Belmonte, L.; Garcia, A.S.; Segura, E.; Novais, P.J.; Morales, R.; Fernandez-Caballero, A. Virtual Reality Simulation of a Quadrotor to Monitor Dependent People at Home. IEEE Trans. Emerg. Top. Comput. 2020.
17. Sadka, O.; Giron, J.; Friedman, D.A.; Zuckerman, O.; Erel, H. Virtual-reality as a Simulation Tool for Non-humanoid Social Robots. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–9.
18. Templin, T.; Popielarczyk, D. The Use of Low-Cost Unmanned Aerial Vehicles in the Process of Building Models for Cultural Tourism, 3D Web and Augmented/Mixed Reality Applications. Sensors 2020, 20, 5457.
19. Górriz, J.M.; Ramírez, J.; Ortíz, A.; Martínez-Murcia, F.J.; Segovia, F.; Suckling, J.; Leming, M.; Zhang, Y.D.; Álvarez-Sánchez, J.R.; Bologna, G.; et al. Artificial intelligence within the interplay between natural and artificial computation: Advances in data science, trends and applications. Neurocomputing 2020, 410, 237–270.
20. Martinez-Gomez, J.; Fernández-Caballero, A.; Garcia-Varea, I.; Rodriguez, L.; Romero-Gonzalez, C. A taxonomy of vision systems for ground mobile robots. Int. J. Adv. Robot. Syst. 2014, 11, 111.
21. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A survey of socially interactive robots. Robot. Auton. Syst. 2003, 42, 143–166.
22. Marín-Morales, J.; Llinares, C.; Guixeres, J.; Alcañiz, M. Emotion Recognition in Immersive Virtual Reality: From Statistics to Affective Computing. Sensors 2020, 20, 5163.
23. Fernández-Caballero, A.; López, M.T.; Saiz-Valverde, S. Dynamic stereoscopic selective visual attention (DSSVA): Integrating motion and shape with depth in video segmentation. Expert Syst. Appl. 2008, 34, 1394–1402.
24. Fernández, M.A.; Fernández-Caballero, A.; López, M.T.; Mira, J. Length–speed ratio (LSR) as a characteristic for moving elements real-time classification. Real-Time Imaging 2003, 9, 49–59.
25. Todd, C.; Watfa, M.; Mouden, Y.E.; Sahir, S.; Ali, A.; Niavarani, A.; Lutfi, A.; Copiaco, A.; Agarwal, V.; Afsari, K.; et al. A proposed UAV for indoor patient care. Technol. Health Care 2015, 1–8.
26. Sokullu, R.; Balcı, A.; Demir, E. The role of drones in ambient assisted living systems for the elderly. In Enhanced Living Environments: Algorithms, Architectures, Platforms, and Systems; Ganchev, I., Garcia, N.M., Dobre, C., Mavromoustakis, C.X., Goleva, R., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 295–321.
27. He, M.; He, J.; Scherer, S. Model-based real-time robust controller for a small helicopter. Mech. Syst. Signal Process. 2021, 146, 107022.
28. Huong, D.C.; Huynh, V.T.; Trinh, H. Dynamic Event-Triggered State Observers for a Class of Nonlinear Systems with Time Delays and Disturbances. IEEE Trans. Circuits Syst. II Express Briefs 2020, 67, 3457–3461.
29. Belmonte, L.M.; Morales, R.; Fernández-Caballero, A.; Somolinos, J.A. A tandem active disturbance rejection control for a laboratory helicopter with variable-speed rotors. IEEE Trans. Ind. Electron. 2016, 63, 6395–6406.
30. Khosiawan, Y.; Nielsen, I. A system of UAV application in indoor environment. Prod. Manuf. Res. 2016, 4, 2–22.
31. de Miguel Molina, M.; Campos, V.S.; Ángeles Carabal Montagud, M.; de Miguel Molina, B. Ethics for civil indoor drones: A qualitative analysis. Int. J. Micro Air Veh. 2018, 10, 340–351.
32. Fernández-Caballero, A.; Belmonte, L.M.; Morales, R.; Somolinos, J.A. Generalized proportional integral control for an unmanned quadrotor system. Int. J. Adv. Robot. Syst. 2015, 12, 85.
33. Belmonte, L.M.; Morales, R.; García, A.S.; Segura, E.; Novais, P.; Fernández-Caballero, A. Trajectory Planning of a Quadrotor to Monitor Dependent People. In Understanding the Brain Function and Emotions; Springer: Cham, Switzerland, 2019; pp. 212–221.
34. Castillo, J.C.; Castro-González, Á.; Alonso-Martín, F.; Fernández-Caballero, A.; Salichs, M.Á. Emotion detection and regulation from personal assistant robot in smart environment. In Personal Assistants: Emerging Computational Technologies; Springer International Publishing: Cham, Switzerland, 2018; pp. 179–195.
35. Castillo, J.C.; Castro-González, Á.; Fernández-Caballero, A.; Latorre, J.M.; Pastor, J.M.; Fernández-Sotos, A.; Salichs, M.A. Software architecture for smart emotion recognition and regulation of the ageing adult. Cogn. Comput. 2016, 8, 357–367.
36. Castillo, J.C.; Fernández-Caballero, A.; Castro-González, Á.; Salichs, M.A.; López, M.T. A Framework for Recognizing and Regulating Emotions in the Elderly. In Ambient Assisted Living and Daily Activities; Pecchia, L., Chen, L.L., Nugent, C., Bravo, J., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 320–327.
37. Sokolova, M.; Serrano-Cuerda, J.; Castillo, J.; Fernández-Caballero, A. A fuzzy model for human fall detection in infrared video. J. Intell. Fuzzy Syst. 2013, 24, 215–228.
38. Dockx, K.; Alcock, L.; Bekkers, E.; Ginis, P.; Reelick, M.; Pelosin, E.; Lagravinese, G.; Hausdorff, J.M.; Mirelman, A.; Rochester, L.; et al. Fall-prone older people’s attitudes towards the use of virtual reality technology for fall prevention. Gerontology 2017, 63, 590–598.
39. Ninomiya, T.; Fujita, A.; Suzuki, D.; Umemuro, H. Development of the Multi-dimensional Robot Attitude Scale: Constructs of people’s attitudes towards domestic robots. In International Conference on Social Robotics; Springer: Cham, Switzerland, 2015; pp. 482–491.
40. Chiari, L.; van Lummel, R.; Pfeiffer, K.; Lindemann, U.; Zijlstra, W. Deliverable 2.2: Classification of the user’s needs, characteristics and scenarios-update. In Unpublished Report from the EU Project (6th Framework Program, IST Contract No. 045622) Sensing and Action to Support Mobility in Ambient Assisted Living; Department of Health: London, UK, 2009.
Figure 1. 3D representation in isometric perspective of the UAV’s monitoring altitude, $z_m$, which is determined by the person’s height, $z_p$, plus the relative monitoring altitude, $z_r$.
Figure 2. 3D representation in isometric perspective of the trajectory according to the UAV’s monitoring radius: (a) elliptical trajectory closer to the person’s face ($R_x < R_y$); (b) circular trajectory in which the monitoring radius is constant ($R_x = R_y$); (c) elliptical trajectory farther from the person’s face ($R_x > R_y$).
Figure 3. General diagram of the UAV simulator, which receives from the VR visualiser the information concerning the person’s avatar, calculates the reference trajectories used by the controller to guide the UAV in the monitoring process, and returns the aircraft’s position and orientation so that its flight can be represented in the virtual home environment.
Figure 4. Variables of the parametric equation of the ellipse centred at the $XY$ position of the person and rotated according to their orientation.
Figure 5. Determination of the safety position $(x_{sp}, y_{sp}, z_{sp})$, which the UAV approaches before starting the elliptical monitoring lap around the person.
Figure 6. Graphical representation of the trajectory planner’s states: (a) states 0—home, 1—take-off, and 2—orientation towards the person; (b) state 3—approximation; (c) state 4—waiting in safety position; (d) state 5—orbit around the person; (e) state 6—data capture; (f) state 7—motion to safety position; (g) state 8—orientation towards the base; (h) state 9—return to base; (i) state 10—yaw angle adjustment; (j) state 11—landing.
Figure 7. Results of the tests to verify the new ellipsoidal trajectory planner, carried out using the UAV simulator (part of the VR platform, implemented in MATLAB/Simulink®). For each test, the following is represented: (1) the trajectory generated by the planner (in blue) against the actual trajectory performed by the quadrotor model under the action of the GPI controller (in red); (2) the orientation of the UAV’s camera, by means of arrows whose colour changes over time; (3) the way-points: base position (blue circle), safety position (green circle), and person’s position and orientation (pink circle and arrow).
Figure 8. Architecture of the distributed platform.
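As shown in Figure 8, the UAV simulator and the VR visualiser run as separate processes that exchange the avatar’s pose and the aircraft’s pose over MQTT. A minimal sketch of one side of that exchange, using the paho-mqtt Python client, is given below; the broker address and topic names are hypothetical, as the paper does not list them.

```python
import json
import paho.mqtt.client as mqtt

# Hypothetical topic names; the actual ones are not given in the paper.
TOPIC_AVATAR_POSE = "platform/avatar/pose"
TOPIC_UAV_POSE = "platform/uav/pose"

def on_avatar_pose(client, userdata, msg):
    # Person's position and orientation sent by the VR visualiser.
    pose = json.loads(msg.payload)
    print("avatar pose:", pose)

# paho-mqtt 1.x style; version 2.x additionally requires a
# CallbackAPIVersion argument in the constructor.
client = mqtt.Client()
client.on_message = on_avatar_pose
client.connect("localhost", 1883)       # assumes a local MQTT broker
client.subscribe(TOPIC_AVATAR_POSE)
client.loop_start()

# The UAV simulator side would periodically publish the aircraft's
# pose so the visualiser can render the flight.
client.publish(TOPIC_UAV_POSE,
               json.dumps({"x": 0.0, "y": 0.0, "z": 1.5, "psi": 0.0}))
```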
Figure 9. Distribution of responses for each question for the experiment with younger adults.
Figure 10. User preference for each of the variables measured for the experiment with younger adults.
Figure 11. Distribution of responses for each question for older adults.
Figure 12. User preference for each of the variables measured for older adults.
Table 1. Trajectory planner’s states for the monitoring process (Part I—States from 0 to 6). Reference trajectories for the position $(x, y, z)$ and yaw angle $\psi$ of a quadrotor UAV monitoring a person whose position $(x_p, y_p, z_p)$ and orientation $\alpha_p$ are known.

State 0 (home):
$x_0^*(t) = x_b$; $y_0^*(t) = y_b$; $z_0^*(t) = z_b$; $\psi_0^*(t) = \psi(0)$
Parameters: $(x_b, y_b, z_b)$: base position [m]; $\psi(0)$: initial yaw angle (0 by default) [rad]
[C] If an instruction is received → State 1

State 1 (take-off):
$x_1^*(t) = x_{i1}$; $y_1^*(t) = y_{i1}$; $z_1^*(t) = z_{i1} + \frac{t - t_{i1}}{t_{f1} - t_{i1}}\,(z_m - z_{i1})$; $\psi_1^*(t) = \psi_{i1}$
where $t_{f1} = t_{i1} + \frac{|z_m - z_{i1}|}{v_z}$ and $z_m = z_p + z_r$
Parameters: $z_r$: relative monitoring altitude [m]; $v_z$: velocity in the Z-axis [m/s]
[C] If $(x, y, z) = (x_b, y_b, z_m)$ → State 2

State 2 (orientation towards the person):
$x_2^*(t) = x_{i2}$; $y_2^*(t) = y_{i2}$; $z_2^*(t) = z_{i2}$; $\psi_2^*(t) = \psi_{i2} + \frac{t - t_{i2}}{t_{f2} - t_{i2}}\,(\psi_{f2} - \psi_{i2})$
where $t_{f2} = t_{i2} + \frac{|\psi_{f2} - \psi_{i2}|}{\omega_\psi}$; $\psi_{f2} = \alpha - \alpha_{camera}$; $\alpha = \arctan\frac{y_p - y_{i2}}{x_p - x_{i2}}$
Parameters: $\omega_\psi$: angular velocity (yaw) [rad/s]; $\alpha_{camera}$: camera’s angle [rad]
[C] If $\psi = \psi_{f2} = \alpha - \alpha_{camera}$ → State 3

State 3 (approximation):
$x_3^*(t) = x_{i3} + \frac{v_d\,(x_{sp} - x_{i3})}{d}\,(t - t_{i3})$; $y_3^*(t) = y_{i3} + \frac{v_d\,(y_{sp} - y_{i3})}{d}\,(t - t_{i3})$; $z_3^*(t) = z_{i3}$; $\psi_3^*(t) = \psi_{i3}$
where $d = \sqrt{(x_{sp} - x_{i3})^2 + (y_{sp} - y_{i3})^2}$; $(x_{sp}, y_{sp})$: see Section 3.2
Parameters: $R_x$: radius on the X-axis [m]; $R_y$: radius on the Y-axis [m]; $v_d$: diagonal velocity [m/s]
[C] If $(x, y, z) = (x_{sp}, y_{sp}, z_{sp})$ → State 4

State 4 (waiting in safety position):
$x_4^*(t) = x_{i4}$; $y_4^*(t) = y_{i4}$; $z_4^*(t) = z_{i4}$; $\psi_4^*(t) = \psi_{i4}$
Parameters: $t_{s4}$: timer [s]
[C] If $t \geq t_{i4} + t_{s4}$ → State 5

State 5 (orbit around the person):
$x_5^*(t) = x_p + R_x \cos\gamma \cos\alpha_p - R_y \sin\gamma \sin\alpha_p$
$y_5^*(t) = y_p + R_x \cos\gamma \sin\alpha_p + R_y \sin\gamma \cos\alpha_p$
$z_5^*(t) = z_{i5}$; $\psi_5^*(t) = \arctan\frac{y_p - y_5^*(t)}{x_p - x_5^*(t)} - \alpha_{camera}$
where $\gamma = \gamma_{sp} + \omega_m\,(t - t_{i5})$; $\gamma_{sp}$: see Section 3.3; $\omega_m$: see Table 3
Parameters: $R_x$: radius on the X-axis [m]; $R_y$: radius on the Y-axis [m]; $\omega_m$: monitoring angular velocity [rad/s]; $\alpha_{camera}$: camera’s angle [rad]
[C] If $(x, y, z) = (x_{ph}, y_{ph}, z_{ph})$ → State 6

State 6 (data capture):
$x_6^*(t) = x_{i6}$; $y_6^*(t) = y_{i6}$; $z_6^*(t) = z_{i6}$; $\psi_6^*(t) = \psi_{i6}$
Parameters: $t_{s6}$: timer [s]
[C] If $t \geq t_{i6} + t_{s6}$ → State 7

Notation ⇒ $(x_n^*(t), y_n^*(t), z_n^*(t))$: reference position in state $n$; $\psi_n^*(t)$: reference yaw angle in state $n$; $t_{in}$: initial time of state $n$; $t_{fn}$: final time of state $n$; $x_{in} = x(t_{in})$, $y_{in} = y(t_{in})$, $z_{in} = z(t_{in})$ and $\psi_{in} = \psi(t_{in})$: values of the coordinates and yaw angle at the beginning of state $n$ (at instant $t_{in}$).
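States 1, 2, 8, 10 and 11 all use the same linear time-scaling law: the reference ramps from its initial to its final value over a duration fixed by the commanded velocity. A small Python illustration is given below; the value of $v_z$ is taken from Table 4, while the remaining numbers are hypothetical.

```python
def ramp(t, t_i, t_f, u_i, u_f):
    """Linear reference u*(t) = u_i + (t - t_i)/(t_f - t_i) * (u_f - u_i),
    held at u_f once the final time is reached."""
    if t >= t_f:
        return u_f
    return u_i + (t - t_i) / (t_f - t_i) * (u_f - u_i)

# Take-off (state 1): climb from the base altitude z_i1 to z_m at v_z.
v_z = 8.6e-2                       # [m/s], state 1 value from Table 4
z_i1, z_m, t_i1 = 0.0, 1.5, 0.0    # hypothetical altitudes [m], start [s]
t_f1 = t_i1 + abs(z_m - z_i1) / v_z
print(ramp(5.0, t_i1, t_f1, z_i1, z_m))  # reference altitude at t = 5 s
```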
Table 2. Trajectory planner’s states for the monitoring process (Part II—States from 7 to 11). Reference trajectories for the position $(x, y, z)$ and yaw angle $\psi$ of a quadrotor UAV monitoring a person whose position $(x_p, y_p, z_p)$ and orientation $\alpha_p$ are known.

State 7 (motion to safety position):
$x_7^*(t) = x_p + R_x \cos\gamma \cos\alpha_p - R_y \sin\gamma \sin\alpha_p$
$y_7^*(t) = y_p + R_x \cos\gamma \sin\alpha_p + R_y \sin\gamma \cos\alpha_p$
$z_7^*(t) = z_{i7}$; $\psi_7^*(t) = \arctan\frac{y_p - y_7^*(t)}{x_p - x_7^*(t)} - \alpha_{camera}$
where $\gamma = \gamma_{ph} + \omega_m\,(t - t_{i7})$; $\gamma_{ph} = 0$; $\omega_m$: see Table 3
Parameters: $R_x$: radius on the X-axis [m]; $R_y$: radius on the Y-axis [m]; $\omega_m$: monitoring angular velocity [rad/s]; $\alpha_{camera}$: camera’s angle [rad]
[C] If $(x, y, z) = (x_{sp}, y_{sp}, z_{sp})$ → State 8

State 8 (orientation towards the base):
$x_8^*(t) = x_{i8}$; $y_8^*(t) = y_{i8}$; $z_8^*(t) = z_{i8}$; $\psi_8^*(t) = \psi_{i8} + \frac{t - t_{i8}}{t_{f8} - t_{i8}}\,(\psi_{f8} - \psi_{i8})$
where $t_{f8} = t_{i8} + \frac{|\psi_{f8} - \psi_{i8}|}{\omega_\psi}$; $\psi_{f8} = \psi_{i8} - \pi$
Parameters: $\omega_\psi$: angular velocity (yaw) [rad/s]
[C] If $\psi = \psi_{f8} = \psi_{i8} - \pi$ → State 9

State 9 (return to base):
$x_9^*(t) = x_{i9} + \frac{v_d\,(x_b - x_{i9})}{d}\,(t - t_{i9})$; $y_9^*(t) = y_{i9} + \frac{v_d\,(y_b - y_{i9})}{d}\,(t - t_{i9})$; $z_9^*(t) = z_{i9}$; $\psi_9^*(t) = \psi_{i9}$
where $d = \sqrt{(x_b - x_{i9})^2 + (y_b - y_{i9})^2}$
Parameters: $(x_b, y_b, z_b)$: base position [m]; $v_d$: diagonal velocity [m/s]
[C] If $(x, y, z) = (x_b, y_b, z_m)$ → State 10

State 10 (yaw angle adjustment):
$x_{10}^*(t) = x_{i10}$; $y_{10}^*(t) = y_{i10}$; $z_{10}^*(t) = z_{i10}$; $\psi_{10}^*(t) = \psi_{i10} + \frac{t - t_{i10}}{t_{f10} - t_{i10}}\,(\psi(0) - \psi_{i10})$
where $t_{f10} = t_{i10} + \frac{|\psi(0) - \psi_{i10}|}{\omega_\psi}$
Parameters: $\omega_\psi$: angular velocity (yaw) [rad/s]; $\psi(0)$: initial yaw angle (0 by default) [rad]
[C] If $\psi = \psi(0)$ → State 11

State 11 (landing):
$x_{11}^*(t) = x_{i11}$; $y_{11}^*(t) = y_{i11}$; $z_{11}^*(t) = z_{i11} + \frac{t - t_{i11}}{t_{f11} - t_{i11}}\,(z_b - z_{i11})$; $\psi_{11}^*(t) = \psi_{i11}$
where $t_{f11} = t_{i11} + \frac{|z_b - z_{i11}|}{v_z}$
Parameters: $(x_b, y_b, z_b)$: base position [m]; $v_z$: velocity in the Z-axis [m/s]
[C] If $(x, y, z) = (x_b, y_b, z_b)$ → State 0

Notation ⇒ $(x_n^*(t), y_n^*(t), z_n^*(t))$: reference position in state $n$; $\psi_n^*(t)$: reference yaw angle in state $n$; $t_{in}$: initial time of state $n$; $t_{fn}$: final time of state $n$; $x_{in} = x(t_{in})$, $y_{in} = y(t_{in})$, $z_{in} = z(t_{in})$ and $\psi_{in} = \psi(t_{in})$: values of the coordinates and yaw angle at the beginning of state $n$ (at instant $t_{in}$).
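States 5 and 7 share the rotated-ellipse parametrisation above. The sketch below implements that reference generator in Python; numpy’s arctan2 is used for the yaw reference so that the correct quadrant is obtained, a detail the flattened arctan notation of the tables leaves implicit.

```python
import numpy as np

def orbit_reference(t, t_i, gamma_0, person, R_x, R_y, omega_m,
                    alpha_camera, z_ref):
    """Reference position and yaw while orbiting the person (states 5/7).

    person = (x_p, y_p, alpha_p): position [m] and orientation [rad].
    gamma_0 is gamma_sp in state 5 and gamma_ph = 0 in state 7.
    """
    x_p, y_p, alpha_p = person
    gamma = gamma_0 + omega_m * (t - t_i)
    x = x_p + R_x * np.cos(gamma) * np.cos(alpha_p) \
            - R_y * np.sin(gamma) * np.sin(alpha_p)
    y = y_p + R_x * np.cos(gamma) * np.sin(alpha_p) \
            + R_y * np.sin(gamma) * np.cos(alpha_p)
    # The yaw reference keeps the camera pointed at the person.
    psi = np.arctan2(y_p - y, x_p - x) - alpha_camera
    return x, y, z_ref, psi

# Circular monitoring lap (R_x = R_y = 1.30 m, values from Table 3).
print(orbit_reference(t=1.0, t_i=0.0, gamma_0=0.0,
                      person=(2.0, 3.0, 0.0), R_x=1.3, R_y=1.3,
                      omega_m=9 * np.pi / 250, alpha_camera=np.pi / 4,
                      z_ref=1.5))
```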
Table 3. Values established in the simulation tests for each of the parameters under study: relative monitoring altitude, monitoring velocity, and monitoring radius (type/shape of trajectory).

Parameter | 1st Value (Extreme 1) | 2nd Value (Intermediate) | 3rd Value (Extreme 2)
Relative Monitoring Altitude | Low/Below | Medium/Centred | High/Above
  $z_r$ ⇒ $z_m = z_p + z_r$ | $z_r = -0.30$ [m] | $z_r = 0$ [m] | $z_r = +0.30$ [m]
Monitoring Velocity | Low Velocity | Medium Velocity | High Velocity
  $\omega_m = \omega_\gamma \cdot f_\omega$ | $f_\omega = 0.65$ | $f_\omega = 1$ | $f_\omega = 1.35$
Monitoring Radius | Closer to the Face | Equidistant (Circular) | Farther from the Face
  (Ellipse) $R_x$ | $R_x = 0.90$ [m] | $R_x = 1.30$ [m] | $R_x = 1.70$ [m]
  (Ellipse) $R_y$ | $R_y = 1.70$ [m] | $R_y = 1.30$ [m] | $R_y = 0.90$ [m]
Table 4. Parameters defined in the UAV simulator (MATLAB/Simulink® environment).

Description | Value | Units
Simulation
  Sample Time | $T_s = 0.01$ | [s]
  Simulation Time | $t = 250$ | [s]
Quadrotor UAV
  Initial Position (Base Position) | $(x_b, y_b, z_b) = (0, 0, 0)$ | [m]
  Initial Yaw Angle | $\psi(0) = 0$ | [rad]
  Camera’s Angle | $\alpha_{camera} = \pi/4$ | [rad]
  Mass | $m = 1$ | [kg]
Trajectory Planner
 Fixed Parameters:
  Velocity in the Z-Axis [state 1—take-off/state 11—landing] | $v_z = [8.6 \times 10^{-2} / 4.3 \times 10^{-2}]$ | [m/s]
  Velocity in $XY$ Diagonal Motion [state 3/state 9] | $v_d = [0.1 / 0.2]$ | [m/s]
  Angular Velocity for Yaw Adjustment | $\omega_\psi = 3\pi/50$ | [rad/s]
  Angular Velocity for Ellipsoidal Motion | $\omega_\gamma = 9\pi/250$ | [rad/s]
  Timer of State 4—Waiting in Safety Position | $t_{s4} = 5$ | [s]
  Timer of State 6—Data Capture | $t_{s6} = 0$ | [s]
 Variable Parameters ⇒ see Table 3 and Figure 7
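Combining the fixed angular velocity for ellipsoidal motion in Table 4 with the velocity factors of Table 3 gives the duration of one full monitoring lap around the person:

$$\omega_m = f_\omega\,\omega_\gamma, \qquad T = \frac{2\pi}{\omega_m} = \frac{2\pi}{f_\omega\,\omega_\gamma}.$$

With $\omega_\gamma = 9\pi/250$ rad/s this yields $T = (500/9)/f_\omega$ s, i.e., approximately 85.5 s for the low velocity ($f_\omega = 0.65$), 55.6 s for the medium velocity ($f_\omega = 1$) and 41.2 s for the high velocity ($f_\omega = 1.35$).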
Table 5. Questionnaire and participants’ responses for the experiment with younger adults.

Question | Mean | SD | Median | IQR
Altitude
 Safety
  A.Saf1: I felt safe during the monitoring process by the UAV flying below my head | 3.03 | 1.14 | 3.00 | 2.00
  A.Saf2: I felt safe during the monitoring process by the UAV flying in front of my head | 2.54 | 1.10 | 2.00 | 1.00
  A.Saf3: I felt safe during the monitoring process by the UAV flying above my head | 4.05 | 0.91 | 4.00 | 1.00
 Supervision
  A.Sup1: I feel that there is too much supervision by the UAV flying below my head | 2.89 | 1.02 | 3.00 | 1.00
  A.Sup2: I feel that there is too much supervision by the UAV flying in front of my head | 3.41 | 1.14 | 3.00 | 2.00
  A.Sup3: I feel that there is too much supervision by the UAV flying above my head | 3.05 | 1.03 | 3.00 | 1.00
 Distraction
  A.Dis1: I think the UAV would distract me from my daily routine by flying below my head | 3.76 | 1.26 | 4.00 | 2.00
  A.Dis2: I think the UAV would distract me from my daily routine by flying in front of my head | 4.35 | 0.82 | 5.00 | 1.00
  A.Dis3: I think the UAV would distract me from my daily routine by flying above my head | 2.43 | 1.04 | 2.00 | 1.00
Velocity
 Safety
  V.Saf1: I felt safe during the monitoring process by the UAV flying at a low velocity | 4.19 | 0.84 | 4.00 | 1.00
  V.Saf2: I felt safe during the monitoring process by the UAV flying at a medium velocity | 4.11 | 0.74 | 4.00 | 1.00
  V.Saf3: I felt safe during the monitoring process by the UAV flying at a high velocity | 3.43 | 1.19 | 4.00 | 2.00
 Supervision
  V.Sup1: I feel that there is too much supervision by the UAV flying at a low velocity | 3.70 | 0.85 | 4.00 | 1.00
  V.Sup2: I feel that there is too much supervision by the UAV flying at a medium velocity | 3.32 | 0.78 | 3.00 | 1.00
  V.Sup3: I feel that there is too much supervision by the UAV flying at a high velocity | 2.97 | 0.80 | 3.00 | 0.00
 Distraction
  V.Dis1: I think the UAV would distract me from my daily routine by flying at a low velocity | 3.54 | 1.37 | 4.00 | 3.00
  V.Dis2: I think the UAV would distract me from my daily routine by flying at a medium velocity | 3.22 | 1.16 | 3.00 | 1.00
  V.Dis3: I think the UAV would distract me from my daily routine by flying at a high velocity | 3.35 | 1.23 | 4.00 | 2.00
 Adequacy
  V.Ade1: I found the low velocity of the UAV adequate | 3.03 | 1.32 | 3.00 | 2.00
  V.Ade2: I found the medium velocity of the UAV adequate | 3.49 | 0.87 | 4.00 | 1.00
  V.Ade3: I found the high velocity of the UAV adequate | 3.30 | 1.15 | 3.00 | 2.00
Trajectory
 Safety
  T.Saf1: I felt safe during the monitoring process by the UAV flying elliptically (closer to face) | 2.70 | 1.10 | 2.00 | 2.00
  T.Saf2: I felt safe during the monitoring process by the UAV flying following a circular trajectory | 3.89 | 0.81 | 4.00 | 1.00
  T.Saf3: I felt safe during the monitoring process by the UAV flying elliptically (farther from face) | 3.68 | 0.94 | 4.00 | 1.00
 Supervision
  T.Sup1: I feel that there is too much supervision by the UAV flying elliptically (closer to face) | 3.51 | 1.07 | 3.00 | 1.00
  T.Sup2: I feel that there is too much supervision by the UAV flying following a circular trajectory | 2.89 | 0.70 | 3.00 | 1.00
  T.Sup3: I feel that there is too much supervision by the UAV flying elliptically (farther from face) | 2.89 | 0.81 | 3.00 | 1.00
 Distraction
  T.Dis1: I think the UAV would distract me from my daily routine by flying elliptically (closer to face) | 3.86 | 1.03 | 4.00 | 1.00
  T.Dis2: I think the UAV would distract me from my daily routine by flying following a circular trajectory | 2.84 | 0.99 | 3.00 | 2.00
  T.Dis3: I think the UAV would distract me from my daily routine by flying elliptically (farther from face) | 2.95 | 1.05 | 3.00 | 2.00
 Adequacy
  T.Ade1: I found the UAV’s monitoring distance to be correct by the UAV flying elliptically (closer to face) | 2.68 | 1.20 | 2.00 | 2.00
  T.Ade2: I found the UAV’s monitoring distance to be correct by the UAV flying following a circular trajectory | 3.84 | 0.69 | 4.00 | 1.00
  T.Ade3: I found the UAV’s monitoring distance to be correct by the UAV flying elliptically (farther from face) | 3.51 | 1.07 | 4.00 | 1.00
Table 6. Questionnaire and participants’ responses for the second experiment with older adults (question wording as in Table 5).

Question | Mean | SD | Median | IQR
Altitude
 Safety
  A.Saf1 | 2.43 | 1.08 | 2.00 | 1.00
  A.Saf2 | 2.39 | 1.20 | 2.00 | 2.00
  A.Saf3 | 2.91 | 0.95 | 3.00 | 2.00
 Supervision
  A.Sup1 | 3.30 | 0.97 | 3.00 | 1.00
  A.Sup2 | 3.43 | 1.16 | 4.00 | 1.00
  A.Sup3 | 2.74 | 1.05 | 3.00 | 1.50
 Distraction
  A.Dis1 | 3.74 | 1.32 | 4.00 | 2.50
  A.Dis2 | 4.26 | 0.96 | 5.00 | 1.00
  A.Dis3 | 2.35 | 0.98 | 2.00 | 1.00
Velocity
 Safety
  V.Saf1 | 3.70 | 1.02 | 4.00 | 1.00
  V.Saf2 | 3.83 | 0.94 | 4.00 | 1.50
  V.Saf3 | 2.78 | 1.24 | 2.00 | 2.00
 Supervision
  V.Sup1 | 2.74 | 1.05 | 3.00 | 2.00
  V.Sup2 | 3.17 | 1.07 | 3.00 | 1.50
  V.Sup3 | 3.22 | 1.13 | 4.00 | 2.00
 Distraction
  V.Dis1 | 3.83 | 0.89 | 4.00 | 1.50
  V.Dis2 | 3.61 | 0.66 | 4.00 | 1.00
  V.Dis3 | 2.87 | 0.81 | 4.00 | 2.00
 Adequacy
  V.Ade1 | 3.74 | 1.39 | 4.00 | 2.00
  V.Ade2 | 3.48 | 1.16 | 3.00 | 1.00
  V.Ade3 | 3.43 | 1.12 | 4.00 | 1.00
Trajectory
 Safety
  T.Saf1 | 2.74 | 1.14 | 3.00 | 2.00
  T.Saf2 | 3.74 | 0.86 | 4.00 | 1.00
  T.Saf3 | 3.74 | 1.01 | 4.00 | 0.50
 Supervision
  T.Sup1 | 2.65 | 1.23 | 2.00 | 1.50
  T.Sup2 | 3.87 | 0.76 | 4.00 | 0.50
  T.Sup3 | 3.74 | 1.01 | 4.00 | 1.00
 Distraction
  T.Dis1 | 3.30 | 1.11 | 3.00 | 1.50
  T.Dis2 | 2.87 | 0.76 | 4.00 | 0.50
  T.Dis3 | 2.87 | 0.87 | 3.00 | 1.00
 Adequacy
  T.Ade1 | 2.74 | 1.29 | 3.00 | 2.00
  T.Ade2 | 2.83 | 0.94 | 3.00 | 2.00
  T.Ade3 | 3.74 | 0.92 | 4.00 | 1.00
Table 7. Summary of the results for younger and older adults.

Criterion | Younger Adults | Older Adults
Altitude
 Preference | High altitude | High altitude
 Safety | High altitude safer than medium and low altitudes | No significant difference
 Supervision | No significant difference | No significant difference
 Distraction | Medium and low altitudes distract more than high altitude | Medium and low altitudes distract more than high altitude
Velocity
 Preference | High velocity | High velocity
 Safety | Low velocity safer than high velocity | Low and medium velocities safer than high velocity
 Supervision | Low velocity has more perceived supervision than high velocity | No significant difference
 Distraction | No significant difference | Low velocity distracts more than high velocity
 Adequacy | No significant difference | No significant difference
Trajectory
 Preference | Circular trajectory | Circular trajectory
 Safety | Circular and elliptical trajectory farther from the user’s face safer than elliptical trajectory closer to the user’s face | Circular and elliptical trajectory farther from the user’s face safer than elliptical trajectory closer to the user’s face
 Supervision | Elliptical trajectory close to the user has more perceived supervision than circular and elliptical trajectory far from the user | Higher supervision for elliptical trajectory close to the user than for circular and elliptical trajectory far from the user
 Distraction | Elliptical trajectory close to the user distracts more than circular and elliptical trajectory farther from the user’s face | No significant difference
 Adequacy | Elliptical trajectory far from the user and circular trajectory are more adequate than elliptical trajectory closer to the user’s face | Elliptical trajectory far from the user more adequate than circular and elliptical trajectory closer to the user’s face
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
