Article

Multirobot Confidence and Behavior Modeling: An Evaluation of Semiautonomous Task Performance and Efficiency

Department of Electrical and Computer Engineering, Wayne State University, Detroit, MI 48202, USA
* Author to whom correspondence should be addressed.
Robotics 2021, 10(2), 71; https://doi.org/10.3390/robotics10020071
Submission received: 13 March 2021 / Revised: 30 April 2021 / Accepted: 11 May 2021 / Published: 17 May 2021

Abstract

There is considerable interest in multirobot systems capable of performing spatially distributed, hazardous, and complex tasks as a team leveraging the unique abilities of humans and automated machines working alongside each other. The limitations of human perception and cognition affect operators’ ability to integrate information from multiple mobile robots, switch between their spatial frames of reference, and divide attention among many sensory inputs and command outputs. Automation is necessary to help the operator manage increasing demands as the number of robots (and humans) scales up. However, more automation does not necessarily equate to better performance. A generalized robot confidence model was developed, which transforms key operator attention indicators to a robot confidence value for each robot to enable the robots’ adaptive behaviors. This model was implemented in a multirobot test platform with the operator commanding robot trajectories using a computer mouse and an eye tracker providing gaze data used to estimate dynamic operator attention. The human-attention-based robot confidence model dynamically adapted the behavior of individual robots in response to operator attention. The model was successfully evaluated to reveal evidence linking average robot confidence to multirobot search task performance and efficiency. The contributions of this work provide essential steps toward effective human operation of multiple unmanned vehicles to perform spatially distributed and hazardous tasks in complex environments for space exploration, defense, homeland security, search and rescue, and other real-world applications.

1. Introduction

Researchers have long sought to enable multiple robots working together as a team [1,2,3,4,5,6,7,8,9,10] to perform distributed tasks such as area exploration, search, and surveillance [11,12,13,14,15,16,17,18,19] and complex tasks in hostile conditions such as the assembly of structures in orbit, lunar, and planetary environments [20,21,22,23,24,25,26]. Advances in sensing, computing, and materials may make fully autonomous multirobot systems possible in certain circumstances. However, many applications will demand human supervision and intervention to satisfy safety requirements, overcome technical limitations, or authorize critical actions. Human operators will be expected to perform tasks such as approving targets and resolving navigational impasses for manned-unmanned teams with robotic or optionally manned vehicles. Near-term teams of robots and humans will benefit from the unique advantages of human cognition, reasoning, ingenuity, and soft skills. Even after significant advances, humans will likely often retain a vital role as the authority ultimately responsible for safety and operating within established constraints.
Human interaction with multiple mobile robots involves information from many sources, multiple frames of reference, and competing tasks. Factors affecting single robot control via video-based interfaces include restricted fields of view, difficulty ascertaining orientations of the environment and robot, unnatural and occluded viewpoints, limited depth information, time delay, and poor video quality [27]. Increasing the number of robots multiplies these challenges, with each robot having potentially unique and dynamic orientations, camera perspectives, and sensory frames of reference. The demands of multitasking can overload the operator and limit the scalability of human–robot interaction as the number of robots increases [28,29,30,31,32,33].
Increasing automation does not necessarily improve performance. Our group’s prior user study measured search task performance with four robots operated at each of three levels of autonomy [34,35]. Automation and augmented reality (AR) graphics were intended to allow the participant to focus on higher-level tasks. With a fixed number of robots, successively higher levels of robot autonomy were expected to improve performance. However, the results revealed performance might decrease as autonomy increases past some threshold. We observed that many operators over-relied on automation and that operator inattention might have contributed to the unexpected drop in performance.
In this paper, we hypothesize that measuring attention and incorporating it as feedback in the system can mitigate these factors and improve performance. This work describes a robot confidence model which varies robot behavior in response to indicators of operator attention and the results of a user study that demonstrate its utility. The term robot confidence is used here as a metaphor to describe the mapping of attention-related inputs to robot behaviors. This research contributes techniques of incorporating operator attention as feedback to enable effective and efficient control of multiple semiautonomous mobile robots by a human operator.

2. Background

2.1. Robot Confidence

Concepts related to confidence are often linked to human trust in autonomy and to the allocation of control, i.e., how a human operator uses the available autonomy levels. Operator confidence typically refers to the self-assurance of a human in their ability to perform a task or trust in a robot’s ability to function autonomously. Research includes the impact of transparency and reliability on operator confidence [36]. Models estimating human self-confidence have been developed for purposes such as automatically choosing between manual and autonomous control [37].
Research related to robot confidence is typically aimed at altering human trust in autonomy or allocating control authority. A common objective is convincing the operator to shift the allocation of control toward autonomy or manual operation as appropriate to optimize performance. For example, a robot may provide visual feedback indicating its self-confidence to influence the operator’s trust [38]. Alternatively, a model of robot confidence might be used to directly distribute authority, such as setting shared-controller gains to amplify or attenuate inputs from a teleoperator and ultrasonic sensors [39]. Other research includes a robot expressing its certainty in performing a policy learned from a human teacher [40,41,42] and modeling a robot’s confidence in a human co-worker [43] or in its ability to predict human actions in a shared environment [44]. A similar concept is algorithm self-confidence applied, for example, to a visual classification algorithm [45].
Other works have modeled aspects of human cognition enabling robots to self-assess and adapt their behaviors [46,47,48]. The authors in [49] proposed an artificial neural network-based model of emotional metacontrol for modulating sensorimotor processes, and used intrinsic frustration and boredom to intervene in a visual search task. The authors in [50] modeled curiosity in an intrinsic motivation system used to maximize a robot’s learning progress by avoiding unlearnable situations and shifting its attention to situations of increasing difficulty.

2.2. Human Interaction with Multiple Robots

Automation is necessary for a human operator to effectively control multiple robots. Research often focuses on how many robots can be operated [31] and methods to do so efficiently [28,32]. General approaches to address operator overload due to multitasking include redesigning tasks and interfaces to reduce demands, training operators to develop automaticity and improve attention management, and automating tasks and task management [51]. Research toward interaction with multiple semiautonomous robots includes task switching and operator attention allocation [52,53,54], such as identifying where an operator should focus and influencing the operator’s behavior accordingly via visual cues in a graphical user interface [55]. Other work includes determining which aspects of a given task are most suitable for automation [16], measuring and influencing operator trust in team autonomy [19], using intelligent agents to help human operators manage a team of multiple robots [13], and augmented reality interfaces that integrate information from multiple sources and project it into a view of the real world using a common frame of reference [35,56,57].

2.3. Understanding the User’s Intent

Dragan et al. [58] address the fundamental problem that teleoperation interfaces are limited by the inherent indirectness of these systems. Their work discusses intelligent and customizable solutions for the adverse effects of remote operation, arguing that decisions about what assistance to provide operators should be contextual and depend on predicting the user’s intent. Their main recommendation is that a robot learn specific policies from examples. Chien, Lin, and Lee [52] proposed a hidden Markov model (HMM) to examine operator intent, and performed offline HMM analysis of multirobot interaction queuing mechanisms. Several groups [59,60,61] (including our own) have used eye-gaze tracking to determine the user’s intent to zoom the camera, for example, by feeding clusters of eye-gaze data into a classification algorithm (such as one based on linear discriminant analysis). Similarly, [59] uses simultaneous eye-gaze displays of multiple users to show their mutual intent.
Goldberg and Schryver [60] developed an off-line method for predicting a user’s intent to change or maintain the zoom (i.e., magnification/reduction) and gave camera zoom control as an example application. Latif, Sherkat, and Lotfi [62,63] proposed a gaze-based interface to drive a robot and change the on-board camera view. Zhu, Gedeon, and Taylor [64] developed a gaze-based pan and tilt control that continually repositioned the camera viewpoint to bring the user’s fixation point to the video screen center. Kwok et al. [59] applied eye-gaze tracking to allow control of two bimanual surgical robots by independent operators.

2.4. Eye Tracking for Human–Robot Interaction

Many metrics have been proposed for evaluating the performance of human–robot teams [53,65,66]. However, operator awareness, intent, and workload are influenced by various factors linked to task conditions, human perception, and cognition, which make these challenging to define and measure. Established methods are typically subjective [67,68,69,70,71,72,73,74]. A number of physiological indicators can be observed, including many related to eye movements that are measurable using non-invasive techniques [69,75,76].
Attention is a cognitive function and thus is difficult to measure directly. Eye tracking technology enables physiological measurements linked to various aspects of human cognition, including attention [77,78,79,80,81]. The role of eye movements and visual attention in reading, scene perception, visual search, and other information processing has long been studied [82,83,84,85]. Eye gaze describes the point where a person is looking. A fixation is a relatively stable visual gaze at an area of interest. A saccade is a rapid ballistic movement to a new area of interest. The detection of fixations and saccades from raw eye movement data was a principal focus of early work toward eye-tracking for human–computer interaction (HCI) [60,86,87,88,89,90]. Advances in video-based eye tracking [91,92] and automatic fixation and saccade detection [93] have inspired HCI applications [94,95].
Gaze-directed pointing is the archetype of interactive eye-tracking use cases and a core motivation for fundamental work such as fixation and saccade detection [93]. Overt attention directed at a user interface element is a strong indicator of user intent to interact with that element. Spatial input is a highly intuitive use of eye-tracking for interactive systems. Considerable research has been conducted toward gaze-based pointing [77,78,79,82,86,87,94,96]. Interactive applications include hands-free user input for the disabled [96,97], camera control [60,61,62,63,64,98,99], and automotive applications [100,101,102,103]. Techniques have been developed to teleoperate a robot using eye gaze, including an interface to drive a robot and change the view of an on-board camera [62,63]. That user interface featured graphical overlays for control elements, and gaze input commands were activated by either dwell-time or a foot clutch, enabling hands-free teleoperation.
Beyond gaze-based control, the human eye offers a more subtle window into perception and cognition processes. Eye gaze has been used as a proxy for attention [77,78,79]. The authors in [80] modeled dynamic operator overload based on the operator’s attention to a critical situation associated with impending failure. The response time before initial fixation represented delayed attention. The number of fixations on an object represented the allocation of attention. Fixation has been applied as a measure of attention allocation for an online predictive model of operator overload during supervisory control of multiple unmanned aerial vehicles (UAVs) within a simulation environment [81]. A logistic regression model, developed to predict vehicle damage when an operator failed to correct a collision course, was applied to generate real-time alerts. The model was a function of the delay prior to allocating visual attention to the vehicle, how much attention was diverted away from the vehicle once attended, and how much time remained before the collision would occur. Fixation has also been used to measure situation awareness [104,105], operator fatigue [106], and workload [75,76,107,108,109,110].

2.5. Augmented Reality for Human–Robot Interfaces

Human control of multiple mobile robots requires considerable divided attention, integrating information from many sources, and switching between multiple frames of reference. Projecting sensed data onto the real-world scene, at the point of observation or the point being observed, may help alleviate the cognitive burden of mentally integrating information from various sources. Augmented reality (AR) is the registration and visual integration of computer-generated graphics and real-world environments [111,112]. Demonstrated techniques include overlaying sensed data onto individual robots via wearable head-up display [56] and superimposing arrows on 20 robots to create a gradient toward a target location [57].
Telerobotic systems very often rely on real-time video from the perspective of or external to the robot. One challenge of teleoperation is limited visuospatial perspective. AR techniques such as color-coded orientation cues that visually map controller input axes (on the joystick hardware) to end effector axes (on the display) can improve telemanipulator navigation, with significant reductions in trajectory distance, deviations from the ideal path, and navigation error [113].
AR can also reduce visual search and mental integration demands. During traditional neuronavigation, a surgeon must mentally transform two-dimensional medical imaging data into three-dimensional structures and project this information on her or his view of the patient. Systems for augmented neuronavigation can perform transformations by computer and display composite video with models of structures of interest projected on the surgical site, resulting in significantly lower task time and fewer errors [114].

3. Materials and Methods

3.1. Robot Confidence Model

A discrete robot confidence model was developed to dynamically adapt the behavior of a robot in response to operator attention directed at the robot or its activities. For this work, robot confidence is a value that increases upon attention to the robot and decreases over time while the operator does not attend to the robot. Figure 1 contains a diagram summarizing the major elements of the model. Each indicator of operator attention in x has a corresponding weight in p which can be used to bias the inputs. The maximum value u of the weighted inputs is computed to establish the highest estimate of attention given the inputs. Robot confidence c_k at the current timestep k is a function of the maximum weighted input u, the confidence at the preceding timestep, and a constant minimum confidence value. The previous confidence value is decreased by a constant decrement value before taking the maximum value of these inputs. Similar to the calculation of u, the maximum value is again taken to yield the highest confidence given all inputs, feedback, and constraints. The computed confidence value is then used to adapt robot behaviors according to predefined rules.
The key features of this model are the aggregation of weighted attention indicators as a maximum value, a second maximum for the actual confidence value calculation, and the decremented previous confidence value. It should be noted that confidence at any given time might exceed the maximum weighted input u if the decremented previous confidence is higher. That is, confidence can decrease by at most c_d per timestep, even when u has a lower value than the decremented confidence (c_{k-1} - c_d). In other words, a sudden drop in attention does not result in an immediate dramatic reduction in confidence. Instead, confidence gradually decreases over time. Additionally, note the confidence value continues decreasing during long periods of inattention but does not drop below the minimum value c_min.
Figure 1 acknowledges that robot behaviors observed by the operator may influence attention. Although this could be exploited to create a second feedback loop by, for example, exercising exaggerated or unexpected motions or flashing visual alerts, our implementation of the model sought to avoid overtly influencing attention in order to minimize external factors affecting overall system performance.
For n binary indicators of operator attention in x and corresponding parameters in p, Equation (3) computes a maximum weighted input u, where p ∘ x is the element-wise product of p and x. Equation (4) defines robot confidence c_k at timestep k, where c_d is a confidence decrement subtracted from the previous confidence value c_{k-1} and c_min is a minimum confidence value.
$$\mathbf{x} = \left[ x_1 \;\cdots\; x_n \right] \tag{1}$$
$$\mathbf{p} = \left[ p_1 \;\cdots\; p_n \right] \tag{2}$$
$$u = \max\left( \mathbf{p} \circ \mathbf{x} \right) = \max\left( \left[ p_1 x_1 \;\cdots\; p_n x_n \right] \right) \tag{3}$$
$$c_k = \max\left( u,\; c_{k-1} - c_d,\; c_{\min} \right) \tag{4}$$
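To make the update rule concrete, a minimal Python sketch of Equations (1)–(4) is shown below; the function and parameter names are illustrative and not taken from the authors' software.

```python
from typing import Sequence


def update_confidence(x: Sequence[int], p: Sequence[float],
                      c_prev: float, c_d: float, c_min: float) -> float:
    """One timestep of the robot confidence model (Equations (3) and (4)).

    x      -- binary indicators of operator attention (0 or 1 each)
    p      -- weight for each indicator
    c_prev -- confidence value from the previous timestep (c_{k-1})
    c_d    -- confidence decrement applied per timestep of inattention
    c_min  -- floor below which confidence never drops
    """
    # Equation (3): maximum of the element-wise weighted indicators.
    u = max(p_i * x_i for p_i, x_i in zip(p, x))
    # Equation (4): highest of the new evidence, the decayed previous
    # confidence, and the minimum confidence value.
    return max(u, c_prev - c_d, c_min)
```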
Figure 2 illustrates an implementation of the model. The diagram shows how the confidence value changes during a notional sequence of inputs. This example has an input vector x = [x_1  x_2] with two binary indicators of attention x_1, x_2 ∈ {0, 1}, two associated weight parameters p = [p_1  p_2] = [25  10], a confidence decrement c_d = 10, and a minimum value c_min = 0.
Starting with initial robot confidence c_0 = 0, Figure 2 depicts the following sequence of events:
  • At timestep 1, Equation (3) with model input x = [0  0] results in a maximum confidence input of u = max([p_1 x_1  p_2 x_2]) = max(25·0, 10·0) = 0. Equation (4) then yields robot confidence c_1 = max(u, c_0 − c_d, c_min) = max(0, 0 − 10, 0) = 0. Here, we see the maximum value prevents confidence values below c_min.
  • At timestep 2, input x = [1  0] results in u = max(25·1, 10·0) = 25 and robot confidence c_2 = max(25, 0 − 10, 0) = 25.
  • At timestep 3, input x = [1  1] results in u = max(25·1, 10·1) = 25 and robot confidence c_3 = max(25, 25 − 10, 0) = 25. Note that multiple instances of the same input u at consecutive timesteps sustain confidence but do not increase it.
  • At timestep 4, input x = [0  1] results in u = max(25·0, 10·1) = 10 and robot confidence c_4 = max(10, 25 − 10, 0) = max(10, 15, 0) = 15. Here, we see the new confidence value is the decremented value (c_3 − c_d) = 15 because this is still greater than the maximum weighted input u.
  • At timestep 5, input x = [0  0] results in u = max(25·0, 10·0) = 0 and robot confidence c_5 = max(0, 15 − 10, 0) = 5.
  • At timestep 6, input x = [0  1] results in u = max(25·0, 10·1) = 10 and robot confidence c_6 = max(10, 5 − 10, 0) = 10. Note that confidence does not increase by the value of the maximum weighted input (10). Instead, it only increases by 5 to reach a value of 10.
This example highlights several key features of the model. First, multiple simultaneous indicators of attention are not cumulative; the indicator with the highest weighted value takes precedence. Second, consecutive instances of the same indicator are not cumulative; confidence is sustained at the same value until the indicator is no longer present or another indicator takes precedence. Lastly, confidence decreases by at most c_d per timestep and never drops below the floor value c_min, even when the inputs would otherwise yield a lower confidence value.
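Running the update_confidence sketch above on the notional input sequence of Figure 2 (weights p = [25, 10], decrement c_d = 10, floor c_min = 0) reproduces the confidence values traced in the bullets:

```python
p, c_d, c_min = [25, 10], 10, 0
inputs = [[0, 0], [1, 0], [1, 1], [0, 1], [0, 0], [0, 1]]  # timesteps 1-6

c = 0  # initial confidence c_0
for k, x in enumerate(inputs, start=1):
    c = update_confidence(x, p, c, c_d, c_min)
    print(f"timestep {k}: x={x} -> c_{k} = {c}")
# Prints the sequence 0, 25, 25, 15, 5, 10 described above.
```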
The model uses a maximum-value approach to aggregate indicators of attention and update the confidence value. The weighted maximum in Equation (3) for input aggregation avoids the problem of multiple indicators influencing confidence more or less depending on whether they are registered simultaneously. The maximum in Equation (4) to update the confidence value prevents consecutive indicators from having a cumulative effect, especially for high-frequency indicators such as fixations. Thus, the model accommodates indicators that might occur simultaneously but at different and potentially variable frequencies.
For example, the indicators implemented for the user study described later are eye gaze fixations and direct input commands from the operator. A single fixation duration could be less than 100 ms, and multiple fixations are likely between operator commands, which might occur seconds or minutes apart. In addition, multiple fixations are likely during periods of focused attention, but the number and duration of fixations can vary. These could be a relatively small number of long fixations or a higher number of short fixations. Individual operator differences and the design of fixation detection algorithms also contribute to variability in fixation counts and durations.

3.2. Multirobot Test Platform with Eye Tracking

A multirobot test platform was developed to implement and test the robot confidence model described above. The platform consisted of four semiautonomous robots in a flat, unobstructed environment, an operator control station, and a camera mounted above the operating environment that supplied video to the control station. Figure 3 shows the control station which the operator used to command and observe the robots. The display showed video of the test environment with graphics superimposed to provide information about the robots, tasks, and user input. The user inputted robot trajectories using a computer mouse (not shown). An ET1000 (The Eye Tribe ApS, Copenhagen, Denmark) eye tracker with a reported accuracy of 0.5°–1° was mounted below the display and used to obtain eye-gaze data at 60 Hz for online estimation of screen coordinates where the operator directed their attention. Video S1 in the Supplementary Material highlights the robots as shown by the operator display (0:02), the operator in contact with forehead and chin rests to stabilize head position and orientation (0:12), and the eye tracker located below the display (0:24).
Figure 4 summarizes how the test platform was used to implement and test the confidence model. The eye tracker seen in the left panel (Figure 4a) provided gaze data which were processed to estimate operator attention directed at specific robots. These data and direct operator interaction with robots via command input were used to compute a confidence value for each robot. This value increased in response to attention directed at a robot and decreased over time while the operator attended to other objects. The center panel (Figure 4b) shows a visual representation of the confidence value. The platform could superimpose a light green arc on the robots to communicate confidence values to the operator; however, this graphic was not employed for the study presented here. The arc would gradually shorten to indicate decreasing confidence during periods of inattention to the robot, as depicted in the series of three time-lapsed images from top to bottom within the panel. The behavior of a robot changed according to its confidence value. A user study was conducted to measure search task performance and efficiency in relation to confidence and behavior changes, represented by the right panel (Figure 4c).
Figure 5 shows the robots within the test environment (left) and the presentation of this environment augmented with superimposed graphics via the operator display (right). The robots were 23 cm (9 in) wide and 25.4 cm (10 in) long. Independently driven rubber tracks enabled differential steering, including pivot turns (i.e., turning in place). Design details are available at https://github.com/lucas137/trackedrobot. The AprilTag visual fiducial system [115,116] was used to estimate the location and orientation of the robots in the test environment using video frames from the overhead camera. These data were used to identify operator interactions with specific robots, predict collisions, and superimpose graphics. A solid black circle was drawn on the robots as a high-contrast background for additional color-coded graphics, including a smaller solid light gray circle indicating the robot’s navigation status. The operator defined robot trajectories by inputting one or more waypoints which the robot automatically maneuvered to visit. The platform predicted robot collisions based on their trajectories, suspended the motion of robots just prior to the collision, and changed the color of the robots’ status graphic from light gray to orange. The robots remained suspended until the operator canceled or redefined one of the trajectories to resolve the collision. Software-defined obstacles were used to create navigation constraints. These virtual obstacles were drawn on the operator display as solid black rectangles. The platform did not allow the operator to input waypoints that would result in a trajectory crossing an obstacle. A continuous obstacle was placed around the perimeter of the test environment to prevent the operator from defining trajectories outside this space.
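The platform's navigation and collision-handling code is not listed in the paper; the sketch below illustrates, under assumed representations (axis-aligned rectangular obstacles and straight-line segments between waypoints), how a pending waypoint could be rejected when its segment would cross an obstacle, in the spirit of the validation described above. All names are illustrative.

```python
from dataclasses import dataclass


@dataclass
class Rect:
    """Axis-aligned rectangular obstacle (software-defined)."""
    xmin: float
    ymin: float
    xmax: float
    ymax: float


def _ccw(a, b, c):
    # Positive if the turn a -> b -> c is counter-clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])


def _segments_cross(p1, p2, q1, q2):
    # Proper crossing test via orientation signs (ignores collinear edge cases).
    return (_ccw(p1, p2, q1) * _ccw(p1, p2, q2) < 0 and
            _ccw(q1, q2, p1) * _ccw(q1, q2, p2) < 0)


def segment_hits_obstacle(a, b, r: Rect) -> bool:
    """True if segment a-b starts inside, ends inside, or crosses obstacle r."""
    inside = lambda p: r.xmin <= p[0] <= r.xmax and r.ymin <= p[1] <= r.ymax
    if inside(a) or inside(b):
        return True
    corners = [(r.xmin, r.ymin), (r.xmax, r.ymin), (r.xmax, r.ymax), (r.xmin, r.ymax)]
    edges = zip(corners, corners[1:] + corners[:1])
    return any(_segments_cross(a, b, e1, e2) for e1, e2 in edges)


def waypoint_is_valid(last_waypoint, pending_waypoint, obstacles) -> bool:
    """Reject a pending waypoint whose new trajectory segment would cross any obstacle."""
    return not any(segment_hits_obstacle(last_waypoint, pending_waypoint, r)
                   for r in obstacles)
```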
The platform had two trajectory input modes. The operator initiated the primary input mode by left-clicking a robot and then inputted waypoints by moving the mouse to each desired waypoint and using a single click to add it to the trajectory. The input was concluded by double-clicking to add the final waypoint. The robot then immediately executed the trajectory until it reached the final waypoint, or the operator canceled execution. The operator could cancel execution by clicking the robot, which stopped all motion and discarded the remaining unfinished trajectory. A trajectory being inputted could be discarded prior to execution by right-clicking anywhere on the screen. Inputted trajectories were drawn with a thin white line from the robot’s location to its current destination waypoint and between each subsequent waypoint. This provided the operator a visual representation of the remaining path during execution. During trajectory input, a thicker line and a circle at each waypoint were drawn. A light blue circle at the mouse cursor location indicated the pending waypoint that would be added upon a single or double click. Invalid pending waypoints were drawn yellow, and a yellow box was drawn around the obstacle the proposed trajectory would violate.
The second trajectory input mode provided the operator a means to orient the robot and move in tight quarters. The operator initiated this input by clicking a robot, holding the mouse button, and dragging the mouse to a destination waypoint. The robot immediately executed a pivot turn to face the destination and then moved in a straight line until either the robot reached it or the operator released the mouse button. If the destination was behind it, the robot pivoted to align its back with the destination and then moved backward to reach it. Otherwise, the robot pivoted to align its front and moved forward. If the operator moved the mouse while still holding the mouse button, the destination changed accordingly. The robot pivoted again as needed to face the new destination before resuming its motion toward it. As the robot always pivoted first, the operator could reorient the robot at its current location by picking a destination in the desired direction and releasing the mouse button when the robot achieved its new orientation but before it started moving toward the destination.
The robot confidence model was implemented with two indicators of operator attention (x): operator eye gaze fixations on the robot and operator input commands to the robot. For both indicators, the weight parameter value was 240 (i.e., p = [240  240]). The confidence decrement was c_d = 1 and the minimum value c_min = 0. Each robot’s confidence value was updated at approximately 60 Hz, the rate at which data samples from the eye tracker were processed to determine fixations. Operator input was processed at approximately 24 Hz, the test platform video display framerate.
Three robot behavior modes were defined with rules affecting a robot’s velocity according to its confidence value while operating autonomously:
  • Velocity boost;
  • Velocity drop;
  • Constant velocity (control).
In velocity boost mode, a robot moved faster than the nominal baseline velocity during periods of high confidence. Similarly, velocity drop mode reduced robot speeds during periods of low confidence. Robot speed was not affected by confidence in constant velocity mode, which served as a control for comparing performance in the other modes.
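The paper does not state the velocity scaling factors or the confidence thresholds at which behaviors switched, so the numbers below are placeholders; the sketch only illustrates how a behavior mode could map the current confidence value to a commanded speed relative to the nominal baseline.

```python
NOMINAL_SPEED = 1.0             # normalized baseline velocity
HIGH_CONF, LOW_CONF = 180, 60   # hypothetical thresholds (confidence range 0-240)


def commanded_speed(mode: str, confidence: float) -> float:
    """Map a robot's current confidence value to its autonomous driving speed."""
    if mode == "velocity_boost" and confidence >= HIGH_CONF:
        return 1.25 * NOMINAL_SPEED   # move faster while attention is high
    if mode == "velocity_drop" and confidence <= LOW_CONF:
        return 0.5 * NOMINAL_SPEED    # move slower while attention is low
    return NOMINAL_SPEED              # constant-velocity (control) behavior
```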

3.3. Experiment

A user study was conducted to measure search task performance and efficiency with respect to the 3 behavior modes (velocity boost, velocity drop, and constant velocity). Figure 6 contains a video frame from the control interface with added labels to point out features of the study. The task was to find multiple targets distributed within the test environment. The positions of these targets were software-defined, similar to the obstacles described above, but were hidden until located by the user. A robot “detected” a target when it was positioned within a configured distance, faced the target within ±45°, and had no obstacles between it and the target. As seen at the top of Figure 6, a green line was drawn to indicate a target detection. The line started at the robot and went through the target to a point beyond it. The length of this line was constant to avoid revealing the exact target location.
Video S2 in the Supplementary Material contains annotations noting target detection (0:07, 0:39, 0:46, 1:17) and target location (1:01, 1:24) events. To localize a target, the user positioned two robots to detect it, resulting in detection lines intersecting at the target point. The user then clicked the intersection to get credit for finding the target. This requirement was designed to simulate a real-world task that must be accomplished using multiple robots. A green circle with a light gray border was drawn to indicate a located target (top of Figure 6). A target could only be located once during the study trial, so this graphic persisted for the remaining duration of the trial.
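A sketch of the detection test as stated above (configured distance, ±45° facing tolerance, unobstructed line of sight), reusing segment_hits_obstacle from the earlier sketch; the range value and function names are illustrative and not taken from the platform code.

```python
import math

DETECTION_RANGE = 1.0   # configured detection distance (placeholder value, meters)


def detects_target(robot_xy, robot_heading_rad, target_xy, obstacles) -> bool:
    """True if the robot 'detects' the target per the study's three conditions."""
    dx, dy = target_xy[0] - robot_xy[0], target_xy[1] - robot_xy[1]
    # 1. Target is within the configured detection distance.
    if math.hypot(dx, dy) > DETECTION_RANGE:
        return False
    # 2. Robot faces the target within +/-45 degrees.
    bearing = math.atan2(dy, dx)
    off_axis = abs((bearing - robot_heading_rad + math.pi) % (2 * math.pi) - math.pi)
    if off_axis > math.radians(45):
        return False
    # 3. No obstacle lies between the robot and the target.
    return not any(segment_hits_obstacle(robot_xy, target_xy, r) for r in obstacles)
```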
The path plan on the left side of Figure 6 shows a robot trajectory inputted by the user. In addition, small blue circles were drawn to mark the robot’s location at regular time intervals, as seen on the right side of the figure. These persistent breadcrumbs provided a history of areas explored in search of targets. Study participants were asked to locate as many hidden targets as possible during each 5-min trial. A circular light green trial countdown graphic, seen in the center of Figure 6, shortened until disappearing at the end of the trial and displayed the number of seconds remaining.
A within-subjects design was selected to minimize participant-level variations such as spatial ability [117,118,119]. Three sets of 11 target locations were used to enable repeated measures for each robot behavior mode. Targets were randomly selected such that the overall difficulty of finding targets was roughly equal for all 3 sets. Each participant completed 9 study trials during a single session, 1 trial per combination of robot behavior mode and target set, yielding 3 repeated measures for each behavior mode per participant. Behavior modes and target sets were presented in a randomized, counterbalanced order. Participants were presented with on-screen instructional material, received hands-on training, and completed self-paced practice exercises to develop proficiency using the test platform and performing the search task. The platform’s eye tracker was calibrated for each participant. The training trials used a smaller set of targets that were easier to find than the 3 study target sets, ensuring participants would quickly discover them and gain experience completing the target localization task. In all cases, the number of discoverable targets was not revealed to participants during the study.
The study collected data for three search task metrics. Search performance was measured for each trial by recording the ratio of targets detected at least once by any of the robots and the ratio of targets located by the study participant. Search efficiency was computed for each trial by multiplying the ratio of targets located by the average robot motor speed during the trial. Motor speeds were normalized values ranging from 0 (no motion) to 1 (maximum speed capable). Thus, all three search task metrics could range from 0 to 1, with 1 being the best value possible.
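The three per-trial metrics follow directly from these definitions; a minimal sketch with illustrative names:

```python
def trial_metrics(n_targets, n_detected, n_located, motor_speed_samples):
    """Per-trial search metrics, each ranging from 0 (worst) to 1 (best)."""
    detected_ratio = n_detected / n_targets       # targets detected at least once
    located_ratio = n_located / n_targets         # targets located by the participant
    # Motor speeds are already normalized to [0, 1].
    mean_speed = sum(motor_speed_samples) / len(motor_speed_samples)
    efficiency = located_ratio * mean_speed       # search task efficiency
    return detected_ratio, located_ratio, efficiency
```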

3.4. Data Analysis

To understand how the robot behavior modes and other factors were related to the observed search performance and efficiency, mixed-effects regression models were constructed to explain the observed data by study trial. Linear mixed-effects models offer a robust statistical method capable of handling a variety of situations such as unbalanced data and missing values, and can be extended via generalized linear mixed-effects models to analyze data with non-normal error distributions [120,121,122,123,124]. Regression analyses were conducted using R (version 3.6.1) [125]. Linear mixed-effects models were fit by maximum likelihood using the lmer function of the R package lme4 (version 1.1.21) [126]. The general form of the R formulas used to specify the models is:
y ~ behavior + confidence + targetset + time + (1|participant)
where behavior is an unordered categorical variable with 3 levels for the robot behavior modes (velocity boost, velocity drop, constant velocity), confidence is a continuous variable for average robot confidence value during a given trial, targetset is an unordered categorical variable with 3 levels identifying which target set was used for the trial, and time is a continuous variable for time of day in decimal hours.
The explanatory variables for robot behavior mode, confidence value, target set, and time of day were entered into the model as fixed effects terms. The behavior and confidence terms were the primary interest, while target set and time of day were included to account for variations in the data that may be due to these other factors. Both continuous variables, confidence and time of day, were centered and scaled for model fitting. The model also included a random effects term with random intercepts by participant to account for correlation due to repeated measures.
The response variable y is the measure of performance for which the model is fitted. A total of three models were fitted:
  • Ratio of targets detected;
  • Ratio of targets located;
  • Search task efficiency.
The PBmodcomp function of the pbkrtest package (version 0.4.7) [127] was used to perform parametric bootstrap model comparisons to test whether each explanatory variable contributed significantly to the model fit. For each comparison, PBmodcomp compared the full model with a reduced model which omitted the variable being tested, and reported the fraction of simulated likelihood ratio test (LRT) values greater than or equal to the observed LRT value. A total of 30,000 simulations were performed per comparison.
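The analysis itself was performed in R with lme4 and pbkrtest as described. For readers working in Python, the following sketch fits an analogous random-intercept model for one response with statsmodels (by maximum likelihood) and compares full versus reduced models with an asymptotic likelihood-ratio test rather than pbkrtest's parametric bootstrap; the file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical trial-level data: one row per study trial.
df = pd.read_csv("trials.csv")  # columns: located, behavior, confidence, targetset, time, participant

# Random-intercept model by participant, fit by maximum likelihood (reml=False),
# mirroring the lme4 formula  y ~ behavior + confidence + targetset + time + (1|participant).
full = smf.mixedlm("located ~ C(behavior) + confidence + C(targetset) + time",
                   df, groups=df["participant"]).fit(reml=False)

# Reduced model omitting the confidence term, for a likelihood-ratio comparison.
reduced = smf.mixedlm("located ~ C(behavior) + C(targetset) + time",
                      df, groups=df["participant"]).fit(reml=False)

lrt = 2 * (full.llf - reduced.llf)
p_asymptotic = stats.chi2.sf(lrt, df=1)  # asymptotic LRT p-value (not a parametric bootstrap)
print(full.summary())
print(f"LRT = {lrt:.3f}, asymptotic p = {p_asymptotic:.4f}")
```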

4. Results

Data were collected with 12 healthy volunteers who had normal or corrected-to-normal vision (three females, nine males; mean age = 28.9, SD = 4.4). Each study session was approximately 2 h in length, during which all nine study trials were completed by the same participant, one trial per combination of the within-subject conditions (robot behavior mode and target set), for a total of 108 samples collected. About 1 h of each session was spent inducting the participant, conducting practice trials before the study trials, and collecting feedback from the participant after all trials were completed. The size of the study was influenced in part by the two-factor counterbalancing scheme used to balance both robot behavior and target set, by preliminary data collection, and by prior studies conducted by our group.
Linear mixed-effects models were fitted to explain the observed search task metrics data while accounting for correlation due to repeated measures with each participant. Figure 7 shows added variable plots with fitted lines from each mixed-effects model, with grouping levels for robot behavior mode and target set. Marginal prediction values were obtained via the predict.merMod function from the lme4 package [126]. The fitted models have two continuous variables: average robot confidence and time of day. We focused on confidence as the continuous variable of interest, while holding time at its median value. Raw confidence values were projected onto the x-axis to show the distribution of data with respect to fitted values. The y-axes are set to the same scale for comparison.
The fitted lines for ratio of targets detected (Figure 7a) exhibit less variation than those for ratio of targets located (Figure 7b) and search task efficiency (Figure 7c). Predicted values for target set three (dashed line) reflect somewhat lower performance across all three metrics, perhaps indicating this set of targets was generally more difficult to find than the other sets. Taking this into consideration, predictions for targets located and efficiency are noticeably higher for both the velocity boost and velocity drop robot behavior modes versus the control (constant velocity mode). However, these differences appear similar in magnitude to those by target set. The slopes of the fitted lines indicate potential positive relationships between average robot confidence by trial and all three measures of performance. The slopes for targets detected and targets located appear similar to each other, and higher than that for search task efficiency.
Parametric bootstrap model comparisons were used to test the contribution of each explanatory variable to the fitted mixed-effects models. The test statistic was the fraction of simulated likelihood ratio test (LRT) values greater than or equal to the observed LRT value. p-values less than 0.05 indicated a model term that contributed significantly to the model fit (i.e., removing the term from the model significantly decreased the goodness of fit). Table 1 summarizes the model comparison results. No main effect of robot behavior mode was found for any of the three metrics (p ≥ 0.32). In other words, whether and which confidence-based behavior (or the control) was used for a given trial did not help account for differences in search performance or efficiency. However, a significant main effect of average robot confidence was observed for targets detected and located (both p < 0.01) as well as search efficiency (p < 0.05).
In addition to these principal results, Table 1 shows results for the target set and time terms. Although care was taken to select random target locations such that the target sets were of equal difficulty, it is reasonable to expect some variation in the data due to differences between the sets. Time of day was included as a fixed-effects term to account for potential variation due to circadian rhythm (e.g., operator fatigue). However, neither target set nor time contributed significantly to any of the model fits (p ≥ 0.074 and p ≥ 0.18, respectively).

5. Discussion

This work developed a generalized robot confidence model which transforms multiple indicators of operator attention to a single confidence value which can be used to adapt robot behaviors. Specifically, we employed confidence as a metaphor relating indicators of operator attention and robot behaviors which respond to these indicators, and observed correlations between average confidence and three measures of multirobot search performance and efficiency.
Prior works related to robot confidence have focused on the allocation of control between human and robot [39], influencing operator behavior [38], or otherwise directly communicating the robot’s self-assessed state [40,41,42,43]. Other work is aimed at intrinsic motivations of the robot [49,50] or understanding of its environment [44]. While our model of confidence could drive overt feedback to the operator or be applied only to internal processes of the robot, the implementation presented here is directed at minimally intrusive adjustment of physical behavior to mitigate the challenges of human interaction with multiple mobile robots. The online application distinguishes this work from others which estimated operator attention offline [52] or used human eye gaze for training [128].
Our model produces a confidence value for each robot using a weighted-maximum to aggregate any number of inputs that may exhibit a high degree of variability, such as eye gaze fixations near a point of interest, along with a decremented previous value as feedback and a minimum confidence limit. A maximum-value approach was used to aggregate attention indicators and update the confidence value (see Equations (3) and (4)). This approach makes selective use of the available information to determine the confidence value. Future work might explore more sophisticated methods such as artificial neural networks and learned behaviors [49,50], hidden Markov models (HMMs) [52], and graph convolution networks [128].
We implemented the proposed robot confidence model using eye gaze fixation and user input as indicators of attention, along with adaptive behaviors which were automatically selected at threshold levels of confidence. The resulting system assessed operator attention in real-time to determine the confidence value of each robot and altered robot behavior accordingly.
We expected the user study to demonstrate improved search task performance and efficiency when robot velocity increased or decreased in response to high or low robot confidence, respectively. Parametric bootstrap comparisons of mixed-effects models found this confidence-based robot behavior was not significant to models explaining the observed search task metrics. Instead, we found the by-trial average confidence value contributed significantly to the models. This finding was evidence of positive relationships with all three metrics: targets detected, targets located, and search efficiency. This result suggests the confidence value itself has utility as a predictor of task performance and efficiency.
Future work might incorporate our confidence model as a real-time predictor of search task outcomes which can be leveraged to improve effective human supervision of multiple mobile robots in the field or for training. For example, graphics might be superimposed on a control interface display to communicate robot confidence to the operator to inform the allocation of resources and decision making. Our implementation centered on robot velocity. Future work might examine other robot behaviors, including team behaviors, adapted according to real-time confidence.
We envision a team of robots that would, through both imitation and reinforcement learning, automatically create robot behavior policies that improve performance. In this scenario, the human operator would seamlessly (via observation and interaction) adjust a robot’s confidence based on its performance. When mission objectives or milestones are reached, the robot would again be rewarded to boost confidence. Additionally, other factors could be added to attenuate robot behavior, such as signal time delay, signal corruption, power degradation, or terrain constraints. Robot behavior policies such as “stay in pairs” or “move to be in a camera’s field of view” could automatically be learned and executed. Robot teams paired with humans would, over time, become more efficient.
Furthermore, if simulation environments with sufficient resolution could be developed, robot policies under various configurations and conditions could be learned over many iterations in simulation. Applications include using duplicate robots in controlled environments for policy training before execution in the field, for example, training with terrestrial robots to develop policies for planetary exploration.

6. Conclusions

In this paper, we hypothesize that measuring attention and incorporating it as feedback in the system can mitigate factors affecting the function of multiple semiautonomous robots and improve performance. We presented a generalized robot confidence model which transforms key operator attention indicators to a robot confidence value for each robot to enable the robots’ adaptive behaviors. This model was implemented using operator eye gaze fixations and command inputs as the attention indicators, and successfully evaluated to reveal evidence linking average robot confidence to multirobot search task performance and efficiency. This work provides essential steps toward effective human operation of multiple unmanned vehicles to perform spatially distributed and hazardous tasks in complex environments for space exploration, defense, homeland security, search and rescue, and other real-world applications.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/robotics10020071/s1, Video S1: Multirobot test platform overview, Video S2: User interface with annotations noting target detection and target location events.

Author Contributions

Conceptualization, N.L. and A.P.; methodology, N.L. and A.P.; software, N.L.; validation, N.L. and A.P.; formal analysis, N.L. and A.P.; investigation, N.L. and A.P.; resources, N.L. and A.P.; data curation, N.L.; writing—original draft preparation, N.L. and A.P.; writing—review and editing, N.L. and A.P.; visualization, N.L.; supervision, A.P.; project administration, A.P.; funding acquisition, N.L. and A.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This research was approved by the Institutional Review Board at Wayne State University (IRB# 082105B3E(R), approved 12 August 2019).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors would like to thank Luke Reisner for his contribution to this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dudek, G.; Jenkin, M.R.M.; Milios, E.; Wilkes, D. A taxonomy for multi-agent robotics. Auton. Robot. 1996, 3, 375–397. [Google Scholar] [CrossRef]
  2. Farinelli, A.; Iocchi, L.; Nardi, D. Multirobot systems: A classification focused on coordination. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2004, 34, 2015–2028. [Google Scholar] [CrossRef] [Green Version]
  3. Arai, T.; Pagello, E.; Parker, L.E. Guest editorial advances in multirobot systems. Robot. Autom. IEEE Trans. 2002, 18, 655–661. [Google Scholar] [CrossRef] [Green Version]
  4. Parker, L.E. Current research in multirobot systems. Artif. Life Robot. 2003, 7, 1–5. [Google Scholar] [CrossRef]
  5. Lucas, N.P.; Pandya, A.K.; Ellis, R.D. Review of multi-robot taxonomy, trends, and applications for defense and space. In Proceedings of SPIE 8387, Unmanned Systems Technology XIV; International Society for Optics and Photonics: Baltimore, MD, USA, 2012. [Google Scholar] [CrossRef]
  6. Parker, L.E. Heterogeneous Multi-Robot Cooperation. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1994. [Google Scholar]
  7. Parker, L.E. ALLIANCE: An architecture for fault tolerant multirobot cooperation. Robot. Autom. IEEE Trans. 1998, 14, 220–240. [Google Scholar] [CrossRef] [Green Version]
  8. Mataric, M.J. Reinforcement learning in the multi-robot domain. Auton. Robot. 1997, 4, 73–83. [Google Scholar] [CrossRef]
  9. Stone, P.; Veloso, M. Multiagent systems: A survey from a machine learning perspective. Auton. Robot. 2000, 8, 345–383. [Google Scholar] [CrossRef]
  10. Dias, M.B.; Zlot, R.; Kalra, N.; Stentz, A. Market-based multirobot coordination: A survey and analysis. Proc. IEEE 2006, 94, 1257–1270. [Google Scholar] [CrossRef] [Green Version]
  11. Burgard, W.; Moors, M.; Stachniss, C.; Schneider, F.E. Coordinated multi-robot exploration. IEEE Trans. Robot. 2005, 21, 376–386. [Google Scholar] [CrossRef] [Green Version]
  12. Fox, D.; Burgard, W.; Kruppa, H.; Thrun, S. A probabilistic approach to collaborative multi-robot localization. Auton. Robot. 2000, 8, 325–344. [Google Scholar] [CrossRef]
  13. Rosenfeld, A.; Agmon, N.; Maksimov, O.; Kraus, S. Intelligent agent supporting human–multi-robot team collaboration. Artif. Intell. 2017, 252, 211–231. [Google Scholar] [CrossRef]
  14. Kruijff, G.-J.M.; Janíček, M.; Keshavdas, S.; Larochelle, B.; Zender, H.; Smets, N.J.; Mioch, T.; Neerincx, M.A.; Diggelen, J.; Colas, F. Experience in system design for human-robot teaming in urban search and rescue. In Field and Service Robotics; Springer: Berlin/Heidelberg, Germany, 2014; pp. 111–125. [Google Scholar] [CrossRef]
  15. Zadorozhny, V.; Lewis, M. Information fusion based on collective intelligence for multi-robot search and rescue missions. In Proceedings of the 2013 IEEE 14th International Conference on Mobile Data Management, Milan, Italy, 3–6 June 2013; pp. 275–278. [Google Scholar] [CrossRef]
  16. Lewis, M.; Wang, H.; Chien, S.Y.; Velagapudi, P.; Scerri, P.; Sycara, K. Choosing autonomy modes for multirobot search. Hum. Factors 2010, 52, 225–233. [Google Scholar] [CrossRef] [Green Version]
  17. Kruijff-Korbayová, I.; Colas, F.; Gianni, M.; Pirri, F.; Greeff, J.; Hindriks, K.; Neerincx, M.; Ögren, P.; Svoboda, T.; Worst, R. TRADR project: Long-term human-robot teaming for robot assisted disaster response. Ki-Künstliche Intell. 2015, 29, 193–201. [Google Scholar] [CrossRef] [Green Version]
  18. Gregory, J.; Fink, J.; Stump, E.; Twigg, J.; Rogers, J.; Baran, D.; Fung, N.; Young, S. Application of multi-robot systems to disaster-relief scenarios with limited communication. In Field and Service Robotics; Springer: Cham, Switzerland, 2016; pp. 639–653. [Google Scholar] [CrossRef]
  19. Dawson, S.; Crawford, C.; Dillon, E.; Anderson, M. Affecting operator trust in intelligent multirobot surveillance systems. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 3298–3304. [Google Scholar] [CrossRef]
  20. Leitner, J. Multi-robot cooperation in space: A survey. In 2009 Advanced Technologies for Enhanced Quality of Life; IEEE: New Jersey, NJ, USA, 2009; pp. 144–151. [Google Scholar] [CrossRef] [Green Version]
  21. Thangavelautham, J.; Law, K.; Fu, T.; El Samid, N.A.; Smith, A.D.; D’Eleuterio, G.M. Autonomous multirobot excavation for lunar applications. Robotica 2017, 35, 2330–2362. [Google Scholar] [CrossRef] [Green Version]
  22. Stroupe, A.; Huntsberger, T.; Okon, A.; Aghazarian, H.; Robinson, M. Behavior-based multi-robot collaboration for autonomous construction tasks. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 1495–1500. [Google Scholar] [CrossRef]
  23. Simmons, R.; Singh, S.; Heger, F.; Hiatt, L.M.; Koterba, S.; Melchior, N.; Sellner, B. Human-robot teams for large-scale assembly. In Proceedings of the NASA Science Technology Conference, College Park, MD, USA, 19–21 June 2007. [Google Scholar]
  24. Boning, P.; Dubowsky, S. The coordinated control of space robot teams for the on-orbit construction of large flexible space structures. Adv. Robot. 2010, 24, 303–323. [Google Scholar] [CrossRef] [Green Version]
  25. Ueno, H.; Nishimaki, T.; Oda, M.; Inaba, N. Autonomous cooperative robots for space structure assembly and maintenance. In Proceedings of the 7th International Symposium on Artificial Intelligence, Robotics, and Automation in Space, Nara, Japan, 19–23 May 2003. [Google Scholar]
  26. Huntsberger, T.; Pirjanian, P.; Trebi-Ollennu, A.; Nayar, H.D.; Aghazarian, H.; Ganino, A.; Garrett, M.; Joshi, S.; Schenker, P. CAMPOUT: A control architecture for tightly coupled coordination of multirobot systems for planetary surface exploration. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2003, 33, 550–559. [Google Scholar] [CrossRef]
  27. Chen, J.Y.; Haas, E.C.; Barnes, M.J. Human performance issues and user interface design for teleoperated robots. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2007, 37, 1231–1245. [Google Scholar] [CrossRef]
  28. Crandall, J.W.; Goodrich, M.A.; Olsen, D.R., Jr.; Nielsen, C.W. Validating human-robot interaction schemes in multitasking environments. Syst. Man Cybern. Part A Syst. Hum. IEEE Trans. 2005, 35, 438–449. [Google Scholar] [CrossRef]
  29. Cummings, M.; Nehme, C.; Crandall, J.; Mitchell, P. Predicting operator capacity for supervisory control of multiple UAVs. In Innovations in Intelligent Machines-1; Springer: Berlin/Heidelberg, Germany, 2007; Volume 70, pp. 11–37. [Google Scholar] [CrossRef]
  30. Olsen, D.R., Jr.; Wood, S.B. Fan-out: Measuring human control of multiple robots. In Proceedings of the SIGCHI Conf. on Human Factors in Computer Systems, Vienna, Austria, 24–29 April 2004; pp. 231–238. [Google Scholar] [CrossRef]
  31. Olsen, D.R.; Goodrich, M.A. Metrics for evaluating human-robot interactions. In Proceedings of the PerMIS, Gaithersburg, MD, USA, 16–18 August 2003. [Google Scholar]
  32. Goodrich, M.A.; Olsen, D.R., Jr. Seven principles of efficient human robot interaction. In Proceedings of the SMC’03 2003 IEEE International Conference on Systems, Man and Cybernetics. Conference Theme-System Security and Assurance (Cat. No.03CH37483), Washington, DC, USA, 8 October 2003; Volume 4, pp. 3942–3948. [Google Scholar] [CrossRef] [Green Version]
  33. Crandall, J.W.; Cummings, M.L. Identifying predictive metrics for supervisory control of multiple robots. Robot. IEEE Trans. 2007, 23, 942–951. [Google Scholar] [CrossRef] [Green Version]
  34. Lee, S.Y.-S. An Augmented Reality Interface for Multi-Robot Tele-Operation and Control; Wayne State University Dissertations: Detroit, MI, USA, 2011; Available online: https://digitalcommons.wayne.edu/oa_dissertations/381 (accessed on 15 March 2021).
  35. Lee, S.; Lucas, N.P.; Ellis, R.D.; Pandya, A. Development and human factors analysis of an augmented reality interface for multi-robot tele-operation and control. In Proceedings of the SPIE 8387, Unmanned Systems Technology XIV, 83870N, Baltimore, MD, USA, 25 May 2012. [Google Scholar] [CrossRef]
  36. Wright, J.L.; Chen, J.Y.; Lakhmani, S.G. Agent transparency and reliability in human–robot interaction: The influence on user confidence and perceived reliability. IEEE Trans. Hum.-Mach. Syst. 2020, 50, 254–263. [Google Scholar] [CrossRef]
  37. Saeidi, H.; Wang, Y. Incorporating Trust and Self-Confidence Analysis in the Guidance and Control of (Semi) Autonomous Mobile Robotic Systems. IEEE Robot. Autom. Lett. 2019, 4, 239–246. [Google Scholar] [CrossRef]
  38. Desai, M.; Kaniarasu, P.; Medvedev, M.; Steinfeld, A.; Yanco, H. Impact of robot failures and feedback on real-time trust. In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, Tokyo, Japan, 3–6 March 2013; pp. 251–258. [Google Scholar] [CrossRef] [Green Version]
  39. Sanders, D.A.; Sanders, B.J.; Gegov, A.; Ndzi, D. Using confidence factors to share control between a mobile robot tele-operater and ultrasonic sensors. In Proceedings of the 2017 Intelligent Systems Conference (IntelliSys), London, UK, 7–8 September 2017; pp. 1026–1033. [Google Scholar] [CrossRef] [Green Version]
  40. Chernova, S.; Veloso, M. Confidence-based multi-robot learning from demonstration. Int. J. Soc. Robot. 2010, 2, 195–215. [Google Scholar] [CrossRef]
41. Chernova, S. Confidence-Based Robot Policy Learning from Demonstration. Master’s Thesis, Carnegie Mellon University, School of Computer Science, Pittsburgh, PA, USA, 5 March 2009. [Google Scholar]
  42. Chernova, S.; Veloso, M. Teaching collaborative multi-robot tasks through demonstration. In Proceedings of the Humanoids 2008—8th IEEE-RAS International Conference on Humanoid Robots, Daejeon, Korea, 1–3 December 2008; pp. 385–390. [Google Scholar] [CrossRef]
43. Tran, A. Robot Confidence Modeling and Role Change in Physical Human-Robot Collaboration. Master’s Thesis, University of Technology Sydney, Ultimo, Australia, March 2019. [Google Scholar]
  44. Fisac, J.F.; Bajcsy, A.; Herbert, S.L.; Fridovich-Keil, D.; Wang, S.; Tomlin, C.J.; Dragan, A.D. Probabilistically safe robot planning with confidence-based human predictions. In Proceedings of the Robotics: Science and Systems, Pittsburgh, PA, USA, 26–30 June 2018. [Google Scholar] [CrossRef]
  45. Pronobis, A.; Caputo, B. Confidence-based cue integration for visual place recognition. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 2394–2401. [Google Scholar] [CrossRef] [Green Version]
46. Belkaid, M.; Cuperlier, N.; Gaussier, P. Autonomous cognitive robots need emotional modulations: Introducing the eMODUL model. IEEE Trans. Syst. Man Cybern. Syst. 2018, 49, 206–215. [Google Scholar] [CrossRef]
  47. Schillaci, G.; Pico Villalpando, A.; Hafner, V.V.; Hanappe, P.; Colliaux, D.; Wintz, T. Intrinsic motivation and episodic memories for robot exploration of high-dimensional sensory spaces. Adapt. Behav. 2020. [Google Scholar] [CrossRef]
  48. Huang, X.; Weng, J. Novelty and Reinforcement Learning in the Value System of Developmental Robots; Computer Science and Engineering Department Michigan State University: East Lansing, MI, USA, 2002. [Google Scholar]
  49. Belkaid, M.; Cuperlier, N.; Gaussier, P. Emotional metacontrol of attention: Top-down modulation of sensorimotor processes in a robotic visual search task. PLoS ONE 2017, 12, e0184960. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Oudeyer, P.-Y.; Kaplan, F.; Hafner, V.V. Intrinsic motivation systems for autonomous mental development. IEEE Trans. Evol. Comput. 2007, 11, 265–286. [Google Scholar] [CrossRef] [Green Version]
  51. Wickens, C.D.; Gordon, S.E.; Liu, Y. An Introduction to Human Factors Engineering; Pearson Prentice Hall: Upper Saddle River, NJ, USA, 2004. [Google Scholar]
  52. Chien, S.-Y.; Lin, Y.-L.; Lee, P.-J.; Han, S.; Lewis, M.; Sycara, K. Attention allocation for human multi-robot control: Cognitive analysis based on behavior data and hidden states. Int. J. Hum. Comput. Stud. 2018, 117, 30–44. [Google Scholar] [CrossRef] [Green Version]
  53. Lewis, M. Human Interaction With Multiple Remote Robots. Rev. Hum. Factors Ergon. 2013, 9, 131–174. [Google Scholar] [CrossRef]
  54. Chien, S.-Y.; Lewis, M.; Mehrotra, S.; Brooks, N.; Sycara, K. Scheduling operator attention for multi-robot control. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 473–479. [Google Scholar] [CrossRef] [Green Version]
55. Crandall, J.W.; Cummings, M.L.; Della Penna, M.; de Jong, P.M. Computing the effects of operator attention allocation in human control of multiple robots. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2011, 41, 385–397. [Google Scholar] [CrossRef]
  56. Payton, D.; Daily, M.; Estowski, R.; Howard, M.; Lee, C. Pheromone robotics. Auton. Robot. 2001, 11, 319–324. [Google Scholar] [CrossRef]
57. Daily, M.; Cho, Y.; Martin, K.; Payton, D. World embedded interfaces for human-robot interaction. In Proceedings of the 36th Annual Hawaii International Conference on System Sciences, Big Island, HI, USA, 6–9 January 2003. [Google Scholar] [CrossRef]
  58. Dragan, A.; Lee, K.C.T.; Srinivasa, S. Teleoperation with Intelligent and Customizable Interfaces. J. Hum. Robot Interact. 2013, 2, 33–57. [Google Scholar] [CrossRef]
  59. Kwok, K.-W.; Sun, L.-W.; Mylonas, G.P.; James, D.R.C.; Orihuela-Espina, F.; Yang, G.-Z. Collaborative Gaze Channelling for Improved Cooperation During Robotic Assisted Surgery. Ann. Biomed. Eng. 2012, 40, 2156–2167. [Google Scholar] [CrossRef] [Green Version]
60. Goldberg, J.H.; Schryver, J.C. Eye-gaze-contingent control of the computer interface: Methodology and example for zoom detection. Behav. Res. Methods Instrum. Comput. 1995, 27, 338–350. [Google Scholar] [CrossRef] [Green Version]
  61. Ali, S.M.; Reisner, L.A.; King, B.; Cao, A.; Auner, G.; Klein, M.; Pandya, A.K. Eye gaze tracking for endoscopic camera positioning: An application of a hardware/software interface developed to automate Aesop. Stud. Health Technol. Inform. 2007, 132, 4–7. [Google Scholar]
  62. Latif, H.O.; Sherkat, N.; Lotfi, A. TeleGaze: Teleoperation through eye gaze. In Proceedings of the 2008 7th IEEE International Conference on Cybernetic Intelligent Systems, London, UK, 9–10 September 2008; pp. 1–6. [Google Scholar]
  63. Latif, H.O.; Sherkat, N.; Lotfi, A. Teleoperation through eye gaze (TeleGaze): A multimodal approach. In Proceedings of the 2009 IEEE International Conference on Robotics and Biomimetics (ROBIO), Guilin, China, 19–23 December 2009; pp. 711–716. [Google Scholar]
  64. Zhu, D.; Gedeon, T.; Taylor, K. “Moving to the centre”: A gaze-driven remote camera control for teleoperation. Interact. Comput. 2011, 23, 85–95. [Google Scholar] [CrossRef]
  65. Steinfeld, A.; Fong, T.; Kaber, D.; Lewis, M.; Scholtz, J.; Schultz, A.; Goodrich, M. Common metrics for human-robot interaction. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on HRI, Salt Lake City, UT, USA, 2–4 March 2006; pp. 33–40. [Google Scholar] [CrossRef] [Green Version]
  66. Yanco, H.A.; Drury, J.L.; Scholtz, J. Beyond usability evaluation: Analysis of human-robot interaction at a major robotics competition. Hum. Comput. Interact. 2004, 19, 117–149. [Google Scholar]
  67. Endsley, M.R. Toward a theory of situation awareness in dynamic systems. Hum. Factors J. Hum. Factors Ergon. Soc. 1995, 37, 32–64. [Google Scholar] [CrossRef]
  68. Endsley, M.R. Situation awareness global assessment technique (SAGAT). In Proceedings of the IEEE 1988 National Aerospace and Electronics Conference, Dayton, OH, USA, 23–27 May 1988; pp. 789–795. [Google Scholar] [CrossRef]
  69. Endsley, M.R. Measurement of situation awareness in dynamic systems. Hum. Factors J. Hum. Factors Ergon. Soc. 1995, 37, 65–84. [Google Scholar] [CrossRef]
  70. Stein, E.S. Air Traffic Controller Workload: An Examination of Workload Probe; DOT/FAA/CT-TN84/24; U.S. Department of Transportation, Federal Aviation Administration: Atlantic City Airport, NJ, USA, 1985.
  71. Rubio, S.; Díaz, E.; Martín, J.; Puente, J.M. Evaluation of Subjective Mental Workload: A Comparison of SWAT, NASA-TLX, and Workload Profile Methods. Appl. Psychol. 2004, 53, 61–86. [Google Scholar] [CrossRef]
  72. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. Hum. Ment. Work. 1988, 1, 139–183. [Google Scholar]
  73. Hart, S.G. NASA-task load index (NASA-TLX); 20 years later. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, San Francisco, CA, USA, 16–20 October 2006; Volume 50, pp. 904–908. [Google Scholar] [CrossRef] [Green Version]
  74. NASA Human Systems Integration Division. NASA TLX Publications/Instruction Manual. Available online: http://humansystems.arc.nasa.gov/groups/TLX/tlxpublications.html (accessed on 18 October 2014).
  75. Veltman, J.; Gaillard, A. Physiological workload reactions to increasing levels of task difficulty. Ergonomics 1998, 41, 656–669. [Google Scholar] [CrossRef]
  76. Kramer, A.F. Physiological Metrics of Mental Workload: A Review of Recent Progress; University of Illinois at Urbana-Champaign: Champaign, IL, USA, 1990. [Google Scholar]
  77. Ware, C.; Mikaelian, H.H. An evaluation of an eye tracker as a device for computer input. In Proceedings of the ACM SIGCHI Bulletin, Toronto, ON, Canada, 5–9 April 1987; pp. 183–188. [Google Scholar] [CrossRef]
  78. Jacob, R.J. Eye movement-based human-computer interaction techniques: Toward non-command interfaces. Adv. Hum. Comput. Interact. 1993, 4, 151–190. [Google Scholar]
  79. Sibert, L.E.; Jacob, R.J. Evaluation of eye gaze interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, The Hague, The Netherlands, 1–6 April 2000; pp. 281–288. [Google Scholar] [CrossRef] [Green Version]
  80. Gartenberg, D.; Breslow, L.A.; Park, J.; McCurry, J.M.; Trafton, J.G. Adaptive automation and cue invocation: The effect of cue timing on operator error. In Proceedings of the 2013 ACM Annual Conference on Human Factors in Computing Systems, Paris, France, 27 April–2 May 2013; pp. 3121–3130. [Google Scholar] [CrossRef]
81. Breslow, L.A.; Gartenberg, D.; McCurry, J.M.; Trafton, J.G. Dynamic Operator Overload: A Model for Predicting Workload During Supervisory Control. IEEE Trans. Hum. Mach. Syst. 2014, 44, 30–40. [Google Scholar] [CrossRef]
82. Duchowski, A.T. A breadth-first survey of eye-tracking applications. Behav. Res. Methods Instrum. Comput. 2002, 34, 455–470. [Google Scholar] [CrossRef]
  83. Just, M.A.; Carpenter, P.A. A theory of reading: From eye fixations to comprehension. Psychol. Rev. 1980, 87, 329. [Google Scholar] [CrossRef] [PubMed]
  84. Hyrskykari, A. Utilizing eye movements: Overcoming inaccuracy while tracking the focus of attention during reading. Comput. Hum. Behav. 2006, 22, 657–671. [Google Scholar] [CrossRef]
  85. Rayner, K. Eye movements in reading and information processing: 20 years of research. Psychol. Bull. 1998, 124, 372. [Google Scholar] [CrossRef]
  86. Jacob, R.J. What you look at is what you get: Eye movement-based interaction techniques. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Seattle, WA, USA, 1–5 April 1990; pp. 11–18. [Google Scholar] [CrossRef] [Green Version]
  87. Jacob, R.J. The use of eye movements in human-computer interaction techniques: What you look at is what you get. ACM Trans. Inf. Syst. (TOIS) 1991, 9, 152–169. [Google Scholar] [CrossRef]
  88. Goldberg, J.H.; Schryver, J.C. Eye-gaze determination of user intent at the computer interface. Stud. Vis. Inf. Process. 1995, 6, 491–502. [Google Scholar]
  89. Goldberg, J.H.; Schryver, J.C. Eye-gaze control of the computer interface: Discrimination of zoom intent. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Seattle, WA, USA, 11–15 October 1993; pp. 1370–1374. [Google Scholar]
  90. Goldberg, J.H.; Kotval, X.P. Computer interface evaluation using eye movements: Methods and constructs. Int. J. Ind. Ergon. 1999, 24, 631–645. [Google Scholar] [CrossRef]
  91. Morimoto, C.H.; Mimica, M.R. Eye gaze tracking techniques for interactive applications. Comput. Vis. Image Underst. 2005, 98, 4–24. [Google Scholar] [CrossRef]
92. Hansen, D.W.; Ji, Q. In the eye of the beholder: A survey of models for eyes and gaze. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 478–500. [Google Scholar] [CrossRef] [PubMed]
  93. Salvucci, D.D.; Goldberg, J.H. Identifying fixations and saccades in eye-tracking protocols. In Proceedings of the 2000 Symposium on Eye Tracking Research & Applications, Palm Beach Gardens, FL, USA, 6–8 November 2000; pp. 71–78. [Google Scholar] [CrossRef]
  94. Jacob, R.J.; Karn, K.S. Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the Promises. In The Mind’s Eye: Cognitive and Applied Aspects of Eye Movement Research; Hyönä, J., Radach, R., Deubel, H., Eds.; North-Holland: Amsterdam, The Netherlands, 2003; pp. 573–605. [Google Scholar] [CrossRef]
  95. Fono, D.; Vertegaal, R. EyeWindows: Evaluation of eye-controlled zooming windows for focus selection. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Portland, OR, USA, 2–7 April 2005; pp. 151–160. [Google Scholar] [CrossRef]
96. Hutchinson, T.E.; White, K.P., Jr.; Martin, W.N.; Reichert, K.C.; Frey, L.A. Human-computer interaction using eye-gaze input. IEEE Trans. Syst. Man Cybern. 1989, 19, 1527–1534. [Google Scholar] [CrossRef]
  97. Hansen, D.W.; Skovsgaard, H.H.; Hansen, J.P.; Møllenbach, E. Noise Tolerant Selection by Gaze-Controlled Pan and Zoom in 3D. In Proceedings of the 2008 Symposium on Eye Tracking Research and Applications, Savannah, GA, USA, 26–28 March 2008; pp. 205–212. [Google Scholar] [CrossRef]
  98. Kotus, J.; Kunka, B.; Czyzewski, A.; Szczuko, P.; Dalka, P.; Rybacki, R. Gaze-tracking and Acoustic Vector Sensors Technologies for PTZ Camera Steering and Acoustic Event Detection. In Proceedings of the 2010 Workshop on Database and Expert Systems Applications (DEXA), Bilbao, Spain, 30 August–3 September 2010; pp. 276–280. [Google Scholar] [CrossRef]
  99. Pandya, A.; Reisner, L.A.; King, B.; Lucas, N.; Composto, A.; Klein, M.; Ellis, R.D. A Review of Camera Viewpoint Automation in Robotic and Laparoscopic Surgery. Robotics 2014, 3, 310–329. [Google Scholar] [CrossRef] [Green Version]
100. Doshi, A.; Trivedi, M.M. On the Roles of Eye Gaze and Head Dynamics in Predicting Driver’s Intent to Change Lanes. IEEE Trans. Intell. Transp. Syst. 2009, 10, 453–462. [Google Scholar] [CrossRef] [Green Version]
101. McCall, J.C.; Wipf, D.P.; Trivedi, M.M.; Rao, B.D. Lane change intent analysis using robust operators and sparse Bayesian learning. IEEE Trans. Intell. Transp. Syst. 2007, 8, 431–440. [Google Scholar] [CrossRef]
  102. Trivedi, M.M.; Cheng, S.Y. Holistic sensing and active displays for intelligent driver support systems. Computer 2007, 40, 60–68. [Google Scholar] [CrossRef]
  103. Poitschke, T.; Laquai, F.; Stamboliev, S.; Rigoll, G. Gaze-based interaction on multiple displays in an automotive environment. In Proceedings of the 2011 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Anchorage, AK, USA, 9–12 October 2011; pp. 543–548. [Google Scholar] [CrossRef] [Green Version]
  104. Gartenberg, D.; McCurry, M.; Trafton, G. Situation awareness reacquisition in a supervisory control task. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Las Vegas, NV, USA, 19–23 September 2011; pp. 355–359. [Google Scholar] [CrossRef]
  105. Ratwani, R.M.; McCurry, J.M.; Trafton, J.G. Single operator, multiple robots: An eye movement based theoretic model of operator situation awareness. In Proceedings of the HRI ’10, Osaka, Japan, 2–5 March 2010; pp. 235–242. [Google Scholar] [CrossRef]
  106. Di Stasi, L.L.; McCamy, M.B.; Catena, A.; Macknik, S.L.; Cañas, J.J.; Martinez-Conde, S. Microsaccade and drift dynamics reflect mental fatigue. Eur. J. Neurosci. 2013, 38, 2389–2398. [Google Scholar] [CrossRef]
  107. Ahlstrom, U.; Friedman-Berg, F.J. Using eye movement activity as a correlate of cognitive workload. Int. J. Ind. Ergon. 2006, 36, 623–636. [Google Scholar] [CrossRef]
  108. Beatty, J. Task-evoked pupillary responses, processing load, and the structure of processing resources. Psychol. Bull. 1982, 91, 276. [Google Scholar] [CrossRef]
  109. Klingner, J.; Kumar, R.; Hanrahan, P. Measuring the task-evoked pupillary response with a remote eye tracker. In Proceedings of the 2008 Symposium on Eye Tracking Research & Applications, Savannah, GA, USA, 26–28 March 2008; pp. 69–72. [Google Scholar] [CrossRef]
  110. Iqbal, S.T.; Zheng, X.S.; Bailey, B.P. Task-evoked pupillary response to mental workload in human-computer interaction. In Proceedings of the CHI’04 Extended Abstracts on Human Factors in Computing Systems, Vienna, Austria, 24–29 April 2004; pp. 1477–1480. [Google Scholar] [CrossRef]
  111. Azuma, R.T. A survey of augmented reality. Presence Teleoperators Virtual Environ. 1997, 6, 355–385. [Google Scholar] [CrossRef]
112. Azuma, R.; Baillot, Y.; Behringer, R.; Feiner, S.; Julier, S.; MacIntyre, B. Recent advances in augmented reality. IEEE Comput. Graph. Appl. 2001, 21, 34–47. [Google Scholar] [CrossRef] [Green Version]
  113. Chintamani, K.; Cao, A.; Ellis, R.D.; Pandya, A.K. Improved telemanipulator navigation during display-control misalignments using augmented reality cues. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2010, 40, 29–39. [Google Scholar] [CrossRef]
  114. Pandya, A.; Siadat, M.-R.; Auner, G.; Kalash, M.; Ellis, R.D. Development and human factors analysis of neuronavigation vs. augmented reality. In Studies in Health Technology and Informatics; IOS Press: Amsterdam, The Netherlands, 2004; Volume 98, pp. 291–297. [Google Scholar] [CrossRef]
115. Olson, E. AprilTag: A robust and flexible visual fiducial system. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 3400–3407. [Google Scholar] [CrossRef] [Green Version]
116. Wang, J.; Olson, E. AprilTag 2: Efficient and robust fiducial detection. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 4193–4198. [Google Scholar] [CrossRef]
  117. Chen, J.Y.C. Effects of operator spatial ability on UAV-guided ground navigation. In Proceedings of the 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Osaka, Japan, 2–5 March 2010; pp. 139–140. [Google Scholar]
  118. Chen, J.Y.C.; Barnes, M.J. Robotics operator performance in a military multi-tasking environment. In Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction, Amsterdam, The Netherlands, 12–15 March 2008; pp. 279–286. [Google Scholar] [CrossRef]
  119. Chen, J.Y.C. Concurrent performance of military tasks and robotics tasks: Effects of automation unreliability and individual differences. In Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction, La Jolla, CA, USA, 11–13 March 2009; pp. 181–188. [Google Scholar] [CrossRef]
  120. Bolker, B.M.; Brooks, M.E.; Clark, C.J.; Geange, S.W.; Poulsen, J.R.; Stevens, M.H.H.; White, J.-S.S. Generalized linear mixed models: A practical guide for ecology and evolution. Trends Ecol. Evol. 2009, 24, 127–135. [Google Scholar] [CrossRef]
  121. Barr, D.J.; Levy, R.; Scheepers, C.; Tily, H.J. Random effects structure for confirmatory hypothesis testing: Keep it maximal. J. Mem. Lang. 2013, 68, 255–278. [Google Scholar] [CrossRef] [Green Version]
  122. Baayen, R.H.; Davidson, D.J.; Bates, D.M. Mixed-effects modeling with crossed random effects for subjects and items. J. Mem. Lang. 2008, 59, 390–412. [Google Scholar] [CrossRef] [Green Version]
  123. Bates, D.; Kliegl, R.; Vasishth, S.; Baayen, H. Parsimonious mixed models. arXiv 2015, arXiv:1506.04967. Available online: https://arxiv.org/abs/1506.04967 (accessed on 1 January 2021).
124. Bolker, B.M.; Brooks, M.E.; Clark, C.J.; Geange, S.W.; Poulsen, J.R.; Stevens, M.H.H.; White, J.-S.S. GLMMs in Action: Gene-by-Environment Interaction in Total Fruit Production of Wild Populations of Arabidopsis thaliana, Revised Version, Part 1; Corrected Supplemental Material Originally Published with https://doi.org/10.1016/j.tree.2008.10.008. Available online: http://glmm.wdfiles.com/local--files/examples/Banta_2011_part1.pdf (accessed on 1 January 2018).
  125. R Core Team. R: A Language and Environment for Statistical Computing. Available online: https://www.R-project.org/ (accessed on 7 October 2019).
  126. Bates, D.; Mächler, M.; Bolker, B.; Walker, S. Fitting Linear Mixed-Effects Models Using lme4. J. Stat. Softw. 2015, 67, 1–48. [Google Scholar] [CrossRef]
127. Halekoh, U.; Højsgaard, S. A Kenward-Roger approximation and parametric bootstrap methods for tests in linear mixed models–the R package pbkrtest. J. Stat. Softw. 2014, 59, 1–30. [Google Scholar] [CrossRef] [Green Version]
  128. Chen, Y.; Liu, C.; Shi, B.E.; Liu, M. Robot navigation in crowds by graph convolutional networks with attention learned from human gaze. IEEE Robot. Autom. Lett. 2020, 5, 2754–2761. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The confidence model transformed indicators of operator attention to a confidence value for the robot. Elevated attention increased confidence according to the model parameters, while reduced attention resulted in decreased confidence over time as the value was decremented after each timestep. With prolonged inattention, confidence continued to drop until reaching a minimum value of 0. Robot behavior is adapted according to the confidence value and behavior rules.
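To make the per-timestep rule in the caption above concrete, the following is a minimal sketch, assuming each active attention indicator contributes a weighted increment and inattention applies a fixed decrement bounded below by zero. The class name, indicator weights, and parameter values are illustrative assumptions, not the parameters used in the study.

```python
from dataclasses import dataclass
from typing import Sequence


@dataclass
class RobotConfidence:
    """Per-robot confidence driven by operator-attention indicators.

    Illustrative parameters (not the study's values):
      weights -- gain applied to each attention indicator
      c_min   -- floor below which confidence cannot drop
      c_max   -- ceiling on the confidence value
      c_d     -- decrement applied on each timestep without attention
    """
    weights: Sequence[float] = (0.2, 0.1)
    c_min: float = 0.0
    c_max: float = 1.0
    c_d: float = 0.05
    value: float = 0.0

    def update(self, indicators: Sequence[float]) -> float:
        """Apply one timestep: raise confidence if any attention indicator is
        active, otherwise decrement it, then clamp to [c_min, c_max]."""
        if any(indicators):
            self.value += sum(w * x for w, x in zip(self.weights, indicators))
        else:
            self.value -= self.c_d
        self.value = min(self.c_max, max(self.c_min, self.value))
        return self.value
```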
Figure 2. Example of discrete confidence value changes due to a series of inputs (top), with each input having two indicators of attention, using a model specified by the shown parameters p, minimum value c_min, and decrement c_d.
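Reusing the RobotConfidence sketch above, a short loop reproduces the kind of discrete trace Figure 2 depicts: a series of inputs, each carrying two attention indicators, stepped through the model. The input sequence below is invented purely for illustration.

```python
# Hypothetical input sequence: each tuple holds two attention indicators for
# one timestep (e.g., gaze on the robot, command issued to the robot).
inputs = [(1, 0), (1, 1), (0, 0), (0, 0), (0, 1), (0, 0)]

robot = RobotConfidence(weights=(0.2, 0.1), c_min=0.0, c_d=0.05)
trace = [robot.update(step) for step in inputs]
print([round(c, 2) for c in trace])  # discrete confidence values over time
```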
Figure 3. The test platform used an eye tracker below the operator display to follow the operator’s gaze point on the screen and determine which objects the operator looked at. A forehead and chin rest assembly was used to stabilize the operator’s head to improve eye tracker calibration and the reliability of eye-tracking during study trials.
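Before gaze can serve as an attention indicator, the gaze point reported by the eye tracker has to be associated with an on-screen object. A minimal sketch of that association is shown below, assuming each robot occupies a known axis-aligned rectangle on the display; the regions, margin, and robot identifiers are hypothetical and not taken from the test platform.

```python
from typing import Dict, Optional, Tuple

Rect = Tuple[int, int, int, int]  # (x, y, width, height) in screen pixels


def robot_under_gaze(gaze: Tuple[int, int],
                     robot_regions: Dict[str, Rect],
                     margin: int = 20) -> Optional[str]:
    """Return the id of the robot whose screen region contains the gaze point
    (expanded by a pixel margin to tolerate eye-tracker noise), or None."""
    gx, gy = gaze
    for robot_id, (x, y, w, h) in robot_regions.items():
        if x - margin <= gx <= x + w + margin and y - margin <= gy <= y + h + margin:
            return robot_id
    return None


# Hypothetical example: two robots drawn at known screen locations.
regions = {"robot_1": (100, 200, 60, 60), "robot_2": (500, 350, 60, 60)}
print(robot_under_gaze((110, 215), regions))  # -> "robot_1"
```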
Figure 4. (a) Operator attention on each robot was estimated online using a multirobot test platform equipped with eye-gaze tracking; (b) Each robot’s confidence value (depicted by the green arc in the figure for illustrative purposes) increased in response to attention and decreased over time while the robot operated autonomously and unattended by the operator. The robot’s behavior is adapted according to its level of confidence; (c) Search task performance and efficiency were measured under various conditions, and relationships were observed between these measures and robot confidence.
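Figure 4b notes that each robot's behavior is adapted according to its confidence level, but the behavior rules themselves are not reproduced in the caption. Purely as an illustration of what such a rule set could look like, the sketch below maps a confidence value to a behavior mode using invented thresholds and mode names.

```python
def behavior_mode(confidence: float) -> str:
    """Map a confidence value in [0, 1] to a behavior mode (hypothetical rules:
    a confident robot proceeds autonomously, a moderately confident robot slows
    down, and a robot with depleted confidence stops and waits for attention)."""
    if confidence >= 0.7:
        return "follow_path"
    if confidence >= 0.3:
        return "follow_path_slow"
    return "stop_and_wait"
```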
Figure 5. The control interface software processed video from a camera mounted above the test environment (top-left), robot command input from the operator (top-right), and the screen coordinates of the operator’s gaze point from the eye tracker mounted below the operator display (bottom-right). The software projected graphics in relation to objects in the test environment, presented the resulting composite video via the operator display, computed confidence values for each robot based on operator attention, and issued commands to the robots according to motions requested by the operator, robot confidence, and behavior rules.
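The pipeline in Figure 5 can be summarized as a per-frame loop. The sketch below reuses the helpers defined above and stands in for the platform's camera, eye tracker, display, and robot command components with placeholder objects; none of these interfaces are taken from the study's software.

```python
def control_loop(robots, tracker, camera, display, commander):
    """Illustrative per-frame loop for the interface described in Figure 5.

    `robots` maps robot ids to RobotConfidence instances; `tracker`, `camera`,
    `display`, and `commander` are placeholder objects for the platform's I/O.
    """
    while True:
        frame = camera.read()            # overhead video of the test environment
        gaze = tracker.gaze_point()      # operator gaze (x, y) on the display
        attended = robot_under_gaze(gaze, commander.screen_regions(frame))
        for robot_id, confidence in robots.items():
            indicators = (float(robot_id == attended),
                          float(commander.recent_input(robot_id)))
            confidence.update(indicators)
            commander.send(robot_id,
                           mode=behavior_mode(confidence.value),
                           path=commander.requested_path(robot_id))
        display.show(commander.render_overlays(frame, robots))
```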
Figure 6. The control interface displayed video with superimposed graphics, including robot paths, obstacles, targets, and a countdown timer showing how many seconds remained during a user study trial.
Figure 7. Fitted lines grouped by robot behavior mode and target set from mixed-effects model fits for: (a) ratio of targets detected; (b) ratio of targets located; (c) search task efficiency.
Table 1. Performance and efficiency mixed-effects model comparisons.

Term          Targets Detected         Targets Located          Search Efficiency
behavior      0.25 (p = 0.89)          2.26 (p = 0.35)          2.83 (p = 0.32)
confidence    10.53 (p = 0.0033 **)    12.66 (p = 0.0014 **)    7.32 (p = 0.013 *)
target set    0.83 (p = 0.68)          3.63 (p = 0.18)          5.48 (p = 0.074)
time          2.34 (p = 0.18)          1.53 (p = 0.27)          0.060 (p = 0.83)

* p < 0.05; ** p < 0.01.
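Table 1 summarizes mixed-effects model comparisons; the reference list points to R with lme4 and pbkrtest for this kind of analysis [125,126,127]. Purely to illustrate the general shape of such a comparison (refitting the model without one fixed effect and testing the change in fit), the sketch below uses Python's statsmodels with a likelihood-ratio test. It is not the Kenward-Roger or parametric-bootstrap procedure used in the cited tooling, and the data-frame column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats


def lr_test_dropping(term: str, data: pd.DataFrame,
                     outcome: str = "targets_detected",
                     group_col: str = "participant"):
    """Likelihood-ratio test for removing one fixed effect from a linear mixed
    model with a random intercept per participant (illustrative only)."""
    fixed = ["behavior", "confidence", "target_set", "time"]
    full = smf.mixedlm(f"{outcome} ~ " + " + ".join(fixed),
                       data, groups=data[group_col]).fit(reml=False)
    reduced = smf.mixedlm(f"{outcome} ~ " + " + ".join(t for t in fixed if t != term),
                          data, groups=data[group_col]).fit(reml=False)
    chi2 = 2.0 * (full.llf - reduced.llf)
    df_diff = len(full.fe_params) - len(reduced.fe_params)
    return chi2, stats.chi2.sf(chi2, df=df_diff)


# Example (hypothetical data frame `trials`, one row per trial):
# chi2, p = lr_test_dropping("confidence", trials, outcome="targets_detected")
```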
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
