1. Introduction
Unmanned aerial vehicle (UAV) teleoperation technology enables UAVs to access hazardous or human-inaccessible environments, performing tasks such as search and rescue [1] and surveillance [2]. It not only ensures the safety of operators but also leverages the advantages of human judgment and skills. However, in the traditional direct teleoperation mode, operators are responsible for all control tasks of the UAV, including flight path planning and obstacle avoidance [3], while the UAV itself lacks any high-level decision-making autonomy. This mode not only requires operators to possess extensive professional knowledge and experience but also leads to cognitive fatigue due to frequent manual operations [4]. Therefore, how to reduce operational difficulty and alleviate the operator’s burden, especially when performing tasks in complex and hazardous environments, remains a key challenge in UAV teleoperation technology [3,5].
With the advancement of UAV autonomous technology, the assistive aerial teleoperation mode has been widely recognized as an effective solution to the aforementioned challenges, for example, by reducing high cognitive load and mitigating the need for extensive operator expertise [3,6]. In this mode, operators provide commands through remote control, while the UAV offers assistance by inferring the operators’ intent [5,6,7] and autonomously planning appropriate action strategies based on mission requirements. By combining human decision-making with machine-assisted execution, this collaborative approach can significantly reduce the operator’s workload while improving task efficiency and safety.
However, while assistive aerial teleoperation technology has made significant progress in reducing operators’ workload, it has also triggered the issue of trust imbalance in human–machine collaboration [8,9]. Due to the complex, high-dimensional, nonlinear dynamics of a UAV and the limited perceptual capabilities of operators through remote control terminals [10], it is difficult for humans to intuitively understand the UAV’s status and performance. Such information asymmetry [11] can lead to human trust imbalance, with serious consequences: over-trust may cause operators to fail to intervene in a timely manner in the UAV’s erroneous behaviors [12], leading to mission failure, while a lack of trust may cause operators to frequently conduct unnecessary interventions [13], increasing the operational burden and reducing task execution efficiency.
Addressing this critical issue of trust imbalance reveals a twofold research gap, spanning from theory to application, that has hindered progress in the field. First, in the area of trust modeling, research specific to assistive aerial teleoperation remains limited. Most existing trust models are developed for other scenarios, such as ground mobile robots [14,15] and autonomous vehicles [16,17]. However, as these models do not account for the higher environmental complexity and the operator’s limited perceptual capabilities inherent to UAV teleoperation, they cannot be directly applied. Second, there is a practical gap in trust application. Even with a suitable model, the question of how to operationally use this dynamic trust information to guide the UAV’s behavior remains largely unanswered. Most current assistive planners focus on aligning with operator intent [3,5,7], but they lack a mechanism to modulate their behavior based on the operator’s dynamic trust level. A planning framework that can integrate a real-time trust metric into its decision-making process is a crucial missing piece.
To address this twofold research gap, this paper introduces a novel framework for trust-aware trajectory planning in assistive aerial teleoperation. Our approach is founded upon the Machine-Performance-Dependent (MPD) human trust model framework, which provides a quantitative basis for linking objective system performance to an operator’s trust. We adapt and specialize this general framework for the UAV teleoperation scenario and, crucially, integrate it into a real-time trajectory planner. By tackling both the modeling and application gaps, the key contributions of this work are summarized as follows:
- We propose an MPD trust model tailored for assistive aerial teleoperation. The model quantifies the UAV’s performance capacity using safety and visibility metrics, which, in contrast to methods relying on abstract or binary scores, enables a more nuanced and dynamic representation of human trust that can inform trajectory planning.
- Building upon this model, we develop a trust-aware trajectory planner that, for the first time, integrates a dynamic trust level directly into the optimization loop. This enables the planner to continuously adapt its assistance strategy, moving beyond simple intent-matching to achieve true human–machine collaboration.
- We validate our approach through extensive simulations in challenging, randomly generated forest environments. The results confirm that our trust-aware method significantly reduces operator workload and enhances trajectory smoothness, achieving superior collaborative performance without compromising task efficiency compared to a trust-unaware baseline.
Our approach assumes a stable communication link between the operator and the UAV. The trust model parameters are calibrated using experimental data to estimate human factors. However, the model does not account for real-time adaptation to individual differences. The proposed trust model is validated in simulated forest environments, and its applicability in highly dynamic urban settings remains a subject for future work.
The remainder of this paper is structured as follows. Section 2 surveys the literature on assistive aerial teleoperation, along with related work on human-factor-aware trajectory planning and trust modeling. Section 3 outlines the problem formulation and introduces the foundational MPD trust model. Building on this, Section 4 details the proposed trust model tailored for UAVs, while Section 5 presents our core trust-aware trajectory planning framework. Section 6 describes the experiments and analyzes the results. Section 7 interprets the findings, illustrates the framework’s practical significance through a scenario-based evaluation, and outlines open issues for future work. Finally, Section 8 concludes this paper with a summary of our contributions.
4. MPD Trust Model for Assistive Aerial Teleoperation
This section specializes the general MPD trust model, presented in (5), for the context of assistive aerial teleoperation. The initial step is to determine the machine’s performance $P(k)$ and its objective capability $C(k)$. While $C(k)$ is often assumed to be a constant value for a given task, as expressed in (9), this assumption may not hold in practice. Especially for UAVs operating in unknown environments, the machine’s capability can vary due to dynamic environmental factors. Therefore, this work focuses on quantifying both $P(k)$ and the context-dependent $C(k)$ to formulate a specialized trust model for assistive aerial teleoperation.
4.1. Quantifying Machine Performance and Objective Machine Capability
In practice, the objective machine capability is an idealized benchmark derived from data on actual machine performance collected over an extended period. Consequently, both the machine performance and the objective machine capability originate from the same set of evaluation metrics. Their primary distinction lies in the temporal scope of the assessment: the machine performance $P(k)$ is evaluated over a discrete time interval $[t_{k-1}, t_k]$, whereas the capability $C(k)$ reflects the aggregated performance across a long-term horizon.
We first define the evaluation metrics for machine performance and objective machine capability in the context of assistive aerial teleoperation. We then present the specific algorithms used to compute $P(k)$ and $C(k)$.
4.1.1. UAV Trajectory Performance Metrics
To ensure compliance with both flight safety and operator visibility constraints, this work proposes a multi-dimensional metric that incorporates a safety factor and a visibility factor.
- (a) Safety
Safety is defined as a quantitative index of collision risk. This index reflects the principle that collision risk increases as the radial distance to obstacles decreases and the velocity component towards them increases. Specifically, suppose that at time k, the position vector from the UAV to the nearest obstacle is $\mathbf{p}_o(k)$. The corresponding minimum radial distance $d(k)$ is modeled as the Euclidean norm of this position vector, as given in (10):
$$d(k) = \|\mathbf{p}_o(k)\|.$$
Meanwhile, the flight velocity vector is $\mathbf{v}(k)$. The velocity component $v_o(k)$ in the obstacle direction is calculated by vector projection, as expressed in (11):
$$v_o(k) = \frac{\mathbf{v}(k) \cdot \mathbf{p}_o(k)}{\|\mathbf{p}_o(k)\|},$$
where $\mathbf{v}(k) \cdot \mathbf{p}_o(k)$ denotes the dot product of the velocity vector and the obstacle direction vector. To characterize the relative motion risk, we employ a kinematic risk indicator $a(k) = \frac{v_o(k)^2}{2\,d(k)}$, which represents the constant deceleration required to avoid a collision [14]. The safety factor $F_s(k)$ is then formulated as given in (12):
$$F_s(k) = \exp\big(-\lambda\, a(k)\big),$$
where $\lambda$ is a tunable coefficient that regulates risk sensitivity.
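To make the safety computation concrete, the following sketch evaluates the kinematic risk indicator (the constant deceleration needed to stop before the nearest obstacle) and maps it to a safety score. The exponential mapping, the coefficient value, and all names are our own illustrative assumptions, not the paper’s implementation:

```python
import numpy as np

def safety_factor(uav_pos, obstacle_pos, velocity, risk_coeff=0.5):
    """Safety score in (0, 1]: 1 means no collision risk, -> 0 as risk grows.

    Illustrative sketch: the kinematic risk indicator is the constant
    deceleration required to stop before the nearest obstacle; the
    exponential mapping and `risk_coeff` value are assumptions.
    """
    p = np.asarray(obstacle_pos, float) - np.asarray(uav_pos, float)
    d = np.linalg.norm(p)                          # radial distance to obstacle
    v_o = np.dot(np.asarray(velocity, float), p) / d  # signed approach speed
    v_o = max(v_o, 0.0)                            # moving away => no closing speed
    a_req = v_o**2 / (2.0 * d)                     # deceleration needed to stop in d
    return float(np.exp(-risk_coeff * a_req))      # higher required decel => lower safety
```

A stationary UAV (or one moving away from the obstacle) scores 1.0, and the score drops as the same approach speed occurs closer to the obstacle.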
- (b) Visibility
Visibility is a metric designed to quantify the level of occlusion within the UAV’s field of view (FOV) [25]. When the occluded area within the FOV exceeds a threshold, the environmental information available to the operator diminishes significantly, thereby affecting decision-making efficiency. As shown in Figure 1, the UAV trajectory point is $\mathbf{x}(k)$ and the target position is $\mathbf{g}$. It is assumed that the UAV always faces the target. The blue area in the figure is the UAV’s FOV, and the blue dashed area is defined as the confident FOV, i.e., the core observation area that should remain unoccluded in the ideal state. Considering that the computational complexity of the analytical solution for the occluded area of obstacles in the FOV is extremely high, we approximate the visibility by constructing a series of spherical regions $\{B_i\}_{i=1}^{N}$, where N is the total number of spherical regions. The center $\mathbf{c}_i$ and radius $r_i$ of the spherical regions are defined as given in (13):
$$\mathbf{c}_i = \mathbf{x}(k) + \frac{i}{N}\big(\mathbf{g} - \mathbf{x}(k)\big), \qquad r_i = \theta\,\|\mathbf{c}_i - \mathbf{x}(k)\|,$$
where $i = 1, \dots, N$, and $\theta$ is a parameter related to the size of the FOV. The visibility of the FOV is measured by comparing the minimum distance from the center point $\mathbf{c}_i$ to the nearest obstacle with the radius $r_i$ of the corresponding sphere. The visibility of each area $V_i$ is ensured by the following condition, as given in (14):
$$V_i = \begin{cases} 1, & d_o(\mathbf{c}_i) \geq r_i, \\ 0, & \text{otherwise}, \end{cases}$$
where $d_o(\mathbf{c}_i)$ denotes the minimum distance from $\mathbf{c}_i$ to the nearest obstacle. Accordingly, the visibility factor $F_v(k)$ at time k is defined as given in (15):
$$F_v(k) = \frac{1}{N} \sum_{i=1}^{N} V_i.$$
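The sphere-based visibility check can be sketched as follows. The placement of sphere centers along the line of sight toward the target, the linear radius growth, and the `obstacle_dist_fn` interface (e.g., a query into a Euclidean distance field) are illustrative assumptions consistent with the description above, not the authors’ exact construction:

```python
import numpy as np

def visibility_factor(traj_point, target, obstacle_dist_fn,
                      n_spheres=10, fov_param=0.3):
    """Fraction of FOV probe spheres that are unoccluded, in [0, 1].

    Spheres are spaced along the sight line from the trajectory point
    toward the target, with radii growing with range so that their union
    approximates the confident-FOV cone. `obstacle_dist_fn(c)` must
    return the distance from point c to the nearest obstacle.
    """
    x = np.asarray(traj_point, float)
    g = np.asarray(target, float)
    visible = 0
    for i in range(1, n_spheres + 1):
        c = x + (i / n_spheres) * (g - x)       # sphere center on the sight line
        r = fov_param * np.linalg.norm(c - x)   # radius grows with range
        if obstacle_dist_fn(c) >= r:            # sphere is free of obstacles
            visible += 1
    return visible / n_spheres
```

With an obstacle 1.5 m from every probe center and the settings above, only the near-field spheres (radius at most 1.5 m) count as visible, so the factor drops proportionally.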
4.1.2. Determining Machine Performance
The machine performance $P(k)$ is quantified by aggregating trajectory performance metrics within the time interval $[t_{k-1}, t_k]$. Research in human–machine interaction suggests that trust formation is subject to two key temporal effects. First, recent performance immediately influences trust, an effect known as immediacy. Second, the cumulative effect of historical performance shapes long-term trust assessment [26]. This dual influence necessitates a modeling approach for machine performance that balances short-term responsiveness with long-term stability.
Specifically, within the interval $[t_{k-1}, t_k]$, we define a sampling interval $\delta t$, where the interval length is $\Delta T = N\,\delta t$, comprising N equally spaced sampling points. To incorporate observation data dynamically, we employ a sliding window spanning the most recent N sampling points, covering a duration of $\Delta T$. Within this window, the safety indicator $\bar{F}_s(k)$ and visibility indicator $\bar{F}_v(k)$ of the UAV trajectory are computed using an Exponentially Weighted Moving Average (EWMA) to emphasize recent performance, as follows in (16):
$$\bar{F}_s(k) = \frac{\sum_{i=0}^{N-1} \gamma^{i} F_s(k-i)}{\sum_{i=0}^{N-1} \gamma^{i}}, \qquad \bar{F}_v(k) = \frac{\sum_{i=0}^{N-1} \gamma^{i} F_v(k-i)}{\sum_{i=0}^{N-1} \gamma^{i}},$$
where i represents the i-th sampling point backward from the current time, and $\gamma \in (0,1)$ is a decay factor that adjusts the degree of influence of historical data. A smaller value of $\gamma$ places greater emphasis on more recent data. The machine performance $P(k)$ is then defined as given in (17):
$$P(k) = w_s\,\bar{F}_s(k) + w_v\,\bar{F}_v(k),$$
where $w_s$ and $w_v$ are weighting coefficients for safety and visibility, respectively, satisfying $w_s + w_v = 1$. The value $P(k)$ quantifies the overall trajectory performance within $[t_{k-1}, t_k]$.
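The windowed aggregation can be sketched as below, where histories are ordered most-recent-first. The weight values and normalization are illustrative assumptions; the key property is that a smaller decay factor weights recent samples more heavily:

```python
import numpy as np

def machine_performance(safety_hist, visibility_hist, gamma=0.8, w_safety=0.6):
    """Aggregate per-sample safety/visibility into one performance score.

    Applies an exponentially weighted moving average over the sliding
    window (index 0 = current sample, larger index = older), then blends
    the two channels with weights summing to one. The weight values are
    illustrative assumptions.
    """
    def ewma(samples):
        samples = np.asarray(samples, float)        # samples[0] is most recent
        weights = gamma ** np.arange(len(samples))  # geometric decay with age
        return float(np.sum(weights * samples) / np.sum(weights))

    w_vis = 1.0 - w_safety
    return w_safety * ewma(safety_hist) + w_vis * ewma(visibility_hist)
```

For a window whose only good sample is the most recent one, shrinking `gamma` raises the score, reflecting the immediacy effect described above.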
4.1.3. Determining Objective Machine Capability
Given the dynamic and uncertain environments that UAVs encounter during missions, their objective machine capability $C(k)$ may vary due to environmental factors. To calculate $C(k)$, we integrate prior knowledge from the system’s predefined capability model with posterior knowledge derived from real-time environmental feedback. The machine capability at time step k is thus expressed as given in (18):
$$C(k) = (1 - \beta)\,C_0 + \beta\,P(k),$$
where $C_0$ is the predefined prior capability, $P(k)$ represents the actual machine performance over $[t_{k-1}, t_k]$, and $\beta$ is a dynamic factor reflecting the contribution of recent performance to the capability assessment.
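The prior–posterior integration can be written as a convex combination of the predefined capability and recent performance; the specific form and the default blend value below are illustrative assumptions consistent with the description above:

```python
def objective_capability(prior_capability, recent_performance, blend=0.3):
    """Blend the predefined (prior) capability with recent performance.

    `blend` is the dynamic factor weighting real-time feedback against
    the prior capability model; the convex-combination form and the
    default value 0.3 are illustrative assumptions.
    """
    return (1.0 - blend) * prior_capability + blend * recent_performance
```

Setting `blend` to 0 ignores feedback entirely (the constant-capability assumption of (9)), while 1 trusts only the latest performance window.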
4.2. The MPD Human Trust Model for Assistive Aerial Teleoperation
The MPD human trust model for assistive aerial teleoperation can now be formulated as in (19), whose terms can be defined similarly to those in (6), (7), and (8), respectively. In particular, the machine performance $P(k)$ and the objective machine capability $C(k)$ can be calculated as in (17) and (18), respectively.
Regarding the human factors from (5), the initial trust value, $T(0)$, must be determined before the system operates. In this work, we initialize this value using a pre-task trust scale. The three human-factor parameters must be estimated from runtime data; various methods are suitable for this purpose, such as the Least Squares Estimation approach.
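To illustrate how least-squares estimation of the human-factor parameters might proceed, the sketch below assumes, purely for illustration, a linear first-order trust update T[k+1] = a·T[k] + b·P[k] + c·C[k] standing in for the MPD update in (19); this structure and the function names are our assumptions, not the paper’s model:

```python
import numpy as np

def fit_trust_parameters(trust_obs, performance, capability):
    """Batch least-squares fit of human-factor parameters (a, b, c).

    Illustrative assumption: trust follows the linear first-order update
    T[k+1] = a*T[k] + b*P[k] + c*C[k]. Each consecutive pair of trust
    observations then yields one linear equation in (a, b, c).
    """
    T = np.asarray(trust_obs, dtype=float)
    P = np.asarray(performance, dtype=float)
    C = np.asarray(capability, dtype=float)
    A = np.column_stack([T[:-1], P[:-1], C[:-1]])  # regressors at step k
    y = T[1:]                                      # target: trust at step k+1
    params, *_ = np.linalg.lstsq(A, y, rcond=None)
    return params                                  # estimated (a, b, c)
```

With noiseless synthetic data generated from known parameters, the fit recovers them exactly; in practice, the trust observations would come from periodic self-report probes or a behavioral proxy.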
6. Experiments and Results
In this section, we design and conduct an experiment on human–UAV collaboration to compare the effectiveness of our proposed method against a baseline approach that does not account for human trust. The experiments are performed on a high-performance desktop equipped with an Intel Core i9-14900KF CPU (32 cores, base frequency 3.2 GHz, maximum turbo frequency 6.0 GHz) and an NVIDIA GeForce RTX 4090 GPU (24 GB VRAM).
6.1. Implementation Details
To evaluate the proposed method, we conducted simulation experiments in three simulated random forest environments with varying tree densities. The simulation area was a rectangular volume measuring 70 m in length, 20 m in width, and 3 m in height, containing 50, 100, or 200 trees to represent sparse, medium, and dense scenarios, respectively. For the experiments, 15 students (ages 22–28) with no prior drone piloting experience were recruited. After a 5 min familiarization session, each operator was tasked with navigating a simulated quadrotor vehicle from one end of the forest to the other end. The experiment followed a within-subjects design, where each participant tested both our proposed method and a baseline approach. The order of the two methods was counterbalanced across participants. For each method, the operator performed the task once in each of the three random forest environments. The task was deemed complete upon reaching the opposite side. Operators were provided with a first-person view of the vehicle and sent commands via a joystick to guide its flight. They were permitted to intervene in the vehicle’s flight at any time.
We evaluate two aspects of the performance of our assistive aerial teleoperation system: (1) navigation efficiency, which mainly includes the distance, duration, and average velocity during the mission; and (2) the cost of human–UAV interaction, for which the core evaluation metrics are the number of operator inputs, jerk integral, and average trust. Specifically, the number of operator inputs indicates the intervention frequency of the assistive system, while the jerk integral reflects the piloting burden and trajectory smoothness. If the former two interaction indicators (operator inputs and jerk integral) are high, it indicates that the path searched by the assistive system does not match the human’s true intention, leading to poor flight quality and requiring more operator interaction to correct, which is often accompanied by lower trust. In contrast, if these indicators are low, it indicates that the operator agrees with the system’s path planning, which implies fluid human–machine collaboration and a smooth flight trajectory, ultimately building and maintaining a higher level of trust.
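As a concrete reading of the smoothness metric, the sketch below approximates a jerk integral from sampled trajectory positions. The finite-difference scheme and the squared-norm integrand are our assumptions; the paper’s exact discretization may differ:

```python
import numpy as np

def jerk_integral(positions, dt):
    """Approximate integral of squared jerk along a sampled 3D trajectory.

    Jerk (third time derivative of position) is estimated by third-order
    finite differences over uniformly sampled positions (at least four
    samples); lower values indicate a smoother flight. The squared-norm
    integrand is an illustrative assumption.
    """
    p = np.asarray(positions, dtype=float)   # shape (num_samples, 3)
    jerk = np.diff(p, n=3, axis=0) / dt**3   # finite-difference jerk estimate
    return float(np.sum(np.linalg.norm(jerk, axis=1) ** 2) * dt)
```

A constant-velocity (or constant-acceleration) trajectory scores zero, while any higher-order motion contributes positively.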
The motion primitive parameters used in the experiment are shown in Table 1. The maximum linear velocity input by the operator is set to 2 m/s, and the parameters of the visibility and safety metrics are held fixed across all trials. The specific parameters for the MPD trust model are detailed in Table 2. In our implementation, these parameters are empirically determined by referencing previous work and our own experiments; their systematic optimization is beyond the scope of this paper.
6.2. Results Analysis
The quantitative results of our experiments are summarized in Table 3, which compares the performance of our proposed trust-aware approach against the trust-unaware baseline across four key dimensions: task completion efficiency, operator workload, trajectory quality, and human–machine trust. These experiments were conducted in three distinct environments to assess the robustness of both methods under varying levels of complexity. Trajectory comparisons are visually illustrated in Figure 5 (2D) and Figure 6 (3D), offering additional insights into the behavioral differences between the two approaches.
In terms of fundamental task completion metrics, both methods exhibit comparable performance across all environments. Specifically, flight distance and duration show no significant differences between the trust-aware and trust-unaware approaches. For instance, in the dense environment, the trust-aware method recorded a flight distance of 71.38 m and a duration of 49.86 s, compared to 71.73 m and 54.89 s for the trust-unaware method. Similar trends are observed in the sparse and medium environments, indicating that incorporating the trust model does not compromise basic task execution efficiency. Both methods guide the UAV to plan effective paths of similar lengths, where the paths for both methods closely align in terms of overall distance across sparse, medium, and dense settings. However, the trust-aware method demonstrates a slight advantage in average velocity, particularly in the dense environment. This improvement is primarily attributed to smoother flight trajectories and fewer unnecessary pauses. The trust-aware method produces paths with fewer sharp turns and abrupt changes compared to the trust-unaware baseline, especially in the dense environment, enhancing the overall efficiency of motion.
The number of operator inputs serves as a core metric for evaluating operator intervention frequency and effort, directly reflecting the workload imposed on the operator. As shown in Table 3, the trust-aware method significantly reduces the number of operator inputs compared to the trust-unaware method across all scenarios. In the sparse environment, inputs decreased from 39 to 34 (a reduction of approximately 12.8%), and this gap widened in the dense environment, where inputs dropped from 56 to 43 (a reduction of approximately 23.2%). These results demonstrate that our trust-aware method effectively alleviates the operator’s workload. The trust-aware system’s ability to dynamically adjust its assistance strategy based on real-time trust assessments underlies this improvement. When trust is high, the system operates more autonomously, reducing the need for operator fine-tuning. Conversely, when trust declines, the system adopts conservative behaviors that align better with the operator’s expectations, minimizing interventions caused by mismatches between system decisions and human intent.
The jerk integral metric reflects the smoothness of the UAV’s flight, with lower values indicating smoother trajectories with fewer abrupt changes. The experimental results in Table 3 reveal that the trajectory quality of the trust-aware method consistently surpasses that of the trust-unaware baseline. In the sparse environment, the jerk integral was reduced by approximately 22.5%, and this improvement was even more pronounced in the dense environment, with a reduction of approximately 43.2%. These smoother trajectories imply safer, more energy-efficient flights and provide a more comfortable first-person view experience for the operator. The reduction in the jerk integral is closely tied to the decrease in operator inputs; fewer external interventions prevent frequent, abrupt changes in flight state. Furthermore, the trust-aware model’s ability to predict and align with the operator’s long-term intent results in more proactive and coherent paths, contrasting with the passive, jerky reactions of the trust-unaware baseline, particularly evident in the dense environment.
Finally, the average trust metric directly validates the effectiveness of our trust-aware approach. In all environments, the average trust level of operators was significantly higher with the trust-aware method compared to the trust-unaware baseline. For example, in the medium-density environment, the trust-aware method achieved an average trust of 0.750 compared to 0.657 for the trust-unaware method. Trust levels for both methods decreased as environmental complexity increased, which is expected due to the greater challenges in denser settings. However, the decline was gentler for the trust-aware method (e.g., from 0.813 in the sparse environment to 0.693 in the dense environment compared to a steeper drop from 0.744 to 0.611 for the trust-unaware method). This suggests that by actively modeling and adapting to the operator’s trust state, our system establishes and maintains a more robust and resilient human–machine trust relationship. The higher trust levels both result from and contribute to the benefits observed (fewer inputs, smoother trajectories). As the system’s behavior becomes more reliable and predictable, operators are more willing to reduce interventions, creating a virtuous cycle.
7. Discussion
7.1. Discussion of Findings
The empirical findings provide compelling evidence that our trust-aware framework substantially elevates the quality of human–UAV interactions, as evidenced by reductions in operator workload, smoother trajectories, and more resilient trust dynamics—all without compromising core task metrics such as flight efficiency. This aligns with the framework’s core innovation: integrating an MPD trust model into the real-time planner, which dynamically predicts and adapts to operator intent and trust states. Unlike traditional trust-unaware baselines that react passively to inputs, our approach proactively aligns system behavior with human expectations and fosters more intuitive collaboration. These improvements extend beyond mere performance gains, underscoring the framework’s value in making human–machine systems safer and more user-centric, particularly in variable environments where baseline methods falter.
The core of this success lies in the formation of a virtuous cycle in operator–UAV interaction, a phenomenon observed across different environments. Specifically, this cycle occurs when the system’s trust-aware behavior aligns with operator expectations, which reduces unnecessary interventions and in turn reinforces the operator’s trust, fostering smoother and more effective collaboration. This cycle not only validates the important role of trust in human–machine interaction (e.g., [13]) but also provides a crucial extension to previous studies focused on intent-aligned trajectory generation [3,5]. While these prior works demonstrated that aligning with operator intent improves collaboration, our findings reveal that intent-alignment alone is insufficient under trust-eroding conditions. Specifically, our results in dense environments show that without a trust model, operator interventions remained high despite intent alignment, whereas our trust-aware approach successfully mitigated conflicts by dynamically increasing system caution. Our work thus extends these intent-based models by incorporating a vital second feedback loop, the operator’s dynamic trust state, which proves essential for robust performance in complex scenarios.
Furthermore, our framework makes a tangible contribution to the formal practice of risk analysis in human–UAV interaction. As conceptualized in frameworks like that of Sanz et al. [34], effective risk management for UAVs involves a cycle of hazard identification, assessment, and reduction. Our work provides a concrete, real-time implementation of this cycle. Specifically, the MPD trust model functions as a dynamic risk assessment tool, quantifying the risk of human–machine goal conflicts based on system performance and the operator’s trust level. The adaptive trajectory planner then serves as the risk reduction mechanism, translating this assessed risk into tangible actions by modulating the safety and efficiency weights. By formalizing the link between trust dynamics and system safety behaviors, our approach offers a more structured method for managing risk in collaborative aerial tasks, moving beyond the static safety rules common in prior systems.
However, while these findings affirm the framework’s efficacy in simulated scenarios, they also reveal critical limitations that warrant careful consideration, primarily stemming from the inherent sim-to-real gap. It must be acknowledged that our controlled simulation cannot fully replicate the complex uncertainties of the physical world. These include unmodeled aerodynamics, complex sensor noise profiles, and unpredictable communication latencies. Crucially, such factors could directly corrupt the real-time safety and visibility metrics that are fundamental inputs to our MPD trust model. Furthermore, beyond these physical uncertainties, our study utilized a limited operator cohort, and the framework’s generalizability across operators with varying expertise levels and risk tolerances remains an underexplored but vital area. These limitations, therefore, highlight critical open research issues regarding the framework’s real-world robustness and its capacity for personalization, paving the way for future enhancements.
7.2. Practical Significance and Scenario-Based Illustration
To illuminate the practical significance of our trust-aware framework, we present an illustrative scenario in a high-stakes application: post-disaster search and rescue.
Imagine a rescue mission where an operator must remotely pilot a UAV through a collapsed building to locate survivors. When using a UAV equipped only with basic obstacle avoidance, the operator faces immense cognitive load, simultaneously interpreting video feeds, navigating complex 3D spaces, and fighting against a system that may refuse to approach cluttered areas where a survivor might be. This often leads to human–machine conflict; to complete the mission, the operator might disable the assistance system, risking a catastrophic crash due to a moment of inattention. The loss of an expensive UAV not only incurs a financial cost but, more critically, could prematurely end a life-saving search.
Now, let us envision the same mission executed with our trust-aware framework. Its practical value unfolds in several stages:
Initial trust calibration: As the UAV enters the building, it first navigates through less cluttered corridors. Its smooth, predictable trajectories, reflecting high safety and visibility, rapidly build the operator’s trust in the system’s competence.
Intent-driven investigation: The operator spots a potential sign of life and issues a command to move closer for a better view. Because the operator’s trust is high, our framework accurately infers a strong, deliberate intent. It then generates a precise trajectory that navigates assertively yet safely around debris, fulfilling the operator’s goal without resistance. This is a stark contrast to a trust-unaware system that might have simply halted.
Adaptive safety preservation: As the UAV ventures deeper, it enters an area with poor visibility, causing its onboard sensors to become less reliable. Our framework’s machine performance metrics objectively detect this degradation. Consequently, the trust model dynamically lowers its trust value, shifting the system into a more cautious state. If the anxious operator now commands a rapid forward movement, the system will provide stronger assistance, moderating the speed and trajectory to prioritize the UAV’s survival. It intelligently prevents a trust-induced error, safeguarding the mission’s most critical asset.
In summary, this scenario demonstrates clear practical significance. By fostering a more intuitive human–machine collaboration, our framework simultaneously enhances mission effectiveness and reduces operator cognitive load. Crucially, its ability to dynamically adapt to risk ensures the safety of critical assets, ultimately empowering human–machine teams to confidently undertake complex, high-stakes operations that were previously too hazardous or inefficient, thereby significantly expanding their operational boundaries.
7.3. Open Issues and Future Work
Future work will address several open issues by focusing on three core directions to further enhance the trust-aware framework’s generalization, accuracy, and application scope.
First, in human–machine interaction, a central open issue is how to bridge the sim-to-real gap while enabling genuine user adaptivity. Existing studies have primarily validated trust-aware frameworks in simulation [17,35], and their robustness against real-world uncertainties, such as sensor noise and communication latency, remains insufficiently tested. At the same time, current generic trust models often ignore differences in operator expertise, limiting their ability to meet personalized needs. To address this, our future research will encompass the following: (i) transition the framework from simulated to physical environments to verify robustness in practice; and (ii) develop personalized trust models that identify and adapt to the trust characteristics of operators with varying experience levels, thereby creating a system that is both robust and user-adaptive.
Second, in trust modeling, another open issue is how to reduce the over-reliance on empirical, manually tuned parameters. Existing approaches largely depend on human expertise [14,17] for parameter settings, which restricts scalability and adaptability. To overcome this, we will explore data-driven paradigms such as Inverse Reinforcement Learning (IRL) [13], enabling the model to autonomously learn and optimize intrinsic parameters from interaction data, replacing manual tuning with a more robust self-learning mechanism.
Finally, at the application level, an important open issue is how to scale trust-aware frameworks beyond constrained laboratory tasks into complex, high-stakes real-world scenarios. Prior work has mostly focused on basic navigation [3,5,7] in controlled settings, whereas missions such as large-scale inspection and search and rescue require balancing multiple performance metrics—including speed, safety, and coverage—while maintaining stability in highly dynamic environments that include challenges such as dynamic obstacles. To meet this challenge, we will extend the framework toward these high-stakes missions, using them as testbeds to rigorously evaluate and strengthen the adaptive capabilities of the trust-aware planner under realistic conditions.
8. Conclusions
This work proposes and validates a trust-aware trajectory planning framework to address the critical issue of human–machine trust imbalance in assistive aerial teleoperation. By embedding an MPD trust model into a real-time planner, the approach effectively reduces operator workload, improves flight safety, and establishes a more stable human–machine trust relationship, all while maintaining task efficiency. The results demonstrate that introducing dynamic trust quantification into the control loop of autonomous systems holds significant potential, enabling the creation of a virtuous cycle of reduced human intervention, greater system predictability, and enhanced collaborative performance. Taken together, this work establishes a foundation for developing more intuitive and efficient human–machine collaboration models, making an important contribution to the advancement of intelligent aerial systems. In the future, by addressing the open issues outlined in the Discussion section, this framework holds promise for extension to a broader range of autonomous system applications.