Article

Unified Monitor and Controller Synthesis for Securing Complex Unmanned Aircraft Systems

by Dong Yang, Wei Dong *, Wei Lu, Sirui Liu and Yanqi Dong
College of Computer Science and Technology, National University of Defense Technology, Changsha 410074, China
*
Author to whom correspondence should be addressed.
Drones 2025, 9(5), 353; https://doi.org/10.3390/drones9050353
Submission received: 3 April 2025 / Revised: 28 April 2025 / Accepted: 2 May 2025 / Published: 5 May 2025

Abstract

Unmanned Aircraft Systems (UASs) have undergone rapid development over recent years, but have also become vulnerable to security attacks and the volatile external environment. Ensuring that a UAS performs safely and securely no matter how the environment changes is challenging. Runtime Verification (RV) is a lightweight formal verification technique that can be used to monitor UAS performance to guarantee safety and security, while reactive synthesis is a methodology for automatically synthesizing a correct-by-construction controller. This paper addresses the problem of generating and designing a secure UAS controller by introducing a unified monitor and controller synthesis method based on RV and reactive synthesis. First, we introduce a novel methodological framework in which RV monitors are applied to guarantee various UAS properties, with the reactive controller mainly focusing on the handling of tasks. Then, we propose a specification pattern to represent different UAS properties and generate RV monitors. In addition, a detailed priority-based scheduling method for monitor and controller events is proposed. Furthermore, we design two methods, based on specification generation and re-synthesis, to solve the problem of task generation with metrics for reactive synthesis. Then, to help users design secure UAS controllers more efficiently with our method, we develop a domain-specific language (UAS-DL) for modeling UASs. Finally, we use F Prime to implement our method and conduct experiments on a joint simulation platform. The experimental results show that our method can generate secure UAS controllers, guarantee greater UAS safety and security, and require less synthesis time.

1. Introduction

Unmanned Aerial Vehicles (UAVs), commonly referred to as drones, serve as the operational core of Unmanned Aircraft Systems (UASs). These autonomous or remotely piloted platforms integrate advanced sensors, Global Positioning Systems (GPSs), and lightweight composite materials to enable cross-domain applications in agriculture, cinematography, infrastructure diagnostics, environmental surveillance, and military operations. Their functional superiority stems from critical attributes, including real-time data transmission capabilities, cost-efficiency compared to manned alternatives, operational autonomy, and exceptional adaptability to complex environments.
With the rapid development of UAS technology, safety and security risks have also escalated. Despite integrating advanced technologies, UASs remain vulnerable to faults caused by unpredictable state transitions and external interference. Attackers exploit system hardware, software, and networks to compromise confidentiality, integrity, and availability through techniques including malicious code injection, authentication bypass, GPS spoofing, and DDoS attacks [1]. For instance, Iranian forces successfully captured a U.S. UAV in 2011 by executing GPS spoofing on the RQ-170 [2]. He et al. [3] describe a physical attack on the Inertial Measurement Unit (IMU) of a UAS by ultrasonic waves, which is a typical side-channel attack. Michael Hooper et al. [4] implemented a buffer overflow attack on the Parrot Bebop 2. Moreover, complex network architectures, flexible physical environments, and excessive access interfaces may lead to more vulnerabilities and attack vectors when manufacturers attempt to improve the quality of drones. Attackers can exploit these vulnerabilities to cause sensitive data leakage, system hijacking, and even instant drone crashes [5].
Runtime Verification (RV) [6] is a lightweight formal verification technique that attempts to determine whether a target system’s runtime behavior satisfies or violates specified properties based on the current execution path. Unlike model checking, RV technology relies solely on the observed execution path, thereby avoiding the state space explosion inherent in verifying complex systems through model checking. These characteristics make RV particularly suitable for security protection in resource-constrained UASs [7]. Runtime Enforcement (RE) is a technique to ensure that a program will respect a given set of properties by adopting enforcement operations. When abnormal behavior occurs, RE dynamically intervenes to maintain critical security properties [8].
Reactive synthesis [9] is a formal methodology for automatically generating correct-by-construction reactive systems from temporal logic specifications. The synthesized system is typically formalized as an automaton that strictly satisfies the given specifications, where the input of the automaton represents the environmental variables acquired through UAV sensors (e.g., position, obstacle detection) and the output of the automaton dictates actuator-driven actions (e.g., trajectory adjustments, taking a photograph). Our synthesis algorithm builds upon the Generalized Reactivity (1) (GR(1)) game [10,11], which can be used in multi-robot systems [12,13].
Through reactive synthesis, reactive controllers can be constructed for a UAS. By scheduling these controller events, a UAV can accomplish certain tasks. At the same time, RV monitors can also be deployed on a UAV to detect threats. However, when a monitor detects a threat and takes corresponding countermeasures (i.e., enforcement operations) to defend against it, these countermeasures may affect the scheduling of controller events and even cause UAV controller scheduling to fail. In addition, task design based on GR(1) specifications has difficulty describing quantitative tasks, which limits its areas of application. Furthermore, due to the lack of a domain-specific language for UASs based on RV and reactive synthesis, traditional languages have proved inefficient for standardizing the design of the UAS atomic propositions used in specifications, and the resulting monitor and controller specifications lack portability and reusability.
To address the problems above, we expanded the work in [5] and designed the Unified Monitor and Controller Synthesis (UMCS) method for securing complex UASs. The main contributions can be briefly summarized as follows:
  • We designed a UMCS framework based on RV and reactive synthesis. Within this framework, monitor and controller events are classified, prioritized, and scheduled based on their priorities to secure the control of the UAV.
  • We designed a regular expression to express task specifications with metrics, and proposed methods based on specification generation and re-synthesis to achieve task generation with metrics in controller synthesis.
  • Based on our UMCS framework, we designed the Unmanned Aircraft System Description Language (UAS-DL). This language provides a standardized, easy-to-use programming interface to facilitate users in configuring UAS resources and designing UAV tasks and monitors.

2. Related Work

Our work integrates methodologies from UAV control, UAV security, and domain-specific language design. This research builds upon established frameworks in RV and reactive synthesis. In this section, we analyze these foundational influences.

2.1. On UAV Control

Current UASs exhibit limited autonomy, relying heavily on human–machine interaction. Since the middle of 2009, the US Air Force has designated in-service UAVs as “Remotely Piloted Aircraft” to differentiate them from truly autonomous UASs [14]. Researchers consequently pursue technologies that enhance UAV autonomy, aiming to reduce manual control burdens, dependency on robust communication links, and mission decision latency. Pachter et al. [15] conceptualized autonomous control as “highly” automated operation in unstructured environments. Boskovic et al. [16] expanded this to include real-time perception, information processing, and control adaptation. Distinct from traditional automatic control systems with predefined responses, autonomous control emphasizes self-determination [17], representing an evolutionary advancement in control theory.
The multi-UAV collaborative task allocation problem is a complex combinatorial optimization problem within task assignment and resource allocation. This problem focuses on optimal task-to-UAV assignments under resource, platform, and temporal constraints to maximize system-wide efficiency. Vincent et al. analyzed cooperative search strategies for UAVs operating in hazardous environments [18]. Current solutions for the multi-task allocation problem include network flow optimization [19,20] and mixed-integer linear programming [21]. Recent advances in distributed UAV control have spurred interest in decentralized approaches; for example, Akselrod et al. applied hierarchical Markov decision processes for distributed sensor management [22]. Game theory and deep reinforcement learning also emerge as prominent methodologies [12,13,23,24,25,26].
However, existing works prioritize task execution over security considerations like communication jamming or sensor spoofing. Our methodology generates a correct-by-construction controller while maintaining UAV safety/security properties.

2.2. On UAV Security

In recent years, commercial drones and smart cars have suffered many malicious attacks, resulting in several traffic accidents. Since the core issue in this paper is the automatic generation of UAS security behaviors, studies on UAS vulnerabilities and the corresponding attack chains are essential and foundational. Most UASs consist of several components: the control unit, the dynamics module, the navigation module, the sensor module, the communication module, etc. The authors of [1,27] introduced confidentiality, integrity, and availability from the perspective of information security: confidentiality means that the system forbids unauthorized access to or interception of data; integrity is the property that protects the system from jamming by malicious data; availability refers to a timely response to legitimate requests. Vrizlynn [28] and Leela [29] divided attack vectors into physical attacks and remote attacks.
RV is a technology that checks if the system’s execution satisfies the given properties [6]. In [7,30], the authors implemented an RV monitor on a UAS using LTL formulas to describe the security requirements, using Bayesian Networks and the FPGA hardware platform. Schneider et al. proposed a method that combines traditional RV, enforcement, and control prediction to automatically generate security enforcement code, in order to compensate for the lack of protection capabilities of RV theory against dynamic environmental uncertainties [31].
However, using RV to describe and detect the safety and security threats of UAVs, and taking countermeasures against those threats, may lead to task failure. In contrast, our method schedules monitor and controller events based on their priorities and parallelizability, and guarantees more UAV safety and security properties.

2.3. On GR(1)-Based Domain-Specific Languages

LTLMop [32] is software for specifying and synthesizing robot missions on 2D maps. The tool comes with a graphical map editor and allows specifications to be expressed in the GR(1) subset of LTL or structured English [33]. Spectra is a new specification language for reactive systems, specifically tailored for the context of reactive synthesis. Spectra comes with designated Spectra Tools, a set of analyses, including a synthesizer to obtain correct-by-construction implementations, several means for executing the resulting controller, and additional analyses aimed at helping engineers write higher-quality specifications [34].
The structured English of LTLMop and Spectra makes it convenient to design GR(1) specifications. However, the GR(1) fragment restricts the expressivity of these languages for describing safety and security properties, and neither language supports constructing propositions from low-level resources. In contrast, our UAS-DL supports UAV properties written in LTL/MTL as well as the construction of atomic propositions from the software and hardware resources of UAVs.

3. Preliminaries

3.1. Temporal Logic

Linear Temporal Logic (LTL), proposed by Pnueli [35], can be used to describe specifications for UAVs. Compared to alternative temporal logics, such as CTL and CTL*, LTL is not only sufficient to express complex UAV specifications, but also possesses an efficient synthesis algorithm [36]. Extensions of LTL, such as LTL3, Signal Temporal Logic (STL) [37], Metric Temporal Logic (MTL) [38], and Discrete-Time MTL (DT-MTL) [39], can also be used for RV [6] of real-time control systems, describing more complex properties. Before introducing RV and reactive synthesis, we first present the syntax and semantics of LTL.

3.1.1. LTL Syntax

Given a countable set of Boolean variables (propositions) $AP$, we assume without loss of generality that all variables are Boolean; the general case, in which a variable ranges over an arbitrary finite domain, can be reduced to the Boolean case. LTL formulas are constructed as follows:
$$\varphi ::= p \in AP \mid \lnot\varphi \mid \varphi \lor \varphi \mid \bigcirc\varphi \mid \varphi\,\mathcal{U}\,\varphi \mid \ominus\varphi \mid \varphi\,\mathcal{R}\,\varphi$$
Based on the fundamental syntax, some frequently used Boolean constants and derived operators can be defined. First, the constants:
$$True \equiv p \lor \lnot p \qquad False \equiv \lnot True$$
the propositional logic operators:
$$\varphi \land \psi \equiv \lnot(\lnot\varphi \lor \lnot\psi) \qquad \varphi \rightarrow \psi \equiv \lnot\varphi \lor \psi$$
$$\varphi \leftrightarrow \psi \equiv (\varphi \rightarrow \psi) \land (\psi \rightarrow \varphi)$$
and the temporal operators:
$$\Diamond\varphi \equiv True\,\mathcal{U}\,\varphi \qquad \Box\varphi \equiv \lnot\Diamond\lnot\varphi$$

3.1.2. LTL Semantics

Given an infinite sequence $\sigma$ composed of subsets of $AP$ and a position $i \in \mathbb{N}$, where $\sigma(i)$ denotes the $i$-th element of $\sigma$ and $\sigma(i) \in 2^{AP}$, the satisfaction relation $\models$ between $(\sigma, i)$ and an LTL formula $\varphi$ is defined as follows:
$$\begin{array}{lcl}
\sigma,i \models p & \text{iff} & p \in \sigma(i)\\
\sigma,i \models \lnot\varphi & \text{iff} & \sigma,i \not\models \varphi\\
\sigma,i \models \varphi \lor \psi & \text{iff} & \sigma,i \models \varphi \ \text{or}\ \sigma,i \models \psi\\
\sigma,i \models \bigcirc\varphi & \text{iff} & \sigma,i+1 \models \varphi\\
\sigma,i \models \varphi\,\mathcal{U}\,\psi & \text{iff} & \text{there exists } k \ge i \text{ such that } \sigma,k \models \psi \text{ and } \sigma,j \models \varphi \text{ for all } j,\ i \le j < k\\
\sigma,i \models \ominus\varphi & \text{iff} & i > 0 \text{ and } \sigma,i-1 \models \varphi\\
\sigma,i \models \varphi\,\mathcal{R}\,\psi & \text{iff} & \text{there exists } k,\ 0 \le k \le i \text{ such that } \sigma,k \models \psi \text{ and } \sigma,j \models \varphi \text{ for all } j,\ k < j \le i
\end{array}$$
Intuitively, the formula $\bigcirc\varphi$ ($X\varphi$) with the next operator means that $\varphi$ is True in the next position of the sequence. $\varphi\,\mathcal{U}\,\psi$ with the until operator indicates that $\psi$ will be True somewhere in the future, and that $\varphi$ must remain True until $\psi$ is True. $\Box\varphi$ ($[]\varphi$) with always and $\Diamond\varphi$ ($\langle\rangle\varphi$) with eventually express, respectively, that $\varphi$ holds at every position of the sequence and that $\varphi$ will be True at least once in the future. Another kind of LTL formula, of the form $\Box\Diamond\varphi$ ($[]\langle\rangle\varphi$), is also quite common in specifications for UAVs: it means that $\varphi$ is True infinitely often, and is usually used to express system goals that need to be achieved repeatedly. A pictorial representation of commonly used LTL temporal operators is shown in Figure 1, where p and q are atomic propositions and green/red circular nodes represent the occurrence of propositions.
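To make these semantics concrete, the following sketch evaluates LTL-style operators over a finite trace prefix (our own illustration, not the paper's tooling; the tuple encoding of formulas and the finite-trace reading of always/eventually are assumptions, motivated by the fact that RV monitors only ever observe finite prefixes):

```python
# Minimal LTL-on-finite-trace evaluator (illustrative sketch).
# A trace is a list of sets; each set holds the atomic propositions
# that are True at that position.

def holds(formula, trace, i=0):
    """Check `formula` at position i of `trace`.

    Formulas are nested tuples, e.g. ('until', ('ap', 'p'), ('ap', 'q')).
    Finite-trace semantics: 'always' must hold up to the end of the
    trace, and 'next' fails at the last position.
    """
    op = formula[0]
    if op == 'ap':                       # atomic proposition
        return formula[1] in trace[i]
    if op == 'not':
        return not holds(formula[1], trace, i)
    if op == 'or':
        return holds(formula[1], trace, i) or holds(formula[2], trace, i)
    if op == 'next':                     # X φ
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == 'until':                    # φ U ψ
        return any(holds(formula[2], trace, k) and
                   all(holds(formula[1], trace, j) for j in range(i, k))
                   for k in range(i, len(trace)))
    if op == 'always':                   # [] φ (finite-trace reading)
        return all(holds(formula[1], trace, k) for k in range(i, len(trace)))
    if op == 'eventually':               # <> φ
        return any(holds(formula[1], trace, k) for k in range(i, len(trace)))
    raise ValueError(f'unknown operator: {op}')

trace = [{'p'}, {'p'}, {'p', 'q'}, set()]
print(holds(('until', ('ap', 'p'), ('ap', 'q')), trace))  # True: q at position 2, p before it
print(holds(('always', ('ap', 'p')), trace))              # False: p fails at the last position
```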

3.1.3. Metric Temporal Logic

For requirements that express specific time bounds, we use a variant of LTL that adds such bounds, called MTL [38]. In MTL, each temporal operator is accompanied by lower and upper time bounds that express the time period during which the operator must hold. Specifically, MTL includes the operators $\Box_{[i,j]}\varphi$, $\Diamond_{[i,j]}\varphi$, and $\varphi\,\mathcal{U}_{[i,j]}\,\psi$, where the temporal operator applies between time $i$ and time $j$. For events that occur at discrete times in a UAS, we can use discrete-time MTL [39] to describe their properties. Similarly, a pictorial representation of the MTL temporal operators is shown in Figure 2.
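Under an assumed unit-step discrete-time reading (one trace position per time unit), the bounded operators reduce to checks over a window of positions. The sketch below is our own illustration, not the paper's implementation; the `low_power`/`rtl` propositions are hypothetical:

```python
# Discrete-time MTL bounded operators over a trace sampled at unit steps,
# so the bound [a, b] at position i selects positions i+a .. i+b.

def eventually_within(ap, trace, i, a, b):
    """<>_[a,b] ap at position i: ap holds at some step in [i+a, i+b]."""
    window = trace[i + a : min(i + b + 1, len(trace))]
    return any(ap in state for state in window)

def always_within(ap, trace, i, a, b):
    """[]_[a,b] ap at position i: ap holds at every step in [i+a, i+b]."""
    window = trace[i + a : min(i + b + 1, len(trace))]
    return all(ap in state for state in window)

# Battery example: "within 5 steps of low_power, RTL must begin".
trace = [{'low_power'}, set(), set(), {'rtl'}, set(), set()]
print(eventually_within('rtl', trace, 0, 0, 5))  # True: rtl occurs at step 3
```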

3.2. Runtime Verification

Runtime Verification [6] is a lightweight formal verification technique that serves as an effective complement to traditional approaches, such as model checking and testing. The most important feature of RV is that its verification object is the actual runtime operation of the monitored system, enabling timely adjustments when anomalous UAV behaviors that violate the established properties of the UAS are detected. This mechanism helps avoid UAV mission failures and prevent behavioral malfunctions.
In this paper, we define a UAV execution fault as a deviation between the observed current behavior and the expected behavior of the UAS. In other words, we monitor the execution violations of the UAV’s established properties, which are typically identified by deviations between the current system state and the expected state. Such execution faults may ultimately lead to mission failures. Martin Leucker and Christian Schallhart have provided a widely accepted definition in the runtime verification domain through their research [6].
In RV for UAVs, we abstract a run of the UAS as an infinite sequence of system states. We then employ a monitor to check whether the runtime execution satisfies the established UAV properties by generating verdicts (typically Boolean values of True/False). Formally, we denote a UAV property as $\varphi$, where $[\![\varphi]\!]$ represents the set of executions that satisfy $\varphi$. Thus, the RV violation/satisfaction problem of the UAV physical system can be abstracted as verifying whether the execution $\omega$ belongs to $[\![\varphi]\!]$, i.e., mathematically determining whether a given word is included in a formal language. This approach reduces the verification complexity significantly. We follow the definition of an RV monitor by Martin Leucker and Christian Schallhart in their seminal work [6]:
Definition 1 (Monitor). 
A monitor is a device that reads a finite trace and yields a certain verdict.
The RV monitor architecture for a UAS is illustrated in Figure 3. The monitor is an independent algorithmic component connected to the target system via a broadcast bus. A system execution trace generally comprises a series of formally defined state data, which serve as the monitoring input. These states can be extracted and filtered through a semi-formal mapping of system operations provided by our Monitor-Oriented Programming (MOP) tool. Ultimately, the monitor outputs the verdict, indicating whether the current UAV trace complies with the established specifications.
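A minimal online monitor in the spirit of Definition 1 can be sketched as follows (our own illustration, not the paper's MOP tool; the altitude-ceiling property and state encoding are assumptions):

```python
# Online monitor sketch: consumes one state at a time from the trace and
# yields a verdict for the invariant "altitude stays below 120 m".

class AltitudeMonitor:
    LIMIT = 120.0                 # assumed geofence ceiling in metres

    def __init__(self):
        self.violated = False

    def step(self, state):
        """Read one state dict from the trace, return the current verdict."""
        if state.get('altitude', 0.0) > self.LIMIT:
            self.violated = True  # a safety violation is irrevocable
        return 'violation' if self.violated else 'ok'

monitor = AltitudeMonitor()
trace = [{'altitude': 50.0}, {'altitude': 130.0}, {'altitude': 80.0}]
print([monitor.step(s) for s in trace])  # ['ok', 'violation', 'violation']
```

The verdict stays at `violation` once the invariant is broken, matching the usual reading of safety properties: a finite bad prefix cannot be repaired by later states.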

3.3. Reactive Synthesis

Given the system model and specifications in the form of LTL formulas, our goal is to synthesize a controller that generates behavior strategies satisfying the given specifications, if the formulas are realizable [25]. The synthesis algorithm is a two-player game between the UAS and its environment; it synthesizes the winning strategy for the UAV in the form of an automaton by evaluating a $\mu$-calculus formula over the game structure [36].

3.3.1. GR(1) Specification

The standard approach to LTL synthesis involves constructing a Büchi automaton from the LTL formula followed by conversion to an equivalent deterministic Rabin automaton. This two-stage conversion process induces a double-exponential state blow-up. To circumvent this prohibitive complexity, a special restriction of LTL, called GR(1) specification, is taken into consideration [36].
For a UAV $u_i \in U$ in the multi-UAV system, its specification consists of the following six parts based on the atomic proposition set $AP_i = X_i \cup Y_i$:
  • $\varphi_i^e$ and $\varphi_i^s$ are propositional logic formulas without temporal operators, defined on $X_i$ and $Y_i$, respectively. They describe the initial conditions of the environment's and the UAV's behaviors, respectively.
  • $\varphi_t^e$ is a conjunction of several subformulas of the form $\Box B_i$, where $B_i$ is a Boolean formula defined on $X_i \cup Y_i \cup X_i'$, and $X_i' = \{\bigcirc x \mid x \in X_i\}$. $\varphi_t^e$ limits the relation between the environment's next behaviors and the current state.
  • $\varphi_t^s$ is a conjunction of several subformulas of the form $\Box B_i$, where $B_i$ is a Boolean formula defined on $X_i \cup Y_i \cup X_i' \cup Y_i'$, and $Y_i' = \{\bigcirc y \mid y \in Y_i\}$. $\varphi_t^s$ limits the relation between the UAV's next behaviors and the current state, as well as the next behaviors of the environment.
  • $\varphi_g^e$ and $\varphi_g^s$ are conjunctions of several subformulas of the form $\Box\Diamond B_i$, where $B_i$ is a Boolean formula defined on $X_i \cup Y_i$. They describe the final goals of the environment's and the UAV's behaviors, respectively.
The formula $\phi = \varphi_g^e \rightarrow \varphi_g^s$ is called a Generalized Reactivity(1) (GR(1)) formula [36]. Intuitively speaking, $\varphi^e$ (i.e., $\varphi_i^e \land \varphi_t^e \land \varphi_g^e$) constrains the possible environments and $\varphi^s$ (i.e., $\varphi_i^s \land \varphi_t^s \land \varphi_g^s$) limits the UAV's behaviors, so the specification defines the rules of $u_i$'s behaviors under given environments.
Although GR(1) specifications possess strictly weaker expressive power than full LTL (e.g., p U q cannot be expressed by the fragment), this fragment remains sufficiently expressive for characterizing most practical system requirements, particularly in reactive system design [36].
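As a toy illustration of this structure (our own example with assumed propositions $obstacle \in X_i$ and $moving \in Y_i$, not drawn from the paper), a specification demanding that the UAV move infinitely often, provided that the path clears infinitely often, can be written as:

```latex
% Toy GR(1) specification (illustrative propositions, not from the paper)
\varphi_i^e = \lnot obstacle \qquad
\varphi_g^e = \Box\Diamond\,\lnot obstacle \qquad\quad
\varphi_i^s = \lnot moving \qquad
\varphi_t^s = \Box\,(\bigcirc obstacle \rightarrow \bigcirc\lnot moving) \qquad
\varphi_g^s = \Box\Diamond\, moving
\phi = \Box\Diamond\,\lnot obstacle \;\rightarrow\; \Box\Diamond\, moving
```

A winning strategy simply moves whenever the next observation is obstacle-free and hovers otherwise; since the environment assumption guarantees that the path clears infinitely often, the goal $\Box\Diamond\,moving$ is satisfied.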

3.3.2. Controller Synthesis

The GR(1) synthesis algorithm achieves cubic-time complexity $O(n^3)$ for strategy automaton construction, where $n$ denotes the size of the state space [36].
The synthesis algorithm [36] resolves a two-player adversarial game between the environment and the UAV, in which the initial states of the environment and the UAV are limited by $\varphi_i^e$ and $\varphi_i^s$, respectively, and $\varphi_t^e$ and $\varphi_t^s$ determine the state transitions. The game's winning condition is specified by a GR(1) formula $\phi$. The UAV is said to be winning if, no matter how the environment changes within the constraints of $\varphi_t^e$, the UAV can find a way to proceed such that the resulting play complies with $\phi$. The synthesis algorithm finds the winning strategy for the UAV. As described in [36], a game structure can be obtained from the specifications, and the strategy for the UAV can then be synthesized (if possible) by evaluating a $\mu$-calculus formula over the game structure.
The strategy synthesized by the algorithm is represented in the form of the automaton $A = (X, Y, Q, Q_0, \delta, \gamma)$:
  • $X$ is the set of input (environment) propositions,
  • $Y$ is the set of output (UAV) propositions,
  • $Q$ is the set of states,
  • $Q_0 \subseteq Q$ is the set of initial states,
  • $\delta: Q \times 2^X \rightarrow 2^Q$ is the transition relation,
  • $\gamma: Q \rightarrow 2^Y$ is the state labeling function, where $\gamma(q)$ is the set of UAV propositions that are True in state $q$.
Given an admissible input sequence $X_1, X_2, \ldots, X_j$ satisfying the environmental constraint $\varphi^e$, the strategy automaton enables the UAV to generate a discrete motion trajectory, which can guide the UAV's regional navigation decisions while dynamically activating or deactivating specific actions.
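Executing such a strategy automaton can be sketched as follows (a deterministic toy instance with transition data and proposition names of our own invention; real synthesized automata are far larger):

```python
# Executing a strategy automaton A = (X, Y, Q, Q0, delta, gamma):
# at each step the environment supplies a set of input propositions,
# delta picks the successor state, and gamma labels that state with
# the UAV propositions to activate.

Q0 = 'q0'
delta = {  # (state, frozenset of environment inputs) -> next state
    ('q0', frozenset()):             'q0',
    ('q0', frozenset({'obstacle'})): 'q1',
    ('q1', frozenset()):             'q0',
    ('q1', frozenset({'obstacle'})): 'q1',
}
gamma = {'q0': {'fly_to_waypoint'}, 'q1': {'hover'}}  # output labeling

def run(inputs, q=Q0):
    """Drive the automaton over an admissible input sequence X_1, ..., X_j."""
    outputs = []
    for x in inputs:
        q = delta[(q, frozenset(x))]
        outputs.append(gamma[q])
    return outputs

print(run([set(), {'obstacle'}, set()]))
# [{'fly_to_waypoint'}, {'hover'}, {'fly_to_waypoint'}]
```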

4. Unified Monitor and Controller Synthesis Framework

A UAS typically consists of a UAV and a Ground Control Station (GCS), so the actions of the UAV can be considered the system, while the inputs from the UAV sensors and the commands from the GCS are considered the environment. The communication between the UAV and the GCS can be abstracted into the process of the UAV sensing data from its environment.
For simple UAS implementations requiring monitoring of limited safety/security properties, we propose a modular architecture to enhance reusability, in which the UAV specifications are decoupled into two distinct components: Monitor and Actuator [5].
The Monitor module, grounded in RV theory, performs real-time compliance checking of the UAV executions against predefined properties through data stream analysis. Conversely, the Actuator module fulfills functional requirements by executing dynamic control actions.
All outputs from the Monitor will be set as sensor inputs to the Actuator. However, this design introduces controller synthesis complexity that escalates exponentially with the number of monitor propositions. Here, we consider an experiment against a risky command attack as an example:
Example 1. 
The UAV takes off from the StartZone in Figure 4 and patrols waypoints P1–P4 in Guided mode. The formal specification is defined in Listing 1, with three runtime monitors deployed: Monitor 1 is designed to avoid obstacles (line 7), Monitor 2 is designed to monitor the power consumption and change to Return To Launch (RTL) mode when the power is low (lines 9–10), and Monitor 3 is designed for defending against multi-class risky command attacks (lines 12–13). Although secure controllers can be synthesized by manually integrating security properties with task specifications, this approach exhibits three critical limitations:
Listing 1. Specification of Example 1. 
Limitation 1: Integrating UAV property monitor specifications with task specifications [5] leads to exponential growth in controller synthesis complexity. Concurrently, the LTL fragment supported by GR(1) inherently restricts the expressiveness of monitorable properties. Table 1 quantifies this relationship, demonstrating how the controller's Finite State Machine (FSM) state count and synthesis time increase with the number of risky commands in Monitor 3 (line 12). When monitoring more than four risky commands, synthesis fails to complete within an acceptable time frame (>15 min). Furthermore, GR(1)'s LTL subset cannot express temporal properties requiring metric constraints (e.g., MTL formulae with bounded operators such as $\Diamond_{[0,5]}\,p$) or the standard LTL that the RV monitors support. In addition, due to the lack of a priority mechanism in action scheduling, some high-priority actions may not be scheduled in time.
Limitation 2: Traditional GR(1)-based task specifications lack metric task generation capabilities. This manifests in two critical constraints: (a) inability to bound action execution frequencies (e.g., enforcing ≤3 retries for landing procedures), and (b) mandatory infinite recurrence of all UAV goals. These limitations significantly restrict the controller’s applicability in real-world scenarios requiring quantitative task design.
Limitation 3: The lack of domain-specific language for UASs based on formal methods leads to difficulties in designing secure controllers for UAVs. Mapping information from various software and hardware resources into Application Programming Interfaces (APIs) compatible with GR(1) specifications’ atomic propositions is undoubtedly challenging. Employing different programming languages for separately developing monitor and controller specifications further deteriorates the efficiency of UAV specification design, while generating code with compromised portability and usability.
To address these limitations, we propose a unified monitor and controller synthesis framework for a secure UAS controller (Figure 5). First, we propose a specification pattern to describe frequently used UAS properties (e.g., task, safety, and security constraints) (Section 5). In addition, to facilitate users in designing various resource interfaces, task specifications, and monitored UAS properties, we design the domain-specific language UAS-DL (Section 7). Based on the predefined UAV properties and UAS-DL, a novel monitor-controller synthesis methodology is developed. This approach integrates RV with reactive synthesis techniques to generate a secure controller for UASs (Section 4). Depending on the task requirements, the runtime monitor outputs are fed into the controller synthesizer's environmental constraints. Monitor countermeasures are also classified as critical or non-critical based on their impact on controller event scheduling. Critical countermeasures (which influence scheduling) and their corresponding monitor outputs are mapped to the system constraint outputs and the environmental constraint inputs, respectively. Non-critical monitors operate autonomously for real-time UAV property monitoring without disrupting controller event scheduling. The scheduler prioritizes both monitor countermeasures and controller actions/motions according to their urgency and parallelization capabilities (Section 6.2). We develop two methods for task generation with metrics to enhance the operational applicability of synthesized UAV controllers (Section 6.3). The framework was implemented via NASA's component-based F′ framework, with validation conducted on a Gazebo/ArduPilot/ROS (Robot Operating System) co-simulation platform (Section 8). The experimental results demonstrate the following advantages:
Advantage 1: By classifying and handling RV monitor specifications based on their impact on the scheduling of controller events (instead of embedding all monitor specifications in the controller specifications), we can use more formal languages (e.g., LTL, STL, and MTL) to express the safety and security properties. This enhances the expressiveness of specifications while reducing both the controller’s synthesis time and the number of states in the generated automata. In addition, by dividing events into multiple categories and making the monitor and controller events loosely coupled, our priority-based scheduling method improves the system’s response speed and supports the deployment of additional monitors and the design of more sophisticated tasks.
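The priority-based scheduling idea can be sketched with a priority queue (a minimal illustration with hypothetical event names; the actual scheduler in Section 6.2 additionally accounts for event parallelizability):

```python
# Priority-based event scheduling sketch: critical monitor countermeasures
# (priority 0) are dispatched before ordinary controller actions and motions.

import heapq

class EventScheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0            # tie-breaker keeps FIFO order per priority

    def submit(self, priority, name):
        """Lower number = higher priority; monitor countermeasures get 0."""
        heapq.heappush(self._queue, (priority, self._seq, name))
        self._seq += 1

    def drain(self):
        """Dispatch all pending events in priority order."""
        order = []
        while self._queue:
            _, _, name = heapq.heappop(self._queue)
            order.append(name)
        return order

sched = EventScheduler()
sched.submit(2, 'fly_to_waypoint')   # controller motion
sched.submit(0, 'rtl_low_power')     # critical monitor countermeasure
sched.submit(1, 'take_photo')        # controller action
print(sched.drain())  # ['rtl_low_power', 'take_photo', 'fly_to_waypoint']
```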
Advantage 2: In practice, due to the limited resources of UAVs, few goals need to be fulfilled infinitely often; most need to be fulfilled only once or a finite number of times. To address this, we design a regular expression to express task specifications with metrics. We then propose methods based on specification generation and re-synthesis to achieve task generation with metrics in GR(1)-based controller synthesis. This approach enables users to design correct-by-construction UAV tasks with enhanced flexibility and precision through formal methods.
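A minimal sketch of the metric-expansion idea (our own toy encoding, not the paper's regular-expression syntax or UAS-DL): a goal annotated with a finite repetition count is unrolled into explicit one-shot goals before synthesis, so no goal has to recur infinitely.

```python
# Unroll 'goal{n}' tokens into n explicit one-shot goals (toy encoding).
import re

def unroll(task_expr):
    """Expand each 'goal{n}' token into n copies; bare goals appear once."""
    goals = []
    for token in task_expr.split():
        m = re.fullmatch(r'(\w+)\{(\d+)\}', token)
        if m:
            goals.extend([m.group(1)] * int(m.group(2)))
        else:
            goals.append(token)
    return goals

print(unroll('takeoff patrol_P1{3} land'))
# ['takeoff', 'patrol_P1', 'patrol_P1', 'patrol_P1', 'land']
```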
Advantage 3: Based on F Prime (F′), a component-based open-source framework [40], we design a domain-specific language (UAS-DL) that provides a programming platform for formal specification design of a UAS. UAS-DL supports embedding code in UAVs’ commonly used languages, such as C/C++, Python, and ArduPilot commands. The language enables (1) extraction of software/hardware resource information from UAS, (2) design of propositional interfaces and monitor/task specifications, and (3) implementation of advanced features, including task metrics and event priorities. Components developed with standardized interfaces through UAS-DL further demonstrate high reusability and portability.

5. UAS Properties and Specification Patterns

5.1. System Structure of UAS

A UAS is generally divided into two components: the UAV and the GCS. The GCS can be a Remote Controller (RC), a smart mobile device, a computer, or even a military base, establishing bidirectional communication with the UAV to enable remote state monitoring and action control. UAVs are of various types, including fixed-wing crafts, rotorcrafts, and others. In addition to remote human operation, UAVs can operate in autonomous or semi-autonomous modes. Autonomous UAVs sense the environment using active or passive perception systems, make decisions through real-time mission planning algorithms, and command actuators to execute specific behaviors for achieving desired goals. Functionally, most UASs consist of four layers, as illustrated in Figure 6 [41]:
  • Perception layer. UAVs are equipped with a variety of sensors: positioning and navigation are implemented by the Global Navigation Satellite System (GNSS) and IMU, where GNSS provides absolute positioning and IMU enables relative positioning and attitude estimation. Environment perception relies on radar, cameras, and the Automatic Dependent Surveillance-Broadcast (ADS-B) system for state broadcasting and collision avoidance. Sensor fusion methods integrate these data streams to construct environmental models and detect anomalies through statistical property analysis [42].
  • Execution layer. This layer comprises an application sublayer (high-level command parsing) and an action sublayer (low-level actuation). The onboard control system (e.g., Pixhawk or APM) processes perception and control inputs, generating actuator commands via algorithms such as the Strapdown Inertial Navigation System (SINS), Kalman filtering, and PID control. The action sublayer translates these commands to drive the physical actuators to execute the dynamic motions [43].
  • Control layer. The GCS serves as the UAS command center, monitoring real-time flight states (position, altitude, battery), payload status, and sensor feeds. It supports short-term decision-making (e.g., obstacle avoidance) and long-term mission planning (e.g., task rescheduling), with commands transmitted via MAVLink or similar protocols. The GCS usually integrates four modules: communication (data uplink/downlink), information display (GUI for operators), data storage (flight logs), and video processing (real-time analytics) [44].
  • Transmission layer. UAVs establish heterogeneous networks through Air-to-Ground (AG, e.g., 4G/5G), Air-to-Air (AA, e.g., swarm coordination), and satellite links. These networks prioritize low-latency command delivery and high-bandwidth data transmission (e.g., HD video streaming).

5.2. Specification Patterns for UAS

UAS vulnerabilities span multiple critical domains, including the perception layer (e.g., sensor spoofing), the execution layer (e.g., actuator hijacking), the control layer (e.g., command injection), and the transmission layer (e.g., man-in-the-middle attacks). To address these, a pattern-based framework is designed to formally model both safety properties (e.g., collision-free trajectory compliance) and security properties (e.g., encrypted data integrity). This framework enables the synthesis of runtime-enforced controllers through temporal logic constraints. A specification pattern within this framework comprises six components [5]:
  • Input is the user-defined proposition that provides the interface between the real-world environment and the abstract model of UAS. In real UAS control software (e.g., ArduPilot), inputs can be events parsed from flight log files.
  • Property defines safety/security requirements constraining UAS behaviors, typically expressed as shall not statements (e.g., shall not enter no-fly zones).
  • Specification formalizes properties using temporal logic formulas, such as L T L / M T L .
  • Output is the monitor’s verdict, which can be fed into the environment constraint ( φ e ) of the GR(1) synthesizer.
  • Countermeasure is the enforcement operation a UAS should take when the properties are violated, which can be fed into the system constraint ( φ s ) of the GR(1) synthesizer.
  • Priority quantifies the urgency of an event on a scale from 1 (low) to 10 (high). For example, triggering RTL mode terminates the current mission (priority 5), so RTL itself is assigned a higher priority of 9.
Therefore, we analyze the threats and summarize the specification patterns for their corresponding properties; for each, we introduce one possible countermeasure and give its property specification based on the protective mechanism. The definitions and descriptions of the input and output interfaces of the UAS are shown in Table 2.
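For concreteness, one such pattern instance can be written down as a plain record. The sketch below is illustrative only: the field names mirror the six components, while the proposition names, the LTL string, and the priority value are hypothetical examples rather than fixed syntax of our framework.

```python
from dataclasses import dataclass

@dataclass
class SpecPattern:
    """One instance of the six-component specification pattern."""
    input: str           # proposition bridging environment and abstract model
    property: str        # informal "shall not" requirement
    specification: str   # LTL/MTL formalization of the property
    output: str          # monitor verdict fed to the GR(1) environment part
    countermeasure: str  # enforcement operation fed to the GR(1) system part
    priority: int        # urgency on a 1 (low) .. 10 (high) scale

# Hypothetical no-fly-zone pattern: names and formula are illustrative only.
no_fly = SpecPattern(
    input="in_no_fly_zone",
    property="The UAV shall not enter no-fly zones",
    specification="G(!in_no_fly_zone)",
    output="no_fly_violation",
    countermeasure="RTL",
    priority=9,
)
assert 1 <= no_fly.priority <= 10
```
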

5.3. Properties of UAS

Based on the property patterns, the corresponding properties are designed by analyzing the safety/security threats and task specifications of the UAS. Various taxonomies have been studied and discussed from different perspectives; we detailed the classification of properties in another paper [41]. For conceptual clarity, this study simplifies the modeling of UAV behaviors: some three-dimensional operational dynamics are abstracted into two-dimensional representations, and spatial constraints and motion primitives are deliberately reduced in parametric complexity. Real-world UAV applications inherently involve multi-domain environmental interactions and adaptive response mechanisms; such implementation-specific considerations include, but are not limited to, aerodynamic disturbances, sensor noise compensation, and dynamic obstacle avoidance, and they lie beyond the scope of this foundational investigation. The abstraction preserves the essential behavioral equivalence between control strategies and physical implementations while excluding practical engineering constraints, keeping the analytical focus on core operational principles. Specific examples are demonstrated in the following sections.

5.3.1. Security Properties of UAS

UAS security threats can be classified into three categories (hardware threats, software threats, and cyber-physical threats) and 13 specific threats [41]. The following are three representative threats among the 13.
Signal Traffic Blocking. As shown in Table 3, signal analysis and processing are restricted by the limited resources of embedded systems such as a UAS. Massive, continuous data sent to a UAS can easily overwhelm and exhaust these resources, making DoS and DDoS attacks fatal and difficult to defend against. Because the packets sent to the UAS are valid yet consume excessive resources, the system becomes overloaded and fails to respond to normal requests. Vasconcelos et al. [45] evaluated the impact of DoS attacks on the AR.Drone 2.0 using three attack tools: LOIC, Netwox, and Hping3. Directional radio frequency interference is a simple way to jam a target UAS by emitting signals with specific directions, power levels, and frequencies.
Control Commands Spoofing. As shown in Table 4, this attack typically results from logic flaws or insufficient authorization. For example, the DJI Phantom III was totally hijacked by hackers during GeekPwn 2015. The attackers cracked the signals of the BK5811 chip mounted on the Phantom III’s RC controller and exploited design vulnerabilities in the chip to generate malicious commands.
Sensor Spoofing. As shown in Table 5, data from sensors are the most important input for the UAS. Beyond the control flow, attackers can also interfere with or spoof sensor data to degrade system performance. For example, GPS spoofing is the most common attack and can be categorized into autonomous spoofing and forwarding spoofing. Researchers have studied multiple cases in [29]. Countermeasures may include mechanisms to verify consistency among multiple sensors regarding the following: (1) motion speed (e.g., the coordinates of the UAV cannot change from Beijing to New York within a few seconds); (2) time synchronization between GPS and NTP [46]; and (3) directional consistency of the GPS and electronic compass.
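Consistency check (1) amounts to a simple plausibility test on successive GPS fixes: reject any fix whose implied ground speed exceeds the platform's maximum. The sketch below is a hypothetical illustration; the haversine distance and the 30 m/s speed bound are our assumptions, not values from the framework.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def gps_fix_plausible(prev, curr, dt_s, max_speed_mps=30.0):
    """Flag a spoofed fix: the implied ground speed must stay below max_speed_mps."""
    dist = haversine_m(prev[0], prev[1], curr[0], curr[1])
    return dist / dt_s <= max_speed_mps

# A jump from Beijing to New York within one second is rejected.
assert not gps_fix_plausible((39.9, 116.4), (40.7, -74.0), dt_s=1.0)
# A drift of roughly 20 m over one second is accepted.
assert gps_fix_plausible((39.9, 116.4), (39.90018, 116.4), dt_s=1.0)
```

In a deployed monitor, this verdict would be one input among several (GPS/NTP time drift, GPS/compass heading agreement) before raising the sensor-spoofing countermeasure.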

5.3.2. Safety Properties of UAS

As shown in Table 6, unlike security threats that are mainly caused by attackers, safety threats primarily originate from the environment and the UAV’s intrinsic limitations, such as battery consumption, maximum flight speed, system memory, CPU utilization, and obstacle avoidance. For example, to prevent the UAV from being unable to return due to low battery charge, we can set its minimum battery level to 50% during flight.
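Such a battery property reduces to a threshold monitor. The following minimal sketch is hypothetical; the 50% threshold and the RTL countermeasure follow the example above.

```python
def battery_monitor(battery_pct, threshold_pct=50.0):
    """Verdict for the minimum-battery safety property.

    Property: during flight, the battery level shall not drop below
    the threshold; on violation, the RTL countermeasure is raised.
    """
    return "RTL" if battery_pct < threshold_pct else "OK"

assert battery_monitor(72.0) == "OK"
assert battery_monitor(49.9) == "RTL"
```
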

5.3.3. Task Properties of UAS

As shown in Table 7, the UAV monitor-property pattern can be applied to design basic tasks following the Condition → Action principle (i.e., triggering specific actions when predefined conditions are satisfied). For instance, during high-altitude airdrop operations targeting injured personnel, the UAV altitude must remain below 30 meters (m) to ensure delivery accuracy within a 1 m radius of the target coordinates. This constraint is critical because atmospheric disturbances above 30 m can induce unacceptable trajectory deviations.

6. Scheduling Algorithm and Task Generation with Metrics

To achieve accurate monitoring and control of UAS, our UMCS framework must address two fundamental challenges: the unified scheduling of multiple UAV events and the quantifiable design of GR(1)-based UAV tasks. To overcome these challenges, we introduce two novel approaches: priority-based scheduling and task generation with metrics based on specification generation and re-synthesis.

6.1. Priority-Based Scheduling

To coordinate RV monitor countermeasures with controller actions/motions, we implement a priority-based scheduling architecture.

Scheduler Fundamentals

The scheduling mechanism requires three core definitions:
1. Event Categories: The scheduler needs to manage controller-generated actions/motions and monitor-generated countermeasures, collectively termed events. As illustrated in Table 8, we classify events into three types based on execution time, interruptibility, and termination characteristics:
  • Instant events require short execution time (typically within one scheduling cycle), exhibit non-interruptible characteristics, and feature autonomous termination. These events are atomic operations that either: (1) execute in parallel when resource-independent, or (2) undergo sequential scheduling during resource conflicts. Example implementations include CmdDeny and ObstacleAvoid.
  • Short continuous events require extended duration with interruptible execution and self-termination capability. Termination triggers occur upon either: (1) natural task completion, or (2) proposition value transition to False (e.g., Move and RTL operations).
  • Long continuous events require persistent duration with interruptible execution. Termination triggers occur only upon the proposition value transition to False, as exemplified by the UAV being in Guide and Hover modes.
Table 8. Event characteristics.

Event Category          | Execution Time | Interruptible? | Automatic Termination?
Instant Event           | Short          | No             | Yes
Short Continuous Event  | Long           | Yes            | Yes
Long Continuous Event   | Long           | Yes            | No
Figure 7 demonstrates temporal execution patterns, while Table 9 shows practical event examples.
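The classification in Table 8 is determined by two of the three attributes, since instant events are exactly the non-interruptible ones. A minimal sketch, with hypothetical type and field names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    name: str
    interruptible: bool
    auto_terminating: bool

def event_category(e: Event) -> str:
    """Classify an event by the two distinguishing Table 8 attributes."""
    if not e.interruptible:
        return "instant"            # short, non-interruptible, self-terminating
    return "short continuous" if e.auto_terminating else "long continuous"

assert event_category(Event("ObstacleAvoid", False, True)) == "instant"
assert event_category(Event("Move", True, True)) == "short continuous"
assert event_category(Event("Hover", True, False)) == "long continuous"
```
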
2. Event Parallelizability: Events demonstrating concurrent execution capability are classified as parallelizable, or are otherwise deemed non-parallelizable. Non-parallelizability arises from either: (a) shared resource allocation (software/hardware), or (b) behavioral incompatibility. For instance, Rising and Falling events exhibit resource conflict through shared engine utilization, whereas UAV flight modes Hover and RTL demonstrate behavioral incompatibility through conflicting control objectives.
Under ideal conditions where task specifications incorporate comprehensive parallelizability analysis (particularly for action–motion interactions) and the controller is automatically synthesized, all events within each controller state should exhibit parallelizability. Our research specifically examines parallelizability interactions between the controller and RV monitor. Given the inherent parallelizability of instant events in our architecture, the core investigation focuses on the parallelizability between instant and continuous events. The implemented Priority-Based Preemptive Scheduling (PBPS) mechanism addresses six fundamental concurrency patterns shown in Figure 8, where:
  • A = Controller-generated event
  • B = Monitor-generated countermeasure
  • Executing = Active state
  • Blocked = Resource contention state
  • Priority levels: Low (L) vs. High (H)
Figure 8. Main scenarios of non-parallelizable events.
In S2 and S3, high-priority instant events can preempt low-priority continuous event scheduling. For instance, when movement and obstacle avoidance exhibit resource conflicts, a high-priority instant obstacle avoidance during low-priority continuous movement proves both necessary and feasible. Such events typically avoid task failures and should be classified as non-critical countermeasures.
In S4 and S6, high-priority controller events block low-priority monitor events, inducing execution delays that are permissible in non-critical scenarios. Consider a UAV in high-speed flight that prioritizes mission-critical tasks over a low-priority energy-saving alert: the deceleration countermeasure can be deferred until task completion.
In S1 and S5, high-priority monitor events interfering with low-priority controller events risk task failure. Such countermeasures must either be designated as critical events, whose specifications are integrated into the synthesizer to resolve the non-parallelizability, or as termination events that abort the current task in favor of emergency procedures.
3. Countermeasures Categories: To optimize non-parallelizable event scheduling efficiency, we divide the monitor countermeasures into the following three types, as shown in Figure 9:
  • Critical countermeasures: Task-conflicting events
  • Non-critical countermeasures:
    Termination type: Task-aborting events
    Deferrable type: Temporarily suppressible events
Figure 9. Countermeasures categories.
Critical countermeasures refer to those that block or delay the scheduling of controller events. S1 and S5 in Figure 8 are typical scenarios of critical countermeasures, which are generally non-parallelizable continuous countermeasures with higher priority than the controller events. The specifications corresponding to critical countermeasures need to be added to the task specification. The added specifications are generally composed of a trigger condition (e.g., □(Condition → CounterMeasure)) and predefined operational specifications. For example, when the UAV mode is switched to RTL under GPS attacks, the synthesizer can automatically add the trigger condition specification □(GPS_spoof → Hover) and the predefined operations for the countermeasure Hover.
Non-critical countermeasures refer to those that can be parallelized with the controller events or terminate the execution of UAV tasks. These countermeasures are not added to the system proposition of the task specification. For example, CmdDeny is a typical non-critical countermeasure, as it generally does not affect the execution of other events.
Termination countermeasures are a subset of non-critical countermeasures that terminate UAV task execution. Since their occurrence causes task failure, they require immediate termination of existing controller event scheduling. Common examples include RTL and EmergLanding (Emergency Landing). Conversely, non-termination countermeasures refer to non-critical countermeasures, excluding termination ones.
For each event, non-parallelizable parameters should be configured as shown in Table 10. The non-parallelizable events of Move include RTL, EmergLanding, ObstacleAvoid, and Hover (numbers in parentheses indicate priority levels). Critical events are automatically determined by the synthesizer based on event parallelizability or manually set. Termination events (e.g., RTL (7) and EmergLanding (8)) must be manually defined. A high-priority termination event will override a scheduled low-priority one. For instance, if a UAV executing RTL (7) due to a DoS attack encounters a mechanical fault requiring EmergLanding (8), the latter terminates RTL. ObstacleAvoid, a high-priority instant event, can interrupt low-priority termination or controller events. Hover, a continuous event with the same priority as Move, is classified as critical, and its corresponding specifications must be added to the task specification.
4. Event States: For efficient scheduling, we define five states: Unscheduled, Running, Suspended, Scheduled, and Terminated.
As shown in Figure 10, the initial state of each event is Unscheduled. The scheduler handles events differently based on their types: (1) An unscheduled instant event transitions to Scheduled immediately after running. (2) An unscheduled short continuous event enters the Running state when scheduled. A running event can be suspended by a higher-priority non-parallelizable event, moving to Suspended, and resumes Running once that event completes. If its proposition value becomes False, it transitions to Terminated; once it completes, it transitions to Scheduled. (3) A long continuous event behaves similarly but lacks a Scheduled state; its termination depends solely on its proposition value.
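The lifecycle of a short continuous event in Figure 10 can be read as a small transition table. The following sketch is our illustrative encoding, with hypothetical trigger names:

```python
# (current_state, trigger) -> next_state, following Figure 10 for a
# short continuous event; trigger names are illustrative.
TRANSITIONS = {
    ("Unscheduled", "schedule"): "Running",
    ("Running", "preempt"):      "Suspended",   # higher-priority conflict
    ("Suspended", "resume"):     "Running",
    ("Running", "value_false"):  "Terminated",  # proposition became False
    ("Running", "complete"):     "Scheduled",
}

def step(state, trigger):
    """Advance the event lifecycle; unknown triggers leave the state unchanged."""
    return TRANSITIONS.get((state, trigger), state)

s = "Unscheduled"
for t in ("schedule", "preempt", "resume", "complete"):
    s = step(s, t)
assert s == "Scheduled"
assert step("Running", "value_false") == "Terminated"
```
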

6.2. Operating Principle of the Scheduler

Figure 11 illustrates the scheduler’s operational schematic. Before task execution, the UAV requires user-defined specifications for its controller, including task objectives, safety, and security constraints. General UAV properties (e.g., operational modes, environmental interactions) can also be predefined through specifications.
These specifications generate monitors, with critical ones (e.g., safety constraints) and other predefined properties (e.g., operational/environmental rules) being sent to the synthesizer alongside UAV specifications for conflict resolution and controller synthesis. Both controller events (via the parser) and monitor events are scheduled by the scheduler to produce the UAV’s task execution and safety/security enforcement.

6.2.1. Implementation of the Parser

The parser is designed to translate the synthesized FSM of the controller into event flows. Figure 12 illustrates a segment of the FSM, composed of nodes and edges. Nodes represent the UAV’s position and motion states, while edges denote sensor-triggered transitions. By monitoring sensor states, the UAV transitions between states and executes corresponding events. To manage position changes, the external action Move is designed to control UAV movement.
The parser converts the FSM into event flows based on real-time sensor states. Leveraging modifications to the classical algorithm in [25], we propose Algorithm 1 for FSM-to-event-flow parsing, which feeds the scheduler with structured event sequences.
Algorithm 1: Parsing an FSM into event flow
Initially, the UAV is placed at position p_0 (line 1) in region r_i such that r_i ∈ γ(q_0) (line 3), where q_0 ∈ Q_0 is the initial automaton state. Furthermore, based on all other propositions in γ(q_0), the appropriate UAV actions and motions are output (line 6).
At each step, the UAV senses its environment and determines the values of the binary inputs X. Based on these inputs and its current state, it chooses a successor state q_nxt (line 10) and extracts the next region r_nxt (line 11), to which it should go according to γ(q_nxt). Then, it outputs the new motion to the scheduler to drive the UAV toward the next region.
If the UAV enters the next region in this step (line 15), the execution changes the current automaton state and region (lines 16–17), extracts the appropriate actions (line 18), and outputs them to the scheduler to activate/deactivate actions (line 19). If the UAV is neither in the current region nor in the next region, which could happen only if the environment violates its assumptions, then the execution is stopped with an error (line 23) [25].
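A compressed sketch of this parsing loop on a toy two-state FSM: the data layout below (γ labels as (region, actions) pairs and a nested transition dictionary) is our own simplification for illustration, not the exact structures of Algorithm 1.

```python
def parse_fsm(q0, gamma, delta, sense, region_of, max_steps):
    """Walk the synthesized FSM, emitting (motion-target, actions) events.

    gamma[q]    -> (target_region, actions) labelling of state q
    delta[q][x] -> successor state for sensor inputs x
    sense()     -> current binary inputs as a frozenset of true propositions
    region_of() -> region the UAV currently occupies
    """
    q = q0
    region, actions = gamma[q]
    events = [("goto", region, actions)]         # initial outputs (lines 1-6)
    for _ in range(max_steps):
        x = sense()                              # sense the environment
        q_nxt = delta[q][x]                      # successor state (line 10)
        r_nxt, acts = gamma[q_nxt]               # next region (line 11)
        events.append(("goto", r_nxt, acts))     # drive toward the next region
        here = region_of()
        if here == r_nxt:                        # region reached (line 15)
            q, region = q_nxt, r_nxt             # update state (lines 16-17)
        elif here != region:                     # environment assumption broken
            raise RuntimeError("environment violated its assumptions")
    return events

# Toy FSM: patrol R1 <-> R2, taking a picture in R2; the UAV reaches each
# commanded region within one step.
gamma = {0: ("R1", ()), 1: ("R2", ("Pic",))}
delta = {0: {frozenset(): 1}, 1: {frozenset(): 0}}
pos = iter(["R2", "R1"])
events = parse_fsm(0, gamma, delta,
                   sense=lambda: frozenset(),
                   region_of=lambda: next(pos),
                   max_steps=2)
assert events == [("goto", "R1", ()), ("goto", "R2", ("Pic",)), ("goto", "R1", ())]
```

Unlike Algorithm 1 proper, this sketch emits the actions together with each motion target rather than on region entry; the error branch mirrors the stop-with-error behavior of line 23.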

6.2.2. Implementation of the Scheduler

Figure 13 shows the basic structure of the scheduler. The scheduler is composed of five components:
  • Parser, which parses the FSM of the Controller into event flows for the scheduler;
  • Scheduler, which coordinates actions and motions from the Controller and countermeasures from the Monitor based on priority;
  • Events Running, a set recording currently executing events;
  • Events Blocked, a set containing unscheduled or suspended events;
  • Events Scheduled, a set listing all scheduled events.
Figure 13. Schematic of scheduler operation.
The Scheduler schedules events and records their states in different event sets based on their priorities and parallelizability. Events with high priority or parallelizable events can run directly. If an event is instant, it transitions from Unscheduled to Scheduled and is added to the scheduled event set. Otherwise, if it is a continuous event, it transitions from Unscheduled to Running and joins the running event set.
Similarly, low-priority non-parallelizable events and suspended events in the running event set are moved to the blocked event set to await scheduling. When events in the blocked event set are scheduled, instant events are added to the scheduled event set, while continuous events are added to the running event set.
Algorithm 2 shows the processing procedure of non-termination events for the scheduler. The scheduler takes all event states as input. First, it periodically checks the value of each event (line 1). When the value of event e is False, it indicates that the event has not occurred or has finished; thus, the running event is terminated and the event is removed from the event set (lines 3–8). If the value of e is True (line 9), the event e is either occurring or already running. The scheduler then checks for conflicting events with e in the running set (lines 12–14). If no conflicts exist, event e is executed (lines 15–16), with instant events added to the scheduled set and continuous events added to the running set (lines 17–20). If the conflict set is non-empty, the scheduler checks for higher-priority events in the conflict set. If such events exist, e is moved to the blocked set (lines 21–23). Otherwise, all conflicting events are suspended, and e is executed (lines 24–26). For instant events, the scheduler resumes the suspended events and adds e to the scheduled set (lines 27–29). For continuous events, e is added to the running set, and the suspended events are moved to the blocked set (lines 30–32).
Events in the blocked set monitor the status of events in the running event set to determine their activation timing. Algorithm 3 illustrates this monitoring mechanism. At the end of each event, a completion signal is sent, serving as input to the algorithm (line 1). To optimize efficiency, events in the blocked set are ordered by descending priority and insertion time; higher-priority events added earlier are placed at the front of the queue. For each blocked event e b , the algorithm traverses the running set to identify non-parallelizable conflicts confEVT (lines 4–6). If confEVT is empty, e b is executed and removed from the blocked set (lines 7–9). If conflicts exist, the algorithm checks for higher-priority events in confEVT (lines 10–12). If such events do not exist, all conflicting events are suspended, and e b is executed (lines 13–15). For instant events e b , the algorithm resumes suspended conflicts, removes e b from the blocked set, and schedules it (lines 16–19). For continuous events, e b is moved to the running set, while conflicting events are added to the blocked set (lines 20–24).
Short continuous events terminate automatically upon completion, transitioning from the running to scheduled set (this process is omitted here for simplicity).
The handling of termination events is similar to that of non-termination events. The key difference lies in the input: termination events do not require interaction with the controller’s FSM (as terminating all low-priority scheduling tasks inherently halts their execution), so the input solely comprises countermeasures from the monitor. This process is straightforward and omitted here for brevity.   
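The conflict-resolution core of Algorithm 2 can be sketched as a single decision function. The sketch below is our simplified reading: event names and priorities echo Table 10, ties are resolved in favor of the already-running event (an assumption), and instant/continuous bookkeeping is collapsed into one running set.

```python
def try_schedule(event, running, blocked, conflicts, priority):
    """Place a newly-True non-termination event, mirroring Algorithm 2's branches.

    running / blocked: mutable sets of event names.
    conflicts[e]: events non-parallelizable with e; priority[e]: its urgency.
    """
    conf = conflicts.get(event, set()) & running       # conflicting running events
    if not conf:
        running.add(event)                             # no conflict: execute
        return "run"
    if any(priority[c] >= priority[event] for c in conf):
        blocked.add(event)                             # higher-priority conflict:
        return "blocked"                               # wait in the blocked set
    running.difference_update(conf)                    # suspend lower-priority
    blocked.update(conf)                               # conflicts, then execute
    running.add(event)
    return "preempted"

running, blocked = {"Move"}, set()
conflicts = {"Hover": {"Move"}, "ObstacleAvoid": {"Move"}}
priority = {"Move": 5, "Hover": 5, "ObstacleAvoid": 9}
# Hover (same priority) waits behind Move; ObstacleAvoid preempts it.
assert try_schedule("Hover", running, blocked, conflicts, priority) == "blocked"
assert try_schedule("ObstacleAvoid", running, blocked, conflicts, priority) == "preempted"
```
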
Algorithm 2: Handling of non-termination events in Scheduler
Algorithm 3: Listening process of the blocked events

6.3. Task Generation with Metrics

In UAS task design, the design of goal specifications is the core activity. Our UAV task design is based on the GR(1) specification, in which the system goal φ_g^s always requires at least one goal specification. When the system goal contains multiple goal specifications, these goals are fulfilled infinitely often (if they can be fulfilled) in turn. However, in practice, due to the limited resources of a UAV, goals can seldom be fulfilled infinitely; many need only to be fulfilled once or a finite number of times. To solve this problem, we propose two methods based on specification generation and re-synthesis.
Due to the complexity of the GR(1) specification, it is difficult to put forward a general method that solves the problem of task generation with metrics without limiting the expressivity of the GR(1) specification. Therefore, we design a regular expression for the system goal specification with a metric of fulfilment times: the expression □◇(Conditions → Actions/Motions) indicates that Actions/Motions will eventually be executed when Conditions is True. Here, Conditions is a composition of environment/system propositions and the symbols ¬, ∧, ∨, →, ↔, and parentheses, and Actions/Motions is a composition of system propositions and the same symbols. For example, the task "When the UAV takes off, go to region R2 or R3" can be expressed as □◇(TakeOff → (R2 ∨ R3)). In our experience, there is no significant loss of expressivity, as most UAV task specifications we encountered can be either directly expressed in or translated to this format.
In normal conditions, the value of an action/motion being True denotes that the action/motion is executed, and False denotes that it is not. Based on the concept of "Counters" introduced in [34] for GR(1) specifications, we give two methods to generate counter variables that keep track of the number of occurrences of an action/motion and calculate its execution times. If the required fulfilment times of an action exceed 1, we count the number of its variations from True to False. If the fulfilment times are 1, we need only test whether the value of the action is True.
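Counting fulfilments thus amounts to detecting falling edges of the action while the condition held. A minimal, hypothetical sketch of such a counter:

```python
class FulfilmentCounter:
    """Count fulfilments of a (Condition -> Action) goal over a trace.

    For a metric of 1, a single True of the action under the condition
    suffices; for metrics over 1 we count each variation of the action
    from True back to False.
    """
    def __init__(self, target):
        self.target = target
        self.count = 0
        self._active = False   # action was True under the condition

    def step(self, cond, act):
        """Feed one trace step; return True once the metric is reached."""
        if self.target == 1:
            if cond and act:
                self.count = 1
        elif self._active and not act:
            self.count += 1    # True -> False edge completes one fulfilment
        self._active = cond and act
        return self.count >= self.target

c = FulfilmentCounter(target=2)
trace = [(True, True), (True, False), (True, True), (False, False)]
assert [c.step(*s) for s in trace] == [False, False, False, True]
```
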

6.3.1. Task Generation with Metrics Based on Specification Generation

Based on our regular expression, we design Algorithm 4, which automatically generates auxiliary specifications to record the execution times of goal specifications with metrics.
We use an example to illustrate the basic idea of this method:
Example 2. 
In Figure 14, the UAV takes off from R1 and patrols R2 and R3 infinitely often. The UAV can take one picture each time it visits R4. When the instruction Take1P or Take2P is True, the UAV should take one or two pictures from R4, respectively. The original specification is shown below, where "(2,5)" denotes that the action Pic's fulfillment times are 2 and its priority level is 5 (Listing 2).
Listing 2. Original specification with parameters.
To solve the problem of task generation with metrics, we can generate the specification as shown in Listing 3. The added propositions M0, M1, and M2 are used to record the fulfillment times of the system goal specifications with metrics and to deactivate these goals when they are fulfilled for certain times. The specification contains added propositions that are generated by the auxiliary specification.
Listing 3. New specification with generated specifications.
Algorithm 4: Specification generation algorithm
To obtain these generated specifications, in Algorithm 4 we first calculate the number of memory propositions (line 4): m memory propositions can store at most 2^m − 1 fulfilment times, so the minimal value of m is ⌈log_2(t_i + 1)⌉.
Then, we extract Conditions and Actions/Motions from each goal using the function GetValue (line 5). In line 6, we add the memory propositions with initial value 0 to φ_i; the function Mem_m(n) generates m propositions encoding the value n (e.g., in Listing 3, Mem_2(0): ¬M1 ∧ ¬M0 and Mem_2(2): M1 ∧ ¬M0).
In lines 7–8, we add to φ r specifications that:
  • Memory proposition values remain False when conditions/actions are False (line 7).
  • Limit the range of values of memory propositions (line 8).
In line 9, we replace the Conditions of g_i with the conjunction of the original conditions and ¬Mem_m(t_i) using AddToCond.
By checking the truth table of the generated specification, we find that when ¬Mem_m(t_i) is True (g_i has not been fulfilled t_i times), its value matches the original specification. When ¬Mem_m(t_i) is False (g_i has been fulfilled t_i times), the generated specification becomes always True (i.e., this goal specification is deactivated).
In lines 11 to 22, we add specifications that record the variations in Actions/Motions.
For t_i = 1 (lines 11–13), we test whether Cond_i ∧ Act_i = True. If satisfied, the memory proposition Mem_m(j) changes from 0 to 1 and then remains constant.
For t_i > 1 (line 14), we verify whether one of the following conditions is satisfied:
  • Variation occurs: Mem_m(j) and Cond_i ∧ Act_i are True in the current state, and ¬Act_i will be True in the next state.
  • Condition remains: Mem_m(j + 1) = True and Cond_i ∧ Act_i = False.
When either is satisfied, Mem_m(j) increments to Mem_m(j + 1) or remains constant at Mem_m(j + 1).
The specification in line 16 is similar to that in line 14; the key difference is that tracking the last memory value only requires Act_i = True, rather than a variation of Act_i from True to False in the next state.
In Listing 3, the specifications in lines 3–5 and lines 6–7 are generated to record the fulfillment times of the goals in lines 11 and 12, respectively. With these added specifications, when we give the instruction Take2P or Take1P to the UAV, it goes to R4 to take two or one pictures, respectively, and then patrols R2 and R3 infinitely often.
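Two small computations underpin this bookkeeping: the minimal number of memory propositions m = ⌈log₂(t_i + 1)⌉, and the encoding Mem_m(n) of a count n as a conjunction of memory bits. A sketch (the ASCII rendering of ¬ and ∧ as ! and & is illustrative):

```python
import math

def min_mem_props(t):
    """Minimal m such that m propositions can store the counts 0 .. t."""
    return math.ceil(math.log2(t + 1))

def mem(m, n):
    """Mem_m(n): encode count n as a conjunction over M(m-1) .. M0."""
    bits = [(n >> i) & 1 for i in reversed(range(m))]
    return " & ".join(f"M{i}" if b else f"!M{i}"
                      for i, b in zip(reversed(range(m)), bits))

assert min_mem_props(2) == 2           # two propositions store counts 0..3
assert mem(2, 0) == "!M1 & !M0"        # Mem_2(0)
assert mem(2, 2) == "M1 & !M0"         # Mem_2(2)
```
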

6.3.2. Task Generation with Metrics Based on Re-synthesis

The specification-generation method adds extra memory propositions, which increases controller complexity and may cause synthesis failure. Observing that removing the system goals of a specification does not change its synthesizability, we can instead use an external counter to record the metric n of each system goal, remove the fulfilled goals, and re-synthesize a new controller until all goals with metrics are fulfilled. This approach avoids adding memory propositions to the original specification and proceeds in three key steps:
  • Step 1. During controller operation, maintain a counter t_i ∈ ℕ initialized to t_i = n, which decreases by 1 each time the Actions/Motions state changes while Conditions = True, until t_i = 0 (indicating the goal has been fulfilled n times);
  • Step 2. Remove the fulfilled goal specification, refresh the initial states with the current states, then re-synthesize the modified specification to obtain a new controller;
  • Step 3. Iterate Steps 1–2 until all system goal specifications with metrics are removed.
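The three steps above form a simple loop around the synthesizer. In the sketch below, synthesize and run_until_fulfilled are hypothetical stand-ins for the GR(1) synthesizer and the counter-driven execution of Algorithm 5; the goal names echo Example 2.

```python
def resynthesis_loop(spec_goals, metrics, synthesize, run_until_fulfilled):
    """Iterate Steps 1-3: execute, drop each fulfilled metric goal, re-synthesize.

    spec_goals: goal identifiers; metrics[g] is goal g's fulfilment count n.
    synthesize(goals, init=None) -> controller (stubbed below)
    run_until_fulfilled(controller, g, n) -> state reached once g is done n times
    """
    controller = synthesize(spec_goals)
    while metrics:
        g = next(iter(metrics))                            # Step 1: track a goal
        state = run_until_fulfilled(controller, g, metrics.pop(g))
        spec_goals = [x for x in spec_goals if x != g]     # Step 2: drop goal,
        controller = synthesize(spec_goals, init=state)    # refresh init, re-synth
    return controller                                      # Step 3: repeat to done

# Stub harness: record synthesis calls instead of running a real synthesizer.
calls = []
def synthesize(goals, init=None):
    calls.append((tuple(goals), init))
    return f"ctrl{len(calls)}"
def run_until_fulfilled(controller, g, n):
    return f"state_after_{g}x{n}"

final = resynthesis_loop(["patrol", "Take2P", "Take1P"],
                         {"Take2P": 2, "Take1P": 1},
                         synthesize, run_until_fulfilled)
assert final == "ctrl3"                # initial synthesis + two re-syntheses
assert calls[-1][0] == ("patrol",)     # only the infinite patrol goal remains
```
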
We show the basic procedure of this method in Algorithm 5. Algorithm 5 requires the specification S, the synthesized FSM A of S, A's transition function φ_A : Q × 2^X → Q, the FSM state Q, the system goal set φ_g, the metric goal set φ_p = {g_1, g_2, …, g_n}, the corresponding parameter set P = {t_1, t_2, …, t_n}, and the memory state set M = {m_1, m_2, …, m_n}. The memory state set M stores the occurrence state of Actions/Motions.
In lines 5–6, we extract Conditions and Actions / Motions from each goal by using the function GetValue. In lines 7–8, if the fulfilment times of the current goal g i is 1, we test whether the propositions in state Q satisfy Cond i Act i via IsTrue. If True, the fulfilment times are reset to 0.
In lines 10–16, if the fulfilment times of g_i exceed 1, we check whether Q satisfies Cond_i ∧ Act_i. If True, Act_i is considered to have occurred under Cond_i, and m_i is set to 1; otherwise, if g_i has occurred, its fulfilment times decrease by 1 and m_i is set to 0.
In lines 17–25, we first check whether any counter in P reaches 0 (indicating eliminable goals). For each such case, we remove the corresponding elements from φ_p, M, S, and P (lines 18–23), then refresh the initial specifications of S with the current state Q via RefreshInit (line 24), and re-synthesize the modified S to generate a new FSM A (line 25). Finally, the state Q is updated via the transition function φ_A(Q, X) (lines 26–27).
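As an illustrative sketch (not the paper's implementation), the counter-and-re-synthesis loop of Algorithm 5 can be mimicked in Python, with a stubbed `synthesize` standing in for a GR(1) tool such as JTLV or Slugs; all names here are hypothetical:

```python
def synthesize(spec):
    """Placeholder for a GR(1) synthesizer returning an FSM (stubbed)."""
    return {"spec": list(spec)}

def run_with_metrics(spec, metric_goals):
    """
    spec         : list of goal specifications (strings)
    metric_goals : dict mapping a goal to its remaining fulfilment count t_i
    Each observed fulfilment decrements the goal's counter; when a counter
    reaches 0, the goal is removed and the controller is re-synthesized.
    """
    fsm = synthesize(spec)
    resyntheses = 0

    def on_goal_fulfilled(goal):          # called when Act_i occurs under Cond_i
        nonlocal fsm, resyntheses
        if goal not in metric_goals:
            return
        metric_goals[goal] -= 1           # Step 1: decrement counter t_i
        if metric_goals[goal] == 0:       # Step 2: goal fulfilled n times
            spec.remove(goal)             # drop the fulfilled goal ...
            del metric_goals[goal]
            fsm = synthesize(spec)        # ... refresh and re-synthesize
            resyntheses += 1

    return on_goal_fulfilled, lambda: (list(spec), resyntheses)

spec = ["Take2P", "Take1P", "PatrolR2R3"]
on_fulfilled, state = run_with_metrics(spec, {"Take2P": 2, "Take1P": 1})
on_fulfilled("Take2P"); on_fulfilled("Take2P")   # second visit to R4 removes Take2P
on_fulfilled("Take1P")
print(state())   # → (['PatrolR2R3'], 2): two re-syntheses, both metric goals removed
```

The loop terminates once every goal with a metric has been removed (Step 3), leaving only the unbounded goals such as patrol tasks.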
We use the specification in Listing 2 as an example. When first sending the instruction Take2P to the UAV, the UAV will visit R4 twice to capture two pictures while patrolling R2 and R3. During the second visit to R4, the value of action Pic becomes True, and the corresponding counter for goal Take2P decrements to 0 (indicating goal fulfillment). This triggers re-synthesis of a new controller by removing the specification for Take2P and updating the initial specification with the current state, as shown in Listing 4.
Algorithm 5: Re-synthesis algorithm
Drones 09 00353 i022
Listing 4. The specification to re-synthesize when goal Take2P is fulfilled.
Drones 09 00353 i004
Subsequently, after sending the instruction Take1P, the UAV visits R4 once to capture one picture, followed by another re-synthesis (specification in Listing 5). Table 11 demonstrates the reduction in FSM size after two re-synthesis processes, attributable to the elimination of fulfilled goal specifications.
Listing 5. The specification to re-synthesize when goal Take1P is fulfilled.
Drones 09 00353 i005

6.3.3. Advantages and Disadvantages of the Two Methods

As shown in Table 12, both methods exhibit distinct advantages and limitations.
The method based on specification generation is suitable for scenarios with few parameters and restricted parameter value ranges. This approach requires only a single synthesis iteration, which produces a more complex automaton structure while simplifying the scheduling process, albeit at the potential cost of higher computational resource consumption.
In contrast, the re-synthesis-based method imposes fewer constraints on both parameter quantity and value ranges. However, this flexibility necessitates multiple synthesis iterations and a more complex scheduling process.

6.3.4. Limitations

Our methods address the finite execution of goal specifications via regular expressions, yet three key limitations persist in practical design:
Limitation 1: Uncountable fulfilment times. To count the occurrences of some Actions/Motions, we need to detect transitions of Actions/Motions values from True to False. However, if certain Actions/Motions remain persistently True while their fulfilment times exceed 1, counting fails. This problem arises when the system goals contain only goals with metrics and the other completed goals prevent the last one from being counted (e.g., its values remain True indefinitely).
A naive solution involves adding specifications like □(Cond → ◇¬Act) to force state transitions. While synthesis of such specifications may enable counting through cyclic behavior (e.g., moving between regions in Figure 14), it risks triggering unintended Actions/Motions. For instance, suppose a UAV is designed to take off from R1 to visit R2 twice; the original system goal specification is uncountable (the UAV will take off from R1 and always stay in R2). Adding the goal specification ◇¬R2 to force the UAV to leave R2 makes this goal countable, but introduces unnecessary complexity and unreasonable behaviors. Fundamentally, unreasonable fulfilment time assignments exacerbate this issue. A pragmatic approach is to terminate tasks when solitary goal actions with metrics stagnate in True states, rather than awaiting improbable transitions.
Limitation 2: Strict fulfilment times. When setting the fulfilment times of a goal to n, our methods guarantee that its Actions/Motions act n will execute at least n times if conditions are satisfied. However, we cannot ensure execution occurs exactly n times, primarily due to the following:
  • Propositions of act n may appear in other specifications, potentially altering their values
  • Infeasibility of strict finite fulfilment for certain goals (e.g., in Figure 14, a UAV required to visit regions R1 and R3 four times cannot simultaneously visit R2 exactly twice)
For the first case, the specification generation method can be augmented with:
□(Mem(n) → ¬act_n)
If synthesizable, this enforces precise n-time execution.
For the second case, similar specifications like □(Mem(n) → ¬act_n) may cause unsynthesizability; thus, assigning rational metrics may ease this problem.
Limitation 3: Conditional Dependency Problem. When the Condition of goal specification B depends on the Actions/Motions of goal specification A (i.e., propositions in B’s condition also exist in A’s action set), if goal A’s fulfilment times m are less than goal B’s times n, this may prevent B from being fulfilled n times.
To mitigate this:
  • Avoid propositional dependencies through modular specification design
  • Enforce the constraint n ≤ m for dependent goals
  • Implement dependency checking via:
    DepCheck(A, B) = 1 if ∃p. p ∈ C_B ∩ Acts_A, and 0 otherwise
For instance, if a UAV’s goal B (n = 5) depends on goal A (m = 3), either increase m or decouple the dependencies.
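A minimal sketch of the DepCheck test above, assuming goals are represented by their condition and action proposition sets (the `dep_check` helper and the proposition names are illustrative):

```python
def dep_check(cond_B, acts_A):
    """Return 1 if some proposition in B's condition appears in A's action set."""
    return 1 if set(cond_B) & set(acts_A) else 0

# Goal A produces InjuredFound; goal B's condition references it, so B depends on A.
acts_A = {"InjuredFound", "Pic"}
cond_B = {"InjuredFound", "Guided"}
print(dep_check(cond_B, acts_A))       # → 1  (dependency: enforce n <= m)
print(dep_check({"Guided"}, acts_A))   # → 0  (no shared propositions)
```

When `dep_check` returns 1, the fulfilment times should satisfy n ≤ m before synthesis is attempted.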

7. Modeling and Description of UAS

7.1. UAS Modeling

The purpose of synthesis is to construct a reactive controller for the UAV, i.e., to generate execution trajectories τ ∈ T satisfying the given temporal specifications. To achieve this, we formalize a system model comprising UAVs and their environment as a GR(1) game.
Consider a multi-UAV system with agents U = {u_1, u_2, …, u_m}. For any UAV u_i ∈ U, the other UAVs are treated as dynamic environmental components, which means one UAV can obtain dynamic information from both the environment and the other UAVs. We then construct the Position, Action, and Sensor models for the multi-UAV system as follows (see Table 13) [5]:
Position Model. We assume the workspace is a planar polygon, denoted by a position coordinate set Z partitioned into finite convex polygonal zones {z_1, z_2, …, z_n}, where z_j ∩ z_k = ∅ for 1 ≤ j, k ≤ n and j ≠ k. The Boolean proposition p_i,j ∈ P_i is True iff UAV u_i is in zone z_j. The propositions in P_i are subject to a mutual exclusion constraint, namely, exactly one element of P_i is True at any step.
Action Model. UAVs must execute action sequences to satisfy system functionalities, including “patrolling”, “target navigation”, and “alarm triggering”. These actions are denoted by A_i, and a_i,k ∈ A_i is True if UAV u_i executes action a_i,k, such as “patrolling”. Here, we define Y_i = P_i ∪ A_i as the behavior set of u_i.
Sensor Model. The environment is perceived through UAV sensors (e.g., GPS, LiDAR). We abstract the sensors as a set of binary environment propositions X_i. UAV u_i can receive data from sensor x_i,n ∈ X_i, which holds True if it is triggered (e.g., the LiDAR senses surrounding obstacles).
In summary, we define X, P, and A as the sets of propositions {X_1, X_2, …, X_m}, {P_1, P_2, …, P_m}, and {A_1, A_2, …, A_m}, respectively, as well as Y = P ∪ A. Based on X and Y, the multi-UAV system model can be given with the sets of behaviors and sensors.
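To make the three models concrete, the following minimal Python sketch (with illustrative names, not the paper's implementation) encodes the position propositions with their mutual exclusion constraint, together with example action and sensor propositions:

```python
def position_props(uav, zones, current):
    """Position model: p_{i,j} is True iff UAV i is in zone j (mutual exclusion)."""
    return {f"p_{uav}_{z}": (z == current) for z in zones}

zones = ["z1", "z2", "z3"]
P1 = position_props(1, zones, "z2")      # P_1: UAV 1 currently in z2
assert sum(P1.values()) == 1             # exactly one position proposition holds

A1 = {"patrolling": True, "alarm": False}   # action model A_1
X1 = {"lidar_obstacle": False}              # sensor model X_1 (environment input)
Y1 = {**P1, **A1}                           # behavior set Y_1 = P_1 ∪ A_1

print(sorted(k for k, v in Y1.items() if v))   # → ['p_1_z2', 'patrolling']
```

The mutual exclusion assertion mirrors the constraint that exactly one element of P_i is True at any step; a GR(1) specification would encode the same constraint as system transition rules.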

7.2. Unmanned Aircraft System Description Language

Based on our framework and UAS model, to facilitate users in flexibly configuring hardware and software resources and in designing the properties and tasks of a multi-UAV system, we borrowed the concept of domain-specific languages (DSLs) and designed UAS-DL for our method.
Figure 15 illustrates the architecture of UAS-DL. A core feature lies in its capability to abstractly model heterogeneous UAS hardware/software resources as atomic propositions, which are directly applicable to the formal verification of UAS properties and task specifications.
The UAS-DL framework comprises four core components:
  • Sensor : Sensor information extracted from APIs of simulation platforms (e.g., ArduPilot and ROS)
  • Action : Actuator commands derived through Sensor data processing
  • Proposition : Propositional logic composed of the propositions from Sensor and Action
  • Specification : Specifications describing UAV priorities and tasks in LTL/MTL containing:
    RV : Runtime verification monitor specification describing UAV priorities
    SYN : GR(1) synthesis specification mainly describing UAV tasks
The RV specifications can be converted into monitors in the form of code or FSM, while the SYN specifications can be synthesized into the FSM controller.
The simplified syntax of UAS-DL (Table 14) comprises five core components, which correspond to the four parts of the UAS-DL architecture (the GeneralSetting and the UAVSetting together correspond to the Specification):
  • Sensor : Sensor definition composed of key words “Inport”, “OutPort”, “Var” and “Set
  • Action : Action definition composed of key words “InCmdport”, “InDataport”, “OutPort”, etc.
  • Proposition : Propositions definition composed of propositions defined by Sensor or  Action
  • GeneralSetting : Global constraints in LTL/MTL specification
  • UAVSetting : UAV constraints in LTL/MTL specification
Production rules are printed in bold, and printable contents are enclosed in “quotation marks”. We use words with the suffix ID (e.g., PpID, a proposition ID) to denote element names. Multiplicity is defined as follows:
  • *: Zero or more repetitions
  • +: At least one occurrence
  • ?: Optional element (zero or one instance)
To demonstrate UAS-DL syntax, we use the highlighting conventions:
  • Sensor, VAR, Set (UAS-DL keywords in blue)
  • SenName, POS (user-defined identifiers in green)
  • Lat, Lng (other language (e.g., F Prime Prime (FPP) [40]) predefined keywords in orange)
  • "// Sensor initialization" (code comments in gray)
In the following sections, we will give several examples to demonstrate the usage of each part of the UAS-DL.
Table 14. Syntax of UAS-DL.
UAS-DL syntax:
UAS-DL::=Sensor Action* Proposition* GeneralSetting UAVSetting+
Basic definitions:
Number::=(’0’ .. ’9’) + (’.’ (’0’ .. ’9’)+)?
ID...PpID, Param, Atomic::=[A-Za-z0-9]+
Message::=[A-Za-z0-9’∼]+
Code::=∼[ $ > ]+
Url::=[A-Za-z0-9./]+
Basic syntax:
TimeInterval::=’[’ Number ’:’ Number ’]’
Expression::=Expression (’&’ | ’|’ | ’,’) Expression | Expr (’>’ | ’<’ | ’ > = ’ | ’ < = ’ | ’=’) Expr
Expr::=Expr (’*’ | ’/’) Expr | Expr (’+’ | ’−’) Expr | Number | (ID | Message) | ’(’ Expr ’)’
LTL::="True" | "False" | Atomic | ’(’ LTL ’)’ | LTL (’&’ | ’|’ | ’ > ’ | ’ < > ’) LTL | (’!’ | ’∼’) LTL
| (’[]’ | ’G’ | ’ < > ’ | ’F’ | ’X’ | "Next") LTL | LTL ’U’ LTL
MTL::="True" | "False" | Atomic | ’(’ MTL ’)’ | MTL (’&’ | ’|’ | ’ > ’ | ’ < > ’) MTL| (’!’ | ’∼’) MTL
| (’[]’ | ’G’ | ’ < > ’ | ’F’ ) TimeInterval? MTL | (’X’ | "Next") MTL | MTL ’U’ TimeInterval? MTL
GR1::="True" | "False" | Atomic | Shared | ’(’ GR1 ’)’ | GR1 (’&’ | ’|’ | ’ > ’ | ’ < > ’) GR1 | (’!’ | ’∼’)
GR1| (’[]’ | ’G’ | ’ < > ’ | ’F’ | ’X’ | "Next") GR1
SpecParm::=’(’ Param? (’,’ Param)? | (ParamID:Param (’,’ ParamID:Param)*) ’)’
SpecExp::=GR1SpecParm?
Map::="MapInfo" ’:’ (Url | GR1) ’;’
Default::="Default" ’:’ ModeID ’;’
RvPriority::=“RvPriority” ’:’ Number ’;’
SYNPriority::=“SynPriority” ’:’ Number ’;’
EnvParms::="Env" ’:’ Atomic (’,’ Atomic)+ ’;’
SysParms::="Sys" ’:’ Atomic (’,’ Atomic)+ ’;’
Init::="Init:" (((’!’ | ’∼’)? Atomic) (’,’ ((’!’ | ’∼’)? Atomic))* SpecParm? ’;’)*
EnvRule::="EnvRule:" (GR1 ’;’)*
EnvGoal::="EnvGoal:" (GR1 ’;’)*
SysRule::="SysRule:" (SpecExp ’;’)*
SysGoal::="SysGoal:" (SpecExp ’;’)*
Monitor::=(MonitorID ’:’ (LTL | MTL) (’,’ CounterMeasureID)? (’,’ Number)? (’,’ Number)?
(’,’ ’N’|’C’|’T’)? (’,’ Number ’s’|’ms’)?’;’)*
RV::="RV" ’{’ Monitor ’}’
SYN::="SYN" ’{’ Default? EnvParms? SYSParms? Init? EnvRule? EnvGoal? SYSRule? SysGoal? ’}’
Sensor syntax:
Sensor::="Sensor" SenID ’{’
        ("InPort" ’:’ InID ’:’LogID (’,’ LogID)* ’;’)?
        "OutPort" ’:’ OutID (’,’ OutID)* ’;’
        ("VAR" ’:’ VarID (’,’ VarID)* ’;’)?
        "Set" ’{’ (SetID ":=" Expression ’;’ | ’EmbCode’ ’[’ CodeID ’]’ (’*’ | ’+’)?
        ’:’ (LogID(’*’)? (’,’ LogID(’*’)?)*)? " < $ " Code " $ > ;")+ ’}’
       ’}’
Action syntax:
Action::="Action" ActID ’{’
        "InCmdport" ’:’ CmdID (’,’ CmdID)* ’;’
        ("InDataport" ’:’ (DataID ’:’LogID (’,’ LogID)* ’;’)*)?
        ("OutPort" ’:’ OutID (’,’ OutID)* ’;’)*
        ("VAR" ’:’ VarID (’,’ VarID)* ’;’)?
        ("CMD" ’{’ (CmdID ’:’ < $ " Code " $ > ;")* ’}’)*
        "Set" ’{’ (SetID ":=" Expression ’;’ | "EmbCode" ’[’ CodeID ’]’ (’*’ | ’+’)?
        ’:’ (CmdID | LogID)(’*’)? (’,’ (CmdID | LogID)(’*’)?)* " < $ " Code " $ > ;")+ ’}’
       ’}’
Proposition syntax:
Proposition::="Proposition" PropsID ’{’ (PropID ’:’ PpID ((’,’ | ’&’ | ’|’) PpID)* ’;’)+ ’}’
GeneralSetting syntax:
GeneralSetting::="General" ’{’Map Default RvPriority? SYNPriority? RV? SYN? ’}’
UAVSetting syntax:
UAVSetting::="UAV" UAVID ’{’ Default? RV? SYN? ’}’

7.2.1. Sensor

Sensor is designed to process and convert the extracted UAS running trajectories (e.g., UAS logs) into Boolean expressions that can be directly used in propositions and specifications. In order to be compatible with the F’ development framework, the syntax of Sensor is designed to support fast conversion into F’ components. As shown in Listing 6, a sensor starts with the keyword Sensor followed by a sensor name. Additional keywords include InPort, OutPort, VAR, Set, and EmbCode.
InPort is used to define the input hardware and software resources, which are mainly obtained from the sensors, ROS topics, and UAS logs. For convenience, we first define the commonly used software and hardware resources of UAS-DL in FPP. Listing 7 shows position information defined in FPP based on data extracted from ArduPilot logs. Using this predefined port information, we can conveniently define the input variables. As an example, to define the latitude and longitude variables (Listing 6, line 5), we analyze the UAS logs to identify related entries (e.g., GPS sensor recordings), define the log items (Lat, Lng) in FPP, and finally, reference these predefined log data.
Listing 6. Demo code of the keyword Sensor.
Drones 09 00353 i006
Listing 7. Demo code of port in FPP. 
Drones 09 00353 i007
OutPort is used to define the output variables that can serve as input variables for other components. For example, the output port variables (Listing 6, line 9) denote the following: InNe: whether the UAV is in the northeast of the origin; Guided: whether the flight mode is set to Guided; Rover: whether the flight mode is set to Rover; TakeOff: whether the UAV has taken off; OutOfCircle/OutOfRange: whether the UAV is within or outside a circular range.
VAR is used to define the intermediate constants computed by the embedded code. For example, variable Var1 (Listing 6, line 11) stores the value generated in the embedded code (Listing 6, line 28).
Set establishes relationships between the input and output variables. For simple cases, the output variable name is entered followed by a colon and an expression combining the inputs and operators. As an example, verifying whether the UAV is northeast of the origin (longitude: 116.3913447, latitude: 39.9053936) uses the expression:
Lat − 39.9053936 > 0 && Lng − 116.3913447 > 0
(Listing 6, line 15), where the expression must evaluate to a Boolean result.
EmbCode can embed other programming languages (e.g., Python, Java, and C/C++) in UAS-DL to describe complex relationships more efficiently. As shown in Listing 6 (line 20), users must first specify the language type (e.g., Python), the code execution mode (default: execute once; * for zero or more repetitions; + for at least one repetition), and the input variables (marked with * to trigger execution only when these variables change). The embedded code is written between the delimiters <$ and $> (Listing 6, lines 21–29). The Python code in this example takes Lat and Lng as inputs to determine whether the UAV is outside a circular area centered at (39.9053936, 116.3913447) with a 1-unit radius. The results are printed in the format “UASDL:variable_name=value” for the parser to extract. The output variables (e.g., OutOfCircle) and intermediate variables (e.g., Var1) follow this convention.
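As a hedged illustration of what such an EmbCode block might contain, the following self-contained Python fragment performs the circle check described above and prints results in the UASDL:variable_name=value convention (the `emit` helper and its constants are hypothetical, not the paper's listing):

```python
import math

# Circle center and radius from the running example (degrees / abstract units).
CENTER_LAT, CENTER_LNG, RADIUS = 39.9053936, 116.3913447, 1.0

def emit(Lat, Lng):
    # True when the UAV is outside the circle around the origin point.
    dist = math.hypot(Lat - CENTER_LAT, Lng - CENTER_LNG)
    out_of_circle = dist > RADIUS
    var1 = round(dist, 6)                         # intermediate value (cf. VAR Var1)
    print(f"UASDL:OutOfCircle={out_of_circle}")   # parsed back by the UAS-DL runtime
    print(f"UASDL:Var1={var1}")
    return out_of_circle

emit(39.91, 116.40)    # well inside the 1-unit radius → OutOfCircle=False
emit(41.00, 116.39)    # far north of the center       → OutOfCircle=True
```

The parser only needs to scan stdout for the `UASDL:` prefix, so the embedded language is free to compute the values however it likes.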

7.2.2. Action

Action is designed to define the basic action, motion or countermeasure, including a sequence of UAS console commands or ROS API programs. As shown in Listing 8, an action starts with a keyword Action followed by a name. Additional keywords include InCmdPort, InDataPort, OutPort, CMD, VAR, Set, and EmbCode.
InCmdPort defines the Boolean input commands that trigger the control signals. For example, the input commands in Listing 8, line 3, include CmdTakeOff (takeoff command), CmdGo (navigation command), and CmdClimb10m (altitude adjustment command).
InDataPort defines the input data variables required for action execution. These variables are typically sourced from the sensors, actions, ROS topics, UAS logs, or GCS. For instance, the GoTo action relies on GCS-provided coordinates: (Lat, Lng, Alt).
OutPort defines the output variables for inter-component communication. Complex actions may depend on others; thus, the output variables enable data exchange between actions. For example, in Listing 8 (line 8), the variables are ArmThrottle (throttle arming) and TakeOffTo(40) (takeoff altitude: 40 m). The command TakeOffTo and the parameter 40 are connected to the InCmdPort and InDataPort ports of the sub-action component, respectively.
VAR declares parameters identical to those in the Sensor component.
CMD defines the UAV console commands. By sequencing predefined commands between <$ and $>, users generate executable command streams. For example, in Listing 8 (lines 12–25): ModeGuided (switches flight mode to guided), GoTo (navigates to target coordinates), and Delay (inserts timed pauses).
Set configures action triggers via input variable changes. Logical operators (&(and), |(or), ,(sequence)) combine commands. For example, CmdTakeOff (Listing 8, line 29) executes: (1) ArmThrottle (arm throttle), (2) ModeGuided (activate guided mode), (3) Delay(5) (5-second delay), (4) TakeOffTo (40) (ascend to 40 m).
EmbCode embeds other languages identically in Sensor.
Listing 8. Demo code of the keyword Action.
Drones 09 00353 i008

7.2.3. Proposition

Proposition defines complex propositions by combining sensors and actions (Listing 9). For example, RiskyCMD captures dangerous UAV instructions through a combination of three sensors, including Calibrate and Disarm; Act_TakeOff implements an action sequence similar to the TakeOff sequence defined in Listing 8 (line 29) through a combination of four actions.
Listing 9. Demo code of the keyword Proposition.
Drones 09 00353 i009

7.2.4. General

General defines the core configurations for multi-UAV systems (e.g., drone swarms), including the global properties and task parameters. As shown in Listing 10, it begins with the keyword General and supports 13 other keywords: MapInfo, RvPriority, SynPriority, RV, SYN, Default, Env, Sys, Init, EnvGoal, SysRule, SysGoal, and DefaultRule.
Listing 10. Demo code of the keyword General.
Drones 09 00353 i010
MapInfo defines the geospatial constraints through LTL specifications that describe the connectivity of each region on the map or external configuration files (e.g., "map_config.ltl").
Default sets the default action taken under extreme situations (e.g., communication loss or mission abort conditions).
RvPriority sets the default priority for countermeasures.
SynPriority sets the default priority for actions/motions.
RV defines the safety and task properties for UAVs. Each property contains six components:
  • Property variable: used as a sensor proposition in task specifications
  • Specification: an LTL/MTL formula describing the monitored behavior
  • Countermeasure: the action triggered on property violation
  • Type: countermeasure types (C (Critical), N (Non-critical), and T (Termination))
  • Priority: an integer (1–10) determining scheduling precedence (higher values indicate urgency)
  • RefreshRate: monitor state reset interval (default: ∞, i.e., never refreshed)
For example, in Listing 10, line 15, the specification ObstacleAvoid mandates a minimum obstacle distance of 5 m after takeoff. Violations trigger the non-critical Avoidance action (priority 10), resetting the monitor state within 100 ms.
SYN defines the global task specifications for a multi-UAV system based on the GR(1) specification. Given a task description s, its semantics are structured as follows:
  • Env: X , i.e., the set of environment input propositions.
  • Sys: Y , i.e., the set of system output propositions.
  • Init: φ_i^e ∧ φ_i^s, i.e., the initial states of both the environment and the UAS.
  • EnvRule: φ_t^e, i.e., the transition constraints of the environment.
  • EnvGoal: φ g e , i.e., the environmental behavioral objectives.
  • SysRule: φ t s , i.e., transition constraints of UAS.
  • SysGoal: φ g s , i.e., mission goals for UAVs.
DefaultRule enforces the default UAV compliance properties. If all propositions of a rule exist in a UAV’s specifications, the rule is directly added; partial matches trigger Disjunctive Normal Form (DNF) conversion of the shared propositions.
The operator □ in EnvRule/SysRule and □◇ in EnvGoal/SysGoal may be omitted for brevity, as the parser completes them automatically. Operator precedence (strongest to weakest):
¬, X, &, |, →, ↔
All binary operators associate left-to-right (e.g., a → b → c ≡ (a → b) → c).

7.2.5. UAV

UAV defines UAV-specific properties and tasks, inheriting global configurations from General. As shown in Listing 11, it begins with the keyword UAV followed by the UAV name, and supports 10 other keywords: Default, RV, SYN, Env, Sys, Init, EnvRule, EnvGoal, SysRule, and SysGoal, which have the same usage as the keywords in General.
As shown in Listing 12, to express the fulfilment times or action priority of a specification, we can append the parameter “(ActName:6)” or “(3,1)” to the specification, where “3” denotes the fulfilment times and “6”/“1” are action priorities. To support multi-UAV cooperation, a UAV may reference other UAVs’ propositions in its task specifications; e.g., UAV1_InjuredInR1 (Listing 12, line 7) represents the proposition InjuredInR1 of UAV1 defined in Listing 11.
Listing 11. Demo code of the keyword UAV.
Drones 09 00353 i011
Listing 12. An example of parameter settings and shared propositions of a UAV.
Drones 09 00353 i012

8. Implementation and Experiments

8.1. System Architecture Implementation

We implemented our framework using the F’ framework, a free, open-source flight software solution developed by NASA’s Jet Propulsion Laboratory (JPL) [40]. F’ provides: (1) a component-based architecture that decomposes systems into discrete components with port-based structured communication, (2) a C++ framework offering core capabilities, such as message queues, thread management, and real-time scheduling, (3) model-driven development tools for automatic code generation from components, (4) an extensible library of pre-built components for rapid deployment, (5) integrated testing utilities supporting unit-level and integration-level verification.
A set of component instances and their connections that form a directed graph is called a topology. Figure 16 shows our simplified running model topology with 10 components. Component Data Parser parses the runtime information (e.g., UAV logs). Component Input APIs converts the parsed data into Boolean values of propositions (e.g., sensor propositions) and sends it to components Controller and Monitor. Component Monitor tracks the UAS operational status based on inputs from Input APIs, containing two types: critical and non-critical. The output ports of the critical and non-critical monitors are connected to the input ports of components Controller and Scheduler, respectively. Component Controller parses the synthesized automaton using inputs from Input APIs and RV, then sends the action/motion states to the Scheduler. Component Scheduler prioritizes UAV events based on their priority and parallel execution constraints. Component Output APIs translates the event values into specific UAV actions, motions or countermeasures. Components BlockDrv, RateGroupDriver, RateGroup1Comp, RateGroup2Comp, and RateGroup3Comp generate clock signals of 1 Hz, 100 Hz, and 1000 Hz to drive other components.

8.2. Generation of Monitor and Controller

Controller specifications are defined using GR(1) temporal logic, while monitor specifications employ LTL/MTL for real-time monitoring. The generation methodology comprises the following:

8.2.1. LTL Monitor Synthesis

Given an LTL formula φ , we utilize the LTL2BA compiler [47] to generate a corresponding Büchi automaton A φ = ( Q , Σ , δ , q 0 , F ) , where:
  • Q: Finite state set {(0, 0), (−1, 1), (1, −1)}
  • Σ = 2^{TakeOff, AbnCmd, LandOn}: Propositional interpretations
  • δ: Q × Σ → Q: Transition function
  • q_0 = (0, 0): Initial state
  • F = {(−1, 1), (1, −1)}: Final state set ((−1, 1): violation state; (1, −1): acceptance state)
Consider the security property: (TakeOff → ¬AbnCmd) U LandOn
As shown in Figure 17, demo state transitions occur when (0, 0) →(TakeOff ∧ AbnCmd)→ (−1, 1) (violation) and (0, 0) →(LandOn)→ (1, −1) (acceptance). The monitor reports a violation iff it reaches a violation state, triggering the runtime enforcement mechanisms.
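The three-state monitor described above can be sketched as follows; the state labels mirror the paper's convention ((0, 0) inconclusive, (−1, 1) violation, (1, −1) acceptance), while the `make_monitor`/`step` names are illustrative:

```python
def make_monitor():
    """Runtime monitor for (TakeOff -> !AbnCmd) U LandOn over Boolean samples."""
    state = (0, 0)

    def step(TakeOff=False, AbnCmd=False, LandOn=False):
        nonlocal state
        if state != (0, 0):               # verdicts are final once reached
            return state
        if LandOn:
            state = (1, -1)               # acceptance: property satisfied
        elif TakeOff and AbnCmd:
            state = (-1, 1)               # violation: trigger countermeasure
        return state

    return step

step = make_monitor()
print(step(TakeOff=True))                  # → (0, 0)  still inconclusive
print(step(TakeOff=True, AbnCmd=True))     # → (-1, 1) abnormal command in flight
```

Because the verdict states are absorbing, each new observation is a constant-time table lookup, which is what makes this style of monitoring lightweight enough to run on board.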

8.2.2. MTL Monitor Synthesis

For an MTL specification, we utilize the PyMTL library [48] to synthesize runtime monitors. Consider the safety-critical requirement □(Guided → (¬◇[0,10] NonNaviCmd) ∧ (NonNaviCmd → ◇[0,5] CpuUse < α%)). It can be expressed in the Python code shown in Figure 18: we first define the corresponding formula, with the recent values of the relevant variables stored in a list named data; then, by calling the corresponding function of the MTL library, we realize real-time monitoring of the MTL properties.
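Independently of the PyMTL API, the bounded-eventually check underlying such properties can be illustrated with a few lines of plain Python over timestamped samples (ALPHA and the sample trace are made-up values): after a NonNaviCmd at time t, CpuUse must fall below α within 5 s.

```python
ALPHA = 80.0  # hypothetical CPU-usage threshold (percent)

def eventually_within(samples, t, horizon, pred):
    """Check the bounded MTL operator <>_[0,horizon] pred at time t
    over a list of (timestamp, value) samples."""
    return any(pred(v) for (ts, v) in samples if t <= ts <= t + horizon)

# (seconds, CpuUse %) trace; a NonNaviCmd is assumed at t=0 and again at t=6.
cpu = [(0, 95.0), (2, 91.0), (4, 70.0), (9, 96.0)]

print(eventually_within(cpu, 0, 5, lambda v: v < ALPHA))  # → True  (70.0 at t=4)
print(eventually_within(cpu, 6, 5, lambda v: v < ALPHA))  # → False (only 96.0 in [6,11])
```

A real monitor would evaluate this incrementally over a sliding window rather than rescanning the trace, but the verdict computed per trigger event is the same.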

8.2.3. Generation of Reactive Controller

The task specification in UAS-DL is formalized as a GR(1) temporal logic formula. Utilizing synthesis tools like JTLV [49] or Slugs [50], we generate an FSM that demonstrably satisfies the specification. By parsing this FSM, we obtain a controller for the task specification. For example, Figure 19 shows part of the FSM generated by JTLV from the specification in Listing 13.

8.3. Experiment

To further explain the application of our methods, we illustrate them with two practical cases.

8.3.1. Multi-UAV Rescue Experiment

Example 3. 
In the map shown in Figure 20, three UAVs are deployed to search for and airdrop supplies to injured individuals. The target-searching UAVs U1 and U2, equipped with high-definition cameras, collaboratively patrol regions R1–R7. Their coverage areas overlap at R3 and R4, which are the only possible locations of the injured. The rescue drone U3, carrying medical supplies and a standard-resolution camera, activates when U1/U2 detects the injured. Upon confirmation, U3 navigates to the coordinates of the injured found by U1/U2 for a precision airdrop. The global configuration follows the specifications in Listing 10, with modifications limited to the geospatial parameters.
The following specifications are designed for each UAV in UAS-DL:
As shown in Listing 13, U1 takes off at StartZone and patrols regions R1–R4 in Guided mode three times. When locating the injured, it notifies U3 via shared variables (lines 18–21). U3 integrates a GpsSpoof monitor that detects GPS spoofing attacks. Upon identifying a spoofing attack, U3 switches to Hover mode. Since Hover and Move are mutually exclusive events, Hover is designated as a critical countermeasure, with its related specification added automatically (lines 12–16). Therefore, the specification □(GpsSpoof → Hover) in line 12 defines the relation between the monitor and its countermeasure, where GpsSpoof is an environment proposition and Hover a system proposition.
The specifications (lines 14, 16) associated with GpsSpoof and Hover are extracted from the predefined operation property specifications in the general specification (Listing 10, lines 36–37). Other basic specifications inherited from Listing 10 are omitted here.
Listing 13. Specifications for U1
Drones 09 00353 i013
As shown in Listing 14, U2 shares functional similarities with U1, the key distinctions being its patrol coverage of regions R3–R7 and its specialized DoS attack detection capability, which replaces the GpsSpoof detection module. The DoS countermeasure executes in parallel with the primary tasks and is classified as non-critical.
Listing 15 specifies that U3 initiates takeoff at StartZone in Guided mode. When U1/U2 detects the injured, U3 activates Guided navigation to the target coordinates and airdrops the medical supplies. Listing 16 shows the generated specifications with execution-cycle counters, which limit U3 to visiting R3/R4 at most once.
We simulated the mission scenario using ArduPilot, Gazebo, and the ROS platforms.
Figure 21 demonstrates the detection of abnormal command attacks on UAVs. During flight, attackers may exploit system vulnerabilities to remotely issue abnormal commands for adversarial purposes. To address these security risks, our RV monitor analyzes whether abnormal behavior (e.g., significant oscillations) occurs in the UAV flight dynamics upon receiving abnormal commands, which serves as an indicator of potential attacks. The formal specification of AbnormalCmd can be expressed as follows:
□(TakeOff → ¬(AbnormalCmd ∧ ◇[0, 200] AbnBehavior)) (Listing 10, line 17)
To enhance detection efficiency, the sensor (AbnormalCmd) communicates with an independent FPGA module via API for detecting significant oscillations (AbnBehavior) [30].
Listing 14. Specifications for U2.
Drones 09 00353 i014
Listing 15. Specifications for U3.
Drones 09 00353 i015
Listing 16. Whole specification for U3 after adding generated metrics specifications.
Drones 09 00353 i016
Figure 21a illustrates the comparison of pitch angle variations between normal flight conditions (upper) and attacked scenarios (lower). Oscillation phenomena can be detected through periodic changes in the UAV’s pitch angle, which serves as a critical indicator for attack detection. For signal processing of oscillatory characteristics, we employ the Fast Fourier Transform (FFT) IP module provided by Altera to perform frequency domain analysis.
Figure 21. (a) Comparison of pitch angle variations between normal flight (upper) and attack flight (lower). (b) Comparison of oscillation spectrum between normal flight (upper) and attack flight (lower). (c) Pitch variations of attacked flight.
Drones 09 00353 g021
With a sampling period of 40 ms (25 Hz sampling frequency) for pitch angle signals, the frequency spectrum under attack (Figure 21b, upper) exhibits a distinct frequency component at the 32nd frequency index (indicated by an arrow), in contrast to the spectrum under normal flight conditions (Figure 21b, lower). The oscillation frequency is calculated as 0.78125 Hz using the formula f = m · f_s / N, where m = 32 (target frequency index), f_s = 25 Hz (sampling frequency), and N = 1024 (sampling points). This calculated frequency aligns with the empirical oscillation measurements.
The attack detection trigger is visually confirmed by the arrow marker in Figure 21c, demonstrating effective monitoring capability.
Figure 22a illustrates the UAV patrol routes in Gazebo: U1 (green arrow-headed trajectory), U2 (blue arrow-headed trajectory), and U3 (yellow arrow-headed trajectory). U1 and U2 each patrol their designated zones three times, while U3 performs a single medical supply airdrop at the injured person's location identified by U2.
Figure 22b shows U3 executing the medical supply airdrop to the injured person located by U2, using the YOLOV5 detection module.
Experimental results demonstrate that the GR(1) controller and RV monitor can guide UAVs to operate according to the given specifications while maintaining safety/security properties.

8.3.2. Contrast Experiment with Example 1

We redesigned the specification of Example 1 in UAS-DL as shown in Listing 17. The monitoring functions are implemented as a decoupled module in the RV layer, while the controller specification focuses on the core task. Figure 23 shows the comparison test result between Listings 1 and 17, with the number of monitored risky commands (Listing 17, line 7) varying from 1 to 7.
Figure 23a demonstrates the FSM state comparison. The controller size in Listing 1 increases exponentially and becomes unsynthesizable at five risky commands, whereas the architecture in Listing 17 exhibits sustained linear growth.
Similarly, Figure 23b reveals temporal characteristics: synthesis time for Listing 1 escalates exponentially until failure at five commands, while Listing 17 exhibits a stable controller synthesis time and linearly increasing monitor generation time.
The experimental results validate that our methodology achieves loose coupling between the monitor and controller components, enabling efficient deployment of multiple property monitors and more complex UAV task design.
Listing 17. Redesigned specifications of Example 1 in UAS-DL. 

9. Conclusions

This paper proposed a unified monitor and controller synthesis method to generate secure controllers for a complex UAS, combining the advantages of RV and reactive synthesis. To design UAS properties and tasks more efficiently, we developed a pattern for describing UAS properties and a domain-specific language (UAS-DL) for modeling the UAS. We also provided specific solutions for priority-based scheduling of UAS events and for task generation with metrics. Our experimental results demonstrate that the proposed method can generate secure UAS controllers, guarantee more UAS safety and security properties, and require less synthesis time. Further research is needed to explore the full potential of our method, to provide advanced language features (e.g., probabilistic task modeling) for UAS-DL, and to optimize the scheduler's efficiency under dynamic resource constraints.

Author Contributions

Conceptualization, W.D.; methodology, D.Y.; validation, D.Y. and W.L.; writing—original draft preparation, D.Y. and W.L.; supervision, S.L. and Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China under Grant 62032019.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Javaid, A.Y.; Sun, W.; Devabhaktuni, V.K.; Alam, M. Cyber security threat analysis and modeling of an unmanned aerial vehicle system. In Proceedings of the 2012 IEEE Conference on Technologies for Homeland Security (HST), Waltham, MA, USA, 13–15 November 2012; pp. 585–590.
2. Lee, Y.S.; Kang, Y.J.; Lee, S.G.; Lee, H.; Ryu, Y. An Overview of Unmanned Aerial Vehicle: Cyber Security Perspective. In Proceedings of the IT Convergence and Security (ICITCS), Prague, Czech Republic, 26 September 2016; pp. 128–131.
3. Daojing, H.; Xiao, D.; Yinrong, Q.; Yaokang, Z.; Qiang, F.; Wang, L. A Survey on Cyber Security of Unmanned Aerial Vehicles. Chin. J. Comput. 2019, 42, 1076–1094.
4. Hooper, M.; Tian, Y.; Zhou, R.; Cao, B.; Lauf, A.P.; Watkins, L.; Robinson, W.H.; Alexis, W. Securing commercial WiFi-based UAVs from common security attacks. In Proceedings of the MILCOM 2016—2016 IEEE Military Communications Conference, Baltimore, MD, USA, 1–3 November 2016; pp. 1213–1218.
5. Lu, W.; Shu, S.; Shi, R.; Li, R.; Dong, W. Synthesizing Secure Reactive Controller for Unmanned Aerial System. In Proceedings of the 2019 6th International Conference on Dependable Systems and Their Applications (DSA), Harbin, China, 3–6 January 2020.
6. Leucker, M.; Schallhart, C. A Brief Account of Runtime Verification. J. Log. Algebr. Program. 2009, 78, 293–303.
7. Schumann, J.; Moosbrugger, P.; Rozier, K.Y. R2U2: Monitoring and Diagnosis of Security Threats for Unmanned Aerial Systems; Springer International Publishing: Berlin/Heidelberg, Germany, 2015.
8. Falcone, Y.; Mounier, L.; Fernandez, J.C.; Richier, J.L. Runtime Enforcement Monitors: Composition, synthesis, and enforcement abilities. Form. Methods Syst. Des. 2011, 38, 223–262.
9. Pnueli, A.; Rosner, R. On the Synthesis of an Asynchronous Reactive Module. In Proceedings of the ICALP; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 1989; Volume 372, pp. 652–671.
10. Bloem, R.; Jobstmann, B.; Piterman, N.; Pnueli, A.; Sa'ar, Y. Synthesis of Reactive(1) designs. J. Comput. Syst. Sci. 2012, 78, 911–938.
11. Maoz, S.; Ringert, J.O. GR(1) synthesis for LTL specification patterns. In Proceedings of the ESEC/SIGSOFT FSE, Bergamo, Italy, 30 August–4 September 2015; ACM: New York, NY, USA, 2015; pp. 96–106.
12. Shi, H.; Dong, W.; Li, R.; Liu, W. Controller Resynthesis for Multirobot System When Changes Happen. Computer 2020, 53, 69–79.
13. Shi, H.; Li, R.; Liu, W.; Dong, W.; Zhou, G. Iterative Controller Synthesis for Multirobot System. IEEE Trans. Reliab. 2020, 69, 851–862.
14. Tirpak, J.A. The RPA Boom. Air Force Magazine 2010, 93, 36–40.
15. Pachter, M.; Chandler, P.R. Challenges of autonomous control. IEEE Control Syst. Mag. 1998, 18, 92–97.
16. Boskovic, J.; Prasanth, R.; Mehra, R. A Multi-Layer Architecture for Intelligent Control of Unmanned Aerial Vehicles. In Proceedings of the 1st UAV Conference, Portsmouth, VA, USA, 20–23 May 2002.
17. Antsaklis, P.; Passino, K. An Introduction to Intelligent and Autonomous Control; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1993.
18. Vincent, P.; Rubin, I. A framework and analysis for cooperative search using UAV swarms. In Proceedings of the 2004 ACM Symposium on Applied Computing (SAC), Nicosia, Cyprus, 14–17 March 2004.
19. Schumacher, C.; Chandler, P.R.; Rasmussen, S.R. Task allocation for wide area search munitions. In Proceedings of the American Control Conference, Anchorage, AK, USA, 8–10 May 2002.
20. Nygard, K.E.; Chandler, P.R.; Pachter, M. Dynamic network flow optimization models for air vehicle resource allocation. In Proceedings of the 2001 American Control Conference, Arlington, VA, USA, 25–27 June 2001.
21. Schumacher, C.; Chandler, P.; Pachter, M.; Pachter, L. UAV Task Assignment with Timing Constraints via Mixed-Integer Linear Programming. In Proceedings of the AIAA 3rd "Unmanned Unlimited" Technical Conference, Workshop and Exhibit, Chicago, IL, USA, 20–23 September 2004.
22. Akselrod, D.; Sinha, A.; Kirubarajan, T. Collaborative distributed sensor management for multitarget tracking using hierarchical Markov decision processes. Int. Soc. Opt. Eng. 2007, 6699, 669912.
23. Zong-Xin, Y.; Ming, L.I.; Zong-Ji, C. Mission Decision-making Method of Multi-aircraft Cooperative Attack Multi-object Based on Game Theory Model. Aeronaut. Comput. Tech. 2007, 37, 7–11.
24. Zhang, Z.; Jiang, J.; Haiyan, X.U.; Zhang, W.A. Distributed dynamic task allocation for unmanned aerial vehicle swarm systems: A networked evolutionary game-theoretic approach. Chin. J. Aeronaut. 2024, 37, 182–204.
25. Kress-Gazit, H.; Fainekos, G.E.; Pappas, G.J. Temporal-Logic-Based Reactive Mission and Motion Planning. IEEE Trans. Robot. 2009, 25, 1370–1381.
26. Li, Z.; Xu, C.; Wu, Z.R. Deep reinforcement learning based trajectory design and resource allocation for task-aware multi-UAV enabled MEC networks. Comput. Commun. 2024, 213, 88–98.
27. Madan, B.; Banik, M.; Bein, D. Securing unmanned autonomous systems from cyber threats. J. Def. Model. Simul. Appl. Methodol. Technol. 2016, 16, 119–136.
28. Thing, V.L.L.; Wu, J. Autonomous Vehicle Security: A Taxonomy of Attacks and Defences. In Proceedings of the 2016 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), Chengdu, China, 15–18 December 2016; pp. 164–170.
29. Krishna, C.G.L.; Murphy, R.R. A review on cybersecurity vulnerabilities for unmanned aerial vehicles. In Proceedings of the 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), Shanghai, China, 11–13 October 2017; pp. 194–199.
30. Dong, Y.; Hao, S.; Wei, D.; Lin, L.Z.; Ge, Z. Security and Safety Threats Detection for Unmanned Aerial System Based on Runtime Verification. J. Softw. 2018, 29, 1360–1378.
31. Schneider, F.B. Enforceable Security Policies. ACM Trans. Inf. Syst. Secur. 2000, 3, 30–50.
32. Finucane, C.; Jing, G.; Kress-Gazit, H. LTLMoP: Experimenting with language, Temporal Logic and robot control. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010.
33. Kress-Gazit, H.; Fainekos, G.E.; Pappas, G.J. From structured English to robot motion. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007.
34. Maoz, S.; Ringert, J.O. Spectra: A specification language for reactive systems. Softw. Syst. Model. 2021, 20, 1553–1586.
35. Pnueli, A. The temporal logic of programs. In Proceedings of the 18th Annual Symposium on Foundations of Computer Science (SFCS 1977), Providence, RI, USA, 31 October–2 November 1977; pp. 46–57.
36. Piterman, N.; Pnueli, A.; Sa'ar, Y. Synthesis of Reactive(1) Designs. In Verification, Model Checking, and Abstract Interpretation; Emerson, E.A., Namjoshi, K.S., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 364–380.
37. Maler, O.; Nickovic, D. Monitoring Temporal Properties of Continuous Signals. In International Symposium on Formal Techniques in Real-Time and Fault-Tolerant Systems; Springer: Berlin/Heidelberg, Germany, 2004.
38. Alur, R.; Henzinger, T.A. Real-Time Logics: Complexity and Expressiveness; Stanford University: Stanford, CA, USA, 1990.
39. Reinbacher, T.; Rozier, K.Y.; Schumann, J. Temporal-Logic Based Runtime Observer Pairs for System Health Management of Real-Time Systems. In Proceedings of the TACAS, Grenoble, France, 5–13 April 2014.
40. Bocchino, R.; Canham, T.K.; Watney, G.; Reder, L.; Levison, J. F Prime: An Open-Source Framework for Small-Scale Flight Software Systems. In Proceedings of the 32nd Annual Small Satellite Conference, Logan, UT, USA, 9 August 2018.
41. Yang, D.; Dong, W.; Lu, W.; Dong, Y.; Liu, S. Vulnerabilities Analysis and Secure Controlling for Unmanned Aerial System Based on Reactive Synthesis. arXiv 2024, arXiv:2411.07741.
42. Liu, W.; Wei, J.; Liang, M.; Cao, Y.; Hwang, I. Multi-sensor fusion and fault detection using hybrid estimation for air traffic surveillance. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 2323–2339.
43. Yuanchao, M. The Research of Navigation and Control System for Quad-Rotor Aerial Vehicle. Ph.D. Thesis, Harbin Engineering University, Harbin, China, 2013.
44. Jinping, B.; Li, P.; Hu, G. The Design of Flight Surveillance and Control System Software for Unmanned Rotorcraft's Ground Control Station. Artif. Intell. Robot. Res. 2015, 4, 1–7.
45. Vasconcelos, G.; Carrijo, G.; Miani, R.; Souza, J.; Guizilini, V. The impact of DoS attacks on the AR.Drone 2.0. In Proceedings of the IEEE 2016 XIII Latin American Robotics Symposium and IV Brazilian Robotics Symposium (LARS/SBR), Recife, Brazil, 8–12 October 2016; pp. 127–132.
46. Dey, V.; Pudi, V.; Chattopadhyay, A.; Elovici, Y. Security vulnerabilities of unmanned aerial vehicles and countermeasures: An experimental study. In Proceedings of the IEEE 2018 31st International Conference on VLSI Design and 2018 17th International Conference on Embedded Systems (VLSID), Pune, India, 6–10 January 2018; pp. 398–403.
47. Gastin, P.; Oddoux, D. Fast LTL to Büchi automata translation. In Proceedings of the 13th International Conference on Computer Aided Verification, Copenhagen, Denmark, 27–31 July 2002.
48. Vazquez-Chanlatte, M. mvcisback/py-metric-temporal-logic: v0.1.1. 2023. Available online: https://github.com/mvcisback/py-metric-temporal-logic (accessed on 1 December 2023).
49. Pnueli, A.; Sa'ar, Y.; Zuck, L.D. JTLV: A Framework for Developing Verification Algorithms. In Proceedings of the Computer Aided Verification, International Conference, Edinburgh, UK, 15–19 July 2010.
50. Ehlers, R.; Raman, V. Slugs: Extensible GR(1) Synthesis. In Proceedings of the International Conference on Computer Aided Verification, Toronto, ON, Canada, 17–23 July 2016.
Figure 1. Pictorial representation of LTL temporal operators.
Figure 2. Pictorial representation of MTL temporal operators.
Figure 3. RV monitor in UAV verification architecture.
Figure 4. The workspace of the UAS model.
Figure 5. Unified monitor and controller synthesis framework.
Figure 6. System structure of UAS.
Figure 7. Execution time of event.
Figure 10. Event states of instant and continuous events.
Figure 11. Schematic diagram of the scheduler.
Figure 12. Demo FSM.
Figure 14. Demo map.
Figure 15. Architecture of UAS-DL.
Figure 16. Topology of our running model.
Figure 17. The automata of demo formula.
Figure 18. Sample code of the MTL library.
Figure 19. The FSM of Listing 13.
Figure 20. Map information.
Figure 22. (a) Simulation of multi-UAV rescue experiment. (b) U3 executing medical supply airdrop to the injured located by U2.
Figure 23. (a) FSM states comparison. (b) Synthesis time comparison.
Table 1. Synthesis time and FSM state number of Example 1.

Number of Risky Commands | 1 | 2 | 3 | 4 | 5
Synthesis Time | <1 s | 2 s | 12 s | 76 s | NA
FSM State No. | 1041 | 2081 | 4161 | 8321 | NA
Table 2. Definitions and descriptions of input, output, and countermeasure interfaces of UAS.

Interface Type | Name | Description
Input | UserID | The state of a user's identity to access the system, e.g., account password.
Input | FusionEstimation | System state estimation based on the data fusion mechanism.
Input | CmdGet | The system receives a control instruction.
Input | ParmLength | The state of the length of each field in the instruction.
Input | CpuUse | The state of CPU usage in the system.
Input | MemUse | The state of memory usage in the system.
Input | Battery | The state of battery consumption in the system.
Input | Mode | The system is in a specific mode state.
Input | AbnormalCmd | The system has received an abnormal control instruction.
Input | RiskyCmds | The system has received a risky control command.
Input | TakeOff | UAS receives the take-off command.
Input | LandOn | UAV has completed landing.
Input | Guided | UAV is in Guided mode.
Input | DeauthCmd | The system repeatedly receives massive disconnecting requests.
Input | Flying | UAV is flying.
Input | NonNaviProcess | The state of non-navigational processes running on the CPU in the system.
Input | AbnormalBehavior | The state of abnormal behaviors in the system.
Input | Height | The height of the UAV.
Input | Authentication | User permission state of a node in multi-UAV systems.
Input | RiskyCmd | The system has received a risky control command.
Input | RiskyBehavior | UAV exhibits risky behavior.
Input | GpsRecv | GPS sensor receives correct data from the satellites.
Input | Direction | The direction of the electronic compass should comply with the GPS data.
Input | Doppler | The Doppler shift of the inertial navigation sensor should comply with the GPS data.
Input | Injured | UAV detects injured personnel.
Output | UnauthorizedComponent | Unauthorized component has been detected.
Output | UnauthorizedID | Unauthorized ID has been detected.
Output | SensorFault | Sensor fault has been detected.
Output | BUFOverFlow | Buffer overflow has been detected.
Output | MalInject | Malicious injection attack has been detected.
Output | AuthBypass | Authorization bypass has been detected.
Output | SensorJam | Sensor jamming has been detected.
Output | APAttack | Access point attack has been detected.
Output | SigTraBlk | Signal traffic blocking has been detected.
Output | CtrlCmdSpoofing | Control command spoofing has been detected.
Output | SwarmComAttack | Swarm communication attack has been detected.
Output | SensorSpoofing | Sensor spoofing has been detected.
Output | ReplayAttack | Replay attack has been detected.
Output | DangerousClimbRate | Dangerous climb rate has been detected.
Output | DosAttack | DoS attack has been detected.
Output | CtrlCmdSpoof | Control command spoofing has been detected.
Output | GpsSpoofing | GPS spoofing has been detected.
Output | LowBattery | The battery voltage is low.
Output | OverHeight | The flight altitude of the UAV is too high.
Countermeasure | RecognizeDeny | The system refuses to recognize the physical component.
Countermeasure | AuthorizeDeny | The system rejects unauthorized access.
Countermeasure | ModifyDeny | The system rejects this modification.
Countermeasure | CmdDeny | The system refuses to execute the control command.
Countermeasure | ProcessInterrupt | The system interrupts the running process.
Countermeasure | RTL | The system executes the RTL command and returns to the launch position.
Countermeasure | MessageDeny | The system refuses to receive the message.
Countermeasure | Hover | The system executes the HOVER command and hovers in the air.
Countermeasure | VideoDeny | The ground station refuses to receive the video data.
Countermeasure | Decelerate | The system decelerates the flying speed.
Countermeasure | ClimbRateDecrease | The system decreases the climb rate.
Countermeasure | LandOn | The UAV lands on the landing field.
Countermeasure | EmergLanding | The UAV encounters an emergency and lands on the landing field.
Countermeasure | ObstacleAvoid | The UAV maneuvers to avoid obstacles.
Countermeasure | DesendAndDrop | UAV descends and airdrops the supplies.
Table 3. Specification pattern P1: signal traffic blocking.

Name | Process monitoring
Input | Guided, AbnormalCmd, CpuUse
Property | UAV shall not receive abnormal commands for at most 5 time steps, and the CPU usage of this kind of command shall not always exceed α% for at most 5 time steps in Guided mode.
Specification | □(Guided → (¬□[0,5]AbnormalCmd) ∧ (AbnormalCmd → ♢[0,5](CpuUse < α%)))
Output | DosAttack
Countermeasure | CmdDeny
Priority | 3

Table 4. Specification pattern P2: control commands spoofing.

Name | Risky behaviors detection
Input | Guided, RiskyCmds, RiskyBehavior
Property | Risky behavior shall not occur within 5 time steps after UAV receives a risky command in the Guided mode.
Specification | □(Guided → (RiskyCmds → □[0,5]¬RiskyBehavior))
Output | CtrlCmdSpoofing
Countermeasure | RTL
Priority | 9
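Pattern P2 can be checked over a finite trace with a simple sliding-window monitor. The sketch below uses our own trace encoding (one dict of Boolean propositions per time step) purely for illustration; it is not the generated F Prime monitor.

```python
def monitor_p2(trace, bound=5):
    """Check G(Guided -> (RiskyCmds -> G[0,bound] !RiskyBehavior)) on a finite trace.

    trace: list of dicts mapping proposition names to booleans, one per time step.
    Returns the indices of the time steps where the property is violated.
    """
    violations = []
    for i, step in enumerate(trace):
        if step.get("Guided") and step.get("RiskyCmds"):
            # RiskyBehavior must stay false in the window [i, i + bound]
            window = trace[i : i + bound + 1]
            if any(s.get("RiskyBehavior") for s in window):
                violations.append(i)
    return violations

trace = [
    {"Guided": True, "RiskyCmds": True},      # step 0: risky command received
    {"Guided": True},                         # step 1
    {"Guided": True, "RiskyBehavior": True},  # step 2: risky behavior within bound
    {"Guided": True},                         # step 3
]
print(monitor_p2(trace))  # [0]
```

On a violation, the scheduler would raise the CtrlCmdSpoofing output and execute the RTL countermeasure at priority 9.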
Table 5. Specification pattern P3: sensor spoofing.

Name | Sensor faults detection
Input | TakeOff, GpsRecv, Direction, Doppler, LandOn
Property | During flight, the received GPS signal shall remain consistent with the arrival direction and Doppler shift measurements of the inertial navigation sensor.
Specification | □((TakeOff → ¬(GpsRecv ∧ ¬(Direction ∧ Doppler))) U LandOn)
Output | GpsSpoof
Countermeasure | Hover
Priority | 8

Table 6. Specification pattern P4: safety constraint.

Name | Low Battery
Input | TakeOff, Battery, LandOn
Property | During flight, the minimum State of Charge (SOC) shall not be less than 50%.
Specification | □((TakeOff → (Battery > 50%)) U LandOn)
Output | LowBattery
Countermeasure | RTL
Priority | 9

Table 7. Specification pattern P5: task constraint.

Name | Minimum altitude for UAV airdrop
Input | Injured, Flying, Height
Property | During search and rescue operations, the UAV altitude shall not exceed 30 m when locating injured individuals.
Specification | □((Flying ∧ Injured) → ¬(Height > 30 m))
Output | OverHeight
Countermeasure | DesendAndDrop
Priority | 5
Table 9. Examples of common events.

Event Category | Events
Instant Event | AuthorizeDeny, CmdDeny, ProcessInterrupt, MessageDeny, VideoDeny, ObstacleAvoid
Short Continuous Event | Move, RTL, ClimbRateDecrease, Decelerate, LandOn
Long Continuous Event | Guide, Auto, Hover
Table 10. Demo non-parallelizable events table.

Event ID | Non-Parallelizable Events | Type | Critical? | Termination?
Move (5) | RTL (7) | Continuous | No | Yes
Move (5) | EmergLanding (8) | Continuous | No | Yes
Move (5) | ObstacleAvoid (9) | Instant | No | No
Move (5) | Hover (6) | Continuous | Yes | No
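Table 10 can be encoded as a lookup that the scheduler consults when a conflicting event arrives. The sketch below is our own illustration with hypothetical helper names, under the assumption that the "Termination?" column decides whether the running event is stopped or merely paused.

```python
# Non-parallelizable events for the running event Move (5), taken from Table 10:
# incoming event -> (type, critical, terminates_running)
CONFLICTS = {
    "Move": {
        "RTL":           ("continuous", False, True),
        "EmergLanding":  ("continuous", False, True),
        "ObstacleAvoid": ("instant",    False, False),
        "Hover":         ("continuous", True,  False),
    }
}

def schedule(running: str, incoming: str) -> str:
    """Decide what happens to the running event when another event arrives."""
    entry = CONFLICTS.get(running, {}).get(incoming)
    if entry is None:
        return "run-in-parallel"  # no conflict recorded in the table
    _, _, terminates = entry
    return "terminate-running" if terminates else "pause-running"

print(schedule("Move", "RTL"))            # terminate-running
print(schedule("Move", "ObstacleAvoid"))  # pause-running
print(schedule("Move", "LandOn"))         # run-in-parallel
```

A table-driven decision like this keeps the scheduling policy declarative, so new event conflicts only require new table rows rather than new control logic.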
Table 11. The controller size after using the re-synthesis method.

 | Original Specs. (Listing 2) | Specs. After 1st Re-Synthesis (Listing 4) | Specs. After 2nd Re-Synthesis (Listing 5)
FSM Size | 21 | 17 | 14
Table 12. Advantages and disadvantages of the two methods.

Synthesis Method | Params. Value | FSM Size | Synthesis Times | Scheduling Difficulty
Specification Generation | Small | Large | 1 | Easy
Re-synthesis | Large | Small | ⩾1 | Hard
Table 13. Propositions set for system model.

UAV | Set | Propositions
u_i | X_i | {x_{i,1}, x_{i,2}, …, x_{i,n}}
u_i | Y_i (P_i) | {p_{i,1}, p_{i,2}, …, p_{i,j}}
u_i | Y_i (A_i) | {a_{i,1}, a_{i,2}, …, a_{i,k}}
