Abstract
This paper proposes a novel robotic animal herding system based on a network of autonomous barking drones. The objective of such a system is to replace traditional herding methods (e.g., dogs) so that a large number (e.g., thousands) of farm animals such as sheep can be quickly collected from a dispersed state and then driven to a designated location (e.g., a sheepfold). In this paper, we particularly focus on the motion control of the barking drones. To this end, a computationally efficient sliding mode based control algorithm is developed, which navigates the drones to track the moving boundary of the animals’ footprint and enables the drones to avoid collisions with each other. Extensive computer simulations, where the dynamics of the animals follow Reynolds’ rules, show the effectiveness of the proposed approach.
1. Introduction
Robot farming plays a critical role in preventing the food crisis caused by future population growth [1]. The past decades have seen the rapid development of robotic crop farming, such as automated crop monitoring, harvesting, weed control, and so forth [2,3]. Deploying robotic automation to improve crop production yields has become very popular among farmers. In contrast, research and implementations of robotic livestock farming have been mostly restricted to the fields of virtual fencing [4], animal monitoring and pasture surveying [5,6]. Such applications can improve livestock production yields to some extent, but animal herding, a vital step of livestock farming, has long been the least automated. Sheepdogs, which have been used for centuries, are still the dominant tools of animal herding around the world, and research on robotic animal herding is still in its infancy. Two main obstacles to robotic animal herding systems are: (1) the lack of practical robot-to-animal interactions and of a suitable robotic herding platform; and (2) the lack of efficient robotic herding algorithms for large numbers of animals.
The application of robots to the actual act of animal herding started with the Robot Sheepdog Project in the 1990s [7,8]. These groundbreaking studies used wheeled robots to gather a flock of ducks and manoeuvre them to a specified goal position. The last three decades have seen only a handful of studies on robotic herding with real animals. Recent implementations of robotic animal herding mainly employ ground robots that drive animals through bright colours [9] or collisions [10,11,12]. Reference [9] shows that the robot initially repulsed the sheep at a distance of 60 m; however, after only two further trials, the repulsion distance dropped to 10 m. Moreover, such legged or wheeled ground robots might not be agile enough to deal with the various terrains encountered during herding. Furthermore, the Rover robot employed in [10], the Spot robot in [11] and the Swagbot in [12] all cost hundreds of thousands of US dollars each and are still in the prototype stage. High prices limit their popularity. Interestingly, real sheepdogs are also expensive because of their years-long training period; a fully trained sheepdog can cost tens of thousands of US dollars [13]. The most crucial drawback of sheepdogs is that they cannot escape biological limitations, namely ageing and illness.
Besides the platforms, efficient algorithms are also critical to the study of robotic animal herding. Despite the stagnant progress of herding platforms, research on the related algorithms has developed considerably. The prime cause of this contrast is the recent rapid development of swarm robotics and swarm intelligence [14]. Bio-inspired swarming-based control algorithms for herding swarm robots are receiving much attention in robotics due to the effectiveness of solutions found in nature (e.g., interactions between sheep and dogs). Such algorithms can also be applied to herd flocks of animals, and a considerable amount of literature has been published on this topic. For example, paper [15] designs a simple heuristic algorithm for a single shepherd to solve the shepherding problem, based on adaptively switching between collecting the agents when they are too dispersed and driving them once they are aggregated. One unique contribution of [15] is that it conducts field tests with a flock of real sheep and reproduces key features of empirical data collected from sheep-dog interactions. Building on the results in [15], reference [16] tests the effects of the step size per unit time of the shepherd and swarm agents, and clarifies that the herding algorithm in [15] is largely robust to the swarm agents’ moving speeds. A further study [17] extends the shepherd and swarm agents’ motion and influential force vectors to the third dimension.
References [18,19] propose multi-shepherd control strategies for guiding swarm agents in 2D and 3D environments based on a single continuous control law. The implementation of such strategies requires more shepherds than swarm agents, and thus cannot deal with tasks involving many agents. The level of modulation of the force vector exerted by the shepherd on the swarm agents plays a critical role in herding task success and energy use. Paper [20] designs a force modulation function for the shepherd agent and adopts a genetic algorithm to optimise the energy used by the agent subject to a threshold on the success rate. These algorithms, and most studies in robotic herding, have however only been carried out in tasks with tens of swarm agents. Algorithms for efficiently herding large numbers of swarm agents have not been investigated.
Compared with ground robots, autonomous drones have superior manoeuvrability and are finding increasing use in different areas, including agriculture [21,22], surveillance [23,24], communications [25] and disaster relief [26]. In particular, references [21,22] demonstrate the feasibility of counting and tracking farm animals using drone cameras. Reference [27] develops an algorithm for a drone to herd a flock of birds away from an airport, and field experiments show the effectiveness of the algorithm. With the ability to rapidly cross miles of rugged terrain, drones are potentially the ideal platforms for robotic animal herding, provided they can interact with animals as efficiently as sheepdogs do. Sheepdogs usually herd animals by barking, glaring, or nibbling at the heels of animals. For example, the New Zealand Huntaway uses its loud, deep bark to muster flocks of sheep [28]. A drone can act like a sheepdog by loudly playing a pre-recorded dog bark through a speaker; we refer to such a drone as a barking drone. Recently, some successful attempts have been made using human-piloted barking drones to herd flocks of farm animals [29]. Moreover, studies show that, compared with sheepdogs, using drones to herd cattle and sheep is faster and causes less stress to the animals [30].
1.1. Objectives and Contributions
This paper’s primary objective is to design a robotic herding system that can efficiently herd a large number of farm animals without human input. The system should be able to collect a flock of farm animals when they are too dispersed and drive them to a designated location once they are aggregated. The main contributions of this paper are as follows:
- We propose the novel idea of autonomous barking drones by improving the design of human-piloted barking drones, and we further propose a novel robotic herding system based on it. Compared with the existing approaches of ground herding robots that drive animals through collisions or bright colours, autonomous barking drones can solve the problem of effective robot-to-animal interaction with significantly improved efficiency;
- We propose a collision-free motion control algorithm for a network of barking drones to herd a large flock of farm animals efficiently;
- We conduct simulations of herding a thousand animals, while the existing approaches usually herd tens of animals or swarm robots. The proposed algorithm can also be applied to herd swarm robots;
- Based on an animal behaviour model verified by real animal experiments and the proven shepherding examples with human-piloted barking drones, the proposed system has the potential to be the world’s first practical robotic herding solution for a large flock of farm animals;
- With their functions limited to non-essential applications such as animal monitoring and data collection, current Internet of Things (IoT) platforms for precision farming have a low return on investment. Besides addressing a rigid demand of farmers (i.e., herding), the proposed system can also serve as an IoT platform to achieve the same functions. Thus, it has the potential to popularise IoT implementations for precision farming.
Preliminary versions of some results of this paper were presented in the PhD thesis of the first author [31].
1.2. Organization
The remainder of the paper is organised as follows. In Section 2, we introduce the design of the drone herding system. Section 3 presents the system model and problem statement. Drone motion control for gathering and for driving is proposed in Section 4 and Section 5, respectively. Simulation results are presented in Section 6. Finally, we give our conclusions in Section 7.
2. Design of the Drone Herding System
We now introduce the proposed drone herding system. It consists of a fleet of two types of drones. The duty of the first type, referred to as the observer drone, is to detect and track animals. For this purpose, each observer drone is equipped with cameras and Artificial Intelligence (AI) algorithms that can detect and track animals from live video feeds with sufficient accuracy. The observer drone shares some similarity with the goat-tracking drones of [21]. Unlike that system, however, ours only requires the tracking information of the animals on the boundary of the flock. This considerably reduces the workload of the drones, and many existing image processing techniques, such as edge detection, can be applied in real time.
A drone of the second type carries a speaker that plays recorded sheepdog barking. The speaker should have a clear sound, sufficient volume, relatively small size and low weight. Several drone digital voice broadcasting systems are already on the market; for example, the MP130 from Chengzhi [32] can broadcast over 500 m while weighing 550 g. Moreover, the speaker is designed to be mounted on a stabiliser attached to the drone, so that it can broadcast steadily in a desired direction no matter which direction the barking drone is moving towards. It is worth mentioning that the speaker on current human-piloted barking drones is not mounted on a stabiliser, so our design improves on the current barking drones.
The observer and the barking drones are able to communicate. The communication between them can be realised by 2.4 GHz radio, which is commonly used by various drone products, and is mainly unidirectional from the observer to the barking drones. Specifically, the observer monitors the locations of animals on the edge of the flock and sends them to the barking drones in real time. A typical application scenario of the proposed system is herding a large flock of animals with one observer and multiple barking drones. Figure 1 shows a schematic diagram of a basic unit of the drone herding system, with one observer and one barking drone.
Figure 1.
A basic unit of the proposed drone herding system.
Limited battery life is a perennial problem in drone applications. Later we will show that the proposed herding system can usually accomplish the herding task in less than 50 min, which is within the common endurance of some commercialised industrial drone products such as the DJI M300 [33] (note that such industrial drones usually cost thousands of US dollars each, that is, potentially less than a fully trained sheepdog). Moreover, any drone in the system should be able to autonomously fly back to a ground base station to recharge its battery with automatic charging devices [34]. In addition, advances in solar-harvesting technology enable drones to prolong their battery lifetime [35].
3. System Model and Problem Statement
In this section, we introduce the dynamic models of animal motion and drone motion. Then, we present the herding problem formulation and the preliminaries for designing the drones’ motion controllers.
3.1. Animal Motion Dynamics
As in most research on robotic herding [20,27,36], we describe the dynamics of animal flocking by the boids model based on Reynolds’ rules [37], which models the interaction of the agents by the attraction, repulsion and alignment mechanisms that are common in studies of collective animal behaviour. Field tests in [15] show that the boids model matches the behaviour of real flocks of sheep. The overall herd behaviour emerges from the decisions of individual members of the herd based on the four basic principles of collision avoidance, velocity matching (to nearby members of the herd), flock centring (the desire to be near the centre of the nearby members) and hazard avoidance. In detail, given the predefined constants $r_d$ (the drone repulsion distance), $n$ (the number of nearest neighbours) and $r_a$ (the animal-to-animal repulsion distance), an animal under consideration behaves according to the following rules:
- If the distance to the barking drone is within $r_d$, then the animal is repulsed directly away from the barking drone;
- An animal is attracted to the centre of mass of its $n$ nearest neighbours;
- Animals are also locally repulsed from each other, so that if two or more animals are within a distance $r_a$ of each other, a repulsive force acts to separate them;
- Otherwise, an animal is semistationary with some small random movements.
Let the unit vectors $\hat{R}^{d}$, $\hat{C}$, $\hat{R}^{a}$, $\hat{H}$, and $\hat{E}$ denote the repulsive force from the barking drone to the animal, the attractive force to the centre of mass of the animal’s $n$ nearest neighbours, the repulsive force from other animals within a distance $r_a$ of the animal, the inertial force to remain at the current location, and the noise element of the animal’s movement, respectively. Then, the animal’s moving direction vector $\vec{H}$ is obtained by:

$$\vec{H} = w_d \hat{R}^{d} + w_c \hat{C} + w_a \hat{R}^{a} + w_h \hat{H} + w_e \hat{E}, \tag{1}$$

where $w_d$, $w_c$, $w_a$, $w_h$, and $w_e$ are the weighting constants. Let $v_a$ be the maximum speed of the animal.
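For concreteness, the following sketch shows one boids-style direction update in the spirit of Equation (1). It is an illustration only: the function name, the helper `unit` and the default weights (loosely following the sheep parameters reported in [15]) are our own assumptions, not the exact simulation code of this paper.

```python
import numpy as np

def animal_direction(pos, drone_pos, neighbours, crowded, prev_dir,
                     w=(1.0, 1.05, 2.0, 0.5, 0.3), r_d=65.0):
    """Boids-style direction update for one animal, mirroring Equation (1).

    pos: (2,) animal position; drone_pos: (2,) barking drone position;
    neighbours: (n, 2) positions of the n nearest neighbours;
    crowded: (m, 2) positions of animals closer than r_a;
    prev_dir: (2,) previous heading (inertia);
    w: assumed weighting constants (w_d, w_c, w_a, w_h, w_e);
    r_d: assumed drone repulsion distance.
    """
    def unit(v):
        norm = np.linalg.norm(v)
        return v / norm if norm > 1e-9 else np.zeros(2)

    w_d, w_c, w_a, w_h, w_e = w
    # Repulsion directly away from the drone, active only within r_d.
    away = pos - drone_pos
    R_d = unit(away) if np.linalg.norm(away) < r_d else np.zeros(2)
    # Attraction to the centre of mass of the nearest neighbours.
    C = unit(neighbours.mean(axis=0) - pos) if len(neighbours) else np.zeros(2)
    # Local repulsion from animals that are too close.
    R_a = unit(np.sum([unit(pos - q) for q in crowded], axis=0)) if len(crowded) else np.zeros(2)
    # Weighted sum of the unit force vectors plus a small noise term, as in (1).
    H = w_d * R_d + w_c * C + w_a * R_a + w_h * unit(prev_dir) + w_e * unit(np.random.randn(2))
    return unit(H)
```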
3.2. Drone Motion Dynamics
In this paper, we assume the barking drones fly at a fixed, relatively low altitude that keeps them close to the herded animals. With the fixed altitude, we study the 2D motion of a barking drone described by the following mathematical model. Let

$$p(t) := [x(t), y(t)]^{T} \tag{2}$$

be the 2D vector of the barking drone’s Cartesian coordinates. Then, the motion of the barking drone is described by the equations:

$$\dot{p}(t) = V(t), \tag{3}$$
$$\dot{V}(t) = u(t)\,\overline{V}(t), \tag{4}$$

where $\|V(t)\| = v(t)$ for all $t$, and the following constraints hold:

$$0 < v(t) \le v_{\max}, \qquad |u(t)| \le u_{\max}, \tag{5}$$
$$\langle V(t), \overline{V}(t) \rangle = 0, \qquad \|\overline{V}(t)\| = 1 \tag{6}$$

for all $t$. Here, $\|\cdot\|$ denotes the standard Euclidean vector norm, and $\langle \cdot, \cdot \rangle$ denotes the scalar product of two vectors. The scalar variable $v(t)$ is the speed or linear velocity of the barking drone, and the scalar $u(t)$ is applied to change the direction of the drone’s motion, given by the unit vector $\overline{V}(t)$. $v(t)$ and $u(t)$ are the two control inputs in this model. $v_{\max}$ and $u_{\max}$ are constants depending on the manufacturing of the drone. The condition (6) guarantees that the vectors $V(t)$ and $\overline{V}(t)$ are always orthogonal. Furthermore, $V(t)$ is the velocity vector of the barking drone. The kinematics of many unmanned aerial vehicles can be described by the non-holonomic model (3)–(6); see, for example, [38] and references therein.
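As a hedged illustration, the non-holonomic model (3)–(6) can be simulated with a simple Euler step as sketched below; the function name and the default values of the constants are placeholder assumptions.

```python
import numpy as np

def step_drone(p, heading, v, u, dt, v_max=20.0, u_max=2.0):
    """One Euler step of the non-holonomic model (3)-(6).

    p: (2,) position; heading: (2,) unit vector along the velocity;
    v, u: commanded speed and turning input, saturated per (5);
    dt: step length; v_max, u_max: assumed drone constants.
    """
    v = np.clip(v, 1e-3, v_max)
    u = np.clip(u, -u_max, u_max)
    # Rotate the heading through its 90-degree perpendicular: this plays
    # the role of the orthogonal unit vector in (4), so condition (6)
    # holds by construction.
    perp = np.array([-heading[1], heading[0]])
    heading = heading + dt * u * perp
    heading /= np.linalg.norm(heading)   # keep the heading a unit vector
    p = p + dt * v * heading             # integrate (3)
    return p, heading
```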
3.3. Problem Statement
Our goal is to navigate a network of barking drones to herd a flock of farm animals. A typical herding task consists of gathering and driving. In detail, we aim to navigate the barking drones to collect a flock of farm animals when they are too dispersed, namely gathering, and drive them to a designated location once they are aggregated, namely driving.
Let $\mathcal{A}(t) = \{a_1(t), \ldots, a_{N_a}(t)\}$ and $\mathcal{D}(t) = \{d_1(t), \ldots, d_{N_d}(t)\}$ be the sets of the two-dimensional (2D) positions of the $N_a$ farm animals and the $N_d$ barking drones, respectively. Let $C(t)$ denote the position of the herding animals’ centroid. For gathering, the goal is to gather all the herding animals around $C(t)$ until the distance between $C(t)$ and any animal is within a predefined constant $R_g$, that is:

$$\|a_i(t) - C(t)\| \le R_g, \qquad i = 1, \ldots, N_a. \tag{7}$$
After gathering, the animals need to be driven to a designated location such as the centre of a sheepfold, and the barking drones have to keep the animals aggregated during driving. Let $G$ be the designated location. Similar to gathering, the goal of driving is formulated as:

$$\|a_i(t) - G\| \le R_g, \qquad i = 1, \ldots, N_a. \tag{8}$$
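Both termination conditions reduce to a single distance test, sketched below with a hypothetical helper name; the target is the centroid $C(t)$ for gathering (7) and the location $G$ for driving (8).

```python
import numpy as np

def herding_goal_reached(animals, target, R_g):
    """Check condition (7)/(8): every animal within R_g of the target.

    animals: (N_a, 2) positions; target: (2,) centroid C (gathering)
    or the designated location G (driving)."""
    return np.max(np.linalg.norm(animals - target, axis=1)) <= R_g

# Usage sketch: gathering monitors the centroid, driving monitors G.
# done = herding_goal_reached(animals, animals.mean(axis=0), R_g)
```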
3.4. Preliminaries
We now introduce the preliminaries for presenting the drones’ motion control algorithms, including the system’s available measurements and the drones’ motion restriction. During gathering, we use the convex hull of all the herding animals to describe the animals’ footprint.
Available measurement: We assume that at any time $t$, the observer has measurements of the positions of the vertices of the convex hull of all the herding animals, described by $\mathcal{P}(t) = \{p_1(t), \ldots, p_K(t)\}$, where $K$ is the number of vertices of the convex hull, which is much smaller than $N_a$. Besides, we assume the observer can estimate $C(t)$ with image processing techniques. The accurate real-time locations of the barking drones should also be available. In practice, the real-time drone locations can be provided by embedded GPS chips, since pastures are usually open-air.
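As a sketch of the boundary extraction performed by the observer, a standard convex hull routine can be run on the measured animal positions. The paper leaves the exact image processing technique open; Andrew’s monotone chain below is our own illustrative choice.

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counterclockwise order.

    Stands in for the observer's boundary extraction; points is an
    iterable of (x, y) animal positions."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return np.array(pts)

    def cross(o, a, b):
        # z component of (a - o) x (b - o): > 0 for a counterclockwise turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def build(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain

    lower, upper = build(pts), build(list(reversed(pts)))
    return np.array(lower[:-1] + upper[:-1])  # counterclockwise, no duplicates
```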
Definition 1.
The extended hull is a unique polygon that surrounds the convex hull. The edges of the extended hull and the convex hull are in one-to-one correspondence, with each pair of corresponding edges parallel to each other and separated by the same fixed distance.
Let $\mathcal{E}(t) = \{e_1(t), \ldots, e_K(t)\}$ be the set of 2D positions of all the extended vertices in counterclockwise order. We now present the construction of the extended hull for a given convex hull of the herding animals. Let $p_i$ be the position of a convex hull vertex with two neighbour vertices $p_{i-1}$ and $p_{i+1}$. Construct the parallel lines $l_1$ and $l_2$ of the edges $p_{i-1}p_i$ and $p_ip_{i+1}$ on the periphery of the convex hull, respectively. Both the distance between $l_1$ and $p_{i-1}p_i$ and the distance between $l_2$ and $p_ip_{i+1}$ are $d_0$, where $d_0$ is the predefined drone-to-animal distance. Let $e_i$ be the position of the intersection of $l_1$ and $l_2$. Then, $e_i$ is the extended hull’s vertex corresponding to $p_i$, also called the extended vertex of $p_i$. Let $q_1$ be the position of the intersection of $l_1$ and the extension line of $p_ip_{i+1}$. Let $q_2$ be the position of the intersection of $l_2$ and the extension line of $p_{i-1}p_i$, as shown in Figure 2. Naturally, $p_iq_1e_iq_2$ is a parallelogram. Let $\theta_i$ denote the interior angle $\angle p_{i-1}p_ip_{i+1}$. Then, we have:

$$\|p_iq_1\| = \|p_iq_2\| = \frac{d_0}{\sin\theta_i}. \tag{9}$$
Figure 2.
Illustration of the convex hull of the herding animals and the corresponding extended hull.
Thus, given $p_i$, $\theta_i$ and $d_0$, the extended vertex $e_i$ can be computed by:

$$e_i = p_i + \frac{d_0}{\sin\theta_i}\left(\frac{p_i - p_{i-1}}{\|p_i - p_{i-1}\|} + \frac{p_i - p_{i+1}}{\|p_i - p_{i+1}\|}\right). \tag{10}$$
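The construction (10) translates directly into code. The sketch below (function name ours) offsets the two edges meeting at a hull vertex outward by $d_0$ and returns their intersection, assuming counterclockwise vertices:

```python
import numpy as np

def extended_vertex(p_prev, p, p_next, d0):
    """Extended-hull vertex e_i per Equation (10).

    p_prev, p, p_next: (2,) consecutive convex hull vertices in
    counterclockwise order; d0: drone-to-animal distance."""
    u1 = (p - p_prev) / np.linalg.norm(p - p_prev)   # extension of p_{i-1}p_i
    u2 = (p - p_next) / np.linalg.norm(p - p_next)   # extension of p_{i+1}p_i
    # Interior angle theta_i at p (angle between the directions to the
    # two neighbour vertices).
    theta = np.arccos(np.clip(np.dot(u1, u2), -1.0, 1.0))
    # ||p q1|| = ||p q2|| = d0 / sin(theta); e_i closes the parallelogram.
    return p + (d0 / np.sin(theta)) * (u1 + u2)
```

The sum of the two unit extension vectors points along the outward bisector, so the resulting vertex sits at distance $d_0/\sin(\theta_i/2)$ from $p_i$, i.e., exactly $d_0$ from each adjacent hull edge.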
Motion restriction: To gather efficiently and avoid dispersing any herding animals, all the barking drones are restricted to move only on the extended hull during gathering.
We assume that the spread range of the barking from the drone is fan-shaped, and only animals within this range are affected by the repulsion of the barking drone; the intensity of barking outside this range is below the minimum level that can cause evasive behaviour in the animals. We call this fan-shaped range the barking cone, with effective broadcasting angle $\beta$ and effective broadcasting distance $r_b$, respectively. The number of animals repulsed by a barking drone is influenced by four aspects: $\beta$, $r_b$, the drone-to-animal distance $d_0$ and the distribution of the animals. With the help of the stabiliser, the speaker always faces towards $C(t)$, as illustrated in Figure 3.
Figure 3.
Illustration of the effective broadcasting angle and distance with (a) a small drone-to-animal distance and (b) a larger drone-to-animal distance, under the same distribution of animals. The figure distinguishes the animals repulsed by the barking drone, the animals outside the barking broadcasting range, and the centroid $C(t)$.
Remark 1.
Figure 3 also shows that a larger $d_0$ may lead to fewer animals being repulsed by the barking drone under the same $\beta$, $r_b$ and animal distribution, if fewer animals are located near the edges of the convex hull. If more animals are concentrated near the edges, however, the situation may be reversed.
4. Drones Motion Control for Gathering
This section introduces the motion control algorithms for a network of barking drones to quickly accomplish the gathering task. We first introduce the algorithms for navigating the barking drones to the extended hull and along the extended hull in Section 4.1 and Section 4.2, respectively. Then, Section 4.3 presents the optimal positions (steering points) for the barking drones to efficiently gather animals, as well as the collision-free allocation of the steering points. A flowchart of the proposed method is shown in Figure 4.
Figure 4.
Overview of the proposed method.
Let A be any point on the plane of the extended hull. Let B be a vertex of the extended hull. We now introduce two guidance laws for navigating a barking drone from A to B in the shortest time:
- Fly to edge: navigate the barking drone from an initial position A to the extended hull in the shortest time. Note that the vertices of the extended hull can be moving. Let O denote the barking drone’s reaching point on the extended hull; see Figure 5.
Figure 5. Illustration of the path planning for a barking drone from A to B, where A is a point on the plane of the extended hull, B is a vertex of the extended hull, and O is the reaching point of the drone on the extended hull. The animals’ convex hull is denoted by blue lines. The green arrows and black arrows are the planned trajectories for a given clockwise and counterclockwise direction, respectively. - Fly on edge: navigate the barking drone from O to B in the shortest time following a given direction, e.g., clockwise or counterclockwise, while keeping the barking drone on the extended hull. Since the drone has non-holonomic motion dynamics, it is allowed to move along a short arc when travelling between two adjacent edges; see Figure 5.
Note that A is not necessarily outside the extended hull. To avoid dispersing any herding animal, the speaker on the barking drone is turned on only after it has arrived at the extended hull. Besides, if A is already on the extended hull, we let $O = A$ and apply the Fly on edge guidance law directly.
4.1. Fly to Edge Guidance Law
Let $a$ and $b$ be non-zero two-dimensional vectors that are not parallel. Now introduce the following function mapping from $\mathbb{R}^2 \times \mathbb{R}^2$ to $\mathbb{R}^2$:

$$f(a,b) = \frac{b - \frac{\langle a, b\rangle}{\|a\|^2}\,a}{\left\| b - \frac{\langle a, b\rangle}{\|a\|^2}\,a \right\|}. \tag{14}$$

In other words, the rule (14) is defined in the plane of the vectors $a$ and $b$; the resulting vector is orthogonal to $a$ and directed “towards” $b$. Moreover, introduce the function $g(a,b) \in \{-1, 1\}$, which gives the direction of the rotation from $a$ to $f(a,b)$:

$$g(a,b) = \operatorname{sgn}\big(a_x f_y(a,b) - a_y f_x(a,b)\big). \tag{15}$$
We will also need the following notation to present the Fly to edge guidance law. At time $t$, let $E(t) = e_ie_{i+1}$ be the extended hull edge that is closest to the drone; we show how to find $E(t)$ later. Let $\vec{m}(t)$ denote the vector from the vertex $e_i$ to $e_{i+1}$. Let $\vec{w}(t)$ denote the vector from the vertex $e_i$ to the drone. Let $O$ be the point on $E(t)$ that is closest to the drone, and let $\vec{d}(t)$ be the vector from the drone to $O$. If $\langle \vec{m}(t), \vec{w}(t)\rangle < 0$, we have $O = e_i$ and $\vec{d}(t) = -\vec{w}(t)$; see Figure 6a. If $0 \le \langle \vec{m}(t), \vec{w}(t)\rangle \le \|\vec{m}(t)\|^2$, then $\vec{d}(t)$ is always orthogonal to $\vec{m}(t)$, as shown in Figure 6b, and can be obtained by the following equation:

$$\vec{d}(t) = \frac{\langle \vec{m}(t), \vec{w}(t)\rangle}{\|\vec{m}(t)\|^2}\,\vec{m}(t) - \vec{w}(t). \tag{16}$$
Figure 6.
Illustration of (a) Fly to edge guidance with $\langle \vec{m}(t), \vec{w}(t)\rangle < 0$; (b) Fly to edge guidance with $0 \le \langle \vec{m}(t), \vec{w}(t)\rangle \le \|\vec{m}(t)\|^2$; (c) Fly to edge guidance otherwise; (d) Fly on edge guidance navigating the drone from $O$ to $T$.
Otherwise, we have $O = e_{i+1}$ and $\vec{d}(t) = \vec{m}(t) - \vec{w}(t)$; see Figure 6c. Combining the three cases,

$$\vec{d}(t) = \begin{cases} -\vec{w}(t), & \langle \vec{m}(t), \vec{w}(t)\rangle < 0, \\ \dfrac{\langle \vec{m}(t), \vec{w}(t)\rangle}{\|\vec{m}(t)\|^2}\,\vec{m}(t) - \vec{w}(t), & 0 \le \langle \vec{m}(t), \vec{w}(t)\rangle \le \|\vec{m}(t)\|^2, \\ \vec{m}(t) - \vec{w}(t), & \text{otherwise}. \end{cases} \tag{17}$$

Given $\vec{d}(t)$ and $V(t)$, we present the following Fly to edge guidance law:

$$u(t) = u_{\max}\, g\big(V(t), \vec{d}(t)\big), \tag{18}$$
$$v(t) = v_{\max}. \tag{19}$$
The proposed Fly to edge guidance law belongs to the class of sliding-mode control laws (see, e.g., [39]). With the simple switching strategy, sliding mode control laws are quite robust and not sensitive to parameter variations and uncertainties of the control channel. Moreover, because the control input is not a continuous function, the sliding mode can be reached in finite time which is better than asymptotic behaviour, see for example, [39,40,41].
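The sketch below illustrates a switching rule of this kind under our notational reconstruction of (17)–(19): the vector to the closest point of the nearest extended-hull edge is computed case-wise, and the drone turns towards it at the maximum rate, with a 2D cross product standing in for the function $g$ of (15). It is a simplified illustration under stated assumptions, not the authors’ verbatim controller.

```python
import numpy as np

def fly_to_edge_controls(heading, seg_a, seg_b, drone, v_max=20.0, u_max=2.0):
    """Sliding-mode style Fly to edge rule (sketch of (17)-(19)).

    heading: (2,) unit velocity direction; seg_a, seg_b: (2,) endpoints
    of the closest extended-hull edge; drone: (2,) drone position.
    Returns the commanded (speed, turning input)."""
    m = seg_b - seg_a                         # edge vector
    w = drone - seg_a                         # vertex-to-drone vector
    s = np.dot(m, w) / np.dot(m, m)
    # Case-wise closest point: clamping s covers all three cases of (17).
    d = (seg_a + np.clip(s, 0.0, 1.0) * m) - drone
    if np.linalg.norm(d) < 1e-9:
        return v_max, 0.0                     # on the edge: slide along it
    # Turn towards d at the maximum rate; the sign of the 2D cross
    # product picks the rotation direction, mimicking g of (15).
    u = u_max * np.sign(heading[0] * d[1] - heading[1] * d[0])
    return v_max, u
```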
Assumption 1.
Let $l_i(t)$ be the length of the extended hull edge $E_i(t)$ at time $t$. Then $0 < l_{\min} \le l_i(t) \le l_{\max}$ for all $t$ for some given constants $l_{\min}$ and $l_{\max}$. Let $c_1$ and $c_2$ be some constants such that $0 < c_1 < c_2$. Let $d_E(t)$ be the distance between the drone and $E(t)$; $d_A$ is the distance between $A$ and $E(t_0)$.
Assumption 2.
Let $v_E(t)$ be the maximum speed of the points of the extended hull at time $t$. Then

$$\sup_t v_E(t) < v_{\max}, \tag{20}$$

together with the further bounds (21), (22), which require the extended hull to move and turn sufficiently slowly compared with the drone’s capabilities $v_{\max}$ and $u_{\max}$.
Theorem 1.
Suppose that Assumptions 1 and 2 hold. Then, the guidance law (17)–(19) navigates the barking drone from an initial position A to $E(t)$, and the drone remains on $E(t)$ after arrival.
Remark 2.
It should be pointed out that Assumptions 1 and 2 are quite technical assumptions, which are necessary for a mathematically rigorous proof of the performance of the proposed guidance law. However, our simulations show that the proposed guidance law often performs well even in situations when Assumptions 1 and 2 do not hold.
Proof.
From the definitions of $f$ and $g$, the guidance law (18), (19) turns the velocity vector $V(t)$ towards $\vec{d}(t)$. Moreover, Equation (17) gives that $\vec{d}(t)$ points from the drone to its closest point on $E(t)$; see Figure 6a–c. Furthermore, it follows from (22) together with Assumption 1 that there exists some time $t_1$ such that for all $t \ge t_1$, the vectors $V(t)$ and $\vec{d}(t)$ are co-linear. Introduce the function $\rho(t)$ as the distance between the drone’s current location and $E(t)$. Then, it follows from (20) of Assumption 2 that if $\rho(t) > 0$ then $\dot{\rho}(t) \le -c$ for some constant $c > 0$. Therefore, there exists a time $t_2 \ge t_1$ such that the drone’s position belongs to $E(t)$ for all $t \ge t_2$. Moreover, (21) implies that for all $t \ge t_2$ the drone will remain in the sliding mode of the system (3), (4), (17), (18), (19) corresponding to the position of the drone on $E(t)$, with the velocity vector $V(t)$ directed along $E(t)$ and orthogonal to $\vec{d}(t)$. This completes the proof of Theorem 1. □
Remark 3.
At time $t$, given the drone’s position and $\mathcal{E}(t)$, calculate $\|\vec{d}(t)\|$ from the barking drone to each edge of the extended hull. Then, the edge with the minimum $\|\vec{d}(t)\|$ is the closest edge of the extended hull to the drone.
4.2. Fly on Edge Guidance Law
Before introducing the Fly on edge guidance law, we first present the edge sliding guidance law for a drone flying along an edge of the extended hull, with possibly moving vertices. At time $t$, let $E(t)$ be the edge that we want to keep the drone on. Let $T$ be the target position of the barking drone on $E(t)$. Let $\vec{s}(t)$ denote the vector from the drone to $T$, as shown in Figure 6d. We introduce $\vec{d}^{*}(t)$, which is given by:

$$\vec{d}^{*}(t) = \vec{d}(t) + \vec{s}(t). \tag{23}$$
Then, the edge sliding guidance law is analogous to (18), (19), with $\vec{d}^{*}(t)$ in place of $\vec{d}(t)$ and with the speed $v(t)$ reduced as the drone approaches $T$ so that the drone can stop at $T$ (Equations (24) and (25)).
Theorem 2.
Suppose that Assumption 1 holds. Then the guidance law (23), (24) and (25) navigates the barking drone from $O$ to $T$ along $E(t)$, and enables the drone to stay at $T$.
Note that $\vec{d}^{*}(t)$ consists of two vector components, $\vec{d}(t)$ and $\vec{s}(t)$, where $\vec{d}(t)$ keeps the drone on $E(t)$ and $\vec{s}(t)$ navigates the drone to $T$. Thus, Theorem 2 can be proved similarly to Theorem 1.
Suppose that the barking drone has arrived at the point $O$ at time $t_0$. Introduce a direction index $\delta$, with $\delta = 1$ for clockwise flying and $\delta = -1$ for counterclockwise flying. Given $O$, $\delta$, $B$ and $\mathcal{E}(t)$, the Fly on edge guidance law solves for the barking drone’s control inputs $u(t)$ and $v(t)$ by commanding one of the two following motions.
4.2.1. TRANSFER
The drone flies to the adjacent edge in the given direction from its current location through a straight line and an arc. Consider that the drone flies from its current edge $e_ie_{i+1}$ to the adjacent edge $e_{i+1}e_{i+2}$. The drone first moves to the turn entry point $O_1$ following the edge sliding guidance law. Then, the drone turns about the turning centre $O_c$ with the minimum turning radius and arrives at the point $O_2$ on the adjacent edge, as shown in Figure 7.
Figure 7.
Illustration of the motion TRANSFER.
Let $r_{\min}$ be the minimum turning radius of the drone. Let $\theta$ denote the interior angle $\angle e_ie_{i+1}e_{i+2}$, which equals the angle of the convex hull at the vertex corresponding to $e_{i+1}$ and can be obtained from an equation similar to (10). The points $O_1$ and $O_2$ are the tangent points of the turning circle on the two edges, both at distance $r_{\min}/\tan(\theta/2)$ from $e_{i+1}$ (Equations (26) and (27)), where $d_0$ denotes the drone-to-animal distance, as defined above. To avoid dispersing any animals, the turning trajectory should not touch the convex hull of the animals, which always holds in the case $r_{\min} \le d_0$. If $r_{\min} > d_0$, an additional inequality relating $r_{\min}$, $d_0$ and $\theta$ needs to be satisfied (28).
Remark 4.
If (28) does not hold, the drone directly flies to $e_{i+1}$ following the edge sliding guidance law, then stops and changes direction towards $e_{i+2}$, which is slower than following the arc trajectory. An isolated animal far away from all the other animals may cause a very small θ and lead to this problem.
4.2.2. BRAKE
If the drone has arrived at the edge containing $B$, it flies to $B$ following the edge sliding guidance law. Let $e_c$ denote the vertex closest to the drone on the opposite side of the given direction, and let $\mathcal{B}$ be the set of vertices located between $O$ and $B$ along the given direction.
We are now in a position to present the Fly on edge guidance law, shown in Algorithm 1. Specifically, the drone approaches the edge that contains the destination B by performing TRANSFER multiple times. The drone starts BRAKE once $\mathcal{B}$ is found to be an empty set (i.e., $\mathcal{B} = \emptyset$), which means the drone has arrived at the edge containing B; the drone then reaches B through BRAKE. The presented guidance law navigates the barking drone from any point on the extended hull to a selected vertex following a given direction, and stops the drone at the selected vertex.
| Algorithm 1: Fly On Edge Guidance Law. |
Input: $O$, $\delta$, $B$, $\mathcal{E}(t)$. 1: Find $\mathcal{B}$ and $e_c$; if $\mathcal{B} = \emptyset$, go to line 4; 2: [$u(t)$, $v(t)$] = TRANSFER($e_c$, $\delta$); 3: Repeat lines 1, 2; 4: [$u(t)$, $v(t)$] = BRAKE($e_c$, $B$). |
4.3. Selection and Allocation of Steering Points
We now find the optimal positions for the barking drones to effectively gather animals. Aiming to minimise the maximum animal-to-centroid distance in the shortest time, at any time t we choose the animals with the largest animal-to-centroid distances as the target animals. These animals are also the convex hull vertices that are farthest from $C(t)$. Since the barking drones have their motion restricted to the extended hull, we select the extended hull vertices corresponding to the target animals as the optimal drone positions for steering the target animals towards $C(t)$. From now on, we call these corresponding extended hull vertices the steering points, denoted by the set $\mathcal{S}(t) = \{s_1(t), \ldots, s_{N_s}(t)\}$.
Remark 5.
For a large flock of animals, $N_s \ge N_d$ generally holds. In the case of $N_s < N_d$, we let the $N_d - N_s$ barking drones that are farthest from the steering points quit the gathering task and stand by at their current locations. These drones may rejoin the gathering task when $N_s$ increases afterwards.
Definition 2.
The allocation of steering points specifies which drone goes to which steering point and in which direction, that is, clockwise or counterclockwise.
The optimal allocation of steering points should meet the following two requirements:
- No collision happens when each drone is flying to its allocated steering point along the extended hull;
- With requirement 1 met, the maximum travel distance of the drones is minimised.
Suppose that all the drones have arrived at the extended hull at time $t_0$. We relabel the drones so that the index of the drone increases in the counterclockwise direction. Let $M$ be the perimeter of the extended hull. Imagine that we disconnect the extended hull at the position of the first drone and ’straighten’ it into a straight line segment of length $M$, so that the drones’ positions become points on the line segment. Based on this line segment we build a one-dimensional (1D) coordinate axis denoted as the $z$ axis. Let $z_1, \ldots, z_{N_d}$ be the 1D coordinates of the drones’ positions on the $z$ axis, with $z_1 = 0$ at the origin. We have $0 = z_1 < z_2 < \cdots < z_{N_d} < M$, as shown in Figure 8a. It can be seen that left and right flying on the $z$ axis correspond to counterclockwise and clockwise flying on the extended hull, respectively.
Figure 8.
Examples of (a) the drones’ positions on the z axis, (b) the steering points’ positions on the z axis, (c) the steering points on the $z'$ axis, and (d) the drones’ positions, the steering points’ positions and the travel routes on the $z'$ axis. The blue arrows stand for the drones’ travel routes.
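A sketch of the ’straightening’ step follows: each position on the extended hull boundary is mapped to its arc length from the first drone’s position along the perimeter. The function name and the numerical tolerance are our own choices.

```python
import numpy as np

def z_coordinates(vertices, points):
    """Map boundary positions to 1D z-axis coordinates (arc lengths).

    vertices: (K, 2) extended-hull vertices in counterclockwise order;
    points: (N, 2) positions assumed to lie on the hull boundary;
    points[0] is the cut (the first drone). Returns (z, M)."""
    def arc_length(q):
        total = 0.0
        for i in range(len(vertices)):
            a, b = vertices[i], vertices[(i + 1) % len(vertices)]
            e = b - a
            t = np.dot(q - a, e) / np.dot(e, e)
            if 0.0 <= t <= 1.0 and np.linalg.norm(a + t * e - q) < 1e-6:
                return total + t * np.linalg.norm(e)
            total += np.linalg.norm(e)
        raise ValueError("point not on the extended hull")

    M = sum(np.linalg.norm(vertices[(i + 1) % len(vertices)] - vertices[i])
            for i in range(len(vertices)))      # perimeter
    z0 = arc_length(points[0])                  # place the cut at z = 0
    return np.array([(arc_length(q) - z0) % M for q in points]), M
```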
We will also need the following notation to present our algorithm. Let $\mathcal{S} = \{s_1, \ldots, s_{N_d}\}$ be a set of allocated steering points with the corresponding set of $z$ axis coordinates $\{z^s_1, \ldots, z^s_{N_d}\}$, as shown in Figure 8b; $z^s_j$ is the destination of drone $j$ on the $z$ axis. Note that $z^s_1 < \cdots < z^s_{N_d}$ may not hold. Let $\{\gamma_1, \ldots, \gamma_{N_d}\}$ be the set of the drones’ travel distances for reaching their allocated steering points. We now define three variables $\alpha_j$, $\lambda_j$ and $\rho_j$ to indicate the flying direction and extent of drone $j$. Specifically, let $\alpha_j = 1$ if drone $j$ reaches $z^s_j$ by right flying on the $z$ axis, and $\alpha_j = -1$ if drone $j$ reaches $z^s_j$ by left flying. Furthermore, let $\rho_j = 1$ if drone $j$ passes the cut point of the extended hull by right flying to reach $z^s_j$, and $\rho_j = 0$ otherwise. Similarly, let $\lambda_j = 1$ if drone $j$ passes the cut point by left flying, and $\lambda_j = 0$ otherwise. Given $z_j$, $z^s_j$ and $\alpha_j$, the indicators $\lambda_j$ and $\rho_j$ can be computed accordingly (Equations (29) and (30)).
The main notations are listed in Table 1. Since the line segment is generated by straightening the enclosed extended hull, the routes of drones that pass the cut by left flying wrap around to the right end of the line segment, and the routes of drones that pass the cut by right flying wrap around to the left end. We now imagine extending the line segment to $(-M, 2M)$ and building another 1D coordinate axis, the $z'$ axis, as shown in Figure 8c. On the $z'$ axis, the destination of a drone that passes the cut by left flying appears on $(-M, 0)$, and the destination of a drone that passes the cut by right flying appears on $(M, 2M)$. Let $\{z'_1, \ldots, z'_{N_d}\}$ be the set of 1D coordinates on the $z'$ axis corresponding to $\{z^s_1, \ldots, z^s_{N_d}\}$. Then, the mapping between them is obtained by:

$$z'_j = z^s_j + (\rho_j - \lambda_j)\,M. \tag{31}$$
Table 1.
Notations and Descriptions.
Placing the destinations on the $z'$ axis, as shown in Figure 8c, the travel route of any drone $j$ is the segment from $z_j$ to $z'_j$. We obtain the expression for the travel distances as follows:

$$\gamma_j = |z'_j - z_j|. \tag{32}$$
Then, the steering points allocation optimisation problem is formulated as follows:

$$\min \; \max_{j} \; \gamma_j \tag{33}$$

s.t.

$$z'_{N_d} - z'_1 < M, \tag{34}$$
$$z'_j < z'_{j+1}, \qquad j = 1, \ldots, N_d - 1, \tag{35}$$

where (33) minimises the travel distance of the drone farthest from its allocated steering point.
Assumption 3.
All the drones start flying to their allocated steering points at the same time, following the proposed Fly on edge guidance law.
Theorem 3.
Suppose that Assumption 3 holds. Then, (34), (35) guarantee that no collision happens when the drones are flying to their allocated steering points.
Proof.
Suppose that all the drones start flying to their steering points at time $t_0$. Let $t_j$ be the time at which drone $j$ arrives at its steering point, that is, at $z'_j$. From (14) and (25), at any time $t \in [t_0, t_j]$, the coordinate of drone $j$ moves monotonically from $z_j$ towards $z'_j$. From the proof of Theorem 1, the distance between the drone and the extended hull remains minimised after drone $j$ has arrived at the extended hull, so drone $j$ stays on the extended hull while travelling. Since drone $j$ moves from $z_j$ to $z'_j$ along the $z'$ axis, it can be obtained from (23) and (3) that its coordinate always lies between $z_j$ and $z'_j$. Then, by (35) and the initial ordering $z_1 < \cdots < z_{N_d}$, the coordinates of drone $j$ and drone $j+1$ never coincide before they arrive at their steering points, which means drone $j+1$ will not collide with drone $j$. Moreover, the actual distance between drone 1 and drone $N_d$ is measured across the cut of the extended hull; given (34), it can similarly be shown to remain positive. Therefore, (34) guarantees that drone 1 will not collide with drone $N_d$, and (35) guarantees that each drone will not collide with its neighbours. This completes the proof of Theorem 3. □
Remark 6.
For $N_d$ drones, $N_d$ allocated steering points and two possible directions for each drone, the number of possible allocations is $N = N_d! \cdot 2^{N_d}$.
Since $N_d$ is often a limited number, $N$ will be limited as well. Therefore, the optimal allocation can be found by generating and searching all the possible allocations. We are now in a position to present the algorithm to find the optimal steering points allocation, as shown in Algorithm 2.
| Algorithm 2: Optimal Steering Points Allocation. |
Input: $\mathcal{S}(t)$, $\mathcal{E}(t)$, $\mathcal{D}(t)$. 1: Find the $z$ axis coordinates of the drones and of the steering points; 2: Generate the $N$ possible allocations; 3: For each allocation, calculate $\{z'_j\}$ and $\{\gamma_j\}$ from (29)–(32); 4: Discard the allocations violating (34), (35); 5: Solve (33) by searching the remaining allocations. |
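Under the notational reconstruction above, a brute-force search in the spirit of Algorithm 2 and Remark 6 might look as follows. The collision-free checks mirror our reading of (34) and (35) and should be treated as assumptions rather than the authors’ exact formulation.

```python
import numpy as np
from itertools import permutations, product

def allocate_steering_points(z_drones, z_targets, M):
    """Brute-force search over the N_d! * 2^N_d candidate allocations.

    z_drones: sorted ascending z coordinates of the drones
    (z_drones[0] == 0 at the cut); z_targets: z coordinates of the
    steering points; M: perimeter of the extended hull. Returns the
    (assignment, directions, travel distances) minimising the maximum
    travel distance among collision-free candidates, or None."""
    n, best = len(z_drones), None
    for perm in permutations(range(n)):
        for dirs in product((-1, 1), repeat=n):   # -1: left/ccw, +1: right/cw
            zp = []
            for j in range(n):
                delta = (z_targets[perm[j]] - z_drones[j]) % M  # right-flying arc
                zp.append(z_drones[j] + (delta if dirs[j] > 0 else delta - M))
            # Collision-free on the unwrapped z' axis: neighbour order is
            # preserved (35) and the two drones across the cut stay less
            # than one perimeter apart (34).
            if all(zp[j] < zp[j + 1] for j in range(n - 1)) and zp[-1] - zp[0] < M:
                dist = [abs(zp[j] - z_drones[j]) for j in range(n)]
                if best is None or max(dist) < max(best[2]):
                    best = (perm, dirs, dist)
    return best
```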
Suppose that the gathering task starts at $t = 0$. The proposed herding system first navigates all the barking drones to the extended hull by the Fly to edge guidance law. Then, the system recalculates the optimal steering points allocation after every sampling interval and navigates the barking drones to their allocated steering points by the Fly on edge guidance law, until (7) is satisfied. It is worth mentioning that the optimal allocation may change before some drones arrive at their allocated steering points due to the animals’ movement. The gathering task, however, will not be interrupted, because as long as the barking drones are flying on the extended hull, the animals inside the drones’ barking cones are repulsed towards $C(t)$.
5. Drones Motion Control for Driving
Suppose that (7) is satisfied at time $t_g$; the goal then becomes driving the gathered animals to a desired location, for example, the centre of a sheepfold. The convex hull of the gathered animals will be close to a circle, so for simplicity, from now on we use the smallest enclosing circle to describe the footprint of the gathered animals. Let $r(t)$ and $c(t)$ be the radius and centre of the animals’ smallest enclosing circle during driving, respectively. Similar to the definition of the extended hull, we define the extended circle as the circle of radius $r(t) + d_0$ centred at $c(t)$.
According to (7), $r(t_g) \le R_g$. Imagine a point $P(t)$ moving from $c(t_g)$ to $G$ with a constant speed $v_{dr}$ for $t \in [t_g, t_f]$, where $t_f$ denotes the time when the driving task is finished (i.e., (8) is satisfied). Given $c(t_g)$, $G$ and $v_{dr}$, $P(t)$ can be computed by:

$$P(t) = c(t_g) + v_{dr}\,(t - t_g)\,\frac{G - c(t_g)}{\|G - c(t_g)\|}.$$
We aim to drive the animals so that $c(t)$ follows the moving point $P(t)$ from $c(t_g)$ to $G$, with $v_{dr}$ as the driving speed. Note that a smaller $v_{dr}$ is preferred for a larger number of animals $N_a$, because a larger flock of animals tends to move more slowly. To this end, we adopt a side-to-side movement for the barking drones, which is a common animal driving strategy that can also be seen in [15,42], and so forth. Let $L(t)$ be the line perpendicular to the segment $P(t)G$ that passes through $P(t)$. Let $\Phi(t)$ be the semicircle of the extended circle cut by $L(t)$ that is farther from $G$; see Figure 9. Let $\{B_1, \ldots, B_{N_d+1}\}$ be a set of points evenly distributed on $\Phi(t)$. Each drone $j$ is then deployed to fly on $\Phi(t)$ with $B_j$ and $B_{j+1}$ as its start and end points, respectively, as shown in Figure 9. With $P(t)$ approaching $G$, the side-to-side movements of the barking drones can ’push’ the animals towards $G$ while keeping them aggregated.
Figure 9.
Barking drone deployment for animal driving, where the star marker stands for the designated location G and the red dot stands for $P(t)$. The dark red arrows stand for the side-to-side trajectories of the barking drones.
Given $c(t)$, $r(t)$, $d_0$ and $N_d$, the points $B_j$ can be computed by:

$$B_j = c(t) + \big(r(t) + d_0\big)\begin{bmatrix}\cos\phi_j(t) \\ \sin\phi_j(t)\end{bmatrix}, \qquad \phi_j(t) = \phi_G(t) + \frac{\pi}{2} + \frac{(j-1)\pi}{N_d}, \quad j = 1, \ldots, N_d + 1,$$

where $\phi_G(t)$ is the direction angle from $c(t)$ to $G$.
Specifically, once (7) is satisfied, all the barking drones immediately fly to the extended circle following a guidance law similar to the Fly to edge law introduced in Section 4.1. It is worth mentioning that the extended polygon is inscribed in the extended circle, so the process of the drones flying to their closest points on the extended circle will not disperse any animal. After reaching the extended circle, the barking drones fly to their allocated start points in $\{B_j\}$ following a guidance law similar to the Fly on edge law introduced in Section 4.2. The allocation of the start points can be found by an algorithm similar to Algorithm 2 introduced in Section 4.3. Then, drone $j$ continuously flies between $B_j$ and $B_{j+1}$ along $\Phi(t)$, as shown in Figure 9, until (8) is satisfied.
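The endpoints of the side-to-side arcs can be computed as in the sketch below, which places $N_d + 1$ evenly spaced points on the rear semicircle of the extended circle; the parameterisation by the direction angle towards G is our reconstruction.

```python
import numpy as np

def driving_waypoints(c, G, r, d0, n_drones):
    """Endpoints B_1..B_{n_drones+1} of the side-to-side arcs.

    c: (2,) flock centre; G: (2,) designated location; r: footprint
    radius; d0: drone-to-animal distance; n_drones: number of drones.
    Drone j sweeps the arc between B_j and B_{j+1}."""
    phi_G = np.arctan2(G[1] - c[1], G[0] - c[0])   # direction towards G
    # Rear semicircle (farther from G): angles phi_G + pi/2 .. phi_G + 3*pi/2.
    phis = phi_G + np.pi / 2 + np.pi * np.arange(n_drones + 1) / n_drones
    return np.stack([c[0] + (r + d0) * np.cos(phis),
                     c[1] + (r + d0) * np.sin(phis)], axis=1)
```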
6. Results
In this section, the performance of the proposed method is evaluated using MATLAB. Each simulation is run 20 times. The animal motion dynamics parameters are chosen based on the field tests with real sheep conducted in [15], as shown in Table 2. Table 2 also lists the parameters of the barking drones used unless otherwise specified below.
Table 2.
Simulation Parameter Values.
For comparison, we introduce an intuitive collision-free method as the benchmark. Specifically, the benchmark method divides the extended hull into $N_d$ segments of equal length at any time during gathering. Each drone is allocated to a segment and performs the aforementioned side-to-side movement on the extended hull until (7) is satisfied. The benchmark method adopts the same driving strategy as the proposed method. We consider animals that are initially randomly distributed in a 1200 m by 600 m area, referred to as the initial field.
We first present some illustrative results showing 4 barking drones in two cases, herding 200 and 1000 animals, respectively; see https://youtu.be/KMWxrlkU6t0 (accessed on 8 December 2021) and https://youtu.be/KPGrAcgPH8Q (accessed on 8 December 2021). We can observe that the proposed method completes the gathering task in 9.5 min for the case with 200 animals and 10.1 min for the case with 1000 animals. The total times for gathering and driving are 15 and 18.2 min for these cases, respectively. The benchmark method, however, uses about 4.9 and 4.1 more minutes to complete these missions. Figure 10a shows how the animals’ footprint radius changes with time t for these cases. We also present snapshots at several time instants for the case of 1000 animals in Figure 10b–f.
Figure 10.
(a) Animals’ footprint radius versus time t for herding 200 and 1000 animals with 4 barking drones using both the proposed and the benchmark method; (b) initial distribution of the 1000 animals; (c–f) snapshots of the 1000 animals at selected times for both methods.
Interestingly, Figure 10a reveals that the difference between the time for gathering 200 animals and 1000 animals is not that pronounced, with both the proposed method and the benchmark method. The proposed method, however, always uses less time to complete the gathering mission. This is because the proposed method always chases and repulses the animals that are farthest from the centre, while the benchmark method repulses the animals indiscriminately. Therefore, the animals’ footprint with the proposed method becomes increasingly round during shrinking, while the animals’ footprint with the benchmark method becomes long and narrow. This can be observed by comparing Figure 10c,e and Figure 10d,f.
Note that the time consumed by flying to the edge and by the driving task mainly depends on the initial locations of the drones and the animals. From now on, we focus on evaluating the average gathering time after the drones have arrived on the extended hull. The aforementioned minor difference between herding 200 and 1000 animals is very likely because the gathering time is strongly correlated with the size of the initial field rather than the number of animals. To confirm this, we change the initial field into a square and investigate the relationship between the gathering time and the side length of the initial square field; see Figure 11a. It reveals that the average gathering time increases significantly with the initial square field length, which supports the conjecture that the gathering time is strongly correlated with the size of the initial field. The reason is that the gathering time mainly depends on the movement of the animals on the edge, and particularly on the time they need to travel to the area close to $C(t)$. With a fixed maximum animal speed and the same repulsion from the barking drones, the travel distances of these animals are dominated by the size of the initial field. Moreover, Figure 11a shows that the gap between the gathering times of the benchmark method and the proposed method grows with the initial square field length, meaning the benchmark method is more ‘sensitive’ to the field size. We further investigate the relationship between the gathering time and the number of barking drones $N_d$; see Figure 11b. Not surprisingly, the average gathering time decreases significantly as $N_d$ increases for both methods. Besides, Figure 11b shows that the superiority of the proposed method becomes more obvious as $N_d$ increases.
Figure 11.
Comparisons of the average gathering time for different values of (a) the side length of the initial square field; (b) the number of barking drones $N_d$.
Next, we investigate the impact of the drone speed and the animal speed on the gathering time; see Figure 12. Figure 12a shows that slower drones lead to a higher average gathering time, especially at small maximum drone speeds $v_{\max}$, for both the benchmark method and the proposed method, and that the average gathering time of the benchmark method is more ‘sensitive’ to $v_{\max}$ in this regime. In addition, in our simulations, drones below a certain maximum speed cannot accomplish the gathering task at all using the benchmark method. In the implementation of the proposed method, higher values of $v_{\max}$ are preferable, although the average gathering time reduces only marginally once $v_{\max}$ is sufficiently large. Figure 12b shows that animals with a higher maximum speed can be gathered in a shorter time. In particular, with the proposed method, the average gathering time reduces by around 38% (from 14.7 min to 9.1 min) when $v_a$ increases from 2 m/s to 5 m/s; for the benchmark method, the average gathering time reduces by around 39% (from 20.8 min to 12.7 min). Therefore, the relative reduction of the average gathering time is much smaller than the relative increase of $v_a$ over this range, for both methods.
Figure 12.
Comparisons of the average gathering time for different values of (a) the maximum drone speed $v_{\max}$; (b) the maximum animal speed $v_a$.
We investigate the impact of the barking cone radius $r_b$, the drone-to-animal distance $d_0$, and the barking cone angle β on the gathering time; see Figure 13. Figure 13a presents the relationship between the barking radius $r_b$ and the gathering time with 200 animals and 1000 animals, respectively. We can observe that increasing $r_b$ accelerates the gathering up to a certain radius, beyond which the average gathering time increases with $r_b$, contrary to our expectation. One possible reason is that the gathering time mainly depends on the animals on the edges: if $r_b$ is too large, the repulsive forces inflicted by different barking drones may interfere with each other, which may slow down the gathering. Moreover, Figure 13a shows that the proposed method is more ‘sensitive’ to $r_b$ when gathering more animals with a large $r_b$. This is because the proportion of the repulsed animals that are near the edges tends to decrease as $N_a$ increases, for a fixed $r_b$.
Figure 13.
Comparisons of the average gathering time for different values of (a) the barking cone radius $r_b$; (b) the drone-to-animal distance $d_0$; (c) the barking cone angle β.
Figure 13b suggests that the average gathering time decreases as $d_0$ increases while $d_0$ is small. One possible reason is that if $d_0$ is too small, more animals are repulsed in directions that do not point towards the centre, since the repulsive force from the barking drone points directly away from it and the barking zone is fan-shaped; this effect can also be regarded as interference. However, increasing $d_0$ decelerates the gathering once $d_0$ is large, and this becomes more obvious with more animals. This is reasonable because increasing $d_0$ is almost equivalent to decreasing $r_b$ when $r_b$ is fixed.
Figure 13c indicates that the gathering time of the proposed method is significantly less than that of the benchmark method across the tested values of β, with both 200 animals and 1000 animals. It can also be seen from Figure 13c that the gathering time is not monotone in β. The gathering time at the middle tested value of β, however, is slightly less than in the cases with smaller and larger β, for both the proposed method and the benchmark method.
We are also interested in the impact of the measurement errors of the observer on our method. We add random noise to the measured positions of the animals, with noise amplitudes from 2 to 10 m, and conduct 20 simulations with 200 animals and 1000 animals independently for each noise amplitude. The results are shown in Figure 14. The average impact of the measurement error on the gathering time is relatively small: the average gathering time increases slightly as the measurement error grows from 2 to 10 m for both cases. For example, from no measurement error to a 10 m error, the average gathering time for 1000 animals increases from 11.7 min to 12.6 min, a difference of less than 1 min. Moreover, the impact of measurement errors on the average gathering time is even less significant in the case of 200 animals (see Figure 14).
Figure 14.
Impact of measurement errors.
In summary, the computer simulation results in this section demonstrate the performance of the proposed method. They confirm that the proposed method can efficiently herd a large number of farm animals and outperforms the benchmark method. Investigating the impact of the system parameters shows that a higher drone speed leads to a shorter gathering time, and that the barking cone radius and the drone-to-animal distance also significantly affect the gathering time. Their optimal values can be obtained via experiments with real animals.
7. Conclusions
In this paper, we proposed a novel robotic herding system based on autonomous barking drones. We developed a collision-free sliding mode based motion control algorithm, which navigates a network of barking drones to efficiently collect a flock of animals when they are too dispersed and to drive them to a designated location. Simulations using a dynamic model of animal flocking based on Reynolds’ rules showed that the proposed drone herding system can efficiently herd a thousand animals with several drones. A unique contribution of this paper is the proposal of the first prototype for herding a large flock of farm animals by autonomous drones. Future work includes conducting experiments with real farm animals to test the proposed method. Moreover, the sound from drones may have some unknown effects on the animals and their responses; this aspect can also be studied in field experiments.
Author Contributions
Conceptualization, X.L.; methodology, X.L.; software, X.L.; validation, X.L.; formal analysis, X.L.; investigation, X.L. and H.H.; resources, X.L.; data curation, X.L. and J.Z.; writing—original draft preparation, X.L.; writing—review and editing, X.L., H.H. and J.Z.; visualization, X.L.; supervision, A.V.S.; project administration, J.Z.; funding acquisition, A.V.S. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the Australian Research Council. Also, this work received funding from the Australian Government, via grant AUSMURIB000001 associated with ONR MURI grant N00014-19-1-2571.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Elijah, O.; Rahman, T.A.; Orikumhi, I.; Leow, C.Y.; Hindia, M.N. An overview of Internet of Things (IoT) and data analytics in agriculture: Benefits and challenges. IEEE Internet Things J. 2018, 5, 3758–3773. [Google Scholar] [CrossRef]
- Birrell, S.; Hughes, J.; Cai, J.Y.; Iida, F. A field-tested robotic harvesting system for iceberg lettuce. J. Field Robot. 2020, 37, 225–245. [Google Scholar] [CrossRef] [PubMed]
- Ahmed, N.; De, D.; Hussain, I. Internet of Things (IoT) for smart precision agriculture and farming in rural areas. IEEE Internet Things J. 2018, 5, 4890–4899. [Google Scholar] [CrossRef]
- Marini, D.; Llewellyn, R.; Belson, S.; Lee, C. Controlling within-field sheep movement using virtual fencing. Animals 2018, 8, 31. [Google Scholar] [CrossRef] [PubMed]
- Yao, Y.; Sun, Y.; Phillips, C.; Cao, Y. Movement-aware relay selection for delay-tolerant information dissemination in wildlife tracking and monitoring applications. IEEE Internet Things J. 2018, 5, 3079–3090. [Google Scholar] [CrossRef]
- Achour, B.; Belkadi, M.; Aoudjit, R.; Laghrouche, M. Unsupervised automated monitoring of dairy cows’ behavior based on Inertial Measurement Unit attached to their back. Comput. Electron. Agric. 2019, 167, 105068. [Google Scholar] [CrossRef]
- Vaughan, R.; Sumpter, N.; Frost, A.; Cameron, S. Robot Sheepdog Project Achieves Automatic Flock Control. In Proceedings of the Fifth International Conference on the Simulation of Adaptive Behaviour; MIT Press: Cambridge, MA, USA, 1998; pp. 489–493. Available online: https://ieeexplore.ieee.org/document/6278703 (accessed on 28 May 2020).
- Sumpter, N.; Bulpitt, A.J.; Vaughan, R.T.; Tillett, R.D.; Boyle, R.D. Learning Models of Animal Behaviour for a Robotic Sheepdog; MVA: Chiba, Japan, 1998; pp. 577–580. [Google Scholar]
- Evered, M.; Burling, P.; Trotter, M. An investigation of predator response in robotic herding of sheep. Int. Proc. Chem. Biol. Environ. Eng. 2014, 63, 49–54. [Google Scholar]
- BBC. Robot Used to Round Up Cows is a Hit with Farmers. Available online: https://www.bbc.com/news/technology-24955943 (accessed on 28 May 2020).
- Sciencealert. Spot the Robot Sheep Dog. Available online: https://www.sciencealert.com/spot-the-robot-dog-is-now-herding-sheep-in-new-zealand (accessed on 28 May 2020).
- IEEE Spectrum. Swagbot to Herd Cattle. Available online: https://spectrum.ieee.org/automaton/robotics/industrial-robots/swagbot-to-herd-cattle-on-australian-ranches (accessed on 28 May 2020).
- Telegraph. Britain’s Most Expensive Sheepdog. Available online: https://www.telegraph.co.uk/news/2016/05/14/britains-most-expensive-sheepdog-sells-for-15000-at-auction/ (accessed on 28 May 2020).
- Gazi, V.; Fidan, B.; Marques, L.; Ordonez, R.; Kececi, E.; Ceccarelli, M. Robot swarms: Dynamics and control. In Mobile Robots for Dynamic Environments; eBooks; ASME: New York, NY, USA, 2015; pp. 79–107. [Google Scholar]
- Strömbom, D.; Mann, R.P.; Wilson, A.M.; Hailes, S.; Morton, A.J.; Sumpter, D.J.; King, A.J. Solving the shepherding problem: Heuristics for herding autonomous, interacting agents. J. R. Soc. Interface 2014, 11, 20140719. [Google Scholar] [CrossRef] [PubMed]
- Hoshi, H.; Iimura, I.; Nakayama, S.; Moriyama, Y.; Ishibashi, K. Robustness of Herding Algorithm with a Single Shepherd Regarding Agents’ Moving Speeds. J. Signal Process. 2018, 22, 327–335. [Google Scholar] [CrossRef]
- Hoshi, H.; Iimura, I.; Nakayama, S.; Moriyama, Y.; Ishibashi, K. Computer simulation based robustness comparison regarding agents’ moving-speeds in two-and three-dimensional herding algorithms. In Proceedings of the 2018 Joint 10th International Conference on Soft Computing and Intelligent Systems (SCIS) and 19th International Symposium on Advanced Intelligent Systems (ISIS), Toyama, Japan, 5–8 December 2018; pp. 1307–1314. [Google Scholar]
- Pierson, A.; Schwager, M. Bio-inspired non-cooperative multi-robot herding. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 1843–1849. [Google Scholar]
- Pierson, A.; Schwager, M. Controlling noncooperative herds with robotic herders. IEEE Trans. Robot. 2017, 34, 517–525. [Google Scholar] [CrossRef]
- Singh, H.; Campbell, B.; Elsayed, S.; Perry, A.; Hunjet, R.; Abbass, H. Modulation of Force Vectors for Effective Shepherding of a Swarm: A Bi-Objective Approach. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 2941–2948. [Google Scholar]
- Vayssade, J.A.; Arquet, R.; Bonneau, M. Automatic activity tracking of goats using drone camera. Comput. Electron. Agric. 2019, 162, 767–772. [Google Scholar] [CrossRef]
- Barbedo, J.G.A.; Koenigkan, L.V.; Santos, P.M.; Ribeiro, A.R.B. Counting Cattle in UAV Images’ Dealing with Clustered Animals and Animal/Background Contrast Changes. Sensors 2020, 20, 2126. [Google Scholar] [CrossRef] [PubMed]
- Huang, H.; Savkin, A.V. An Algorithm of Reactive Collision Free 3-D Deployment of Networked Unmanned Aerial Vehicles for Surveillance and Monitoring. IEEE Trans. Ind. Inform. 2020, 16, 132–140. [Google Scholar] [CrossRef]
- Li, X.; Huang, H.; Savkin, A.V. A Novel Method for Protecting Swimmers and Surfers From Shark Attacks Using Communicating Autonomous Drones. IEEE Internet Things J. 2020, 7, 9884–9894. [Google Scholar] [CrossRef]
- Huang, H.; Savkin, A.V. A Method for Optimized Deployment of Unmanned Aerial Vehicles for Maximum Coverage and Minimum Interference in Cellular Networks. IEEE Trans. Ind. Inform. 2019, 15, 2638–2647. [Google Scholar] [CrossRef]
- Savkin, A.V.; Huang, H. Navigation of a Network of Aerial Drones for Monitoring a Frontier of a Moving Environmental Disaster Area. IEEE Syst. J. 2020, 14, 4746–4749. [Google Scholar] [CrossRef]
- Paranjape, A.A.; Chung, S.J.; Kim, K.; Shim, D.H. Robotic herding of a flock of birds using an unmanned aerial vehicle. IEEE Trans. Robot. 2018, 34, 901–915. [Google Scholar] [CrossRef]
- RaisingSheep. Sheep Herding Dogs. Available online: http://www.raisingsheep.net/sheep-herding-dogs.html (accessed on 28 May 2020).
- The Washington Post. New Zealand Farmers Have a New Tool for Herding Sheep: Drones that Bark Like Dogs. Available online: https://www.washingtonpost.com/technology/2019/03/07/new-zealand-farmers-have-new-tool-herding-sheep-drones-that-bark-like-dogs/ (accessed on 28 May 2020).
- The Wall Street Journal. They’re Using Drones to Herd Sheep. Available online: https://www.wsj.com/articles/theyre-using-drones-to-herd-sheep-1428441684 (accessed on 28 May 2020).
- Li, X. Some Problems of Deployment and Navigation of Civilian Aerial Drones. arXiv 2021, arXiv:2106.13162. [Google Scholar]
- Chengzhi Drone. MP130 Drone Digital Voice Broadcasting System. Available online: https://www.gzczzn.com/productArgumentsServlet?productId=MP130/ (accessed on 28 May 2020).
- DJI. Matrice 300 RTK. Available online: https://www.dji.com/au/matrice-300 (accessed on 28 May 2020).
- AIROBOTICS. Automated Industrial Drones. Available online: https://www.airoboticsdrones.com/ (accessed on 28 May 2020).
- Sun, Y.; Xu, D.; Ng, D.W.K.; Dai, L.; Schober, R. Optimal 3D-trajectory design and resource allocation for solar-powered UAV communication systems. IEEE Trans. Commun. 2019, 67, 4281–4298. [Google Scholar] [CrossRef]
- Fujioka, K.; Hayashi, S. Effective shepherding behaviours using multi-agent systems. In Proceedings of the 2016 IEEE Region 10 Conference (TENCON), Singapore, 22–25 November 2016; pp. 3179–3182. [Google Scholar]
- Reynolds, C.W. Flocks, herds and schools: A distributed behavioral model. In Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, Anaheim, CA, USA, 27–31 July 1987; pp. 25–34. [Google Scholar]
- Wang, C.; Savkin, A.V.; Garratt, M. A strategy for safe 3D navigation of non-holonomic robots among moving obstacles. Robotica 2018, 36, 275–297. [Google Scholar] [CrossRef]
- Utkin, V.I. Sliding Modes in Control and Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
- Drakunov, S.V.; Utkin, V.I. Sliding mode control in dynamic systems. Int. J. Control 1992, 55, 1029–1037. [Google Scholar] [CrossRef]
- Savkin, A.V.; Evans, R.J. Hybrid Dynamical Systems: Controller and Sensor Switching Problems; Birkhauser: Boston, MA, USA, 2002. [Google Scholar]
- Fujioka, K. Effective herding in shepherding problem in v-formation control. Trans. Inst. Syst. Control Inf. Eng. 2018, 31, 21–27. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).