Article

Multi-AUV Cooperative Target Hunting Based on Improved Potential Field in a Surface-Water Environment

1
School of Physics and Electronic Electrical Engineering, Huaiyin Normal University, Huaian 223001, China
2
Western Australian School of Mines, Curtin University, Kalgoorlie 6430, Australia
*
Author to whom correspondence should be addressed.
Appl. Sci. 2018, 8(6), 973; https://doi.org/10.3390/app8060973
Submission received: 7 May 2018 / Revised: 28 May 2018 / Accepted: 31 May 2018 / Published: 14 June 2018
(This article belongs to the Special Issue Swarm Robotics)

Abstract

In this paper, target hunting aims to detect a target and surround it in a surface-water environment using Multiple Autonomous Underwater Vehicles (multi-AUV) in a given area. The main challenges in multi-AUV target hunting are the design of each AUV's motion path and of the coordination mechanism. To conduct cooperative target hunting by multi-AUV in a surface-water environment, an integrated algorithm based on an improved potential field (IPF) is proposed. First, a potential field function is established according to the information of the surface-water environment. Then, the dispersion degree, the homodromous degree, and the district-difference degree are introduced to increase the cooperation of the multi-AUV system. Finally, target hunting is solved by embedding the three degrees into the potential field function. The simulation results show that the proposed approach is applicable and feasible for multi-AUV cooperative target hunting.

1. Introduction

To conduct cooperative target hunting by multi-AUV in an underwater environment, the AUVs not only need to handle basic problems (such as searching and path planning) but also need to cooperate in order to catch the targets efficiently [1]. Target hunting by multi-AUV has attracted much attention due to its complexity and significance [2,3]. Much research has been done on the multi-AUV hunting issue, and several approaches have been proposed for it. Zhang et al. [4] presented a hunting approach derived from the virtual structure. The advantage of the virtual structure approach is that the formation can be maintained very well while maneuvering. Rezaee et al. [5] considered formation control of a team of mobile robots based on the virtual structure. The main advantage of this approach is that it is fairly easy to prescribe the coordinated behavior for the whole formation group, and formation feedback adds a measure of robustness.
However, the disadvantage of the virtual structure is that requiring the formation to act as a rigid virtual structure limits the class of potential applications.
Recently, some research has dealt with the hunting process in the presence of simple obstacles. Yamaguchi [6] proposed a method based on making troop formations for enclosing the target and presented a smooth time-varying feedback control law for coordinating the motions of multiple robots. Pan [7] applied an improved reinforcement learning algorithm to the multi-robot hunting problem. However, in these studies, the hunting target is usually static.
To address the shortcomings mentioned above, Ma [8] proposed a cooperative hunting strategy with dynamic alliance to chase a moving target. This method can shorten the completion time to some extent. Wang [9] proposed a new hunting method with newly defined concepts of occupation and overlapping angle and calculated an optimized path for the hunting robots, but the environment is too open, and the initial locations of the hunting robots are too close to the moving target.
Some work has been reported that adapts the leader-following algorithm to target hunting. For example, Ni et al. [10] presented a bio-inspired neural network model with a formation strategy to complete the hunting task. Liang et al. [11] proposed a leader-following formation control method for mobile robots with a directed tree topology. Qin et al. [12] used a leader-following formation algorithm to guide multi-agent cooperation for the hunting task. These algorithms are easy to understand and implement, since the coordinated team members only need to maneuver according to the leader. However, the leader-following algorithm provides no explicit feedback from the followers to the leader, and the failure of the leader leads to the failure of the whole formation team.
Many neural network approaches have also been proposed for target hunting. For example, Garcia et al. [13] proposed a simple ant colony optimization meta-heuristic (SACOdm) algorithm to solve the path planning problem of autonomous mobile robots. The SACOdm method determines the robots' paths based on the distance from the source to the target nodes, where the ants remember the visited nodes. Zhu et al. [14] proposed a hunting algorithm based on a bio-inspired neural network. The hunting AUVs' paths are guided through the bio-inspired neural network, and the results show that it can achieve the desired hunting result efficiently. Sheng [15] proposed a method based on diffusion adaptation over a network to study intelligent predators hunting fish schools. Although neural networks can complete the hunting task, they often need to perform a large amount of calculation, incurring prohibitive computational cost. Therefore, neural network algorithms are not suitable for real-time systems.
Artificial potential field (APF) algorithms were proposed for real-time path planning and obstacle avoidance and became the most widely studied distributed control methods [16,17,18]. In the APF methods, it is assumed that the robots combine attraction to the goal with repulsion from obstacles. Much work has been reported that adapts the APF algorithm to controlling swarm robots. Rasekhipour et al. [19] introduced a model predictive path-planning controller that incorporates potential functions along with the vehicle dynamics terms. Ge et al. [20] proposed formation tracking control based on a potential field. However, the potential field algorithm is known for getting trapped easily in a local minimum.
This paper focuses on the situation in which the targets are intelligent and their motions are unpredictable and irregular. A multi-AUV hunting algorithm based on the improved potential field (IPF) is presented. The hunting AUVs' paths are guided by the potential field function, and three degrees (the dispersion degree, the homodromous degree, and the district-difference degree) are used to handle target search and capture throughout the hunting process. The proposed algorithm can overcome the local minimum problem. The simulation results show that the hunting efficiency is as desired.
The rest of the paper is organized as follows. The proposed integrated hunting algorithm based on the improved potential field is presented in Section 2. Simulations for various situations are given in Section 3. Finally, the conclusion is given in Section 4.

2. The Improved Potential Field Approach

The potential field approach is commonly used for AUV path planning in a surface-water environment. However, the standard potential field approach is not cooperative when applied to target hunting. In this section, the cooperation method is embedded into the potential field, which is a novel idea for multi-AUV cooperative target hunting. The flow of the target hunting task is shown in Figure 1. In this approach, when one AUV detects a target, the multi-AUV system calculates the distance between each AUV and the target and requests the proper AUVs to accomplish the hunting task. At the same time, the three degrees (the dispersion degree, the homodromous degree, and the district-difference degree) are introduced to increase the collaboration of the multi-AUV system.
In the scheme, the whole potential value $U$ of AUV $R_i$ can be defined as

$$U = \omega_\alpha U_\alpha + \omega_\gamma U_\gamma$$
where $U_\alpha$ denotes the attractive potential value, $U_\gamma$ represents the repellent potential value, and $\omega_\alpha$ and $\omega_\gamma$ are the weights in the distance potential. In static environments, $\omega_\alpha$ can be set within a limited range, while in dynamic environments, a linearly increasing value of $\omega_\alpha$ is applied.
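As a concrete illustration, the weighted combination above can be sketched in Python. The function names and the linear schedule for $\omega_\alpha$ are illustrative assumptions; the paper states only that the weight increases linearly in dynamic environments, not a specific rate.

```python
def total_potential(U_attract, U_repel, w_attract, w_repel):
    """Whole potential of an AUV: weighted sum of the attractive
    and repellent potential values."""
    return w_attract * U_attract + w_repel * U_repel

def attract_weight(step, w0=1.0, rate=0.01, dynamic=True):
    """Attractive weight schedule (illustrative): a fixed value in a static
    environment, a linearly increasing value in a dynamic one."""
    return w0 + rate * step if dynamic else w0
```

For example, with equal weights, `total_potential(2.0, 3.0, 0.5, 0.5)` evaluates to 2.5.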
The attractive potential $U_\alpha$ at position $X_{R_i}$ of AUV $R_i$ is defined as [21]

$$U_\alpha = \begin{cases} \dfrac{1}{2} k_{\alpha 1} \dfrac{N_u}{N_u + N_\alpha} \cdot \dfrac{A - A_\alpha}{A}\, d_0, & \text{if } R_i \text{ continues exploring,} \\[2mm] \dfrac{1}{2} k_{\alpha 2} \left| X_{R_i} - X_{T_l} \right|^2, & \text{if } R_i \text{ moves to targets.} \end{cases}$$
where $k_{\alpha 1}$ and $k_{\alpha 2}$ are position gain coefficients, $X_{T_l}$ is the position of target $T_l$, $l = 1, \ldots, N$, $d_0$ denotes the shortest distance from the AUV to the explored area, and $\left| X_{R_i} - X_{T_l} \right|$ is the relative distance between AUV $R_i$ and target $T_l$. The number of detected targets is $N_\alpha$, while $N_u$ is the number of undetected targets. The variable $A$ denotes the area of the environment, while $A_\alpha$ represents the already explored area.
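A minimal sketch of the two-case attractive potential, under the assumption that positions are 2-D (x, y) tuples; argument names are illustrative, not taken from the paper.

```python
def attractive_potential(exploring, k_a1, k_a2, N_u, N_a, A, A_explored,
                         d0, pos_auv, pos_target):
    """Attractive potential U_alpha.

    While the AUV continues exploring, the potential scales with the fraction
    of undetected targets and of unexplored area; once the AUV moves to a
    target, it is the classic quadratic distance term."""
    if exploring:
        return 0.5 * k_a1 * (N_u / (N_u + N_a)) * ((A - A_explored) / A) * d0
    dx = pos_auv[0] - pos_target[0]
    dy = pos_auv[1] - pos_target[1]
    return 0.5 * k_a2 * (dx * dx + dy * dy)
```

For instance, an AUV at (0, 0) moving to a target at (3, 4) with $k_{\alpha 2} = 2$ gives $0.5 \cdot 2 \cdot 25 = 25$.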
Because there are obstacles in the environment, the AUVs need to plan collision-free paths to complete the task. In this case, the repulsive potential $U_\gamma$ is given by [22]
$$U_\gamma = \begin{cases} \dfrac{\omega_{\gamma D}}{H_D} + \dfrac{\omega_{\gamma H}}{H_H} + \dfrac{\omega_{\gamma DD}}{H_{DD}}, & \text{if } R_i \text{ continues exploring,} \\[2mm] 0, & \text{if } R_i \text{ moves to targets,} \\[2mm] \dfrac{1}{2} \eta \left( \dfrac{1}{d} - \dfrac{1}{d_1} \right) \left| X_{R_i} - X_{T_l} \right|^2, & \text{if } R_i \text{ detects obstacles.} \end{cases}$$
where $\omega_{\gamma D}$, $\omega_{\gamma H}$ and $\omega_{\gamma DD}$ are the density weights, and $H_D$, $H_H$ and $H_{DD}$ are the dispersion degree, homodromous degree and district-difference degree, respectively. $d$ is the nearest distance between the AUV and the detected obstacles, $d_1$ is the influence scope of the obstacles, and $\eta$ is a position gain coefficient. The relative distance $\left| X_{R_i} - X_{T_l} \right|$ between the AUV and the target is added to the function, which ensures that the global minimum of the entire potential field is only at the target.
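The three-case repellent potential can be sketched as follows. The explicit cutoff that zeroes the obstacle term beyond the influence scope $d_1$ is a standard artificial-potential-field convention assumed here, not stated explicitly in the paper; argument names are illustrative.

```python
def repulsive_potential(mode, w_D, w_H, w_DD, H_D, H_H, H_DD,
                        eta, d, d1, pos_auv, pos_target):
    """Repellent potential U_gamma, per the three cases: exploring,
    moving to a target, or reacting to a detected obstacle."""
    if mode == "exploring":
        # Larger degrees (more spread out, more aligned, better distributed)
        # mean smaller repulsion between teammates.
        return w_D / H_D + w_H / H_H + w_DD / H_DD
    if mode == "moving_to_target":
        return 0.0
    # mode == "obstacle": repulsion is active only inside the influence
    # scope d1 (assumed cutoff).
    if d >= d1:
        return 0.0
    dist2 = (pos_auv[0] - pos_target[0]) ** 2 + (pos_auv[1] - pos_target[1]) ** 2
    return 0.5 * eta * (1.0 / d - 1.0 / d1) * dist2
```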
Dispersion degree $H_D$ evaluates how close the AUVs are to each other in distance. If there are $m$ AUVs in an $M \times N$ area, the parameter $H_D$ is calculated by a Gaussian function as [23]

$$H_D = e^{-\frac{(\delta - \mu)^2}{2 \sigma^2}}$$
where δ, µ and σ are calculated by [24]
$$\delta = \frac{\bar{D}}{\sqrt{M^2 + N^2}}$$

$$\mu = \frac{1}{t} \sum_{k=1}^{t} \delta_k$$

$$\sigma = \frac{1}{2} \left[ \max(\delta_k) - \min(\delta_k) \right]$$

$$\bar{D} = \frac{\sum_{j=1}^{m} \sum_{f=j+1}^{m} D(j, f)}{C_m^2} = \frac{2}{m(m-1)} \sum_{j=1}^{m} \sum_{f=j+1}^{m} D(j, f)$$
where $\bar{D}$ is the real-time average distance between the AUVs, and $D(j, f)$ is the distance between AUVs $R_j$ and $R_f$.
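The chain of formulas above (pairwise average distance, diagonal normalization, running mean, spread, Gaussian) can be sketched as one function. The running `history` list and the guard for $\sigma = 0$ on the first time step are implementation assumptions.

```python
import itertools
import math

def dispersion_degree(positions, M, N, history):
    """Dispersion degree H_D.

    `positions` is the list of current AUV (x, y) positions, M x N is the
    workspace size, and `history` accumulates the normalized delta values
    over time (used for the running mean mu and the spread sigma)."""
    m = len(positions)
    pair_dists = [math.dist(p, q) for p, q in itertools.combinations(positions, 2)]
    D_bar = 2.0 / (m * (m - 1)) * sum(pair_dists)   # average pairwise distance
    delta = D_bar / math.sqrt(M * M + N * N)        # normalize by the diagonal
    history.append(delta)
    mu = sum(history) / len(history)                # running mean of delta
    sigma = 0.5 * (max(history) - min(history))     # half the observed spread
    if sigma == 0.0:                                # guard for the first step
        return 1.0
    return math.exp(-(delta - mu) ** 2 / (2 * sigma ** 2))
```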
Homodromous degree $H_H$ evaluates how close the AUVs are to each other in direction. Suppose there are $m$ AUVs with directions $\{\theta_1, \theta_2, \ldots, \theta_m\}$, where $0^\circ \le \theta \le 360^\circ$. The parameter $H_H$ is calculated by [25]

$$H_H = \frac{2}{m(m-1)} \sum_{j=1}^{m} \sum_{f=j+1}^{m} \frac{\mathrm{abs}(\theta_j - \theta_f)}{m_0}$$
where $m_0$ is the number of possible moving directions of the AUV. In this study, each possible direction area is regarded as a 45° sector; therefore, there are eight possible direction areas in the simulations. The function $\mathrm{abs}(\cdot)$ is the absolute value function.
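A minimal sketch of the homodromous degree: the average absolute pairwise heading difference, scaled by the number of direction sectors $m_0$ (eight 45° sectors in this study).

```python
import itertools

def homodromous_degree(headings_deg, m0=8):
    """Homodromous degree H_H: mean absolute pairwise difference of AUV
    headings (degrees), divided by the number of direction sectors m0."""
    m = len(headings_deg)
    total = sum(abs(a - b) for a, b in itertools.combinations(headings_deg, 2))
    return 2.0 / (m * (m - 1)) * total / m0
```

For two AUVs heading 0° and 90°, the degree is 90/8 = 11.25.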
The district-difference degree is used to judge whether all the AUVs stay in the same area. Especially when both $N_u$ and $A_u$ are large, the district-difference degree provides a proper repulsive potential value to keep the AUVs from gathering. In an actual search task, the environment is usually divided into different parts based on the number of targets and the search resources. If both the percentage of undetected targets (denoted $N_u/N$) and the percentage of unexplored area (denoted $A_u/A$) are high, the district-difference degree helps the AUVs to explore separately rather than gather too close to each other. In other words, the density of AUVs in any small part of the environment should be low in this situation. For the calculation, the environment is divided into $N_d$ parts $A_1, A_2, \ldots, A_{N_d}$, where $N_d$ is a square number and $N_d < N$. The value of the district-difference degree can be obtained by [26,27]
$$H_{DD} = \omega_{D1} \frac{A_u}{A} + \omega_{D2} \frac{N_u}{N} + \omega_{D3} \frac{\sum_{i=1}^{m} P(R_i, k)}{m}$$

where $\omega_{D1}$, $\omega_{D2}$ and $\omega_{D3}$ are weights, and $P(R_i, k)$ is the function that judges whether AUV $R_i$ is in the $k$-th part of the environment [26,27]:

$$P(R_i, k) = \begin{cases} 1, & \text{if } R_i \text{ is in the part,} \\ 0, & \text{otherwise.} \end{cases}$$
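The district-difference degree is a straightforward weighted sum; a sketch follows, where `in_part_flags` plays the role of the indicator values $P(R_i, k)$ and all names are illustrative.

```python
def district_difference_degree(w1, w2, w3, A_u, A, N_u, N, in_part_flags):
    """District-difference degree H_DD: weighted sum of the unexplored-area
    ratio, the undetected-target ratio, and the fraction of AUVs inside the
    k-th part (in_part_flags holds P(R_i, k) as 0/1 values)."""
    m = len(in_part_flags)
    return w1 * (A_u / A) + w2 * (N_u / N) + w3 * sum(in_part_flags) / m
```

With unit weights, half the area unexplored, half the targets undetected, and one of two AUVs in the part, the degree is 0.5 + 0.5 + 0.5 = 1.5.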

3. Simulation Studies

To demonstrate the effectiveness of the proposed approach for cooperative hunting by multi-AUV in a surface-water environment, simulations are conducted in MATLAB R2016a (The MathWorks, Inc., Natick, MA, USA). For ease of implementation, the assumptions are as follows. (1) The turning radius of the AUV is negligible in a surface-water environment, so the AUV is assumed to be able to move omni-directionally. (2) The AUVs are assumed to be able to recognize each other and identify their targets by sonar. (3) The AUV velocity is set to a value greater than the target velocity. (4) The AUVs are capable of communicating with each other.
In this simulation, there are six AUVs, two targets, and several static obstacles of different sizes and shapes. The area of the environment is 120 × 120 (m²). The AUVs and targets are allowed to move within the given space. The targets move at random, while the AUVs move according to the proposed algorithm. When a target moves into any AUV's sensing range, this target is regarded as found. Figure 2 shows the condition in which a target is successfully surrounded by the hunting AUVs. When every target has been surrounded by at least four AUVs, the targets are regarded as caught, and the hunting task ends.
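The capture condition above can be checked with a short predicate. The paper states only the at-least-four-AUVs condition; the capture radius value here is an illustrative assumption.

```python
import math

def is_captured(target, auvs, capture_radius=5.0, min_hunters=4):
    """A target counts as caught when at least `min_hunters` AUVs are within
    `capture_radius` of it. `target` and each entry of `auvs` are (x, y)
    positions; the radius value is a stand-in, not from the paper."""
    close = sum(1 for a in auvs if math.dist(a, target) <= capture_radius)
    return close >= min_hunters
```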

3.1. One Target

The first simulation tests the cooperative hunting process without obstacles. It is assumed that there are four hunting AUVs and only one target. Figure 3a shows the initial locations and the initial stage of the hunt. At the beginning of the hunting task, the AUVs search for the target in different directions based on the proposed algorithm. The target moves at random before being discovered. After a while, target T1 is found by AUV R2. Figure 3b shows the AUVs' search trajectories for the target. Because the target has the same intelligence as an AUV except for cooperation, target T1 will try to escape. AUV R2 tracks target T1 and sends the target's location information to the other AUVs. According to the location of target T1, the proposed algorithm automatically plans a collision-free pursuing path for each hunter. Figure 3c shows the trajectories of R1, R2, R3, and R4 hunting target T1. The simulation result shows that the proposed algorithm realizes cooperative hunting in a surface-water environment.
To further validate the performance of the proposed algorithm, a simulation of the hunting process with a dynamic obstacle is provided. Figure 4 clearly shows the process of four AUVs hunting the target while avoiding the dynamic obstacle. The results in Figure 4 validate that the proposed algorithm is applicable to the multi-AUV hunting task and can effectively avoid a dynamic obstacle during path planning.

3.2. Multiple Targets

The second simulation tests the dynamic cooperation when two targets need to be caught. It is assumed that there are two targets and six AUVs. Figure 5 clearly shows the process of six AUVs hunting two targets. Figure 5a shows the distribution of the AUVs, targets, and obstacles. As in the one-target case, the six AUVs begin searching the work area in different directions. Figure 5b shows the AUVs' search trajectories for the first target. Because the target has the same intelligence as an AUV except for cooperation, target T1 will try to escape. AUV R1 tracks target T1 and sends the target's location information to the other AUVs. According to the location of target T1, the multi-AUV system selects the four AUVs closest to T1: R1, R2, R3, and R4 are assigned to target T1. Since R5 and R6 fail in the competition, they do not join the pursuit but keep searching for targets. After the task assignment is complete, the proposed algorithm automatically plans a collision-free pursuing path for each hunter. Figure 5c shows the trajectories of R1, R2, R3, and R4 hunting the first target T1. By the same principle, the second target T2 is found by AUVs R5 and R6 and is then hunted by AUVs R2, R4, R5, and R6. Figure 5d shows the final trajectories of the AUVs hunting the targets. The simulation result shows that the proposed algorithm realizes multi-AUV cooperative hunting for two dynamic targets and can effectively avoid the AUV coordination conflict problem in path planning.

3.3. Some AUVs Break Down

To prove the robustness of the proposed approach, some AUV failures are added in this part of the simulation. When searching real surface-water workspaces, it is possible that AUVs suddenly break down due to mechanical problems, so whether the multi-AUV system can complete its search task through internal adjustment is an important index for measuring the algorithm's cooperation. In this case, the simulation deals with AUV failures in the same environment as that in Section 3.2. Six AUVs are involved in the search task for one target. At the beginning, all six AUVs search for the target normally in the surface-water workspace. After a period of time, target T1 is found by AUV R3. AUV R3 tracks target T1 and sends the target's location information to the other AUVs. According to the location of target T1, the multi-AUV system selects the four AUVs closest to T1: R1, R2, R3, and R6 are assigned to target T1. Since R4 and R5 fail in the competition, they do not join the pursuit but keep searching for targets. One of the assigned AUVs, R3, then breaks down, but the remaining members still function properly (as shown in Figure 6a). Despite the breakdown of R3, AUV R4 replaces it through task reassignment, so the whole team is not paralyzed but keeps working on the hunting task. At the 40th second, AUV R6 also fails (as shown in Figure 6b). Since a distributed architecture is adopted, the remaining AUVs are not affected and continue the hunt, with R5 taking the place of R6. In the end, AUVs R1, R2, R4, and R5 catch target T1. The final trajectories of the AUV team (see Figure 6c) show that the proposed algorithm works satisfactorily in the case of unexpected events, and it does not need any additional changes for different situations.
This simulation shows that the improved potential field algorithm can complete the search task in the case of AUV mechanical failures through dynamic reallocation, which also demonstrates the excellent cooperation of the proposed algorithm.

3.4. Comparison of Different Algorithms

To further validate the performance of the proposed algorithm, it is compared with the potential field (PF) algorithm. The comparison studies involve six AUVs, two targets, and some obstacles in an environment of 120 × 120 (m²). The locations of the targets, AUVs, and obstacles are randomly deployed. Both algorithms are applied to direct the multi-AUVs to hunt all the targets. Under these conditions, the cooperative hunting simulation was run 50 times for each algorithm. To make a clear distinction between the two algorithms, Table 1 lists the mean and standard deviation of the total path length and hunting time for both algorithms. It is reasonable to conclude that the integrated IPF algorithm performs better than PF on every item of the simulation results, distinguishing itself with a shorter path length and time. By analysis, the PF algorithm lacks a cooperation mechanism for the AUVs, whereas IPF not only performs the target hunting but also better completes the task in an environment filled with obstacles.
To further validate the performance of the proposed algorithm, comparison studies with the particle swarm optimization (PSO) algorithm were carried out. The PSO algorithm plans a path by iteratively improving a candidate solution with respect to a fitness function. The comparison studies involve one target, four AUVs, and some obstacles in an environment of 120 × 120 (m²). The locations of the target, AUVs, and obstacles are randomly deployed. The two algorithms are used to direct the multi-AUVs to hunt the target. Figure 7 shows the hunting paths produced by the two algorithms. According to the results in Figure 7, the proposed algorithm completes the target hunting task, whereas the PSO algorithm fails to hunt the target because R1 hits an obstacle.
To ensure the accuracy of the experiments, we conducted the experiments many times. In each experiment, the positions of the obstacles, target, and AUVs are reset. The success rates over 50 runs of target hunting using the two algorithms are depicted in Figure 8.
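The repeated-trial protocol above can be sketched as a small harness. The `run_trial` callable is a stand-in for one full hunting simulation with re-randomized positions; the seed handling is an illustrative assumption for reproducibility.

```python
import random

def success_rate(run_trial, n_trials=50, seed=0):
    """Repeat a randomized hunting trial `n_trials` times and report the
    fraction of successes, mirroring the 50-run comparison protocol.
    `run_trial(rng)` must return True on a successful hunt."""
    rng = random.Random(seed)
    wins = sum(1 for _ in range(n_trials) if run_trial(rng))
    return wins / n_trials
```

A trial that always succeeds yields a rate of 1.0; one that always fails yields 0.0.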
Figure 8 shows clearly that the proposed IPF algorithm reaches a 100% success rate in most experiments, meaning that most tasks are successfully executed. By contrast, the PSO algorithm reaches a 100% success rate in only a few experiments; in some special cases, its success rate is only 80%. Under certain circumstances, the success rate of the proposed IPF algorithm falls below 100%, but it is still superior to that of the PSO algorithm. By analysis, the PSO algorithm only provides an optimal solution under obstacle-free conditions, whereas IPF handles obstacle avoidance properly and therefore achieves a high success rate in environments filled with obstacles.

4. Conclusions

In this paper, an integrated algorithm combining the potential field with three degrees (the dispersion degree, the homodromous degree, and the district-difference degree) is proposed to deal with cooperative target hunting by a multi-AUV team in a surface-water environment. On the one hand, it makes full use of the advantages of the potential field, i.e., no pre-learning procedure and good real-time performance. On the other hand, the three degrees improve the multi-AUV cooperation and overcome the local minimum problem. Despite these advantages, there are still practical problems to be researched further. For example, how should AUVs overcome the effects of ocean currents in a surface-water environment during the hunting process? In addition, the real surface-water environment is three-dimensional, while in this paper many factors are simplified into a two-dimensional simulation. Further studies are needed to solve these problems.

Author Contributions

Conceptualization, H.G. and G.X.; Methodology, G.C.; Software, H.G.; Validation, H.G., G.C. and G.X.; Formal Analysis, H.G.; Investigation, G.C.; Resources, G.C.; Data Curation, H.G.; Writing-Original Draft Preparation, H.G.; Writing-Review & Editing, G.X.; Supervision, G.X.; Project Administration, H.G.; Funding Acquisition, H.G.

Funding

This work was supported by the University-industry cooperation prospective project of Jiangsu Province: Development of intelligent universal color-selecting and drying grain machine (BY2016062-01).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cui, R.; Ge, S.S.; How, B.V.E.; Choo, Y. Leader-follower formation control of underactuated autonomous underwater vehicles. Ocean Eng. 2010, 37, 1491–1502.
  2. Huang, Z.R.; Zhu, D.Q.; Sun, B. A multi-AUV cooperative hunting method in 3-D underwater environment with obstacle. Eng. Appl. Artif. Intell. 2016, 50, 192–200.
  3. Cao, X.; Sun, C.Y. A potential field-based PSO approach to multi-robot cooperation for target search and hunting. At-Automatisierungstechnik 2017, 65, 878–887.
  4. Zhang, Q.; Lapierre, L.; Xiang, X.B. Distributed control of coordinated path tracking for networked nonholonomic mobile vehicles. IEEE Trans. Ind. Inform. 2013, 9, 472–484.
  5. Rezaee, H.; Abdollahi, F. A decentralized cooperative control scheme with obstacle avoidance for a team of mobile robots. IEEE Trans. Ind. Electron. 2014, 61, 347–354.
  6. Yamaguchi, H. A distributed motion coordination strategy for multiple nonholonomic mobile robots in cooperative hunting operations. Robot. Autom. Syst. 2003, 43, 257–282.
  7. Pan, Y.; Li, D. Improvement with joint rewards on multi-agent cooperative reinforcement learning. In Proceedings of the International Conference on Computer Science and Software Engineering, Wuhan, China, 12–14 December 2008; pp. 536–539.
  8. Ma, Y.; Cao, Z.; Dong, X.; Zhou, C.; Tan, M. A multi-robot coordinated hunting strategy with dynamic alliance. In Proceedings of the Control and Decision Conference, Guilin, China, 17–19 June 2009; pp. 2338–2342.
  9. Wang, C.; Zhang, T.; Wang, K.; Lv, S.; Ma, H. A new approach of multi-robot cooperative pursuit. In Proceedings of the China Control Conference, Xi'an, China, 26–28 July 2013; pp. 7252–7256.
  10. Ni, J.; Yang, S.X. Bioinspired neural network for real-time cooperative hunting by multirobots in unknown environments. IEEE Trans. Neural Netw. 2011, 22, 2062–2077.
  11. Liang, X.W.; Liu, Y.H.; Wang, H.; Chen, W.D.; Xing, K.X.; Liu, T. Leader-following formation tracking control of mobile robots without direct position measurements. IEEE Trans. Autom. Control 2016, 61, 4131–4137.
  12. Qin, J.H.; Yu, C.B.; Gao, H.J. Coordination for linear multiagent systems with dynamic interaction topology in the leader-following framework. IEEE Trans. Ind. Electron. 2014, 61, 2412–2422.
  13. Garcia, M.A.P.; Montiel, O.; Castillo, O.; Sepulveda, R.; Melin, P. Path planning for autonomous mobile robot navigation with ant colony optimization and fuzzy cost function evaluation. Appl. Soft Comput. 2009, 9, 1102–1110.
  14. Zhu, D.Q.; Lv, R.F.; Cao, X.; Yang, S.X. Multi-AUV hunting algorithm based on bio-inspired neural network in unknown environments. Int. J. Adv. Robot. Syst. 2015, 12, 1–12.
  15. Sheng, Y.; Sayed, A.H. Cooperative prey herding based on diffusion adaptation. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Prague, Czech Republic, 22–27 May 2011; pp. 3752–3755.
  16. Cetin, O.; Zagli, I.; Yilmaz, G. Establishing obstacle and collision free communication relay for UAVs with artificial potential fields. J. Intell. Robot. Syst. 2013, 69, 361–372.
  17. Shi, W.R.; Huang, X.H.; Zhou, W. Path planning of mobile robot based on improved artificial potential field. Int. J. Comput. Appl. 2010, 30, 2021–2023.
  18. Couceiro, M.S.; Vargas, P.A.; Rocha, R.P.; Ferreira, N.M.F. Benchmark of swarm robotics distributed techniques in a search task. Robot. Autom. Syst. 2014, 62, 200–213.
  19. Rasekhipour, Y.; Khajepour, A.; Chen, S.K.; Litkouhi, B. A potential field-based model predictive path-planning controller for autonomous road vehicles. IEEE Trans. Intell. Transp. Syst. 2017, 18, 1255–1267.
  20. Ge, S.Z.S.; Liu, X.M.; Goh, C.H.; Xu, L.G. Formation tracking control of multiagents in constrained space. IEEE Trans. Control Syst. Technol. 2016, 24, 992–1003.
  21. Chen, H.; Xie, L. A novel artificial potential field-based reinforcement learning for mobile robotics in ambient intelligence. Int. J. Robot. Autom. 2009, 24, 245–254.
  22. Jang, J.-S.R.; Sun, C.-T.; Mizutani, E. Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence; Prentice Hall: New York, NY, USA, 1997.
  23. Wang, Z.X.; Chen, Z.T.; Zhao, Y.; Niu, Q. A novel local maximum potential point search algorithm for topology potential field. Int. J. Hybrid Inf. Technol. 2014, 7, 1–8.
  24. Kao, C.C.; Lin, C.M.; Juang, J.G. Application of potential field method and optimal path planning to mobile robot control. In Proceedings of the 2015 IEEE International Conference on Automation Science and Engineering (CASE), Gothenburg, Sweden, 24–28 August 2015; pp. 1552–1554.
  25. Liu, X.; Ge, S.S.; Goh, C.H. Formation potential field for trajectory tracking control of multi-agents in constrained space. Int. J. Control 2017, 90, 2137–2151.
  26. Haumann, A.D.; Listmann, K.D.; Willert, V. DisCoverage: A new paradigm for multi-robot exploration. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 924–934.
  27. Li, B.; Du, H.; Li, W. A potential field approach-based trajectory control for autonomous electric vehicles with in-wheel motors. IEEE Trans. Intell. Transp. Syst. 2017, 18, 2044–2055.
Figure 1. The flowchart for target hunting by Multiple Autonomous Underwater Vehicles (multi-AUV).
Figure 2. Target is hunted by AUVs.
Figure 3. Simulation of hunting process with one target. (a) The initial state; (b) AUVs’ search trajectory for the target; (c) Final trajectories of the AUVs.
Figure 4. Simulation of hunting process with dynamic obstacle. (a) The initial state; (b) AUVs’ search trajectory for the target; (c) Final trajectories of the AUVs.
Figure 5. Simulation of hunting process with two targets. (a) The initial state; (b) AUVs’ search trajectory for the first target; (c) AUVs’ hunting trajectory for the first target; (d) Final trajectories of the AUVs.
Figure 6. Search process when two AUVs break down. (a) The first AUV breaks down; (b) The second AUV breaks down; (c) Final trajectories of the whole hunting process.
Figure 7. Hunting path with two different algorithms. (a) PSO algorithm; (b) IPF algorithm.
Figure 8. Success rate comparison between IPF and PSO algorithms.
Table 1. Performance comparison between improved potential field (IPF) and potential field (PF).
Algorithm    Total Path Length (m)    Hunting Time (s)
IPF          845.3 ± 52.1             681.9 ± 30.5
PF           992.7 ± 75.6             807.6 ± 57.3

Share and Cite

Ge, H.; Chen, G.; Xu, G. Multi-AUV Cooperative Target Hunting Based on Improved Potential Field in a Surface-Water Environment. Appl. Sci. 2018, 8, 973. https://doi.org/10.3390/app8060973