An Integrated YOLOv5 and Hierarchical Human-Weight-First Path Planning Approach for Efficient UAV Searching Systems
Abstract
1. Introduction
- It utilizes the existing YOLOv5 model to automatically recognize the search target and uses KNN color recognition to recognize clothing/pant colors in real time; it thus avoids wasting time and human effort in the manual identification of possible targets.
- According to the recognition results of the YOLOv5 model and KNN color recognition, the study proposes a weighting subroutine to calculate the human weights of each block and the hierarchical human-weight-first (HWF) path planning algorithm to dispatch the UAV to capture images repeatedly of the search area and each block at different altitudes.
- It proposes a complete flowchart of the integrated YOLOv5 and HWF framework to reduce the search time and avoid searching corners without cameras.
2. Related Work
2.1. Traditional Unmanned Aerial Vehicle Path Planning Methods for Search and Rescue Operations
2.2. Search Target Recognition Techniques
2.2.1. Color Space Exchange
2.2.2. Extracting Feature Colors of Image
2.2.3. Transformation of Color Space
2.2.4. K-Nearest Neighbors (KNN) Color Classification
2.2.5. UAV Systems for Human Detection
3. System Architecture and Algorithms
3.1. System Architecture
- Clothing type;
- Pant type;
- Clothing color;
- Pant color.
3.2. Search Algorithm
3.2.1. Hierarchical Flight Altitudes for the UAV
3.2.2. Block Weight in the Search Area
3.2.3. Hierarchical Human-Weight-First (HWF) Path Planning Algorithm
- (a)
- The UAV flies from the center point of the search area to the first-level altitude to begin recognition, as shown in Figure 6a. Block weights are calculated using the image captured by the UAV at this altitude.
- (b)
- The HWF algorithm selects the block with the highest block weight as the starting point for the block search and guides the UAV to descend to the center of that block at the second-level altitude, as shown in Figure 6b. The UAV then captures images of the block and sends them to the server for further recognition.
- (c)
- If no search target is found in this block, HWF instructs the UAV to traverse to the block with the next highest block weight until the search target is found, i.e., the block weight exceeds the search target threshold, or all blocks with nonzero block weights have been visited, as shown in Figure 6c.
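The visit order described in steps (a)-(c) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the `Block` class and `hwf_visit_order` name are assumptions, and the first-level weight stands in for the second-level recognition check performed after the UAV descends.

```python
from dataclasses import dataclass

@dataclass
class Block:
    center: tuple   # (x, y) center coordinates of the block
    weight: float   # first-level block weight in [0, 1]

def hwf_visit_order(blocks, target_threshold=0.7):
    """Visit blocks in descending weight order until a block exceeds the
    search target threshold or all nonzero-weight blocks are visited."""
    visited = []
    for block in sorted(blocks, key=lambda b: b.weight, reverse=True):
        if block.weight == 0:
            break                       # remaining blocks are all empty
        visited.append(block)
        if block.weight > target_threshold:
            break                       # placeholder for the second-level check
    return visited
```

With a threshold of 0.7, a block of weight 0.8 terminates the traversal immediately; raising the threshold forces the UAV through every nonzero-weight block.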
3.2.4. Convenient Visit Algorithms Based on HWF
3.2.5. Flow of the Integrated YOLOv5 and HWF Framework
- The user inputs the values of six parameters, i.e., the total search area size, the number of search blocks, and the features of the search target, including the types and colors of their clothing and pants, at the server side. At the same time, the UAV prepares for takeoff at the origin point.
- The server calculates the center location of the search area and the two flight altitudes (first and second level) for the searching UAV and sends this information to the Jetson Nano on the UAV through the mobile network.
- The UAV flies to the specified center coordinates of the search area at the first-level altitude and captures an image of the whole search area, which is then transmitted back to the server.
- The server executes the weighting subroutine to calculate the first-level weights of all blocks within the total search area.
- If the first-level weight of a particular block is greater than the search target threshold, the system proceeds to step 13; otherwise, it proceeds to step 6.
- The system plans the second-level traversal path for the blocks at the second-level altitude, based on the first-level block weights, using the HWF algorithm. The server then transmits the planned second-level path to the Jetson Nano on the UAV.
- The UAV flies to the center coordinates of the unvisited block with the highest first-level block weight at the second-level altitude, according to the planned path, and captures an image of the block. This block image is then transmitted back to the server.
- If the UAV receives a command to finish the search, it proceeds to step 10; otherwise, it proceeds to step 9.
- If all blocks with nonzero first-level weights have been visited by the UAV, the system proceeds to step 10; otherwise, it proceeds to step 7.
- The UAV concludes its flight and returns to the starting point.
- Whenever the server receives a block image transmitted by the UAV at step 7, it runs the weighting subroutine again to calculate the second-level block weight of the current search block.
- If the second-level block weight is greater than the search target threshold, the system proceeds to step 13; otherwise, it returns to step 8.
- The system outputs the coordinates of the detected target’s position along with its image to the user, which indicates that the search target has been found. The server then sends a command to the Jetson Nano on the UAV to finish the search mission.
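The thirteen steps above can be condensed into a single server-side control loop. The sketch below is illustrative only: `capture_block` and `compute_weight` are hypothetical stand-ins for the UAV image capture at the second-level altitude (step 7) and the weighting subroutine of Section 3.2.6 (step 11), and the UAV command exchange is omitted.

```python
def integrated_search(first_level_weights, capture_block, compute_weight,
                      target_threshold=0.7):
    """Return the index of the block containing the target, or None."""
    # Step 5: the target may already be recognizable at the first level.
    for idx, w in enumerate(first_level_weights):
        if w > target_threshold:
            return idx
    # Steps 6-12: HWF traversal of blocks with nonzero first-level weight.
    order = sorted(range(len(first_level_weights)),
                   key=lambda i: first_level_weights[i], reverse=True)
    for idx in order:
        if first_level_weights[idx] == 0:
            break                      # step 9: only empty blocks remain
        image = capture_block(idx)     # step 7: second-level block image
        if compute_weight(image) > target_threshold:
            return idx                 # steps 12-13: target found
    return None                        # step 10: UAV returns to origin
```

The two `return idx` points correspond to the two paths into step 13 in the flowchart: a first-level hit and a second-level hit.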
3.2.6. Weighting Subroutine Flowchart
- As shown in Figure 11, the server first sets the initial values of the total weight value (W), human weights, and block weight as 0. It then executes YOLOv5 for human detection on UAV-captured images.
- If a human body is detected, the server extracts the human body image with its bounding boxes and proceeds to use YOLOv5 for clothing recognition at step 3.
- The server executes YOLOv5 for clothing recognition on the extracted human body image.
- If the clothing is recognized and the recognized clothing type matches the search clothing type, the total weight value (W) is incremented by the minimum of the clothing accuracy and the custom clothing fuzzy threshold of 0.3, and the system proceeds to step 5. Otherwise, it proceeds to step 6.
- The server performs the KNN color recognition on the recognized clothing.
- If the recognized clothing color matches the search clothing color, the total weight value (W) is incremented by the minimum of the KNN color percentage and the custom clothing color fuzzy threshold of 0.2. Then, it proceeds to step 7.
- If the pants are recognized and the recognized pant type matches the search pant type, the total weight value (W) is incremented by the minimum of the pant accuracy and the custom pant fuzzy threshold of 0.3, and the system proceeds to step 8; otherwise, it proceeds to step 9.
- The server performs KNN color recognition on the recognized pants.
- If the recognized pant color matches the search pant color, the total weight value (W) is incremented by the minimum of the KNN color percentage and the custom pant color fuzzy threshold of 0.2.
- The weight score of a person (human_weight) is calculated as a weighted sum of the human accuracy value and the total weight value (W), with coefficients of 0.1 and 0.9, respectively. The maximum human_weight within a block is defined as the block_weight.
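Using the thresholds stated above (0.3 for types, 0.2 for colors, coefficients 0.1 and 0.9), the subroutine can be sketched as follows. The function names and the tuple encoding of the recognition results are assumptions for illustration, not the paper's code.

```python
def human_weight(human_acc, clothing=None, pants=None,
                 type_fuzzy=0.3, color_fuzzy=0.2):
    """Sketch of the weighting subroutine (Figure 11). clothing/pants are
    (type_matches, type_acc, color_matches, color_pct) tuples, or None if
    the garment was not recognized at all."""
    W = 0.0
    for item in (clothing, pants):
        if item is None:
            continue
        type_ok, type_acc, color_ok, color_pct = item
        if type_ok:
            W += min(type_acc, type_fuzzy)       # steps 4 and 7
            if color_ok:                         # color only checked after a type match
                W += min(color_pct, color_fuzzy) # steps 6 and 9
    return 0.1 * human_acc + 0.9 * W             # step 10

def block_weight(human_weights):
    """The block weight is the maximum person weight within the block."""
    return max(human_weights, default=0.0)
```

Note that W is capped at 1.0 by construction (0.3 + 0.2 per garment), so human_weight also stays within [0, 1].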
3.2.7. KNN Color Recognition Process
- After correctly identifying the types of clothing and pants, the system proceeds to extract clothing and pant images using the detected bounding box coordinates. Subsequently, the system applies noise reduction techniques to the image, facilitating the extraction of feature colors.
- The captured clothing and pant images are converted from the RGB color space to the HSV color space. Subsequently, OpenCV [37] is used to generate a color gradient direction histogram for the image. From this histogram, the algorithm selects the interval with the highest proportion, obtaining a new set of HSV values, which serves as a representation of the image’s feature color.
- The feature color representing the HSV color attributes is converted back to RGB color attributes.
- The color distances between the image’s feature color and the RGB color table established in this study are computed. Subsequently, these distances are arranged in order, and k-nearest colors are chosen by a voting process. The color that receives the most votes is identified as the result of the KNN color recognition process.
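The final distance-and-voting step can be sketched in plain Python. The color table below is illustrative only, not the RGB table established in the study; a real table would map several shades to each color name so that the vote is meaningful.

```python
from collections import Counter

# Hypothetical color table: multiple shades may share one name.
COLOR_TABLE = [
    ("red",   (255, 0, 0)),  ("red",   (200, 30, 30)),
    ("blue",  (0, 0, 255)),  ("blue",  (30, 30, 200)),
    ("green", (0, 200, 0)),  ("black", (0, 0, 0)),
]

def knn_color(rgb, k=3):
    """Classify an RGB feature color by majority vote among its k nearest
    table entries (squared Euclidean distance in RGB space)."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(rgb, c))
    nearest = sorted(COLOR_TABLE, key=lambda entry: dist(entry[1]))[:k]
    votes = Counter(name for name, _ in nearest)
    return votes.most_common(1)[0][0]
```

For instance, a feature color close to pure red falls nearest the two red shades, which outvote any third neighbor.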
4. Simulation Results
4.1. YOLOv5 Image Recognition Model
4.2. Simulation Environment for Search Algorithms
- The AP values for the search target's human body, clothing and pant types, and clothing and pant colors are randomly distributed between 0.9 and 0.99, and the search target is randomly assigned to a specific block. In contrast, for persons who are not the search target, the AP values of the human body and of the clothing and pant types are randomly distributed between 0.01 and 0.99, and the AP values of their clothing and pant colors are randomly set between 0.1 and 0.9.
- Each block contains one to four persons with a probability of 70% and contains no person with a probability of 30%, in which case the weight of that block is set to zero.
- At the higher first-level altitude, a larger error of N% is applied to the given AP values and a probability variation of N% to each feature of the person. Three different N% values, i.e., 10%, 20%, and 30%, are used to evaluate the performance metrics of the searching schemes.
- At the lower second-level altitude, a smaller error of 0.5N% is applied to the given AP values and a probability variation of 0.5N% to each feature of the person. Hence, three 0.5N% values, i.e., 5%, 10%, and 15%, are set accordingly.
- Because most available USB cameras support resolutions such as 640 × 480 or 1280 × 720, we assume that the side length of each block is limited to 3 m and the area of each block is 9 square meters, so that the USB camera can capture vivid images of the whole search area and of each block. Therefore, the total search area for an n × n grid of blocks is 9n² square meters.
- As mentioned in Section 3.2.4, HWFR-S instructs the UAV to visit any intermediate block whose weight exceeds a static, fixed convenient visit threshold. If this threshold is too small, the UAV has a higher probability of rerouting to a block without the search target, which increases the search path length and search time accordingly; if it is too large, the UAV may lose the opportunity to reroute to the block containing the search target. Hence, the convenient visit threshold of HWFR-S is given intermediate values in (0, 1), namely 0.4, 0.5, and 0.6, in this simulation. HWFR-D instead uses a fixed weight difference to compute a dynamic convenient visit threshold by subtracting the weight difference from the next planned block's weight; if an intermediate block has a weight no less than this threshold, the UAV takes a detour to visit it. If the weight difference is too large, the resulting threshold becomes small and the UAV again suffers from a longer search path length and search time, as with a small static threshold in HWFR-S. Hence, the weight difference of HWFR-D is given lower values of 0.1, 0.2, and 0.3 in this simulation.
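The two detour rules can be stated compactly. The function names below are illustrative; the parameter defaults are the middle values used in the simulation.

```python
def detour_hwfr_s(intermediate_weight, static_threshold=0.5):
    """HWFR-S: detour when an intermediate block's weight reaches a fixed
    convenient visit threshold (0.4, 0.5, or 0.6 in the simulation)."""
    return intermediate_weight >= static_threshold

def detour_hwfr_d(intermediate_weight, next_block_weight, weight_diff=0.2):
    """HWFR-D: the convenient visit threshold tracks the next planned block,
    so a detour happens only when the intermediate block is nearly as
    promising (weight difference of 0.1, 0.2, or 0.3 in the simulation)."""
    return intermediate_weight >= next_block_weight - weight_diff
```

The dynamic rule adapts to the planned path: an intermediate block of weight 0.55 triggers a detour under HWFR-S with threshold 0.5, but not under HWFR-D when the next planned block already has weight 0.8.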
4.2.1. Average Search Path Length
4.2.2. Average Number of Search Blocks
4.2.3. Average Search Time
4.2.4. Average Search Accuracy
5. System Implementation
5.1. Software and Hardware
5.2. Screenshots of the Implemented System
5.3. Limitations of the Proposed System
- The ideal search area must approximate a rectangle. If it is a concave or convex polygon or any irregular shape, the input range for HWF must be the smallest bounding rectangle that encompasses the entire search area. This would expand the search area, potentially including many non-search areas, leading to longer search paths and times, as well as increased power consumption for the UAV.
- The ideal altitude for the search area should be consistent across a single horizontal plane. This ensures that when the UAV captures the image of the entire search area at the first-level altitude, the distances between the UAV and the center points of different blocks are similar. Hence, the relationships among the block weight values obtained from the human body and clothing recognition at the first level will closely approximate those of the real search target, and using HWF path planning to visit the blocks with the highest block weight values first at the second level will yield more accurate outcomes. Conversely, if the center points of different blocks do not lie on the same horizontal plane, the UAV will be closer to blocks at higher elevations, producing clearer, more magnified images of human bodies and thus better recognition results. Consequently, such a block might be prioritized by the HWF path planning algorithm, and if the search target is not within it, the search path and time could increase.
- Since the UAV captures images of the entire area at the first-level altitude, and the camera's resolution is limited, the search area cannot be too expansive. Otherwise, the captured human bodies would appear smaller and blurrier, leading to poorer recognition results and subsequently degrading the accuracy of HWF path planning.
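For the first limitation above, the HWF input range for an irregular area is its smallest bounding rectangle. A minimal sketch, assuming an axis-aligned rectangle and an area given as a list of (x, y) vertices:

```python
def bounding_rectangle(vertices):
    """Return ((min_x, min_y), (max_x, max_y)) of the smallest axis-aligned
    rectangle enclosing a polygonal search area."""
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    return (min(xs), min(ys)), (max(xs), max(ys))
```

The rectangle's area minus the polygon's area quantifies the non-search region the UAV must nevertheless cover, which drives the extra path length and power consumption noted above.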
5.4. Performance Comparison of YOLOv5 and YOLOv8
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Sahingoz, O.K. Networking models in flying ad-hoc networks (FANETs): Concepts and Challenges. J. Intell. Robot. Syst. 2014, 74, 513–527. [Google Scholar] [CrossRef]
- Menouar, H.; Guvenc, I.; Akkaya, K.; Uluagac, A.S.; Kadri, A.; Tuncer, A. UAV-enabled intelligent transportation systems for the smart city: Applications and challenges. IEEE Commun. Mag. 2017, 55, 22–28. [Google Scholar] [CrossRef]
- Aasen, H. UAV spectroscopy: Current sensors, processing techniques and theoretical concepts for data interpretation. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 8809–8812. [Google Scholar]
- Ezequiel, C.A.F.; Cua, M.; Libatique, N.C.; Tangonan, G.L.; Alampay, R.; Labuguen, R.T.; Favila, C.M.; Honrado, J.L.E.; Canos, V.; Devaney, C.; et al. UAV aerial imaging applications for post-disaster assessment, environmental management and infrastructure development. In Proceedings of the International Conference on Unmanned Aircraft Systems, Orlando, FL, USA, 27–30 May 2017; pp. 274–283. [Google Scholar]
- Zhang, Y.; Li, S.; Wang, S.; Wang, X.; Duan, H. Distributed bearing-based formation maneuver control of fixed-wing UAVs by finite-time orientation estimation. Aerosp. Sci. Technol. 2023, 136, 108241. [Google Scholar] [CrossRef]
- Zheng, Q.; Zhao, P.; Li, Y.; Wang, H.; Yang, Y. Spectrum interference-based two-level data augmentation method in deep learning for automatic modulation classification. Neural Comput. Applic. 2021, 33, 7723–7745. [Google Scholar] [CrossRef]
- Mao, Y.; Sun, R.; Wang, J.; Cheng, Q.; Kiong, L.C.; Ochieng, W.Y. New time-differenced carrier phase approach to GNSS/INS integration. GPS Solut. 2022, 26, 122. [Google Scholar] [CrossRef]
- Zhang, X.; Pan, W.; Scattolini, R.; Yu, S.; Xu, X. Robust tube-based model predictive control with Koopman operators. Automatica 2022, 137, 110114. [Google Scholar] [CrossRef]
- Narayanan, S.S.K.S.; Tellez-Castro, D.; Sutavani, S.; Vaidya, U. SE(3) Koopman-MPC: Data-driven learning and control of quadrotor UAVs. IFAC-PapersOnLine 2023, 56, 607–612. [Google Scholar] [CrossRef]
- Cao, B.; Zhang, W.; Wang, X.; Zhao, J.; Gu, Y.; Zhang, Y. A memetic algorithm based on two_Arch2 for multi-depot heterogeneous-vehicle capacitated arc routing problem. Swarm Evol. Comput. 2021, 63, 100864. [Google Scholar] [CrossRef]
- Erdelj, M.; Natalizio, E. UAV-assisted disaster management: Applications and open issues. In Proceedings of the International Conference on Computing, Networking and Communications, Kauai, HI, USA, 15–18 February 2016; pp. 1–5. [Google Scholar]
- Mukherjee, A.; De, D.; Dey, N.; Crespo, R.G.; Herrera-Viedma, E. DisastDrone: A Disaster Aware Consumer Internet of Drone Things System in Ultra-Low Latent 6G Network. IEEE Trans. Consum. Electron. 2023, 69, 38–48. [Google Scholar] [CrossRef]
- Pasandideh, F.; da Costa, J.P.J.; Kunst, R.; Islam, N.; Hardjawana, W.; Pignaton de Freitas, E. A Review of Flying Ad Hoc Networks: Key Characteristics, Applications, and Wireless Technologies. Remote Sens. 2022, 14, 4459. [Google Scholar] [CrossRef]
- Majeed, A.; Hwang, S.O. A Multi-Objective Coverage Path Planning Algorithm for UAVs to Cover Spatially Distributed Regions in Urban Environments. Aerospace 2021, 8, 343. [Google Scholar] [CrossRef]
- Das, L.B.; Das, L.B.; Lijiya, A.; Jagadanand, G.; Aadith, A.; Gautham, S.; Mohan, V.; Reuben, S.; George, G. Human Target Search and Detection using Autonomous UAV and Deep Learning. In Proceedings of the IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT), Bali, Indonesia, 7–8 July 2020; pp. 55–61. [Google Scholar] [CrossRef]
- Bandeira, T.W.; Coutinho, W.P.; Brito, A.V.; Subramanian, A. Analysis of Path Planning Algorithms Based on Travelling Salesman Problem Embedded in UAVs. In Proceedings of the Brazilian Symposium on Computing Systems Engineering (SBESC), Fortaleza, Porto Alegre, Brazil, 3–6 November 2015; pp. 70–75. [Google Scholar] [CrossRef]
- Jain, A.; Ramaprasad, R.; Narang, P.; Mandal, M.; Chamola, V.; Yu, F.R.; Guizan, M. AI-Enabled Object Detection in UAVs: Challenges, Design Choices, and Research Directions. IEEE Netw. 2021, 35, 129–135. [Google Scholar] [CrossRef]
- Yu, X.; Jin, S.; Shi, D.; Li, L.; Kang, Y.; Zou, J. Balanced Multi-Region Coverage Path Planning for Unmanned Aerial Vehicles. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; pp. 3499–3506. [Google Scholar]
- Yaguchi, Y.; Tomeba, T. Region Coverage Flight Path Planning Using Multiple UAVs to Monitor the Huge Areas. In Proceedings of the IEEE International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 15–18 June 2021; pp. 1677–1682. [Google Scholar]
- Kurdi, H.A.; Aloboud, E.; Alalwan, M.; Alhassan, S.; Alotaibi, E.; Bautista, G.; How, J.P. Autonomous Task Allocation for Multi-UAV Systems Based on the Locust Elastic Behavior. Appl. Soft Comput. 2018, 71, 110–126. [Google Scholar] [CrossRef]
- Alotaibi, E.T.; Alqefari, S.S.; Koubaa, A. LSAR-Multi-UAV Collaboration for Search and Rescue Missions. IEEE Access 2019, 7, 55817–55832. [Google Scholar] [CrossRef]
- Cabreira, T.; Brisolara, L.; Ferreira, P.R., Jr. Survey on Coverage Path Planning with Unmanned Aerial Vehicles. Drones 2019, 3, 4. [Google Scholar] [CrossRef]
- Jünger, M.; Reinelt, G.; Rinaldi, G. The Traveling Salesman Problem. In Handbooks in Operations Research and Management Science; Elsevier B.V.: Amsterdam, The Netherlands, 1995; Volume 7, pp. 225–330. [Google Scholar]
- Ali, M.; Md Rashid, N.K.A.; Mustafah, Y.M. Performance Comparison between RGB and HSV Color Segmentations for Road Signs Detection. Appl. Mech. Mater. 2013, 393, 550–555. [Google Scholar] [CrossRef]
- Haritha, D.; Bhagavathi, C. Distance Measures in RGB and HSV Color Spaces. In Proceedings of the 20th International Conference on Computers and Their Applications (CATA 2005), New Orleans, LA, USA, 16–18 March 2005. [Google Scholar]
- Pooja, K.S.; Shreya, R.N.; Lakshmi, M.S.; Yashika, B.C.; Rekha, B.N. Color Recognition using K-Nearest Neighbors Machine Learning Classification Algorithm Trained with Color Histogram Features. Int. Res. J. Eng. Technol. (IRJET) 2021, 8, 1935–1936. [Google Scholar]
- Pradeep, A.G.; Gnanapriya, M. Novel Contrast Enhancement Algorithm Using HSV Color Space. Int. J. Innov. Technol. Res. 2016, 4, 5073–5074. [Google Scholar]
- Krishna, S.L.; Chaitanya, G.S.R.; Reddy, A.S.H.; Naidu, A.M.; Poorna, S.S.; Anuraj, K. Autonomous Human Detection System Mounted on a Drone. In Proceedings of the 2019 International Conference on Wireless Communications Signal Processing and Networking (WiSPNET), Chennai, India, 21–23 March 2019; pp. 335–338. [Google Scholar] [CrossRef]
- Mliki, H.; Bouhlel, F.; Hammami, H. Human activity recognition from UAV-captured video sequences. Pattern Recognit. 2020, 100, 107140. [Google Scholar] [CrossRef]
- Safadinho, D.; Ramos, J.; Ribeiro, R.; Filipe, V.; Barroso, J.; Pereira, A. UAV Landing Using Computer Vision Techniques for Human Detection. Sensors 2020, 20, 613. [Google Scholar] [CrossRef] [PubMed]
- Lygouras, E.; Santavas, N.; Taitzoglou, A.; Tarchanidis, K.; Mitropoulos, A.; Gasteratos, A. Unsupervised Human Detection with an Embedded Vision System on a Fully Autonomous UAV for Search and Rescue Operations. Sensors 2019, 19, 3542. [Google Scholar] [CrossRef]
- Do, M.-T.; Ha, M.-H.; Nguyen, D.-C.; Thai, K.; Ba, Q.-H.D. Human Detection Based Yolo Backbones-Transformer in UAVs. In Proceedings of the International Conference on System Science and Engineering (ICSSE), Ho Chi Minh, Vietnam, 27–28 July 2023; pp. 576–580. [Google Scholar] [CrossRef]
- Wijesundara, D.; Gunawardena, L.; Premachandra, C. Human Recognition from High-altitude UAV Camera Images by AI based Body Region Detection. In Proceedings of the Joint 12th International Conference on Soft Computing and Intelligent Systems and 23rd International Symposium on Advanced Intelligent Systems (SCIS & ISIS), Ise, Japan, 29 November—2 December 2022; pp. 1–4. [Google Scholar] [CrossRef]
- Jetson Nano Developer Kit|NVIDIA. Available online: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano-developer-kit/ (accessed on 1 December 2023).
- Itkin, M.; Kim, M.; Park, Y. Development of Cloud-Based UAV Monitoring and Management System. Sensors 2016, 16, 1913. [Google Scholar] [CrossRef]
- Geng, X.; Chen, Z.; Yang, W.; Shi, D.; Zhao, K. Solving the Traveling Salesman Problem Based on an Adaptive Simulated Annealing Algorithm with Greedy Search. Appl. Soft Comput. 2011, 11, 3680–3689. [Google Scholar] [CrossRef]
- OpenCV—Open Computer Vision Library. Available online: https://opencv.org/ (accessed on 1 December 2023).
- VisDrone-Dataset-github. Available online: https://github.com/VisDrone/VisDrone-Dataset (accessed on 1 December 2023).
- Pixhawk. Available online: https://pixhawk.org/ (accessed on 1 December 2023).
- Welcome to DroneKit-Python’s Documentation. Available online: https://dronekit-python.readthedocs.io/en/latest/ (accessed on 1 December 2023).
- Mission Planner Home—Mission Planner Documentation (ardupilot.org). Available online: https://ardupilot.org/planner/ (accessed on 1 December 2023).
- Suparnunt, C.; Boonvongsobhon, C.; Eounes Baig, F.; Leelalerthpat, P.; Hematulin, W.; Jarawan, T.; Kamsing, P. Practical Parallel of Autonomous Unmanned Aerial Vehicle by Mission Planner. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 7831–7834. [Google Scholar] [CrossRef]
- Sary, I.P.; Andromeda, S.; Armin, E.U. Performance Comparison of YOLOv5 and YOLOv8 Architectures in Human Detection using Aerial Images. Ultim. Comput. J. Sist. Komputer. 2023, 15, 8–13. [Google Scholar] [CrossRef]
- Gašparović, B.; Mauša, G.; Rukavina, J.; Lerga, J. Evaluating YOLOV5, YOLOV6, YOLOV7, and YOLOV8 in Underwater Environment: Is There Real Improvement? In Proceedings of the 8th International Conference on Smart and Sustainable Technologies (SpliTech), Split/Bol, Croatia, 20–23 June 2023; pp. 1–4. [Google Scholar] [CrossRef]
Ref. | Human Body Recognition Model | Dataset Used | Recognition of Human Clothing Types and Colors | Segmentation of the Search Area | Dynamic Route Planning for Search | Integration of Human Body and Clothing/Pant Color Recognition with Dynamic Route Planning
---|---|---|---|---|---|---
[28] | Motion detection outputs a score of human confidence | No | No | No | No | No
[29] | CNN | UCF-ARG dataset | No, proposes a human activity classification algorithm | No | No | No
[15] | CNN | Self-developed captured dataset | No | No | No, spiral search | No
[30] | DNN with MobileNet V2 SSDLite | COCO dataset | No | No | Yes, estimates the person's position and moves in their direction with GPS | 
[31] | CNN with Tiny YOLOv3 | COCO dataset + self-developed swimmers dataset | No | No | No | No
[32] | CNN with modified YOLOv8 | Self-developed UAV-view real-world dataset | No | No | No | No
[33] | CNN with YOLOv5 and Haar Cascade classifier | VisDrone dataset + COCO128 dataset | No, proposes a human body region classification algorithm | No | No | No
HWF | CNN with YOLOv5 | VisDrone dataset + self-developed drone-clothing dataset | Yes, uses KNN color recognition | Yes | Yes, proposes the hierarchical human-weight-first (HWF) path planning algorithm | Yes, proposes the integrated YOLOv5 and HWF framework
Classification Metric | Training Set | Testing Set |
---|---|---|
Precision | 0.6773 | 0.6330 |
Recall | 0.4887 | 0.4074 |
mAP | 0.5020 | 0.3970 |
Parameter | Value |
---|---|
Search area | {9n² m² | n = 3, 4, …, 10} |
Camera angle | 40 degrees |
Flight altitudes (m) | |
The percentage of error and probability variation at altitude | 10%, 20%, 30% |
The percentage of error and probability variation at altitude | 5%, 10%, 15% |
UAV velocity | 20 km/h |
UAV hovering and image capture time for a block | 5 s |
Search target threshold | 0.7 |
Convenient visit threshold of HWFR-S | 0.4, 0.5, 0.6 |
Weight difference of HWFR-D | 0.1, 0.2, 0.3 |
Chang, I.-C.; Yen, C.-E.; Chang, H.-F.; Chen, Y.-W.; Hsu, M.-T.; Wang, W.-F.; Yang, D.-Y.; Hsieh, Y.-H. An Integrated YOLOv5 and Hierarchical Human-Weight-First Path Planning Approach for Efficient UAV Searching Systems. Machines 2024, 12, 65. https://doi.org/10.3390/machines12010065