An Image Processing Approach for Real-Time Safety Assessment of Autonomous Drone Delivery
Abstract
1. Introduction
1.1. Challenges Facing Autonomous Drones
1.2. Image-Based Approaches
1.3. Motivation
1.4. Overall Aim
2. Related Work
2.1. Normal Landing Approaches
2.2. Emergency Landing Approaches
3. The Proposed Safety Assessment Classifier
3.1. Proposed Solution
3.1.1. Before the Dropping Process: Is the Drop Zone Safe?
Preprocessing
- Locating the dropping zone on the image: The dropping area must be defined and cropped on the image to match the agreed size of the dropping area, so that the cropped image represents the dropping zone. Working with a live camera introduces additional complexities concerning camera characteristics and lighting. In addition, certain values related to the drone company's dropping preferences must be determined beforehand. While GPS information locates the center of the dropping zone, it is not enough to find the side length of the box that represents the dropping zone on the image; camera parameters and dropping parameters must be known in advance. The camera parameters relate to the camera's characteristics:
- (a) Focal length (F): the distance in millimeters between the camera lens and the camera sensor [32].
- (b) Sensor size (Z): the size of the camera film that receives the light coming through the lens and converts it into an image [32].
- (c) Camera resolution (E): the number of recorded pixels, i.e., the size of the image taken by the camera [33].
The dropping parameters are:
- The altitude (H): the height of the drone during the dropping.
- The zone side length (R): the side length of the square that marks the dropping zone.
- Cropping the dropping zone image: When the drone hovers vertically over the preselected GPS location, the GPS coordinates project to the center of the image (x_c, y_c), and the dropping zone on the image covers the ranges [x_c − r/2, x_c + r/2] and [y_c − r/2, y_c + r/2], where r is the side length of the dropping zone in pixels. By similar triangles [34], r is obtained from the parameters above as r = (F · R · E) / (Z · H).
- Grayscale conversion: Grayscale conversion is used to simplify the computational requirements; the value of each pixel in a grayscale image represents only an amount of light [35]. Each pixel at position (x, y) is converted from the three-channel format, red (R), green (G), and blue (B), into a single grayscale channel (Y) according to the standard weighted sum in Equation (3): Y(x, y) = 0.299·R(x, y) + 0.587·G(x, y) + 0.114·B(x, y).
- Gaussian blur transformation: After the grayscale transformation yields a single-channel image, a Gaussian blur is applied to each pixel value to reduce image noise and smooth the image, according to the Gaussian formula in Equation (4) [36]. A consolidated sketch of these preprocessing steps follows this list.
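The following is a minimal Python/OpenCV sketch of this preprocessing stage, under stated assumptions: the function names and the blur kernel size and sigma are illustrative choices of ours, and the similar-triangles formula for the pixel side length is our reading of the geometry rather than a reproduction of the paper's Equations (1) and (2).

```python
import cv2
import numpy as np

def zone_side_pixels(F_mm: float, Z_mm: float, E_px: int, H_m: float, R_m: float) -> int:
    """Side length r of the dropping zone in pixels for a nadir-pointing
    pinhole camera: the frame spans Z*H/F metres on the ground, so one
    metre maps to E*F/(Z*H) pixels (similar triangles [34])."""
    return int(round(R_m * F_mm * E_px / (Z_mm * H_m)))

def crop_drop_zone(image: np.ndarray, r_px: int) -> np.ndarray:
    """Crop the r x r box centred on the image centre, which coincides
    with the GPS target while the drone hovers directly overhead."""
    h, w = image.shape[:2]
    xc, yc, half = w // 2, h // 2, r_px // 2
    return image[yc - half:yc + half, xc - half:xc + half]

def preprocess(zone_bgr: np.ndarray, ksize: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Grayscale conversion (Equation (3)) followed by Gaussian smoothing
    (Equation (4)); cv2.cvtColor applies the 0.299/0.587/0.114 weights."""
    gray = cv2.cvtColor(zone_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.GaussianBlur(gray, (ksize, ksize), sigma)

# Illustrative numbers only: a 3 m zone seen from 12 m with an 8.8 mm lens,
# a 13.2 mm sensor, and a 4000 px frame width gives r of about 667 px.
# r = zone_side_pixels(8.8, 13.2, 4000, 12.0, 3.0)
```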
Phase I: Segmentation
Phase II: Threshold Function
3.1.2. During the Dropping Process: Does the Drop Zone Remain Safe?
3.1.3. Improving Performance: Motion Detection
Algorithm 1: Drone Delivery Dropping Safety Assessment Algorithm
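Since the pipeline spans Sections 3.1.1–3.1.3, a minimal Python sketch of the assessment loop may help: crop and preprocess the zone, segment it, declare it safe when the segmentation is near-uniform, and during the drop use cheap frame differencing so that a full re-assessment runs only when motion is detected. Otsu binary thresholding stands in for Phase I and the 0.95 uniformity cut-off for the Phase II threshold function; both, like the difference threshold, are illustrative choices of ours, and the code reuses the helpers from the preprocessing sketch above.

```python
def is_zone_safe(zone_bgr: np.ndarray, uniform_frac: float = 0.95) -> bool:
    """Pre-drop check (Section 3.1.1): binary-segment the preprocessed zone
    (Phase I) and call it safe when one segment dominates (Phase II)."""
    smooth = preprocess(zone_bgr)
    _, mask = cv2.threshold(smooth, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    on = int(np.count_nonzero(mask))
    return max(on, mask.size - on) / mask.size >= uniform_frac

def motion_detected(prev_gray: np.ndarray, cur_gray: np.ndarray,
                    diff_thresh: float = 10.0) -> bool:
    """During-drop gate (Section 3.1.3): mean absolute frame difference
    flags objects entering or leaving the zone."""
    diff = np.abs(cur_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return float(diff.mean()) > diff_thresh

def monitor_drop(frames, r_px: int):
    """Yield a safe/unsafe verdict per frame while the package descends
    (Section 3.1.2), re-assessing fully only when motion is detected."""
    prev, verdict = None, False
    for frame in frames:
        zone = crop_drop_zone(frame, r_px)
        cur = preprocess(zone)
        if prev is None or motion_detected(prev, cur):
            verdict = is_zone_safe(zone)  # full re-assessment
        yield verdict                     # otherwise carry the last verdict
        prev = cur
```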
3.2. Evaluation Metric
- TS (True Safe): the number of zones correctly classified as safe.
- FS (False Safe): the number of zones classified as safe that are actually unsafe.
- TU (True Unsafe): the number of zones correctly classified as unsafe.
- FU (False Unsafe): the number of zones classified as unsafe that are actually safe; the sketch after this list shows how precision and recall derive from these four counts.
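As a concrete illustration (the function name is ours), per-class precision and recall follow directly from the four counts; the example reproduces the clustering figures reported in Section 4.2.4.

```python
def metrics(TS: int, FS: int, TU: int, FU: int) -> dict:
    """Accuracy plus per-class precision/recall from the safe/unsafe counts."""
    return {
        "accuracy":         (TS + TU) / (TS + FS + TU + FU),
        "precision_safe":   TS / (TS + FS),  # of zones called safe, how many were
        "recall_safe":      TS / (TS + FU),  # of truly safe zones, how many found
        "precision_unsafe": TU / (TU + FU),
        "recall_unsafe":    TU / (TU + FS),
    }

# Clustering confusion matrix from Section 4.2: TS=11, FS=11, TU=39, FU=9
# -> precision_safe=0.50, recall_safe=0.55,
#    precision_unsafe=0.8125, recall_unsafe=0.78
print(metrics(TS=11, FS=11, TU=39, FU=9))
```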
4. Evaluation and Discussion
4.1. Datasets
- The proposed solution first examines the given dropping zone right before the dropping operation and indicates whether it is safe or unsafe to drop the delivery package. The dataset used to test this classifier consists of static images: 70 bounding boxes were selected as candidate dropping zones and cropped from 14 drone-view images. These boxes represent different types of surfaces: 10 flat horizontal surfaces, 10 mottled surfaces, 10 vertical surfaces, 10 sloped surfaces, 10 elevated surfaces, 5 water surfaces, and 15 boxes containing objects such as cars, humans, trees, and pets. Figure 9 displays samples of this first group of the dataset.
- To track the safety of the dropping zone during the dropping process, 18 drone-recorded videos were used, each representing the dropping area during the dropping process. Each video has a frame rate of 15 frames/s, is 30 s long, and was recorded from one of several heights (12 m, 14 m, 16 m, 18 m) using different side lengths for the dropping-area box (2 m, 3 m, 4 m, 5 m). Nine videos were of flat horizontal surfaces; the other nine were of surfaces mottled with irregular markings, shades, and colors. To simulate changes in the safety situation, various objects were introduced into each video at random times for 15 s.
4.2. Experiments on Static Images
4.2.1. Cluster-Based Segmentation
4.2.2. Binary Segmentation
4.2.3. Assumptions
- The first assumption is that a safe landing surface is a safe dropping surface, i.e., the standards for the dropping surface are the same as those for the landing surface. A safe landing area is evaluated on two main factors, slope and roughness, which determine whether a surface is classified as safe for landing [39]. Under this assumption, a surface that is suitable and harmless for the drone to land on is also suitable for dropping and placing the package on. However, knowing that a surface is preselected as suitable for dropping is not sufficient on its own: it must also have enough empty space, free of obstacles at the time of dropping, whether fixed (such as a box or a toy) or moving (such as a human or a pet).
- The second assumption differentiates the requirements for regular and emergency landings from the dropping requirements: the dropping area is predetermined by the customer, so it will not be a water surface, an elevated surface, or a sloped surface. In a regular (normal) landing, which typically uses GPS information to land either on a controlled area or on a landmark, the drone navigates to the destination point defined by the GPS coordinates, hovers, and then lands or recognizes a landmark to land on [40]. Figure 10 shows a normal landing on a landmark.
4.2.4. Comparison of Results
4.3. Experiments on Recorded Videos
4.4. The Significance of the Proposed Approach
4.5. Real-World Implementation Examples
- A video was created to demonstrate how the suggested method classifies the dropping region in a fast and accurate manner (see “Assessment of drone delivery dropping zone” YouTube, uploaded by Assem Alsawy, 16 January 2019, https://www.youtube.com/watch?v=zC9TMOhJ-2g&ab_channel=EasyLearning (accessed on 3 January 2024)).
- Another video demonstrates the proposed method’s ability to segment and mask the territory beneath the drone into safe and unsafe areas as the drone navigates and hovers at different altitudes (see “Safe/Unsafe areas masks for delivery drone dropping” YouTube, uploaded by Assem Alsawy, 16 January 2019, https://www.youtube.com/watch?v=nu_skGwiCUE&ab_channel=EasyLearning (accessed on 3 January 2024)).
4.6. Challenges and Limitations
- Since the proposed classifier is color-based, it cannot automatically recognize the type of surface in the dropping zone.
- Safety is not only about the dropping zone itself; threats may come from outside it. A method is needed to examine the area around the dropping box and measure the risks that may arise there.
- The proposed approach does not cover delivery onto moving platforms such as ships, which would require a tracker that follows the moving dropping area during the dropping process.
- A lack of safety in the dropping zone puts the dropping operation at risk of cancellation, which is costly for the delivery company. A method is needed that finds another safe place nearby to drop the package.
- Drones drop packages using a rope; the rope itself has not been taken into account in the assessment during the drop.
- GPS accuracy does not guarantee that the point the customer sets is the exact location the drone will reach. A visual approach that integrates with GPS information is needed for a more accurate and secure drop point.
5. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Mairaj, A.; Baba, A.I.; Javaid, A.Y. Application specific drone simulators: Recent advances and challenges. Simul. Model. Pract. Theory 2019, 94, 100–117. [Google Scholar] [CrossRef]
- Mohammed, F.; Idries, A.; Mohamed, N.; Al-Jaroodi, J.; Jawhar, I. UAVs for smart cities: Opportunities and challenges. In Proceedings of the 2014 International Conference on Unmanned Aircraft Systems (ICUAS), IEEE, Orlando, FL, USA, 27–30 May 2014; pp. 267–273. [Google Scholar]
- Ayamga, M.; Akaba, S.; Nyaaba, A.A. Multifaceted applicability of drones: A review. Technol. Forecast. Soc. Change 2021, 167, 120677. [Google Scholar] [CrossRef]
- Cummings, A.R.; McKee, A.; Kulkarni, K.; Markandey, N. The rise of UAVs. Photogramm. Eng. Remote Sens. 2017, 83, 317–325. [Google Scholar] [CrossRef]
- Hussein, M.; Nouacer, R.; Ouhammou, Y.; Villar, E.; Corradi, F.; Tieri, C.; Castiñeira, R. Key Enabling Technologies for Drones. In Proceedings of the 23rd Euromicro Conference on Digital System Design (DSD), IEEE, Kranj, Slovenia, 26–28 August 2020; pp. 489–496. [Google Scholar]
- Ko, Y.; Kim, J.; Duguma, D.G.; Astillo, P.V.; You, I.; Pau, G. Drone secure communication protocol for future sensitive applications in military zone. Sensors 2021, 21, 2057. [Google Scholar] [CrossRef] [PubMed]
- Krichen, M.; Adoni, W.Y.H.; Mihoub, A.; Alzahrani, M.Y.; Nahhal, T. Security challenges for drone communications: Possible threats, attacks and countermeasures. In Proceedings of the 2nd International Conference of Smart Systems and Emerging Technologies (SMARTTECH), IEEE, Riyadh, Saudi Arabia, 9–11 May 2022; pp. 184–189. [Google Scholar]
- Krichen, M.; Lahami, M.; Cheikhrouhou, O.; Alroobaea, R.; Maâlej, A.J. Security testing of internet of things for smart city applications: A formal approach. In Smart Infrastructure and Applications: Foundations for Smarter Cities and Societies; Springer: Cham, Switzerland, 2020; pp. 629–653. [Google Scholar]
- Kyrkou, C.; Timotheou, S.; Kolios, P.; Theocharides, T.; Panayiotou, C. Drones: Augmenting our quality of life. IEEE Potentials 2019, 38, 30–36. [Google Scholar] [CrossRef]
- Shahzaad, B.; Bouguettaya, A.; Mistry, S.; Neiat, A.G. Resilient composition of drone services for delivery. Future Gener. Comput. Syst. 2021, 115, 335–350. [Google Scholar] [CrossRef]
- Gamulescu, O.M.; Musetoiu, O.M.; Leba, M. The Self-Piloting Function of a Multicopter. Ann. Constantin Brancusi Univ.-Targu-Jiu. Eng. Ser. 2017, 4, 119. [Google Scholar]
- Rokhsaritalemi, S.; Sadeghi-Niaraki, A.; Choi, S.M. Drone Trajectory Planning Based on Geographic Information System for 3D Urban Modeling. In Proceedings of the 2018 International Conference on Information and Communication Technology Convergence (ICTC), IEEE, Jeju, Republic of Korea, 17–19 October 2018; pp. 1080–1083. [Google Scholar]
- Nassi, B.; Shabtai, A.; Masuoka, R.; Elovici, Y. SoK-security and privacy in the age of drones: Threats, challenges, solution mechanisms, and scientific gaps. arXiv 2019, arXiv:1903.05155. [Google Scholar]
- Alijani, M.; Osman, A. Autonomous Landing of UAV on Moving Platform: A Mathematical Approach. In Proceedings of the 2020 International Conference on Control, Automation and Diagnosis (ICCAD), IEEE, Paris, France, 7–9 October 2020; pp. 1–6. [Google Scholar]
- Ghosh, P.; Gasparri, A.; Jin, J.; Krishnamachari, B. Robotic wireless sensor networks. In Mission-Oriented Sensor Networks and Systems: Art and Science; Springer: Cham, Switzerland, 2019; pp. 545–595. [Google Scholar]
- Borstell, H. A short survey of image processing in logistics. In Proceedings of the 11th International Doctoral Student Workshop on Logistics, Magdeburg, Germany, 19 June 2018; pp. 43–46. [Google Scholar] [CrossRef]
- Cengil, E.; Çınar, A.; Özbay, E. Image classification with caffe deep learning framework. In Proceedings of the 2017 International Conference on Computer Science and Engineering (UBMK), IEEE, Antalya, Turkey, 5–8 October 2017; pp. 440–444. [Google Scholar]
- Kumar, A.S. Image Segmentation and Object Recognition. J. Res. Proc. 2021, 1, 101–112. [Google Scholar]
- Alsawy, A.; Hicks, A.; Moss, D.; Mckeever, S. An Image Processing Based Classifier to Support Safe Dropping for Delivery-by-Drone. In Proceedings of the 2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS), Genova, Italy, 5–7 December 2022; pp. 1–5. [Google Scholar]
- Kan, M.; Okamoto, S.; Lee, J.H. Development of drone capable of autonomous flight using GPS. In Proceedings of the International Multi Conference of Engineers and Computer Scientists, Hong Kong, China, 14–16 March 2018; Volume 2. [Google Scholar]
- Patrik, A.; Utama, G.; Gunawan, A.A.S.; Chowanda, A.; Suroso, J.S.; Shofiyanti, R.; Budiharto, W. GNSS-based navigation systems of autonomous drone for delivering items. J. Big Data 2019, 6, 53. [Google Scholar] [CrossRef]
- Huang, Y.P.; Sithole, L.; Lee, T.T. Structure From Motion Technique for Scene Detection Using Autonomous Drone Navigation. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 2559–2570. [Google Scholar] [CrossRef]
- Nguyen, P.H.; Kim, K.W.; Lee, Y.W.; Park, K.R. Remote marker-based tracking for UAV landing using visible-light camera sensor. Sensors 2017, 17, 1987. [Google Scholar] [CrossRef] [PubMed]
- Lee, M.F.R.; Aayush, J.; Saurav, K.; Anshuman, D.A. Landing Site Inspection and Autonomous Pose Correction for Unmanned Aerial Vehicles. In Proceedings of the 2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS), Taipei, Taiwan, 19–21 August 2020; pp. 1–6. [Google Scholar] [CrossRef]
- Chen, M.; Guo, J. Mobile Robotics Lab 2015: Autonomous Landing on a Moving Target. Computer 2020, 1. [Google Scholar]
- Niu, G.; Yang, Q.; Gao, Y.; Pun, M.O. Vision-based Autonomous Landing for Unmanned Aerial and Mobile Ground Vehicles Cooperative Systems. IEEE Robot. Autom. Lett. 2021, 7, 6234–6241. [Google Scholar] [CrossRef]
- Loureiro, G.; Soares, L.; Dias, A.; Martins, A. Emergency Landing Spot Detection for Unmanned Aerial Vehicle. In Robot 2019: Fourth Iberian Robotics Conference, Porto, Portugal, 20–22 November 2019; Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2019; pp. 122–133. [Google Scholar]
- Bektash, O.; Pedersen, J.N.; Gomez, A.R.; la Cour-Harbo, A. Automated Emergency Landing System for Drones: SafeEYE Project. In Proceedings of the 2020 International Conference on Unmanned Aircraft Systems (ICUAS), IEEE, Athens, Greece, 1–4 September 2020; pp. 1056–1064. [Google Scholar]
- Gonzalez-Trejo, J.A.; Mercado-Ravell, D.A. Lightweight density map architecture for UAVS safe landing in crowded areas. J. Intell. Robot. Syst. 2021, 102, 7. [Google Scholar] [CrossRef]
- Loureiro, G.; Dias, A.; Martins, A.; Almeida, J. Emergency Landing Spot Detection Algorithm for Unmanned Aerial Vehicles. Remote Sens. 2021, 13, 1930. [Google Scholar] [CrossRef]
- Vidal, M.; Amigo, J.M. Pre-processing of hyperspectral images. Essential steps before image analysis. Chemom. Intell. Lab. Syst. 2012, 117, 138–148. [Google Scholar] [CrossRef]
- Banks, M.S.; Cooper, E.A.; Piazza, E.A. Camera focal length and the perception of pictures. Ecol. Psychol. 2014, 26, 30–46. [Google Scholar] [CrossRef]
- Burns, P.D.; Bauza, J.M. Intrinsic camera resolution measurement. In Proceedings of the Image Quality and System Performance XII, San Francisco, CA, USA, 8 February 2015; Volume 9396, p. 939609. [Google Scholar]
- Posamentier, A.S.; Lehmann, I. The Secrets of Triangles: A Mathematical Journey; Prometheus Books: Buffalo, NY, USA, 2012. [Google Scholar]
- Saravanan, C. Color image to grayscale image conversion. In Proceedings of the 2nd International Conference on Computer Engineering and Applications, Bali, Indonesia, 19–21 March 2010; Volume 2, pp. 196–199. [Google Scholar]
- Mobahi, H.; Zitnick, C.L.; Ma, Y. Seeing through the blur. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Providence, RI, USA, 16–21 June 2012; pp. 1736–1743. [Google Scholar]
- Dhanachandra, N.; Manglem, K.; Chanu, Y.J. Image segmentation using K-means clustering algorithm and subtractive clustering algorithm. Procedia Comput. Sci. 2015, 54, 764–771. [Google Scholar] [CrossRef]
- Susmaga, R. Confusion matrix visualization. In Intelligent Information Processing and Web Mining; Springer: Berlin/Heidelberg, Germany, 2004; pp. 107–116. [Google Scholar]
- Shen, Y.F.; Rahman, Z.U.; Krusienski, D.; Li, J. A vision-based automatic safe landing-site detection system. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 294–311. [Google Scholar] [CrossRef]
- Liu, J.S.; Liu, H.C. Visual Navigation for UAVs Landing on Accessory Building Floor. In Proceedings of the 2020 International Conference on Pervasive Artificial Intelligence (ICPAI), IEEE, Taipei, Taiwan, 3–5 December 2020; pp. 158–163. [Google Scholar]
- Lusk, P.C.; Glaab, P.C.; Glaab, L.J.; Beard, R.W. Safe2Ditch: Emergency landing for small unmanned aircraft systems. J. Aerosp. Inf. Syst. 2019, 16, 327–339. [Google Scholar] [CrossRef]
Cluster-based segmentation results per surface type (Section 4.2.1):

| Type | No. of Boxes | Correct Output | No. of Correct Results | Accuracy |
|---|---|---|---|---|
| Objects | 15 | Unsafe | 15 | 100.0% |
| Flat horizontal surfaces | 10 | Safe | 6 | 60.0% |
| Mottled surfaces | 10 | Safe | 5 | 50.0% |
| Vertical surfaces | 10 | Unsafe | 9 | 90.0% |
| Sloped surfaces | 10 | Unsafe | 9 | 90.0% |
| Elevated surfaces | 10 | Unsafe | 2 | 20.0% |
| Water | 5 | Unsafe | 4 | 80.0% |
Confusion matrix for cluster-based segmentation:

| Actual Class | Predicted Safe | Predicted Unsafe |
|---|---|---|
| Safe | 11 | 9 |
| Unsafe | 11 | 39 |
Binary segmentation results per surface type (Section 4.2.2):

| Type | No. of Boxes | Correct Output | No. of Correct Results | Accuracy |
|---|---|---|---|---|
| Objects | 15 | Unsafe | 15 | 100.0% |
| Flat horizontal surfaces | 10 | Safe | 10 | 100.0% |
| Mottled surfaces | 10 | Safe | 9 | 90.0% |
| Vertical surfaces | 10 | Unsafe | 1 | 10.0% |
| Sloped surfaces | 10 | Unsafe | 3 | 30.0% |
| Elevated surfaces | 10 | Unsafe | 0 | 0.0% |
| Water | 5 | Unsafe | 1 | 20.0% |
Confusion matrix for binary segmentation:

| Actual Class | Predicted Safe | Predicted Unsafe |
|---|---|---|
| Safe | 19 | 1 |
| Unsafe | 30 | 20 |
Per-class precision and recall for the two segmentation approaches (Section 4.2.4):

| Approach | Precision (Safe) | Recall (Safe) | Precision (Unsafe) | Recall (Unsafe) |
|---|---|---|---|---|
| Clustering | 50.00% | 55.00% | 81.25% | 78.00% |
| Binary segmentation | 38.70% | 95.00% | 95.20% | 40.00% |
| Binary segmentation (under industry assumptions) | 100.00% | 95.00% | 93.75% | 100.00% |
Results on recorded videos (Section 4.3):

| Type | Actual Output | No. of Frames | No. of Correct Results | Accuracy |
|---|---|---|---|---|
| Empty flat horizontal surfaces | Safe | 2025 | 2025 | 100.00% |
| Empty mottled surfaces | Safe | 2025 | 1830 | 90.37% |
| Flat horizontal surfaces with objects | Unsafe | 2025 | 2025 | 100.00% |
| Mottled surfaces with objects | Unsafe | 2025 | 2025 | 100.00% |
Confusion matrix for the recorded videos:

| Actual Class | Predicted Safe | Predicted Unsafe |
|---|---|---|
| Safe | 3855 | 195 |
| Unsafe | 0 | 4050 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).