Article

Investigating Methods for Integrating Unmanned Aerial Systems in Search and Rescue Operations

School of Aviation and Transportation Technology, Purdue University, West Lafayette, IN 47907, USA
* Author to whom correspondence should be addressed.
Drones 2020, 4(3), 38; https://doi.org/10.3390/drones4030038
Submission received: 30 May 2020 / Revised: 20 July 2020 / Accepted: 21 July 2020 / Published: 24 July 2020

Abstract

Unmanned aerial systems (UAS) are increasingly being used in search and rescue (SAR) operations to assist in the discovery of missing persons. UAS are useful to first responders in SAR operations because of their rapid deployment, high data volume, and high spatial resolution data collection capabilities. Relying on traditional manual interpretation methods to find a missing person in imagery datasets containing several hundred images is both challenging and time-consuming. To better find small signs of missing persons in large UAS datasets, computer-assisted interpretation methods have been developed. This article presents the results of an initial evaluation of a computer-assisted interpretation method tested against manual methods in a simulated SAR operation. The evaluation focused on resources available to first responders performing SAR operations, specifically RGB data, volunteers, and a commercially available software program. Results from this field test were mixed: the traditional group discovered more objects but required more time, in man-hours, to discover them. Further field experiments, based on the capabilities of current first responder groups, should be conducted to determine to what extent computer-assisted methods are useful in SAR operations.

1. Introduction

Aerial search and rescue operations are performed to cover large areas with the intent of rapidly locating a missing person or object. Unmanned aerial systems, hereafter referred to as UAS, have become a ubiquitous tool in many industries, including search and rescue (SAR), over the past decade [1]. Prior to the rise of UAS, SAR teams were required to search visually through the windows of manned aircraft [2], often with long response times and low success rates [2,3]. UAS, in contrast, are capable of rapid deployment and can be utilized by remote search teams to further decrease response times [4,5,6]. These capabilities allow response teams to begin searching for signs of the missing person almost immediately. The rapid image collection capabilities of UAS hold great potential in SAR operations, as time is a precious resource, and reach their full potential when applied in a methodical fashion.
Goodrich et al. [7] describe three methods of operation, or paradigms, when using UAS in SAR: remote-led, base-led, and sequential. Remote-led operations use UAS as a means to extend the vision of ground searchers and are particularly useful in situations where the missing person is being tracked. This method was used to assist in the successful rescue of a missing climber in the Karakoram mountains [8]. A climber fell from an ice cliff and was presumed dead by the accompanying climbers before they eventually returned to the nearest base camp. An aerial cinematography group was present at the base camp and assisted the climbers in discovering their friend alive [8]. Base-led operations use UAS to scout high probability areas and deploy rescue teams to areas of interest. This method has shown effectiveness in mountain SAR experiments, where a small UAS, after finding a potential missing person, would hover over the area and act as a “beacon” for rescue teams on snowmobiles [9]. Sequential operations are performed when a missing person could be anywhere over a large area, and a UAS is used to cover the area quickly. Sequential operations often employ a “lawnmower” style grid, a term used in the SAR community to describe operations where the UAS gathers imagery by following a pre-programmed route over the flight area such that each flight line slightly overlaps the next. A lawnmower grid-based search provides exceptional coverage, providing volunteers with as many images as possible and giving the best chance for detection, but it is a lengthy process that generates large datasets. These datasets have grown substantially in size as UAS sensor resolutions continue to improve, creating a tradeoff between sensor quality and dataset size. Higher quality sensors allow more minute objects to be discovered, but the larger datasets they produce tend to increase the time required for manual image interpretation; collected images are viewed by search volunteers who scan individual images, looking for patterns or colors that contrast with the surrounding background. In order to detect minute objects, volunteers will zoom into an image until individual pixels can be discerned. Often, search volunteers look for a color that matches the last item of clothing the missing person was seen wearing. If volunteers find anything out of the ordinary in the images, they report the location and a ground team is sent to investigate.
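As a brief illustration of the overlap arithmetic behind a lawnmower grid, the sketch below computes the spacing between adjacent flight lines from the sensor's on-ground footprint and the desired lateral overlap. The footprint value is a hypothetical figure chosen for the example; the 80% overlap matches the value flown later in this study.

```python
# Illustrative sketch only: flight-line spacing for a lawnmower grid.
# Adjacent lines are spaced at the image footprint width times (1 - overlap),
# so higher overlap means more flight lines and a longer mission.
# The footprint below is an assumed value, not a parameter from the study.

def line_spacing(footprint_width_m: float, lateral_overlap: float) -> float:
    """Distance between adjacent flight lines for a given overlap fraction."""
    return footprint_width_m * (1.0 - lateral_overlap)

footprint = 120.0   # assumed across-track footprint at survey altitude, in metres
overlap = 0.80      # 80% lateral overlap, as flown in this study
print(f"Adjacent flight lines spaced {line_spacing(footprint, overlap):.0f} m apart")  # -> 24 m
```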
Manual image interpretation methods are extremely tedious, lead to mental and physical fatigue, and can subsequently lead to false object identification. These issues have generated a large amount of interest in the SAR community toward the development of computer-assisted object identification methods that can effectively replace manual methods by being accurate, easy to use, and cost effective. Computer-assisted methods generally fall within two categories: one focusing on object-based classification, and the other focusing on pixel-based classification to support field teams. Object-based classification research focuses on the creation and improvement of machine learning based object identification algorithms capable of correctly identifying humans in images. These algorithms have shown use in multiple studies simulating victim discovery. While the specifics of the models are not relevant to this research, they are well described and compared in Andriluka et al. [10]. The main difference between the algorithms compared is the shapes that each algorithm searches for. Algorithms based upon the shape of an entire human, or a large distinguishable part such as the torso, are reliable when a person is in full view but demonstrate inconsistencies when portions of the shape are obscured [11]. Due to this difficulty, algorithms based on smaller sections of a person, such as arms or legs, were created [12]. These two classes of model, full body and smaller sections, were tested on UAS data collected from 1.5 to 2.5 m, and the part-based models proved more useful [10]. Despite such accuracies, image acquisition conducted at such low altitudes negates the advantages provided when using UAS as a remote sensing platform in SAR missions. Imagery captured from higher altitudes, 100 m and above, presents a unique problem for object identification, as a human in full view will take up tens of pixels at most [13]. While a victim in full view may take up tens of pixels, an obscured victim will occupy even fewer.
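For readers unfamiliar with whole-body detectors of the kind discussed above, the sketch below shows the general approach using the HOG-plus-linear-SVM person detector bundled with OpenCV, in the spirit of reference [11]. It is a generic illustration, not the pipeline evaluated by Andriluka et al. [10]; the file names and detector parameters are placeholder assumptions.

```python
# Minimal sketch of full-body (whole-shape) person detection with a
# HOG + linear SVM detector, using OpenCV's bundled pedestrian model.
# Illustrative only; not the detectors compared in reference [10].
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("uas_frame_0001.jpg")      # hypothetical low-altitude UAS frame
boxes, weights = hog.detectMultiScale(
    image,
    winStride=(8, 8),   # sliding-window step in pixels
    padding=(8, 8),
    scale=1.05,         # image-pyramid scale factor between levels
)

# Draw each detection so a reviewer can confirm or reject it
for (x, y, w, h) in boxes:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite("uas_frame_0001_detections.jpg", image)
```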
Pixel-based classification, in contrast with object-based classification, searches images for anomalies instead of specified shapes. Pixel-based classification is useful when shapes are distorted, such as a human seen from altitude, or when the desired shape is unknown, such as with signs of the missing person. Even with a distorted shape, a human or signs of them will form an anomaly within the image. The most common anomalies are body heat and the color of the victim’s clothing. UAS capable of gathering thermal imagery can be paired with computer vision, allowing significant temperature differences to be automatically detected [13]. Rasmussen et al. [14] suggest overlaying RGB and thermal data to create a hybrid image emphasizing thermal anomalies. While thermal data can provide search teams with a useful dataset, thermal sensors are not available to all teams and may not detect signs of a missing person, especially if that person is hypothermic or, ultimately, deceased [15]. RGB sensors, on the other hand, are available on almost all commercially available UAS platforms and can easily be used to find color anomalies associated with the victim, or signs of the victim. Two approaches exist when searching for color anomalies, and the choice between them depends on the information available to the search team. If the search team has been provided an image of the missing person in their last known clothing, or a description of that clothing, then specific colors can be searched for. If the search team is not provided with images or descriptions of the search target, then any unusual colors in the imagery become suspect. When searching for color anomalies, a program searches the dataset for any unusual color, or rapid change in colors, and highlights these for the operator to review manually [16]. In areas with a monochromatic background this is very useful, but when myriad colors are present in the imagery, false positives increase [17]. Obtaining information about the missing person’s last known clothing allows the searchers to target specific color anomalies [17]. This will not eliminate false positives entirely, but it will reduce their number. For example, if analysts are looking for blue, they will not be alerted to orange colored anomalies within the image. Searching for a specific color requires the image analysts to account for color shifts due to altitude, illumination, and sensor differences. The difficulties presented by these factors have been addressed in other research and are accounted for through image calibration [18]. Search and rescue events are unlikely to have the luxury of calibrated images because even simple calibration methods require spectrally stable reference targets in flat open areas [19]. Organizations responsible for search and rescue are unlikely to have the funds available for one of these calibration tiles, the time required to perform the calibration, or the organizational knowledge to perform it.
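As a rough illustration of the "unusual color" variant described above, the sketch below flags pixels whose color is statistically far from the image's overall color distribution, using a simple Mahalanobis-distance anomaly score. The file name and threshold are assumptions for the example and are not taken from the cited systems; a real implementation would also tile or downsample large UAS frames.

```python
# Hedged sketch of pixel-based colour-anomaly detection: flag pixels whose
# colour lies far from the scene's overall colour distribution.
# File name and percentile threshold are illustrative assumptions.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("survey_0001.jpg").convert("RGB"), dtype=np.float64)
pixels = img.reshape(-1, 3)

mean = pixels.mean(axis=0)
cov = np.cov(pixels, rowvar=False)
inv_cov = np.linalg.inv(cov + 1e-6 * np.eye(3))   # regularise for numerical stability

# Squared Mahalanobis distance of every pixel from the scene's mean colour
diff = pixels - mean
scores = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)

# Keep only the rarest 0.1% of pixels as candidate anomalies for manual review
threshold = np.percentile(scores, 99.9)
anomaly_mask = (scores > threshold).reshape(img.shape[:2])
print(f"{int(anomaly_mask.sum())} anomalous pixels flagged for review")
```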
Search and rescue operations vary regarding operational budgets, equipment inventories, and expertise. Todd et al. [20] describe emerging UAS programs in first responder groups. This description revealed that the majority of UAS programs within first responder groups contain five or fewer pilots, often flying as a collateral duty, and have existed for three or fewer years. The relative infancy of SAR UAS programs, coupled with limitations in operating budgets, inventories, and technical expertise, translates into the need for a computer-assisted object identification method that is both low cost and relatively easy to use. While research in the remote sensing community has developed methods and approaches for automated identification of objects in imagery, the hardware and expertise required are often out of reach for those in the SAR community. Therefore, a void exists between what is needed in the SAR community for fast and accurate identification of objects and current manual image interpretation methods. This research compares RGB based spectral signature identification software with traditional manual visual interpretation methods. The question posed in this research is whether a simple spectral identification, computer-assisted approach is a viable alternative to manual methods in terms of efficiency and accuracy.

2. Materials and Methods

This research targeted an approach that allows responder teams to use computer-assisted techniques so long as they have access to a UAS equipped with an RGB sensor and a laptop. In order to keep computing times low, a weak classifier is suggested, as long as the system can account for false positives [13]. Such simplified software applies a pixel-based classifier to detect specific colors within RGB datasets. The object identification software chosen to perform this research is Loc8, pronounced “locate”. Loc8 was chosen because it is a commercially available software package and has seen use in multiple SAR operations [21]. The software was developed based upon the same ideas and concepts that drive manual image interpretation methods; the software scans each image in a dataset to identify user defined spectral signatures. Moreover, the Loc8 user mirrors manual interpretation methods by making use of the last known image of the missing person and using it to determine what color to look for in the mission dataset [22,23,24]. Providing image analysts an example image ensures they search for the color the victim was wearing when last seen, rather than the color the searchers perceive the victim to have been wearing. While volunteers will also look for shapes, the color of a person’s clothes is what most often leads to a successful identification [7]. Loc8 uses this same process by allowing the user to select a specific color from a sample image and search the mission dataset for that color. Every color can be defined by three numerical values between 0 and 255, representing its red, green, and blue components. The process begins with the last known image of the missing person, in this example object G (Figure 1).
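As a small, hedged sketch of this first step (the file name and pixel coordinates below are placeholders, and this is the generic approach rather than Loc8's own code), a target signature can be sampled directly from the reference photograph:

```python
# Sample the RGB signature of the missing person's clothing from a
# reference photograph. File name and coordinates are placeholders.
from PIL import Image

reference = Image.open("last_known_photo.jpg").convert("RGB")
r, g, b = reference.getpixel((420, 310))   # a pixel on the clothing of interest
print(f"Target signature: R={r}, G={g}, B={b}")  # each channel ranges 0-255
```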
When using manual methods, image analysts define a search color based on an image of the missing person’s last known clothing. The clothing color is committed to memory before reviewing the dataset, and any colors matching this memory will catch the analyst’s attention. The analysts then annotate the image in such a way that the object of interest can be quickly located by the search teams. Loc8 mirrors this process by searching for the defined spectral signature in each image of the dataset. More specifically, the Loc8 user defines a spectral signature by selecting a color of interest within the software, and the software records the numerical RGB value associated with the selected color (Figure 2). Each pixel of the dataset is reviewed, and any pixel matching the defined signature, or falling within the defined range, is highlighted in red (Figure 3). All highlighted pixels are circled in red, and users can zoom to the red circles with a hotkey, allowing for rapid assessment of objects.
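The sketch below mimics this matching step in a generic way: every pixel of every mission image is compared against the sampled signature, and images containing pixels within a tolerance are flagged for manual review. The folder name, signature value, and tolerance are illustrative assumptions, and this is not the Loc8 implementation.

```python
# Hedged sketch of spectral-signature matching over a mission dataset.
# Flags each image that contains pixels within a per-channel tolerance of
# the sampled target colour; values below are example assumptions.
import glob
import numpy as np
from PIL import Image

target = np.array([35, 48, 92], dtype=np.int16)   # sampled RGB signature (example)
tolerance = 25                                    # allowed deviation per channel

flagged = []
for path in sorted(glob.glob("mission_images/*.jpg")):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
    matches = np.all(np.abs(img - target) <= tolerance, axis=-1)
    if matches.any():
        flagged.append((path, int(matches.sum())))

for path, count in flagged:
    print(f"{path}: {count} matching pixels - review manually")
```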
One potential drawback of object identification using this simplified spectral signature approach is that the software will return every pixel of the specified color within a dataset, which becomes problematic when the color of interest occurs commonly in nature. False returns can also be created by illumination differences, altitude, or the use of different sensors. For example, in Figure 3, areas of object G’s shirt have been highlighted as blue, even though the shirt is clearly black, white, and grey. These false positives are due to areas of shadow on the shirt distorting the colors. Therefore, it should be noted that while Loc8 provides a great deal of automation to object identification, the user still needs to examine every image containing a color match to separate false positives from potential positives. When a potential positive is found, the user can then relay the GPS coordinates of the image to recovery teams for further investigation.
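Relaying an image's position typically relies on the GPS metadata that mapping-grade UAS cameras embed in each frame. The sketch below reads those EXIF tags with Pillow; the file name is a placeholder, and this is a generic illustration rather than a feature of any particular SAR package.

```python
# Hedged sketch: read the embedded GPS position of a flagged UAS image so
# it can be passed to a ground team. Returns None if no GPS EXIF is present.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def dms_to_degrees(dms, ref):
    """Convert EXIF degrees/minutes/seconds rationals to signed decimal degrees."""
    degrees = float(dms[0]) + float(dms[1]) / 60.0 + float(dms[2]) / 3600.0
    return -degrees if ref in ("S", "W") else degrees

def image_position(path):
    exif = Image.open(path)._getexif()
    if not exif:
        return None
    gps = {}
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "GPSInfo":
            gps = {GPSTAGS.get(k, k): v for k, v in value.items()}
    if "GPSLatitude" not in gps:
        return None
    lat = dms_to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = dms_to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon

print(image_position("flagged_image_0412.jpg"))   # placeholder file; e.g. (40.43, -87.03)
```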

Study Area

The study took place in Martell Forest, a 193-hectare area owned and managed by the Purdue University Department of Forestry and Natural Resources [25]. The property is mostly forested, consisting of a mixture of mature canopy forest and plantation plots of varying tree species at differing levels of maturity, making the area an excellent outdoor laboratory for SAR research efforts (Figure 4). Within this property, a section of Martell Forest was chosen for the following reasons: diversity in vegetation, ease of access, ease of flight operations, and size of the search area. The inclusion of diverse vegetation reflects the nature of SAR events, wherein the victim could be in any number of land covers. Although only 7 miles from the Purdue University campus, Martell Forest is outside of the Class D airspace surrounding Purdue University Airport, allowing UAS operations to be conducted without interfering with manned aircraft operations. The open field on the north end of the forest allowed for easy takeoff and landing and, most importantly, allowed for line of sight to be maintained over a large area. Federal Aviation Administration (FAA) regulations at the time of writing required UAS pilots to maintain visual line of sight with their unmanned aerial vehicle (UAV) while it was in flight. The 48-hectare section of Martell Forest flown for data collection had a diverse array of vegetative cover but also allowed continuous visual line of sight to be maintained during operations.
Providing adequate coverage of the study area required a lawnmower style grid to be flown with 80% frontal and lateral overlap to ensure that all simulated missing persons would be imaged in the mission (Figure 5). The UAS utilized for image acquisition was a C-Astral Bramor PPX fixed-wing platform. This UAS was equipped with a Sony RX1R II sensor payload capable of gathering imagery with a spatial resolution of 1.6 cm/px from 121 m. While flying at a higher altitude would have been more efficient, 121 m was chosen due to current FAA regulations limiting UAS flight to 400 ft (121 m) AGL. Covering the 48-hectare section of Martell Forest with this UAS took 38 min and yielded 1075 images. This UAS can cover a much larger area with its 2.5 h flight endurance, but line-of-sight concerns limited the size of the study area. The Bramor PPX is unlikely to be available to a first responder team, due to the high cost and skill requirements needed for operation, but the data quality is similar to that obtainable by less complex UAS platforms. For example, a DJI Mavic 2 Pro, equipped with a 20 MP Hasselblad camera, can achieve a ground sampling distance (GSD) of 1.66 cm/px from 100 m and 1.99 cm/px from 120 m.
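For readers who want to reproduce the spatial resolution figure quoted for the Bramor payload, the worked example below applies the standard GSD relation. The sensor specifications are approximate published values for the Sony RX1R II and are our assumption, not parameters reported by the study.

```python
# Worked example of ground sampling distance (GSD):
#   GSD = (altitude x sensor width) / (focal length x image width in pixels)
# Sensor figures below are approximate published RX1R II specifications
# (assumed for illustration, not values stated in the study).

def gsd_cm_per_px(altitude_m: float, sensor_width_mm: float,
                  focal_length_mm: float, image_width_px: int) -> float:
    """Ground sampling distance in cm/px for a nadir-pointing camera."""
    return (altitude_m * 100.0) * sensor_width_mm / (focal_length_mm * image_width_px)

# Assumed: 35.9 mm-wide full-frame sensor, fixed 35 mm lens, 7952 px image width
print(f"{gsd_cm_per_px(121, 35.9, 35.0, 7952):.2f} cm/px at 121 m")  # ~1.56 cm/px, close to the quoted 1.6 cm/px
```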
Missing people are often found by the color of their clothing, which has allowed past research to simulate missing persons by laying articles of clothing in search areas [7]. For this research, a double-blind simulated search and rescue operation was performed. Six articles of clothing, objects A–F, and one dressed mannequin, object G (Figure 1), were placed by a third party in a dispersed fashion over different land cover types [26] (Figure 6). A double-blind approach was chosen so that neither those engaging in image interpretation nor the researchers would have information on the location of the simulated missing persons. Without knowledge of the location of the simulated missing persons, the researcher was unable to provide the image analysts with any information that would have produced bias in the search effort. This also better reflected real search and rescue operations, in which no one knows the location of the missing person.
The third party responsible for hiding the objects was instructed to ensure the objects had a clear view of the sky, so they could be found from the air, but was given no other instructions on where to place them. This method of dispersal prevented bias during the search phase: the searchers did not know the object locations, and the researcher could not provide clues to them. Fourteen student volunteers participated in the search as image analysts. Four students (group one) familiar with the Loc8 software were selected to search for the missing objects using computer-assisted methods. The remaining ten students formed group two and used traditional manual methods, known as squinting. The members of group two had no experience with manual image interpretation methods but were familiar with the large datasets associated with UAS grid-based mapping missions. The difference in group size was designed to represent two possibilities for image analyst groups in search and rescue. A SAR effort is likely to attract many eager but untrained volunteers who are happy to contribute by performing non-technical tasks [27]; this is represented by our ten-person group. Most people are already familiar with viewing images on a computer, and the resource requirements for manual image analysis are modest, one screen per person. Alternatively, volunteers with significant related experience or qualifications are often present in smaller numbers and integrated into operations when possible [28]. This was reflected in the makeup of the student volunteers, as four had previously assisted one of the researchers in a search and rescue effort using the spectral signature analysis software.
While the four volunteers for spectral signature analysis did have previous experience with the software, they were provided 45 min of additional training to ensure they fully understood the software and how it related to the experimental design and methods in place. The training consisted of mission setup, RGB sample collection, spectral database creation, and image assessment. Researcher experience provided the group with information on best practices for color sampling, recommended software settings, and how to account for color shift within images. Upon the conclusion of training, group one was provided images of the missing persons (Figure 1) and then took 38 min to define the spectral signatures for each of the missing objects in the lawnmower search grid dataset of 1075 images. Group one was then instructed to record the time it took for the Loc8 software to locate and flag each of the missing objects, with the time between each discovered object recorded as well. Upon conclusion of image analysis by the Loc8 software, flagged images were reviewed by a member of the team responsible for distributing the objects to determine whether the object was one of the simulated missing persons.
Group two, using traditional squinting methods, was provided with an explanation of squinting techniques. Group two was then told how to use the zoom feature in Windows Photo Viewer, the only assistance available during squinting, and how to alert the researcher when they found potential objects. Group two was also instructed that they represented a group of volunteers participating in an active SAR operation with the goal of discovering all simulated missing persons present in the dataset. The target images and mission images provided were the same as those provided to group one. The 10 members of the group then worked together to divide the 1075 images equally among group members. As with group one, group two was timed on how long it took for each missing object to be found. When a potential object was found, the time taken to find it, along with the image number, was recorded. Following the activity, to ensure accuracy and a lack of false positives, a member of the third party responsible for placing the objects reviewed the images to confirm or deny each object as a simulated missing person.

3. Results and Discussion

The dataset used in this experiment contained 1075 images taken with a 40 MP camera, covering 66 hectares. This dataset contained over 45 billion pixels, and the targets made up 14,257 pixels (Table 1), or 0.00003% of the dataset. The most difficult object to find was object G, which was visible in only two images (Figure 7) and only vaguely resembled its reference image in Figure 2. This change in appearance, specifically in color due to shadows, prevented group one from discovering object G. Group two was, surprisingly, able to discover object G. This may have been the result of a group member who proved rather adept at identifying objects in the imagery that other group members could not; other members of group two could not identify object G even when it was pointed out to them directly. Object A too closely resembled the background colors of the imagery. The third party responsible for hiding the objects was asked to find object A in the imagery and was unable to. While multiple objects evaded detection by the search groups, only object A could not be discovered by the researchers. Due to its functional invisibility, object A has been removed from further discussion.
Group two separated the dataset into ten subsets of roughly one hundred images each (images 1–100, 101–200, and so on), allowing up to ten images to be assessed at any given time for maximum efficiency. More importantly, the images being assessed came from ten different points throughout the dataset. Because the images were gathered with 80% overlap, missing objects had the potential of being found and confirmed by multiple individuals in the group, assuming that a target existed in those images. Group two was also able to assign individuals to specialized searches, such as investigating gaps in tree canopies or following tracks. Both specialized searches proved useful and allowed group two to find objects G and E (Table 2). The discovery of object G within 30 min was surprising. As stated above, object G can only be seen in two images due to tree canopy cover (Figure 7) and, in those images, only vaguely resembles its reference image in Figure 2. Comparing the time taken to find objects (Table 2), group two was able to find more objects, faster, than group one, which runs contrary to previous experiences of the authors and of those in the SAR community who utilize Loc8.
Group two was able to find five objects in under 30 min, and the software group was able to find three objects in a little over 36 min. Each group decided to work cooperatively in order to find the targets. Group one had each member search for a different portion of the blue color spectrum. This decision was made because four of the six objects contained blue colors (Figure 1). This allowed the group to search for a wide range of spectral signatures simultaneously, increasing the likelihood that they would find an object of interest. This method allowed the group to discover three of the four blue objects (Table 2), with the fourth being object G. While group two was able to discover more objects in a shorter amount of time, their group consisted of more people. Accounting for man-hours (Table 2), group one was able to discover objects more quickly, with the only exception being object C, which was located towards the end of the dataset. Object C was found almost immediately by group two, but group one had to wait for Loc8 to reach the end of the dataset before discovering it.
Along with time, false positives are an important metric in SAR operations (Table 3). When a potential object is discovered, the general operating procedure is to dispatch a ground team to investigate. False positives can therefore lead ground teams to waste their most precious resource: time. Loc8 takes steps to reduce false positives by bypassing images with extremely high alert counts. One member of group one, who was assigned the color closest to black, was alerted to tens of thousands of pixels in multiple images. When the number of alerts is this high, Loc8 determines that the image is likely full of false positives and does not save it, in order to reduce the number of images that need to be inspected. While false positives are preferable to false negatives, where the victim is present but not detected, too many false positives waste valuable time during the search effort. Although not reflected in the experimental design, the number of false positives generated by the Loc8 software is significantly reduced by an experienced user who knows how to manipulate the color wheel and the spectral signature of the target object.
Images flagged by group one and objects noted by group two were compared following the activity. In an actual SAR operation, teams would have been sent to investigate images flagged by volunteers engaged in image interpretation. In this activity, group one recorded 76 positive alerts, that is, images in which a target was correctly identified. In a SAR operation, a search team would have been sent when an object was first alerted, and repeated notifications would not be necessary. Repeat finds were included in this comparison to demonstrate that the software was able to consistently find targets in certain land covers and present the targets to the viewer in a useful manner. The low rate of false positives also demonstrates the benefit of software functions such as the zoom feature, which allow users to rapidly assess an alerted object. The squinters’ false positive rate is higher in comparison, but their total number of alerts is much lower. This is because the group did not continue to alert to objects that had already been found, and many of the false positives came from the search for objects A and D, with object A later being discarded. Four of the fourteen false positives appeared before the discovery of object G, the last object found, and the remaining ten thereafter.
During their time searching images, group two maintained constant communication among members. Most of the communication consisted of group members discussing potential finds and having other members confirm or deny the object as a victim. Once multiple members had confirmed an object, it was noted for a virtual ground search team to be sent out. In this case the ground team was a member of the third party who hid the items. They reviewed the possible object and responded with either “yes” or “no”; the responses were limited to these two words so that no accidental hints could be provided. This approach mirrored real-life operations. Sending recovery teams to locations is time consuming and potentially dangerous, depending on the environment. The image analysts will also only know the success or failure of their alert after the team has searched the location, and the response they receive is unlikely to contain any helpful hints. In this activity, image interpretation efforts lasted two hours, and after an hour those engaged in manual interpretation were showing signs of fatigue and boredom. Missing from this experiment is how much human emotion and adrenaline come into play during an actual search and rescue event. The manual image interpretation process for locating missing objects in a large image dataset is tedious and exhausting, and the size of the dataset used here is small compared to many actual SAR events [29]. In this situation the squinters did know that the missing persons were present in the dataset and continued until the search time was completed. While there may be a few dedicated squinters in any search, it is unlikely that they will remain effective after a few hours of searching.
Group one had less discussion during the search; the main form of verbal communication was to have other members of the team confirm potential finds before flagging them. The software team experienced one technical difficulty while creating their spectral signatures: if a signature was not saved in a specific way it would be overwritten, even if its name was changed. This initially prevented members of the group from running multiple spectral ranges at once and was one of the driving reasons behind splitting the signatures among multiple members. The difficulty was resolved during the 38 min used to define spectral signatures and was a nonfactor once the search began. Following the activity, each of the four group members stated that more training and a dry run of an actual SAR event would have allowed them to use the software more efficiently and to allocate duties among group members more effectively.
While the core concept of the software utilized in this research effort centered on an automated approach to searching an image for specific spectral signature ranges, several observations emerged following the event. Overall, the software proved a reliable computer-assisted method in SAR efforts. That stated, human intervention was still required in order to eliminate falsely flagged objects in the dataset. Furthermore, although the Loc8 software is much simpler than many remote sensing software packages, it still requires ample training before being used in a SAR mission. In this activity, with the training provided, the Loc8 group was able to reliably find three of the six objects. Each of these objects fell within the blue portion of the color spectrum, and their colors were not greatly affected by the environment. The fourth blue object was heavily affected by shadow in the mature canopy forest, and the group was unable to determine its location. The group was equally unable to discover either of the black objects, due to the interference of shadows. Additionally, the group ran into issues with different color values being recorded between cameras. Every camera registers the RGB values of a color differently, and this provided a challenge for group one. These two issues, shadows and recorded color, are inherent in any search but can be mitigated through adequate training. The effect of shadows on color, such as turning white to blue in Figure 3, is repeatable and can be demonstrated to software users. Future research should investigate whether this shift is consistent enough to be corrected for.
Contrary to the assumptions leading into this research effort, group two was able to discover more objects than the Loc8 software group, group one. This outcome may be attributable to the size difference between the two groups. Group two was two and a half times larger than group one, allowing them to employ more specialized methods. The larger group size allowed the squinters to assess images from up to ten different locations within the dataset at any given time and to employ specialized searches. While the squinters were effective in discovering the simulated missing persons, group members did experience increasing fatigue and lack of motivation one hour into the activity. Initially, the group was very motivated to discover the missing persons, and every discovery seemed to reinvigorate the group. Locating object G in the mature canopy forest environment was particularly energizing, as everyone seemed impressed by the find. However, after object G was found, enthusiasm and morale quickly fell as the group struggled to find the remaining simulated missing persons, despite having already discovered five of the six victims. An hour into the search, thirty minutes after object G was found, all the searchers were less motivated and putting less effort into the search. It is uncertain how this translates to active SAR operations, where volunteers are racing to save a life. Image analysts would likely be more motivated, but it is uncertain how long this would last, as the search area and the number of images tend to be much larger in actual SAR operations. Additionally, in real SAR events, the manual image interpretation volunteers would not know whether the missing person was in the dataset at all. Future research should explore how fatigue relates to the quality of results when using manual methods during a real SAR operation. In contrast to group two, group one became more efficient over time as they learned to utilize the Loc8 software more effectively. Unlike humans, the computer does not fatigue and is limited only by its computational capacity, which should be reflected in the experimental design of future research comparing computer-assisted object identification with manual methods.

4. Conclusions

Advances in both the quality and quantity of UAS imagery present a unique challenge to first responders performing search and rescue operations. The increasingly small pixel sizes in aerial imagery allow for the discovery of even minute signs of a missing person, but the number of images and the size of the datasets gathered in modern SAR missions make traditional manual image interpretation methods increasingly difficult. In this study, traditional manual interpretation methods were compared with computer-assisted spectral signature software to determine whether computer-assisted methods were more efficient and accurate than the status quo. Research findings demonstrated mixed results: manual interpretation methods resulted in faster location of missing objects at the onset, but at the cost of fatigue and loss of interest over time. Conversely, those using computer-assisted methods became more efficient with time and better able to find objects once familiarized with the software. Future research comparing computer-assisted object identification should incorporate more training on the software being used and should seek to match, as closely as possible, the conditions in place during an actual SAR event.

Author Contributions

Conceptualization, W.T.W.; Data Curation, W.T.W.; Formal analysis, W.T.W.; Funding Acquisition, J.H.; Investigation, W.T.W.; Methodology, W.T.W. and J.H.; Project administration, W.T.W.; Resources, J.H.; Software, W.T.W.; Supervision, J.H.; Validation, W.T.W.; Visualization, W.T.W. and J.H.; Writing—Original draft, W.T.W.; Writing—Review & Editing, W.T.W. and J.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. All equipment was provided by Purdue University’s School of Aviation and Transportation Technology (SATT).

Acknowledgments

The authors would like to thank the UAS capstone students for assisting in this mock Search and Rescue operation. The authors would also like to thank Gene Robinson for providing his expertise in SAR operations which assisted in the development of the methods used. A final thanks to the Loc8 development team who provided the authors the training needed to use their software. This study was performed as part of William T. Weldon’s dissertation research under Joseph Hupy.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Murphy, R.R.; Tadokoro, S.; Kleiner, A. Disaster Robotics. In Springer Handbook of Robotics; Siciliano, B., Khatib, O., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 1577–1604. [Google Scholar]
  2. Giang, W.; Keillor, J. Effects of cue saliency in an assisted target detection system for search and rescue. In Proceedings of the 2009 IEEE Toronto International Conference Science and Technology for Humanity (TIC-STH), Toronto, ON, Canada, 26–27 September 2009; pp. 527–532. [Google Scholar]
  3. Ford, J.; Clark, D. Preparing for the impacts of climate change along Canada’s Arctic coast: The importance of search and rescue. Mar. Policy 2019, 108, 103662. [Google Scholar] [CrossRef]
  4. Bodas, M.; Peleg, K.; Shenhar, G.; Adini, B. Light search and rescue training of high school students in Israel—Longitudinal study of effect on resilience and self-efficacy. Int. J. Disaster Risk Reduct. 2019, 36, 101089. [Google Scholar] [CrossRef]
  5. Jain, T.; Sibley, A.; Stryhn, H.; Hubloue, I. Comparison of Unmanned Aerial Vehicle Technology Versus Standard Practice in Identification of Hazards at a Mass Casualty Incident Scenario by Primary Care Paramedic Students. Disaster Med. Public Health Prep. 2018, 12, 1–4. [Google Scholar] [CrossRef] [PubMed]
  6. Goodrich, M.A.; Morse, B.S.; Engh, C.; Cooper, J.L.; Adams, J.A. Towards using unmanned aerial vehicles (UAVs) in wilderness search and rescue: Lessons from field trials. Interact. Stud. 2009, 10, 453–478. [Google Scholar]
  7. Goodrich, M.A.; Morse, B.S.; Gerhardt, D.; Cooper, J.L.; Quigley, M.; Adams, J.A.; Humphrey, C. Supporting wilderness search and rescue using a camera-equipped mini UAV. J. Field Robot. 2008, 25, 89–110. [Google Scholar] [CrossRef]
  8. McRae, J.N.; Gay, C.J.; Nielsen, B.M.; Hunt, A.P. Using an Unmanned Aircraft System (Drone) to Conduct a Complex High Altitude Search and Rescue Operation: A Case Study. Wilderness Environ. Med. 2019, 30, 287–290. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Karaca, Y.; Cicek, M.; Tatli, O.; Sahin, A.; Pasli, S.; Beser, M.F.; Turedi, S. The potential use of unmanned aircraft systems (drones) in mountain search and rescue operations. Am. J. Emerg. Med. 2018, 36, 583–588. [Google Scholar] [CrossRef] [PubMed]
  10. Andriluka, M.; Schnitzspan, P.; Meyer, J.; Kohlbrecher, S.; Petersen, K.; Stryk, O.v.; Roth, S.; Schiele, B. Vision based victim detection from unmanned aerial vehicles. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 1740–1747. [Google Scholar]
  11. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 886–893. [Google Scholar]
  12. Bourdev, L.; Malik, J. Poselets: Body part detectors trained using 3D human pose annotations. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 1365–1372. [Google Scholar]
  13. Vempati, A.S.; Agamennoni, G.; Stastny, T.; Siegwart, R. Victim detection from a fixed-wing uav: Experimental results. In Proceedings of the International Symposium on Visual Computing, Las Vegas, NV, USA, 14–16 December 2015; pp. 432–443. [Google Scholar]
  14. Rasmussen, N.D.; Morse, B.S.; Goodrich, M.A.; Eggett, D. Fused visible and infrared video for use in Wilderness Search and Rescue. In Proceedings of the 2009 Workshop on Applications of Computer Vision (WACV), Snowbird, UT, USA, 7–8 December 2009; pp. 1–8. [Google Scholar]
  15. Chrétien, L.-P.; Théau, J.; Ménard, P. Visible and thermal infrared remote sensing for the detection of white-tailed deer using an unmanned aerial system. Wildl. Soc. Bull. 2016, 40, 181–191. [Google Scholar] [CrossRef]
  16. Rasmussen, N.D.; Thornton, D.R.; Morse, B.S. Enhancement of unusual color in aerial video sequences for assisting wilderness search and rescue. In Proceedings of the 2008 15th IEEE International Conference on Image Processing, San Diego, CA, USA, 12–15 October 2008; pp. 1356–1359. [Google Scholar]
  17. Sun, J.; Li, B.; Jiang, Y.; Wen, C.Y. A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes. Sensors 2016, 16, 1778. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Lussem, U.; Hollberg, J.; Menne, J.; Schellberg, J.; Bareth, G. Using calibrated RGB imagery from low-cost UAVs for grassland monitoring: Case study at the Rengen Grassland Experiment (RGE), Germany. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 229. [Google Scholar] [CrossRef] [Green Version]
  19. Smith, G.M.; Milton, E.J. The use of the empirical line method to calibrate remotely sensed data to reflectance. Int. J. Remote Sens. 1999, 20, 2653–2662. [Google Scholar] [CrossRef]
  20. Todd, C.; Werner, C.; Hollingshead, J. Public Safety UAS Flight Training and Operations; Drone Responders: Miami, FL, USA, 2019; pp. 1–11. [Google Scholar]
  21. Loc8. Loc8: Image Scanning Software. Available online: https://loc8.life/news/ (accessed on 13 May 2020).
  22. Adams, G.K. An Experimental Study of Memory Color and Related Phenomena. Am. J. Psychol. 1923, 34, 359–407. [Google Scholar] [CrossRef]
  23. Bruner, J.S.; Postman, L.; Rodrigues, J. Expectation and the Perception of Color. Am. J. Psychol. 1951, 64, 216–227. [Google Scholar] [CrossRef] [PubMed]
  24. Duncker, K. The Influence of Past Experience upon Perceptual Properties. Am. J. Psychol. 1939, 52, 255–265. [Google Scholar] [CrossRef]
  25. Purdue University, C.o.A., Forestry and Natural Resources. Martell Forest. Available online: https://ag.purdue.edu/fnr/Pages/propmartell.aspx (accessed on 5 May 2020).
  26. Morse, B.S.; Thornton, D.; Goodrich, M.A. Color anomaly detection and suggestion for wilderness search and rescue. In Proceedings of the 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Boston, MA, USA, 5–8 March 2012; pp. 455–462. [Google Scholar]
  27. Barsky, L.E.; Trainor, J.E.; Torres, M.R.; Aguirre, B.E. Managing volunteers: FEMA’s Urban Search and Rescue programme and interactions with unaffiliated responders in disaster response. Disasters 2007, 31, 495–507. [Google Scholar] [CrossRef] [PubMed]
  28. Kendra, J.; Wachtendorf, T. Rebel Food… Renegade Supplies: Convergence after the World Trade Center Attack; Preliminary paper no. 316; Disaster Research Center, University of Delaware: Newark, DE, USA, 2001. [Google Scholar]
  29. Robinson, G. First to Deploy; RPFlightSystems, Inc.: Austin, TX, USA, 2012; pp. 56–82. [Google Scholar]
Figure 1. Images of the six clothing articles, objects A–F, and the mannequin, object G, used as simulated missing persons for this study. Each of these images was provided to the search teams to inform their search.
Figure 2. Image showing a simulated missing person, object G, with a section of their clothing, specifically their jeans, zoomed in to show RGB values associated with different pixel colors, top right. These pixel colors are also shown on a color wheel, bottom left, to show where these pixels lie on the visible spectrum. The result of a search in this range is shown in Figure 3.
Figure 3. Image showing one of the simulated missing persons, object G, after using the Loc8 software to search for the colors described in Figure 2. Due to environmental factors, a few areas of white have been identified as blue; these are good examples of false positives with this type of software.
Figure 4. Map identifying Purdue University Martell Forest boundary shown in red. Study area boundary shown in green.
Figure 5. Detail of the automated flight plan created in mission planning software. Orange lines represent the lawnmower style grid flight path commonly used in SAR missions. This flight plan provided 80% frontal and lateral overlap.
Figure 6. High resolution image of study area with detailed insets of simulated missing persons. Inset border colors correlate to the colors in Figure 2. This high resolution orthomosaic image was generated from the images gathered during the flight as seen in Figure 5.
Figure 7. Two screenshots showing object G as it appears in the imagery collected during the study. These images demonstrate the distortion possible when capturing a human shape from the air; contrast them with object G in Figure 1, Figure 2 and Figure 3. Object G’s location during the study can be seen in Figure 6.
Table 1. Table showing each simulated missing person, the number of pixels they take up in images where they are present, and the number of images they are present in. These numbers are then used to determine the total number of pixels each simulated missing person takes up in the dataset.

Target                 Number of Pixels    Times Appearing    Total Number of Pixels
B: Blue jeans          268                 15                 4020
C: Light Blue shirt    35                  24                 840
D: Black shirt 1       57                  19                 1083
E: Black shirt 2       84                  26                 2184
F: Dark Blue shirt     200                 30                 6000
G: Mannequin           65                  2                  130
Total target pixels                                           14,257
Table 2. Objects discovered along with the time and man hours required to discover the object. Objects can be referenced in Figure 1, and their locations seen in Figure 6.

Target                 Group One Time    Group One Man Hours    Group Two Time    Group Two Man Hours
B: Blue jeans          0:01:10           0:04:40                0:04:50           0:48:20
C: Light Blue shirt    0:36:10           2:24:40                0:02:30           0:25:00
D: Black shirt 1       Not found         Not found              Not found         Not found
E: Black shirt 2       Not found         Not found              0:22:00           3:40:00
F: Dark Blue shirt     0:05:40           0:22:40                0:12:14           2:02:20
G: Mannequin           Not found         Not found              0:28:30           4:45:00
Table 3. Number and percentage of positive and false positive alerts for groups one and two. The disparity in the number of alerts is due to the fact that the software used by group one allowed them to easily alert on every image containing one of the objects, whereas group two only alerted to each object a single time; once an object was found, group two ignored it on subsequent finds.

                   Group One Positive    Group One False Positive    Group Two Positive    Group Two False Positive
Number of alerts   76                    24                          5                     14
% of alerts        76%                   24%                         26%                   74%
