Recognize the Little Ones: UAS-Based In-Situ Fluorescent Tracer Detection

In ecological research, a key interest is to explore movement patterns of individual organisms across different spatial scales as one driver of biotic interactions. While various methods exist to detect and record the presence and movements of individuals in combination with UAS, applying these to smaller animals, such as insects, is challenging and often fails to reveal information on potential interactions. Here, we address this gap by testing, for the first time, the UAS-based detection of small fluorescent dye tracers by means of a simple experiment under field conditions. We (1) excited fluorescent tracers using a UV radiation source and recorded images with a UAS, (2) conducted a semi-automated selection of training and test samples to (3) train a simple SVM classifier, allowing (4) the classification of the recorded images and (5) the automated identification of individual traces. Tracer detection success significantly decreased with increasing altitude, increasing distance from the UV radiation signal center, and decreasing size of the fluorescent traces, including significant interactions amongst these factors. As a first proof of principle, our approach has the potential to be broadly applicable in ecological research, particularly in insect monitoring.


Introduction
A fundamental question driving ecological research is what explains species interactions and their spatial distributions, from the global down to the local scale [1].
Today, various methods of remote sensing (such as radio-telemetry, harmonic radar or LIDAR) are used for detecting and recording the movements of organisms in their natural environments [2][3][4][5][6]. However, these methods often require the attachment of devices to every single individual and are time- and/or cost-intensive, especially for investigations of larger insect populations. Alternatively, fluorescent powder dyes have been successfully applied as a non-invasive method for vertebrates [7][8][9][10] and invertebrates [11][12][13]. They proved to be an affordable method that allows the detection of many individuals in parallel [12]. In principle, the fluorescent tracer dye emits signals when illuminated by a UV light source, which can be detected optically or perceived visually. Thus, the method allows the detection of animal interactions (e.g., flower visits of pollinators [13][14][15]) or the indirect tracking of movements, e.g., by following tracks until full signal decay [7,16,17,18,19]. However, manually searching for fluorescent powder in the field, such as residues on flowers using UV radiation flashlights, is very labor-intensive and time-consuming.
Automated image processing and object-based analysis techniques are by now well established in the remote sensing community. Moreover, recent technical trends, including the miniaturization of high-quality sensors, have opened new possibilities for the use of unmanned aerial systems (UAS) in various fields and applications [20]. In particular, UAS are now used in ecology and agriculture across a spectrum of research topics [20,21], such as plant identification [22] and weed management [23,24], or pest control and insect monitoring [25][26][27][28][29].
With the advantage of being able to fly at low altitudes, very high-resolution data can be generated flexibly in time and space, according to the requirements of the user [20,30]. UAS-based remote sensing can therefore provide new capabilities to capture fluorescent traces on plants over larger spatial scales. Recent studies have already used UAS to detect locusts with strobe-based optical methods [31,32]. However, to our knowledge, no study has so far tested the detection of fluorescent powder dye in UAS-derived image data, nor used an automated classification and object identification approach to derive tracking information on insect movement from such detections.
Here, we aimed to combine these methods and examined whether such a detection, automated identification and classification of fluorescent tracer dyes by UAS and post-processing is principally possible and feasible. To answer these questions, we developed a standardized experimental protocol under simple, low-altitude field conditions as a first proof of principle. We tested three parameters in our set-up, which we expected to strongly influence the efficiency of fluorescent tracer detection: (a) the distance between the UAS camera sensor and a fluorescent tracer, (b) the distance between the projection center on the ground of a UV radiation source spotlight and the fluorescent tracer (a measure that determines the excitation intensity of the fluorescent tracer) and (c) the size of the fluorescent tracer.
We hypothesize that:
i. detection probability decreases with increasing distance between the UAS camera sensor and the fluorescent tracer (DTS);
ii. detection probability decreases with increasing distance between the projection center on the ground of the UV radiation source spotlight and the fluorescent tracer (DTL);
iii. detection probability increases with increasing size of the fluorescent tracer.

Experimental Design & Data Collection
Four patterns were created with a custom R script using the package raster [33], R version 3.1.3 [34], creating a raster with 200 columns and 200 rows and a quadratic cell size of one mm² (Supplementary Materials, Appendix A). Hereof, 1% of the cells (i.e., 400) were randomly selected to receive a fluorescent tracer marking. We aimed to use three different tracer dyes, yielding 133 cells per dye. From the selected raster cells, a subset of cells per dye (20%, i.e., 26 cells) was extended by selecting the neighboring cells above, to the right, and above right of these cells, yielding a second cell size of four mm² (Table 1). The raster was printed on DIN A2 paper sheets, preserving cell edge lengths of one mm. The selected cells were filled with a paste of an ethanol-water mixture (70 vol% ethanol) and the fluorescent dyes (GloMania, UVPP-SET-10, dyes yellow, green and neon pink), resulting in fluorescent traces when dried. These paper sheets were glued on cardboards to ensure a standardized smooth surface.
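The pattern-generation step above can be sketched as follows. The authors used an R script with the raster package; this is a minimal re-implementation in Python with NumPy, where the random seed and the cell-to-dye assignment are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary seed for reproducibility
N = 200                          # 200 x 200 cells of 1 mm^2 each
n_marked = int(0.01 * N * N)     # 1% of the cells -> 400 tracer cells

# randomly select cell indices and split them among the three dyes
idx = rng.choice(N * N, size=n_marked, replace=False)
dyes = np.array_split(idx, 3)    # ~133 cells per dye

grid = np.zeros((N, N), dtype=np.uint8)  # 0 = empty, 1..3 = dye id
for dye_id, cells in enumerate(dyes, start=1):
    rows, cols = np.divmod(cells, N)
    grid[rows, cols] = dye_id
    # extend 20% of the cells to 2 x 2 mm blocks by also marking the
    # neighbors above, to the right and above right (clipped at borders)
    big = rng.choice(cells, size=int(0.2 * len(cells)), replace=False)
    for c in big:
        r, co = divmod(int(c), N)
        grid[max(r - 1, 0):r + 1, co:min(co + 2, N)] = dye_id

print((grid > 0).sum())  # total marked cells (>= 400 due to extensions)
```

The printed grid can then be exported for printing at a 1 mm-per-cell scale.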
All four cardboards were placed outdoors on a mowed lawn surface in the shape of an imaginary rectangle, leaving 10 cm between the borders of each sheet. The UV light source (custom made, Supplementary Materials, Appendix B) was held by hand, and its focus was aligned with the center of the object level. The light source distance (i.e., height) to the sheets was approximately 1.5 m. Data collection took place at night, by hovering over the sheets with a quadcopter-type UAS (platform: DJI Mavic Pro, DJI Innovations, Shenzhen, China) carrying a Red-Green-Blue camera (RGB camera; DJI FC220, DJI Enterprise 2017), see Figure 1, with a focal length of 5 mm and a 1/2.3" CMOS sensor (DJI Enterprise 2017). The exposure time was set to 1/200 s to gather fluorescence signals without blur due to camera gimbal movement; photosensitivity and aperture were controlled by the camera's internal software and set to ISO 1600 and f/2.2. To analyze the maximum spatial resolution of the set-up, we considered two different heights from which image shots were taken, measured as the distance from the object to the camera sensor (DTS; Figure 1). More specifically, we took images at 2.2–2.7 m (distance class 1) and 3.0–3.65 m (distance class 2), corresponding to spatial resolutions of 0.67–0.83 mm and 0.94–1.13 mm, respectively. In addition, we calculated the distance from the center of the light source to each individual tracer (see details below; DTL; Figure 1).
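The reported spatial resolutions follow from the usual ground sampling distance (GSD) relation, GSD = pixel pitch × height / focal length. The sketch below assumes nominal specifications for a 1/2.3" 12 MP sensor (about 6.17 mm wide, 4000 pixels across) — an assumption, not a value stated in the text; with it, the reported 0.67–0.83 mm and 0.94–1.13 mm ranges are reproduced to within rounding.

```python
# Ground sampling distance at the object level, in mm per pixel.
# Sensor width and pixel count are assumed nominal 1/2.3" specs.
def gsd_mm(height_m, focal_mm=5.0, sensor_width_mm=6.17, px_across=4000):
    pixel_pitch_mm = sensor_width_mm / px_across
    return pixel_pitch_mm * (height_m * 1000.0) / focal_mm

for h in (2.2, 2.7, 3.0, 3.65):
    print(f"{h} m -> {gsd_mm(h):.2f} mm/px")
```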
During data selection, we accepted only images with visible fluorescence, in which the traces were sufficiently illuminated and no loss of quality due to focus or camera shake was discernible. This resulted in two images of distance class 1 and eight images of distance class 2 (Table 1). In addition, the subsequent image data processing concentrated on the yellow dye (i.e., fluorescein), omitting the green and neon pink dyes because of insufficient fluorescence signals from the latter two (Supplementary Materials, Appendix B). No additional image pre-processing was applied.

Image Data Processing
Data analysis encompassed the selection of a training and test sample dataset of fluorescent traces, the classification of fluorescent trace pixels against background pixels, and object detection (Figure 2).

Training and test sample pixels were selected manually. For the fluorescent traces, a threshold-based approach considering the three bands of the RGB images was applied: a pixel representing a fluorescent trace was selected, and all pixels diverging by at most five values in each band from the selected pixel were added to the selection automatically. This procedure was stopped when the manual selection of a pixel representing a fluorescent trace would have led to the automatic selection of pixels representing background. Background pixels were sampled in a stratified manner to guarantee that both spectrally distinct features (the paper and other backgrounds) were equally represented by the sample. Pixels representing paper were selected using circular regions with a diameter of 20 pixels, manually placed on the image regions representing paper along a raster with a cell size of 20 pixels. These circular regions were chosen such that the sample represents the illumination gradient while emphasizing the regions where the paper was brightly illuminated. These bright regions were spectrally similar to the fluorescent traces, and pixels in this spectral subspace are therefore most important for the construction of the support vectors (see below). Other background pixels were sampled using two one-pixel-wide lines placed horizontally across the image and two additional one-pixel-wide lines recording the region at the paper margin. From the generated samples, we randomly selected pixels according to the "30p rule" [35], resulting in a total sample size of 90 pixels representing the fluorescent traces and 45 pixels each representing paper and other backgrounds. Samples were derived from each image according to this procedure using GIMP 2.8.18 [36]; random subsets were then sampled using R.

We provide R scripts for image classification and object detection in a self-written R package [34], "TraceIdentification" (v. 0.0.0.9000; available online at GitHub: https://github.com/henningte/TraceIdentification) (Supplementary Materials, Appendix A). Further R packages used for data processing were EBImage [37], sinkr [38], Hmisc [39], doParallel [40], foreach [41] and caret [42].
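The threshold-based, region-growing selection of trace pixels described above can be sketched as follows. This is an illustrative Python/SciPy version (the study performed the selection in GIMP); the tolerance of five values per band is taken from the text, while the toy image is invented.

```python
import numpy as np
from scipy import ndimage

def grow_selection(img, seed, tol=5):
    """Select all pixels connected to `seed` whose values deviate by at
    most `tol` in every band from the seed pixel (img: H x W x 3)."""
    seed_val = img[seed].astype(int)
    within = np.all(np.abs(img.astype(int) - seed_val) <= tol, axis=-1)
    labels, _ = ndimage.label(within)   # connected components of the mask
    return labels == labels[seed]       # keep only the seed's component

# toy image: a bright 2 x 2 'trace' on a dark background
img = np.zeros((5, 5, 3), dtype=np.uint8)
img[1:3, 1:3] = 200
mask = grow_selection(img, (1, 1))
print(mask.sum())  # -> 4 (the 2 x 2 trace patch)
```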
Image classification was conducted using support vector machines (SVM) as classifiers. SVM are known to perform well with small sample sizes [35] and enable the construction of nonlinear decision boundaries. Furthermore, Reference [43] successfully used SVM for classification purposes on images from fluorescence microscopy. Since the envisaged scope of our method is assumed to result in similar image data and represents a similar detection task (small features), it is reasonable to assume that SVM are suitable classifiers in this case. A two-stage approach was used to select the optimal kernel and fit the parameters of the SVM for the classification task. First, using the training and test sample derived from the best-quality image (maximum fluorescence and image scale, least noise), the SVM was constructed in four approaches, each using a different kernel function (linear, radial and homogenous sigmoid). The parameters of each SVM were fitted by a grid search approach [44], using the R package e1071, function best.svm, v. 1.6-7 [45]: the cost parameter (C) was varied in a first, coarse step over the grid {2⁻¹⁰, 2⁻⁸, …, 2¹⁰} and in a finer step over {C₁ − 1.2, C₁ − 1.0, …, C₁, …, C₁ + 1.0, C₁ + 1.2}, with C₁ being the cost parameter of the first step and all assessed values for the cost parameter > 0. The parameters of the kernel functions were fitted, if necessary, over the grid γ ∈ {2⁻²⁰, 2⁻¹⁸, …, 2⁶} [44]. The kernel resulting in the best accuracy for the fluorescent tracer in a 10-fold cross-validation approach was selected (Supplementary Materials, Appendix A). If more than one kernel resulted in the best accuracy, the simplest kernel was selected. In the second stage, an SVM for each sample image was fitted using the selected kernel and the above-described grid search approach. We calculated the total classification accuracy and the classification accuracy for each class (i.e., fluorescent tracer and background) as measures of classification performance (Supplementary Materials, Appendix A).
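The two-stage grid search can be sketched as follows. The study used e1071::best.svm in R; this illustrative Python version uses scikit-learn on toy stand-in pixel samples (invented for the example), with the coarse and fine C grids and the γ grid taken from the text.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
# toy stand-in for the pixel samples: "trace" vs. "background" RGB values
X = np.vstack([rng.normal(200, 8, (90, 3)), rng.normal(60, 25, (90, 3))])
y = np.array([1] * 90 + [0] * 90)

# stage 1: coarse grid over the cost parameter C (2^-10 ... 2^10) and gamma
coarse = GridSearchCV(SVC(kernel="rbf"),
                      {"C": 2.0 ** np.arange(-10, 11, 2),
                       "gamma": 2.0 ** np.arange(-20, 7, 2)},
                      cv=10).fit(X, y)
C1 = coarse.best_params_["C"]

# stage 2: fine grid around the coarse optimum C1 (keeping C > 0)
fine_C = [c for c in np.arange(C1 - 1.2, C1 + 1.3, 0.2) if c > 0]
fine = GridSearchCV(SVC(kernel="rbf", gamma=coarse.best_params_["gamma"]),
                    {"C": fine_C}, cv=10).fit(X, y)
print(fine.best_score_)  # 10-fold cross-validated accuracy
```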

Detection Success and Statistics
In a first step, the quality control of the trace detection process, i.e., the successful classification of the fluorescent traces by the chosen SVM, was performed manually by comparing the classified images with the corresponding raster template originals using QGIS [46]. We only tested whether a fluorescent trace was successfully classified (at least one of the corresponding pixels). From each image, the successfully identified and non-identified tracers were counted. To achieve this, the classified images and the raster templates were referenced relative to each other. A point was created for each trace and, additionally, for the UV radiation source spot projection centers on each of the original images. In the next step, distances could be derived for each trace and the corresponding UV radiation source spot projection center by computing a distance matrix (Supplementary Materials, Appendix A).
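Deriving DTL for every trace from the referenced point layers reduces to a distance-matrix computation, which can be sketched as follows; the coordinates below are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import cdist

# hypothetical georeferenced coordinates (mm) of traces and of the
# UV spot projection center in one image
traces = np.array([[10.0, 20.0], [55.0, 5.0], [30.0, 40.0]])
uv_center = np.array([[30.0, 20.0]])

dtl = cdist(traces, uv_center).ravel()  # Euclidean distance per trace
print(dtl)  # -> [20.  29.15...  20.]
```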
For the statistical analysis (Supplementary Materials, Appendices C and D), we fitted generalized linear mixed effects models (GLMM) with a binomial distribution family, function glmer from package lme4 [47], in order to assess the dependency of the fluorescent trace detection success (binary response) on the following three predictors: the distance between UAS sensor and fluorescent traces (DTS; 2 levels), the UV radiation intensity measured as the distance between the UV radiation source spot projection center on the ground and the fluorescent traces (DTL; continuous), the size of the fluorescent traces (size; 2 levels) and interactions between these parameters. The random structure of the models contained image ID depending on DTS, thus accounting for pseudo-replication, because individual tracers were not independent from each other. Prior data inspection was done with the packages MASS [48] and fitdistrplus [49]. Model selection was done by multi-model inference based on the Akaike Information Criterion corrected for small sample sizes (AICc) and Akaike weights, using the dredge function from package MuMIn [50]. Model validation was performed graphically and by comparing the best model to generalized linear models without a random effects structure by AICc. Subsequent to model selection, the significance of predictors was tested by Wald chi-square tests with type II sums of squares using the Anova function from package car [51]. Data plotting and visualization were done with package effects [52].

Results
The kernel function that led to the best classification accuracy was a radial kernel. Across all images, the fitted parameters varied between 0.0625 and 1.05 (C) and between 0.0625 and 4 (γ). The classifier achieved overall classification accuracies of 99.67% ± 0.54% (n = 10). The mean detection rates per image varied between 37.4% and 55%. Figure 3 gives two examples of the recorded images and the resulting classification results for the considered distance classes.
The GLMM with the lowest AICc contained all main parameters and a two-way interaction between tracer size and UV radiation intensity (Tables 2 and 3). All main predictors significantly affected detection success (Table 2), although the bigger tracer size alone was not significant (Table 3). In general, the small tracer size (red lines in Figure 4) significantly decreased detection success compared to the bigger size (blue lines in Figure 4). In combination with the effect of tracer size, increasing DTL reduced identification success significantly. The interactive effects of tracer size and DTL were attenuated when DTS increased from level one to level two, yielding significantly reduced identification success and pointing to the importance of the interplay of all three variables, despite the lack of a three-way interaction term in the best model.

Table 2. Analysis of deviance table (Type II Wald chi-square tests) from the generalized linear mixed effects model (GLMM). Fluorescent tracer identification depended on the size of the fluorescent trace (size; two levels), the distance between UAS sensor and fluorescent trace (DTS; two levels), the distance between the UV radiation source spot projection center on the ground and the fluorescent trace (DTL; continuous) and the corresponding interaction terms. All values are rounded to three digits after the decimal place.

Table 3. Summary statistics of the generalized linear mixed effects model (GLMM) on the effects of the size of the fluorescent trace (size 1 and 4 mm²), the distance between UAS camera sensor and trace (DTS 1 corresponding to 2.2–2.7 m, DTS 2 corresponding to 3.0–3.65 m) and the distance between the UV radiation source spot projection center on the ground and the fluorescent trace (DTL, in mm) on tracer detection success (binary response, N = 1308). We tested the interacting effects of the three explanatory variables on the binary response in generalized linear mixed models with binomial distribution and logit link. In the random model structure, we accounted for grouping factors, since sets of individual tracers were derived from a few images, which themselves were obtained set-wise from the two UAS flight levels (see Table 1). The best model after the selection process had an Akaike Information Criterion corrected for small sample sizes (AICc) value of 1349.

Discussion
In general, both the accuracy of the classification scheme and the results of the statistical analysis suggest that automated UAS-based imaging, subsequent classification, and the identification of fluorescent dye traces to record movement patterns of organisms (for example, pollinators) are technically possible (Tables 1-3, Figure 4).
With the featured classification optimization and sampling, the SVM achieved high prediction accuracies for the classification of the used fluorescent dyes, leading to an almost complete identification of visually discernible fluorescent traces in the recordings (Table 1). Nevertheless, we consider the application of the method to be independent of the program or method used for object classification and identification. Based on the results of the statistical analysis, an identification probability of more than 97% can be assumed under optimal conditions, i.e., a strong fluorescence signal, traces of a size of around 4 mm² and a flight height of 2.2 m to 2.7 m (Figure 4).
In sum, all tested main parameters, the distance between the camera sensor and the object level (DTS), the size of the fluorescent trace (size), and the distance between a trace and the center of the UV light cone at the object level (DTL), significantly affected the identification probability. Confirming our hypothesis (i), identification probability decreased with increasing DTS (Figure 4, Table 2). The increased distance (DTS 2; 3 m to 3.65 m) between camera sensor and fluorescent traces led to a significantly lower identification probability of the traces, which presumably occurred due to the lower spatial resolution (0.94–1.13 mm per pixel edge compared to 0.67–0.83 mm per pixel edge at DTS 1) and the lower fluorescent radiation energy that reaches the camera sensor (Table 3, Figure 4).
In this study, the realized flight scenarios, resulting in DTS 1 and DTS 2, were both conducted at very low altitude. In real surveys, when tracers are to be detected in vegetation, we also assume comparably low DTS relative to the altitudes at which UAS are normally flown when surveying for pests [53] or breeding birds [54]. However, under these conditions other methods are also applicable, such as camera stands, tripod-based or crane-based options (i.e., a dolly), which may allow taking images in a more controlled way, without the problems derived from UAS-mounted moving cameras. In our approach, we refrained from using such methods, since ecological studies often require more surveyed ground space than a few local spots, which often necessitates UAS-based surveys [54]. Thus, we directly tested a UAS-based approach to ensure the inclusion of unstable flight conditions. However, we did not test the application in a full flight campaign of a field survey, which needs to be addressed in future studies.
In our study, we did not include the higher difficulty of UV trace detection induced by complex vegetation structure. This might affect the detection success of the UV tracers, whereby the more stable conditions during image taking provided by the alternative methods above may ensure high detectability and thus may limit the use of a UAS.
Regarding camera settings, we used a large aperture and a relatively high film speed (ISO 1600) to allow the use of a short exposure time of 1/200 s. This was done to reduce the loss of image quality due to movement. However, it has to be considered that larger apertures potentially lead to more pronounced optical aberrations and may therefore negatively influence image quality, especially at the edges of the images. In contrast, smaller apertures in combination with slightly longer exposure times could potentially enhance the presented technique. Therefore, we suggest that future investigations consider in depth how image sharpness can be improved by the settings of the optical system.
An approach for increasing the DTS, i.e., flying at higher levels while keeping the high spatial resolution constant, is the use of a professional aerial camera with a larger image sensor (e.g., medium format). For example, using the PhaseOne iXU 100MP camera sensor would increase the flight height to about 8 m with a comparable GSD of 0.7 mm. However, such professional aerial camera systems are heavy and thus require more powerful UAS. In this study, the influence of the image sensor size on the classification accuracy was not investigated.

With (ii) increasing DTL, the identification probability decreased significantly (Figure 4, Table 2). This decrease was stronger the smaller the trace size and the larger the DTS was. This observation can most likely be attributed to the lower intensity of the fluorescence-stimulating UV radiation at larger distances from the radiation source. To compensate for this negative effect, we recommend that future studies upregulate light intensity with increasing flight altitude, thus ensuring a constant level of light intensity. Moreover, it is desirable to increase the power of the UV radiation source in general, because higher excitation energies increase the fluorescence of the tracers and therefore the contrast. With this, it could even be possible to compensate for lower resolutions at higher flight heights. Another option could be to use an additional light source alongside the UV radiation source in order to enhance image sharpness. This may support lower film speeds and smaller apertures and could therefore yield a higher signal-to-noise ratio and fewer aberrations. However, it has to be tested whether such a setting would complicate image classification due to illuminated features that are not fluorescent traces.
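The flight-height figure quoted for the larger sensor is consistent with the same GSD relation, under assumed nominal specifications (11608 pixels across roughly 53.4 mm of sensor width, i.e., a pixel pitch of about 4.6 µm) and an assumed 50 mm lens; neither value is stated in the text.

```python
# Same GSD relation as for the survey camera: GSD = pixel_pitch * H / f.
# Sensor width, pixel count and focal length below are assumptions.
def gsd_mm(height_m, focal_mm, sensor_width_mm, px_across):
    return (sensor_width_mm / px_across) * (height_m * 1000.0) / focal_mm

print(round(gsd_mm(8.0, 50.0, 53.4, 11608), 2))  # ~0.7 mm at 8 m
```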
We think that all these parameters are worth testing, since all suggestions may potentially increase detection success. We suggest analyzing the modulation transfer function (MTF) on test charts using fluorescent tracers, as it provides detailed information on image contrast in dependency of both object parameters and parameters of the optical system [55]. In this way, it may be possible to offer a procedure for choosing parameters under different conditions (e.g., size of the traces in different applications or flight height). However, this requires a more detailed and strictly standardized experimental setting.
Our results demonstrate (iii) that identification probability increased with increasing size of the fluorescent tracer (Figure 4, Tables 2 and 3). The size of the fluorescent traces also interacted significantly with the DTL in relation to the identification probability. This interaction may be caused by the more intense fluorescence and higher detectability of larger traces, even under less intense UV radiation at higher DTS (Figure 4, Tables 2 and 3). Therefore, it can be assumed that the size of the fluorescent traces is critical for the applicability of the method if the intensity of the UV radiation is low. In this case, differences in the identification success are visible even for small changes in the intensity of the UV radiation. In more realistic flight scenarios, the UV light source and the UAS will not be decoupled but attached to each other, contrary to our approach using a hand-held UV light spot with a fixed height above the object level. Thus, when the UV light spot is attached to a UAS, the DTS and the resulting DTL will be interdependent, both varying with flight altitude. In consequence, we emphasize considering the interdependency between all three main parameters in future studies.

Conclusions
In this study, we provide a first proof of principle for the UAS-based detection and identification of small fluorescent dye traces (size range between 1 and 4 mm²) by means of a simple experiment under outdoor light conditions.
Based on the results of the statistical analysis, the highest identification probability in our study is given under the optimal conditions of a strong fluorescence signal (low DTL), trace sizes of around 4 mm² and a low flight height (DTS of 2.2–2.7 m; Figure 4). Since these parameters proved to be interdependent, there is no single parameter to focus on in future studies. We discussed several improvements to be considered, including the promising modulation transfer function (MTF). Since the tracer size deposited by organisms may vary and cannot be controlled in a standardized way, we also suggest testing more complex set-ups to overcome the limitations of this study.
In summary, we think the proposed method represents the first step to enable automated identification of fluorescent traces and thus facilitate many applications in ecology, such as monitoring of biodiversity, especially of insects, exploring patterns of animal movement, spatial distributions and plant-pollinator interactions.

Supplementary Materials:
The following are available online at http://www.mdpi.com/2504-446X/3/1/20/s1. Appendix A: R code for image classification and object identification, Appendix B: extended methods, Appendix C: data files, Appendix D: R code for statistical analysis.

Figure 1. Concept of the conducted survey. The unmanned aerial system (UAS) (DJI Mavic Pro) carried the camera system. The hand-held UV light source is displayed. The red dot indicates the fluorescent tracer patterns, which varied in two different sizes (1 mm² and 4 mm²). Images were taken at two different heights, measured as the distance from the object to the camera sensor (DTS). In subsequent image analyses, the distance from the center of the light source to each individual tracer was calculated (DTL).


Figure 2. Workflow of the conducted study. After (1) the experimental design, (2) images were taken during the UAS flight campaign, (3) followed by image data processing. (4) An assessment of the detection success of the used classifiers was conducted. In the final step (5), statistical analysis was performed.
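Steps (3) and (4) of the workflow rest on training a simple SVM on labelled pixel samples and then classifying the recorded images. A minimal sketch with scikit-learn on synthetic RGB pixel statistics (the colour values are hypothetical stand-ins, not the study's training data):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for labelled pixel samples: fluorescent tracers
# glow brightly under UV while the background stays dark
# (hypothetical RGB means and spread).
tracer_px = rng.normal(loc=[60, 200, 80], scale=15, size=(200, 3))
backgr_px = rng.normal(loc=[30, 40, 35], scale=15, size=(200, 3))
X = np.vstack([tracer_px, backgr_px])
y = np.array([1] * 200 + [0] * 200)  # 1 = tracer, 0 = background

# Train a simple SVM on the samples, then classify the pixels of a
# "recorded image" drawn from the same two distributions.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
image_px = np.vstack([rng.normal([60, 200, 80], 15, (50, 3)),
                      rng.normal([30, 40, 35], 15, (50, 3))])
pred = clf.predict(image_px)
```

The real pipeline additionally groups classified pixels into individual trace objects (step 5 of the head's workflow); the study's implementation is provided as R code in Appendix A.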

Figure 3. Sample images demonstrating how the raster templates correspond roughly to the recorded patterns (upper row), how patterns were recorded in the raw images (middle row), and how images were classified (bottom row) for a flight height (DTS) of (a) 2.2-2.7 m (distance class 1) and (b) 3.0-3.65 m (distance class 2).
Fluorescent tracer identification depended on the size of the fluorescent trace (size; two levels), the distance between the UAS sensor and the fluorescent trace (DTS; two levels), the distance between the UV radiation source spot projection center on the ground and the fluorescent trace (DTL; continuous), and the corresponding interaction terms. All values are rounded to three decimal places.

Figure 4. Dependency of the probability of fluorescent trace detection on the distance between the UV radiation source spot projection center on the ground and the fluorescent traces (DTL in cm; differences were measured on a 1 mm scale) for two different distance levels (DTS) between the UAS sensor and the traces (left panel: 2.2 m to 2.7 m; right panel: 3 m to 3.65 m). Tracer size varied between 1 mm² (size 1, blue solid lines) and 4 mm² (size 4, red solid lines). Blue and red areas represent 95% confidence intervals of the predicted values.

Table 1. Level of replication in the unbalanced two-by-two factorial design with a continuous co-predictor. In total, 1308 tracers of two different sizes from ten images were analyzed, revealing either positive (identified = yes) or negative (identified = no) object detection. Images were derived from two different UAS flight levels, resulting in two distance-to-camera-sensor levels of the individual tracers (DTS).

Table 2. Analysis of deviance table (Type II Wald chi-square tests) from the generalized linear mixed effects model (GLMM). All values are rounded to three decimal places. SE = standard error. p values < 0.05 are reported in bold.

Table 3. Summary statistics of the generalized linear mixed effects model (GLMM) results. All values are rounded to three decimal places. SE = standard error. p values < 0.05 are reported in bold.