Monitoring Selective Logging in a Pine-Dominated Forest in Central Germany with Repeated Drone Flights Utilizing a Low-Cost RTK Quadcopter

There is no doubt that unmanned aerial systems (UAS) will play an increasing role in Earth observation in the near future. The field of application is very broad and includes aspects of environmental monitoring, security, humanitarian aid, and engineering. In particular, drones with camera systems are already widely used. The capability to compute ultra-high-resolution orthomosaics and three-dimensional (3D) point clouds from UAS imagery generates wide interest in such systems, not only in the science community, but also in industry and agencies. In particular, the forestry sciences benefit from ultra-high-resolution structural and spectral information, as regular tree-level monitoring becomes feasible. There is a great need for this kind of information as, for example, due to the spring and summer droughts in Europe in 2018 and 2019, large numbers of individual trees were damaged or even died. This study focuses on selective logging at the level of individual trees using repeated drone flights. Using the new generation of UAS, which allows for sub-decimeter positioning accuracy, a change detection approach based on bi-temporal UAS acquisitions was implemented. In comparison to conventional UAS, the effort of implementing repeated drone flights in the field was low, because no ground control points needed to be surveyed. As shown in this study, the geometrical offset between the two collected datasets was below 10 cm across the site, which enabled a direct comparison of both datasets without the need for post-processing (e.g., image matching). For the detection of logged trees, we utilized the spectral and height differences between both acquisitions. For their delineation, an object-based approach was employed, which proved to be highly accurate (precision = 97.5%; recall = 91.6%).
Due to the ease of use of such new generation, off-the-shelf consumer drones, their decreasing purchase costs, the quality of available workflows for data processing, and the convincing results presented here, UAS-based data can and should complement conventional forest inventory practices.


Introduction
"Drones-the third generation source of remote sensing data" was chosen as the title for the leading article by [1] for the special issue on Unmanned Aerial Systems (UAS) in the International Journal of

UAS SfM Data-Based Tree Detection
The capability of UAS imaging systems to generate ultra-high-resolution imagery and 3D data holds great potential for the development of forestry applications. This is expressed in a large number of recent publications, such as [6,20–32]. According to [6], these publications focus on the estimation of dendrometric parameters, including height measurements, tree species classification, quantification of spatial gaps in forests, forest fire-related issues, forest health monitoring, and forest disease mapping. One of the main advantages of ultra-high-resolution data is the capability of tree-level-based analyses and applications. For the detection of individual trees, Zhen et al. [33] distinguished four groups of methods, namely, (1) raster-based methods (treetop detection, crown delineation, object-based image analysis (OBIA)); (2) point cloud-based methods (clustering, voxel-based segmentation); (3) methods combining raster, point clouds, and a priori information; and (4) tree shape reconstruction methods (e.g., convex hull, Hough transformation). Treetop detection involves algorithms such as local maximum, image binarization, and template matching, while crown delineation comprises valley-following, watershed segmentation, and region-growing. In general, the source of the point cloud or raster data is not relevant for the choice of one of these methods. Thus, these methods are also applicable to laser-based data (see below).
Several SfM data-based studies aimed at individual tree detection have used a local maxima approach [25,26,30,32]. In a study by Mohan et al. [25], UAS SfM-based point clouds were used for individual tree detection in a mixed conifer forest (Wyoming, USA). Thiel et al. [32] also used UAS imagery and SfM-based point clouds for a test site in Germany, which is part of the forest stand investigated in this study. Nevalainen et al. [30] aimed at individual tree detection and classification using UAS SfM-based point clouds and hyperspectral imaging in southern Finland, in which they used forest stands dominated by pine, spruce, birch, and larch. The study by Li et al. [26] applied a local maximum algorithm to delineate individual trees in the Huailai area, China. The task was to detect tree individuals (Aspen) forming windbreaks surrounding agricultural areas. However, in general, using multitemporal point clouds for change detection of forests is rarely reported in the literature.
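The local maxima principle used in these studies can be illustrated with a minimal sketch on a canopy height model (CHM). The window size, minimum height, and the toy CHM below are illustrative assumptions, not parameters from the cited work:

```python
import numpy as np
from scipy import ndimage

def detect_treetops(chm, window=5, min_height=5.0):
    """Detect treetop candidates as local maxima of a CHM.

    chm: 2D array of canopy heights (m). window and min_height are
    illustrative choices, not values from any of the cited studies.
    """
    # A cell is a treetop candidate if it equals the maximum of its
    # neighborhood and exceeds a minimum height (suppresses ground/shrubs).
    local_max = ndimage.maximum_filter(chm, size=window)
    peaks = (chm == local_max) & (chm >= min_height)
    rows, cols = np.nonzero(peaks)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy CHM with two isolated "trees"
chm = np.zeros((20, 20))
chm[5, 5] = 18.0
chm[14, 12] = 21.0
print(detect_treetops(chm))  # [(5, 5), (14, 12)]
```

Real CHMs require additional care (plateaus of equal height, touching crowns), which is why the cited studies combine local maxima with smoothing or crown delineation steps.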

Laser Scanner-Based Tree Detection
Although this work is based on UAS SfM data, our results are also discussed in the context of outcomes based on airborne light detection and ranging (LiDAR) [34] and terrestrial laser scanner (TLS) data [35], as there is hardly any related work published based on UAS SfM data. Both TLS and LiDAR are active laser scanning systems used for the generation of point clouds [36]. As active systems, laser scanners are independent of illumination conditions. Still, occlusion occurs, as the laser light cannot penetrate through optically dense materials. Nevertheless, occlusion is less prominent when compared to SfM, as each 3D point is measured directly. LiDAR systems can be mounted on aircraft, UAS, or ground-based platforms (vehicles, tripods, handheld systems, etc.). Professional LiDARs are capable of generating very precise and dense point clouds [35]. Also, the system specifications of the laser scanners allow for a proper assessment of the data quality in terms of positional accuracy. Due to their high cost, such devices are rarely available, and flexible use can hardly be guaranteed. Also, particularly in forests, the acquisition of TLS data is extremely time consuming and the surveyed areas are small. The area of acquisition can be extended when laser scanners are mounted on vehicles [37]. Still, limitations exist in terms of the requirement of traversable forest tracks, although these limitations can be overcome when laser scanners are mounted on UAS [38,39]. Due to the wide scanning angle, the high point cloud density, and the possibility to fly tracks with great overlap between the scans, shadowing hardly occurs. In contrast to conventional airborne LiDAR and UAS-borne SfM point clouds, stems can be sensed precisely [40]. 
Although the full potential of UAS-borne LiDAR is not yet fully exploited, tree detection rates similar to those achievable with multiple TLS scans can be expected, with the advantage of capturing larger areas in a shorter time while having no GNSS constraints [38,40]. Nevertheless, it is questionable whether UAS-borne LiDAR systems will be widely used in the near future for the detection of selective logging. Professional systems are still very expensive, the data processing is complex, and the operation and handling of the hardware in the field is challenging.

LiDAR-Based Tree Detection
Multitemporal LiDAR point cloud-based change detection at the individual tree level is hardly reported in the literature. Limitations exist due to non-regular acquisitions and the ongoing evolution of LiDAR sensors, resulting in increasing point densities and thus complications when comparing two dissimilar datasets. Nevertheless, in the work of Marinelli et al. [41,42], a new approach for detecting selective logging using multitemporal LiDAR data in forest areas was proposed and tested in the Trento province of the southern Alps. The two test sites were covered by needle-leaved forest, and the point density of the LiDAR data ranged from 10 to 50 pts/m², with four returns for each pulse. One benefit of the LiDAR dataset used in [41,42] is the high point density, and thus the high probability of receiving ground returns even for small gaps in the canopy.
Other studies focused on individual tree detection and thus used monotemporal datasets. Nevertheless, the results of those studies enable an assessment of the accuracy achievable in the change detection case. Under the assumption that the same method to detect selective logging is applied twice (using data acquired at different times), the accuracy of the change product can be estimated by squaring the accuracy of the monotemporal product. In the study by Lu et al. [43], a LiDAR-based approach was proposed, aiming at the segmentation of individual trees. This publication also presents a short collection of previously published results by various authors applying different individual tree segmentation methods. However, it is not feasible to draw conclusions regarding the preferable tree segmentation method, since the experimental setup varies significantly between the studies considered (e.g., point density, forest types). The bottom-up approach presented by Lu et al. [43] can detect 84% of the trees for a study site in Pennsylvania, USA dominated by a deciduous species (leaf-off). The point density was approximately 10 pts/m², with up to four returns per pulse. Mongus and Zalik [44] presented an approach for 3D single tree crown delineation with LiDAR data by utilizing the complementarity of treetop and trunk detections. Six dissimilar test sites were selected, which were located in the Slovenian Alps, and the LiDAR point density ranged from 26 to 97 pts/m² for the different sites. Hu et al. [45] developed a tree clustering algorithm based on the mean shift theory. The algorithm was applied to LiDAR data with an average point density of 15 pts/m², acquired over a multi-layered, evergreen broad-leaved forest in South China.
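The squaring argument can be made concrete with a one-line sketch. The 84% figure is the detection rate reported by Lu et al. [43]; the independence of detection errors between the two acquisition dates is an assumption underlying the estimate:

```python
def change_detection_accuracy(monotemporal_accuracy: float) -> float:
    """Estimate the accuracy of a bi-temporal change product when the same
    detection method (with accuracy p) is applied independently at both
    dates: both detections must succeed, hence p * p."""
    return monotemporal_accuracy ** 2

# e.g., the 84% single-date detection rate reported by Lu et al. [43]
print(round(change_detection_accuracy(0.84), 3))  # 0.706
```

In other words, even a strong single-date detector loses noticeable accuracy in the change product, which motivates the directly co-registered bi-temporal approach of this study.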
Further publications on airborne LiDAR-based tree detection focus on the benchmarking of diverse methods [34], comparing vector- and raster-based segmentation approaches [36], or else try to identify trends in automatic individual crown detection [33]. With regard to the accuracy achieved, the results presented in these publications are in the same range as the previously cited studies.

Terrestrial Laser Scanner (TLS)-Based Tree Detection
Except for the study by Mongus and Zalik [44], the previously discussed publications relied on canopy information for individual tree detection, and thus were prone to errors, as discussed in Section 4.1 below. One strategy to overcome this drawback is to sense the tree stems using TLS. TLS can generate 3D point cloud data with a much higher density than that achievable with airborne LiDAR. Thus, the sampling rate is commonly not a limiting factor. The fastest TLS data acquisition and preprocessing strategy uses single scans, as shown by Liang et al. [46]. However, single scans only sense the stems from one side, which complicates data evaluation. According to Liang et al. [46], one main source of error is related to trees standing close together. The study by Xia et al. [47] was also based on single-scan data. Their site was located in the Sichuan Giant Panda Sanctuaries, China, and covered by mature and dense bamboo forest. The presented method was based on point clustering and subsequent merging into a stem model. Shadowing, which is a typical phenomenon in single-scan data, was discussed as the main source of error. In Oveland et al. [48], low-cost TLS equipment generating comparably low-density point clouds was tested in the Gran municipality in southeastern Norway, the forest of which is dominated by Norway spruce and Scots pine. The authors concluded that challenging GNSS conditions under forest canopies need to be treated accordingly. Multiple TLS scans were used by Maas et al. [49] to detect trees and to delineate the parameters relevant to the forest inventory. Five different test plots located in Austria and Ireland were chosen for this analysis, and the detection rate was much improved compared with the previous studies, which emphasizes the advantages of multiple-scan data over single scans. A similar performance was achieved by Bienert et al. [37] for multiple-scan datasets in the Lauerholz Forest, Northern Germany.

Organization of the Paper
The remainder of the paper is structured as follows: Section 2 presents the materials and methods, including a description of the site, the fieldwork, the UAS data processing, the reference data collection, the method development, and the framework of the separability and accuracy analyses. Section 3 lays out the results, including the separability and accuracy analyses. Section 4 presents the discussion, followed by the conclusion in Section 5.

Materials and Methods
This section provides all of the necessary details on the site characteristics, the acquisition and processing of the UAS and other field data, the collection of reference data, the approach to detect felled trees, the approach to investigate the separability of the felled and remaining trees, and the accuracy analysis for the detection of felled trees. Figure 1 shows the general workflow of this study. UAS imagery was recorded before and after the logging of individual trees. By means of SfM, orthomosaics and point clouds were computed from the acquired imagery. Based on the point clouds, canopy height models (CHMs) were generated. The OBIA-based detection of felled trees is based on the spectral differences and CHM differences between both acquisition dates. The accuracy assessment refers to the detection rate of felled trees, and also includes false-positive and false-negative events. To gain deeper insights into the data characteristics, the separability between the felled and the remaining trees was analyzed using various approaches.
Figure 1. General workflow of this study. The blue branches represent the delineation of the felled tree map, while the green branches highlight work steps related to validation and data analysis. CHM, canopy height model; OBIA, object-based image analysis; OM, orthomosaic; SfM, structure from motion.
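The precision and recall figures used in the accuracy assessment can be sketched as follows; the counts in the example are purely illustrative and are not the study's actual tallies:

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Precision and recall as used for the felled-tree accuracy assessment.

    tp: correctly detected felled trees (true positives)
    fp: detections where no tree was felled (false positives)
    fn: felled trees that were missed (false negatives)
    """
    precision = tp / (tp + fp)  # share of detections that are correct
    recall = tp / (tp + fn)     # share of felled trees that were found
    return precision, recall

# Illustrative counts only (not the study's actual numbers)
p, r = precision_recall(tp=87, fp=2, fn=8)
print(f"precision={p:.3f}, recall={r:.3f}")
```

True negatives are deliberately absent from both measures, which is appropriate here: the number of "correctly undetected" canopy locations is not a meaningful quantity for a rare-event detection task such as selective logging.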


The Site "Roda Forest"
The Roda Forest is located in the federal state of Thuringia in Central Germany. It is part of the Roda River catchment (Figure 2) and is, for the most part, planted and intensively managed. The area of investigation (AOI) is located in the southern part of the Roda Forest. It has an extent of approximately 500 m × 250 m (Figure 3). The UAS mission covers a larger area of 0.465 km², and the AOI is located in the center of the UAS mission area. The dominant tree species of the test site is Scots pine (Pinus sylvestris), followed by Norway spruce (Picea abies), and other rarely occurring species such as European larch (Larix decidua), birch (Betula pendula), and European beech (Fagus sylvatica). The stand is homogeneous in terms of tree age, while the tree density and height show some variability, in part due to past disturbances and slightly differing growing conditions (Figures 2 and 3). The available forest inventory data provide stand-wise averages of relevant information (such as tree height, species composition, relative stocking, etc.). Due to the incomplete canopy coverage (glades, gaps between trees), undergrowth is well-developed in some places. It comprises, amongst others, small trees (up to 8 m), bushes, bracken, and blackberry. The AOI features gentle terrain with elevations between 385 and 390 m (over the WGS84 ellipsoid). The underlying bedrock (Early Triassic Buntsandstein) causes slightly acidic soils.
During the past two years, the forest of the AOI was affected by several stressors, such as storm events, bark beetle attacks, and long drought periods during the spring and summer of 2018 and the spring of 2019. Accordingly, several damages were obvious within the test site. Forest management activities were conducted in June 2019 to remove stressed and affected trees. Another intention of these activities was the slight thinning of the forest. Accordingly, trees were often cleared in areas with a high tree density.

Field Work: Acquisition of UAS Data and Check Points
The first UAS campaign took place on 28 May 2019 and the second on 19 July 2019. The logging activities were executed within this time span. The UAS imagery was recorded using the RTK version of Da-Jiang Innovations Science and Technology Co., Ltd.'s (DJI) Phantom 4 Pro. This system allows for very accurate real-time positioning in the order of centimeters (see Table 1) if correction data from a reference station can be received. For this study, the correction data stemmed from the German satellite positioning service SAPOS. The correction data were received via NTRIP (Networked Transport of RTCM (Radio Technical Commission for Maritime Services) via Internet Protocol). Accordingly, a mobile internet connection was required. The distance to the nearest SAPOS base station (reference station "JENA/Schoengleina") was 14 km. As the RTK signal was constantly available during the flights, the positional accuracy can be expected to meet the specifications. The Phantom 4 Pro RTK is equipped with a camera featuring a 1" CMOS (complementary metal oxide semiconductor) sensor and a mechanical shutter. The field of view of this system is 84°. The 3D RTK coordinate of the image center is stored in EXIF format, along with several other parameters. See Table 1 for further UAS specifications.

Table 1. Specifications of the UAS DJI Phantom 4 RTK [50]. Abbreviations as follows: Joint Photographic Experts Group (JPEG), exchangeable image file format (EXIF), carrier-phase differential global navigation satellite system (CDGNSS).

The UAS missions and acquisition parameters are summarized in Table 2. Due to the very low wind speed, hardly any movements of the trees were observed during the flights. The full cloud coverage resulted in diffuse illumination conditions. Accordingly, unwanted effects, such as hard shadows and strong illumination differences between the canopy and the forest floor, were avoided. A simple airborne campaign-like flight pattern with parallel flight lines only was chosen.
To increase the probability of detecting small glades and to obtain data from the forest floor, the images were acquired in nadir view and with a large image overlap (Table 2). Given the programmed flight speed, the shutter speed (fixed to 1/320 s), and the geometric ground resolution, motion blur was avoided. With respect to the aperture, the exposure value was set to −1 for the second flight campaign, as the illumination level was slightly reduced compared to the first campaign (denser cloud layer). By this means, similar aperture values were obtained for both campaigns. Take-off and landing were operated at the southern forest edge of the site. Visual observation of the UAS was possible from this position throughout the mission.
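The motion blur reasoning can be checked numerically. The flight speed of 5 m/s below is an assumed value for illustration; the shutter speed (1/320 s) and the ~3 cm nominal ground resolution are taken from the text:

```python
def motion_blur_fraction(flight_speed_m_s: float, shutter_s: float, gsd_m: float) -> float:
    """Ground distance traveled during one exposure, expressed as a fraction
    of the ground sampling distance (GSD). Values well below 1 mean motion
    blur stays within a single pixel and is negligible."""
    return (flight_speed_m_s * shutter_s) / gsd_m

# Assumed 5 m/s flight speed; 1/320 s shutter and ~3 cm GSD from the text
print(round(motion_blur_fraction(5.0, 1 / 320, 0.03), 2))  # ~0.52 pixels
```

Under these assumptions the platform moves about half a pixel during the exposure, consistent with the statement that motion blur was avoided.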

Although, for the SfM processing of the image data, only the camera positions and no GCPs were considered (direct georeferencing), five check points were placed to cover the area of interest (first campaign only). These equally distributed check points were used to evaluate the positional accuracy of the generated orthomosaic and digital elevation model (DEM). To precisely identify and locate the check points in the UAS imagery, 50 cm × 50 cm Teflon panels were utilized, featuring a black cross to mark the panel center. The positions of the Teflon panels were measured using survey-grade equipment (ppm10xx-04 full RTK GNSS sensor in combination with a Novatel Vexxis GNSS L1/L2 antenna [51]). Each check point was surveyed 50 times. The root mean square error (RMSE; computed separately for x, y, z) was below 2 cm at all check points. For this study, the averaged positions of the 50 measurements were used.

Table 2. UAS missions and acquisition parameters. Wind speed was measured at the Kahla weather station located next to Leuchtenburg Castle, 5 km to the northwest of the test site. The covered area refers to the entire area covered by the UAS missions. The area of interest, as shown in Figure 3, is a subset of the UAS mission area. Abbreviation as follows: International Organization for Standardization (ISO).
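The per-axis RMSE over the 50 repeated measurements per check point can be sketched as follows, with synthetic coordinates standing in for the actual GNSS fixes:

```python
import numpy as np

def per_axis_rmse(measurements):
    """RMSE of repeated GNSS fixes about their mean, computed separately for
    x, y, and z, as described for the 50 measurements per check point.

    measurements: (n, 3) array of x, y, z coordinates.
    """
    m = np.asarray(measurements, dtype=float)
    return np.sqrt(np.mean((m - m.mean(axis=0)) ** 2, axis=0))

# Synthetic example: 50 fixes scattered ~1 cm around an assumed true position
rng = np.random.default_rng(0)
fixes = np.array([683200.0, 5630100.0, 388.0]) + rng.normal(0, 0.01, (50, 3))
print(per_axis_rmse(fixes))  # each component on the order of 1 cm
```

The averaged position (the column means) is what was subsequently used as the check point coordinate in this study.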

SfM-Based Generation of Orthomosaics and Point Clouds
The UAS data processing involved the computation of one dense 3D point cloud and one orthomosaic per UAS campaign. For the processing, the 3D reconstruction software Metashape 1.5.1 (Agisoft LLC) was used, applying the standard workflow. The UAS images were not altered before the SfM processing. Direct georeferencing was applied, and the camera positional accuracy parameter was set to 0.02 m (Table 3). For the processing of the UAS imagery of the second campaign, the camera model parameter set of the first campaign was applied. The rationale is that, due to the same campaign setup, the similar temperatures, and the similar aperture values, the physical camera structure should not have changed significantly, even though low-cost camera equipment was used. The validity of this assumption was confirmed after processing (see text below and Figures 4 and 5). During the processing, all images could be aligned. The number of tie points was approximately 450,000 for both datasets. Both dense point clouds comprised about 66 million points, which corresponds to an average point density of 144 points/m². According to the flight altitude and the camera hardware, the nominal geometric ground resolution was approximately 3 cm. Nevertheless, orthomosaics with 5 cm pixel spacing were produced, which can be considered sufficient for this study [32], as the objects to be detected are at least one order of magnitude larger. Figure 3 shows the orthomosaic of the first campaign.
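The reported point density can be approximately reproduced from the figures given above (point count and mission area):

```python
# Cross-check of the dense point cloud density: about 66 million points
# over the 0.465 km^2 mission area (values from the text).
n_points = 66_000_000
area_m2 = 0.465 * 1_000_000  # mission area converted to m^2
density = n_points / area_m2
print(round(density))  # ~142 points/m^2, consistent with the reported 144
```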
At all check points, the deviation between the check point coordinate and the model coordinate was found to be below 5 cm (measured separately for x, y, z). The RMSE between the check points and the SfM model was below 2 cm. Further evidence for the high geometric quality of the SfM model was provided by the minor camera position error and the minor effective reprojection errors (Table 4).

Table 3. UAS data processing parameters (Agisoft Metashape 1.5.1). f, focal length; b1 and b2, affinity and skew coefficients; cx and cy, principal point offset; k1, k2, and k3, radial distortion coefficients; p1 and p2, tangential distortion coefficients.

Figure 5. Bi-temporal composite of the green channel, example A. For non-moving targets, such as stems on the forest floor, the geometrical offset between both datasets was generally below two pixels. Offsets were determined for 18 objects distributed across the test area. The tilt of the inclined tree in the center of the image subset obviously increased between both acquisition dates.

Figure 6. Bi-temporal composite of the green channel, example B. No systematic shift between both datasets can be observed. Some twigs appearing in yellow or dark blue apparently moved slightly between both UAS acquisitions (e.g., at the bottom of the subset). The trees appearing in yellow correspond to felled trees; the reflection of the green channel decreased substantially.

The geometric alignment between the two models was checked for 18 objects distributed across the site. To this end, non-moving objects and stable targets were selected, such as deadwood on the forest floor, hunting stands, distinctive stones on the forest roads, or stacks of wood. The deviation between both models was below 10 cm for all objects (measured separately for x, y, z). The RMSE of the deviation at the 18 objects was below 2 cm. These numbers underline the advancement of RTK UAS compared with previous generations, whose positional accuracy was in the order of several meters. To visualize the geometric agreement, bi-temporal composites are provided in Figures 5 and 6. Figure 6 shows some trees that were felled between both UAS campaigns to demonstrate the potential of the radiometric information for change detection.

Computation of the Spectral Difference Images
Further UAS data processing comprised the computation of the difference between the orthomosaics of the first and the second UAS campaign (see Figure 1). This temporal difference was used as one of the predictors for the detection of felled trees. The difference image was filtered using simple averaging and a 15 × 15 filter matrix to remove the unwanted effects caused by slight misalignment, the movement of trees and tree elements, or small-scale changes. For an example of the filtered difference image, see Figure 7.

Figure 7. Example of the filtered difference image. The felling of the trees caused a clear drop in the reflection of the green light, resulting in positive spectral differences (bright spots surrounded by dashed red lines). The treetops of the logged trees are marked as well (used as reference). The remaining polygons are discussed in Sections 2.6 and 3.
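The differencing and averaging step can be sketched as follows; the 15 × 15 filter size is from the text, while the toy image values are illustrative:

```python
import numpy as np
from scipy import ndimage

def filtered_difference(ortho_t1, ortho_t2, size=15):
    """Temporal difference of two co-registered orthomosaic bands, smoothed
    with a simple 15 x 15 averaging filter to suppress effects of slight
    misalignment, moving tree elements, and small-scale changes."""
    diff = ortho_t1.astype(float) - ortho_t2.astype(float)
    return ndimage.uniform_filter(diff, size=size)

# Toy example: a 30 x 30 "green band" where reflectance dropped in one patch
t1 = np.full((30, 30), 120.0)
t2 = t1.copy()
t2[10:20, 10:20] -= 40.0  # felled trees: green reflection decreases
smoothed = filtered_difference(t1, t2)
print(smoothed.max() > 0)  # True: a positive difference marks the change
```

With the difference taken as first minus second acquisition, felled trees appear as positive anomalies, matching the bright spots described in the figure caption.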


Computation of Canopy Height Models (CHM) and CHM Differences
A first look at the generated dense point clouds revealed one characteristic that potentially complicates the detection of logged trees using height information only. Although the UAS imagery was acquired in nadir view and with a large overlap between the images, in several places, small gaps between trees were not reconstructed correctly (see Figure 8). Nevertheless, the point cloud data were used to generate pit-free [53] CHMs using the command LAS2DEM of the point cloud processing software LAStools v181108 (see Figure 1). The chosen parameters are summarized in Table 5. This step utilizes the terrain-normalized UAS-based point clouds. For their normalization, LiDAR data provided by the Thuringian State Office for Property Management and Geographic Information (TLBG), acquired in February 2014, were available. The LiDAR returns were classified into ground and non-ground points, with an average point density of 4 pts/m². The data, including metadata, are freely available via the Thuringian Geoportal (www.geoportal-th.de).

Figure 8. Points are colored according to height. "A" represents a glade, "B" a treetop, and "C" a small gap between four trees. Accordingly, the true elevation of "C" should be close to the elevation of "A."

This phenomenon was observed for several gaps across the site. Apparently, even the large overlap between the UAS images was not sufficient to reliably identify tie points on the forest floor, resulting in overestimated z-coordinates of the dense point cloud in such places. This phenomenon could even be observed in places of logged trees, where the gap after the logging was too small to identify tie points on the forest floor. Thus, there are consequences when using the height information for selective logging detection. In the next step, the elevation difference between both CHMs was computed (Figure 1). Figure 9 shows the difference image for the entire study area.
In general, logged trees can be identified due to notable height differences of several meters. Due to the phenomenon described above and visualized in Figure 8, the height difference was clearly underestimated for many logged trees. Still, the CHM difference can be used to indicate the position of logged trees. However, operational detection becomes more challenging, as height differences of this magnitude are also caused by other effects, such as movement of vegetation due to wind, growth, or changes in the undergrowth caused by the harvesters (see, e.g., the northern edge of the study site in Figure 9).
In general, the differences correspond to the logged trees. Apparently, two clusters in terms of height differences can be identified: (1) large differences of 10-25 m, related to the height of the logged tree; and (2) small differences of 0-5 m, indicating the position of a logged tree but clearly underestimating its height. This observation reflects the phenomenon described above (see Figure 8). Nonetheless, the CHM difference can be used to indicate the position of logged trees. The detection process is complicated, though, as these height differences are often of the same magnitude as small height changes caused by other factors (e.g., movement due to wind or growth).

Collection of Reference Data for Accuracy Assessment and Samples for Separability Analysis
For a subset of the test site containing approximately 200 trees, the trunk base coordinates of all trees were available. These coordinates were delineated from the TLS data presented in the study by Thiel et al. [32]. Two field campaigns were conducted to create a dataset of the logged trees. The first campaign took place before the logging: the trees to be removed had been marked by the foresters, and their positions were recorded (each tree was assigned a number). The second campaign took place after the logging, to examine whether the marked trees had actually been logged and whether other trees had been damaged. In fact, we found some discrepancies between the planned and the conducted logging, and the database was updated accordingly. However, matching the trunk base coordinates of the logged trees to the corresponding missing crowns turned out to be a rather complex issue. Due to the lean of the stems, the crown and trunk base positions hardly matched for any of the logged trees. For several trees, the deviation was so large that assigning the trunk base coordinates of a logged tree to the corresponding crown was not feasible. Consequently, the data collected during the field campaigns were not suitable as reference data for this investigation.
For these reasons, the UAS data were used as the basis for the generation of reference and sample data (Figure 1). Two types of data were collected: (1) reference data in the form of the treetop positions of all logged trees (point data), and (2) sample data in the form of 100 random samples of logged trees and 100 random samples of unchanged forest (polygon data) for spectral and height differences. The very high resolution of the UAS data allowed for very precise sampling and detection of felled trees. With the aim of generating the best available reference dataset, all UAS data products were used during the data collection (spectral difference, height difference, and both orthomosaics and height models). Finally, the manually digitized samples and treetop positions were double-checked with the point clouds before and after the logging. According to the generated reference data, in total, 380 trees were logged.

Automatic Detection of Felled Trees
For the detection of felled trees, spectral and geometric information was available. The original intention to use only geometric information (∆CHM) was discarded due to the difficulties in detecting small gaps, as discussed above (see Figure 8). Accordingly, we tested two approaches, with the first approach considering solely spectral information and the second combining spectral and geometric information. Figure 10 provides the workflow applied for the automatic detection of felled trees. Further details are provided in the following text.


Segmentation and Classification of Felled Trees Based on Spectral Differences
Segmentation and classification (Figure 10, left branch) were accomplished using the spectral difference of the green channel only, as it clearly reveals the position of the missing crowns while being almost insensitive to other changes (Figure 7). Thus, as shown in the results section, the spectral difference of the green channel turned out to be a valid predictor for the detection of felled trees. In the spectral difference of the blue channel, hardly any changes were visible. In contrast, the spectral difference of the red channel was sensitive to changes. However, the visible changes in the difference image were not only related to missing tree crowns, but also to changes on the forest floor caused by the harvesters. Hence, confusion between several types of changes would emerge.
Segmentation and classification were accomplished using object-based image analysis (OBIA) software (eCognition Developer 9.5) for two reasons. First, in contrast to the height difference (see Section 2.5.2), a global threshold could not successfully be applied to the spectral difference, as the spectral characteristics of the imagery varied slightly over the site. This variation was due to minor changes in illumination during the 35 min UAS flights. Second, due to the small-scale variations of grey values (texture) caused by objects such as twigs, leaves, and undergrowth (see Figures 5 and 6 and Section 2.3.2), a threshold-based classification resulted in a noisy map of felled trees. Even filtering of the spectral difference could not remove the unwanted texture entirely (see Figure 7).

The eCognition processing parameters are summarized in Table 6. The chosen parameters of the multiresolution segmentation resulted in segments that fit the extent of the removed trees. The following step (spectral difference segmentation) merged segments with high spectral similarity, resulting in larger objects for the unchanged forest. This step was carried out to facilitate the classification and was followed by the collection of random training samples. These training samples were not identical to the samples for the separability analysis described in Section 2.4: the latter were manually digitized, while the polygon geometry of the training samples stems from the multiresolution segmentation. In total, 25 samples were collected for the class of logged trees and 190 samples for the class of unchanged forest. The classification itself used the object features area, roundness, mean difference to neighbors, and mean difference to darker neighbors (see Table 6), all calculated based on the spectral difference of the green channel.
To classify the data, the nearest neighbor algorithm implemented in eCognition was used. The resulting map was exported as a binary mask and vectorized for further processing. The polygons are shown in Figure 7 (red dashed polygons).
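eCognition's nearest neighbor classifier is proprietary, but the underlying idea can be sketched as a plain 1-nearest-neighbor rule on standardized object features. The function and toy feature values below are illustrative (the feature choice follows Table 6), not the study's actual samples:

```python
import numpy as np

def nearest_neighbor_classify(objects, samples, labels):
    """Assign each segment the class of its nearest training sample in
    feature space (Euclidean distance on standardized features).

    objects : (n, f) array of object features, e.g., area and roundness
              (feature choice follows Table 6)
    samples : (m, f) array of training-sample features
    labels  : (m,) class labels of the training samples
    """
    objects = np.asarray(objects, float)
    samples = np.asarray(samples, float)
    # standardize per feature so that, e.g., area does not dominate roundness
    mu, sd = samples.mean(axis=0), samples.std(axis=0) + 1e-12
    o = (objects - mu) / sd
    s = (samples - mu) / sd
    d = np.linalg.norm(o[:, None, :] - s[None, :, :], axis=2)  # (n, m) distances
    return np.asarray(labels)[np.argmin(d, axis=1)]

# toy example with two features (area, roundness) and two classes
train = [[80.0, 0.9], [300.0, 0.3]]
y = ["felled", "forest"]
pred = nearest_neighbor_classify([[90.0, 0.85], [280.0, 0.4]], train, y)
```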

Classification of Felled Trees Based on the ∆CHM
As previously discussed, for a significant portion of the felled trees, the ∆CHM does not represent the actual height of the removed trees. The heights of the felled trees were heavily underestimated, as the small gaps resulting from the selective logging were not correctly represented in the second CHM. Instead of height differences of 25 m for several logged trees, differences of only 0.5-1.0 m were measured (see Figure 9). This complicates the usage of the ∆CHM as a predictor for selective logging, as potential confusion with other small changes needs to be considered. Accordingly, the scheme displayed in Figure 10 (right branch) was developed to detect the felled trees. Masking out areas where CHM1 < 5 m was required to detect only changes related to the logging of trees, while masking areas where ∆CHM < 0.5 m was accomplished to avoid the detection of small changes within the canopy. Cleaning was done using the commands erode (5) and dilate (3) of the Orfeo ToolBox 7.0.0 (https://www.orfeo-toolbox.org) for QGIS. The resulting vectors (Figure 7, yellow polygons) represent changes related to felled trees.
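A raster-level sketch of this masking and cleaning chain, with scipy's morphological operators as a stand-in for the Orfeo ToolBox erode(5)/dilate(3) commands (function name and toy arrays are hypothetical):

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def delta_chm_mask(chm1, dchm, min_height=5.0, min_drop=0.5,
                   erode_size=5, dilate_size=3):
    """Binary change mask from the height data (Figure 10, right branch):
    keep cells that carried canopy before (CHM1 >= 5 m) and dropped by at
    least 0.5 m, then clean the mask. binary_erosion/binary_dilation act
    as a scipy stand-in for the Orfeo ToolBox erode(5)/dilate(3) steps."""
    mask = (chm1 >= min_height) & (dchm >= min_drop)
    mask = binary_erosion(mask, structure=np.ones((erode_size, erode_size)))
    mask = binary_dilation(mask, structure=np.ones((dilate_size, dilate_size)))
    return mask

# toy example: a 7 x 7 patch of removed canopy in a 25 m stand
chm1 = np.full((12, 12), 25.0)
dchm = np.zeros((12, 12))
dchm[2:9, 2:9] = 20.0
mask = delta_chm_mask(chm1, dchm)
```

Erosion removes speckle smaller than the structuring element; the subsequent, smaller dilation restores most (but not all) of the surviving patches' extent.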
According to Figure 7, the ∆CHM-based workflow overestimated the number and crown size of felled trees. However, raising the ∆CHM threshold above 0.5 m resulted in several logged trees being missed. Nevertheless, the ∆CHM mask was used to improve the spectral difference-based detection of felled trees, as described in the following.

Integration of Spectral Difference- and Height Difference-Based Classifications
Although the detection of felled trees using solely the spectral difference provided accurate results (see Figure 7 and Table 7), the potential for further improvement was investigated. To reduce the number of false-positive detections, the spectral difference-based and elevation difference-based classifications were integrated. A tree was finally marked as felled if it was recognized as felled based on both the spectral difference (Figure 10, left branch) and the height difference (∆CHM; Figure 10, right branch). The geometry of the polygons based on spectral differences was used as the final geometry (Figure 7, green polygons).
The rationale behind this approach was to avoid false-positive detections caused by slight movements of small trees with less compact crowns. Even small movements in the order of a few decimeters can cause significant spectral differences between both acquisitions, while the difference in elevation remains rather small. Figure 7 (right half) shows one example of a false-positive detection based on the spectral difference that could be removed by integrating the spectral difference- and height difference-based classifications. In total, 13 false-positive detections were avoided (Table 7).
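The integration rule can be sketched at the raster level: a spectral-difference object survives only if it intersects the ∆CHM mask. This is a raster approximation of the polygon overlay described above; function name and toy data are illustrative:

```python
import numpy as np
from scipy.ndimage import label

def integrate_detections(spectral_mask, height_mask):
    """Keep a spectral-difference object only if it also intersects the
    height-difference (dCHM) mask; the spectral object geometry is kept
    as the final geometry."""
    labeled, n = label(spectral_mask)        # connected spectral objects
    keep = np.zeros(n + 1, dtype=bool)
    hit = np.unique(labeled[height_mask])    # object ids touching the dCHM mask
    keep[hit] = True
    keep[0] = False                          # background stays out
    return keep[labeled]

# toy example: two spectral objects, only the first confirmed by the dCHM mask
spec = np.zeros((6, 10), bool)
spec[1:4, 1:4] = True
spec[1:4, 6:9] = True
hgt = np.zeros((6, 10), bool)
hgt[2, 2] = True
final = integrate_detections(spec, hgt)
```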

Separability and Accuracy Analysis
The separability of logged and remaining trees (unchanged parts of the forest) was analyzed for the spectral difference-based (green channel) and the elevation difference-based approaches. Moreover, it was investigated at both the pixel and object (sample) levels. For the former, all pixels within the samples were treated as separate entities. For the latter, the average of each sample constituted one entity for the analysis, and the number of samples corresponded to the number of entities. The collection of samples is described in Section 2.4.
The separability analysis was conducted using violin plots and receiver operating characteristic (ROC) curves. ROC curves are a suitable technique for comparing binary classifiers. For classifications based on a single predictor (e.g., the spectral difference of the green channel), the curve reflects the histogram overlap of the two classes. In a ROC curve, the true positive rate (i.e., correctly classified felled trees) is plotted against the false positive rate as a function of the decision threshold. In ROC diagrams, the 1:1 line corresponds to a random classification; all points above it represent an improvement over random separation. One way to quantify the separation of the two classes is the computation of the area under the curve (AUC). Perfect separation results in an AUC of 1; random classification results in an AUC of 0.5.
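For a single-predictor classifier, the AUC can be computed directly from the two score samples via the rank-statistic (Mann-Whitney) formulation, without fitting anything. A minimal sketch (illustrative, not the code used in the study):

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve for a single predictor (e.g., the grey
    value difference of the green channel). Equals the probability that a
    randomly drawn 'felled' sample scores higher than an 'unchanged'
    sample; ties count half. Makes no distributional assumption, which
    matters here because the height differences of felled trees are bimodal."""
    pos = np.asarray(scores_pos, float)
    neg = np.asarray(scores_neg, float)
    wins = (pos[:, None] > neg[None, :]).sum()   # pairwise comparisons
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# perfectly separated classes give AUC = 1.0; identical classes give 0.5
auc = roc_auc([10, 12, 14], [1, 2, 3])
```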
Accuracy analysis provides the figures for precision and recall, which are based on the counts of correctly detected trees (true positives, tp), missed trees (false negatives, fn), and wrongly detected trees (false positives, fp). Precision and recall are computed as follows:

precision = tp / (tp + fp), recall = tp / (tp + fn)

A felled tree was considered correctly detected when a reference tree (Figure 7, grey point) was surrounded by a delineated polygon (which represents the area of the removed tree crown). In all other cases, it counts either as a missed tree or as a wrongly detected tree. In the accuracy analysis, only the classification results based on the spectral differences (Figure 7, red dashed polygons) and on the integration of height and spectral differences (Figure 7, green polygons) were considered.
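With the counts of the combined product reported in Section 3.2 (348 of 380 felled trees detected, nine false positives, 32 missed), these formulas reproduce the stated accuracy figures:

```python
def precision_recall(tp, fp, fn):
    """precision = tp / (tp + fp); recall = tp / (tp + fn)."""
    return tp / (tp + fp), tp / (tp + fn)

# counts of the combined spectral + height product (see Section 3.2):
# 348 of 380 felled trees detected, 9 false positives, 32 missed
p, r = precision_recall(tp=348, fp=9, fn=32)
print(f"precision = {p:.1%}, recall = {r:.1%}")  # precision = 97.5%, recall = 91.6%
```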

Results
The results section focuses on the suitability of the used UAS data for detecting felled trees. First, the separability of unchanged forest and felled trees was investigated using violin plots and ROC curves, including AUC statistics. Second, the accuracy of the detection of felled trees was analyzed using the measures precision and recall. Figure 11 shows four violin plots to illustrate the separability of the felled trees and the unchanged forest. When looking at the spectral difference (green channel), it is noticeable that the median of the grey value difference (GVD) is 50 for the unchanged forest. This is due to the circumstance that the optical data were not calibrated (which is not necessary for this application). Accordingly, this difference is, for the most part, the result of differences in illumination, aperture, and exposure value. As a result of logging, the reflection of green light is reduced, causing an increased spectral difference between both acquisitions; the median is close to a GVD of 100.

Figure 11. Violin plots illustrating the separability of felled trees and unchanged forest based on stratified samples (100 samples per class uniformly distributed across the site). The horizontal lines represent the median (longer line) and the 25th/75th percentiles (shorter lines), respectively. The clear bimodal distribution of the elevation difference of the felled trees is remarkable; it is related to the phenomenon discussed above (Figures 8 and 9). GVD, grey value difference.

Separability Analysis
The median of the pixel-based height difference of the unchanged forest is close to 0 m (−0.03 m). The felled trees are represented as positive deviations from zero. In comparison to the spectral difference, we found a wider spread (interquartile range) of the height differences of the logged trees. This is because, for a great part of the logged trees, the height difference could not be estimated correctly; the reason for this observation has already been discussed above. On the other hand, despite several outliers, the spread and interquartile range are very low for the unchanged forest, which again suggests the high quality of the elevation models. In general, the plots suggest a reasonable separability of the felled trees and the unchanged forest using spectral or elevation differences. The usage of image objects results in a reduction of the spread and dispersion of the data and thus a potentially increased separability.
The histograms of the height differences depicted in Figure 11 show a pronounced bimodal distribution for the felled trees. This observation reflects the phenomenon shown in Figures 8 and 9. According to the histogram, for approximately 50% of the felled trees, the correct height difference could not be estimated. This bimodality prevents the computation of commonly used separability measures that presume a normal distribution. Accordingly, we chose ROC/AUC, an index for binary classifiers independent of the underlying distribution.
The ROC curves allow a deeper insight into the separability of the felled trees and the unchanged forest, as they quantify the histogram overlap of the two classes. In general, the separability of two classes is proportional to the AUC. A perfect classifier achieves a true positive rate (tpr) of 1.0 at a false positive rate (fpr) of zero, which results in an AUC of 1.0. Commonly, the separation is not perfect. As an example, Figure 12 shows the experimental data used in this study. To achieve a tpr of 1.0, the corresponding fpr is 0.6 (notable overestimation of logged trees, low precision). For the detection of felled trees, a more balanced threshold resulting in a small fpr and a large tpr is desired. According to Figure 12, tpr values greater than 0.9 can be achieved while the fpr remains below 0.1 when only the spectral difference (object-based approach) is used.
Based on Figure 12, object-based separation outperformed pixel-based separation. This observation can be substantiated by the AUC values. The lowest value of 0.938 was computed for the pixel-based spectral difference, followed by 0.952 (pixel-based height difference), 0.967 (object-based height difference), and 0.989 (object-based spectral difference). The gain in separability is much more pronounced for the spectral difference: at the pixel level, the AUC is greater for the height difference, while at the object level, the spectral difference is superior. The OBIA approach obviously suppresses the small-scale variability of the spectral difference (acting as spatial noise) and thus reduces the range of values (see Figure 11), which, in turn, reduces the overlap between unchanged forest and felled trees. For the height differences, this effect is less obvious, as, except for some outliers, the small-scale variability is rather small.
Accordingly, the range between the minimum and the maximum (see Figure 11), and thus the overlap between the unchanged forest and the felled trees, was hardly reduced.

Accuracy Analysis
The algorithm to detect felled trees was described in Section 2.6. The accuracy analysis considers the entire test site and thus all 380 felled trees as reference. The accuracy was analyzed for two mapping products. The first product was based on the spectral difference only, and the second product implemented spectral and height differences.
For the spectral difference-based product, precision and recall are well above 90% (Table 7); 349 logged trees were successfully detected, which means that 31 felled trees were missed. The number of false-positive detections was 22. When additionally implementing the height difference, the number of false-positive detections could be reduced to nine, resulting in a rather high precision of 97.5%, while the number of missed trees increased by only one. It was found that the missed logged trees were typically small trees with a crown diameter of less than 1 m. The false-positive detections were related to several causes, which are discussed in the following section.

Discussion
It was successfully demonstrated that direct georeferencing using DJI's Phantom 4 RTK provides sufficient geolocation accuracy for change detection (mapping of selective logging) based on repeated flights. The geolocation accuracy at the check points, expressed as the RMSE, was found to be below 2 cm. When using the same camera parameters for both flights, the deviation between both datasets was below 10 cm (x, y, z) for all 18 test targets, and thus at least an order of magnitude smaller than the size of most of the objects to be detected (the crown diameters of the logged trees). Obviously, in this experiment, DJI's FC6310R camera was physically stable enough for the camera parameters to be retained, which enables the automatic detection of selective logging. Nevertheless, some logged trees were missed, and some false alarms occurred as well. The following section discusses the impacts on the detection rate. Then, our results are compared to previous work, before the section closes with an outlook on future work.
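Deviation figures of this kind can be obtained from the surveyed and SfM-derived target coordinates as a per-axis RMSE. A minimal sketch (array contents are illustrative, not the study's measurements):

```python
import numpy as np

def per_axis_rmse(measured, reference):
    """RMSE of the x/y/z offsets between target positions measured in the
    SfM products and their reference coordinates (shape (n, 3) arrays)."""
    d = np.asarray(measured, float) - np.asarray(reference, float)
    return np.sqrt((d ** 2).mean(axis=0))

# toy example: two targets with +/-3 cm offset in x only
measured = [[1.03, 2.0, 3.0], [0.97, 2.0, 3.0]]
reference = [[1.00, 2.0, 3.0], [1.00, 2.0, 3.0]]
rmse = per_axis_rmse(measured, reference)  # x-RMSE = 0.03 m, y = z = 0
```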

Discussion of Impacts on Accuracy
The suitability of the spectral and height information generated using the UAS is reflected in the high detection rates shown in Table 7. Nevertheless, 32 logged trees could not be detected, while nine false-positive detections emerged (for the combination of spectral and height information). Essentially, two main reasons can be suggested for the false-positive and false-negative detections: (1) the principle of SfM (see Section 1.1), and (2) the use of changes in the canopy to predict logging. The first reason refers to the observation examined in Figure 8 and the related difficulties, which degrade the height difference as a predictor for the detection of logged trees. Accordingly, the straightforward usage of the height difference is hindered. SfM requires imagery of the same objects recorded from different positions, and only features visible in at least two images can function as tie points. Small gaps in the canopy and changing viewing angles can prevent the acquisition of the same features from different camera positions. Accordingly, the SfM model often misses information on the forest floor, in particular when the gaps in the canopy are small. This problem can be reduced by increasing the image overlap, which would, however, increase the flight and SfM processing time. Still, there is no guarantee of capturing the relevant areas of the forest floor, as trees tend to close gaps in the canopy rapidly after a disturbance.
The second cause (using canopy changes as an indicator of logging) for missed or wrongly detected trees is not specifically related to SfM or optical data. Conventional airborne LiDAR would also be affected, as reliable individual stem detection is not feasible, which, however, would be necessary to clearly indicate whether an individual tree was felled. For example, even in simply structured forests like the study site, trees can have two crowns or large upward branches with properties similar to those of a single crown. Sometimes, trees stand so close together that a separation of the two crowns is not possible, and the removal of one of the two trees cannot be recognized. In a specific case (Figure 13), one tree was obviously slightly tilted during the logging activities, resulting in a false-positive event.
Other sources of error are related to small logged trees causing false-negative events and large branches dismantled during logging activities causing false-positive detections. Accordingly, the use of canopy changes as an indicator for logging has some limitations. Regardless, this study demonstrates the great potential of using RTK UAS data, together with a fast and simple method, for monitoring selective logging. With the chosen flight parameters, an area of almost 0.5 km² (note that the UAS mission covers a larger area than the test site) could be captured within 35 min of flight time (see Table 2), which is a great advantage over TLS, for which the ratio of surveyed area to surveying time is much smaller [35,46].

Related Work
To the best of our knowledge, selective logging detection over forests using repeated UAS flights has not yet been reported in the literature. One reason might be related to hitherto existing limitations in the absolute geolocation accuracy of SfM products. Accordingly, we essentially compared our results to studies aimed at individual tree detection. To broaden the scope of related work, we also included some studies featuring laser-based point clouds. Table 8 provides an overview of the selected related works. More information on these studies is provided in the introduction section.

Terrestrial laser scanner (TLS)-based individual tree (stem) detection
Liang et al. [46]: single-scan TLS; Finland, Evo; pine, spruce, birch, larch; tp = 73%.
Xia et al. [47]: single-scan TLS; China, Sichuan Giant Panda Sanctuaries (dense bamboo forest); tp = 88%.
Oveland et al. [48]: single-scan (low-cost) TLS; Gran municipality, southeastern Norway; spruce and Scots pine; tp = 78%, fn = 22%.
Maas et al. [49]: multiple-scan TLS; Austria, Ireland; conifer forest, broad-leaved forest; tp = 97%.

Marinelli et al. [42,43] presented the only study that utilized LiDAR data acquired at different dates to detect selective logging. The authors developed an adapted approach for this application, and the accuracy achieved was similar to the results presented in our study.
The second block in Table 8 summarizes previous work that utilized UAS imagery-based point clouds for individual tree detection in various areas. All of these studies applied a local maximum approach. Assuming that the same method were applied twice (using data collected at different times) to capture selective logging, the accuracy of the change product can be estimated by squaring the accuracy of the monotemporal product. Accordingly, the accuracies reported in the previous UAS studies would be lower than that of this work. The same applies to the LiDAR studies presented (Table 8, third block). Obviously, the detection accuracy benefits from the synergetic use of spectral and geometric information. According to the studies summarized in the last block of Table 8, multiple scans were necessary for reliable stem detection; accordingly, considerable effort was required to achieve accuracies similar to those of this study. In general, the great advantage of UAS data over TLS and LiDAR data for individual tree detection or change monitoring is the short-term availability of suitable spectral and point cloud data over areas of reasonable extent.
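The squaring argument above can be made explicit with a one-line worked example: if a monotemporal detector finds a tree with probability p in each of two acquisitions, and the two detections are treated as independent, the change product detects the tree in both epochs with probability p².

```python
# Worked example of the accuracy estimate used above: two independent
# monotemporal detections, each with accuracy p_mono, yield a change
# product with accuracy p_mono squared.

def bitemporal_accuracy(p_mono):
    return p_mono ** 2

# e.g., a 90% single-date detection rate drops to 81% for the change product
print(round(bitemporal_accuracy(0.90), 2))  # 0.81
```

This is why a seemingly good monotemporal detection rate can translate into a noticeably weaker bi-temporal change product.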

Outlook
The logged tree detection accuracy achieved in this study is among the best of the previously cited publications. Furthermore, our results could be achieved with relatively low-cost and easy-to-handle equipment and a simple data processing chain. Flying above the forest canopy guarantees perfect GNSS conditions, which is a general advantage of UAS-based forest parameter retrieval. The RTK feature of the UAS allowed for centimeter-level image registration, and thus for the direct comparison of two UAS datasets acquired at different dates, without the need for co-registration. Future research will focus on the investigation of varying UAS survey parameters, such as image overlap and differing flight altitudes, aiming to mitigate the issue of missing small canopy gaps (Figure 8). Furthermore, checking the transferability to other forest types, such as deciduous forests, is intended. In particular, non-nadir images, in combination with checkerboard-like flight patterns, during leaf-off conditions will increase the number of SfM-based points on the stems and will thus enable their detection. Such datasets might be integrated with leaf-on data acquired before and after a disturbance, in order to relate changes in the canopy to specific trees.

Conclusions
In this study, we presented an approach for the straightforward mapping of selective logging of individual trees. The approach uses the spectral and height differences derived from two consecutive UAS flights, where one flight is conducted before and one flight after the logging. The aim was to develop a simple and transferable OBIA-based approach for the detection of logged trees. According to the results, very high detection rates can be achieved using UAS data in combination with an OBIA-based approach.
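The core of the bi-temporal comparison can be sketched in a few lines. The following is a minimal illustration, not the authors' exact OBIA workflow: it thresholds per-pixel canopy-height loss and spectral brightening between the two acquisitions into a candidate "logged" mask; the arrays, band choice, and threshold values are illustrative assumptions.

```python
# Minimal sketch (not the exact OBIA workflow of this study) of bi-temporal
# change detection from two co-registered UAS products: a canopy height
# model (CHM) and a spectral band, before and after logging.
# Thresholds and toy data below are assumed values for illustration.
import numpy as np

def logging_mask(chm_before, chm_after, red_before, red_after,
                 dh_thresh=5.0, dr_thresh=0.15):
    """Flag pixels where the canopy height dropped by more than dh_thresh
    metres AND the red reflectance rose by more than dr_thresh (bright
    forest floor exposed where a crown was removed)."""
    dh = chm_before - chm_after   # height loss (m)
    dr = red_after - red_before   # spectral brightening
    return (dh > dh_thresh) & (dr > dr_thresh)

# Toy 2x2 example: only the top-left pixel shows both change signals
chm_b = np.array([[22.0, 21.0], [20.0, 23.0]])
chm_a = np.array([[ 2.0, 20.5], [19.8, 22.9]])
red_b = np.array([[0.05, 0.06], [0.05, 0.04]])
red_a = np.array([[0.30, 0.07], [0.06, 0.05]])
print(logging_mask(chm_b, chm_a, red_b, red_a))  # only top-left is flagged
```

In the actual workflow, such a pixel-wise candidate mask would then be segmented into objects and filtered by size and shape (the OBIA step) to delineate individual logged trees.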
At this point it must be clearly stated that the advent of off-the-shelf, inexpensive RTK-capable UAS is a real game changer, since no GCPs are needed for SfM processing. Due to the high geolocation accuracy of the image data, direct georeferencing is feasible for SfM processing. This basically reduces the UAS survey time to the flight duration. The most time-consuming part of conventional UAS surveys, namely, the installation and surveying of GCPs, becomes obsolete. Moreover, the surveying of GCPs under a forest canopy is challenging, if not impossible, in many cases anyway. Accordingly, UAS-based change detection using consecutive UAS flights is hardly feasible with conventional UAS. Ultimately, RTK UAS-based 3D models are not prone to systematic errors such as doming or bowling, even if simple flight patterns (causing weak tie point networks) are chosen. Although DJI's Phantom 4 RTK can currently (22 January 2020) be purchased for a comparably low price of €5400 (also available as a multispectral version for a similar price), it is very likely that such systems will become even more affordable in the near future. Furthermore, there is a tendency to make RTK correction services freely available. For instance, the German satellite positioning service SAPOS (www.sapos.de) is publicly available in several states in Germany. For the United States of America, a collection of public RTK base stations is provided by the GPS WORLD community (www.gpsworld.com/finally-a-list-of-public-rtk-base-stations-in-the-u-s/). A worldwide collection of partly open RTK services is provided by the German Federal Agency for Cartography and Geodesy (BKG) (http://rtcm-ntrip.org/home.html). Accordingly, wide use of RTK UAS by agencies and companies, but also in the private sector (e.g., citizen science, UAViators), can be expected, and further services can be developed.