Article

Identification of Abandoned Logging Roads in Point Reyes National Seashore

1 Department of Geography and Environment, San Francisco State University, 1600 Holloway Avenue, San Francisco, CA 94132, USA
2 Estuary & Ocean Sciences Center, San Francisco State University, Tiburon, CA 94920, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(13), 3369; https://doi.org/10.3390/rs15133369
Submission received: 11 May 2023 / Revised: 6 June 2023 / Accepted: 20 June 2023 / Published: 30 June 2023

Abstract:
Temporary roads are often placed in mountainous regions for logging purposes but then never decommissioned and removed. These abandoned forest roads often have unwanted environmental consequences. They can lead to altered hydrological regimes, excess erosion, and mass wasting events. These events can affect sediment budgets in streams, with negative consequences for anadromous fish populations. Maps of these roads are frequently non-existent; therefore, methods need to be created to identify and locate these roads for decommissioning. Abandoned logging roads in the Point Reyes National Seashore in California, an area partially under heavy forest canopy, were mapped using object-based image processing in concert with machine learning. High-resolution Q1 LiDAR point clouds from 2019 were used to create a bare earth model of the region, from which a slope model was derived. This slope model was then subjected to segmentation algorithms to identify and isolate regions of differing slopes. Regions of differing slopes were then used in a convolutional neural network (CNN), and a maximum likelihood classifier was used to delineate the historic road network. The accuracy assessment was conducted using historic aerial photos of the state of the region post-logging, along with ground surveys to verify the presence of logging roads in areas of question. This method was successfully able to identify road networks with a precision of 0.991 and an accuracy of 0.992. It was also found that the CNN was able to identify areas of highest disturbance to the slope gradient. This methodology is a valuable tool for decision makers who need to identify areas of high disturbance in order to mitigate adverse effects.

1. Introduction

Roads are often placed in wild, forested regions for a variety of reasons. Whether constructed for recreation, infrastructure, fire combat, or logging, these roads are often poorly documented, and their precise location may become lost due to infrequent use, changes in the morphology of the area, and vegetation encroachment [1,2]. This results in networks of roads, many of which have become neglected, fragmented, and difficult to find for remediation purposes. Having maps of these historic lost roads is essential for decommissioning efforts [3].
Abandoned forest roads, specifically abandoned logging roads, typically are composed of compacted soil [4]. This compacted soil can act as a semi-impervious surface and divert water flow from one watershed to another [5]. Watersheds in forested regions are especially susceptible to mass wasting events and enhanced sediment flows in areas where the natural geomorphological processes have been disturbed [6]. Logging roads in forested regions can bring subsurface flow to the surface and accelerate erosion with overland flow [7], which can result in incisions into the natural landscape slope gradient. These incisions can provide a direct path for concentrated overland flow containing increased sediment loads to stream channels [8]. This accelerated erosion can deposit fine sediment into the drainage basin and negatively affect fish habitat in streams. In summary, roads can limit infiltration, cause surface runoff, increase the rate of fine sediment production, and trigger mass wasting events [9]. Increased sediment loads in watersheds have been shown to affect the ecological composition of many forest species [10].
In the Point Reyes National Seashore, 154 different fish species are known to utilize the aquatic habitat [11]. Two of these have been listed as species of special concern: the coho salmon (Oncorhynchus kisutch) and the steelhead trout (Oncorhynchus mykiss irideus) [12]. These are anadromous fish that depend entirely on gravelly freshwater stream beds for habitat and breeding success [13]. Logging roads are known to affect the spawning rates and habitat suitability of anadromous fish populations [14], primarily because logging roads increase fine sediment loads to stream channels and decrease the amount of available gravel [15]. Increased sediment flow into stream beds from logging roads can persist up to 80 years past the logging event [16], showing the need to identify these roads despite their age. Decommissioning roads has been found to return sediment loads to their natural levels [8,17]. Identifying and removing sources of high sediment loads should be a priority for any habitat remediation project for these sensitive species [13,14].
Much research has been undertaken to identify roads from both imagery and light detection and ranging (LiDAR) point clouds [3,18,19,20,21,22,23], but limited work has been undertaken to identify logging roads under vegetation [2]. Dense vegetation precludes road identification from aerial or satellite imagery [22], whereas LiDAR can penetrate the canopy and gather an accurate 3D representation of the ground underneath [24]. However, because logging roads are often dirt or gravel, the uniform features traditionally used to extract road forms from LiDAR, such as curb height [18] or intensity [19], cannot be used. Additionally, logging road networks often become highly fragmented over time, making the extraction of complete road networks more difficult.
Object-based image analysis (OBIA) is the process of clustering pixels into regions that correspond to “individual surfaces, objects, or natural parts of objects” ([25], p. 576). This contrasts with the more traditional pixel-based methods where the spectral signature of the pixel alone is the primary consideration, typically neglecting spatial aspects such as the size and shape of the area being classified, or topological relationships [26]. This often results in ‘salt and pepper’ classifications, where classes overlap, mix, and cause errors [27]. OBIA has been shown to perform with higher accuracy, primarily because, in addition to spectral values, OBIA can also focus on an object’s shape relative to the scene as well as the object’s texture and proximity to other objects [28].
Convolutional neural networks (CNN) are a machine learning method that has shown great promise in remote sensing in recent years, especially in conjunction with object-based image analysis [29]. CNNs have been used to improve image classifications through their ability to automatically extract spatial patterns of images using a set of convolution and pooling operations to learn specific objects’ characteristics [30]. This can be an especially accurate tool when combined with OBIA [31]. OBIA’s ability to preserve objects and edges, when integrated with CNNs, can be used to generate robust, consistent, and fully automated classifications [32].
When acquiring LiDAR for road detection purposes, high point density is required in heavily forested regions. While some studies have experimented with using LiDAR files with as few as 6 pulses/m2 [3], it should be noted that the roads in question were not abandoned or in dense forests; therefore, detecting them was less of a problem. Maintained roads reflect back more LiDAR points because of a lack of ground and canopy vegetation, and they tend to not be fragmented, which makes the detection and extraction process more straightforward [18]. In addition, many studies have used intensity images derived from LiDAR to detect roads, although intensity has proven to be less useful for road detection when the roads are unpaved [3,19].
Forested roads have more vegetation through which the LiDAR must penetrate to get an accurate ground return. Thus, unmaintained roads need considerably higher point density to be detected. Because of this, it is recommended to have ideally more than 6 pulses/m2 for densely vegetated regions [2].
It is the goal of this project to develop a set of segmentation and classification criteria to map abandoned, overgrown logging roads in Point Reyes National Seashore, California using LiDAR point cloud tiles, OBIA, and CNN.
These efforts in identifying abandoned logging roads will help add to the body of knowledge at the disposal of environmental planners aiming to remediate damaged habitat. Managers at Point Reyes National Seashore have identified this specific issue to be the top priority on their list of management needs [1].
This work builds on Sherba et al. (2014) [2] by incorporating new machine learning techniques to improve classification accuracy, automate the process of identifying hilltop roads, and improve the process of filling in gaps in hillside road classification due to fragmentation.

2. Methods

2.1. Research Area

Point Reyes National Seashore, located in Marin County, California, is approximately 50 km northwest of San Francisco (Figure 1). It was established in 1962 and is the only U.S. Park Service protected Seashore on the West Coast. The Park encompasses approximately 71,000 acres over more than 80 miles of coastline. Within the Park, 32,730 acres are designated wilderness or potential wilderness, constituting one of the most accessible wilderness areas in the country and the only marine wilderness on the west coast of the continental United States [33].
The first lumber mill in the Point Reyes vicinity was built in Bolinas in 1851. By 1858, four mills were operating in the area. Logging continued in Point Reyes until the area was designated a National Seashore in 1962 [34]. After the Seashore was established, the logged areas were subsequently left to naturalize [1].
The study area is an area of 11.75 km2 just south of Olema, California and is representative of the logged regions of concern in Point Reyes National Seashore. This area was selected through discussion with National Park Service project leaders, specifically because the area was heavily logged, and no remediation effort has been attempted to decommission the roads [1]. This site is also the location of ongoing projects by National Seashore managers to understand the lasting effects of historic logging on the region [1].
Currently, this area is popular with hikers and has two major trails running through it. The seashore is home to a variety of vegetation, from shrubs to trees, but is predominantly characterized by coast Douglas-fir (Pseudotsuga menziesii var. menziesii), coast redwood (Sequoia sempervirens), coast live oak (Quercus agrifolia), tanoak (Notholithocarpus densiflorus), and California bay (Umbellularia californica).

2.2. Types of Roads

Generally, logging roads within mountainous forested regions fall into two categories: those in a mid-slope position and those that follow or cross over ridge lines [2]. Examples of overgrown historic logging roads within the study area are shown in Figure 2. Changes in the natural slope of the region are highlighted in red. Figure 2a shows a logging road in a mid-slope position, and Figure 2b shows a road in a ridge-top position. These roads manifest differently in the data, and it is important to recognize that, in order to extract complete road networks, these two road types must be approached separately.

2.3. Data

LiDAR tiles for Marin County were provided by the Golden Gate National Parks Conservancy [1]. These tiles have an area of 0.47 km2 each, and the study area comprises exactly 25 complete tiles. The highest resolution LiDAR available for the region was collected by Quantum Spatial between 22 December 2018 and 15 March 2019 in the format of High-resolution Q1 LiDAR [35]. In this dataset, the average point cloud density for Marin County is eight points per m2; however, within the study area the average density is as high as sixteen points per m2. High point cloud density is necessary for an accurate bare earth DEM because a large percentage of the points in a region as densely vegetated as Point Reyes [2,3] are absorbed by vegetation.
The LiDAR tiles were mosaicked together in ArcGIS [36] and then processed into a bare earth DEM from which a slope model was derived. These data were the primary input for the segmentation processes and object-based road identification using eCognition software [37]. These steps are covered in subsequent sections.
Historic aerial photos taken of the area in the 1960s were provided by the Point Reyes National Seashore Museum to be used in the accuracy assessment. These photos accurately show the area just after logging ceased [1]. The aerial photos were scanned at 1200 dpi in a charge-coupled device scanner, converted to 1 m resolution, and georeferenced in ArcGIS.
Sentinel 2B imagery from 16 October 2020 at a resolution of 10 m (Figure 1) was chosen to show the current general land cover of the region, although it was not used for analysis because the logging roads have been completely revegetated and are no longer visible in satellite imagery. However, the imagery demonstrates the current state of the region in comparison to the post-logging aerial photographs from the 1960s that were used in the accuracy assessment. In the Sentinel 2B imagery, the area can be seen to be completely revegetated, with all roads obscured from view (Figure 1).

2.4. Data Preparation

The LiDAR point clouds were downloaded in the form of tiles from the Golden Gate National Parks Conservancy. These LiDAR tiles were then mosaicked together and filtered to only include ground points that have penetrated the canopy and struck ground. Then, a digital elevation model (DEM) was derived from the ground-return-only LiDAR, which was then processed into a slope model.
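The DEM-to-slope step is conceptually simple; the study performed it in ArcGIS, so the NumPy sketch below is only an illustrative stand-in (the function name and cell size are assumptions, not from the paper):

```python
import numpy as np

def slope_from_dem(dem: np.ndarray, cell_size: float) -> np.ndarray:
    """Slope in degrees from a bare-earth DEM via finite differences.

    An illustrative stand-in for the ArcGIS slope tool used in the study.
    """
    dz_dy, dz_dx = np.gradient(dem, cell_size)       # rise per metre along y, x
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# A plane rising 1 m per metre across the columns slopes at 45 degrees.
dem = np.tile(np.arange(5.0), (5, 1))                # columns 0..4 m high
slope = slope_from_dem(dem, cell_size=1.0)
```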
Logging roads present similarly in the data as, and are often confused with, streams and rivers in slope models [2]. Therefore, it was necessary to create a stream layer with which to mask out stream features from analysis. This was achieved by using the DEM derived from the LiDAR to create a flow accumulation raster. It is from this that a stream network was generated, which was used to create a stream order map. This layer was buffered and masked out of the final logging road classification; objects falling within natural drainages were removed from analysis. Shown in Figure 3 are the steps in the road extraction process, which will be covered in the following sections.
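The final masking step, removing classified objects that fall within a buffered stream network, could look roughly like the following sketch, which assumes the stream raster has already been derived from the flow accumulation step (function name and buffer width are illustrative):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def mask_streams(road_class: np.ndarray, streams: np.ndarray,
                 buffer_cells: int) -> np.ndarray:
    """Drop road pixels that fall within a buffered stream network.

    road_class and streams are boolean rasters on the same grid;
    buffer_cells is the buffer radius in cells (buffer metres / cell size).
    """
    buffered = binary_dilation(streams, iterations=buffer_cells)
    return road_class & ~buffered

# Toy rasters: a vertical stream in column 2 removes nearby "road" pixels.
roads = np.ones((5, 5), dtype=bool)
streams = np.zeros((5, 5), dtype=bool)
streams[:, 2] = True
kept = mask_streams(roads, streams, buffer_cells=1)
```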

2.5. Segmentation

The first step of object-based classification is the image segmentation process, where pixels are aggregated into groups based on their spectral and spatial patterns [38]. The goal of segmentation is to reduce heterogeneity and increase the homogeneity of image objects [39]. The image is broken down into primitive image objects using segmentation algorithms, which cluster pixels together based on their relative spectral value, shape, and texture [40]. The exact parameters used in this process can be found in Appendix A. Using these primitive image objects, features within the scene can then be extracted [39,40]. Multi-resolution segmentation, within eCognition, has been shown to be one of the most effective methods to categorize complex landscape features into meaningful objects [39,41,42]. However, the scale, shape, and compactness parameters are typically determined in a heuristic manner. Scale refers to the size of image objects relative to the scene’s resolution [39,43]. Generally, to avoid over-segmentation or under-segmentation, a scale should be chosen that results in image objects encompassing the entirety of the feature being classified [2,43]. Shape has two components: smoothness and compactness. Smoothness refers to the roughness of the edges of the image objects, and compactness refers to the object’s length versus width [39]. For this research, a weight of 0.1 for shape, a compactness of 0.8, and a scale of 11 were found to be the best for this particular site under these particular conditions. These values, which were arrived at manually, are data-scale-dependent; if this workflow were applied to LiDAR of a different quality or to a region whose morphology presented in a significantly different way, they would likely need to be adjusted.
It is important to remember that the goal of image segmentation is to break an image into homogeneous objects and that specific parameter weights are dependent on the range of values present in the input raster, as well as that raster’s resolution. In this situation, we are particularly interested in arriving at objects that span the complete width of the roads (Figure 4). Segments that span the entire width of the object being classed can gather the spectral and spatial information of neighboring objects in OBIA [2,40,43]. Within the study area, objects encompassing undisturbed hillslope areas have a higher slope value than roads that have been graded for logging equipment. With objects that span the entire width of the low-sloped roads, it is possible to use the contextual information from neighboring high-sloped objects in the classification of the road objects.
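eCognition’s multi-resolution segmentation is proprietary and cannot be reproduced exactly here. As a greatly simplified stand-in, the sketch below forms primitive objects by labeling connected low-slope and high-slope regions and computing each object’s mean slope; the threshold and all names are assumptions for illustration only:

```python
import numpy as np
from scipy import ndimage

def primitive_objects(slope: np.ndarray, low_thresh: float):
    """Toy stand-in for multi-resolution segmentation: split the slope
    raster into connected low-slope and high-slope regions, returning a
    label raster and each object's mean slope."""
    low_labels, n_low = ndimage.label(slope < low_thresh)
    high_labels, _ = ndimage.label(slope >= low_thresh)
    labels = np.where(slope < low_thresh, low_labels, high_labels + n_low)
    means = ndimage.mean(slope, labels=labels,
                         index=np.arange(1, labels.max() + 1))
    return labels, means

# A flat 5-degree "road" strip crossing a uniform 30-degree hillside
# yields three objects: the road, plus the hillslope above and below it.
slope = np.full((6, 6), 30.0)
slope[2:4, :] = 5.0
labels, means = primitive_objects(slope, low_thresh=15.0)
```

Note that the road object here spans the full road width, which is exactly the property the segmentation parameters above were tuned to achieve.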

2.6. Initial Classification and Region Growing

For this research, we considered two zones: a training zone for training the CNN model, and a validation zone for applying the model to assess its performance independently of the training [44,45]. The validation area was only used for the accuracy assessment stage of this study and was chosen where there was historic aerial photo coverage that could be accurately georeferenced. The training area was chosen based on its coverage of different types of terrain and roads to ensure that road types from a variety of environmental conditions would be included in the model.
Road objects were classified based not only on their own spectral signature but also on the signatures of their neighbors, as well as other relationships such as size and shape. As depicted in Figure 2, the position of roads in the landscape determines how they can be detected in a slope model. Roads in a mid-slope position are in most cases of a low slope and bordered on either side by high slopes (Figure 2a). After classification, these initial road objects were expanded to encompass neighboring objects using region growing algorithms.
Region growing is a method to classify image objects by first identifying a seed object within the segmented image and then growing this seed into the desired segmented object through iterations of growing and merging regions [46]. In some road detection applications, it is known where the objects to be classified are located [18], which aids in the seeding process. This is typically followed by multiple iterations of region growing until the objects are classed. However, because not only the initial seed location must be known but also when to stop the region growing, this requires significant expert knowledge of the region to be classified. This can be the most difficult part of creating an automated system of classification.
In the case of abandoned logging roads, there are two additional problems: (1) it is not known where all of the logging roads are located, and (2) the logging roads are highly fragmented and may no longer be connected due to changes in the geomorphology of the region. The latter point becomes a prohibitive complication if not every road fragment is seeded during the initial steps of the process [18]. This process of seeding and growing is time consuming, can have a number of parameters to adjust, lacks standardized documentation, and can require significant time to develop effective methods [47].
However, the object identification and seeding process can be fully automated by identifying features unique to the objects. Road features are more subtle than other types of objects extracted in OBIA, and the highly pronounced features typically utilized to automatically seed a region are not present. Because of this, it is necessary to rely on the almost imperceptible changes in the terrain that can be detected in a slope model [2,48,49,50].
For this study, three classifications of the objects within the scene were used. Objects with a very high slope are not roads and were therefore assigned to the (1) ‘exclusion’ class. Next, all other objects within the scene were assigned to the (2) ‘candidate’ class, which may or may not be roads. Then, those candidate objects with a low slope that were bordered on either side by high slopes were classified as the (3) ‘road’ class. These road objects were then grown into neighboring candidate objects with a similar slope. Roads that were not captured by the road class ruleset were classified by their road-like slope characteristics and their connectivity to roads; road seed objects that had a high certainty of being roads were grown iteratively along low-slope road pathways into areas of less certainty. This method was utilized to avoid the misclassification of other low-slope areas (such as hilltops) as roads.
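The three-class seeding logic can be sketched in plain Python; the slope thresholds below are illustrative placeholders, not the values used in the study’s eCognition ruleset:

```python
# Illustrative thresholds only; the study's rules were built in eCognition.
EXCLUDE_SLOPE = 25.0   # degrees; steeper objects cannot be roads
ROAD_SLOPE = 12.0      # degrees; road seeds must be flatter than this

def seed_classes(objects):
    """Assign each segmented object to 'road', 'candidate', or 'exclusion'.

    objects: list of dicts with 'mean_slope' and 'neighbor_slopes'.
    """
    labels = []
    for obj in objects:
        if obj["mean_slope"] > EXCLUDE_SLOPE:
            labels.append("exclusion")
        elif (obj["mean_slope"] < ROAD_SLOPE
              and all(s > EXCLUDE_SLOPE for s in obj["neighbor_slopes"])):
            # A low-slope object flanked by steep neighbors: a graded road bench.
            labels.append("road")
        else:
            labels.append("candidate")
    return labels

objects = [
    {"mean_slope": 8.0,  "neighbor_slopes": [30.0, 28.0]},   # mid-slope road
    {"mean_slope": 30.0, "neighbor_slopes": [8.0]},          # steep hillside
    {"mean_slope": 8.0,  "neighbor_slopes": [10.0, 9.0]},    # flat hilltop
]
classes = seed_classes(objects)
```

The third object illustrates why hilltops are not misclassified: it is flat, but it lacks the steep flanking neighbors that characterize a cut road bench.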
Complete coverage of all road objects was not achieved through this process, although a sampling of road objects from many different environments (hilltop and mid-slope) was found. In general, neural networks are most successful if the dataset has a sampling of objects to be classified across the entirety of the environments in which they are present [45]. These objects were then used as input samples to train the CNN machine learning algorithm.

2.7. Training and Applying the CNN

The road objects from the training subset were used as the input samples in a CNN [51]. The first 1000 sample points for each class (road, candidate, and exclusion) were taken from within their classified image objects. Then, the canvas was rotated by 30 degrees, and another 1000 samples were taken for each class. This process was repeated iteratively until the map returned to its starting orientation after 12 cycles. This rotation was implemented to increase the number of samples, as well as to ensure that the resulting CNN did not spatially correlate the sample points to one another. This resulted in a total of 36,000 samples (12,000 for each class). These samples were then used to train a 3-layer convolutional neural network, consisting of a convolutional, a pooling, and a fully connected layer. The convolutional layer is where features were extracted from the input to the CNN. The pooling layer then reduced the spatial volume of the input image. The fully connected layer connected neurons in one layer to neurons in another layer to classify the image, where the neurons are computational units designed to find patterns in the pooling layer and whose number corresponds to the number of pixels in the input image [45].
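The rotation-based augmentation can be illustrated with plain coordinate rotation: twelve 30-degree rotations return the canvas to its starting orientation while multiplying the 1000 samples per class twelvefold. This is a sketch of the idea only, not the actual sampling implementation:

```python
import math

def rotate(points, degrees):
    """Rotate (x, y) sample coordinates about the origin."""
    t = math.radians(degrees)
    c, s = math.cos(t), math.sin(t)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

# Twelve 30-degree rotations cycle the canvas back to its start,
# so 1000 samples per class per rotation gives 3 * 12 * 1000 = 36,000.
base = [(1.0, 0.0), (0.0, 2.0)]
rotated_sets = [rotate(base, 30 * k) for k in range(12)]
total_samples = 3 * len(rotated_sets) * 1000
```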
The output of the CNN was one prediction layer for each input class in the form of a heatmap. A heatmap displays the magnitude of a phenomenon; in the case of CNNs, the values of the resulting heat maps directly represent the model’s prediction of the probability (from 0 to 1) of that class being present in a given area [45,52]. The resulting heat maps from the CNN were then used as an input to guide the classification of the image segments.
Shown in Figure 5 is the resulting output of the convolutional neural network in the validation area. The CNN created a one-band heatmap for each of the three class types; these outputs were then stacked into a 3-band raster with each band (red, blue, and green) corresponding to a classification parameter (road, candidate, and exclusion). This raster stack is for visualization purposes, to better understand and conceptualize the region being analyzed; it highlights areas with a high confidence of being roads compared to areas of exclusion or uncertainty.

2.8. Classifying and Extracting the Road Network

To reclassify the CNN output, thresholds were set in the heatmaps from which the features were extracted [45,48,52]. These thresholds were applied using the maximum likelihood classifier, which designates the class to which each object in the scene belongs. The maximum likelihood classifier calculates the probability that a given object belongs to a specific class based on defined membership curves, with the value of the object being the mean of the pixels contained within it [53]. The fine-tuned membership function curves can be seen in Figure 6. An object was attributed to a specific class based on the highest value of the averaged pixels within the image object of the corresponding CNN layer (road, candidate, or exclusion). Using this method, image segments with high road heatmap values were classified as road, segments associated with the exclusion zone were classified as excluded, and the remainder as candidates. Image segments falling between these classes were classified based on the likelihood that they belong to one class or another, as defined by these curves (Figure 6). These objects were then used to re-seed the image.
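A simplified version of this object-level assignment, replacing the fine-tuned membership curves with a plain argmax over per-object heatmap means, might look like the following (all names and shapes are assumptions for illustration):

```python
import numpy as np

CLASSES = ["road", "candidate", "exclusion"]

def classify_objects(heatmaps: np.ndarray, segments: np.ndarray) -> dict:
    """Assign each image segment the class whose CNN heatmap has the
    highest mean value within it (an argmax simplification of the
    membership-curve classification).

    heatmaps: (3, H, W) probability layers; segments: (H, W) IDs from 1.
    """
    assigned = {}
    for seg in np.unique(segments):
        means = [heatmaps[b][segments == seg].mean() for b in range(3)]
        assigned[int(seg)] = CLASSES[int(np.argmax(means))]
    return assigned

# Segment 1 sits under a strong road response, segment 2 under exclusion.
segments = np.array([[1, 1], [2, 2]])
heatmaps = np.zeros((3, 2, 2))
heatmaps[0, 0, :] = 0.9    # road band high over segment 1
heatmaps[2, 1, :] = 0.8    # exclusion band high over segment 2
assigned = classify_objects(heatmaps, segments)
```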
After classification, road seed objects were grown iteratively along the pathways of the CNN to fill in classification gaps and identify hilltop roads until the network had been fully extracted. Since the CNN better identified road objects that were not evident in the slope model, a more complete and better connected road network could be obtained. Figure 7 illustrates the process of extracting this more complete road network. Figure 7a shows the initial segmentation, Figure 7b depicts the initial classification, Figure 7c shows the trained neural network’s output, and Figure 7d shows the growing process along the neural network pathways.
Next, an iterative growing and shrinking step was added to smooth the road network and prepare it for extraction. This resulting model included both roads and rivers, but this was to be expected in this type of model [2], and therefore rivers were masked out using the stream order network described in the data preparation stage. In a final step, the road network polygons were skeletonized (the centerline of a polygon) into a line type road-network and exported to a shapefile [54].
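In raster terms, the iterative grow-and-shrink step is a morphological closing. The study performed it in eCognition, but an analogous SciPy sketch, with an assumed along-row structuring element, is:

```python
import numpy as np
from scipy.ndimage import binary_closing

def smooth_roads(road_mask: np.ndarray, gap_cells: int = 1) -> np.ndarray:
    """Grow-then-shrink (morphological closing) along the row direction
    to bridge small gaps in the rasterized road network before the
    polygons are skeletonized into centrelines."""
    structure = np.ones((1, 2 * gap_cells + 1), dtype=bool)
    return binary_closing(road_mask, structure=structure)

# A one-cell gap at column 3 is bridged; off-road rows stay empty.
road = np.zeros((3, 7), dtype=bool)
road[1, 1:3] = True
road[1, 4:6] = True
closed = smooth_roads(road)
```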

2.9. Accuracy Assessment

The most common accuracy assessment technique in remote sensing studies is a ground survey, in which ground truth collected in the field is compared to the classified map. However, many of the logging roads in this project were so overgrown that accessing and verifying them on the ground was difficult or impossible. Historic aerial photos are a suitable alternative to ground surveys, especially when the features being classed can be reliably extracted from imagery. Aerial photos also have the benefit of being a form of remotely sensed data that can be processed off site, which lowers costs and effort [20].
In the case of abandoned logging roads, aerial photographs are an acceptable source of information on the state of the post-logging landscape. Aerial photos of the Point Reyes National Seashore from 1963 document the area just after logging had ceased [1]. The photos were georeferenced, and the road network was overlain and buffered with a 15 m corridor as a tolerance due to possible geometric errors [50]. Any accuracy assessment points falling within this buffer were considered a road.
Accuracy assessments consider misclassified points as either errors of omission (where a point within the road network is classified as non-road) or errors of commission (where a point outside of the road network is classified as road) [20]. These errors were assessed in the aerial imagery where vegetation had previously been removed and road/non-road areas could be clearly seen. The post-logging aerial photos still had some small areas obscured from view, which could not be vectorized properly. In these areas, where there was uncertainty as to the validity of the classified points, ground surveys were used instead. Thus, ground surveys were used in two instances: (1) the roads were obscured by vegetation, or (2) there was no photo coverage.
This accuracy assessment only considered two classes: road and non-road. A total of 500 accuracy assessment points were taken across these two classes within their feature areas [20,55]. A stratified random accuracy assessment was utilized where points were randomly distributed within each class and where each class had a number of points proportional to its relative area [56]. Errors of omission and commission were calculated, with the final result being a confusion matrix whose values were used to calculate accuracy and precision.
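The reported metrics follow directly from the confusion matrix counts. As a sketch, the helper below computes accuracy, precision, and Cohen’s kappa for a two-class matrix; the counts are hypothetical, for illustration only (not taken from Table 1):

```python
def confusion_metrics(tp: int, fp: int, fn: int, tn: int):
    """Accuracy, precision, and Cohen's kappa from a two-class
    (road / non-road) confusion matrix."""
    n = tp + fp + fn + tn
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    # Chance agreement for kappa: product of marginal proportions per class.
    p_road = ((tp + fp) / n) * ((tp + fn) / n)
    p_non = ((fn + tn) / n) * ((fp + tn) / n)
    p_e = p_road + p_non
    kappa = (accuracy - p_e) / (1 - p_e)
    return accuracy, precision, kappa

# Hypothetical counts for illustration only (not the study's Table 1).
acc, prec, kappa = confusion_metrics(tp=110, fp=1, fn=3, tn=386)
```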

3. Results

Depicted in Figure 8 is the extracted road network overlaid on a historic aerial photo of the region that was used for the accuracy assessment. Through simple visual inspection, these road networks can still be reliably seen despite 60 years of naturalization.
This research successfully extracted the vast majority of the forest roads within the study area. Within the validation area, a total of 25.725 km of roads were extracted within a 1 km2 area.
The confusion matrix results from the accuracy assessment are found in Table 1. Formulas used for calculating accuracy and precision can be found in Table 2. For this area, a Kappa of 0.977 (Table 1), a precision of 0.991, and an accuracy of 0.992 were found (Table 2). Kappa values have generally been falling out of favor in the remote sensing community, with a variety of other metrics taking their place [55] (as shown in Table 2). Many recent image-analysis papers have adopted confusion matrix formulas in their results [45,55,56,57,58,59]. Since there is a lack of standardization between studies, all standard formulas have been included here to allow for comparison.

4. Conclusions

The goal of this study was to extract historic forest road networks out of LiDAR point clouds under heavy canopy. This work attempted to augment the currently used automated road extraction process to better capture hilltop roads and fill gaps in cutbank roads by investigating the use of CNNs to automate and improve OBIA accuracy.
The results have shown that roads could be accurately identified in LiDAR using OBIA in conjunction with CNNs. Specifically, the results show that OBIA methods can be improved upon using CNNs. CNNs help to increase classification accuracy and automate extraction steps that, in previous research, had to be processed using non-autonomous methods. This study’s coupled CNN and OBIA approach minimized these errors, improved classification accuracy, and captured hilltop roads.
When looking at studies of similar areas, such as Sherba et al. [2], pixel-based classification results show that the unsupervised classification of logging roads had a total accuracy of 78%. They also found that the errors of commission occurred largely on ridgelines where large areas of low slope were present, and errors of omission occurred as gaps in the road network [2]. In their paper, when a strictly fully automated OBIA approach was used, an initial classification accuracy of 86% was found. Additionally, after extracting out misclassified drainages and incorporating in hand-digitized ridge roads, classification accuracy increased to 90%.
Sherba et al. [2] classified their segmented slope model into two initial classes: ‘certain road objects’, and ‘road candidate objects’, which were treated as roads only if they were near the initial road class. They then grew the road objects into the candidate object class to create a logging road map [2]. A major difficulty in this workflow was the time-consuming nature of the task and the lack of automation in its identification of hilltop roads. In order to increase the practicality of road extraction methods, this research effort also added an exclusion class (of areas certain not to be roads) to restrict the possibility of roads growing into areas with an extremely low likelihood of being roads.
This work combined their OBIA process with CNN machine learning techniques. First, OBIA was used to carry out an initial classification of the region; then, these objects were used as samples for a CNN machine learning algorithm. The output of the CNN was then used to re-segment the image. Next, the resulting segmentation was classified with a maximum likelihood classifier, from which the road network was extracted. This process increased classification accuracy, automated the identification of hilltop roads, and aided the filling of gaps in hillside roads.
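The maximum likelihood step can be illustrated with a minimal sketch. This is not the eCognition implementation; the training values, the three heatmap bands, and the helper functions are hypothetical, and a diagonal (per-band) Gaussian is assumed for simplicity:

```python
# Sketch: Gaussian maximum likelihood classification of image objects from
# their mean CNN heatmap responses (road, candidate, exclusion bands).
# All training vectors below are hypothetical illustration values.
import numpy as np

def fit_class(samples):
    """Per-band mean and variance from a class's training objects."""
    X = np.asarray(samples, dtype=float)
    return X.mean(axis=0), X.var(axis=0) + 1e-6  # small floor avoids /0

def log_likelihood(x, mean, var):
    """Log density of x under an independent-band Gaussian model."""
    return float(np.sum(-0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)))

def classify(x, classes):
    """Assign x to the class with the highest log likelihood."""
    return max(classes, key=lambda c: log_likelihood(x, *classes[c]))

# Hypothetical mean heatmap triplets (road, candidate, exclusion bands)
classes = {
    "road":      fit_class([(0.90, 0.20, 0.10), (0.80, 0.30, 0.10), (0.85, 0.25, 0.05)]),
    "candidate": fit_class([(0.40, 0.80, 0.20), (0.50, 0.70, 0.30), (0.45, 0.75, 0.25)]),
    "exclusion": fit_class([(0.10, 0.20, 0.90), (0.05, 0.30, 0.80), (0.10, 0.25, 0.85)]),
}

label = classify(np.array([0.82, 0.30, 0.10]), classes)
```

An object with a strong road-band response is assigned to the road class; eCognition's membership functions (Figure 6) play an analogous role over the same three heatmap bands.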
Some novel findings of this research are:
  • In order to segment a slope raster, objects should completely span roads; this allows neighboring objects (for example, those with higher slope values) to be incorporated into the classification of road objects.
  • When attempting to expand the road network by growing road-classified image objects, the low slope of the road can be used, but results are greatly improved by using the values contained in the CNN road heatmap.
  • Road objects and stream objects are almost identical. At this stage, it is recommended to mask out these objects during post-processing. Although an automated process to remove these is desirable, more research is needed.
  • The output of the CNN appears to highlight road objects that may be producing more sediment. Proportional road surface area has been shown to be positively correlated with sediment discharge [60]. The heatmaps resulting from the CNN appear, empirically, to associate higher road values with road objects of greater surface area. Future research is needed to determine whether the road features with higher CNN values are producing more sediment and should therefore have a higher remediation priority. If so, it may be possible to adapt the CNN algorithm to produce a ‘road order’ map of the road objects with the greatest impact on streams in the region. Achieving this objective would require field measurements of the sediment loads produced by these features.
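The growing step described in the bullets above can be sketched as a breadth-first expansion of road labels into adjacent candidate objects, with the exclusion class blocking growth. The object-adjacency graph here is a toy stand-in; eCognition handles neighborhood relations natively:

```python
# Sketch: grow 'road' labels into neighboring 'candidate' objects.
# 'exclusion' objects are never relabeled and therefore block growth paths.
from collections import deque

def grow_roads(labels, adjacency):
    """Breadth-first growth of road labels through candidate objects."""
    grown = dict(labels)
    queue = deque(obj for obj, cls in grown.items() if cls == "road")
    while queue:
        obj = queue.popleft()
        for neighbor in adjacency.get(obj, ()):
            if grown[neighbor] == "candidate":  # exclusion objects stay put
                grown[neighbor] = "road"
                queue.append(neighbor)
    return grown

# Toy chain of five objects: object 4 is excluded, so growth stops there
# and object 5 remains a candidate.
labels = {1: "road", 2: "candidate", 3: "candidate", 4: "exclusion", 5: "candidate"}
adjacency = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
result = grow_roads(labels, adjacency)
```

The same logic explains the second bullet: seeding growth only from high-confidence CNN road objects keeps the expansion from leaking into low-slope ridgetops.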

Author Contributions

W.W. conceptualized and designed the study, conducted the research, analyzed the data, and wrote the manuscript. L.B. and E.H. provided guidance and critical feedback throughout the research process and contributed to the editing and revision of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The proprietary nature of the data prevents us from publicly archiving or sharing it at this time. Interested researchers may request access to the data from the Golden Gate National Parks Conservancy. Details on how to obtain access to the data can be obtained by contacting the organization directly. The methodology employed in this study is provided in the Appendix A of the paper.

Acknowledgments

We thank Shawn Maloney of Point Reyes National Seashore for his help in organizing the project and his input on the selection of the study area; Paul Engel of Point Reyes National Seashore for his help with accessing the museum containing the historic photos and with finding the proper photos for the study region; and Ben Becker of Point Reyes National Seashore for his help with organizing the project.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CNN    Convolutional neural network
LiDAR  Light detection and ranging
OBIA   Object-based image analysis
DEM    Digital elevation model

Appendix A. Process Tree

Figure A1. Complete eCognition process tree to extract the road network.

References

  1. Becker, B.; National Park Service, Point Reyes, USA; Maloney, S.; National Park Service, Point Reyes, USA. Personal communication, 2020.
  2. Sherba, J.; Blesius, L.; Davis, J. Object-Based Classification of Abandoned Logging Roads under Heavy Canopy Using LiDAR. Remote Sens. 2014, 6, 4043–4060. [Google Scholar] [CrossRef] [Green Version]
  3. White, R.A.; Dietterick, B.C.; Mastin, T.; Strohman, R. Forest Roads Mapped Using LiDAR in Steep Forested Terrain. Remote Sens. 2010, 2, 1120–1141. [Google Scholar] [CrossRef] [Green Version]
  4. Rab, M.A. Recovery of soil physical properties from compaction and soil profile disturbance caused by logging of native forest in Victorian Central Highlands, Australia. For. Ecol. Manag. 2004, 191, 329–340. [Google Scholar] [CrossRef]
  5. Troendle, C.A.; King, R.M. The Effect of Timber Harvest on the Fool Creek Watershed, 30 Years Later. Water Resour. Res. 1985, 21, 1915–1922. [Google Scholar] [CrossRef] [Green Version]
  6. Luce, C.H.; Black, T.A. Sediment production from forest roads in western Oregon. Water Resour. Res. 1999, 35, 2561–2570. [Google Scholar] [CrossRef]
  7. Underwood, K.R. The Effects of Hillslopes on Trail Degradation Olympic National Park, Washington; University of Arkansas: Fayetteville, NC, USA, 2009. [Google Scholar]
  8. Douglas, I.; Spencer, T.; Greer, T.; Bidin, K.; Sinun, W.; Meng, W.W. The Impact of Selective Commercial Logging on Stream Hydrology, Chemistry and Sediment Loads in the Ulu Segama Rain Forest, Sabah, Malaysia. Philos. Trans. Biol. Sci. 1992, 335, 397–406. [Google Scholar] [CrossRef]
  9. Wemple, B.C.; Swanson, F.J.; Jones, J.A. Forest roads and geomorphic process interactions, Cascade Range, Oregon. Earth Surf. Process. Landforms 2001, 26, 191–204. [Google Scholar] [CrossRef]
  10. Wong, K.; Neo, L. Species richness, lineages, geography, and the forest matrix: Borneo’s ‘Middle Sarawak’ phenomenon. Gard. Bull. Singap. 2019, 71, 463–496. [Google Scholar] [CrossRef]
  11. Kelly, J.P.; Fox, K.J. Fish Species of Tomales Bay and Its Watershed; The Tomales Bay Association: Point Reyes Station, CA, USA, 1995. [Google Scholar]
  12. Ketcham, B.J. Point Reyes National Seashore Water Quality Monitoring Report, May 1999–May 2001; Point Reyes National Seashore: Point Reyes Station, CA, USA, 2001. [Google Scholar]
  13. Gronsdahl, S.; Moore, R.D.; Rosenfeld, J.; McCleary, R.; Winkler, R. Effects of forestry on summertime low flows and physical fish habitat in snowmelt-dominant headwater catchments of the Pacific Northwest. Hydrol. Process. 2019, 33, 3152–3168. [Google Scholar] [CrossRef]
  14. Jacob, L.; Prudente, B.; Montag, L.; Silva, R. The effect of different logging regimes on the ecomorphological structure of stream fish assemblages in the Brazilian Amazon. Hydrobiologia 2021, 848, 1027–1039. [Google Scholar] [CrossRef]
  15. Baxter, C.V.; Frissell, C.A.; Hauer, F.R. Geomorphology, Logging Roads, and the Distribution of Bull Trout Spawning in a Forested River Basin: Implications for Management and Conservation. Trans. Am. Fish. Soc. 1999, 128, 854–867. [Google Scholar] [CrossRef]
  16. Dan Moore, R.; Wondzell, S. Physical hydrology and the effects of forest harvesting in the Pacific Northwest: A review. J. Am. Water Resour. Assoc. 2005, 41, 763–784. [Google Scholar] [CrossRef]
  17. Ahnert, F.O. Introduction to Geomorphology; Arnold: London, UK, 1998. [Google Scholar]
  18. Li, P.; Wang, R.; Wang, Y.; Gao, G. Automated Method of Extracting Urban Roads Based on Region Growing from Mobile Laser Scanning Data. Sensors 2019, 19, 5262. [Google Scholar] [CrossRef] [Green Version]
  19. Li, Y.; Yong, B.; Wu, H.; An, R.; Xu, H. Road detection from airborne LiDAR point clouds adaptive for variability of intensity data. Optik 2015, 126, 4292–4298. [Google Scholar] [CrossRef]
  20. Lillesand, T.M.; Kiefer, R.W.; Chipman, J.W. Remote Sensing and Image Interpretation, 7th ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2015; p. 720. [Google Scholar]
  21. Mena, J. State of the art on automatic road extraction for GIS update: A novel classification. Pattern Recognit. Lett. 2003, 24, 3037–3058. [Google Scholar] [CrossRef]
  22. Wang, W.; Yang, N.; Zhang, Y.; Wang, F.; Cao, T.; Eklund, P. A review of road extraction from remote sensing images. J. Traffic Transp. Eng. Engl. Ed. 2016, 3, 271–282. [Google Scholar] [CrossRef] [Green Version]
  23. Yucong, L.; Saripalli, S. Road detection from aerial imagery. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, St. Paul, MN, USA, 14–18 May 2012. [Google Scholar] [CrossRef]
  24. Stoker, J.M.; Brock, J.C.; Soulard, C.E.; Ries, K.G.; Sugarbaker, L.; Newton, W.E.; Haggerty, P.K.; Lee, K.E.; Young, J.A. USGS Lidar Science Strategy—Mapping the Technology to the Science: U.S. Geological Survey Open-File Report 2015; US Department of the Interior, US Geological Survey: Washington, DC, USA, 2016; Volume 1209, p. 33. [Google Scholar] [CrossRef]
  25. Preetha, M.M.S.J.; Suresh, L.P.; Bosco, M.J. Image segmentation using seeded region growing. In Proceedings of the 2012 International Conference on Computing, Electronics and Electrical Technologies (ICCEET), Nagercoil, India, 21–22 March 2012. [Google Scholar] [CrossRef]
  26. Khatami, R.; Mountrakis, G.; Stehman, S.V. A meta-analysis of remote sensing research on supervised pixel-based land-cover image classification processes: General guidelines for practitioners and future research. Remote Sens. Environ. 2016, 177, 89–100. [Google Scholar] [CrossRef] [Green Version]
  27. Lu, D.; Weng, Q. A survey of image classification methods and techniques for improving classification performance. Int. J. Remote Sens. 2007, 28, 823–870. [Google Scholar] [CrossRef]
  28. Karami, A.; Khoorani, A.; Noohegar, A.; Shamsi, S.R.F.; Moosavi, V. Gully erosion mapping using object-based and pixel-based image classification methods. Environ. Eng. Geosci. 2015, 21, 101–110. [Google Scholar] [CrossRef]
  29. Ferreira, M.P.; Lotte, R.G.; D’Elia, F.V.; Stamatopoulos, C.; Kim, D.H.; Benjamin, A.R. Accurate mapping of Brazil nut trees (Bertholletia excelsa) in Amazonian forests using WorldView-3 satellite images and convolutional neural networks. Ecol. Inform. 2021, 63, 101302. [Google Scholar] [CrossRef]
  30. Zhang, L.; Zhang, L.; Du, B. Deep Learning for Remote Sensing Data: A Technical Tutorial on the State of the Art. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40. [Google Scholar] [CrossRef]
  31. Martins, V.S.; Kaleita, A.L.; Gelder, B.K.; da Silveira, H.L.F.; Abe, C.A. Exploring multiscale object-based convolutional neural network (multi-OCNN) for remote sensing image classification at high spatial resolution. ISPRS J. Photogramm. Remote Sens. 2020, 168, 56–73. [Google Scholar] [CrossRef]
  32. Robson, B.A.; Bolch, T.; MacDonell, S.; Hölbling, D.; Rastner, P.; Schaffer, N. Automated detection of rock glaciers using deep learning and object-based image analysis. Remote Sens. Environ. 2020, 250, 112033. [Google Scholar] [CrossRef]
  33. National Research Council; Division on Earth and Life Studies; Ocean Studies Board; Committee on Best Practices for Shellfish Mariculture and the Effects of Commercial Activities in Drakes Estero, Pt. Reyes National Seashore, California. Shellfish Mariculture in Drakes Estero, Point Reyes National Seashore, California; National Academies Press: Washington, DC, USA, 2009. [Google Scholar]
  34. Livingston, D. A Good Life: Dairy Farming in the Olema Valley: A History of the Dairy and Beef Ranches of the Olema Valley and Lagunitas Canyon, Golden Gate National Recreation Area and Point Reyes National Seashore, Marin County, California; Historic Resource Study, National Park Service; Department of the Interior: San Francisco, CA, USA, 1995. [Google Scholar]
  35. Quantum Spatial Inc. Marin County, California QL1 LiDAR: Technical Data Report; Quantum Spatial Inc.: Corvallis, OR, USA, 2019. [Google Scholar]
  36. ESRI. ArcGIS Pro: Version 2.7. 2020. Available online: https://support.esri.com/en/technical-article/000012500 (accessed on 10 May 2023).
  37. eCognition. Version 10. 2020. Available online: https://geospatial.trimble.com/ (accessed on 10 May 2023).
  38. Alon, A.S.; Festijo, E.D.; Juanico, D.E.O. An Object-Based Supervised Nearest Neighbor Method for Extraction of Rhizophora in Mangrove Forest from LiDAR Data and Orthophoto. In Proceedings of the 2019 IEEE 9th International Conference on System Engineering and Technology (ICSET), Shah Alam, Malaysia, 7 October 2019. [Google Scholar] [CrossRef]
  39. El-naggar, A.M. Determination of optimum segmentation parameter values for extracting building from remote sensing images. Alex. Eng. J. 2018, 57, 3089–3097. [Google Scholar] [CrossRef]
  40. Xu, J.; Luo, C.; Chen, X.; Wei, S.; Luo, Y. Remote Sensing Change Detection Based on Multidirectional Adaptive Feature Fusion and Perceptual Similarity. Remote Sens. 2021, 13, 3053. [Google Scholar] [CrossRef]
  41. Carreira-Perpiñán, M.Á. A review of mean-shift algorithms for clustering. arXiv 2015, arXiv:1503.00687. [Google Scholar]
  42. Tab, F.A.; Naghdy, G.; Mertins, A. Scalable multiresolution color image segmentation. Signal Process. 2006, 86, 1670–1687. [Google Scholar] [CrossRef] [Green Version]
  43. Duan, G.; Zhang, J.; Zhang, S. Assessment of Landslide Susceptibility Based on Multiresolution Image Segmentation and Geological Factor Ratings. Int. J. Environ. Res. Public Health 2020, 17, 7863. [Google Scholar] [CrossRef]
  44. Navulur, K. Multispectral Image Analysis Using the Object-Oriented Paradigm; CRC Press/Taylor & Francis: Boca Raton, NJ, USA, 2007; Volume 24, p. 165. [Google Scholar]
  45. Prakash, N.; Manconi, A.; Loew, S. Mapping Landslides on EO Data: Performance of Deep Learning Models vs. Traditional Machine Learning Models. Remote Sens. 2020, 12, 346. [Google Scholar] [CrossRef] [Green Version]
  46. Benz, U.C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS J. Photogramm. Remote Sens. 2004, 58, 239–258. [Google Scholar] [CrossRef]
  47. Idrees, M.; Pradhan, B. Hybrid Taguchi-Objective Function Optimization Approach For Automatic Cave Bird Detection From Terrestrial Laser Scanning Intensity Image. Int. J. Speleol. 2016, 45, 289–301. [Google Scholar] [CrossRef] [Green Version]
  48. De Luca, G.N.; Silva, J.M.; Cerasoli, S.; Araújo, J.; Campos, J.; Di Fazio, S.; Modica, G. Object-Based Land Cover Classification of Cork Oak Woodlands using UAV Imagery and Orfeo ToolBox. Remote Sens. 2019, 11, 1238. [Google Scholar] [CrossRef] [Green Version]
  49. Erikson, M. Segmentation of individual tree crowns in colour aerial photographs using region growing supported by fuzzy rules. Can. J. For. Res. 2003, 33, 1557–1563. [Google Scholar] [CrossRef]
  50. Zhen, Z.; Quackenbush, L.; Zhang, L. Impact of Tree-Oriented Growth Order in Marker-Controlled Region Growing for Individual Tree Crown Delineation Using Airborne Laser Scanner (ALS) Data. Remote Sens. 2014, 6, 555–579. [Google Scholar] [CrossRef] [Green Version]
  51. eCognition. Using Deep Learning Models. 2023. Available online: https://tinyurl.com/3we9rfvr (accessed on 10 May 2023).
  52. Timilsina, S.; Aryal, J.; Kirkpatrick, J.B. Mapping Urban Tree Cover Changes Using Object-Based Convolution Neural Network (OB-CNN). Remote Sens. 2020, 12, 3017. [Google Scholar] [CrossRef]
  53. Richards, J.A. Remote Sensing Digital Image Analysis: An Introduction, 5th ed.; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  54. Lewandowicz, E.; Flisek, P. Base Point Split Algorithm for Generating Polygon Skeleton Lines on the Example of Lakes. ISPRS Int. J. Geo-Inf. 2020, 9, 680. [Google Scholar] [CrossRef]
  55. Guillén, L.A. Accuracy Assessment in Convolutional Neural Network-Based Deep Learning Remote Sensing Studies—Part 1: Literature Review. Remote Sens. 2021, 13, 2450. [Google Scholar]
  56. Foody, G.M. Explaining the unsuitability of the kappa coefficient in the assessment and comparison of the accuracy of thematic maps obtained by image classification. Remote Sens. Environ. 2020, 239, 111630. [Google Scholar] [CrossRef]
  57. Papp, A.; Pegoraro, J.; Bauer, D.; Taupe, P.; Wiesmeyr, C.; Kriechbaum-Zabini, A. Automatic Annotation of Hyperspectral Images and Spectral Signal Classification of People and Vehicles in Areas of Dense Vegetation with Deep Learning. Remote Sens. 2020, 12, 2111. [Google Scholar] [CrossRef]
  58. Veeranampalayam Sivakumar, A.N.; Li, J.; Scott, S.; Psota, E.J.; Jhala, A.; Luck, J.D.; Shi, Y. Comparison of Object Detection and Patch-Based Classification Deep Learning Models on Mid- to Late-Season Weed Detection in UAV Imagery. Remote Sens. 2020, 12, 2136. [Google Scholar] [CrossRef]
  59. Yang, M.D.; Tseng, H.H.; Hsu, Y.C.; Tsai, H.P. Semantic Segmentation Using Deep Learning with Vegetation Indices for Rice Lodging Identification in Multi-date UAV Visible Images. Remote Sens. 2020, 12, 633. [Google Scholar] [CrossRef] [Green Version]
  60. Reid, L.M.; Dunne, T. Sediment production from forest road surfaces. Water Resour. Res. 1984, 20, 1753–1761. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The 11.75 km2 study area is located in the Five Brooks region of the Point Reyes National Seashore. Sentinel 2 imagery from 16 October 2020.
Figure 2. Two examples of abandoned logging roads present within the study area, where location (a) depicts a sample area within the study area where a road has been placed in a mid-slope position. Location (b) depicts a road in a hill-top position. In photos on left, natural slope is shown in blue, and incised logging roads are marked in red. Graphic on the right depicts cross-sectional views of LiDAR taken of same location.
Figure 3. Workflow from unprocessed LiDAR to extracted road network.
Figure 4. A small subset of the training area after segmentation.
Figure 5. The output of the convolutional neural network. Each color band corresponds to each one of the three classes present in the model. Red represents the areas of highest likelihood of being roads (road class); blue represents possible roads (candidate class); and green highest likelihood of not being roads (exclusion class).
Figure 6. Membership functions in eCognition for maximum likelihood classifier from CNN heatmaps, where item (a) shows road membership, item (b) depicts the candidate class for growing roads into, and item (c) shows the exclusion class.
Figure 7. The road classification process, where (a) shows initial segmentation, item (b) depicts initial classification, item (c) shows the trained neural network’s output, and item (d) shows the growing process along neural network pathways. In items (b) through (d), red represents the areas with the highest likelihood of being roads (road class); blue represents possible roads (candidate class); and green represents the highest likelihood of not being roads (exclusion class).
Figure 8. The extracted road network overlain over a historic aerial photo of the region that was used in the accuracy assessment.
Table 1. Confusion matrix from accuracy assessment.
              Roads     Non-Roads   Total   Accuracy   Kappa
Roads           112           1       113    0.99115
Non-Roads         3         384       387    0.99225
Total           115         385       500
Accuracy    0.97391     0.99740              0.992
Kappa                                                  0.97728
Table 2. Formulas for understanding the confusion matrix results. TP = true positive, TN = true negative, FP = false positive, and FN = false negative.
Measure                            Value    Derivation
Sensitivity                        0.9739   TPR = TP/(TP + FN)
Specificity                        0.9974   SPC = TN/(FP + TN)
Precision                          0.9912   PPV = TP/(TP + FP)
Negative Predictive Value          0.9922   NPV = TN/(TN + FN)
False Positive Rate                0.0026   FPR = FP/(FP + TN)
False Discovery Rate               0.0088   FDR = FP/(FP + TP)
False Negative Rate                0.0261   FNR = FN/(FN + TP)
Accuracy                           0.9920   ACC = (TP + TN)/(TP + TN + FP + FN)
F1 Score                           0.9825   F1 = 2TP/(2TP + FP + FN)
Matthews Correlation Coefficient   0.9773   MCC = (TP·TN − FP·FN)/sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN))
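The Table 2 values follow directly from the Table 1 counts (TP = 112, FP = 1, FN = 3, TN = 384). A short Python check reproduces them, including the Kappa reported in Table 1:

```python
# Recompute the reported metrics from the confusion matrix counts.
import math

TP, FP, FN, TN = 112, 1, 3, 384
n = TP + FP + FN + TN  # 500 reference samples

sensitivity = TP / (TP + FN)                # 0.9739
specificity = TN / (FP + TN)                # 0.9974
precision = TP / (TP + FP)                  # 0.9912
accuracy = (TP + TN) / n                    # 0.9920
f1 = 2 * TP / (2 * TP + FP + FN)            # 0.9825
mcc = (TP * TN - FP * FN) / math.sqrt(
    (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))  # 0.9773

# Cohen's kappa: observed agreement corrected for chance agreement
p_o = accuracy
p_e = ((TP + FP) * (TP + FN) + (TN + FN) * (TN + FP)) / n**2
kappa = (p_o - p_e) / (1 - p_e)             # 0.9773
```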