Article

Cadastral-to-Agricultural: A Study on the Feasibility of Using Cadastral Parcels for Agricultural Land Parcel Delineation

Lyles School of Civil and Construction Engineering, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47907, USA
*
Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(19), 3568; https://doi.org/10.3390/rs16193568
Submission received: 22 July 2024 / Revised: 15 September 2024 / Accepted: 18 September 2024 / Published: 25 September 2024

Abstract
Agricultural land parcels (ALPs) are essential for effective agricultural management, influencing activities ranging from crop yield estimation to policy development. However, traditional methods of ALP delineation are often labor-intensive and require frequent updates due to the dynamic nature of agricultural practices. Additionally, the significant variations across different regions and the seasonality of agriculture pose challenges to the automatic generation of accurate and timely ALP labels for extensive areas. This study introduces the cadastral-to-agricultural (Cad2Ag) framework, a novel approach that utilizes cadastral data as training labels to train deep learning models for the delineation of ALPs. Cadastral parcels, which are relatively widely available and stable elements in land management, serve as proxies for ALP delineation. Employing an adapted U-Net model, the framework automates the segmentation process using remote sensing images and geographic information system (GIS) data. This research evaluates the effectiveness of the proposed Cad2Ag framework in two U.S. regions—Indiana and California—characterized by diverse agricultural conditions. Through rigorous evaluation across multiple scenarios, the study explores ways to enhance the accuracy and efficiency of ALP delineation. Notably, the framework demonstrates effective ALP delineation across different geographic contexts through transfer learning when supplemented with a small set of clean labels, achieving an F1-score of 0.80 and an Intersection over Union (IoU) of 0.67 using only 200 clean label samples. The Cad2Ag framework’s ability to leverage automatically generated, extensive, free training labels presents a promising solution for efficient ALP delineation, thereby facilitating effective management of agricultural land.

1. Introduction

The agricultural land parcel (ALP) provides key information for crop yield estimation, crop type classification, land use management, and various agricultural applications. Spatially defining the boundaries of agricultural land at the parcel level can effectively aggregate various information about fields, thus aiding sound management of agriculture policies [1,2]. European policies, such as the Common Agricultural Policy, utilize systems like the Land Parcel Identification System as foundational tools for effective spatial management [3]. Similarly, in the United States, although the concept of systematic ALP delineation is less centralized, it is widely implemented through various state and local initiatives that utilize geographic information systems (GISs). Programs like the USDA’s common land unit serve similar purposes in mapping farm fields for agricultural and conservation programs [4]. Globally, land managers recognize the potential benefits of such systems in enhancing sustainable agricultural practices. Well-defined agricultural parcels not only streamline farm management and subsidy distribution but also improve environmental compliance and resource management [5,6,7,8].
Although the ALP offers benefits for agricultural applications, it requires substantial resources for delineation and maintenance due to the dynamic nature of farming, unlike cadastral parcels, which are based on land ownership and more commonly recognized as foundational to land management systems [9]. Consequently, efforts to automatically delineate ALPs are gaining attention [10,11,12,13]. Typical algorithms for parcel delineation operate under the assumption that there are noticeable changes in land use or cover characteristics between adjacent parcels. Traditional approaches to parcel delineation rely on edge detection algorithms. Techniques that use convolution filters, such as Canny edge detection [14], were popular in the initial stages of research [15]. Following these methods, an ultrametric contour map (UCM) [16], generated from globalized probability of boundary (gPb)-based contour detection [17], was adopted for automatic parcel delineation [18]. More recently, the evolution of artificial intelligence has enabled the use of deep neural networks, such as U-Net, for segmenting the ALP. The performance of these networks has been compared to the gPb-UCM method, demonstrating significant improvements in the automation of ALP delineation [11,19].
Despite these advancements, deep learning approaches often face challenges, primarily due to the need for high-quality training labels and the intensive resources required for continuous updates, driven by the dynamic nature of agricultural activities [20]. These factors frequently lead to poor model performance and limited transferability to regions with different agricultural characteristics. While domain adaptation and transfer learning methods have been developed to address data distribution shift problems [21,22], the scarcity of reliable training samples across diverse landscapes remains a significant challenge in ALP delineation. Consequently, robust datasets are critical for training deep learning models to enable automated ALP extraction across diverse regions.
This research identifies cadastral parcels as promising surrogate training labels for ALP. Cadastral parcels are relatively stable elements of land management systems based on land ownership and are more widely available across different countries and states compared to agricultural parcels [9,23]. This availability makes them effective proxies for training models for ALP delineation. However, using cadastral parcels as training datasets introduces several challenges: their boundaries often misalign with actual agricultural patterns, leading to the prevalence of noisy labels due to discrepancies between cadastral and agricultural boundaries. Such inaccuracies can significantly impair the model’s accuracy and its ability to generalize. Additionally, class imbalance within the training data can degrade model performance, as the disproportionate representation of certain classes leads to biased learning and ineffective classification [24].
This paper proposes and examines a methodological framework for transitioning cadastral parcels to agricultural use: the Cad2Ag framework. Our framework involves redefining parcel classes and refining training datasets to facilitate the large-scale production of training labels for ALP tasks from cadastral labels. To address the prevalent class imbalance issues in this task, we refined the training data to ensure balanced representation and employed weighted cross-entropy in the loss function for its simplicity and effectiveness in improving model learning. Additionally, we explored the incorporation of multi-temporal remote sensing data to improve model robustness and assess performance across various training data volumes. The framework’s transferability was also evaluated by applying it to geographically distant and distinct locations. To assess the effectiveness of the Cad2Ag framework, we adopted U-Net as a representative example due to its widespread adoption and robust performance in semantic segmentation tasks for remote sensing data. The contribution of our work lies in developing a comprehensive framework that generates reliable and cost-effective agricultural labels from cadastral data, designed to train deep learning models for ALP delineation that are effective across diverse agricultural patterns and temporal variations.
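As a concrete illustration of the weighted cross-entropy used to counter class imbalance, the sketch below implements it in plain NumPy over flattened pixels. The function name and the example weights are hypothetical; in practice, the weights would be derived from the class frequencies of the generated labels (the same weighting is available through PyTorch's built-in loss).

```python
import numpy as np

def weighted_cross_entropy(probs, targets, weights):
    """Mean weighted cross-entropy over flattened pixels.

    probs:   (N, C) predicted class probabilities (rows sum to 1)
    targets: (N,)   integer class labels in [0, C)
    weights: (C,)   per-class weights, e.g. inverse class frequency
    """
    eps = 1e-12  # guard against log(0)
    picked = probs[np.arange(len(targets)), targets]
    return float(np.mean(-weights[targets] * np.log(picked + eps)))

# Upweighting a rare class raises its contribution to the loss:
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
targets = np.array([0, 1])
uniform = weighted_cross_entropy(probs, targets, np.array([1.0, 1.0]))
upweighted = weighted_cross_entropy(probs, targets, np.array([1.0, 4.0]))
```

With an underrepresented class upweighted, mislabeling its pixels costs more, nudging the model away from the biased learning described above.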
The remainder of this article is structured as follows. Section 2 introduces the study area and materials. Section 3 presents the proposed framework, Cad2Ag, including class redefinition, label data generation, and data refinement. Section 4 outlines the experimental design used to examine the effectiveness of the Cad2Ag framework. Section 5 shows the experimental results, along with an investigation of various factors to enhance the performance and practicality of the framework. Section 6 discusses the significance and limitations of the study. Finally, Section 7 concludes the article.

2. Study Area and Materials

This study utilized Indiana, USA, for generating agricultural parcel labels from cadastral labels and assessing the performance of the proposed framework. Additionally, Fresno County in California was chosen for the transfer learning experiment due to its significantly different field signatures compared to Indiana. Both regions are predominantly agricultural, aligning with our focus on identifying agricultural parcels in farmlands. The primary crops in Indiana are soybeans and corn, whereas, in California, the major crops are almonds and grapes. The distinct agricultural features of these regions, driven by their primary crops, make these datasets useful for the study. Cadastral parcel data were sourced from the Indiana GIS Data Harvest program of 2020. Auxiliary data, including digital surface models and road centerline data, which are widely available across the U.S., were used to refine the cadastral parcels. The digital surface model was obtained from the USGS’s 3D elevation program (https://www.usgs.gov/3d-elevation-program, accessed on 17 September 2024) [25], and the road centerline data were also sourced from the Indiana GIS Data Harvest program (https://data-harvest-ingov.hub.arcgis.com/, accessed on 17 September 2024). The detailed label-generation process is further explained in the next section.
For remotely sensed imagery, this study employs orthomosaic images from the National Agriculture Imagery Program (NAIP) (https://naip-usdaonline.hub.arcgis.com/, accessed on 17 September 2024) [26]. The geographical extent of our study region is depicted in Figure 1. Managed by the USDA’s Farm Service Agency, the NAIP provides high-quality aerial imagery of the United States’ agricultural regions. The imagery features a ground sampling distance (GSD) ranging from 0.6 to 1 m and includes four multispectral bands (blue, green, red, and near-infrared). NAIP imagery is typically captured during the agricultural growing seasons on a three-year cycle, though some states may opt for annual captures depending on funding and program requirements. This schedule ensures that the imagery remains relatively up-to-date, facilitating the monitoring of changes in agricultural landscapes over time. The widespread availability and accumulated historical data make this dataset ideal for validating the effectiveness of the proposed framework and demonstrating its practicality. In our study, three multi-temporal NAIP datasets acquired in June to September 2016, July 2018, and June 2020, all with a 0.6 m GSD, were utilized (examples shown in Figure 2).

3. The Cad2Ag Framework

The Cad2Ag framework is a methodological approach designed to leverage cadastral parcels as proxies for ALP delineation. The primary goal of this framework is to generate accurate and reliable training datasets for deep learning models, which can then be used to delineate agricultural parcels across diverse landscapes. The framework encompasses several key processes: redefining parcel classes to align with agricultural patterns, generating labeled data through an automated workflow, and refining these datasets to ensure balanced and effective model training. By systematically addressing these steps, the Cad2Ag framework facilitates the large-scale production of training data and subsequently aids in training deep learning models to perform ALP delineation tasks.
In the following subsections, we provide detailed explanations of each component of the Cad2Ag framework, including class definitions, data labeling, and data refinement.

3.1. Class Definition for ALP Delineation

In the context of semantic segmentation, which is a per-pixel classification task, we define four distinct classes to facilitate the delineation of ALP. The class definitions are as follows:
  • Background: Non-agricultural land;
  • Parcel: Agricultural land;
  • Road: Road centerlines;
  • Buffer: Boundaries between farmlands or between farmlands and other land types.
The logic behind these class definitions begins with the observation that agricultural parcel delineation requires distinguishing two primary classes: parcel and non-parcel. The parcel class predominantly encompasses agricultural farmland, while the non-parcel class includes all other areas. These classes are defined primarily through their interfacing boundaries. While these boundaries often coincide with those of cadastral parcels, discrepancies frequently arise. Our framework aims to address these discrepancies by automatically generating labels for ALP delineation.
Physical features such as hedges, fences, walls, bushes, roads, or rivers can demarcate parcel boundaries [27]. Parcel boundaries can often be identified by observable features like fences and roads, which constitute a significant proportion of these demarcations [28]. Since the quality of ALP segmentation depends heavily on performance at these interface boundaries, it is crucial for the deep learning model to focus on them. Therefore, the non-parcel class is subdivided into three subclasses to recognize the agricultural pattern more easily: background, road, and buffer. The background class represents areas surrounding the agricultural parcels that do not form part of the parcels themselves. This class may include features such as buildings, water bodies, forests, and other land cover types, playing a crucial role in contextualizing ALPs and distinguishing them from other land uses. Road represents the road centerlines and their immediate surroundings. The buffer class targets potential boundaries that match the agricultural pattern. This class is critical for accurately delineating interfaces where different parcel boundaries meet distinct edges. By incorporating these zones, the buffer class enhances segmentation accuracy, ensuring more precise delineation of ALPs and serving as a key intermediary to refine boundary definitions within the dataset. These classifications ensure that the deep learning model is finely attuned to the complex spatial arrangements of agricultural landscapes, enhancing both the accuracy and utility of the ALP delineation. The detailed procedure for creating these labels is described in the following subsection.

3.2. Data Labeling Workflow

This section outlines the data labeling workflow employed to generate training datasets for the semantic segmentation task. The Normalized Digital Height Model (NDHM), road centerlines, and cadastral parcels were utilized as inputs. The general workflow of the proposed framework is described in Figure 3.
First, non-agricultural areas are designated as the background class. To classify these areas, we employ the Digital Surface Model (DSM) to generate a binary mask layer, aligning the spatial resolution with NAIP imagery through nearest-neighbor resampling. Specifically, we first create an NDHM using the DSM to represent only non-ground heights and set a height threshold of 3 m to exclude tall objects, assuming that any object exceeding this height likely represents a non-agricultural feature, such as forests, residential buildings, or industrial zones. To enhance the quality of this mask layer, a morphological closing filter is applied to reduce speckle noise and fill in gaps, ensuring a cleaner and more accurate classification of non-agricultural areas.
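The height-thresholding and closing step can be sketched as follows, assuming a pre-computed NDHM array. The helper name and the 3 × 3 closing window are illustrative choices, not the paper's exact parameters; the resulting mask would then be resampled to the NAIP grid with nearest-neighbor interpolation as described above.

```python
import numpy as np
from scipy import ndimage

def background_mask(ndhm, height_threshold=3.0, closing_size=3):
    """Flag pixels whose normalized height exceeds the threshold
    (assumed non-agricultural: trees, buildings, etc.), then apply a
    morphological closing to suppress speckle noise and fill gaps."""
    tall = ndhm > height_threshold
    structure = np.ones((closing_size, closing_size), dtype=bool)
    return ndimage.binary_closing(tall, structure=structure)

# A 3x3 block of tall objects with a one-pixel hole: closing fills it.
ndhm = np.zeros((10, 10))
ndhm[2:5, 2:5] = 10.0
ndhm[3, 3] = 0.0
mask = background_mask(ndhm)
```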
Further, road centerlines and cadastral parcel boundaries are employed to label the road and buffer classes. The geometry of cadastral parcels is converted from polygons to lines prior to rasterizing the vector data, focusing on boundaries pertinent to the buffer class. Both datasets are rasterized and dilated with the appropriate buffer size, determined by referencing NAIP imagery and adjusted according to the road size of the target area. In our experiment, the spatial resolution of all mask layers was matched with NAIP imagery, and the coordinate system was reprojected to WGS 84/UTM zone 16N.
Following the creation of the mask layers, label data are automatically generated. Intersections between the background and road are labeled as roads, while intersections between the background and parcel are labeled as background. Overlapping areas between the parcel and the road are labeled as roads. Importantly, the training data generation process is fully automated, facilitating the rapid production of a substantial amount of training data. The labeled data are then cropped to a patch size of 512 × 512 pixels for input into the model.
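The intersection rules above amount to merging the rasterized masks with a fixed precedence: background overrides parcel, and road overrides everything else. The class codes, the helper name, and the ordering of the buffer class relative to the background mask are assumptions for illustration.

```python
import numpy as np

# Illustrative class codes
BACKGROUND, PARCEL, ROAD, BUFFER = 0, 1, 2, 3

def combine_masks(parcel, background, buffer, road):
    """Merge binary mask layers into one label raster following the
    workflow's rules: background over parcel stays background, and
    road overrides all other classes; buffer marks boundary pixels."""
    label = np.full(parcel.shape, BACKGROUND, dtype=np.uint8)
    label[parcel] = PARCEL
    label[background] = BACKGROUND   # background over parcel -> background
    label[buffer] = BUFFER
    label[road] = ROAD               # road over anything -> road
    return label

parcel = np.array([[True, True], [False, False]])
buffer = np.array([[False, True], [False, False]])
road = np.array([[False, False], [True, False]])
background = np.array([[False, False], [False, True]])
label = combine_masks(parcel, background, buffer, road)
```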

3.3. Data Refinement

The subsequent step involves refining the training data. When an image is dominated by the parcel or background class, cropped tiles may contain only a subset of the classes, impeding the model’s ability to effectively learn class representations. To mitigate this issue, tiles whose labels do not contain all classes are filtered out. In the experiment, of the 92,000 tiles, approximately 60,000 were excluded. The dataset was then split into training and validation data at a ratio of 80% to 20%. Additionally, test data were manually digitized to quantitatively assess the model’s performance. The California dataset was also manually digitized to prevent false labels for transfer learning experiments. The generated training datasets are summarized in Table 1, detailing the regions, acquisition dates, number of tiles, patch sizes, and primary crops. By adhering to this workflow, we ensure the systematic and efficient generation of labeled training data, thereby enhancing the model’s learning capabilities and accuracy in agricultural parcel delineation tasks.
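The refinement step (dropping tiles that lack any of the four classes, then an 80/20 split) can be sketched as follows; the helper name and the fixed seed are illustrative.

```python
import numpy as np

def filter_and_split(labels, n_classes=4, train_frac=0.8, seed=0):
    """Keep only tiles whose labels contain all classes, then split
    the surviving tile indices into training and validation (80/20)."""
    keep = [i for i, lab in enumerate(labels)
            if len(np.unique(lab)) == n_classes]
    order = np.random.default_rng(seed).permutation(len(keep))
    cut = int(round(train_frac * len(keep)))
    train = [keep[i] for i in order[:cut]]
    val = [keep[i] for i in order[cut:]]
    return train, val

# Six synthetic tiles; the second contains only the background class.
tiles = [np.arange(4) if i != 1 else np.zeros(4, dtype=int)
         for i in range(6)]
train, val = filter_and_split(tiles)
```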

4. Experimental Design

This section outlines the experimental design employed to validate and assess the effectiveness of the Cad2Ag framework for ALP delineation using deep learning. The experiments were structured to evaluate both the qualitative and quantitative aspects of the framework using a variety of datasets and a deep learning architecture.

4.1. Dataset Construction and Assessment

We constructed extensive training datasets for ALP delineation by automatically generating labels with the Cad2Ag framework. Qualitative assessments were then conducted on these automatically generated labels to ensure that cadastral parcels were adequately converted to agricultural parcel labels to train a deep learning model. Each dataset was supplemented with single-temporal NAIP imagery to serve as baseline optical imagery for the development of a deep learning model for ALP delineation.

4.2. Deep Learning Architecture

For the ALP segmentation task, we employed a U-Net architecture [29], which is well suited for pixel-wise semantic segmentation. In our implementation, we reduced the number of filters in the initial convolutional layer from 64 to 32. This adjustment decreases the overall number of trainable parameters by approximately a factor of four, making the model more lightweight by reducing its complexity and computational requirements while also helping to decrease the risk of overfitting. The model, termed Unet-32, features a symmetrical structure with encoder and decoder paths. The encoder comprises several blocks, each containing two 2D convolutional layers with ReLU activation, followed by a 2 × 2 max-pooling layer for downsampling, a dropout layer for regularization, and batch normalization. The decoder includes up-convolutional layers, batch normalization, convolutional layers, and dropout, with skip connections utilized between the encoder and decoder to preserve feature information across the network. Each layer is padded to maintain the dimensions of the input layer, ensuring that the dimensions of the final output segmentation map match those of the input image tile. An illustration of the Unet-32 architecture is presented in Figure 4. Training is stopped early if the validation loss fails to improve after ten consecutive iterations to prevent overfitting.
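A compact PyTorch sketch of a Unet-32-style network is given below. The depth, dropout rate, and exact layer ordering are assumptions (the paper's model may differ in these details), but the sketch reproduces the key choices described: 32 filters in the first convolutional block, padded convolutions so the output dimensions match the input, batch normalization, dropout, and skip connections between encoder and decoder.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two padded 3x3 convolutions with ReLU and batch normalization."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        return self.block(x)

class UNet32(nn.Module):
    """U-Net sketch with 32 filters in the first layer (vs. 64 in the
    original), halving each feature dimension to lighten the model."""
    def __init__(self, in_ch=4, n_classes=4, base=32, p_drop=0.2):
        super().__init__()
        self.enc1 = ConvBlock(in_ch, base)
        self.enc2 = ConvBlock(base, base * 2)
        self.enc3 = ConvBlock(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.drop = nn.Dropout2d(p_drop)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = ConvBlock(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = ConvBlock(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.drop(self.pool(e1)))
        e3 = self.enc3(self.drop(self.pool(e2)))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip from e2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip from e1
        return self.head(d1)  # per-pixel class logits
```

With four input bands, a 512 × 512 NAIP tile yields a 512 × 512 four-class logit map, matching the patch size used in the labeling workflow.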

4.3. Performance Evaluation across Various Scenarios

Initially, the model was trained and tested using a single-temporal dataset to establish a performance benchmark. We then explored the benefits of integrating multi-temporal datasets, which are anticipated to enhance model robustness and adaptability to land cover variations over time. We also assessed the impact of training data volume on model performance, utilizing datasets of varying sizes to explore scalability and consistency. Lastly, we evaluated the framework’s transferability by applying the trained model to geographically distinct regions, thus testing its generalization capabilities. These comprehensive experiments were designed to rigorously evaluate the Cad2Ag framework’s potential to improve agricultural land classification through deep learning, with a focus on its practical deployment in diverse scenarios.

4.4. Evaluation Metrics

For quantitative evaluation, we report precision, recall, F1-score, and Intersection over Union (IoU), widely accepted metrics in image segmentation tasks. The formulas in Table 2 calculate these metrics using TP, TN, FP, and FN, which represent the number of true positive, true negative, false positive, and false negative pixels, respectively. Additionally, we calculated the macro average of precision, recall, F1-score, and IoU to assess accuracy across all classes, reflecting the significance of each in the imbalanced datasets.
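The per-class metrics and their macro averages can be computed directly from pixel counts; a minimal NumPy sketch follows (the helper name is illustrative):

```python
import numpy as np

def segmentation_metrics(pred, target, n_classes=4):
    """Per-class precision, recall, F1, and IoU from pixel counts,
    plus their macro averages over all classes."""
    per_class = []
    for c in range(n_classes):
        tp = np.sum((pred == c) & (target == c))
        fp = np.sum((pred == c) & (target != c))
        fn = np.sum((pred != c) & (target == c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
        per_class.append({'precision': float(precision),
                          'recall': float(recall),
                          'f1': float(f1), 'iou': float(iou)})
    macro = {k: float(np.mean([m[k] for m in per_class]))
             for k in ('precision', 'recall', 'f1', 'iou')}
    return per_class, macro

pred = np.array([0, 0, 1, 1])
target = np.array([0, 1, 1, 1])
per_class, macro = segmentation_metrics(pred, target, n_classes=2)
```

The macro average weights each class equally regardless of its pixel count, which is why it is informative under the class imbalance noted above.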

4.5. Computational Environment

All experiments were performed on a machine with the following hardware and software configurations: a 13th Gen Intel(R) Core(TM) i9-13900K CPU @ 5.80 GHz with 24 cores (Intel Corporation, Santa Clara, CA, USA), an NVIDIA GeForce RTX 4090 GPU with 24 GB VRAM running CUDA Version 12.2 (NVIDIA Corporation, Santa Clara, CA, USA), and 128 GB of DDR4 RAM. The system operated on Ubuntu 22.04.3 LTS with Kernel Version 6.8.0-40-generic. The experiments were conducted in a Python 3.9.18 environment with PyTorch 2.4.0, leveraging CUDA 12.4 for GPU-accelerated computations.

5. Experimental Results

5.1. Generated Labels for Cad2Ag

As illustrated in Figure 3, two types of datasets were generated: single-temporal and multi-temporal. Figure 5 presents samples of generated labels alongside RGB images. The background class is marked in black, the road class in green, the buffer class in red, and the parcel class in white. For multi-temporal datasets, the figure demonstrates how the RGB image changes over time while the label remains consistent.

5.2. Analysis of Single-Temporal Dataset Results

The model’s performance was evaluated using three single-temporal datasets (IN-I, II, III), with each dataset undergoing ten trials to ensure a robust statistical analysis. In each trial, training and validation data were randomly sampled. The ground truth, as detailed in Section 2, was derived from manually digitized parcels, providing a high-quality benchmark for model evaluation. Table 3 presents the averaged accuracy from these trials, demonstrating the model’s strong capability to distinguish primary classes. Notably, IoU values for the background and parcel classes exceeded 80% and 90%, respectively, across the datasets, showcasing strong model performance. The road class also displayed robust performance, consistently achieving F1-scores and IoU percentages above 75%. Conversely, the buffer class consistently exhibited lower accuracy. This diminished performance can be attributed to the class’s linear and narrow features, which are susceptible to registration errors and misclassification. The variability in manual digitization also complicates the precise delineation of buffer zones, suggesting a need for refined training strategies or increased model sensitivity to such features. The model’s outputs were directly obtained from the softmax layer of the segmentation map, and notably, no post-processing techniques like morphological transformations were applied. This approach allows for an assessment of the model’s raw segmentation capabilities. Several factors contribute to the variance in performance across the datasets. Firstly, inconsistencies in the noise ratio within the training data are due to temporal shifts between the imagery and the labels, where some agricultural parcels may have changed over time, leading to potential label mismatches. Secondly, the variability in crop types within each dataset significantly affects the model’s accuracy. Different crops present distinct spectral signatures, influencing class representation in the segmentation process. Our analysis indicates that datasets dominated by corn crops typically yield more accurate results than those with soybeans, likely due to the clearer differentiation of corn from surrounding vegetation.
A qualitative evaluation was also conducted to complement the quantitative metrics. Figure 6 illustrates the segmentation results using the IN-I dataset, highlighting the primary errors encountered. These errors predominantly involve mispredictions between the background and parcel classes, as well as failures in accurately predicting the buffer class. While the buffer class exhibited low accuracy, it is important to note that this class serves as a complementary feature for classifying the background and parcels, which are the primary classes of interest. While the results are promising, the use of single-temporal datasets revealed limitations in performance, likely due to their limited ability to capture dynamic changes in land cover, which could affect the model’s effectiveness in real-world applications. This assessment underscores the potential benefits of incorporating multi-temporal data to enhance the model’s adaptability and accuracy.

5.3. Evaluating the Impact of Multi-Temporal Data

To assess the influence of spectral diversity offered by multi-temporal remote sensing data, we employed training datasets compiled from imagery captured across three different time periods. While specialized deep learning models tailored for multi-temporal data might yield superior performance [30], we utilized the baseline Unet model, similar to our single-temporal dataset experiments. This allowed us to clearly discern the effects of introducing multi-temporal data. Specifically, the number of bands in the stacked input image tiles ranged from 4 to 12, incorporating four bands of multispectral airborne imagery collected on three different dates. During the inference stage, the acquisition date of the imagery used for prediction was aligned with the corresponding training datasets to ensure consistency in spectral characteristics.
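The multi-temporal input described above is a straightforward channel-wise concatenation of co-registered tiles; a minimal sketch (function name assumed):

```python
import numpy as np

def stack_temporal(tiles):
    """Stack co-registered multispectral tiles from several acquisition
    dates along the channel axis, e.g. three (4, H, W) NAIP tiles give
    one (12, H, W) input. All tiles must share the same spatial grid."""
    h, w = tiles[0].shape[1:]
    if not all(t.shape[1:] == (h, w) for t in tiles):
        raise ValueError("tiles must be co-registered to the same grid")
    return np.concatenate(tiles, axis=0)

# Three 4-band tiles from different dates -> one 12-band input tile.
dates = [np.random.rand(4, 8, 8) for _ in range(3)]
stacked = stack_temporal(dates)
```

Only the first convolutional layer of the network needs to change to accept the wider channel dimension; the labels remain the same across dates.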
The results presented in Table 4 demonstrate a modest improvement in accuracy for the buffer class when leveraging multi-temporal datasets compared to single-temporal datasets. The increased accuracy of the buffer class indicates more precise ALP delineation, yielding better-defined agricultural boundaries in line with the intended role of the buffer class. However, the other classes did not benefit from the multi-temporal dataset, possibly due to the increased complexity from the higher data dimensions, commonly referred to as the curse of dimensionality [31,32], which may have complicated the interpretation of these variations within the U-Net architecture.

5.4. Evaluating the Impact of Training Data Size

We hypothesized that a major factor affecting model performance in ALP delineation is the prevalence of noisy labels. It is well documented that increasing the size of the training dataset can mitigate the effects of noisy labels by enhancing the representation of classes [33,34,35]. In the context of ALPs, noisy labels frequently occur when the spatial patterns of cadastral parcels do not match those of agricultural parcels, resulting in inaccuracies. This mismatch can significantly hinder the model’s ability to accurately learn and predict the true characteristics of agricultural land, underscoring the need for larger and more representative training datasets to improve model reliability and accuracy.
Given that cadastral and agricultural parcels frequently share boundaries, we posited that increasing the training data size would reinforce the common patterns between these parcels during training. This, in turn, would enhance the model’s ability to accurately learn and predict agricultural parcels. To test this assumption, we utilized over 30,000 image tiles, selecting subsets ranging from 1000 to 10,000 tiles for training in various experiments. This approach is predicated on the observation that while false annotations often lack consistent patterns, the intersections between cadastral and agricultural parcel boundaries typically display recognizable and consistent features. These features might include edges of parcels, roads, and other visible boundaries, which serve as condition-invariant elements within the training dataset. Figure 7 illustrates the impact of varying training sizes on performance by plotting the IoU for each dataset. Across all four classes, a distinct trend of improved accuracy emerged as the dataset size increased. This improvement was especially pronounced in the road class, which benefits from relatively clean labels. In contrast, the buffer class demonstrated relatively modest improvements, underscoring the variability in label quality and the challenges associated with accurately classifying this class. These observations highlight the differential impact of increased training data on different classes, reflecting the varying complexity and clarity of the features they represent. The findings support the idea that cadastral parcels can adequately represent agricultural parcels given sufficient training data. The utilization of larger training datasets significantly reduces the impact of noisy labels, enhancing the deep learning model’s ability to generalize from the training data provided. 
These experimental results point in a promising direction, indicating that our methodology of generating training labels from cadastral data to derive agricultural labels is both viable and effective. This approach is integral to the Cad2Ag framework, offering a practical solution for automating the extraction of agricultural labels.

5.5. Evaluating the Impact of Transfer Learning with Clean Labels

Another primary objective of our study is to assess the transferability of our pre-trained model across different regions. This is particularly important because limited data availability, which varies widely across regions, may complicate both the transformation of cadastral labels to agricultural labels and the direct application of the trained model without any fine-tuning. However, obtaining a targeted set of labels in a specific area remains a relatively manageable procedure. We applied the model, initially trained using the dataset from Indiana, to agricultural land in California. Building upon our findings that the model can effectively learn the patterns of agricultural parcels from cadastral data using a large, multi-temporal training dataset, we selected the model trained on the IN-VI dataset due to its superior performance metrics. This model was then tested on the California area using a manually labeled dataset to evaluate the effectiveness of transfer learning in predicting agricultural parcels.
To create a practical test scenario, we manually digitized reference data for a small region in California using QGIS 3.22.4, producing a set of clean labels. We digitized classes in the following order: roads, agricultural parcels, and background, as these three classes were relatively easy to identify. Pixels with ambiguous classifications were assigned to the buffer class, which does not require precise labeling given its intended purpose. The manually created labels comprised 200 tiles, covering all four classes defined in Section 3.1. Collecting these clean training samples (200 tiles of 300 m × 300 m each) took a single person about one hour, indicating a reasonable level of human intervention, particularly given the model’s scalability after fine-tuning. To evaluate the impact of the number of clean labels, we conducted experiments varying the number of clean samples. Instead of random selection, we incrementally increased the number of clean labels at specific intervals (0, 50, 100, 150, and 200 samples) to simulate more realistic scenarios and assessed their impact on model performance accordingly.
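The incremental schedule described above grows each fine-tuning set by extending the previous one rather than resampling. A minimal sketch (the function name and tile identifiers are hypothetical):

```python
# Sketch of the incremental clean-label schedule: fine-tuning sets
# are grown in fixed steps (0, 50, 100, 150, 200), so each
# experiment's label set extends the previous one.
def incremental_subsets(tiles, step=50, limit=200):
    return [tiles[:k] for k in range(0, limit + 1, step)]

tiles = [f"tile_{i:03d}" for i in range(200)]  # hypothetical tile IDs
subsets = incremental_subsets(tiles)
print([len(s) for s in subsets])  # [0, 50, 100, 150, 200]
```

Because each set contains the previous one, differences between runs reflect the added labels rather than resampling noise.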
Table 5 and Figure 8 present the quantitative results of applying transfer learning with the California dataset (CA-I) using the IN-VI model. For baseline comparison, we included results from the same model without fine-tuning (0 samples). The transfer learning approach (200 samples) improved the macro-averaged F1 score by 0.37 and the IoU by 0.34 relative to the baseline. Interestingly, accuracy did not increase when the number of training samples rose from 50 to 100. This could be due to an insufficient number of clean labels for the model to perform more robustly, or to variation in the quality of the clean labels themselves: the 50 labels added after the initial 50 may not have been the most representative training samples for improving overall performance. This suggests that, while the model benefits from clean labels, a sufficient number is required to reach a plateau in accuracy; in our case, this was observed with more than 150 samples. However, this threshold can vary depending on the quality of the pre-trained model and the characteristics of the target area, and we acknowledge that fully explaining this gap is difficult given the ‘black-box’ nature of deep learning models. Additionally, Figure 9 illustrates the segmentation results from one of the trials over a selected region of the CA-I dataset. As the number of clean labels increases, the overall accuracy generally improves. Notably, with the addition of only a few clean samples, the model begins to accurately identify road pixels, and the boundaries between the background and parcels become much clearer. When the number of clean training samples exceeds 150, the segmentation results and quantitative metrics are satisfactory. The inclusion of these clean labels allows the deep learning model to learn the features of the buffer class more accurately, resulting in improved precision.
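As a sanity check, the macro-averaged improvements quoted above can be reproduced directly from the per-class values in Table 5 (background, parcel, road, buffer):

```python
# Macro averages from the per-class F1 and IoU values in Table 5,
# comparing the fine-tuned model (200 samples) to the zero-shot
# baseline (0 samples, no fine-tuning).
def macro(values):
    return sum(values) / len(values)

f1_baseline  = [0.45, 0.91, 0.00, 0.37]
f1_200       = [0.74, 0.97, 0.78, 0.70]
iou_baseline = [0.29, 0.82, 0.00, 0.22]
iou_200      = [0.58, 0.93, 0.64, 0.53]

# F1 rises from about 0.43 to about 0.80 (+0.37); IoU from about
# 0.33 to 0.67 (+0.34), matching the improvements reported above.
print(f"{macro(f1_200):.4f}", f"{macro(f1_200) - macro(f1_baseline):.4f}")
print(f"{macro(iou_200):.4f}", f"{macro(iou_200) - macro(iou_baseline):.4f}")
```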
These findings underscore the effectiveness of using clean labels for transfer learning. Despite the limited training data and the introduction of crop types different from those in Indiana, our pre-trained model demonstrated robust performance across all four classes. This outcome is particularly encouraging, as it highlights the effectiveness of the Cad2Ag framework in generating suitable training labels for large-scale automated agricultural mapping.

6. Discussion

6.1. The Significance of the Study

This study aimed to enhance the automation of ALP delineation using cadastral parcels as training labels, addressing the dynamic nature of agricultural practices that require frequent updates. Our research sought to overcome the traditional barriers of ALP mapping, such as the requirement for extensive manual labor and the continual need for updated data. The experimental results demonstrated that our Cad2Ag framework can effectively utilize cadastral parcels to delineate ALPs with high accuracy. Additionally, we explored diverse, practical scenarios to enhance the model’s performance and found that the model trained via the Cad2Ag framework could effectively perform ALP tasks even with a small number of clean labels in transfer learning, reflecting its robustness and practical applicability.
The findings from this research underscore the potential of using cadastral data as a reliable surrogate for training deep learning models in agricultural applications. Notably, the successful transfer of the trained model to a distinctly different agricultural setting in California validated the framework’s versatility and confirmed its efficacy in improving ALP delineation with limited ground-truth data. This capability is crucial for extending the model to other regions, potentially enabling the transfer of cadastral map knowledge to generate ALPs globally. Ultimately, the Cad2Ag framework presents a cost-effective and scalable solution to the ongoing challenge of agricultural parcel delineation, demonstrating that cadastral data, when appropriately processed and utilized, can provide a reliable basis for training deep learning models for ALP tasks.

6.2. Limitations and Future Work

Despite demonstrating the feasibility of using cadastral parcels for agricultural parcel delineation, limitations remain due to the lower performance of the model when trained solely on machine-labeled datasets, which are often noisy. Our study also observed that, while the use of multi-temporal data improves accuracy, the enhancements were modest. Given the recent advancements in the remote sensing domain for effectively leveraging multi-temporal imagery [36,37], the Cad2Ag framework could benefit further from more advanced strategies for handling multi-temporal data. For example, the use of deep learning models to handle temporal data has been widely explored in recent studies, with techniques such as Long Short-Term Memory (LSTM) networks [38] and Recurrent Neural Networks (RNNs) [39] effectively capturing temporal dependencies. Furthermore, with the rapid development of methods to enhance training robustness in the presence of noisy labels, additional gains in accuracy are anticipated. Recent techniques to improve training robustness include modifying the loss function through importance reweighting [40], utilizing advanced loss functions to address class imbalances [41], filtering noisy samples [35], and employing a noise adaptation layer [33,42]. Lastly, incorporating a 3D convolutional network that captures both spatial and temporal dimensions may offer further improvements in accuracy [43]. These advances in managing noisy and spatio-temporal data are expected to be integrated into the Cad2Ag framework in future developments.
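As one example of the noise-robust training strategies cited above, the sketch below implements a simplified loss-reweighting scheme: pixels whose predicted probability for their (possibly noisy) label is very low are down-weighted, reducing the gradient contribution of likely mislabeled samples. The weighting rule is illustrative only, not the exact importance-reweighting method of [40].

```python
import math

# Hedged sketch of loss reweighting under noisy labels. Samples with
# very low predicted probability for their assigned (possibly wrong)
# label receive lower weight in the cross-entropy sum.
def reweighted_ce(probs, labels, floor=0.1):
    # probs: per-sample rows of class probabilities; labels: int labels
    p_true = [row[y] for row, y in zip(probs, labels)]
    weights = [min(max(p, floor), 1.0) for p in p_true]
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize weights
    return -sum(w * math.log(p + 1e-12) for w, p in zip(weights, p_true))

probs = [[0.9, 0.1], [0.2, 0.8], [0.05, 0.95]]
labels = [0, 1, 0]   # the third sample is likely mislabeled
print(reweighted_ce(probs, labels))
```

In a real training loop the weights would typically be estimated from a noise-transition model rather than clipped raw probabilities.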
The framework assumes a certain level of alignment between cadastral boundaries and actual agricultural parcel boundaries. Significant discrepancies between these boundaries could reduce the framework’s effectiveness, potentially necessitating additional data processing or manual corrections. Our experiments were conducted within the U.S., and while we have demonstrated the framework’s effectiveness even though agricultural parcels in California and Indiana exhibit distinct characteristics, Cad2Ag may be less effective where the differences between cadastral and agricultural boundaries are more pronounced.

Furthermore, while California and Indiana provide distinct datasets, the characteristics and availability of data could vary more drastically across countries. We evaluated the effectiveness of Cad2Ag using high-resolution orthoimagery from the NAIP, but such high-resolution imagery is not consistently available worldwide. Future research should explore the feasibility of using lower-resolution imagery, such as that from PlanetScope or Sentinel satellites, to determine whether the Cad2Ag framework remains effective with less detailed data.

Additionally, it is essential to explore post-processing techniques that refine the outputs into ready-to-use GIS products. This would include improving boundary definitions, resolving line discontinuities, and polygonizing segmented raster data to produce usable GIS products of ALPs. Implementing these post-processing steps is crucial for transforming raw model outputs into valuable, actionable information for agricultural management and planning. These refinements and expansions of our methodology promise not only to enhance the model’s accuracy and applicability but also to extend its utility across different regions and agricultural conditions, making it a more versatile tool for global agricultural mapping and analysis.
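The polygonization step mentioned above typically begins by grouping contiguous parcel pixels into connected components, each of which would then be vectorized into a polygon (in practice with a GIS library such as GDAL). A pure-Python sketch of this first step, on a hypothetical binary parcel mask:

```python
from collections import deque

# Illustrative first step of raster post-processing: label connected
# components of a binary parcel mask (4-connectivity, BFS flood fill).
# Each component is a candidate parcel to be vectorized downstream.
def connected_components(mask):
    h, w = len(mask), len(mask[0])
    comp = [[-1] * w for _ in range(h)]
    n = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and comp[i][j] == -1:
                q = deque([(i, j)])
                comp[i][j] = n
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and comp[ny][nx] == -1:
                            comp[ny][nx] = n
                            q.append((ny, nx))
                n += 1
    return n, comp

mask = [[1, 1, 0],
        [0, 0, 0],
        [0, 1, 1]]
print(connected_components(mask)[0])  # two separate candidate parcels
```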
These efforts will continue to push the boundaries of what is possible in geospatial data analysis and land management, providing robust solutions for the dynamic and diverse needs of modern agriculture.

7. Conclusions

The Cad2Ag framework developed in this study demonstrates significant potential in agricultural land management by leveraging cadastral parcels as proxies for generating ALP labels through deep learning. This approach capitalizes on the stability and wide availability of cadastral data, offering a scalable and efficient solution for creating large-scale training datasets. By addressing the challenges associated with traditional, labor-intensive methods of ALP delineation, the Cad2Ag framework provides a robust method for automating this process. Using a lightweight U-Net architecture as a demonstration, we investigated the feasibility of this framework in diverse experimental scenarios. The Cad2Ag framework successfully generated over 30,000 training tiles programmatically, providing a substantial dataset to train the deep learning model. Through this approach, the model achieved an mIoU of 0.63 and a macro-averaged F1-score of 0.71 using a multi-temporal dataset (IN-VI). In a transfer learning scenario, where the model was supplemented with a small set of clean labels (200 tiles, each measuring 300 m × 300 m), it achieved an mIoU of 0.67 and a macro-averaged F1-score of 0.80, demonstrating the effectiveness of automatically generated labels as a valuable pre-training resource. These results underscore the potential of the Cad2Ag framework in facilitating efficient and accurate ALP delineation across different geographic contexts, even with minimal clean labeled data.
Ultimately, this study contributes to the foundational efforts in geospatial data analysis and automated land management. It suggests potential pathways for enhancing agricultural policy implementation and resource management on a global scale. By exploring new methodologies for more precise and efficient land use planning and management, this research opens avenues that could lead to significant developments in agricultural land delineation practices, potentially facilitating a gradual transformation in the field.

Author Contributions

Conceptualization, H.S.K., H.S. and J.J.; methodology, H.S.K. and H.S.; formal analysis, H.S.K.; writing—original draft preparation, H.S.K. and H.S.; writing—review and editing, J.J.; funding acquisition, J.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the “Punjab Urban Land Systems Enhancement Project”, funded by the Food and Agriculture Organization of the United Nations and the World Bank Cooperative Program.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

We extend our gratitude to the individuals associated with the U.S. Geological Survey’s 3D Elevation Program (USGS 3DEP) and the U.S. Department of Agriculture’s National Agriculture Imagery Program (USDA NAIP) as well as to the anonymous reviewers for their invaluable contributions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Masoud, K.M.; Persello, C.; Tolpekin, V.A. Delineation of Agricultural Field Boundaries from Sentinel-2 Images Using a Novel Super-Resolution Contour Detector Based on Fully Convolutional Networks. Remote Sens. 2019, 12, 59. [Google Scholar] [CrossRef]
  2. Liu, W.; Wang, J.; Luo, J.; Wu, Z.; Chen, J.; Zhou, Y.; Sun, Y.; Shen, Z.; Xu, N.; Yang, Y. Farmland Parcel Mapping in Mountain Areas Using Time-Series SAR Data and VHR Optical Images. Remote Sens. 2020, 12, 3733. [Google Scholar] [CrossRef]
  3. European Commission; Joint Research Centre; Institute for the Protection and the Security of the Citizen. Land Parcel Identification System (LPIS) Anomalies’ Sampling and Spatial Pattern: Towards Convergence of Ecological Methodologies and GIS Technologies; Publications Office: Luxembourg, 2008. [Google Scholar]
  4. Lark, T.J.; Schelly, I.H.; Gibbs, H.K. Accuracy, bias, and improvements in mapping crops and cropland across the United States using the USDA cropland data layer. Remote Sens. 2021, 13, 968. [Google Scholar] [CrossRef]
  5. Zimmermann, J.; González, A.; Jones, M.B.; O’Brien, P.; Stout, J.C.; Green, S. Assessing land-use history for reporting on cropland dynamics—A comparison between the Land-Parcel Identification System and traditional inter-annual approaches. Land Use Policy 2016, 52, 30–40. [Google Scholar] [CrossRef]
  6. Chen, X.; Yu, L.; Du, Z.; Liu, Z.; Qi, Y.; Liu, T.; Gong, P. Toward sustainable land use in China: A perspective on China’s national land surveys. Land Use Policy 2022, 123, 106428. [Google Scholar] [CrossRef]
  7. Dacko, M.; Wojewodzic, T.; Pijanowski, J.; Taszakowski, J.; Dacko, A.; Janus, J. Increase in the Value of Agricultural Parcels—Modelling and Simulation of the Effects of Land Consolidation Project. Agriculture 2021, 11, 388. [Google Scholar] [CrossRef]
  8. Subedi, Y.R.; Kristiansen, P.; Cacho, O. Drivers and consequences of agricultural land abandonment and its reutilisation pathways: A systematic review. Environ. Dev. 2022, 42, 100681. [Google Scholar] [CrossRef]
  9. Kalantari, M.; Rajabifard, A.; Wallace, J.; Williamson, I. Spatially referenced legal property objects. Land Use Policy 2008, 25, 173–181. [Google Scholar] [CrossRef]
  10. Garcia-Pedrero, A.; Gonzalo-Martin, C.; Lillo-Saavedra, M. A machine learning approach for agricultural parcel delineation through agglomerative segmentation. Int. J. Remote Sens. 2017, 38, 1809–1819. [Google Scholar] [CrossRef]
  11. Garcia-Pedrero, A.; Lillo-Saavedra, M.; Rodriguez-Esparragon, D.; Gonzalo-Martin, C. Deep learning for automatic outlining agricultural parcels: Exploiting the land parcel identification system. IEEE Access 2019, 7, 158223–158236. [Google Scholar] [CrossRef]
  12. Hong, R.; Park, J.; Jang, S.; Shin, H.; Kim, H.; Song, I. Development of a parcel-level land boundary extraction algorithm for aerial imagery of regularly arranged agricultural areas. Remote Sens. 2021, 13, 1167. [Google Scholar] [CrossRef]
  13. Xu, L.; Ming, D.; Du, T.; Chen, Y.; Dong, D.; Zhou, C. Delineation of cultivated land parcels based on deep convolutional networks and geographical thematic scene division of remotely sensed images. Comput. Electron. Agric. 2022, 192, 106611. [Google Scholar] [CrossRef]
  14. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 6, 679–698. [Google Scholar] [CrossRef]
  15. Rydberg, A.; Borgefors, G. Integrated method for boundary delineation of agricultural fields in multispectral satellite images. IEEE Trans. Geosci. Remote Sens. 2001, 39, 2514–2520. [Google Scholar] [CrossRef]
  16. Arbelaez, P. Boundary extraction in natural images using ultrametric contour maps. In Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW’06), New York, NY, USA, 17–22 June 2006; IEEE: Piscataway, NJ, USA, 2006; p. 182. [Google Scholar]
  17. Maire, M.; Arbelaez, P.; Fowlkes, C.; Malik, J. Using contours to detect and localize junctions in natural images. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–8. [Google Scholar]
  18. Crommelinck, S.; Bennett, R.; Gerke, M.; Yang, M.Y.; Vosselman, G. Contour detection for UAV-based cadastral mapping. Remote Sens. 2017, 9, 171. [Google Scholar] [CrossRef]
  19. Persello, C.; Tolpekin, V.A.; Bergado, J.R.; De By, R.A. Delineation of agricultural fields in smallholder farms from satellite images using fully convolutional networks and combinatorial grouping. Remote Sens. Environ. 2019, 231, 111253. [Google Scholar] [CrossRef]
  20. Alibabaei, K.; Gaspar, P.D.; Lima, T.M.; Campos, R.M.; Girão, I.; Monteiro, J.; Lopes, C.M. A review of the challenges of using deep learning algorithms to support decision-making in agricultural activities. Remote Sens. 2022, 14, 638. [Google Scholar] [CrossRef]
  21. Tuia, D.; Persello, C.; Bruzzone, L. Domain adaptation for the classification of remote sensing data: An overview of recent advances. IEEE Geosci. Remote Sens. Mag. 2016, 4, 41–57. [Google Scholar] [CrossRef]
  22. Ma, Y.; Chen, S.; Ermon, S.; Lobell, D.B. Transfer learning in environmental remote sensing. Remote Sens. Environ. 2024, 301, 113924. [Google Scholar] [CrossRef]
  23. Çağdaş, V.; Kara, A.; Lisec, A.; Paasch, J.M.; Paulsson, J.; Skovsgaard, T.L.; Velasco, A. Determination of the property boundary—A review of selected civil law jurisdictions. Land Use Policy 2023, 124, 106445. [Google Scholar] [CrossRef]
  24. Moreno-Torres, J.G.; Raeder, T.; Alaiz-Rodríguez, R.; Chawla, N.V.; Herrera, F. A unifying view on dataset shift in classification. Pattern Recognit. 2012, 45, 521–530. [Google Scholar] [CrossRef]
  25. Stoker, J.; Miller, B. The accuracy and consistency of 3D elevation program data: A systematic analysis. Remote Sens. 2022, 14, 940. [Google Scholar] [CrossRef]
  26. Maxwell, A.E.; Warner, T.A.; Vanderbilt, B.C.; Ramezan, C.A. Land Cover Classification and Feature Extraction from National Agriculture Imagery Program (NAIP) Orthoimagery: A Review. Photogramm. Eng. Remote Sens. 2017, 83, 737–747. [Google Scholar] [CrossRef]
  27. Crommelinck, S.; Koeva, M.; Yang, M.Y.; Vosselman, G. Application of Deep Learning for Delineation of Visible Cadastral Boundaries from Remote Sensing Imagery. Remote Sens. 2019, 11, 2505. [Google Scholar] [CrossRef]
  28. Luo, X.; Bennett, R.; Koeva, M.; Lemmen, C.; Quadros, N. Quantifying the overlap between cadastral and visual boundaries: A case study from Vanuatu. Urban Sci. 2017, 1, 32. [Google Scholar] [CrossRef]
  29. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings Part III 18. Springer International Publishing: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
  30. Xu, J.; Yang, J.; Xiong, X.; Li, H.; Huang, J.; Ting, K.; Ying, Y.; Lin, T. Towards interpreting multi-temporal deep learning models in crop mapping. Remote Sens. Environ. 2021, 264, 112599. [Google Scholar] [CrossRef]
  31. Hughes, G. On the mean accuracy of statistical pattern recognizers. IEEE Trans. Inf. Theory 1968, 14, 55–63. [Google Scholar] [CrossRef]
  32. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  33. Goldberger, J.; Ben-Reuven, E. Training deep neural-networks using a noise adaptation layer. In Proceedings of the International Conference on Learning Representations, Virtual, 25–29 April 2022. [Google Scholar]
  34. Azadi, S.; Feng, J.; Jegelka, S.; Darrell, T. Auxiliary image regularization for deep cnns with noisy labels. arXiv 2015, arXiv:1511.07069. [Google Scholar]
  35. Song, H.; Yang, L.; Jung, J. Self-filtered learning for semantic segmentation of buildings in remote sensing imagery with noisy labels. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 16, 1113–1129. [Google Scholar] [CrossRef]
  36. Yuan, Q.; Shen, H.; Li, T.; Li, Z.; Li, S.; Jiang, Y.; Xu, H.; Tan, W.; Yang, Q.; Wang, J.; et al. Deep learning in environmental remote sensing: Achievements and challenges. Remote Sens. Environ. 2020, 241, 111716. [Google Scholar] [CrossRef]
  37. Cheng, X.; Sun, Y.; Zhang, W.; Wang, Y.; Cao, X.; Wang, Y. Application of deep learning in multitemporal remote sensing image classification. Remote Sens. 2023, 15, 3859. [Google Scholar] [CrossRef]
  38. Rußwurm, M.; Körner, M. Multi-temporal land cover classification with sequential recurrent encoders. ISPRS Int. J. Geo-Inf. 2018, 7, 129. [Google Scholar] [CrossRef]
  39. Ndikumana, E.; Ho Tong Minh, D.; Baghdadi, N.; Courault, D.; Hossard, L. Deep recurrent neural network for agricultural classification using multitemporal SAR Sentinel-1 for Camargue, France. Remote Sens. 2018, 10, 1217. [Google Scholar] [CrossRef]
  40. Liu, T.; Tao, D. Classification with Noisy Labels by Importance Reweighting. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 447–461. [Google Scholar] [CrossRef]
  41. Yeung, M.; Sala, E.; Schönlieb, C.B.; Rundo, L. Unified focal loss: Generalising dice and cross entropy-based losses to handle class imbalanced medical image segmentation. Comput. Med. Imaging Graph. 2022, 95, 102026. [Google Scholar] [CrossRef]
  42. Li, P.; He, X.; Qiao, M.; Cheng, X.; Li, Z.; Luo, H.; Song, D.; Li, D.; Hu, S.; Li, R.; et al. Robust deep neural networks for road extraction from remote sensing images. IEEE Trans. Geosci. Remote Sens. 2020, 59, 6182–6197. [Google Scholar] [CrossRef]
  43. Zhong, L.; Hu, L.; Zhou, H. Deep learning based multi-temporal crop classification. Remote Sens. Environ. 2019, 221, 430–443. [Google Scholar] [CrossRef]
Figure 1. Our study area for the training dataset generation and the transfer learning experiment. Seventeen counties in Indiana (indicated in green) were included to generate the training datasets, and transfer learning was conducted in Fresno County, California (indicated in orange).
Figure 2. Spectral signatures and temporal behavior over the areas in both Indiana and California.
Figure 3. General workflow of the study.
Figure 4. The Unet-32 architecture used in the experiments. n, m, b, f, and c refer to the number of pixels, the number of incorporated multi-temporal images, the number of bands, the number of filters, and the number of classes.
Figure 5. Samples of generated labels with corresponding RGB images for ‘Single temporal’ and ‘Multi-temporal’ datasets. In these images, black represents the background class, white the parcel class, red the buffer class, and green the road class. The ‘Multi-temporal’ figure highlights the variations in the RGB image over time.
Figure 6. Segmentation results for the IN-I dataset. Errors are shown in the last row. White, black, red, and blue indicate TP, TN, FP, and FN, respectively.
Figure 7. Box plot of IoU for each class with different training datasets (left: single-temporal, right: multi-temporal).
Figure 8. Box plot of the F1 score and IoU using different sample sizes for transfer learning. The zero-training-sample case leverages only the pre-trained model from the IN-VI dataset.
Figure 9. Segmentation results using different sample sizes on the California dataset (CA-I).
Table 1. Summary of the generated training datasets.

| Region | Dataset | Acquisition Date | Tiles | Patch Size | Primary Crop |
|---|---|---|---|---|---|
| Indiana | IN-I | 2016 | 32,000 | 512 × 512 × 4 | Corn |
| Indiana | IN-II | 2018 | 32,000 | 512 × 512 × 4 | Soybeans |
| Indiana | IN-III | 2020 | 32,000 | 512 × 512 × 4 | Soil, soybeans |
| Indiana | IN-IV | 2016 and 2018 | 32,000 | 512 × 512 × 8 | Corn, soybeans |
| Indiana | IN-V | 2016 and 2020 | 32,000 | 512 × 512 × 8 | Corn |
| Indiana | IN-VI | 2016, 2018, and 2020 | 32,000 | 512 × 512 × 12 | Corn, soybeans |
| California | CA-I | 2016, 2018, and 2020 | 200 | 512 × 512 × 12 | Almonds, grapes |
Table 2. Evaluation metrics.

| Metric | Formula | Equation |
|---|---|---|
| Precision (P) | TP / (TP + FP) | (1) |
| Recall (R) | TP / (TP + FN) | (2) |
| F1-score (F1) | (2 × P × R) / (P + R) | (3) |
| Intersection over Union (IoU) | TP / (TP + FP + FN) | (4) |
Table 3. Segmentation results for the Indiana single-temporal dataset.

| Dataset | Metric | Background | Parcel | Road | Buffer | Macro-Averaged |
|---|---|---|---|---|---|---|
| IN-I | P | 0.95 | 0.94 | 0.81 | 0.36 | 0.77 |
| IN-I | R | 0.86 | 0.98 | 0.91 | 0.10 | 0.71 |
| IN-I | F1 | 0.90 | 0.96 | 0.86 | 0.15 | 0.72 |
| IN-I | IoU | 0.81 | 0.92 | 0.75 | 0.08 | 0.64 |
| IN-II | P | 0.94 | 0.93 | 0.83 | 0.36 | 0.77 |
| IN-II | R | 0.85 | 0.98 | 0.89 | 0.06 | 0.70 |
| IN-II | F1 | 0.90 | 0.96 | 0.86 | 0.10 | 0.70 |
| IN-II | IoU | 0.81 | 0.91 | 0.75 | 0.05 | 0.63 |
| IN-III | P | 0.95 | 0.93 | 0.84 | 0.45 | 0.79 |
| IN-III | R | 0.84 | 0.99 | 0.88 | 0.05 | 0.69 |
| IN-III | F1 | 0.89 | 0.96 | 0.86 | 0.09 | 0.70 |
| IN-III | IoU | 0.80 | 0.91 | 0.75 | 0.04 | 0.63 |
Table 4. Segmentation results for the Indiana multi-temporal dataset.

| Dataset | Metric | Background | Parcel | Road | Buffer | Macro-Averaged |
|---|---|---|---|---|---|---|
| IN-IV | P | 0.95 | 0.93 | 0.84 | 0.41 | 0.78 |
| IN-IV | R | 0.85 | 0.98 | 0.88 | 0.11 | 0.71 |
| IN-IV | F1 | 0.90 | 0.96 | 0.86 | 0.17 | 0.72 |
| IN-IV | IoU | 0.81 | 0.91 | 0.75 | 0.09 | 0.64 |
| IN-V | P | 0.95 | 0.93 | 0.83 | 0.38 | 0.77 |
| IN-V | R | 0.85 | 0.98 | 0.87 | 0.08 | 0.70 |
| IN-V | F1 | 0.90 | 0.96 | 0.85 | 0.13 | 0.71 |
| IN-V | IoU | 0.81 | 0.91 | 0.74 | 0.07 | 0.63 |
| IN-VI | P | 0.95 | 0.93 | 0.83 | 0.38 | 0.77 |
| IN-VI | R | 0.86 | 0.98 | 0.88 | 0.09 | 0.70 |
| IN-VI | F1 | 0.90 | 0.96 | 0.86 | 0.14 | 0.71 |
| IN-VI | IoU | 0.81 | 0.91 | 0.74 | 0.07 | 0.63 |
Table 5. The segmentation results of the transfer learning strategy for the California datasets, with models pre-trained on Indiana datasets. The results for “0 samples” do not include repetitions.

| Number of Clean Labels | Metric | Background | Parcel | Road | Buffer | Macro-Averaged |
|---|---|---|---|---|---|---|
| 0 Samples (No Fine-tuning) | P | 0.36 | 0.94 | 0.00 | 0.31 | 0.40 |
| 0 Samples (No Fine-tuning) | R | 0.62 | 0.87 | 0.00 | 0.47 | 0.49 |
| 0 Samples (No Fine-tuning) | F1 | 0.45 | 0.91 | 0.00 | 0.37 | 0.43 |
| 0 Samples (No Fine-tuning) | IoU | 0.29 | 0.82 | 0.00 | 0.22 | 0.33 |
| 50 Samples | P | 0.15 | 0.99 | 0.87 | 0.39 | 0.60 |
| 50 Samples | R | 0.89 | 0.54 | 0.52 | 0.64 | 0.65 |
| 50 Samples | F1 | 0.25 | 0.70 | 0.65 | 0.49 | 0.53 |
| 50 Samples | IoU | 0.14 | 0.54 | 0.48 | 0.32 | 0.37 |
| 100 Samples | P | 0.14 | 1.00 | 0.94 | 0.37 | 0.61 |
| 100 Samples | R | 0.92 | 0.51 | 0.32 | 0.65 | 0.60 |
| 100 Samples | F1 | 0.24 | 0.68 | 0.47 | 0.46 | 0.46 |
| 100 Samples | IoU | 0.13 | 0.50 | 0.30 | 0.29 | 0.31 |
| 150 Samples | P | 0.54 | 0.99 | 0.89 | 0.58 | 0.75 |
| 150 Samples | R | 0.85 | 0.92 | 0.76 | 0.87 | 0.85 |
| 150 Samples | F1 | 0.66 | 0.95 | 0.82 | 0.70 | 0.78 |
| 150 Samples | IoU | 0.49 | 0.90 | 0.69 | 0.53 | 0.66 |
| 200 Samples | P | 0.72 | 0.98 | 0.93 | 0.58 | 0.80 |
| 200 Samples | R | 0.77 | 0.95 | 0.68 | 0.90 | 0.83 |
| 200 Samples | F1 | 0.74 | 0.97 | 0.78 | 0.70 | 0.80 |
| 200 Samples | IoU | 0.58 | 0.93 | 0.64 | 0.53 | 0.67 |

Share and Cite

Kim, H.S.; Song, H.; Jung, J. Cadastral-to-Agricultural: A Study on the Feasibility of Using Cadastral Parcels for Agricultural Land Parcel Delineation. Remote Sens. 2024, 16, 3568. https://doi.org/10.3390/rs16193568
