Systematic Review

A Systematic Review of AI-Based Classifications Used in Agricultural Monitoring in the Context of Achieving the Sustainable Development Goals

by Vasile Adrian Nan, Gheorghe Badea *, Ana Cornelia Badea and Anca Patricia Grădinaru
Faculty of Geodesy, Technical University of Civil Engineering, 020396 Bucharest, Romania
* Author to whom correspondence should be addressed.
Sustainability 2025, 17(19), 8526; https://doi.org/10.3390/su17198526
Submission received: 19 July 2025 / Revised: 10 September 2025 / Accepted: 16 September 2025 / Published: 23 September 2025

Abstract

The integration of Artificial Intelligence (AI) into remote sensing data classification has revolutionized agriculture and environmental monitoring. AI is one of the main technologies used in smart farming to enhance and optimize the sustainability of agricultural production. The use of AI in agriculture can involve land use mapping and crop detection, crop yield monitoring, flood-prone area detection, pest and disease monitoring, drought prediction, soil content analysis and soil production capacity detection, and the monitoring of forests and vegetation. This review examines recent advancements in AI-driven classification techniques for various applications in agriculture and environmental monitoring, to answer the following research questions: (1) What are the main problems that can be solved through incorporating AI-driven classification techniques into the field of smart agriculture and environmental monitoring? (2) What are the main methods and strategies used in this technology? (3) What types of data can be used in this regard? For this study, a systematic literature review approach was adopted, analyzing publications from Scopus and WoS (Web of Science) published between 1 January 2020 and 31 December 2024. By synthesizing recent developments, this review provides valuable insights for researchers, highlighting current trends, challenges and future research directions in the context of achieving the Sustainable Development Goals.

1. Introduction

Data-driven smart farming involves the use of recent technology developments, such as the Internet of Things (IoT) and Artificial Intelligence (AI) [1], and consists of a multidisciplinary effort to address the achievement of the Sustainable Development Goals (SDGs) [2]. These technologies include algorithm- and model-based tools [3] that require accurate geospatial data.
Agriculture 4.0 is characterized by the integration of advanced digital technologies to create a more efficient and sustainable farming sector. This implies big data analyses and the use of AI, the IoT and robotics to enhance the digitalization and automation of agricultural practices, improving productivity and resource management [4].
Various agricultural policies, such as the Common Agricultural Policy of the EU [5], promote the use of remote sensing as an appropriate control system for agriculture. Satellite images and aerial photographs, in combination with advanced algorithms and machine learning (ML) techniques, are used to assess if farmers are fulfilling the obligations outlined in the strategic plans [6].
Agriculture and vegetation are strongly interconnected, each influencing the other in ecological and economic terms. Plant biodiversity can enhance ecosystem functions that benefit agriculture, while intensive farming can reduce vegetation diversity [7].
In addition to satellite and UAV (Unmanned Aerial Vehicle) images, LiDAR (Light Detection and Ranging) point clouds acquired through airborne laser scanning can contribute to the mapping and analysis of vegetation, as laser signals can penetrate through forest canopies [8].
The sustainable management of land has long been a concern, and it is especially relevant in the context of climate change [9]. In this regard, recent technological advancements have made the use of higher-resolution remote sensing data both feasible and cost-effective, significantly increasing their practical value [10].
Inefficient use of natural resources, such as soil and water, has been proven to cause economic losses and lead to environmental damage [11]. Remote sensing technologies can be valuable tools to support precision agriculture throughout different phases of crop production. Recent progress in sensor technologies, data processing and analytics has increased the accessibility of remote sensing in agriculture [12].
The accuracy of the methods applied in agriculture depends on multiple factors, including the image resolution and analysis technique [11].
Artificial Intelligence is increasingly being integrated into remote sensing applications for agriculture, providing efficient techniques for analyzing large-scale satellite and UAV data. AI techniques, such as ML and DL (deep learning), are used for a wide range of tasks in remote sensing for agriculture, promoting agricultural reform [13]. The integration of big data into predictive models and decision-support tools allows stakeholders to mitigate the impact of climate change and other environmental threats [14]. Therefore, the use of AI in remote sensing for agriculture is transforming how data are analyzed, leading to more efficient, adaptive and cost-effective strategies across the entire agricultural field, while contributing to the achievement of the SDGs.
Based on the current pace of progress since their adoption in 2015, none of the 17 Sustainable Development Goals are expected to be fully achieved by 2030. Globally, SDG 2 (Zero Hunger) has been stagnating and faces major challenges (Figure 1) [15].
The SDGs are deeply interconnected [16,17], and the use of AI in remote sensing for agriculture offers a powerful tool to address multiple goals simultaneously. In Figure 2, the connections between SDG 6—Clean Water and Sanitation (a), SDG 9—Industry, Innovation and Infrastructure (b), SDG 12—Responsible Consumption and Production (c) and SDG 13—Climate Action (d) and the other 16 SDGs are highlighted. Access to clean water (SDG 6) is essential for achieving Zero Hunger (SDG 2), as agriculture depends heavily on water availability. AI models trained on satellite and UAV data can improve water resource monitoring and support precision irrigation, minimizing waste and increasing crop productivity. The link between SDG 2 and SDG 13 (Climate Action) is also relevant, as enhanced resilience to climate change can support more sustainable agriculture. AI-enhanced remote sensing enables the monitoring of climate variables, such as rising temperatures, changing precipitation patterns and the intensity and frequency of extreme weather events, that profoundly affect agricultural fields. Furthermore, SDG 2 connects with SDG 12 (Responsible Consumption and Production), as the efficient use of natural resources supports efforts to increase agricultural productivity. AI-driven analytics can optimize resource use, reducing environmental impacts while improving crop yields. SDG 9 (Industry, Innovation and Infrastructure) plays an important role in enabling the technological innovation behind AI applications in agriculture. These innovations enable farmers to make data-driven decisions that contribute to food security.
Therefore, it is important to have a deeper understanding of how AI can transform agricultural monitoring, in order to achieve sustainable and resilient food systems. This systematic literature review (SLR) aims to synthesize the current research on AI-based classifications in agriculture and identify the knowledge gaps and opportunities for further research.

2. Materials and Methods

2.1. Selection Criteria

For this study, the papers that fulfilled the following criteria were selected:
  • Addressed the topics of the research questions;
  • Were published in peer-reviewed journals;
  • Were published between 2020 and 2024;
  • Were open access;
  • Were written in English.

2.2. Information Sources

The methodology involved searching the Scopus and Web of Science databases (Table 1), as both satisfy the peer-reviewed journal, English-language and open-access criteria.

2.3. Search Strategy

The search strategy combined three keyword groups (Table 2). The first group included the types of algorithms the research focused on, the second covered the domains of application for these algorithms and the third reflected the specific focus of the analyzed studies.
Filters for the publication date (between 1 January 2020 and 31 December 2024), language (English) and access type (open-access papers) were also applied.
The expected results were papers that applied AI, DL or ML techniques to remote sensing, image classification, point cloud classification or agriculture in general, for agricultural and environmental monitoring tasks such as land use mapping, flood-prone area detection, forest monitoring, pest and disease monitoring, drought prediction, soil content analysis and production capacity detection, vegetation monitoring, crop detection or crop yield monitoring.
For each search, certain search queries were applied (Table 3). The search results showed 472 papers on Scopus and 309 papers on WoS.

2.4. Selection Process

The selection process (Figure 3) first involved removing the papers that appeared in both databases; a total of 201 duplicates were identified and excluded. An initial screening was then conducted on the 580 remaining papers by reviewing their abstracts for relevance to this research. During this step, 426 papers were excluded, as they fell outside the primary scope of the research, addressing topics such as wildfires which, while related, belong to different areas of study. The next stage involved a full-text assessment of the remaining 154 papers, which included sorting them into categories and analyzing the data, methodology and algorithms used in each study. Following this in-depth evaluation, 46 additional papers were removed: although AI and remote sensing were mentioned in these papers, they were not applied to data classification or analysis, with some studies relying on soil samples instead of remote sensing data. Finally, 108 papers were included in this review. The PRISMA checklist is available in the Supplementary Materials.
During the selection process, an increased interest in the research topic (Figure 4) was noticed; the number of papers published by year increased by 537.5%, from 8 papers in 2020 to 51 papers in 2024.
The geographic distribution of the study areas (Figure 5) showed a significantly higher number of case studies conducted in China, followed by a considerable number of papers in which the study area was not specified. Countries such as Brazil, the USA, Germany, France and Finland showed a moderate level of research activity in this domain. In particular, the map in Figure 6 highlights a clear lack of studies in Eastern Europe. This finding is especially relevant, as Eastern Europe is renowned for its strong agricultural sector: countries like Ukraine, Romania, Hungary and Bulgaria possess extensive arable lands and have a long-standing tradition of farming, contributing significantly to both regional and global food production. In 2023, the main cereal-producing member states of the European Union were Romania, Bulgaria and Hungary, alongside countries like Poland, Germany and France [18]. Despite the importance of agriculture in these countries, there appears to be a gap in research focused on integrating advanced technologies into their agricultural sectors.

3. Related Work

While existing studies have addressed individual challenges in agriculture and smart farming (Table 4), focusing on IoT technology [1,3], on the models and algorithms used, on remote sensing for water resource management and smart irrigation [19,20], on nutrient testing and scientific fertilization, disease and pest monitoring and ecosystem balance, crop-type classification and weed control, development stage monitoring and yield prediction [20], and on the evaluation of remote sensing platforms and sensors [10], none of these studies have integrated all the relevant aspects related to agriculture and vegetation, such as the data types used, the types of AI algorithms for data processing and the sources of data, which define spatial resolution. This study fills this gap by offering a comprehensive review that takes all these relevant aspects into consideration.
Other reviews fail to comprehensively address the use of AI and remote sensing in agricultural monitoring, as they are generally application-specific and do not compare how different domains (such as land use mapping and crop detection, crop yield monitoring, flood-prone area detection, forest and vegetation monitoring, pest and disease monitoring, drought prediction and soil content and production capacity analysis) overlap or diverge in terms of methods and resources. Furthermore, existing work rarely examines in an integrated way the data types employed (satellite imagery, UAV data), the AI algorithms used for data processing, the software products implemented and the data sources. By contrast, our review systematically considers all these aspects side by side across the main categories of agricultural and vegetation studies, allowing us to highlight common trends and methodological preferences.

4. Results

4.1. Answer to the First Research Question (“What Are the Main Problems That Can Be Solved Through Incorporating AI-Driven Classification Techniques in the Field of Smart Agriculture and Environmental Monitoring?”)

Technologies like ML, DL, Data Science and the IoT help select the optimal crops for specific regions by identifying threats, such as pests and diseases, and can be applied across several agricultural phases, including soil preparation, land management and environmental factor monitoring, water monitoring, pesticide recommendation and crop damage assessment, to enhance the overall productivity of crops and reduce costs and resource waste [21].
Therefore, for this study, the papers were divided into the following categories (Figure 7):
  • Land use mapping and crop detection;
  • Crop yield monitoring;
  • Flood-prone area detection;
  • Forest and vegetation monitoring;
  • Pest and disease monitoring;
  • Drought prediction;
  • Soil content analysis and soil production capacity detection.
Figure 7. Number of papers by research category.

4.1.1. Land Use Mapping and Crop Detection

According to the Food and Agriculture Organization of the United Nations (FAO) [22], in 2021, agricultural land occupied about a third of the world’s total land area. In an effort to achieve the SDGs proposed by the United Nations [23], specifically Goal 2 (Zero Hunger), some of the targets set were to double agricultural productivity by 2030 and to maintain the genetic diversity of seeds and cultivated plants [24]. Achieving these goals requires an efficient monitoring system for land use and crops, based on remote sensing and Artificial Intelligence. Numerous studies have assessed the efficacy of involving AI and the IoT in agricultural practices.

4.1.2. Crop Yield Monitoring

Crop yield monitoring is an essential part of precision agriculture. It helps farmers take immediate action according to the crops’ needs, to increase productivity, reduce waste and make further data-driven decisions, contributing to the achievement of SDG 12 (Responsible Consumption and Production). Moreover, AI models can be used to predict crop yield, enabling better resource use planning and early detection of potential issues [25].

4.1.3. Flood-Prone Area Detection

Agriculture is highly vulnerable to the impact of climate change and the hydrological disasters associated with it. Rapid flood impact assessments can lead to better emergency response actions [26]. Periodic flood mapping can help develop flood control measures and enhance flood management, contributing as well to agricultural yield estimation [27]. Moreover, in dryland areas, farming systems can rely on supplementary water derived from floods [28].

4.1.4. Forest and Vegetation Monitoring

In a study by Chavarria et al. [29], it was proven that forests have a positive impact on the performance of neighboring farms and their productivity. There is evidence that forest management practices favor the preservation of species in agricultural lands [30]. Sustainably managed forests contribute to reducing soil erosion and minimizing the risk of landslides, ensuring a balanced water cycle, which is essential for agricultural development and food security [31]. Therefore, forests play an essential role in preserving the natural environment’s biodiversity and ecosystem.

4.1.5. Pest and Disease Monitoring

Pests are the main biological threat to agricultural and forestry production [32] and identifying areas of insect damage from remote sensing data through AI models is an important part of forest and crops health monitoring [33]. Pest and disease monitoring is a fundamental practice in smart agriculture, aimed at protecting crops and ensuring food security. It involves the detection, identification and tracking of harmful organisms that can reduce crop yield or quality.

4.1.6. Drought Prediction

In regions characterized by water shortages and frequent drought conditions, a forecasting system for water irrigation demand needs to be developed. Such a system integrates remote sensing techniques and a soil water balance model to serve as a monitoring tool and support water resources management [34]. Comprehensive studies examining the impacts of changing water use patterns resulting from agricultural expansion, particularly increased irrigation, are still limited [35].

4.1.7. Soil Content Analysis and Soil Production Capacity Detection

Accurate soil moisture estimation plays an important role in enhancing agricultural productivity and supporting effective environmental management [36]. Assessment of soil macronutrients [37] and soil organic carbon stocks [38] can also provide a comprehensive understanding of soil health, leading to more informed decisions in smart agriculture and environmental management.

4.2. Answer to the Second Research Question (“What Are the Main Methods and Strategies Used in This Technology?”)

After analyzing the AI algorithms used in the selected papers, they were split into the categories shown in Figure 8. Decision tree-based (DT) models were the most frequent algorithms found in this SLR; Random Forest (RF) and other derived algorithms, such as XGBoost, CART (Classification and Regression Tree), GTB (Gradient Tree Boost), Cubist and IF (Isolation Forest), were found in 55 papers. A DT-based model is a supervised ML method used for classification and regression tasks. It builds a tree-like model of decisions, classifying the data or predicting a value by splitting the data into smaller subsets until predefined criteria are met [39]. Deep Neural Networks (DNNs) are supervised algorithms that use labeled data and are structured in three parts: the input layer takes in the data, the hidden layers pass information from one layer to the next and the output layer produces the final result. Convolutional Neural Networks (CNNs) are a type of DNN that specializes in data with a grid-like structure, particularly images [40], and were the most used type of DNN found in this SLR. CNNs such as DenseNet, ResNet, TempCNN, ATCNN, SLMFNet, Deeplab V3+, SegNet, U-Net, U-Net++, AlexNet, DABNet and PSPNet, along with other DNNs, were found in 49 papers. Support Vector Machine (SVM) is a supervised ML algorithm used for classification and regression tasks. It seeks the boundary, known as the hyperplane, that optimally separates the different classes in the data [41]. SVM was found in 19 papers. A significant number of papers used hybrid models; these models, found in 13 papers, combined different AI techniques to overcome the limitations of using a single AI method.
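As an illustration of the DT-based family described above, the sketch below trains a Random Forest on synthetic two-band "pixel" reflectances to separate crop from bare-soil pixels. It is not taken from any of the reviewed papers; scikit-learn and the chosen reflectance values are assumptions made purely for demonstration.

```python
# Illustrative sketch only: Random Forest on synthetic two-band pixels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Fake training pixels: "crop" has high NIR and low red reflectance,
# "bare soil" has lower NIR and higher red (values are invented).
nir_crop = rng.normal(0.70, 0.05, 100)
red_crop = rng.normal(0.10, 0.03, 100)
nir_bare = rng.normal(0.30, 0.05, 100)
red_bare = rng.normal(0.25, 0.03, 100)

X = np.column_stack([
    np.concatenate([nir_crop, nir_bare]),
    np.concatenate([red_crop, red_bare]),
])
y = np.array([1] * 100 + [0] * 100)  # 1 = crop, 0 = bare soil

# An ensemble of decision trees, each splitting the feature space
# until its stopping criteria are met; predictions are majority votes.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[0.72, 0.09], [0.28, 0.26]]))
```

In practice, the reviewed studies replace these synthetic features with multispectral band values or vegetation indices extracted from satellite or UAV imagery.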
Statistical models, such as Auto-ARIMA, GLM (Generalized Linear Model), GNB (Gaussian Naïve Bayes), LASSO (Least Absolute Shrinkage and Selection Operator), LR (Linear Regression) and NB (Normal Bayes), were found in nine papers. Among the other classic supervised learning models, kNN (k-Nearest Neighbor) was the most used algorithm. It finds the closest training examples to a new, unclassified case and assigns it the most common class among those neighbors [42]. Recurrent Neural Networks (RNNs), DL algorithms designed for processing sequential data like time series, text or speech [43], were found in seven papers. Transformer- and attention-based models can prioritize the center of attention and focus on the key features according to their degree of importance in complex scenarios [44]; algorithms such as BETAM (Border-Enhanced Triple Attention Mechanism) were found in five papers in this SLR. The Fully Connected Neural Network (FCNN) MLP (Multilayer Perceptron), a type of ANN (Artificial Neural Network) with multiple layers of interconnected neurons [45], was found in four papers. Clustering algorithms, such as K-means, are unsupervised algorithms that group similar data points into clusters by identifying patterns and structures within the data [46]; they were found in two papers. Optimization algorithms that update parameters throughout the learning process were found in two papers: OG-WOA (Optimal Guidance-Whale Optimization Algorithm), applied to increase exploitation during the search process and select the relevant features for classification [47], and QHPO (Quantum Hippopotamus Optimization algorithm), which selects the best features for the final classification using NN (Nearest Neighbor) classifiers [48]. Lastly, Generative Adversarial Networks (GANs), with the potential to reconstruct high-resolution images from low-quality data through a process known as super resolution [49], were found in two papers.
The implementation tools used for algorithm development and data processing were also analyzed, as illustrated in Figure 9. Python emerged as the most commonly used tool, appearing in 45 papers, either for training the models or for processing the data. Notably, the PyTorch open-source deep learning framework [50] was found in the majority of these papers. Google Earth Engine (GEE) was the second most frequently used platform, mentioned in 23 papers, followed by ArcGIS (Pro or Online) in 21 papers. Other notable software products included Agisoft, QGIS and SNAP (Sentinel Application Platform), each found in 10 papers. R was used in eight papers, Pix4D in five papers, ENVI and Matlab in four papers, Google Earth Pro in three papers and OTB (Orfeo Toolbox) in two papers. Other software products and frameworks, found in only one paper each, were the DarkNet framework, C++, Go, GRASS GIS, Iota2, MODIS Reprojection Tool, SAS, Minitab 18, Sen2Cor, OpenStack, CentOS Linux, GDAL, ArcMap and SCP (Semi-automated Classification Plugin).

4.2.1. Land Use Mapping and Crop Detection

Out of the 44 papers in this category, only 7 were dedicated to crop detection, the other 37 being related to land use and land cover mapping. Most of them implemented DT-based algorithms. Hamidi et al. [51] proposed a method that uses Optic-SAR (Synthetic Aperture Radar) fusion for crop mapping based on remote sensing imagery, using RF, SVM, SAE (Sparse Auto-Encoder) and GFSAE (Guided Filtered Sparse Auto-Encoder) algorithms to classify crops such as corn, pea, canola, soybean, oat and wheat. In a study by Snevajs et al. [52], RF was also used for crop classification based on remote sensing imagery. Gackstetter et al. [53] used RF, an RNN and a transformer model for crop-type mapping based on satellite imagery. In a study by Di Tommaso et al. [54], RF was used for mapping tall and short crops based on satellite images and LiDAR data. Zhang et al. [55] used DSSGT (Dilated Spectral–Spatial Gaussian Transformer Net) to map corn, cotton, sesame, broad-leaf soybean, narrow-leaf soybean, rice and water crops. Yue and Tian [56] evaluated crops and soil with the DT-based algorithm RF and also proposed a specialized RF model, CRRF (Crop-Residue Random Forest), for classifications in vegetation–soil ecosystems for crop monitoring. Alotaibi et al. [57] proposed a DTODCNN-CC (Dipper-Throated Optimization with Deep Convolutional Neural Networks-based Crop Classification) approach, applied to satellite imagery, to accurately classify crops into the following classes: maize, banana, legume and other.
Most studies in this category focused on land use and land cover classifications. For this challenge, DT-based models were preferred, with RF being the most used algorithm [58,59,60,61,62,63,64,65,66,67,68,69]. Other algorithms based on decision trees were XGBoost [70,71], CART and GTB [72].
CNNs and DNNs were the second most preferred. Rajesh et al. [73] used a CNN for land cover/land use classification from satellite imagery. In other studies, multiple types of CNNs and DNNs were used, such as MSFA-Net [74], Inception-V3 and DenseNet121 [75], ResNet and ResNet50 [47,48,76,77], TempCNN and ATCNN [70], SLMFNet [78], Deeplab V3+ [44,79], FCN and SegNet [80], UNet and UNet++ [79,81], AlexNet [71], PSPNet and DABNet [79], CMFNet [82] and other CNNs and DNNs [42,83,84,85,86,87].
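All the CNN variants listed above are built around the same core operation: sliding a small kernel over a grid of pixel values. The NumPy sketch below illustrates that operation in isolation; the tiny "image" and edge-detecting kernel are invented for demonstration and do not come from the reviewed papers.

```python
# Illustrative sketch of the 2-D convolution at the core of a CNN layer.
import numpy as np

def conv2d(image, kernel):
    """Valid (no-padding) 2-D cross-correlation of image with kernel."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # Each output value is the sum of an image patch
            # weighted elementwise by the kernel.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel applied to a tiny image whose right half is bright:
# the response peaks exactly at the brightness transition.
img = np.array([[0, 0, 1, 1]] * 4, dtype=float)
edge = np.array([[-1.0, 1.0]])
print(conv2d(img, edge))
```

Deep learning frameworks such as PyTorch implement this same operation with learned kernel weights, stacked over many layers and channels.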
In a study by Lemenkova [88], an SVM algorithm was used to map the land cover dynamics.
Hybrid models that combine different types of algorithms were also found in the literature. Zhang et al. [89] adopted an approach based on an SACA (Semantic-Aware Contrastive Adaptation) framework for land cover classification. Dantas et al. [70] proposed the use of the REFeD deep learning framework (data Reuse with Effective Feature Disentanglement) for land cover mapping, which incorporates an effective supervision scheme, reinforcing feature disentanglement through multiple levels of supervision. A KD-KNN algorithm was used by Mourtadha et al. [59] to perform land cover and land use classification on satellite imagery. Another hybrid model was Swin-UNet, a semantic segmentation network based on the Swin-Transformer block, used by Hao et al. [80] to classify land use based on remote sensing imagery. MCSGNet, a network that addresses the issues of low precision and poor generalization ability, was proposed by Hu et al. [76] for classifying land cover types from remote sensing imagery. Vinaykumar et al. [47] proposed the OG-WOA-Bi-LSTM (Bi-directional Long Short-Term Memory) technique, an advanced satellite image classification approach that combines deep feature extraction, optimization algorithms and sequence-based classification to effectively reduce overfitting in land use and land cover classification.
Other classic supervised learning models, such as kNN, were also found in the literature. In a study by Souza and Rodrigues [61], kNN was used to classify satellite imagery into different land use and land cover classes. The MD (minimum distance) model is another classical classification model found in the literature [72].
RNNs, which are DL models designed for sequential data and are able to maintain a memory of previous inputs, are suitable for analyses where the order and context of data are very important [90]. In a study by Nigar et al. [83], a comparison of RF, CNN and RNN models was performed for land use and land cover classification. The results showed that the RNN model provides an accuracy similar to that of the CNN model.
Optimization algorithms were also found in the literature. A study by Albarakati et al. [48] proposed the use of QHPO optimization to select the best features for final classification of land use and land cover.
Transformer algorithms and attention-based models, such as BETAM, are also used for land cover classifications based on remote sensing images. BETAM consists of a combination of three forms of attention and was used by Wang et al. [44] to perform semantic segmentation for land cover classification.
In a study by Aljebreen et al. [85], the statistical model NB was applied to the training dataset, alongside other algorithms (SVM, RF, DT and ANN), to perform land use and land cover classification using PlanetScope multispectral imagery.
Clustering algorithms, such as K-means, together with the supervised classification model LVQ (Learning Vector Quantization), were used by Ouchra et al. [91] with Google Earth Engine for land cover classification.
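K-means itself reduces to a short alternating loop: assign each point to its nearest centroid, then move each centroid to the mean of its assigned points. The bare-bones NumPy sketch below, run on invented 2-D points, is purely illustrative and is not the Google Earth Engine implementation used in the cited study.

```python
# Bare-bones K-means sketch on synthetic 2-D points (illustration only).
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centroids by sampling k distinct input points.
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assignment step: label each point with its nearest centroid.
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its cluster.
        centroids = np.array(
            [points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

# Two well-separated synthetic blobs around (0, 0) and (1, 1).
pts = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (50, 2)),
                 np.random.default_rng(2).normal(1.0, 0.1, (50, 2))])
labels, centroids = kmeans(pts, 2)
```

In land cover applications, the "points" are per-pixel spectral vectors, and each resulting cluster is interpreted afterwards as a land cover class.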
The overall results of this analysis are found in Figure 10.
For land use mapping and crop detection, Python was the most commonly used environment for algorithm development and implementation (Figure 11). ArcGIS was also preferred, being found in nine papers, while GEE, a platform dedicated to land surface monitoring, was found in eight papers. SNAP was used in six papers and QGIS was chosen in five cases. Matlab and R were each mentioned in three papers. As for the less frequent trends, Google Earth Pro and OTB were used in two papers each, and Agisoft and ENVI were mentioned only once each. Seven papers used other software products, while in two cases no processing software was specified.

4.2.2. Crop Yield Monitoring

DT-based algorithms were the most used for crop yield monitoring (Figure 12). Algorithms like RF were used to estimate millet FCover [92], to perform image preselection to eliminate unsuitable images for crop monitoring [93] and to predict the yields of corn, rice and soybean [94].
A CNN was used in a study by Sabo et al. [95] to forecast crop yield for barley, durum and soft wheat crops. ConvLSTM, an RNN, alongside a 3D-CNN, a fully convolutional architecture, were used by Nevavuori et al. [25] for crop yield monitoring and prediction. An SVM algorithm was used by Lv et al. [96] in a study on winter wheat fAPAR (fraction of Absorbed Photosynthetically Active Radiation) modeling.
For crop yield monitoring, five software applications/environments were found in this SLR (Figure 13). Among these, Python was used in four articles, Pix4D was mentioned in two papers, and Agisoft, QGIS and R were each used in one paper.

4.2.3. Flood-Prone Area Detection

Few studies were found on flood-prone area detection. CNNs and other DNNs, transformer- and attention-based models, decision tree-based models and Support Vector Machine models were each found in one paper. In a study by Wu et al. [97], floodwater extraction based on UAV imagery was performed using a transformer model. Pech-May et al. [98] classified flooded areas using satellite images and a U-Net neural network. Atchyuth et al. [27] used RF, CART and SVM models to map flood inundations.
For flood-prone area detection, three software products were found. ArcGIS was the most preferred, found in two articles, while Python programming and GEE were found in one study each.

4.2.4. Forest and Vegetation Monitoring

In most of the papers regarding forest and vegetation monitoring, CNNs and other DNNs were the most used algorithms (Figure 14). Xiang et al. used U-Net, ResUNet and TernausNet to monitor forest changes using UAV data [99], and Ecke et al. used the EfficientNet CNN architecture to monitor forest health based on UAV data [100]. A TempCNN model was used by Perbet et al. in a study on boreal forest disturbance monitoring [101]. A Mask R-CNN (Mask Region-based Convolutional Neural Network) model with a ResNet-50 backbone was used by Lucas et al. to detect individual trees from aerial imagery [102]. SegForest was proposed as a segmentation model for remote sensing images by Wang et al. to estimate forest area from satellite imagery [103]. Slagter et al. used a standard CNN model to monitor tropical forest disturbance from satellite data [104]. Retallack et al. used various CNNs to detect an indicator arid shrub in UAV imagery [105], Guo et al. used an SNN (Siamese Neural Network) for forest cover change extraction from satellite images [106] and Lahssini et al. [8] used a CNN, along with other algorithms, to estimate the basal area and wood volume in forests based on LiDAR and satellite data. In a study by Mäyrä et al., a workflow for tree species classification from LiDAR data was presented, showing that CNN models outperform other algorithms, such as RF, GBM and SVM [107]. He et al. proposed an OUDN (Object-based U-Net-DenseNet-coupled) network for intelligent mapping of urban forests from remote sensing imagery [108]. Li et al. proposed an MSCIN (Multifeature Synthesis Perception Convolutional Network) for vegetation extraction from multisource remote sensing images [109].
The results were compared with those of other models, such as the CNN HRNet, the hybrid model OCBDL (Off-Center Bayesian Deep Learning) and the RFC (Random Forest Classifier); MSCIN achieved robust vegetation extraction, overcoming internal fragmentation and unclear boundaries.
The U-Net CNN was found in multiple studies, for the identification of certain tree species [110], for multi-temporal forest monitoring [111], for NDVI (Normalized Difference Vegetation Index) estimation [112], for coastal forest cover change detection [113] and for deforestation detection [114], along with other CNNs, such as DeepLabv3+, SegNet, Res-UNet and FC-DenseNet.
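Since several of these studies rely on NDVI, it is worth recalling how it is computed: NDVI = (NIR − Red) / (NIR + Red), evaluated per pixel. A minimal sketch with synthetic reflectance values (not data from any reviewed study):

```python
# NDVI = (NIR - Red) / (NIR + Red); values lie in [-1, 1], with dense green
# vegetation scoring well above bare soil. Reflectance patches are synthetic.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)  # eps guards against division by zero

# 2x2 synthetic patches: top row vegetated, bottom row bare soil
nir = np.array([[0.45, 0.50], [0.30, 0.28]])
red = np.array([[0.05, 0.06], [0.22, 0.21]])
v = ndvi(nir, red)
print(v)  # vegetated pixels score ~0.8, bare soil ~0.15
```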
The CNN DeepLabv3+ was used by Andrade et al. to detect deforestation from satellite imagery [115]. Other DNNs and CNNs were found in various studies for classifying deforestation patterns [116], for NDVI reconstruction and vegetation cover study [117] and for assessing changes in boreal vegetation [118].
RNNs were found in two studies for vegetation behavior forecasting based on satellite imagery [119] and for generating dense NDVI time series and covering the gaps resulting from cloud cover [120].
DT algorithms were the second most frequently found type of algorithm in the literature review. RF was used for forest cover change detection from satellite imagery [121], to monitor changes in boreal peatland vegetation from satellite imagery [122], to map forest habitat [123] or forest canopy height [124] with satellite imagery as the base, for a forest cover dynamics assessment from satellite imagery [125], to map landscape-scale dynamics of the plant area index from radar data [126], to monitor oak dieback from satellite imagery [127], for vegetation monitoring of mountainous regions from satellite imagery [128], to map coastal mangrove forests from satellite big data [129], for forest/nonforest segmentation based on satellite data [130], for vegetation species detection from remote sensing imagery [131] and to map vegetation cover based on UAV imagery for guided species conservation [132]. Other DT algorithms used in the literature were XGBoost [124,133], GBM [129], SS-RF (Stochastic Spatial Random Forest) [134], CART [131] and IF [116].
The SVM algorithm was used for forest cover change detection [121] and for vegetation cover mapping [132]; other similar algorithms, such as SVR (Support Vector Regressor) [124] and OC-SVM (One-Class Support Vector Machine) [116], were also found in the literature.
Statistical models, such as ARIMA [119], QDA (Quadratic Discriminant Analysis), GNB [130] and LR [132], were also found in the literature.
The classic supervised learning model kNN was used for forest canopy height mapping from satellite imagery [124], for the detection of different vegetation species from remote sensing images [131] and for basal area and wood volume estimation based on satellite images and LiDAR data [8].
An MLP, a Fully Connected Neural Network, was found in two papers, for deforestation pattern classification [116] and for basal area and wood volume estimation [8].
Perbet et al. [101] used a transformer-based model to detect and classify boreal forest disturbances from satellite imagery. Kong et al. used a generative AI model—a dual RSS-GAN network (Remote-Sensing Super-Resolution Generative Adversarial Network)—for long-term vegetation monitoring [49]. Liu et al. used the clustering model K-Means++ for tree species classification [135].
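As an illustration of the clustering strategy (a sketch on synthetic spectra, not the actual pipeline of [135]), pixel spectra can be grouped with k-means++ seeding before the resulting clusters are interpreted as species:

```python
# Hypothetical sketch: unsupervised clustering of pixel spectra with k-means++
# seeding (scikit-learn's default initialization). All spectra are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Two synthetic 4-band spectral classes, 100 pixels each (NIR band differs most)
class_a = rng.normal([0.05, 0.08, 0.06, 0.45], 0.01, size=(100, 4))
class_b = rng.normal([0.04, 0.06, 0.05, 0.30], 0.01, size=(100, 4))
pixels = np.vstack([class_a, class_b])

km = KMeans(n_clusters=2, init="k-means++", n_init=10, random_state=0).fit(pixels)
labels = km.labels_
print("cluster sizes:", np.bincount(labels))  # the two classes separate cleanly
```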
Other papers implemented the use of hybrid models, such as SiamHRnet-OCR [136] and ChangeFormer [137], for deforestation detection from satellite imagery. Zheng et al. used the Near Pseudo model for vegetation species detection from remote sensing imagery [131], Li et al. used the OCBDL model for remote sensing image vegetation extraction [109] and Liu et al. used the YOLOv7 (You Only Look Once) model for tree species classification from UAV data [135].
For forest and vegetation monitoring, Python was the most used, found in 15 papers, followed by GEE, found in 12 papers, and ArcGIS, used in 5 papers (Figure 15). Agisoft and QGIS were found in three papers each, while ENVI, Pix4D and R were found in two papers each. Google Earth Pro and SNAP were used in only one paper each.

4.2.5. Pest Disease Monitoring

For pest disease monitoring, CNNs or other DNNs, found in three papers, were the most used models; decision tree-based models were found in two papers and statistical models were found in one paper. In a study by Turkulainen et al. [138], bark beetle-induced spruce damage was detected from UAS (Unoccupied Aircraft System) images using DNNs. Zhang et al. [32] used UNet++ with a ResNeSt model as the feature extraction backbone for pest-infested forest damage detection. Kislov et al. [139] used a DCNN (Deep Convolutional Neural Network) to detect forest disturbance based on satellite images.
RF was also found in the literature. Junttila et al. [140] used an RF to map spruce tree decline due to bark beetle infestation from multispectral imagery and Iordache et al. [141] used an ML approach based on RF to detect pine wilt disease from airborne spectral imagery.
A statistical model—ARIMA (Autoregressive Integrated Moving Average)—was used by Alkan & Aydin [33] to predict insect damage spread.
For pest disease monitoring, Agisoft was the most used software, found in three papers, followed by ArcGIS, used in two papers. In this case, Python programming and SNAP were found in one paper each, while one paper involved the use of other software products.

4.2.6. Droughts Prediction

Two studies on droughts prediction were found during the search, and neither of them implemented AI algorithms. Villani et al. [34] presented the iCOLT climate service, used for seasonal predictions of irrigation. It is a probabilistic approach, based on soil water balance modeling. Laipelt et al. [35] used the geeSEBAL model, an open-source implementation of the Surface Energy Balance Algorithm for Land (SEBAL) using Google Earth Engine [142], to assess changes in evapotranspiration resulting from cropland expansion.

4.2.7. Soil Content Analysis and Soil Production Capacity Detection

For soil analyses, DT-based models were found to be the most used in the literature (Figure 16). RF was used to estimate the soil moisture content based on UAV images [143] or on satellite data [36], to estimate the soil organic carbon based on satellite data [144,145], to assess the soil macronutrient status based on satellite imagery [37] and to map soil pollution using drone image recognition on an arsenic-contaminated agricultural field [146]. Other decision tree-based models found in the literature included GBDT (Gradient-Boosted Decision Tree), which was used to estimate soil organic carbon stocks based on satellite imagery [38], Cubist [37] and ERF (Extreme Random Forest) [146].
SVM was also used to estimate the soil moisture content based on satellite data [36] and to map soil pollution using drone image recognition on an arsenic-contaminated agricultural field [146]. Another similar algorithm, SVR-RBF (Support Vector Regression with Radial Basis Function), was found in a study on organic carbon estimation in soil [144].
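A minimal sketch of the SVR-RBF idea (synthetic bands and a toy soil organic carbon relation, not the setup of [144]):

```python
# Hypothetical sketch: SVR with an RBF kernel regressing soil organic carbon
# from spectral features. Bands, coefficients and targets are all synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(150, 5))  # five synthetic band reflectances
y = 1.5 + 2.0 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(0, 0.05, 150)  # toy SOC (%)

# Feature scaling matters for RBF kernels, hence the StandardScaler step
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X, y)
print(f"training R^2: {model.score(X, y):.2f}")
```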
The statistical model GLM was used to assess soil macronutrients status based on satellite imagery [37] and to assess atrazine metabolite effects from soil contamination based on UAV images [147]. Another statistical model—LASSO—was found in the literature, implemented in a soil organic carbon estimation study [144].
Other algorithms included FCNNs [38,146], the RNN LSTM [36] and the hybrid model GA-RF (Genetic Algorithm-Optimized Random Forest) [143].
For soil content analysis and soil production capacity detection, Python and ArcGIS were preferred (Figure 17), found in three papers each. Agisoft, SNAP, GEE and R were used in two papers each, while QGIS, MATLAB, ENVI and Pix4D were found in one paper each.

4.3. Answer to the Third Research Question (“What Type of Data Can Be Used in This Regard?”)

The data sources were classified into the categories found in Figure 18. Satellite imagery was by far the most used data source, found in 78 papers. Next, UAV imagery was found in 18 papers and radar data in 14 papers. Five studies used unspecified remote sensing imagery, while LiDAR data was found in four papers. In a single study, a stereo camera system placed at the study location was used.
The sources of data used in the selected papers were also investigated (Figure 19). Among the many available open-access satellite datasets, Sentinel data was the most commonly used, found in 43 studies. The Sentinel-1 mission, part of the Copernicus Initiative by the European Commission (EC) and the European Space Agency (ESA), comprises a constellation of two sun-synchronous polar-orbiting satellites, flying in the same orbital plane but phased 180° apart and equipped with C-band SAR systems, enabling all-weather, day-and-night Earth observations. Sentinel-1 operates in four distinct imaging modes, offering spatial resolutions down to 5 m and swath widths up to 400 km [148]. The Sentinel-2 mission is a European initiative focused on wide-swath, high-resolution, multispectral Earth observations. It consists of a pair of satellites achieving global coverage every 5 days at the Equator. Each satellite is equipped with a single payload, the Multispectral Instrument (MSI), capable of capturing imagery across 13 spectral bands: four bands at a 10 m resolution, six bands at a 20 m resolution and three bands at a 60 m resolution, with a swath width of 290 km [149].
Landsat data was found in 31 papers. Landsat-8, launched by NASA in 2013, is equipped with the Operational Land Imager (OLI), which captures Earth imagery across the visible, near-infrared (NIR) and shortwave infrared (SWIR) spectrum. The instrument features two additional spectral bands: Band 1 (deep blue), optimized for coastal zone observation, and Band 9, optimized for cirrus cloud detection in the NIR. Landsat-8 ensures a 15 m resolution for panchromatic images and 30 m for multispectral ones [150]. Landsat-4, built and launched by NASA and NOAA, is equipped with the Thematic Mapper (TM) sensor, which features seven spectral bands, capturing data across the blue, green, red, NIR, mid-infrared (two bands) and thermal infrared (TIR) regions of the electromagnetic spectrum [151].
Gaofen (GF) is a series of Chinese high-resolution Earth imaging satellites, launched as part of the China High-Resolution Earth Observation System (CHEOS), and GF data was found in six papers, including images from the Gaofen-1 (GF-1), Gaofen-2 (GF-2) and Gaofen-6 (GF-6) missions. GF-1 is equipped with two main imaging systems: a panchromatic and multispectral camera (PMC) and a wide-field imager (WFI). The PMC is a high-resolution pushbroom imager featuring Time Delay Integration (TDI), consisting of two cameras that provide a swath width of 69 km at nadir. It captures imagery in one NIR and three visible bands—blue, green and red—with a spatial resolution of 8 m, and additionally provides 2 m resolution panchromatic images in the visible range. The WFI makes observations in three visible bands and one NIR band, offering a 16 m spatial resolution [152]. The GF-2 satellite is equipped with two PAN/MS (panchromatic/multispectral) cameras. It is capable of capturing high-resolution imagery with a Ground Sampling Distance (GSD) of 0.81 m in the panchromatic band and 3.24 m in the multispectral bands across a 45 km-wide swath [153]. The GF-6 satellite is equipped with a panchromatic/hyperspectral camera offering resolutions of 2 m and 8 m, respectively, with an image swath exceeding 90 km. Additionally, it carries a wide-angle camera with a resolution of 16 m and an expansive swath of 800 km. Both cameras employ three-mirror anastigmatic telescopes and are capable of imaging from the visible to the NIR spectrum, covering wavelengths of approximately 450–900 nm [154].
PlanetScope data was found in four papers. The PlanetScope satellite camera operates in eight spectral bands—Coastal Blue, Blue, Green I, Green, Yellow, Red, RedEdge and NIR—with a GSD (at nadir) between 3 m and 4.1 m [155].
WorldView data was found in three papers, including WorldView-2, WorldView-3 and WorldView-4. WorldView-2 is capable of capturing high-resolution imagery with a GSD of 0.46 m in the panchromatic and 1.8 m in the eight multispectral bands [156]. WorldView-3 captures imagery at 0.31 m resolution in the panchromatic, 1.24 m in the eight VNIR (visible-to-near-infrared) bands, 3.7 m in the eight SWIR bands and 30 m in the CAVIS (Clouds, Aerosols, Vapors, Ice and Snow) bands [157]. WorldView-4 was designed to expand European Space Imaging’s offerings with ultra-high-resolution Earth observation capabilities, and it captures imagery at 31 cm resolution in panchromatic mode and 1.23 m resolution in multispectral mode [158].
SPOT (Satellite pour l’Observation de la Terre) data was used in two papers found in this SLR. Initiated by the French National Centre for Space Studies (CNES), five satellites were deployed between 1986 and 2015. These missions offered detailed views of the Earth’s surface, enabling significant advances in areas such as cartography, vegetation monitoring, land use and land cover analysis and disaster impact assessments. All satellites in the series feature both panchromatic and multispectral imaging capabilities, covering a swath width of 60 km. SPOT-6 and SPOT-7 provide a GSD of 1.5 m at nadir in panchromatic mode and 6 m at nadir in multispectral mode [159].
NASA’s UAVSAR (Uninhabited Aerial Vehicle Synthetic Aperture Radar) was found as the data source in two papers. It is an airborne radar that operates by transmitting and receiving radio waves to gather information about the Earth’s surface features. UAVSAR has collected data since 2007 in the following fields of study: vegetation, ice and glaciers, soil moisture, oceanography, earthquakes, volcanoes and others [160].
The Global Ecosystem Dynamics Investigation (GEDI) is a LiDAR instrument mounted on the International Space Station, designed to deliver high-resolution observations of Earth’s three-dimensional forest structure; it was used in two papers in this study. It captures data at a spatial resolution of approximately 30 m per footprint, with samples spaced every 60 m along its orbital path, and offers a vertical resolution of about 1 m [161].
It is worth noting that MAXAR satellite imagery was found in one paper. Although it offers a spatial resolution ranging from 30 cm to 1.2 m and supports spectral analysis across the visible, NIR and RedEdge bands [162], the high cost of the images can be a limiting factor for broader use.
In 20 papers, UAVs and other personal data sources were used. The spatial resolution of UAV imagery is typically very high, often better than satellite imagery, depending on the flight altitude, sensor quality and camera settings, with some of the most advanced cameras ensuring a GSD up to 3 cm [163].
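The altitude dependence mentioned above follows the standard photogrammetric relation for a nadir-looking frame camera, GSD = sensor pixel size × flight altitude / focal length. A small sketch with illustrative camera values (hypothetical, not taken from any reviewed study):

```python
# GSD (ground sampling distance) for a nadir-looking frame camera:
#   GSD = sensor pixel size * flight altitude / focal length
def gsd_cm(pixel_size_um: float, altitude_m: float, focal_length_mm: float) -> float:
    """Return the GSD in centimetres per pixel."""
    pixel_size_m = pixel_size_um * 1e-6
    focal_length_m = focal_length_mm * 1e-3
    return pixel_size_m * altitude_m / focal_length_m * 100.0

# An illustrative 2.4 um pixel with an 8.8 mm lens flown at 100 m gives
# ~2.7 cm/px; halving the altitude halves the GSD (finer ground detail).
print(f"{gsd_cm(2.4, 100.0, 8.8):.2f} cm/px")
```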
Other sources, each found in only one paper, include ESA’s RapidEye data, SRTM DTM mission, PolSAR data, UC Merced Land Use Dataset, ICESat-2, ISPRS Potsdam Benchmark dataset, DeepGlobe dataset, ZY-3 dataset, Fuyang remote sensing image dataset, NASA SMAP mission data, NWPU-RESISC dataset, Bing Maps aerial imagery, NASA MODIS data, Planet Fusion CubeSat imagery, NICFI satellite data, Pleiades imagery, VHR satellite imagery, LISS IV images, NASA EO-1 Hyperion sensor, NCALM sensor, QuickBird satellite imagery, IKONOS satellite imagery, ESA’s RADARSAT data, Xiongan dataset and SWISSIMAGE HIST 1946 data.

4.3.1. Land Use Mapping and Crop Detection

For land use mapping and crop detection, satellite imagery was by far the most preferred data type (Figure 20). Sentinel and Landsat data were the most used (Figure 21) due to their wide availability. Other satellite data sources were Gaofen, SPOT and WorldView, found in two papers each. Other types of data found in the SLR included radar data, UAV imagery, unspecified remote sensing imagery and LiDAR data. The sources of these data included NASA UAVSAR, NASA GEDI, UAVs and other personal sources, as well as other sources less frequently found in this study.

4.3.2. Crop Yield Monitoring

For crop yield monitoring, satellite imagery and UAV imagery were equally preferred (Figure 22), found in three papers each, while radar data was used in only one paper and one study used a stereo camera system. In this case, Sentinel data was also the most used (Figure 23) due to its availability and high resolution. Landsat, PlanetScope and NASA UAVSAR data were found in one paper each, while three studies used UAVs or other personal sources.

4.3.3. Flood-Prone Area Detection

For flood-prone area detection, radar data was preferred and used in two studies, while satellite and UAV imagery were found in one study each. For the data sources, Sentinel data was used in two studies, while Gaofen and UAV imagery were used in one study each.

4.3.4. Forest and Vegetation Monitoring

Satellite imagery was widely used for forest and vegetation monitoring and was found in 31 papers included in this SLR (Figure 24). UAV imagery was used in five papers, radar data in three papers and LiDAR data in two papers. In two studies, unspecified remote sensing images were used. Regarding the data sources (Figure 25), Sentinel and Landsat data were the most used, being found in 17 and 14 papers, respectively. Other satellite data sources included Gaofen, PlanetScope, WorldView and Maxar, which was used in only one paper. NASA GEDI data was used in one paper, while five other studies included UAV and other personal data sources.

4.3.5. Pest Disease Monitoring

For pest disease monitoring, satellite imagery was found in four papers, with Sentinel being the main data source, while UAV imagery was used in three papers and one study involved the use of another data source.

4.3.6. Droughts Prediction

While performing the initial analysis of the selected papers for this SLR, two papers regarding droughts prediction were found, which were excluded as they did not include the use of AI. In both papers, satellite imagery data sources Sentinel and Landsat were used.

4.3.7. Soil Content Analysis and Soil Production Capacity Detection

For soil content analysis and soil production capacity, satellite and UAV imagery were the most used data types (Figure 26), with Sentinel data found in three papers and Landsat data in one paper (Figure 27). Two papers included the use of radar data.

5. Discussion

During this systematic literature review, 472 papers from Scopus and 309 papers from Web of Science were identified. After removing 201 duplicates, an initial screening of titles and abstracts was conducted on the remaining records, leading to the exclusion of 426 papers. A full-text assessment was then performed on the remaining articles, resulting in the exclusion of an additional 46 papers. Ultimately, 108 studies met the inclusion criteria and were retained for the final review.
The first significant finding of the systematic literature review is the growing interest in the application of AI in remote sensing for agricultural research, with the number of relevant publications increasing from 8 in 2020 to 51 in 2024. In terms of geographical distribution, China stands out as the country with the highest number of studies. In contrast, regions such as Eastern Europe remain underrepresented in research on the use of AI and remote sensing in agriculture. This gap is particularly noteworthy given that countries like Romania, Ukraine and Bulgaria are recognized for their extensive arable lands and significant agricultural sectors.
Based on the search queries, to answer the first research question—“What are the main problems that can be solved through incorporating AI-driven classification techniques in the field of smart agriculture and environmental monitoring?” (Figure 28)—the papers were divided into the following categories: land use mapping and crop detection, crop yield monitoring, flood-prone area detection, forest and vegetation monitoring, pest disease monitoring, droughts prediction and soil content analysis and soil production capacity detection. The categories with the highest research interest were land use mapping and crop detection (44 papers) and forest and vegetation monitoring (41 papers). In contrast, fewer studies addressed soil content analysis and soil production capacity detection (eight papers), pest disease monitoring (six papers), crop yield monitoring (six papers) and flood-prone area detection (three papers). Notably, no studies meeting the inclusion criteria were found in the category of droughts prediction. This absence highlights a significant research gap and underscores the need for further studies in this area, especially considering the increasing impact of climate change on agricultural productivity. Furthermore, other categories, such as flood-prone area detection, pest disease monitoring, crop yield monitoring and soil content analysis and soil production capacity detection were also underrepresented. These findings highlight the need for more research in these areas to support sustainable and data-driven agricultural practices.
To answer the second research question—“What are the main methods and strategies used in this technology?” (Figure 29)—the AI algorithms found in this SLR were divided into the following categories: decision tree-based models, Convolutional Neural Networks and other Deep Neural Networks, Support Vector Machine, hybrid models, statistical models, other classic supervised learning models, Recurrent Neural Networks, transformer- and attention-based models, Fully Connected Neural Networks, clustering algorithms, optimization algorithms and generative models. Among these, decision tree-based models (found in 55 papers) and Convolutional Neural Networks and other Deep Neural Networks (found in 49 papers) were the most commonly employed, highlighting their large-scale applicability. The widespread use of CNNs, in particular, can be attributed to their ability to effectively process grid-like data structures, such as imagery. Hybrid models, found in 13 papers, are especially interesting, as they combine different AI techniques to overcome the limitations of using a single AI method. Regarding the implementation tools used for algorithm development and data processing, Python was the most commonly used programming language, mentioned in 45 papers. Its popularity is likely due to its open-source nature, extensive libraries for machine learning and remote sensing and overall versatility in handling large-scale data. Google Earth Engine was found in 23 papers in total, reflecting its relevance as a cloud-based platform, free for non-commercial use, for processing and analyzing large volumes of remote sensing data. Interestingly, ArcGIS was used in 21 papers, despite its high costs, highlighting its robust geospatial analysis capabilities and user-friendly interface, which, in addition to the available geoprocessing tools that simplify classification workflows, allows for Python script integration for advanced algorithm development.
The main methods used in each category were further studied. Decision tree-based models were the most preferred for land use mapping and crop detection, crop yield monitoring and soil content analysis and soil production capacity detection, while for the forest and vegetation monitoring and pest disease monitoring categories, Convolutional Neural Networks and other Deep Neural Networks were the most used. In the three papers on flood-prone area detection, one algorithm from each of the following four categories was found: CNNs and other DNNs, transformer- and attention-based models, decision tree-based models and SVM. Python was the most used environment for land use mapping and crop detection, crop yield monitoring, forest and vegetation monitoring and soil content analysis and soil production capacity detection; for flood-prone area detection, ArcGIS was the most used; and for pest disease monitoring, Agisoft was the most frequently found.
To answer the third research question—“What type of data can be used in this regard?” (Figure 30)—the data sources were classified into the following categories: satellite imagery, being by far the most frequently found type of data (78 papers); UAV imagery (found in 18 papers); radar data (found in 14 papers); unspecified remote sensing imagery (used in 5 papers); LiDAR data (found in 4 papers) and a stereo camera system (found in 1 paper). The widespread use of satellite imagery highlights its accessibility, broad spatial coverage and the availability of free and high-resolution datasets, making it a preferred choice for large-scale agricultural monitoring, while UAV data is preferred for local, more accurate observations. Overall, the open-source Sentinel data was the most used, found in 43 papers, due to its wide availability and high spatial and temporal resolutions, followed by the Landsat data, used in 31 papers, with a slightly lower spatial resolution. This type of data can support multi-temporal analyses, benefitting from a long historical archive. Other high-resolution data sources, such as QuickBird, were used in some studies, but their applicability is often limited by their high acquisition costs and restricted temporal coverage. In contrast, Sentinel and Landsat provide free, openly accessible data with global coverage, making them more practical for large-scale and multi-temporal studies, which explains their prevalence in the literature. Satellite imagery was the preferred data source for land use mapping and crop detection, forest and vegetation monitoring and pest disease monitoring. For crop yield monitoring and soil content analysis and soil production capacity detection, satellite and UAV imagery were equally preferred, while for flood-prone area detection, radar data was the most frequently used, due to its capacity to penetrate clouds and operate in all kinds of weather conditions.
In all categories, Sentinel data was the most used data source, due to its availability, lack of costs, high resolution and versatility.

6. Conclusions

This systematic literature review highlights the growing interest in research on the integration of AI and remote sensing techniques into agriculture from 2020 to 2024. The increase in publications reflects a rising global interest in developing tools to address agricultural and environmental challenges. This review revealed that the most studied topics were land use mapping and crop detection and forest and vegetation monitoring, where AI and remote sensing proved to be highly effective. However, notable research gaps were identified, particularly in droughts prediction; these gaps are especially relevant in the context of climate change and food security.
CNNs and DT-based models emerged as the most frequently used AI techniques in the analyzed studies. Their popularity is likely due to their ability to handle the complex and large datasets commonly used in remote sensing applications. Satellite imagery—particularly from open-access platforms such as Sentinel and Landsat—played an important role in these analyses, offering high-spatial, -spectral and -temporal resolutions. The accessibility and continuity of satellite data make it especially relevant for monitoring agricultural dynamics, providing up-to-date insights into land use changes and crop health.
In Table 5, the most used algorithms, software products, data type and data source for each category are highlighted, indicating the current trends in land use mapping and crop detection, crop yield monitoring, flood-prone area detection, forest and vegetation monitoring, pest disease monitoring, droughts prediction and soil content analysis and soil production capacity detection.
By facilitating smarter, more efficient and data-driven agricultural practices, the integration of AI and remote sensing contributes directly to the achievement of several SDGs, most directly SDG 2 (Zero Hunger). The findings are also relevant for SDG 6 (Clean Water and Sanitation), as remote sensing and AI techniques applied to flood-prone area detection and droughts prediction can improve water resource management and reduce the risks associated with water scarcity or excess. They contribute to SDG 9 (Industry, Innovation and Infrastructure), as digital technologies, AI-driven analyses and open-source satellite platforms such as Sentinel and Landsat can foster innovation in agricultural monitoring and support the development of resilient digital infrastructures for sustainable farming. Furthermore, the subject aligns with SDG 12 (Responsible Consumption and Production), since AI-based tools for soil content analysis, crop yield monitoring and pest and disease monitoring provide insights that encourage efficient resource use, minimize waste and support sustainable production systems. Finally, the study is closely linked to SDG 13 (Climate Action), as AI and remote sensing are essential for monitoring climate-related issues, such as droughts and floods.
Possible limitations of this SLR include the exclusive use of Scopus and Web of Science databases for the literature retrieval, which may have resulted in the omission of relevant studies indexed elsewhere. Additionally, only papers written in English were considered, potentially excluding relevant research published in other languages. This review focused solely on publications from 2020 onwards, and therefore may not have captured earlier foundational work in this field. During the selection process, some topics—such as mapping wildfires with AI—were excluded for being outside the scope of the SLR. Other subjects, such as the influence of wildlife on agricultural activities, were not considered in our search strategy. While agricultural activities can indeed be influenced by wildlife from surrounding forests, and AI algorithms are increasingly being applied for wildlife detection [164,165], this line of research does not necessarily rely on remote sensing data.
Finally, we observed an uneven geographic representation of the studies, with certain regions, particularly Eastern Europe, remaining underrepresented. While this may partly reflect our restriction to Scopus and WoS, it could also result from differences in policy support, research funding and national priorities, as well as from variations in data availability and openness: open-access satellite imagery and AI-ready datasets are more widely used in regions with established digital infrastructure and strong governmental support for smart agriculture initiatives.
To support the wider adoption of AI and remote sensing for sustainable agriculture, particularly in developing countries, several policy measures can be considered. Governments and institutions can promote the use of free, open-access satellite data such as Sentinel and Landsat, which provide high-quality imagery without significant financial investment. Policies that encourage data sharing, collaboration between research institutions and public–private partnerships can further enhance access to necessary infrastructure and expertise. By adopting such measures, developing countries can implement low-cost remote sensing and AI technologies to improve agricultural and vegetation monitoring, optimize resource use and increase resilience to climate-related risks.
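As an illustration of the kind of low-cost analysis that free Sentinel and Landsat imagery enables, NDVI can be computed directly from the red and near-infrared reflectance bands of a scene. The sketch below uses plain Python with illustrative reflectance values; the interpretive comments are rough rules of thumb, not calibrated class boundaries.

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index for a single pixel.

    nir, red: surface reflectance in the near-infrared and red bands
    (e.g., Sentinel-2 B8 and B4, scaled to 0..1); eps avoids division
    by zero over very dark pixels.
    """
    return (nir - red) / (nir + red + eps)

# Illustrative (NIR, red) reflectance pairs: healthy vegetation reflects
# strongly in the NIR, bare soil reflects similarly in both bands, and
# water absorbs NIR, driving NDVI negative.
pixels = [(0.45, 0.08), (0.22, 0.18), (0.05, 0.12)]
values = [round(ndvi(n, r), 2) for n, r in pixels]
print(values)  # → [0.7, 0.1, -0.41]
```

In practice the same computation is applied per-pixel across whole rasters (e.g., with NumPy or Google Earth Engine), which is exactly the kind of workflow that open imagery makes possible without licensing costs.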
We propose that future work focus on filling the identified research gaps and on continuing to promote open-source data and tools, particularly in underrepresented regions, to ensure more efficient and sustainable agriculture.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/su17198526/s1, PRISMA 2020 Checklist. Reference [166] is cited in the Supplementary Materials.

Author Contributions

Conceptualization, V.A.N., G.B., A.C.B. and A.P.G.; methodology, V.A.N. and A.P.G.; validation, G.B. and A.C.B.; formal analysis, V.A.N., G.B., A.C.B. and A.P.G.; investigation, V.A.N. and A.P.G.; resources, V.A.N.; data curation, V.A.N.; writing—original draft preparation, V.A.N.; writing—review and editing, V.A.N., G.B., A.C.B. and A.P.G.; visualization, V.A.N., G.B., A.C.B. and A.P.G.; supervision, G.B. and A.C.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

This study was conducted within the Geodetic Engineering Measurements and Spatial Data Infrastructures Research Centre, Faculty of Geodesy, Technical University of Civil Engineering Bucharest.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
ANN: Artificial Neural Network
ARIMA: Autoregressive Integrated Moving Average
BETAM: Border-Enhanced Triple Attention Mechanism
CART: Classification and Regression Tree
CHEOS: China High-Resolution Earth Observation System
CNN: Convolutional Neural Network
CRRF: Crop-Residue Random Forest
DL: Deep Learning
DNN: Deep Neural Network
DT: Decision Tree
EC: European Commission
ERF: Extreme Random Forest
ESA: European Space Agency
FAO: Food and Agriculture Organization
FCNN: Fully Connected Neural Network
GAN: Generative Adversarial Network
GA-RF: Genetic Algorithm-Optimized Random Forest
GEDI: Global Ecosystem Dynamics Investigation
GEE: Google Earth Engine
GF: Gaofen
GLM: Generalized Linear Model
GNB: Gaussian Naïve Bayes
GSD: Ground Sampling Distance
GTB: Gradient Tree Boost
IF: Isolation Forest
IoT: Internet of Things
kNN: k-Nearest Neighbor
LASSO: Least Absolute Shrinkage and Selection Operator
LiDAR: Light Detection and Ranging
LR: Linear Regression
LSTM: Long Short-Term Memory
LVQ: Learning Vector Quantization
ML: Machine Learning
MLP: Multilayer Perceptron
MSI: Multispectral Instrument
NB: Normal Bayes
NDVI: Normalized Difference Vegetation Index
NIR: Near Infrared
NN: Nearest Neighbor
OC-SVM: One-Class Support Vector Machine
OG-WOA: Optimal Guidance-Whale Optimization Algorithm
OLI: Operational Land Imager
OTB: Orfeo Toolbox
OUDN: Object-Based U-Net-DenseNet
PMC: Panchromatic and Multispectral Camera
QDA: Quadratic Discriminant Analysis
QHPO: Quantum Hippopotamus Optimization
RF: Random Forest
RNN: Recurrent Neural Network
RSS-GAN: Remote Sensing Super-Resolution Generative Adversarial Network
SAR: Synthetic Aperture Radar
SCP: Semi-Automated Classification Plugin
SDG: Sustainable Development Goal
SEBAL: Surface Energy Balance Algorithm for Land
SLR: Systematic Literature Review
SNAP: Sentinel Application Platform
SPOT: Satellite pour l’Observation de la Terre
SS-RF: Stochastic Spatial Random Forest
SVM: Support Vector Machine
SWIR: Shortwave Infrared
TIR: Thermal Infrared
UAV: Unmanned Aerial Vehicle
UAVSTAR: Uninhabited Aerial Vehicle Synthetic Aperture Radar

References

  1. Huo, D.; Malik, A.W.; Ravana, S.D.; Rahman, A.U.; Ahmedy, I. Mapping Smart Farming: Addressing Agricultural Challenges in Data-Driven Era. Renew. Sustain. Energy Rev. 2024, 189, 113858. [Google Scholar] [CrossRef]
  2. THE 17 GOALS|Sustainable Development. Available online: https://sdgs.un.org/goals (accessed on 9 July 2025).
  3. Maroli, A.; Narwane, V.S.; Gardas, B.B. Applications of IoT for Achieving Sustainability in Agricultural Sector: A Comprehensive Review. J. Environ. Manag. 2021, 298, 113488. [Google Scholar] [CrossRef]
  4. Javaid, M.; Haleem, A.; Singh, R.P.; Suman, R. Enhancing Smart Farming through the Applications of Agriculture 4.0 Technologies. Int. J. Intell. Netw. 2022, 3, 150–164. [Google Scholar] [CrossRef]
  5. CAP at a Glance—European Commission. Available online: https://agriculture.ec.europa.eu/common-agricultural-policy/cap-overview/cap-glance_en (accessed on 9 July 2025).
  6. van der Velden, D.; Klerkx, L.; Dessein, J.; Debruyne, L. Governance by Satellite: Remote Sensing, Bureaucrats and Agency in the Common Agricultural Policy of the European Union. J. Rural. Stud. 2025, 114, 103558. [Google Scholar] [CrossRef]
  7. Cappelli, S.L.; Domeignoz-Horta, L.A.; Loaiza, V.; Laine, A.-L. Plant Biodiversity Promotes Sustainable Agriculture Directly and via Belowground Effects. Trends Plant Sci. 2022, 27, 674–687. [Google Scholar] [CrossRef] [PubMed]
  8. Lahssini, K.; Teste, F.; Dayal, K.R.; Durrieu, S.; Ienco, D.; Monnet, J.-M. Combining LiDAR Metrics and Sentinel-2 Imagery to Estimate Basal Area and Wood Volume in Complex Forest Environment via Neural Networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 4337–4348. [Google Scholar] [CrossRef]
  9. Pereira, P.; Brevik, E.C.; Muñoz-Rojas, M.; Miller, B.A.; Smetanova, A.; Depellegrin, D.; Misiune, I.; Novara, A.; Cerdà, A. Chapter 2—Soil Mapping and Processes Modeling for Sustainable Land Management. In Soil Mapping and Process Modeling for Sustainable Land Use Management; Pereira, P., Brevik, E.C., Muñoz-Rojas, M., Miller, B.A., Eds.; Elsevier: Amsterdam, The Netherlands, 2017; pp. 29–60. ISBN 978-0-12-805200-6. [Google Scholar]
  10. Weiss, M.; Jacob, F.; Duveiller, G. Remote Sensing for Agricultural Applications: A Meta-Review. Remote Sens. Environ. 2020, 236, 111402. [Google Scholar] [CrossRef]
  11. Sishodia, R.P.; Ray, R.L.; Singh, S.K. Applications of Remote Sensing in Precision Agriculture: A Review. Remote Sens. 2020, 12, 3136. [Google Scholar] [CrossRef]
  12. Verma, Y.; Verma, A.; Chatterjee, S.; Sagar, P. Remote Sensing Applications in Agriculture. In Recent Trends in Agriculture; Integrated Publications: New Delhi, India, 2024; pp. 159–180. ISBN 978-93-5834-170-6. [Google Scholar]
  13. Wang, J.; Wang, Y.; Li, G.; Qi, Z. Integration of Remote Sensing and Machine Learning for Precision Agriculture: A Comprehensive Perspective on Applications. Agronomy 2024, 14, 1975. [Google Scholar] [CrossRef]
  14. Jung, J.; Maeda, M.; Chang, A.; Bhandari, M.; Ashapure, A.; Landivar-Bowles, J. The Potential of Remote Sensing and Artificial Intelligence as Tools to Improve the Resilience of Agriculture Production Systems. Curr. Opin. Biotechnol. 2021, 70, 15–22. [Google Scholar] [CrossRef]
  15. Sustainable Development Report 2025. Available online: https://dashboards.sdgindex.org/ (accessed on 9 July 2025).
  16. Interlinkages—Goals|KnowSDGs. Available online: https://knowsdgs.jrc.ec.europa.eu/interlinkages/goals (accessed on 9 July 2025).
  17. Badea, A.C. Overview of the Sustainable Development Goals in Action and on the Involvement of UTCB in Sustainability Projects. EU-CONEXUS Seminar, 2025. Available online: https://www.eu-conexus.eu/en/joint-research-institutes-seminars/overview-of-the-sustainable-development-goals-in-action-and-on-the-involvement-of-utcb-in-sustainability-projects/ (accessed on 16 June 2025).
  18. Agricultural Production—Crops. Available online: https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Agricultural_production_-_crops (accessed on 9 July 2025).
  19. Parra-López, C.; Ben Abdallah, S.; Garcia-Garcia, G.; Hassoun, A.; Trollman, H.; Jagtap, S.; Gupta, S.; Aït-Kaddour, A.; Makmuang, S.; Carmona-Torres, C. Digital Technologies for Water Use and Management in Agriculture: Recent Applications and Future Outlook. Agric. Water Manag. 2025, 309, 109347. [Google Scholar] [CrossRef]
  20. Wang, X.; Zeng, H.; Yang, X.; Shu, J.; Wu, Q.; Que, Y.; Yang, X.; Yi, X.; Khalil, I.; Zomaya, A.Y. Remote Sensing Revolutionizing Agriculture: Toward a New Frontier. Future Gener. Comput. Syst. 2025, 166, 107691. [Google Scholar] [CrossRef]
  21. Bilal, A.; Liu, X.; Long, H.; Shafiq, M.; Waqar, M. Increasing Crop Quality and Yield with a Machine Learning-Based Crop Monitoring System. Comput. Mater. Contin. 2023, 76, 2401–2426. [Google Scholar] [CrossRef]
  22. LavagnedOrtigue, O. (ESS) Land Statistics and Indicators 2000–2021. Available online: https://openknowledge.fao.org/server/api/core/bitstreams/5c8b2707-1bcf-4c29-90e2-3487e583f71e/content (accessed on 9 July 2025).
  23. Sustainable Development. Available online: https://sdgs.un.org/ (accessed on 9 July 2025).
  24. Goal 2|Department of Economic and Social Affairs. Available online: https://sdgs.un.org/goals/goal2#targets_and_indicators (accessed on 9 July 2025).
  25. Nevavuori, P.; Narra, N.; Linna, P.; Lipping, T. Crop Yield Prediction Using Multitemporal UAV Data and Spatio-Temporal Deep Learning Models. Remote Sens. 2020, 12, 4000. [Google Scholar] [CrossRef]
  26. Assessing Flood Impact on Agricultural Production. Available online: https://openknowledge.fao.org/server/api/core/bitstreams/89ae84da-5b93-464a-87b4-4d99c3083528/content (accessed on 9 July 2025).
  27. Atchyuth, B.A.S.; Swain, R.; Das, P. Near Real-Time Flood Inundation and Hazard Mapping of Baitarani River Basin Using Google Earth Engine and SAR Imagery. Environ. Monit. Assess. 2023, 195, 1331. [Google Scholar] [CrossRef]
  28. Liman Harou, I.; Whitney, C.; Kung’u, J.; Luedeling, E. Mapping Flood-Based Farming Systems with Bayesian Networks. Land 2020, 9, 369. [Google Scholar] [CrossRef]
  29. Duriaux Chavarría, J.-Y.; Baudron, F.; Sunderland, T. Retaining Forests within Agricultural Landscapes as a Pathway to Sustainable Intensification: Evidence from Southern Ethiopia. Agric. Ecosyst. Environ. 2018, 263, 41–52. [Google Scholar] [CrossRef]
  30. Fragoso-Medina, M.d.C.; Navarrete-Segueda, A.; Ceccon, E.; Martínez-Ramos, M. Effects of the Forests-Agriculture Conversion on the Availability and Diversity of Forest Products in a Neotropical Rainforest Region. Trees For. People 2024, 15, 100481. [Google Scholar] [CrossRef]
  31. FAO Forests and Forest Soils: An Essential Contribution to Agricultural Production and Global Food Security. Available online: http://www.fao.org/soils-portal (accessed on 9 July 2025).
  32. Zhang, J.; Cong, S.; Zhang, G.; Ma, Y.; Zhang, Y.; Huang, J. Detecting Pest-Infested Forest Damage through Multispectral Satellite Imagery and Improved UNet++. Sensors 2022, 22, 7440. [Google Scholar] [CrossRef]
  33. Alkan, E.; Aydin, A. Machine Learning-Based Prediction of Insect Damage Spread Using Auto-ARIMA Model. Croat. J. For. Eng. J. Theory Appl. For. Eng. 2024, 45, 351–364. [Google Scholar] [CrossRef]
  34. Villani, G.; Tomei, F.; Pavan, V.; Pirola, A.; Spisni, A.; Marletto, V. The iCOLT Climate Service: Seasonal Predictions of Irrigation for Emilia-Romagna, Italy. Meteorol. Appl. 2021, 28, e2007. [Google Scholar] [CrossRef]
  35. Laipelt, L.; Rossi, J.B.; de Andrade, B.C.; Scherer-Warren, M.; Ruhoff, A. Assessing Evapotranspiration Changes in Response to Cropland Expansion in Tropical Climates. Remote Sens. 2024, 16, 3404. [Google Scholar] [CrossRef]
  36. Singh, T.; Kundroo, M.; Kim, T. WSN-Driven Advances in Soil Moisture Estimation: A Machine Learning Approach. Electronics 2024, 13, 1590. [Google Scholar] [CrossRef]
  37. Salmanpour, A.; Jamshidi, M.; Fatehi, S.; Ghanbarpouri, M.; Mirzavand, J. Assessment of Macronutrients Status Using Digital Soil Mapping Techniques: A Case Study in Maru’ak Area in Lorestan Province, Iran. Environ. Monit. Assess. 2023, 195, 513. [Google Scholar] [CrossRef] [PubMed]
  38. Budak, M.; Günal, E.; Kılıç, M.; Çelik, İ.; Sırrı, M.; Acir, N. Improvement of Spatial Estimation for Soil Organic Carbon Stocks in Yuksekova Plain Using Sentinel 2 Imagery and Gradient Descent–Boosted Regression Tree. Environ. Sci. Pollut. Res. 2023, 30, 53253–53274. [Google Scholar] [CrossRef]
  39. Chen, J.; Du, H.; Mao, F.; Huang, Z.; Chen, C.; Hu, M.; Li, X. Improving Forest Age Prediction Performance Using Ensemble Learning Algorithms Base on Satellite Remote Sensing Data. Ecol. Indic. 2024, 166, 112327. [Google Scholar] [CrossRef]
  40. Ameslek, O.; Zahir, H.; Latifi, H.; Bachaoui, E.M. Combining OBIA, CNN, and UAV Imagery for Automated Detection and Mapping of Individual Olive Trees. Smart Agric. Technol. 2024, 9, 100546. [Google Scholar] [CrossRef]
  41. Trentin, C.; Ampatzidis, Y.; Lacerda, C.; Shiratsuchi, L. Tree Crop Yield Estimation and Prediction Using Remote Sensing and Machine Learning: A Systematic Review. Smart Agric. Technol. 2024, 9, 100556. [Google Scholar] [CrossRef]
  42. Vasilakos, C.; Kavroudakis, D.; Georganta, A. Machine Learning Classification Ensemble of Multitemporal Sentinel-2 Images: The Case of a Mixed Mediterranean Ecosystem. Remote Sens. 2020, 12, 2005. [Google Scholar] [CrossRef]
  43. Li, Z.; Chen, B.; Wu, S.; Su, M.; Chen, J.M.; Xu, B. Deep Learning for Urban Land Use Category Classification: A Review and Experimental Assessment. Remote Sens. Environ. 2024, 311, 114290. [Google Scholar] [CrossRef]
  44. Wang, G.; Chen, J.; Mo, L.; Wu, P.; Yi, X. Border-Enhanced Triple Attention Mechanism for High-Resolution Remote Sensing Images and Application to Land Cover Classification. Remote Sens. 2024, 16, 2814. [Google Scholar] [CrossRef]
  45. Multilayer Perceptron—An Overview|ScienceDirect Topics. Available online: https://www.sciencedirect.com/topics/computer-science/multilayer-perceptron (accessed on 9 July 2025).
  46. Gevaert, C.M.; Belgiu, M. Assessing the Generalization Capability of Deep Learning Networks for Aerial Image Classification Using Landscape Metrics. Int. J. Appl. Earth Obs. Geoinf. 2022, 114, 103054. [Google Scholar] [CrossRef]
  47. Vinaykumar, V.N.; Babu, J.A.; Frnda, J. Optimal Guidance Whale Optimization Algorithm and Hybrid Deep Learning Networks for Land Use Land Cover Classification. EURASIP J. Adv. Signal Process. 2023, 2023, 13. [Google Scholar] [CrossRef]
  48. Albarakati, H.M.; Khan, M.A.; Hamza, A.; Khan, F.; Kraiem, N.; Jamel, L.; Almuqren, L.; Alroobaea, R. A Novel Deep Learning Architecture for Agriculture Land Cover and Land Use Classification from Remote Sensing Images Based on Network-Level Fusion of Self-Attention Architecture. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 6338–6353. [Google Scholar] [CrossRef]
  49. Kong, J.; Ryu, Y.; Jeong, S.; Zhong, Z.; Choi, W.; Kim, J.; Lee, K.; Lim, J.; Jang, K.; Chun, J.; et al. Super Resolution of Historic Landsat Imagery Using a Dual Generative Adversarial Network (GAN) Model with CubeSat Constellation Imagery for Spatially Enhanced Long-Term Vegetation Monitoring. ISPRS J. Photogramm. Remote Sens. 2023, 200, 1–23. [Google Scholar] [CrossRef]
  50. PyTorch. Available online: https://pytorch.org/ (accessed on 9 July 2025).
  51. Hamidi, M.; Homayouni, S.; Safari, A.; Hasani, H. Deep Learning Based Crop-Type Mapping Using SAR and Optical Data Fusion. Int. J. Appl. Earth Obs. Geoinf. 2024, 129, 103860. [Google Scholar] [CrossRef]
  52. Snevajs, H.; Charvat, K.; Onckelet, V.; Kvapil, J.; Zadrazil, F.; Kubickova, H.; Seidlova, J.; Batrlova, I. Crop Detection Using Time Series of Sentinel-2 and Sentinel-1 and Existing Land Parcel Information Systems. Remote Sens. 2022, 14, 1095. [Google Scholar] [CrossRef]
  53. Gackstetter, D.; Körner, M.; Yu, K. Approaching Holistic Crop Type Mapping in Europe through Winter Vegetation Classification and the Hierarchical Crop and Agriculture Taxonomy. Int. J. Appl. Earth Obs. Geoinf. 2024, 134, 104159. [Google Scholar] [CrossRef]
  54. Di Tommaso, S.; Wang, S.; Vajipey, V.; Gorelick, N.; Strey, R.; Lobell, D.B. Annual Field-Scale Maps of Tall and Short Crops at the Global Scale Using GEDI and Sentinel-2. Remote Sens. 2023, 15, 4123. [Google Scholar] [CrossRef]
  55. Zhang, Z.; Wang, S.; Zhang, W. Dilated Spectral–Spatial Gaussian Transformer Net for Hyperspectral Image Classification. Remote Sens. 2024, 16, 287. [Google Scholar] [CrossRef]
  56. Yue, J.; Tian, Q. Estimating Fractional Cover of Crop, Crop Residue, and Soil in Cropland Using Broadband Remote Sensing Data and Machine Learning. Int. J. Appl. Earth Obs. Geoinf. 2020, 89, 102089. [Google Scholar] [CrossRef]
  57. Alotaibi, Y.; Rajendran, B.; Rani, K.G.; Rajendran, S. Dipper Throated Optimization with Deep Convolutional Neural Network-Based Crop Classification for Remote Sensing Image Analysis. PeerJ Comput. Sci. 2024, 10, e1828. [Google Scholar] [CrossRef]
  58. Aljanabi, F.; Dedeoglu, M.; Şeker, C. Environmental Monitoring of Land Use/Land Cover by Integrating Remote Sensing and Machine Learning Algorithms. J. Eng. Sustain. Dev. 2024, 28, 455–466. [Google Scholar] [CrossRef]
  59. Sachit, M.; Khazael, S.; Atiyah, H. Evaluating the Performance of Supervised Classification Algorithms in Classifying SENTINEL-2A Images on Land Cover and Land Uses of a Part of Baghdad, Iraq. Iraqi J. Sci. 2024, 65, 7349–7363. [Google Scholar] [CrossRef]
  60. Atef, I.; Ahmed, W.; Abdel-Maguid, R.H. Modelling of Land Use Land Cover Changes Using Machine Learning and GIS Techniques: A Case Study in El-Fayoum Governorate, Egypt. Environ. Monit. Assess. 2023, 195, 637. [Google Scholar] [CrossRef]
  61. de Souza, F.E.S.; Rodrigues, J.I.d.J. Evaluation of Machine Learning Algorithms in the Classification of Multispectral Images from the Sentinel-2A/2B Orbital Sensor for Mapping the Environmental Dynamics of Ria Formosa (Algarve, Portugal). ISPRS Int. J. Geo-Inf. 2023, 12, 361. [Google Scholar] [CrossRef]
  62. Mhanna, S.; Halloran, L.J.S.; Zwahlen, F.; Asaad, A.H.; Brunner, P. Using Machine Learning and Remote Sensing to Track Land Use/Land Cover Changes Due to Armed Conflict. Sci. Total Environ. 2023, 898, 165600. [Google Scholar] [CrossRef]
  63. Lukas, P.; Melesse, A.M.; Kenea, T.T. Prediction of Future Land Use/Land Cover Changes Using a Coupled CA-ANN Model in the Upper Omo–Gibe River Basin, Ethiopia. Remote Sens. 2023, 15, 1148. [Google Scholar] [CrossRef]
  64. Kutz, K.; Cook, Z.; Linderman, M. Object Based Classification of a Riparian Environment Using Ultra-High Resolution Imagery, Hierarchical Landcover Structures, and Image Texture. Sci. Rep. 2022, 12, 11291. [Google Scholar] [CrossRef]
  65. Ali, U.; Esau, T.J.; Farooque, A.A.; Zaman, Q.U.; Abbas, F.; Bilodeau, M.F. Limiting the Collection of Ground Truth Data for Land Use and Land Cover Maps with Machine Learning Algorithms. ISPRS Int. J. Geo-Inf. 2022, 11, 333. [Google Scholar] [CrossRef]
  66. Lin, M.-H.; Lin, Y.-T.; Tsai, M.-L.; Chen, Y.-Y.; Chen, Y.-C.; Wang, H.-C.; Wang, C.-K. Mapping Land-Use and Land-Cover Changes through the Integration of Satellite and Airborne Remote Sensing Data. Environ. Monit. Assess. 2024, 196, 246. [Google Scholar] [CrossRef]
  67. Ojwang, G.O.; Ogutu, J.O.; Said, M.Y.; Ojwala, M.A.; Kifugo, S.C.; Verones, F.; Graae, B.J.; Buitenwerf, R.; Olff, H. An Integrated Hierarchical Classification and Machine Learning Approach for Mapping Land Use and Land Cover in Complex Social-Ecological Systems. Front. Remote Sens. 2024, 4, 1188635. [Google Scholar] [CrossRef]
  68. Barboza, E.; Salazar, W.; Gálvez-Paucar, D.; Valqui Valqui, L.; Valqui Valqui, L.; Zagaceta, L.; Malca, J.; Vásquez, H.; Arbizu, C. Cloud Computing Application for the Analysis of Land Use and Land Cover Changes in Dry Forests of Peru. Int. J. Environ. Impacts 2024, 7, 505–514. [Google Scholar] [CrossRef]
  69. Ha, T.V.; Tuohy, M.; Irwin, M.; Tuan, P.V. Monitoring and Mapping Rural Urbanization and Land Use Changes Using Landsat Data in the Northeast Subtropical Region of Vietnam. Egypt. J. Remote Sens. Space Sci. 2020, 23, 11–19. [Google Scholar] [CrossRef]
  70. Dantas, C.F.; Gaetano, R.; Paris, C.; Ienco, D. Reuse Out-of-Year Data to Enhance Land Cover Mapping via Feature Disentanglement and Contrastive Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 1681–1694. [Google Scholar] [CrossRef]
  71. Rousset, G.; Despinoy, M.; Schindler, K.; Mangeas, M. Assessment of Deep Learning Techniques for Land Use Land Cover Classification in Southern New Caledonia. Remote Sens. 2021, 13, 2257. [Google Scholar] [CrossRef]
  72. Ouchra, H.; Belangour, A.; Erraissi, A. Comparison of Machine Learning Methods for Satellite Image Classification: A Case Study of Casablanca Using Landsat Imagery and Google Earth Engine. J. Environ. Earth Sci. 2023, 5, 118–134. [Google Scholar] [CrossRef]
  73. Rajesh, S.; Nisia, T.G.; Arivazhagan, S.; Abisekaraj, R. Land Cover/Land Use Mapping of LISS IV Imagery Using Object-Based Convolutional Neural Network with Deep Features. J. Indian Soc. Remote Sens. 2020, 48, 145–154. [Google Scholar] [CrossRef]
  74. Wang, J.; Feng, Y.; Tong, X.; Lei, Z.; Xi, M.; Zhou, Y.; Tang, P. Classification Evaluation and Improvement of Airborne PolSAR Images for Land Use Mapping Using Deep Learning. Geocarto Int. 2024, 39, 2401937. [Google Scholar] [CrossRef]
  75. Fayaz, M.; Nam, J.; Dang, L.M.; Song, H.-K.; Moon, H. Land-Cover Classification Using Deep Learning with High-Resolution Remote-Sensing Imagery. Appl. Sci. 2024, 14, 1844. [Google Scholar] [CrossRef]
  76. Hu, K.; Zhang, E.; Dai, X.; Xia, M.; Zhou, F.; Weng, L.; Lin, H. MCSGNet: A Encoder–Decoder Architecture Network for Land Cover Classification. Remote Sens. 2023, 15, 2810. [Google Scholar] [CrossRef]
  77. Sahu, M.; Dash, R. Cognitive Land Cover Mapping: A Three-Layer Deep Learning Architecture for Remote Sensing Data Classification. Environ. Chall. 2024, 15, 100876. [Google Scholar] [CrossRef]
  78. Li, X.; Zhao, H.; Wu, D.; Liu, Q.; Tang, R.; Li, L.; Xu, Z.; Lyu, X. SLMFNet: Enhancing Land Cover Classification of Remote Sensing Images through Selective Attentions and Multi-Level Feature Fusion. PLoS ONE 2024, 19, e0301134. [Google Scholar] [CrossRef]
  79. Xiao, J.; Cheng, T.; Chen, D.; Chen, H.; Li, N.; Lu, Y.; Cheng, L. Robust Land Cover Classification with Local–Global Information Decoupling to Address Remote Sensing Anomalous Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 5774–5789. [Google Scholar] [CrossRef]
  80. Hao, M.; Dong, X.; Jiang, D.; Yu, X.; Ding, F.; Zhuo, J. Land-Use Classification Based on High-Resolution Remote Sensing Imagery and Deep Learning Models. PLoS ONE 2024, 19, e0300473. [Google Scholar] [CrossRef] [PubMed]
  81. Masolele, R.; De Sy, V.; Marcos, D.; Verbesselt, J.; Gieseke, F.; Mulatu, K.; Moges, Y.; Sebrala Ahmed, H.; Martius, C.; Herold, M. Using High-Resolution Imagery and Deep Learning to Classify Land-Use Following Deforestation: A Case Study in Ethiopia. GIScience Remote Sens. 2022, 59, 1446–1472. [Google Scholar] [CrossRef]
  82. Wang, S.; Zou, X.; Li, K.; Xing, J.; Cao, T.; Tao, P. Towards Robust Pansharpening: A Large-Scale High-Resolution Multi-Scene Dataset and Novel Approach. Remote Sens. 2024, 16, 2899. [Google Scholar] [CrossRef]
  83. Nigar, A.; Li, Y.; Jat Baloch, M.Y.; Alrefaei, A.F.; Almutairi, M.H. Comparison of Machine and Deep Learning Algorithms Using Google Earth Engine and Python for Land Classifications. Front. Environ. Sci. 2024, 12, 1378443. [Google Scholar] [CrossRef]
  84. Ali, K.; Johnson, B.A. Land-Use and Land-Cover Classification in Semi-Arid Areas from Medium-Resolution Remote-Sensing Imagery: A Deep Learning Approach. Sensors 2022, 22, 8750. [Google Scholar] [CrossRef]
  85. Aljebreen, M.; Mengash, H.A.; Alamgeer, M.; Alotaibi, S.S.; Salama, A.S.; Hamza, M.A. Land Use and Land Cover Classification Using River Formation Dynamics Algorithm with Deep Learning on Remote Sensing Images. IEEE Access 2024, 12, 11147–11156. [Google Scholar] [CrossRef]
  86. Zaabar, N.; Niculescu, S.; Kamel, M.M. Application of Convolutional Neural Networks with Object-Based Image Analysis for Land Cover and Land Use Mapping in Coastal Areas: A Case Study in Ain Témouchent, Algeria. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 5177–5189. [Google Scholar] [CrossRef]
  87. Mahmoud, R.; Hassanin, M.; Al Feel, H.; Badry, R.M. Machine Learning-Based Land Use and Land Cover Mapping Using Multi-Spectral Satellite Imagery: A Case Study in Egypt. Sustainability 2023, 15, 9467. [Google Scholar] [CrossRef]
  88. Lemenkova, P. Support Vector Machine Algorithm for Mapping Land Cover Dynamics in Senegal, West Africa, Using Earth Observation Data. Earth 2024, 5, 420–462. [Google Scholar] [CrossRef]
  89. Zhang, L.; Xu, T.; Zeng, C.; Hao, Q.; Chen, Z.; Liang, X. Semantic-Aware Contrastive Adaptation Bridges Domain Discrepancy for Unsupervised Remote Sensing. IEEE Access 2024, 12, 140734–140747. [Google Scholar] [CrossRef]
  90. Mienye, I.D.; Swart, T.G.; Obaido, G. Recurrent Neural Networks: A Comprehensive Review of Architectures, Variants, and Applications. Information 2024, 15, 517. [Google Scholar] [CrossRef]
  91. Ouchra, H.; Belangour, A.; Erraissi, A. Comparing Unsupervised Land Use Classification of Landsat 8 OLI Data Using K-Means and LVQ Algorithms in Google Earth Engine: A Case Study of Casablanca. Int. J. Geoinform. 2023, 19, 83–92. [Google Scholar] [CrossRef]
  92. Diack, I.; Diene, S.M.; Louise, L.; Aziz, D.A.; Benjamin, H.; Olivier, R.; Philippe, L.; Alain, A.; Idrissa, S.; Moussa, D. Combining UAV and Sentinel-2 Imagery for Estimating Millet FCover in a Heterogeneous Agricultural Landscape of Senegal. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 7305–7322. [Google Scholar] [CrossRef]
  93. Kobe, M.; Elias, M.; Merbach, I.; Schädler, M.; Bumberger, J.; Pause, M.; Mollenhauer, H. Automated Workflow for High-Resolution 4D Vegetation Monitoring Using Stereo Vision. Remote Sens. 2024, 16, 541. [Google Scholar] [CrossRef]
  94. Kalecinski, N.I.; Skakun, S.; Torbick, N.; Huang, X.; Franch, B.; Roger, J.-C.; Vermote, E. Crop Yield Estimation at Different Growing Stages Using a Synergy of SAR and Optical Remote Sensing Data. Sci. Remote Sens. 2024, 10, 100153. [Google Scholar] [CrossRef]
  95. Sabo, F.; Meroni, M.; Waldner, F.; Rembold, F. Is Deeper Always Better? Evaluating Deep Learning Models for Yield Forecasting with Small Data. Environ. Monit. Assess. 2023, 195, 1153. [Google Scholar] [CrossRef]
  96. Lv, Z.; Meng, R.; Man, J.; Zeng, L.; Wang, M.; Xu, B.; Gao, R.; Sun, R.; Zhao, F. Modeling of Winter Wheat fAPAR by Integrating Unmanned Aircraft Vehicle-Based Optical, Structural and Thermal Measurement. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102407. [Google Scholar] [CrossRef]
  97. Wu, Z.; Dong, Z.; Yang, K.; Liu, Q.; Wang, W. Floodwater Extraction from UAV Orthoimagery Based on a Transformer Model. Remote Sens. 2024, 16, 4052. [Google Scholar] [CrossRef]
  98. Pech-May, F.; Aquino-Santos, R.; Delgadillo-Partida, J. Sentinel-1 SAR Images and Deep Learning for Water Body Mapping. Remote Sens. 2023, 15, 3009. [Google Scholar] [CrossRef]
  99. Xiang, J.; Zang, Z.; Tang, X.; Zhang, M.; Cao, P.; Tang, S.; Wang, X. Rapid Forest Change Detection Using Unmanned Aerial Vehicles and Artificial Intelligence. Forests 2024, 15, 1676. [Google Scholar] [CrossRef]
  100. Ecke, S.; Stehr, F.; Frey, J.; Tiede, D.; Dempewolf, J.; Klemmt, H.-J.; Endres, E.; Seifert, T. Towards Operational UAV-Based Forest Health Monitoring: Species Identification and Crown Condition Assessment by Means of Deep Learning. Comput. Electron. Agric. 2024, 219, 108785. [Google Scholar] [CrossRef]
  101. Perbet, P.; Guindon, L.; Côté, J.-F.; Béland, M. Evaluating Deep Learning Methods Applied to Landsat Time Series Subsequences to Detect and Classify Boreal Forest Disturbances Events: The Challenge of Partial and Progressive Disturbances. Remote Sens. Environ. 2024, 306, 114107. [Google Scholar] [CrossRef]
  102. Lucas, M.; Pukrop, M.; Beckschäfer, P.; Waske, B. Individual tree detection and crown delineation in the Harz National Park from 2009 to 2022 using mask R–CNN and aerial imagery. ISPRS Open J. Photogramm. Remote Sens. 2024, 13, 100071. [Google Scholar] [CrossRef]
  103. Wang, H.; Hu, C.; Zhang, R.; Qian, W. SegForest: A Segmentation Model for Remote Sensing Images. Forests 2023, 14, 1509. [Google Scholar] [CrossRef]
  104. Slagter, B.; Reiche, J.; Marcos, D.; Mullissa, A.; Lossou, E.; Peña-Claros, M.; Herold, M. Monitoring Direct Drivers of Small-Scale Tropical Forest Disturbance in near Real-Time with Sentinel-1 and -2 Data. Remote Sens. Environ. 2023, 295, 113655. [Google Scholar] [CrossRef]
  105. Retallack, A.; Finlayson, G.; Ostendorf, B.; Lewis, M. Using Deep Learning to Detect an Indicator Arid Shrub in Ultra-High-Resolution UAV Imagery. Ecol. Indic. 2022, 145, 109698. [Google Scholar] [CrossRef]
  106. Guo, Y.; Long, T.; Jiao, W.; Zhang, X.; He, G.; Wang, W.; Peng, Y.; Xiao, H. Siamese Detail Difference and Self-Inverse Network for Forest Cover Change Extraction Based on Landsat 8 OLI Satellite Images. Remote Sens. 2022, 14, 627. [Google Scholar] [CrossRef]
  107. Mäyrä, J.; Keski-Saari, S.; Kivinen, S.; Tanhuanpää, T.; Hurskainen, P.; Kullberg, P.; Poikolainen, L.; Viinikka, A.; Tuominen, S.; Kumpula, T.; et al. Tree Species Classification from Airborne Hyperspectral and LiDAR Data Using 3D Convolutional Neural Networks. Remote Sens. Environ. 2021, 256, 112322. [Google Scholar] [CrossRef]
  108. He, S.; Du, H.; Zhou, G.; Li, X.; Mao, F.; Zhu, D.; Xu, Y.; Zhang, M.; Huang, Z.; Liu, H.; et al. Intelligent Mapping of Urban Forests from High-Resolution Remotely Sensed Imagery Using Object-Based U-Net-DenseNet-Coupled Network. Remote Sens. 2020, 12, 3928. [Google Scholar] [CrossRef]
  109. Li, Y.; Min, S.; Song, B.; Yang, H.; Wang, B.; Wu, Y. Multisource High-Resolution Remote Sensing Image Vegetation Extraction with Comprehensive Multifeature Perception. Remote Sens. 2024, 16, 712. [Google Scholar] [CrossRef]
  110. Zheng, J.; Chen, D.; Zhang, H.; Zhang, G.; Zhen, Q.; Liu, S.; Zhang, N.; Zhao, H. Remote Sensing Identification of Picea schrenkiana Var. tianschanica in GF-1 Images Based on a Multiple Mixed Attention U-Net Model. Forests 2024, 15, 2039. [Google Scholar] [CrossRef]
  111. Nguyen, T.-A.; Rußwurm, M.; Lenczner, G.; Tuia, D. Multi-Temporal Forest Monitoring in the Swiss Alps with Knowledge-Guided Deep Learning. Remote Sens. Environ. 2024, 305, 114109. [Google Scholar] [CrossRef]
  112. Roßberg, T.; Schmitt, M. A Globally Applicable Method for NDVI Estimation from Sentinel-1 SAR Backscatter Using a Deep Neural Network and the SEN12TP Dataset. PFG J. Photogramm. Remote Sens. Geoinf. Sci. 2023, 91, 171–188. [Google Scholar] [CrossRef]
  113. Nguyen-Trong, K.; Tran-Xuan, H. Coastal Forest Cover Change Detection Using Satellite Images and Convolutional Neural Networks in Vietnam. IAES Int. J. Artif. Intell. (IJ-AI) 2022, 11, 930–938. [Google Scholar] [CrossRef]
  114. Torres, D.L.; Turnes, J.N.; Soto Vega, P.J.; Feitosa, R.Q.; Silva, D.E.; Marcato Junior, J.; Almeida, C. Deforestation Detection with Fully Convolutional Networks in the Amazon Forest from Landsat-8 and Sentinel-2 Images. Remote Sens. 2021, 13, 5084. [Google Scholar] [CrossRef]
  115. de Andrade, R.B.; Mota, G.L.A.; da Costa, G.A.O.P. Deforestation Detection in the Amazon Using DeepLabv3+ Semantic Segmentation Model Variants. Remote Sens. 2022, 14, 4694. [Google Scholar] [CrossRef]
  116. Rodrigues, J.; Dias, M.A.; Negri, R.; Hussain, S.M.; Casaca, W. A Robust Dual-Mode Machine Learning Framework for Classifying Deforestation Patterns in Amazon Native Lands. Land 2024, 13, 1427. [Google Scholar] [CrossRef]
  117. Wang, S.; Cui, D.; Wang, L.; Peng, J. Applying Deep-Learning Enhanced Fusion Methods for Improved NDVI Reconstruction and Long-Term Vegetation Cover Study: A Case of the Danjiang River Basin. Ecol. Indic. 2023, 155, 111088. [Google Scholar] [CrossRef]
  118. Sklyar, E.; Rees, G. Assessing Changes in Boreal Vegetation of Kola Peninsula via Large-Scale Land Cover Classification between 1985 and 2021. Remote Sens. 2022, 14, 5616. [Google Scholar] [CrossRef]
  119. Marsetič, A.; Kanjir, U. Forecasting Vegetation Behavior Based on PlanetScope Time Series Data Using RNN-Based Models. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 5015–5025. [Google Scholar] [CrossRef]
  120. Roßberg, T.; Schmitt, M. Dense NDVI Time Series by Fusion of Optical and SAR-Derived Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 7748–7758. [Google Scholar] [CrossRef]
  121. Wahelo, T.T.; Mengistu, D.A.; Merawi, T.M. Spatiotemporal Trends and Drivers of Forest Cover Change in Metekel Zone Forest Areas, Northwest Ethiopia. Environ. Monit. Assess. 2024, 196, 1170. [Google Scholar] [CrossRef] [PubMed]
  122. Isoaho, A.; Elo, M.; Marttila, H.; Rana, P.; Lensu, A.; Räsänen, A. Monitoring Changes in Boreal Peatland Vegetation after Restoration with Optical Satellite Imagery. Sci. Total Environ. 2024, 957, 177697. [Google Scholar] [CrossRef]
  123. Prodromou, M.; Theocharidis, C.; Gitas, I.Z.; Eliades, F.; Themistocleous, K.; Papasavvas, K.; Dimitrakopoulos, C.; Danezis, C.; Hadjimitsis, D. Forest Habitat Mapping in Natura2000 Regions in Cyprus Using Sentinel-1, Sentinel-2 and Topographical Features. Remote Sens. 2024, 16, 1373. [Google Scholar] [CrossRef]
  124. Mansouri, J.; Jafari, M.; Taheri Dehkordi, A. Continuous Mapping of Forest Canopy Height Using ICESat-2 Data and a Weighted Kernel Integration of Multi-Temporal Multi-Source Remote Sensing Data Aided by Google Earth Engine. Environ. Sci. Pollut. Res. 2024, 31, 49757–49779. [Google Scholar] [CrossRef]
  125. Engda, E.M.; Bojer, A.K.; Mekonnen, Z.A.; Tegegn, W.W. Assessment of Forest Cover Dynamics in Dedo District, Oromia Region, Ethiopia, Using Machine Learning Algorithm. J. Sens. 2024, 2024, 4020990. [Google Scholar] [CrossRef]
  126. Ziegler, A.; Heisig, J.; Ludwig, M.; Reudenbach, C.; Meyer, H.; Nauss, T. Using GEDI as Training Data for an Ongoing Mapping of Landscape-Scale Dynamics of the Plant Area Index. Environ. Res. Lett. 2023, 18, 075003. [Google Scholar] [CrossRef]
  127. Mouret, F.; Morin, D.; Martin, H.; Planells, M.; Vincent-Barbaroux, C. Toward an Operational Monitoring of Oak Dieback with Multispectral Satellite Time Series: A Case Study in Centre-Val De Loire Region of France. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 643–659. [Google Scholar] [CrossRef]
  128. Jiang, H.; Chen, A.; Wu, Y.; Zhang, C.; Chi, Z.; Li, M.; Wang, X. Vegetation Monitoring for Mountainous Regions Using a New Integrated Topographic Correction (ITC) of the SCS + C Correction and the Shadow-Eliminated Vegetation Index. Remote Sens. 2022, 14, 3073. [Google Scholar] [CrossRef]
  129. Liu, X.; Fatoyinbo, T.E.; Thomas, N.M.; Guan, W.W.; Zhan, Y.; Mondal, P.; Lagomasino, D.; Simard, M.; Trettin, C.C.; Deo, R.; et al. Large-Scale High-Resolution Coastal Mangrove Forests Mapping Across West Africa with Machine Learning Ensemble and Satellite Big Data. Front. Earth Sci. 2021, 8, 560933. [Google Scholar] [CrossRef]
  130. Guisao-Betancur, A.; Gómez Déniz, L.; Marulanda-Tobón, A. Forest/Nonforest Segmentation Using Sentinel-1 and -2 Data Fusion in the Bajo Cauca Subregion in Colombia. Remote Sens. 2024, 16, 5. [Google Scholar] [CrossRef]
  131. Zheng, X.; Jia, J.; Chen, J.; Guo, S.; Sun, L.; Zhou, C.; Wang, Y. Hyperspectral Image Classification with Imbalanced Data Based on Semi-Supervised Learning. Appl. Sci. 2022, 12, 3943. [Google Scholar] [CrossRef]
  132. Rose, M.B.; Mills, M.; Franklin, J.; Larios, L. Mapping Fractional Vegetation Cover Using Unoccupied Aerial Vehicle Imagery to Guide Conservation of a Rare Riparian Shrub Ecosystem in Southern California. Remote Sens. 2023, 15, 5113. [Google Scholar] [CrossRef]
  133. John, E.; Bunting, P.; Hardy, A.; Silayo, D.S.; Masunga, E. A Forest Monitoring System for Tanzania. Remote Sens. 2021, 13, 3081. [Google Scholar] [CrossRef]
  134. Holloway-Brown, J.; Helmstedt, K.J.; Mengersen, K.L. Stochastic Spatial Random Forest (SS-RF) for Interpolating Probabilities of Missing Land Cover Data. J. Big Data 2020, 7, 55. [Google Scholar] [CrossRef]
  135. Liu, Y.; Zhao, Q.; Wang, X.; Sheng, Y.; Tian, W.; Ren, Y. A Tree Species Classification Model Based on Improved YOLOv7 for Shelterbelts. Front. Plant Sci. 2024, 14, 1265025. [Google Scholar] [CrossRef]
  136. Wang, Z.; Mo, Z.; Liang, Y.; Yang, Z.; Liao, X.; Wang, Z.; Zhang, Q. A Web-Based Prototype System for Deforestation Detection on High-Resolution Remote Sensing Imagery with Deep Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 18593–18612. [Google Scholar] [CrossRef]
  137. Alshehri, M.; Ouadou, A.; Scott, G.J. Deep Transformer-Based Network Deforestation Detection in the Brazilian Amazon Using Sentinel-2 Imagery. IEEE Geosci. Remote Sens. Lett. 2024, 21, 3355104. [Google Scholar] [CrossRef]
  138. Turkulainen, E.; Honkavaara, E.; Näsi, R.; Oliveira, R.A.; Hakala, T.; Junttila, S.; Karila, K.; Koivumäki, N.; Pelto-Arvo, M.; Tuviala, J.; et al. Comparison of Deep Neural Networks in the Classification of Bark Beetle-Induced Spruce Damage Using UAS Images. Remote Sens. 2023, 15, 4928. [Google Scholar] [CrossRef]
  139. Kislov, D.E.; Korznikov, K.A.; Altman, J.; Vozmishcheva, A.S.; Krestov, P.V. Extending Deep Learning Approaches for Forest Disturbance Segmentation on Very High-Resolution Satellite Images. Remote Sens. Ecol. Conserv. 2021, 7, 355–368. [Google Scholar] [CrossRef]
  140. Junttila, S.; Näsi, R.; Koivumäki, N.; Imangholiloo, M.; Saarinen, N.; Raisio, J.; Holopainen, M.; Hyyppä, H.; Hyyppä, J.; Lyytikäinen-Saarenmaa, P.; et al. Multispectral Imagery Provides Benefits for Mapping Spruce Tree Decline Due to Bark Beetle Infestation When Acquired Late in the Season. Remote Sens. 2022, 14, 909. [Google Scholar] [CrossRef]
  141. Iordache, M.-D.; Mantas, V.; Baltazar, E.; Pauly, K.; Lewyckyj, N. A Machine Learning Approach to Detecting Pine Wilt Disease Using Airborne Spectral Imagery. Remote Sens. 2020, 12, 2280. [Google Scholar] [CrossRef]
  142. et-brasil/geeSEBAL (GitHub Repository). Available online: https://github.com/et-brasil/geeSEBAL (accessed on 9 July 2025).
  143. Liu, Q.; Wu, Z.; Cui, N.; Zheng, S.; Zhu, S.; Jiang, S.; Wang, Z.; Gong, D.; Wang, Y.; Zhao, L. Soil Moisture Content Estimation of Drip-Irrigated Citrus Orchard Based on UAV Images and Machine Learning Algorithm in Southwest China. Agric. Water Manag. 2024, 303, 109069. [Google Scholar] [CrossRef]
  144. dos Santos, E.P.; Moreira, M.C.; Fernandes-Filho, E.I.; Demattê, J.A.M.; Dionizio, E.A.; Silva, D.D.d.; Cruz, R.R.P.; Moura-Bueno, J.M.; dos Santos, U.J.; Costa, M.H. Sentinel-1 Imagery Used for Estimation of Soil Organic Carbon by Dual-Polarization SAR Vegetation Indices. Remote Sens. 2023, 15, 5464. [Google Scholar] [CrossRef]
  145. Tayebi, M.; Rosas, J.T.F.; Mendes, W.d.S.; Poppiel, R.R.; Ostovari, Y.; Ruiz, L.F.C.; dos Santos, N.V.; Cerri, C.E.P.; Silva, S.H.G.; Curi, N.; et al. Drivers of Organic Carbon Stocks in Different LULC History and along Soil Depth for a 30 Years Image Time Series. Remote Sens. 2021, 13, 2223. [Google Scholar] [CrossRef]
  146. Jia, X.; Cao, Y.; O’Connor, D.; Zhu, J.; Tsang, D.C.W.; Zou, B.; Hou, D. Mapping Soil Pollution by Using Drone Image Recognition and Machine Learning at an Arsenic-Contaminated Agricultural Field. Environ. Pollut. 2021, 270, 116281. [Google Scholar] [CrossRef]
  147. Boonupara, T.; Udomkun, P.; Gibson-Poole, S.; Hamilton, A.; Kaewlom, P. Assessing Plant Pigmentation Impacts: A Novel Approach Integrating UAV and Multispectral Data to Analyze Atrazine Metabolite Effects from Soil Contamination. Smart Agric. Technol. 2024, 9, 100570. [Google Scholar] [CrossRef]
  148. S1 Mission. Available online: https://sentiwiki.copernicus.eu/web/s1-mission (accessed on 9 July 2025).
  149. S2 Mission. Available online: https://sentiwiki.copernicus.eu/web/s2-mission (accessed on 9 July 2025).
  150. Landsat-8—Earth Online. Available online: https://earth.esa.int/eogateway/missions/landsat-8 (accessed on 9 July 2025).
  151. Landsat 4|Landsat Science. Available online: https://landsat.gsfc.nasa.gov/satellites/landsat-4/ (accessed on 9 July 2025).
  152. GF-1 (Gaofen-1)—eoPortal. Available online: https://www.eoportal.org/satellite-missions/gaofen-1 (accessed on 9 July 2025).
  153. GF-2 (Gaofen-2)—eoPortal. Available online: https://www.eoportal.org/satellite-missions/gaofen-2 (accessed on 9 July 2025).
  154. NASA—NSSDCA—Spacecraft—Details. Available online: https://www.nasa.gov/ (accessed on 9 July 2025).
  155. PlanetScope—Earth Online. Available online: https://earth.esa.int/eogateway/missions/planetscope (accessed on 9 July 2025).
  156. WorldView-2—Earth Online. Available online: https://earth.esa.int/eogateway/missions/worldview-2 (accessed on 9 July 2025).
  157. WorldView-3—Earth Online. Available online: https://earth.esa.int/eogateway/missions/worldview-3 (accessed on 9 July 2025).
  158. WorldView-4—Earth Online. Available online: https://earth.esa.int/eogateway/missions/worldview-4 (accessed on 9 July 2025).
  159. SPOT—Earth Online. Available online: https://earth.esa.int/eogateway/missions/spot (accessed on 9 July 2025).
  160. What Is UAVSAR?—UAVSAR. Available online: https://uavsar.jpl.nasa.gov/education/what-is-uavsar.html (accessed on 9 July 2025).
  161. Earth Science Data Systems, N. GEDI Lidar|NASA Earthdata. Available online: https://www.earthdata.nasa.gov/data/instruments/gedi-lidar (accessed on 9 July 2025).
  162. DigitalGlobe—30 Cm. Available online: https://microsites.maxar.com/30cm/ (accessed on 9 July 2025).
  163. Zenmuse P1—UAV Load Gimbal Camera—DJI Enterprise. Available online: https://enterprise.dji.com/cn/zenmuse-p1 (accessed on 9 July 2025).
  164. Yang, W.; Liu, T.; Jiang, P.; Qi, A.; Deng, L.; Liu, Z.; He, Y. A Forest Wildlife Detection Algorithm Based on Improved YOLOv5s. Animals 2023, 13, 3134. [Google Scholar] [CrossRef] [PubMed]
  165. Ma, Z.; Dong, Y.; Xia, Y.; Xu, D.; Xu, F.; Chen, F. Wildlife Real-Time Detection in Complex Forest Scenes Based on YOLOv5s Deep Learning Network. Remote Sens. 2024, 16, 1350. [Google Scholar] [CrossRef]
  166. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef] [PubMed]
Figure 1. World SDGs (Sustainable Development Goals) dashboard 2025 [15].
Figure 2. SDGs (Sustainable Development Goals) interlinkages [16]. (a) The connection between SDG 6—Clean Water and Sanitation and the other SDGs; (b) The connection between SDG 9—Industry, Innovation and Infrastructure and the other SDGs; (c) The connection between SDG 12—Responsible Consumption and Production and the other SDGs; (d) The connection between SDG 13—Climate Action and the other SDGs.
Figure 3. PRISMA flow diagram.
Figure 4. Number of papers published by year.
Figure 5. Geographic distribution of study areas.
Figure 6. Map of the geographic distribution of study areas.
Figure 8. Algorithms used in different studies by frequency.
Figure 9. Software products used in different studies by frequency.
Figure 10. Artificial Intelligence models used for land use mapping and crop detection.
Figure 11. Software products used for land use mapping and crop detection.
Figure 12. Artificial Intelligence models used for crop yield monitoring.
Figure 13. Software products used for crop yield monitoring.
Figure 14. Artificial Intelligence models used for forest and vegetation monitoring.
Figure 15. Software products used for forest and vegetation monitoring.
Figure 16. Artificial Intelligence models used for soil content analysis and soil production capacity.
Figure 17. Software products used for soil content analysis and soil production capacity.
Figure 18. Papers by data type used in the studies.
Figure 19. Papers by data source used in the studies.
Figure 20. Data type used in land use mapping and crop detection.
Figure 21. Data sources used in land use mapping and crop detection.
Figure 22. Data type used in crop yield monitoring.
Figure 23. Data sources used in crop yield monitoring.
Figure 24. Data type used in forest and vegetation monitoring.
Figure 25. Data sources used in forest and vegetation monitoring.
Figure 26. Data type used in soil content analysis and soil production capacity detection.
Figure 27. Data sources used in soil content analysis and soil production capacity detection.
Figure 28. Answer to the first research question (“What are the main problems that can be solved through incorporating AI-driven classification techniques in the field of smart agriculture and environmental monitoring?”).
Figure 29. Answer to the second research question—“What are the main methods and strategies used in this technology?” (CNNs—Convolutional Neural Networks; DNNs—Deep Neural Networks; RNNs—Recurrent Neural Networks; FCNNs—Fully Connected Neural Networks; SVM—Support Vector Machine; GEE—Google Earth Engine; OTB—Orfeo Toolbox).
Figure 30. Answer to the third research question—“What type of data can be used in this regard?” (GEDI—Global Ecosystem Dynamics Investigation; UAVSAR—Uninhabited Aerial Vehicle Synthetic Aperture Radar; SPOT—Satellite pour l’Observation de la Terre; LiDAR—Light Detection and Ranging).
Table 1. Information sources.
Data Source | Type | URL
Scopus | Digital Library | https://www.scopus.com (accessed on 18 March 2025)
Web of Science | Digital Library | https://clarivate.com/academia-government/scientific-and-academic-research/research-discovery-and-referencing/web-of-science/ (accessed on 18 March 2025)
Table 2. Groups of keywords.
Group | Keywords
Group 1 | “Artificial Intelligence”, “Deep Learning”, “Machine Learning”
Group 2 | “Remote Sensing”, “Image Classification”, “Point Cloud Classification”, “Agriculture”
Group 3 | “Environmental Monitoring”, “Land Use Mapping”, “Flood-Prone Area Detection”, “Forest Monitoring”, “Pest Disease Monitoring”, “Droughts Prediction”, “Soil Content Analysis”, “Soil Production Capacity Detection”, “Vegetation Monitoring”, “Crop Detection”, “Crop Yield Monitoring”
Table 3. Search queries.
Digital Library | Search Algorithm
Scopus | (TITLE-ABS-KEY(“Artificial Intelligence” OR “Deep Learning” OR “Machine Learning”)) AND (TITLE-ABS-KEY(“Remote Sensing” OR “Image Classification” OR “Point Cloud Classification” OR “Agriculture”)) AND (TITLE-ABS-KEY(“Environmental Monitoring” OR “Land Use Mapping” OR “Flood-Prone Area Detection” OR “Forest Monitoring” OR “Pest Disease Monitoring” OR “Droughts Prediction” OR “Soil Content Analysis” OR “Soil Production Capacity Detection” OR “Vegetation Monitoring” OR “Crop Detection” OR “Crop Yield Monitoring”))
WoS | TS = (“Artificial Intelligence” OR “Deep Learning” OR “Machine Learning”) AND TS = (“Remote Sensing” OR “Image Classification” OR “Point Cloud Classification” OR “Agriculture”) AND TS = (“Environmental Monitoring” OR “Land Use Mapping” OR “Flood-Prone Area Detection” OR “Forest Monitoring” OR “Pest Disease Monitoring” OR “Droughts Prediction” OR “Soil Content Analysis” OR “Soil Production Capacity Detection” OR “Vegetation Monitoring” OR “Crop Detection” OR “Crop Yield Monitoring”)
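The search strings in Table 3 follow a regular pattern: each keyword group from Table 2 becomes one OR-clause, and the three clauses are combined with AND in the syntax of each database. The following sketch (not part of the reviewed methodology; function and variable names are illustrative only) shows how such query strings can be assembled programmatically, which helps keep the Scopus and WoS queries consistent when keyword lists change:

```python
# Illustrative sketch: build the Table 3 search strings from the
# Table 2 keyword groups. Names (GROUPS, or_clause, ...) are ours.

GROUPS = [
    ["Artificial Intelligence", "Deep Learning", "Machine Learning"],
    ["Remote Sensing", "Image Classification",
     "Point Cloud Classification", "Agriculture"],
    ["Environmental Monitoring", "Land Use Mapping",
     "Flood-Prone Area Detection", "Forest Monitoring",
     "Pest Disease Monitoring", "Droughts Prediction",
     "Soil Content Analysis", "Soil Production Capacity Detection",
     "Vegetation Monitoring", "Crop Detection", "Crop Yield Monitoring"],
]

def or_clause(keywords):
    """Join quoted keywords with OR: '"A" OR "B" OR ...'."""
    return " OR ".join(f'"{k}"' for k in keywords)

def scopus_query(groups):
    """Wrap each OR-clause in TITLE-ABS-KEY() and AND the groups (Scopus syntax)."""
    return " AND ".join(f'(TITLE-ABS-KEY({or_clause(g)}))' for g in groups)

def wos_query(groups):
    """Wrap each OR-clause in a TS = () topic search and AND the groups (WoS syntax)."""
    return " AND ".join(f'TS = ({or_clause(g)})' for g in groups)

print(scopus_query(GROUPS))
print(wos_query(GROUPS))
```

Running the sketch reproduces the two query strings of Table 3 (with straight quotes), so both databases are guaranteed to receive the same keyword logic.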
Table 4. Related work.
Main Topic | Focuses on Problems Related to Agriculture / Vegetation and Forests / Types of AI Algorithms for Data Extraction / Available Data Types / Data Sources | Reference
Applications of IoT in smart farming | xxxx | https://www.sciencedirect.com/science/article/pii/S1364032123007165 (accessed on 9 July 2025) [1]
Applications of IoT in agricultural sector | xxxx | https://www.sciencedirect.com/science/article/pii/S0301479721015504 (accessed on 9 July 2025) [3]
The use of remote sensing in agriculture | xx | https://www.sciencedirect.com/science/article/pii/S0167739X24006551 (accessed on 9 July 2025) [20]
The use of remote sensing in agriculture | xxx | https://www.sciencedirect.com/science/article/pii/S0034425719304213 (accessed on 9 July 2025) [10]
Water resources use and management in agriculture | xxxx | https://www.sciencedirect.com/science/article/pii/S0378377425000617 (accessed on 9 July 2025) [19]
The use of remote sensing and machine learning in agriculture | xxx | https://www.mdpi.com/2073-4395/14/9/1975 (accessed on 9 July 2025) [13]
Table 5. Most used algorithms, software products, data type and data source for each category.
Category | Most Used Algorithm Type | Most Used Software Product | Most Used Data Type | Most Used Data Source
Land use mapping and crop detection | DT-based models | Python programming | Satellite imagery | Sentinel data
Crop yield monitoring | DT-based models | Python programming | Satellite imagery | Sentinel data
Flood-prone area detection | CNNs and other DNNs | ArcGIS | Radar data | Sentinel data
Forest and vegetation monitoring | CNNs and other DNNs | Python programming | Satellite imagery | Sentinel data
Pest disease monitoring | CNNs and other DNNs | Agisoft | Satellite imagery | Sentinel data
Droughts prediction | - | - | - | -
Soil content analysis and soil production capacity detection | DT-based models | Python programming | Satellite imagery | Sentinel data