Review

Integration of Hyperspectral Imaging and AI Techniques for Crop Type Mapping: Present Status, Trends, and Challenges

1 Center for Remote Sensing Applications (CRSA), Mohammed VI Polytechnic University (UM6P), Ben Guerir 43150, Morocco
2 Analytics Lab (A-Lab), Mohammed VI Polytechnic University (UM6P), Rabat 11103, Morocco
3 Department of Cartography and Photogrammetry, School of Geomatics and Surveying Engineering, IAV Hassan II, Rabat 10101, Morocco
4 Department of Geography, Environment & Geomatics, University of Guelph, Guelph, ON N1G 2W1, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(9), 1574; https://doi.org/10.3390/rs17091574
Submission received: 15 February 2025 / Revised: 2 April 2025 / Accepted: 3 April 2025 / Published: 29 April 2025

Abstract:
Accurate and efficient crop maps are essential for decision-makers to improve agricultural monitoring and management, thereby ensuring food security. The integration of advanced artificial intelligence (AI) models with hyperspectral remote sensing data, which provide richer spectral information than multispectral imaging, has proven highly effective in the precise discrimination of crop types. This systematic review examines the evolution of hyperspectral platforms, from Unmanned Aerial Vehicle (UAV)-mounted sensors to space-borne satellites (e.g., EnMAP, PRISMA), and explores recent scientific advances in AI methodologies for crop mapping. A review protocol was applied to identify 47 studies from databases of peer-reviewed scientific publications, focusing on hyperspectral sensors, input features, and classification architectures. The analysis highlights the significant contributions of Deep Learning (DL) models, particularly Vision Transformers (ViTs) and hybrid architectures, in improving classification accuracy. However, the review also identifies critical gaps, including the under-utilization of hyperspectral space-borne imaging, the limited integration of multi-sensor data, and the need for advanced modeling approaches such as Graph Neural Networks (GNNs)-based methods and geospatial foundation models (GFMs) for large-scale crop type mapping. Furthermore, the findings highlight the importance of developing scalable, interpretable, and transparent models to maximize the potential of hyperspectral imaging (HSI), particularly in underrepresented regions such as Africa, where research remains limited. This review provides valuable insights to guide future researchers in adopting HSI and advanced AI models for reliable large-scale crop mapping, contributing to sustainable agriculture and global food security.

1. Introduction

As part of the 2030 Agenda for Sustainable Development, the United Nations (UN) established 17 sustainable development goals (SDGs), among which “Zero Hunger” stands as the most important [1]. This goal aims to tackle the significant global challenges of population growth and climate change. The world’s population is expected to reach 8.6 billion by 2030, while climate change is already causing more severe weather events, such as persistent droughts and floods [2].
These challenges are even more pronounced in Africa, which is expected to have a population of 1.71 billion in 2030 [3]. Consequently, increasing productivity within existing agricultural lands seems to be a practical approach to address current and future food security issues [4]. However, this must be achieved without compromising soil and water resources, a fundamental concept of sustainable agriculture that aims to balance increased productivity with environmental protection for future generations [5].
Achieving this requires decision-makers in both the public and private sectors to have access to reliable and up-to-date agricultural data. Such data would help to monitor food demand better, reduce costs, and refine agricultural policies. To this end, there is a need for accurate crop maps, which serve as an important foundation for various agricultural applications, such as crop management, yield prediction, and yield gap analysis (the yield gap being the difference between actual and potential yield) [6].
According to the Joint Experiment for Crop Assessment and Monitoring (JECAM) guidelines [7], crop type classification involves identifying different crop varieties across species, such as the distinction between winter (e.g., wheat, barley, pea), summer (e.g., maize, rice, tomato), and perennial (e.g., apple, olive, almond) cultivations. This identification requires either a multi-class classification approach, where a model directly differentiates multiple crop types, or a one-versus-all binary classification, where separate classifiers distinguish each crop type from all others [8].
Given this need, Earth Observation (EO) data from diverse platforms at multiple scales, combined with advanced modeling approaches, have proven highly effective [9,10]. Indeed, multispectral images (i.e., Sentinel-2, Landsat) have been frequently used to classify crops by analyzing their spectral properties and spatial patterns [11]. This classification usually uses spectral signatures of crops that vary due to factors like plant health, pigments, leaf water content, and ground cover, giving each species a unique spectral response [12]. However, the limited number of spectral bands can lead to misclassification, particularly when distinguishing between crop types with similar spectral characteristics. This issue is especially pronounced in small-grain cereal crops, such as soft wheat, durum wheat, oat, and barley, which exhibit highly similar spectral responses, making their differentiation challenging [13]. To overcome this challenge, hyperspectral remote sensing imaging has emerged as a promising alternative, capturing detailed spectral signatures across hundreds of continuous narrow bands, allowing advancement in the understanding and classification of crop types [14]. This has been confirmed by previous research that demonstrated the superior performance of hyperspectral over multispectral images in monitoring vegetation properties, such as estimating the leaf area index [15] and discriminating crop types [16,17].
Recently, there has been an increasing interest in hyperspectral imaging (HSI), mainly due to the development of various hyperspectral sensors available for commercial use, such as HySpex and SPECIM. These sensors can be mounted on UAVs to capture hyperspectral data at high spectral and spatial resolutions for various monitoring applications [18]. Moreover, HSI over large areas is becoming more accessible with the new era of hyperspectral satellites such as DESIS [19], PRISMA [20] and EnMAP [21], as well as upcoming launches like CHIME, scheduled for 2028 [22]. At the same time, there is a growing trend towards commercial hyperspectral space missions, including Pixxel [23] and Planet’s Tanager, which can complement scientific missions by offering innovative solutions to a wide range of global end-users, such as governments and the private sector. Furthermore, with the advancement in big data, AI, large-scale GPUs, and cloud computing, the Geospatial Artificial Intelligence (GeoAI) field [24] has seen significant progress within the EO community, particularly in large-scale classification tasks. AI, particularly deep learning (DL), has proven more efficient than traditional methods for handling hyperspectral datasets, surpassing conventional machine learning (ML) in crop mapping tasks [25]. For instance, DL can automatically extract spatial, temporal, and spectral features, reducing the need for manual feature engineering [26]. Therefore, integrating AI and HSI unlocks new opportunities to revolutionize various agricultural applications such as monitoring crop growth, predicting yields, and analyzing yield gaps. This convergence is set to play a crucial role in enhancing decision-making and promoting a more sustainable and productive agricultural sector.
In this context, this study investigates the use of AI models for mapping crop types using HSI. It covers the data sources, the used model architectures, and the challenges encountered. The review is organized into different sections: Section 2 describes the selection process of studies included in the review; Section 3 highlights the contributions of relevant review studies; Section 4 and Section 5 provide an overview of HSI platforms and AI models, respectively; and Section 6 and Section 7 analyze and summarize the findings of the research.

2. Literature Identification

In our research on crop mapping using HSI, we adopted a comprehensive methodology designed to select pertinent academic studies in this field and enable an in-depth review and synthesis of those that align with our specific research focus. Our approach follows the guidelines outlined by Snyder et al. [27] and the methodological advice from [28]. This ensures a rigorously structured scientific process, including well-defined phases from formulating research questions to evaluating the most critical findings, as illustrated in Figure 1.

2.1. Formulating Research Questions

We began by posing relevant questions to clarify the research concept and to describe and analyze crop mapping studies involving HSI. The following research questions were identified:
  • Q.1: What are the most common hyperspectral remote sensing platforms and sensors used for crop mapping in the literature?
  • Q.2: What HSI features are employed in the literature to identify crops accurately?
  • Q.3: Which ML and DL models have been tested in the literature for crop type mapping using HSI?
  • Q.4: What challenges do researchers face when using HSI for classification?
This step’s main objective is to facilitate a better understanding of previous publications in this field of research.

2.2. Search Strategy

In our research methodology, we conducted an extensive online bibliographic search using three scientific databases: Scopus, Web of Science, and Google Scholar. This choice was motivated by their broad coverage of peer-reviewed journals and conference proceedings and their advanced literature analysis tools.
Each of these databases was queried using specific keywords covering all possible combinations of terms related to hyperspectral remote sensing, crop classification, and advanced AI algorithms. Multiple synonyms were selected and incorporated into the query for each primary search term, considering that authors might use different terms, such as ‘Crop Mapping’ and ‘Crop Classification.’ This approach was applied consistently across all three search terms. The search query consisted of three parts, each representing one search term, as illustrated in Table 1.
For each query part, we used the Boolean operator ‘OR’ to ensure that at least one of the synonyms appears in the title, abstract, or keywords of a paper. Additionally, we combined the three query parts into a single search query using the ‘AND’ logical operator to capture relevant studies that intersected all these fields, ensuring that each query part was represented in the results. By employing these targeted search strategies, we included the most relevant studies, providing a robust foundation for our literature review.
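For illustration, the sketch below shows how such a three-part Boolean query might be assembled programmatically. The synonym lists are representative examples only (the authoritative lists are given in Table 1), and the TITLE-ABS-KEY syntax is shown in the Scopus style.

```python
# Illustrative sketch of the three-part Boolean query described above.
# The synonym lists here are representative; the full lists used in
# the review are given in Table 1.
hyperspectral_terms = ["hyperspectral", "imaging spectroscopy"]
crop_terms = ["crop mapping", "crop classification", "crop type"]
ai_terms = ["deep learning", "machine learning", "neural network"]

def or_block(terms):
    # Join synonyms with OR inside one parenthesized block.
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Combine the three blocks with AND, targeting title/abstract/keywords
# (Scopus-style TITLE-ABS-KEY syntax shown as an example).
query = "TITLE-ABS-KEY " + " AND ".join(
    or_block(t) for t in (hyperspectral_terms, crop_terms, ai_terms)
)
print(query)
```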

2.3. Selection Criteria

Inclusion and exclusion criteria are essential for ensuring the relevance of selected studies to our research topic. The primary focus was on studies written in English and published in peer-reviewed journals and conference proceedings between 2019 and 2024. This timeframe was selected because 2019 marked the launch of the new era of hyperspectral satellites, such as DESIS and PRISMA, which offered open-access hyperspectral data. Moreover, this period saw significant advancements in AI algorithms for hyperspectral data processing, improving capabilities for tasks like classification and benefiting from developments in cloud and high-performance computing. Studies that did not align with our specific research focus were excluded: only studies that directly addressed crop mapping using HSI with advanced AI algorithms were included, and studies duplicating the same methodological approach were removed. Due to their nature, survey articles were also excluded from the analysis; they were not entirely disregarded during the reading process, however, and their insights enriched the discussion points within our review.
The complete literature screening process is illustrated in Figure 2.
Figure 3 presents the most frequently occurring terms in the titles and keywords of the reviewed studies, with font size corresponding to frequency. The tag cloud provides an overview of the dominant themes covered within the literature. It highlights that hyperspectral remote sensing imaging is the most frequently cited input data, while Convolutional Neural Networks (CNNs) are the most widely used DL method for crop classification.

3. Overview of Existing Surveys with Current Review Study

Agricultural research focusing on crop mapping through EO and AI algorithms has grown significantly, necessitating a comprehensive survey to track advancements and innovations in this domain. Several reviews have been published on crop mapping using remote sensing techniques. For instance, Alami Machichi et al. [29] provided an extensive overview of the history and evolution of crop mapping, discussing the latest scientific advances in the field, which involve ML and DL models. Their research highlighted various classification approaches and methods in crop mapping, emphasizing the contributions of cloud computing platforms and the importance of spectral, temporal, and high-spatial-resolution features in crop identification. Notably, they recommended using new spatio-temporal fusion methods of multi-sensor imagery to enhance the identification of spectral characteristics of crop species.
Similarly, Orynbaikyzy et al. [30] presented a systematic review of the combination and fusion of optical and radar remote sensing data, focusing on fusion approaches and classification strategies. Their findings indicated a significant trend towards feature-level fusion using pixel-based classification approaches, demonstrating the enhanced accuracy in crop classification through optical and radar data fusion. However, they noted limitations in the spatial and temporal transferability of the models.
This review was complemented by a recent work conducted by Choukri et al. [4], which presented an extensive analysis of the challenges in agricultural monitoring and mapping in Africa, as well as the current research progress in crop mapping based on optical and radar satellites. Their study underscored the importance of these satellites in penetrating cloud cover, particularly in African tropical regions. The results highlighted the potential of advanced modeling and high-performance computing while emphasizing the need for accurate ground truth data to improve the accuracy of crop type classification.
While these reviews address crop mapping using remote sensing techniques, they lack a detailed examination of the use and added value of HSI in crop identification. Recent surveys in the field of HSI, such as the work conducted by Lu et al. [12] and Karfi et al. [31], highlighted the performance of HSI over multispectral imaging in monitoring vegetation properties and discriminating crop types. The authors provided an overview of HSI platforms and sensors, as well as methods for processing and analyzing hyperspectral data in agricultural applications. Their findings indicated that HSI, combined with DL and big data, has a high potential for monitoring a wide range of agricultural features, and further research in this area is recommended.
In this context, Ram et al. [32] presented a comprehensive overview of recent trends in HSI for precision agriculture, focusing on real-time applications involving field, lab, and UAV hyperspectral platforms. They observed a growing trend towards utilizing high-spatial-resolution hyperspectral sensors in agricultural applications, alongside an increased use of DL. These algorithms have proven to be powerful compared to conventional ML techniques due to their ability to identify and extract discriminative features (spatial and spectral) from 3D hyperspectral data [33]. Moreover, Guerri et al. [25] confirmed the trend toward using DL techniques for analyzing HSI in agricultural applications. Their study highlighted the employment of various DL methodologies, including auto-encoders, CNNs in 1D, 2D, and 3D configurations, recurrent neural networks (RNNs), deep belief networks (DBNs), and generative adversarial networks (GANs), which open numerous new research opportunities in real-world applications.
Although these surveys have extensively reviewed the literature on the application of HSI in agriculture using AI, their focus was not solely on crop type mapping. They also included other applications such as soil analysis, crop yield estimation, plant disease monitoring, and nutrient estimation. Additionally, these reviews did not consider the recent generation of hyperspectral space-borne sensors, nor did they address new trends in AI algorithms such as Vision Transformers (ViTs), geospatial foundation models (GFMs), and Graph Neural Networks (GNNs).
Our review is distinguished by its focus on using HSI for crop mapping. Particularly, we will explore the various methodologies employed in this area of research, assess the effectiveness of advanced AI models employed for classification, and evaluate their performance. This will help us identify current challenges and provide guidance for future research in this domain.

4. Overview of Hyperspectral Imaging Sensors and Platforms Used for Crop Type Mapping

Imaging spectroscopy, also known as HSI, merges traditional spectroscopy with imaging techniques, allowing for the simultaneous acquisition of spatial and spectral data from an image [34].
Spectroscopy focuses on measuring specific spectra, typically within the range of 0.4 µm to 2.5 µm, to identify the composition and characteristics of materials, as shown in Figure 4, by studying the interaction of light with matter [35].
On the other hand, imaging systems convert visual characteristics of objects (e.g., scenes) into digital signals, which computers process to create digital images [37]. Therefore, the combination of spectroscopy technology and the modern imaging system could measure a spectrum for every pixel in an image, as demonstrated in Figure 5. This provides a revolutionary way of observing the earth by collecting information of each pixel in the field of view across the electromagnetic spectrum using various platforms at multiple scales [38].
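For readers less familiar with this data structure, a minimal illustration of a hyperspectral cube follows; the dimensions are arbitrary and chosen only to echo the 224-band AVIRIS example discussed later.

```python
import numpy as np

# A hyperspectral cube stores a full spectrum for every pixel:
# (rows, cols, bands). Shapes here are arbitrary, for illustration.
cube = np.random.rand(100, 100, 224)  # e.g., 224 bands like AVIRIS

# The spectral signature of a single pixel is a 1D vector of
# values across all bands.
spectrum = cube[42, 17, :]
print(spectrum.shape)  # (224,)

# A single band is an ordinary 2D grey-scale image.
band_30 = cube[:, :, 30]
print(band_30.shape)   # (100, 100)
```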

4.1. Satellite-Mounted Sensors

Multispectral sensors such as Landsat and Sentinel-2 have been widely used for agricultural mapping due to their public accessibility and high temporal resolution [40]. However, their limited spectral resolution hinders the accurate discrimination of different crop types.
On the other hand, the emergence of high-resolution multispectral satellites (e.g., WorldView-3, PlanetScope) has addressed some spatial limitations of public missions, offering sub-5 m pixel resolution. These high-resolution sensors are particularly beneficial for crop boundary delineation [41], fine-scale classification [42], and precision agriculture monitoring [43]. However, the high operational costs and limited spectral resolution of these commercial systems restrict their utility for large-scale agricultural monitoring and research.
Compared to multispectral systems, hyperspectral sensors provide significantly higher spectral resolution by capturing hundreds of continuous narrow bands (e.g., 240 bands for PRISMA and 246 for EnMAP), allowing for more precise differentiation among various crop species. Building on this advantage, a number of space-borne hyperspectral missions, such as Hyperion (EO-1), DESIS, PRISMA, and the recently launched EnMAP, as presented in Table 2, have shown strong potential for large-scale agricultural applications [44].
For instance, the launch of NASA’s EO-1 Hyperion in 2000 marked the beginning of the hyperspectral era in space-based imaging [45]. Hyperion captured data across 220 spectral bands ranging from 0.4 to 2.5 micrometers, with a spectral resolution of 10 nanometers (nm) and a spatial resolution of 30 m. Its ability to collect data over such a wide spectral range enabled researchers to conduct numerous studies. A notable example is a comparative assessment between Landsat 8 and Hyperion data; the latter outperformed multispectral data in terms of crop discrimination with an overall accuracy (OA) around 80% [46]. Another study by Surase et al. [47] in 2018 focused on discriminating crops like wheat, sorghum, and cotton using traditional machine learning algorithms such as Support Vector Machines (SVMs), demonstrating the potential of hyperspectral imagery for crop mapping, though it also highlighted the challenge of mixed pixels and recommended further research in that area.
Following the success of Hyperion, the CHRIS/PROBA mission, launched in 2001 by the European Space Agency (ESA), became the next significant space-borne hyperspectral mission. CHRIS/PROBA offered 19 fully programmable spectral bands ranging from 400 to 1050 nm with a spatial resolution of 17 m, providing valuable hyperspectral data for various agricultural applications [48]. However, the development of new space-borne hyperspectral missions slowed over the next two decades due to technological and financial constraints. During this period, research continued through airborne platforms, along with a few national hyperspectral missions that primarily served their respective countries [49]. Notable examples include HySIS, launched in 2018 by the Indian Space Research Organization (ISRO), which was designed for agricultural and forestry applications, and HJ-1, launched in 2008 by the Chinese Academy of Space Technology (CAST), which provides hyperspectral imaging for environmental monitoring and disaster management. However, these missions did not offer open-access data for global scientific use, limiting their impact on international research and broader applications. This paved the way for more advanced hyperspectral sensors like PRISMA, launched in 2019 by the Italian Space Agency (ASI). PRISMA captures data across a spectral range from 400 to 2500 nm with a spatial resolution of 30 m, comparable to that of Hyperion.
During the same period, the DESIS hyperspectral mission, jointly developed by Teledyne Brown Engineering (TBE) and the German Aerospace Center (DLR), was deployed and became operational aboard the International Space Station (ISS). It collects data across 235 spectral bands with a 2.55 nm bandwidth, covering the 400–1000 nm spectral range and offering a 30 m spatial resolution. The simultaneous availability of DESIS and PRISMA has facilitated numerous comparative studies, with findings showing a strong correlation between the two sensors, reflected in R² values ranging from 0.9 to 0.95 [50]. These studies have identified optimal hyperspectral narrowbands (OHNBs) from both platforms, which are crucial for differentiating crop types and estimating vegetation biophysical parameters [50,51].
This was followed by the launch of EnMAP by the DLR in 2023, which captures data across 246 spectral bands within the 420–2450 nm spectral range, offering a spatial resolution of 30 m. A recent study by Bourriz et al. [44] reported a high degree of consistency and complementarity between PRISMA and EnMAP, with Pearson correlation coefficients ranging from 0.91 to 0.93, highlighting their value for crop monitoring, particularly in the face of limited revisit frequencies.
However, while HSI offers enhanced spectral resolution, it often comes with a trade-off in spatial resolution. The typical 30 m spatial resolution of hyperspectral sensors poses challenges for precision agriculture, which requires fine spatial detail and field-level insights [52].
To address this limitation, combining hyperspectral data with high-resolution optical imagery from platforms like Sentinel-2, or commercial satellites such as WorldView and QuickBird, can help enhance spatial detail [53]. Additionally, integrating data from other hyperspectral missions, such as PRISMA, DESIS, and EMIT, can improve temporal coverage, enabling the generation of more comprehensive and global hyperspectral time-series [21]. These synergies will facilitate better crop type mapping, object recognition, and product validation in agricultural applications, enhancing both temporal and spatial insights for precision farming.

4.2. Airborne and UAVs Mounted Sensors

Airborne and UAV platforms have been widely used in crop mapping, offering high efficiency and flexibility for monitoring agricultural fields across different scales, from local plots to entire regions (Figure 6). Operating at lower altitudes compared to satellites, these platforms allow the integration of multiple sensors, including multispectral, hyperspectral, thermal, and LiDAR, leading to high spatial and spectral resolutions [54].
Many of the most prominent hyperspectral datasets used in agricultural research have been acquired via airborne missions. A key example is the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), which, since its development in 1987, has captured spectral data in 224 bands across the visible to shortwave infrared (SWIR) range, with a spatial resolution of 20 m. AVIRIS played a foundational role in advancing HSI, giving rise to benchmark agricultural datasets such as Indian Pines and Salinas [25]. These datasets have contributed significantly to validating various DL models for crop type mapping.
Beyond AVIRIS, several airborne hyperspectral sensors are employed in agricultural and environmental studies, such as the Compact Airborne Spectrographic Imager (CASI), Shortwave Airborne Spectrographic Imager (SASI), and Airborne Imaging Spectrometer for Applications (AISA). For instance, the CASI sensor, with 96 bands covering wavelengths from 380 nm to 1057.3 nm, and the SASI sensor, with 100 bands ranging from 957.7 nm to 2442.5 nm, were used in a comparative study alongside PRISMA satellite data for the classification of tomato and corn crops using a one-dimensional (1D) convolutional network [56]. The results demonstrated that CASI/SASI and PRISMA yielded similar classification performances, reinforcing the reliability of spaceborne sensors in crop mapping. Additionally, AISA systems have been integral in more recent projects, such as the AI4GEO field campaign, where they collected a comprehensive multi-source dataset over Toulouse in 2021 [57,58]. This dataset serves as a new hyperspectral benchmark designed to support the development and validation of advanced AI models for land use and land cover (LULC) classification (Table 3).
Most airborne hyperspectral data are captured by aircraft operating at medium to high altitudes (1–20 km), typically resulting in medium to high spatial resolution (0.73–20 m) [12,59]. These resolutions are sufficient for mapping various crops and soil properties [60]. However, airborne hyperspectral imaging in precision agriculture faces several limitations. For instance, scheduling flight missions can take months, if not years, and the associated costs are quite high. These factors have limited the widespread application of airborne hyperspectral technology in agriculture.
As a cost-effective and flexible alternative, UAVs provide high spatial–spectral resolution due to the lower flight altitudes (<120 m) and the ability to mount a range of sensors, including hyperspectral ones [61]. In recent years, several lightweight hyperspectral sensors have been designed specifically for UAV integration. Notable examples include the Headwall and Resonon sensors, which have been extensively used to collect benchmark hyperspectral datasets (Table 4). These datasets, along with those previously obtained from airborne platforms, provide valuable resources for the development and validation of advanced DL algorithms for crop mapping.
Unlike hyperspectral imaging satellites, which are typically corrected by the data provider and ready for analysis, airborne and UAV hyperspectral data require substantial pre-processing. Various calibrations, such as radiometric and atmospheric corrections, are required to remove undesirable components, such as solar illumination and sensor response, as well as to convert raw radiance data into absolute reflectance values [12,62]. Alongside these corrections, geometric adjustments must also be made to compensate for motion distortions caused by the instability of airborne/UAVs and to accurately geo-register the images to field locations [18]. These processes are necessary for producing high-quality, accurate geo-referenced data, which are crucial for detailed crop and soil analysis. Furthermore, many UAV-mounted hyperspectral systems, such as those from Headwall and HySpex, integrate LiDAR to enhance spatial accuracy by providing detailed 3D structural information, making it useful for applications such as terrain correction [63] and canopy height estimation [64]. However, combining multiple sensor modalities adds significant complexity and cost, requiring additional calibration, synchronization, and processing resources [54].
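As a simple illustration of one such radiometric step, the sketch below applies the empirical line method, which fits a per-band linear gain and offset between measured radiance and the known reflectance of calibration panels. All panel values and cube dimensions here are synthetic placeholders; a full correction workflow involves considerably more than this.

```python
import numpy as np

# Empirical line correction: per-band linear fit between measured
# radiance and known panel reflectance. Panel values are synthetic
# placeholders, for illustration only.
panel_reflectance = np.array([0.05, 0.50])   # dark and bright panels
panel_radiance = np.array([[12.0, 95.0],     # band 1 panel readings
                           [10.5, 88.0]])    # band 2 panel readings

def empirical_line(radiance_cube, panel_rad, panel_ref):
    """Fit gain/offset per band and convert radiance to reflectance."""
    n_bands = panel_rad.shape[0]
    corrected = np.empty_like(radiance_cube, dtype=float)
    for b in range(n_bands):
        gain, offset = np.polyfit(panel_rad[b], panel_ref, deg=1)
        corrected[..., b] = gain * radiance_cube[..., b] + offset
    return corrected

cube = np.random.uniform(10, 100, size=(64, 64, 2))  # toy radiance cube
reflectance = empirical_line(cube, panel_radiance, panel_reflectance)
```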

5. Overview of Artificial Intelligence Models for Crop Mapping Using Hyperspectral Imaging

AI plays an increasingly important role in EO, significantly improving our ability to understand and analyze vast amounts of satellite data [65]. Hyperspectral sensors, in particular, provide detailed spectral information across the Earth’s surface. However, these data only become valuable when effective processing pipelines are in place to transform raw pixel data into meaningful information for decision-makers [66]. The rapid advancement of DL, supported by cloud computing and high-performance systems, has greatly expanded the potential for data analysis. Moreover, breakthroughs in computer vision (CV) and natural language processing (NLP) have inspired innovative approaches in remote sensing [67]. Leveraging these advancements, crop mapping has greatly benefited from the capacity of DL models to extract meaningful insights from hyperspectral data, mainly through their capabilities to extract and select the most important features.
In this section, we review both conventional and the most recent AI techniques applied to HSI classification.

5.1. Conventional AI Approaches in Crop Mapping

In earlier years, ML algorithms, such as support vector machine (SVM) and random forest (RF), were widely adopted for crop classification [48]. These algorithms became popular due to their robustness in handling large feature sets and their strong theoretical foundations.

5.1.1. Support Vector Machine

SVM has proven effective in classifying hyperspectral data due to its ability to manage high-dimensional feature spaces [68]. By maximizing the margin between classes, SVM reduces overfitting and enhances generalization. Kernel functions like radial basis, polynomial, and sigmoid further enable the modeling of complex, non-linear patterns in the data [23]. For multi-class problems, approaches such as ‘one-against-one’ are commonly used, making SVM suitable for distinguishing among multiple crop types [69]. Several studies have demonstrated the success of these approaches, achieving high OA values ranging from 75% to 90% in distinguishing between crops like wheat, maize, and others [48,70,71].
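A minimal sketch of this workflow with scikit-learn follows; the spectra, labels, and hyperparameters are synthetic assumptions for illustration. Note that scikit-learn’s SVC applies the ‘one-against-one’ scheme for multi-class problems by default.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for labeled hyperspectral pixels:
# X holds one spectrum (200 bands) per pixel, y a crop label per pixel.
X = np.random.rand(500, 200)
y = np.random.randint(0, 4, size=500)  # four hypothetical crop classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

# Standardizing bands generally helps RBF kernels on high-dimensional spectra.
scaler = StandardScaler().fit(X_train)

# SVC uses the one-vs-one ('one-against-one') scheme for multi-class
# problems by default.
clf = SVC(kernel="rbf", C=10, gamma="scale")
clf.fit(scaler.transform(X_train), y_train)
print("OA:", clf.score(scaler.transform(X_test), y_test))
```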

5.1.2. Random Forest

On the other hand, RF is an ensemble learning method that constructs multiple decision trees using bootstrapped subsets of the training data. Its strength lies in bagging, which reduces overfitting and balances bias and variance [23]. RF has been widely adopted in hyperspectral crop mapping due to its robustness, ability to model non-linear relationships, and effectiveness in detecting outliers [72]. Beyond classification, RF is also used for feature selection by ranking the importance of spectral bands. For example, Monaco et al. [73] used RF to identify the most important bands, which were then used to train a 2D-CNN, significantly enhancing classification accuracy to an OA of 92%.
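The band-ranking workflow described above can be sketched as follows; the data are synthetic, and the choice of 200 trees and top-20 bands is an arbitrary assumption rather than the configuration used in [73].

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic pixel spectra (150 bands) with random crop labels,
# purely to illustrate the band-ranking workflow described above.
X = np.random.rand(800, 150)
y = np.random.randint(0, 3, size=800)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X, y)

# feature_importances_ ranks spectral bands; the top-k bands can then
# be fed to a downstream classifier such as a 2D-CNN.
top_bands = np.argsort(rf.feature_importances_)[::-1][:20]
print("Most informative bands:", np.sort(top_bands))
```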
SVM and RF (Figure 7), as traditional supervised machine learning algorithms, have proven their effectiveness in a wide range of classification tasks, including HSI. However, these methods do not inherently account for the spatial relationships between pixels, as they treat each pixel as an independent observation and rely exclusively on spectral information for classification [74]. Furthermore, both SVM and RF depend on manual feature engineering, which demands considerable domain expertise to design appropriate features.
In contrast, DL architectures offer the ability to automatically learn complex patterns from data without requiring prior knowledge about the statistical distribution of the input [75]. These models can autonomously extract linear and nonlinear features from the data, provided that sufficient labeled training data and computational resources are available [25].

5.1.3. Convolutional Neural Network

Convolutional Neural Networks (CNNs) have become the dominant deep learning architecture for remote sensing, particularly for HSI classification, due to their efficiency and ability to learn hierarchical features from complex data [76]. A typical CNN consists of convolutional layers for feature extraction, pooling layers for spatial dimensionality reduction, and fully connected layers that produce class predictions by mapping extracted features to labels [77,78]. Their strength in recognizing patterns across multidimensional data makes them highly effective for modeling spectral–spatial relationships in HSI.
CNNs have evolved into several variants (Figure 8), each designed to improve learning efficiency and adapt to different data characteristics.
For instance, 1D-CNNs focus on extracting temporal or spectral features from single pixels without accounting for spatial inter-pixel relationships or contextual information [76]. Two-dimensional (2D) CNNs, in contrast, typically segment large input images into smaller patches, with each label corresponding to the central pixel of the patch [66]. Going further, 3D-CNNs provide a more comprehensive approach by simultaneously learning from both spatial and spectral dimensions, making them particularly suitable for hyperspectral data [79].
Several studies have demonstrated the value of CNNs for crop mapping. Mirzaei et al. [80] showed that combining PRISMA hyperspectral time series with a 1D-CNN yielded highly accurate early-season crop maps, outperforming conventional methods (e.g., SVM, RF) with an OA of 80%.
Similarly, Ashraf et al. [81] and Kanthi et al. [79] employed 3D-CNNs on benchmark datasets (e.g., Salinas, Indian Pines), achieving superior accuracy compared to traditional state-of-the-art models by leveraging both spectral and spatial features.
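To make the patch-based 3D-CNN idea concrete, a minimal PyTorch sketch is given below; the patch size, band count, class count, and layer widths are illustrative assumptions, not the architectures used in the cited studies.

```python
import torch
import torch.nn as nn

# Minimal 3D-CNN for patch-based HSI classification, assuming 11x11
# spatial patches with 30 bands (e.g., after band selection) and the
# label assigned to the central pixel. All sizes are illustrative.
class Simple3DCNN(nn.Module):
    def __init__(self, n_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            # Input shape: (batch, 1, bands, height, width)
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3)), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # collapse spectral/spatial dims
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

patches = torch.randn(4, 1, 30, 11, 11)  # 4 patches, 30 bands, 11x11
logits = Simple3DCNN()(patches)
print(logits.shape)  # torch.Size([4, 9])
```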
However, CNNs come with some limitations, notably high computational demands and the need for large amounts of labeled training data, which can be challenging for hyperspectral datasets.

5.1.4. Recurrent Neural Networks

On the other hand, Recurrent Neural Networks (RNNs) have gained prominence as dynamic and adaptable tools for HSI, offering unique capabilities for analyzing the spectral dimension of hyperspectral data cubes [25]. Their loop-based architecture (Figure 9) allows them to retain contextual information from previous inputs, making them well suited for analyzing spectral signatures [82].
However, standard RNNs often struggle with vanishing or exploding gradients when handling long sequences [66]. To address these limitations, advanced variants like Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRUs) have been introduced, offering improved performance in capturing long-term dependencies through gated memory mechanisms [82].
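A minimal sketch of this idea, reading a pixel’s spectrum band by band with a bidirectional LSTM, is shown below; all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sketch of an LSTM that reads a pixel's spectrum band by band,
# treating the spectral dimension as a sequence. Sizes are illustrative.
class SpectralLSTM(nn.Module):
    def __init__(self, n_classes=9, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, spectra):
        # spectra: (batch, bands) -> (batch, bands, 1) sequence of scalars
        out, _ = self.lstm(spectra.unsqueeze(-1))
        # Classify from the hidden state at the final band.
        return self.head(out[:, -1, :])

pixels = torch.randn(8, 200)  # 8 pixels, 200-band spectra
print(SpectralLSTM()(pixels).shape)  # torch.Size([8, 9])
```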
To better capture spectral–spatial relationships in hyperspectral data, RNN architectures have been enhanced through hybrid designs. For example, the Bidirectional-Convolutional LSTM (Bi-CLSTM) integrates convolutional layers with bidirectional LSTM units to jointly learn spectral and spatial features, leading to improved classification performance [83].
These advanced RNN-based approaches have been validated on benchmark datasets such as Salinas and Indian Pines. Zhou et al. [84] proposed a novel framework using two specialized LSTM branches (Spectral LSTM and Spatial LSTM) to treat spectral and spatial feature extraction as sequential learning tasks. The outputs were fused using a decision-level strategy, resulting in OA improvements of 1–2% over conventional methods and demonstrating the potential of RNN-based models for enhancing crop mapping from HSI.

5.2. Advanced AI Approaches in Crop Mapping

As discussed in the previous section, crop type mapping using HSI has long relied on traditional ML methods, such as SVM and RF. These methods require manual feature engineering and treat each pixel as an independent observation, ignoring the spatial–spectral relationships present in hyperspectral data, which are crucial for precise crop type classification. In contrast, advanced AI approaches, driven by DL architectures, automatically learn features from data by integrating spectral and spatial dimensions. Transformers, in particular, surpass CNNs and RNNs thanks to attention mechanisms that capture both contextual and spatial relationships in hyperspectral data, overcoming limitations like fixed receptive fields in CNNs and vanishing gradient issues in RNNs. This section explores these advancements, starting with transformers (Section 5.2.1), followed by hybrid models (Section 5.2.2), which integrate the strengths of multiple architectures, foundation models (Section 5.2.3) for scalable generalization, and GNNs (Section 5.2.4), which excel in modeling spatial–spectral relationships for improved classification.

5.2.1. Transformer

Unlike conventional DL models such as CNNs, which are effective at capturing local spatial patterns, transformers offer a more advanced solution for HSI analysis through their self-attention mechanism. This allows them to model global dependencies and dynamically weigh interactions between distant spectral bands, enabling the extraction of rich spectral–spatial features [85]. As a result, transformers can learn more discriminative representations and often outperform CNNs in handling the complexity of hyperspectral data.
The core of the transformer architecture lies in its self-attention mechanism, which updates each token by aggregating contextual information from all other tokens in the sequence. Introduced by Vaswani et al. [86], the original transformer model consists of an encoder–decoder structure, both incorporating self-attention to process and generate sequences.
Building on advances in CV, transformer architectures, particularly the Vision Transformer (ViT) (Figure 10), have been effectively adapted for HSI tasks, demonstrating strong potential for spectral–spatial learning.
ViTs convert HSI into token sequences, enabling the model to efficiently capture intricate spatial and spectral patterns through a standard transformer encoder, similar to how sequential data is processed in NLP [66]. Several ViT-based models have demonstrated superior performance in hyperspectral image classification. For instance, Xie et al. [87] proposed HyperSFormer, built on a Swin Transformer encoder, which effectively captures both local spectral and global spatial features. In a comparative analysis, this model outperformed state-of-the-art algorithms such as DeepLabv3 and U-Net with an OA of 98%, showcasing its effectiveness for hyperspectral image classification, notably in complex agricultural scenarios where fine spectral distinctions are essential for crop type discrimination.
Similarly, Zhang et al. [88] evaluated eight classification models on the large-scale Heilongjiang dataset, showing that transformer-based models like SpectralFormer and ViT achieved an OA of over 80% using only 10% of the labeled data, significantly surpassing traditional ML approaches by better capturing spectral–spatial features in agricultural landscapes.
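The tokenization step described above can be sketched as follows; this toy model uses a patch-embedding convolution and a two-layer transformer encoder with mean pooling, all of which are illustrative assumptions rather than any published ViT variant.

```python
import torch
import torch.nn as nn

# Illustrative ViT-style tokenization for HSI: each small spatial
# patch (with all its bands) becomes one token, then a standard
# transformer encoder processes the token sequence. Sizes are
# assumptions for illustration, not a published architecture.
class TinyHSIViT(nn.Module):
    def __init__(self, bands=30, patch=3, dim=64, n_classes=9):
        super().__init__()
        self.embed = nn.Conv2d(bands, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):              # x: (batch, bands, H, W)
        tokens = self.embed(x)         # (batch, dim, H/p, W/p)
        tokens = tokens.flatten(2).transpose(1, 2)  # (batch, n_tokens, dim)
        encoded = self.encoder(tokens)
        return self.head(encoded.mean(dim=1))  # mean-pool over tokens

x = torch.randn(2, 30, 15, 15)  # two 15x15 scenes with 30 bands
print(TinyHSIViT()(x).shape)    # torch.Size([2, 9])
```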

5.2.2. Hybrid Architectures

Recently, hybrid models (Figure 11) combining multiple architectures within a unified framework have gained significant attention in precision agriculture, particularly for crop mapping. These models are often benchmarked against state-of-the-art architectures, demonstrating improved performance by leveraging the complementary strengths of different components [82].
Hybrid architectures commonly combine CNNs and transformers, utilizing the CNN’s ability for local feature extraction alongside the transformer’s capacity to capture global dependencies. These hybrid frameworks are implemented in two forms: a single-branch structure combining CNN and transformer features and a dual-branch structure where each branch focuses on specific features [89].
Research has shown that single-branch frameworks are insufficient for fully representing the rich spectral and spatial information found in HSI, as the local and global features captured by one branch are not always complementary [66]. Therefore, dual-branch approaches are recommended for their enhanced capacity to represent spatial and spectral features, especially for crop classification tasks [89].
In this context, Guo et al. [90] proposed Convolutional Meet Transformer Network (CMTNet), a dual-branch hybrid model that combines CNN and transformer modules for enhanced crop type mapping. The architecture includes a spectral–spatial feature extractor that uses 3D and 2D convolutions, a dual-branch fusion module, and a multi-output constraint module to optimize learning through multi-scale losses. CMTNet outperformed existing models on large-scale datasets (WHU-Hi LongKou, HanChuan, and HongHu) with an OA reaching 98%.
Similarly, Wang et al. [91] introduced a hybrid model integrating CNN-based spectral–spatial extraction with an improved ViT. Their approach features a dynamic spectral–spatial kernel and a re-attention mechanism to diversify attention maps and adapt receptive fields. The model showed a 3.17% gain in OA over state-of-the-art methods like CNN, SVM, and HybridSN across three benchmark datasets, confirming its robustness and effectiveness.
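To illustrate the dual-branch principle, the sketch below pairs a convolutional branch for local features with a transformer branch for global context and concatenates their outputs; the dimensions and fusion strategy are simplified assumptions, not those of CMTNet or the model in [91].

```python
import torch
import torch.nn as nn

# Minimal dual-branch sketch: a CNN branch extracts local
# spectral-spatial features while a transformer branch models global
# context; outputs are fused by concatenation. Purely illustrative.
class DualBranch(nn.Module):
    def __init__(self, bands=30, dim=32, n_classes=9):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(bands, dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Conv2d(bands, dim, kernel_size=1)  # pixel tokens
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=1)
        self.head = nn.Linear(2 * dim, n_classes)

    def forward(self, x):                       # (batch, bands, H, W)
        local = self.cnn(x)                     # (batch, dim)
        tok = self.proj(x).flatten(2).transpose(1, 2)
        glob = self.transformer(tok).mean(1)    # (batch, dim)
        return self.head(torch.cat([local, glob], dim=1))

print(DualBranch()(torch.randn(2, 30, 11, 11)).shape)  # torch.Size([2, 9])
```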

5.2.3. Geospatial Foundation Models

AI and DL advancements have opened a new era characterized by large-scale, pre-trained models known as foundation models. These models can perform a wide range of language and vision tasks. Inspired by the success of models like GPT [92] and Gemini [93], researchers have begun exploring the potential of foundation models in other fields, including remote sensing [94]. As a result, considerable efforts have been devoted to developing foundation models for evaluating their performance across different applications. Examples include SAM by Meta-AI [95], Prithvi by NASA-IBM [96], and the Clay Foundation Model, which primarily targets high-resolution aerial RGB, multispectral, and synthetic aperture radar (SAR) imagery. However, training these models presents considerable challenges, mainly due to their large scale, which necessitates vast training datasets that are often difficult to obtain [96]. Fortunately, advanced methodologies such as self-supervised learning (SSL), ViT, and CNNs have been instrumental in addressing these challenges, making it feasible to train foundation models effectively despite limited data availability [94].
SSL has emerged as an effective approach for learning meaningful representations from unlabeled data. SSL predicts parts of the input based on other available parts, thereby reducing the need for large-labeled datasets [97,98]. SSL can be divided into two commonly used methodologies: Contrastive Learning [94] and Masked Image Modeling (MIM) [99].
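A toy example of the MIM objective on spectra follows: a random subset of bands is hidden and a small network learns to reconstruct them from the visible bands, with the loss computed only on the masked positions. The network and masking ratio are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy masked-modeling step in the spirit of MIM: hide ~30% of the
# spectral bands and train a small network to reconstruct them from
# the visible ones. Purely illustrative of the SSL idea.
bands = 100
encoder = nn.Sequential(nn.Linear(bands, 64), nn.ReLU(),
                        nn.Linear(64, bands))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

spectra = torch.rand(256, bands)        # unlabeled pixel spectra
mask = torch.rand(256, bands) < 0.3     # mask ~30% of the bands

masked_input = spectra.masked_fill(mask, 0.0)
recon = encoder(masked_input)
# The loss is computed only on the masked (hidden) bands.
loss = ((recon - spectra)[mask] ** 2).mean()
loss.backward()
opt.step()
print(float(loss))
```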
On the other hand, ViTs and CNNs serve as foundational backbones, providing rich feature representations from input data that subsequent components can use for tasks like classification [94].
However, despite the rich spectral detail of hyperspectral data, its integration into geospatial foundation models (GFMs) (Figure 12) has been limited due to challenges such as high dimensionality, redundancy, and the need for large labeled datasets [100]. To bridge this gap, Braham et al. [101] introduced SpectralEarth, a large-scale, multi-temporal hyperspectral dataset from the EnMAP satellite designed to support foundation model development and enhance generalization across sensors like DESIS and PRISMA.
In parallel, Wang et al. [100] developed HyperSIGMA, the first GFM specifically designed for hyperspectral data. Based on a ViT architecture, it incorporates a Sparse Sampling Attention (SSA) mechanism to efficiently manage spectral–spatial redundancy. HyperSIGMA achieved notable performance in classification tasks crucial for agricultural monitoring, outperforming established models such as FullyContNet, SpectralFormer, and spectral–spatial global graph reasoning (SSGRN) on datasets like Indian Pines, HanChuan, and HongHu, which are widely used for crop mapping applications.
Building on these advancements and to overcome the limitations of remote sensing foundation models often restricted to a single modality or spatial resolution, Xiong et al. [102] proposed Dynamic One-For-All (DOFA), a neural plasticity-inspired, multi-modal foundation model. DOFA integrates hyperspectral data with other modalities such as multispectral, high-resolution optical, and SAR imagery. Its core component is a dynamic hyper-network that generates adaptive weights based on spectral band wavelengths, enabling a unified Transformer-based framework for diverse input types [96]. Trained on a comprehensive multi-modal dataset including EnMAP, Sentinel-1/2, NAIP, and Gaofen imagery, DOFA demonstrated strong generalization and performance across tasks like land cover classification and crop mapping, surpassing models such as SatMAE with an OA of 92.2%. Additionally, its use of MIM minimizes reliance on extensive labeled datasets, addressing one of the primary challenges in developing hyperspectral foundation models.

5.2.4. Graph Neural Networks

Graph Neural Networks (GNNs) (Figure 13) have emerged as a promising approach in EO and HSI classification, effectively addressing the limitations of conventional DL models like CNNs when applied to non-Euclidean data structures [66]. Graphs offer a flexible data representation used across various domains such as agriculture, land cover classification, weather, and air quality monitoring by modeling relationships between entities as nodes and edges [103]. These edges may be weighted or unweighted, directed or undirected, depending on the application context [104]. While CNNs are proficient at extracting local spectral–spatial features from grid-based data, they struggle to capture complex, global interactions, especially in irregular spatial arrangements [105]. GNNs overcome this limitation by explicitly modeling the connectivity and interactions between spatial regions, making them particularly suitable for applications involving heterogeneous landscapes or interconnected land cover types [103,104].
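The basic message-passing operation underlying GNNs can be written compactly; the sketch below builds a toy graph by hand and applies one layer of symmetrically normalized aggregation followed by a learned linear transform, as in the standard GCN formulation.

```python
import torch

# One step of graph convolution in its simplest form: each node
# (e.g., a pixel or superpixel) averages features over its neighbors
# via a normalized adjacency matrix, then applies a learned weight.
# The tiny graph here is hand-built, for illustration only.
n_nodes, in_dim, out_dim = 5, 8, 4
A = torch.eye(n_nodes)                 # self-loops
A[0, 1] = A[1, 0] = 1.0                # connect spatially adjacent nodes
A[1, 2] = A[2, 1] = 1.0

deg = A.sum(dim=1)
D_inv_sqrt = torch.diag(deg.pow(-0.5))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt    # symmetric normalization

X = torch.randn(n_nodes, in_dim)       # node features (e.g., spectra)
W = torch.randn(in_dim, out_dim)       # learnable weights
H = torch.relu(A_hat @ X @ W)          # one GCN layer: aggregate + transform
print(H.shape)                         # torch.Size([5, 4])
```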
Recent studies have explored the potential of GNNs in HSI classification, particularly for crop mapping. To better capture long-range dependencies in HSI, Liu et al. [106] proposed the Adaptive Multi-Feature Fusion Graph Convolutional Network (AMF-GCN). This model addresses limitations of traditional fusion techniques, which are often prone to redundancy, by employing a multi-branch architecture that processes spectral, textural, and fused features in parallel. Each pixel is treated as a graph node, and adaptive feature fusion is achieved using a GCN with an attention mechanism, enabling the model to focus on the most informative features. AMF-GCN achieved 98.03% accuracy on the Salinas dataset, outperforming existing GNN and DL models.
Building on this, Yang et al. [107] introduced GraphMamba, an efficient graph structure learning framework tailored for HSI classification. It incorporates the HyperMamba Module, which employs a global mask for spectral redundancy reduction and enables parallel computation, and a SpatialGCN module that uses weighted multi-hop aggregation to capture spatial relationships. By constructing spatial–spectral cubes, GraphMamba enhances feature preservation and learning efficiency. It achieved superior results on benchmark datasets like Indian Pines and Salinas, outperforming models such as SVM and 3D-CNN. These findings highlight the strength of GNNs in modeling complex spectral–spatial dependencies that conventional models often overlook. They also suggest promising directions for future research, including hybrid architectures that integrate GNNs with CNNs or transformers to further enhance crop mapping accuracy using HSI.
Overall, AI techniques for crop type mapping using HSI vary widely in their methodologies and capabilities. Traditional ML approaches such as SVM and RF are well suited for handling high-dimensional data due to their robust statistical foundations. However, these models treat pixels independently, often neglecting spatial context and requiring manual feature selection. DL techniques, particularly CNNs, offer a significant advancement by automatically learning spatial–spectral features. More sophisticated architectures like 3D-CNNs enhance this capability by simultaneously capturing information across both spatial and spectral dimensions. Emerging approaches, including transformers and GNNs, further push the boundaries by modeling complex global dependencies and non-Euclidean spatial relationships, addressing limitations of earlier models. In addition, hybrid models that combine CNNs with transformers effectively balance local feature extraction with global context modeling, making them particularly well suited to the intricate nature of HSI data. Meanwhile, GFMs, designed for large-scale pre-training, introduce a new paradigm by enabling transfer learning and reducing reliance on labeled data. This scalability makes them highly promising for operational crop mapping at regional or national scales.
Given the high variability in the performance and capabilities of ML and DL classifiers, it is challenging to assert the efficiency of one model over the others. The choice of approach depends on several factors, such as the complexity of the hyperspectral data, the availability and quality of labeled ground truth data, computational resources, and the specific objectives of the classification task. For instance, traditional ML methods may suffice for smaller datasets with limited computational resources, while DL or hybrid models are better suited for large-scale, complex datasets requiring advanced feature extraction and global context modeling. Ultimately, the selection of an appropriate model should be guided by careful consideration of these factors to achieve optimal results for the given application, such as crop type mapping.

6. Results and Discussion

This study conducted a comprehensive full-text analysis of the 47 selected articles to address the research questions outlined in Section 2.1. Most of these articles were published in peer-reviewed journals, with the remainder appearing in conference proceedings. This distribution reflects our focus on high-quality, detailed research contributions, which are more commonly found in journal publications, as illustrated in Figure 14.
Our results, as shown in Figure 15, indicate that publications related to crop mapping using HSI stagnated between 2019 and 2020. This stagnation can be attributed to several factors.
First, the high cost of hyperspectral sensors and platforms, particularly UAVs and airborne systems, limited their adoption. At the same time, hyperspectral satellite missions were still under-explored and remained scarce. Furthermore, the computational capabilities available during this period were insufficient to handle the massive and complex datasets generated by HSI, posing significant data processing and analysis challenges. Moreover, advanced modeling techniques, such as DL and other sophisticated algorithms, had not yet been widely adopted or made accessible for HSI analysis. However, crop mapping research using HSI significantly increased in 2021. This growth can be attributed to the successful integration of DL models, which demonstrated superior performance in analyzing hyperspectral data. Additionally, the democratization and accessibility of hyperspectral data from satellite research programs such as DESIS, PRISMA, and EnMAP enabled large-scale crop type mapping. Advances in cloud computing further allowed researchers to efficiently process, analyze, and model the vast and complex datasets generated by HSI. Moreover, the growing global focus on critical challenges like food security and climate change accelerated research efforts to develop innovative solutions for reducing the yield gap and improving agricultural productivity. These factors have collectively contributed to the growing interest in HSI-based crop mapping.
Study areas for crop type mapping using HSI are disproportionately distributed across the globe, as illustrated in Figure 16. The reviewed studies were mainly conducted in the US and China, while regions such as Africa, South America, and Southeast Asia remain significantly underrepresented despite their extensive croplands [108]. Future research efforts should include these developing regions to support agricultural innovation and contribute to long-term economic development.

6.1. Sensors and Platforms

To address the first question in Section 2.1, our analysis of the literature, as illustrated in Figure 17, shows that hyperspectral benchmark datasets such as Indian Pines, Salinas, and the WHU-Hi datasets are by far the most widely used for crop mapping. This prevalence can be attributed to the availability of reliable ground truth, as well as the standardized and pre-processed nature of these datasets, which makes them a trusted resource for testing and developing new architectures for crop type mapping.
The second most utilized platforms are satellite-based sensors, which have seen a significant increase in adoption. This trend is mainly driven by the launch of research-oriented hyperspectral satellites such as EnMAP, PRISMA, and DESIS, along with commercial satellites like Pixxel, Tanager, and Wyvern. These advancements have marked a turning point, enabling large-scale, cost-effective crop type mapping with global coverage. Nevertheless, a major challenge remains the quality and availability of ground truth data, which are crucial for accurately training classifiers and addressing issues like the Hughes phenomenon [66], where high data dimensionality combined with insufficient training samples can degrade classification performance. In contrast, hyperspectral UAVs and airborne platforms remain underutilized in crop type mapping, despite their capacity to produce high-resolution crop maps. The limited adoption of these platforms can primarily be attributed to their high operational costs and restricted spatial coverage, which make them less suitable for large-scale applications.
Most studies in crop type mapping using HSI rely on a single sensor, with limited exploration of multi-sensor or multi-modal approaches. To date, only one study [109] has investigated the potential of combining hyperspectral imaging with LiDAR-derived products for crop type mapping, highlighting an area that remains largely understudied.
Similarly, the synergistic integration of hyperspectral data with existing optical (e.g., Sentinel-2, Landsat) and radar missions (e.g., Sentinel-1) has not yet been fully explored [21]. Moreover, integrating current imaging spectroscopy missions remains largely unexplored, yet it holds immense potential for advancing crop type mapping. These missions can enable the development of innovative methodologies to effectively address challenges across spectral, spatial, and temporal domains. Therefore, these unexplored opportunities represent significant potential for future research to address current limitations and drive advancements in precision agriculture, focusing on improving the accuracy and scalability of crop type mapping.

6.2. Modeling

To address the second and third research questions, our analysis highlighted a wide range of architectures and models employed for crop type mapping using hyperspectral imaging. To offer a clear understanding, we present a comparative overview of recent studies conducted between 2019 and 2024 that have tested and evaluated various crop mapping models (Table 5). The evaluation of these methods was primarily based on the OA, which was the most widely adopted metric for performance assessment across the reviewed studies.
A wide range of models have been utilized for crop mapping using HSI. However, given the variability of ML and DL classifiers, it is challenging to determine the superiority of one approach over another. Traditional ML methods, such as SVM and RF, remain widely adopted for large-scale crop mapping applications using hyperspectral satellite imagery; their popularity can be attributed to their simplicity and robustness in handling high-dimensional spectral data. However, these methods often fail to capture the complex spatial–spectral relationships in HSI data. To overcome these limitations, DL models, particularly CNNs and their advanced variants, have emerged as the dominant approach in hyperspectral crop mapping. CNN-based methods process the spectral and spatial dimensions jointly using a patch-based approach, where the imagery is divided into small spatial patches that are fed into the network.
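As a minimal sketch of this patch-based spatial–spectral workflow, the following PyTorch code extracts a patch around a labeled pixel and classifies it with a small 3D CNN. The patch size, band count, and layer widths are our own illustrative assumptions, not the architecture of any cited study.

```python
# Minimal sketch of patch-based spatial-spectral classification with a 3D CNN.
# Patch size, band count, and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

class Patch3DCNN(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            # Input: (batch, 1, bands, patch, patch); the kernels span the
            # spectral dimension and the two spatial dimensions jointly.
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # pool over all spectral/spatial positions
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def extract_patch(cube: torch.Tensor, row: int, col: int, patch: int = 9):
    """Cut a (bands, patch, patch) window centred on a labeled pixel from a
    (bands, height, width) hyperspectral cube."""
    r = patch // 2
    return cube[:, row - r:row + r + 1, col - r:col + r + 1]

# Usage on a dummy cube: 200 bands, 9x9 patches, 10 crop classes.
cube = torch.rand(200, 64, 64)
x = extract_patch(cube, 32, 32).unsqueeze(0).unsqueeze(0)  # (1, 1, 200, 9, 9)
print(Patch3DCNN(n_classes=10)(x).shape)  # torch.Size([1, 10])
```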
Furthermore, optimization techniques have also played a crucial role in enhancing model performance. Methods such as adaptive kernels and attention mechanisms have been employed in different studies [91,124] to improve feature selection and reduce over-fitting in DL models. Additionally, hybrid architectures that combine multiple approaches to capture spatial, spectral, and temporal features have significantly advanced crop mapping accuracy. For instance, models that integrate ViT with CNN have achieved OA values exceeding 98% across various datasets, demonstrating the immense potential of innovative architectures in HSI [87,138].
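To illustrate the kind of attention-based band weighting referred to above, here is a minimal spectral attention block. It is a generic squeeze-and-excitation-style sketch of our own, not the specific mechanisms proposed in [91,124].

```python
# Minimal sketch of a spectral (channel) attention block that re-weights
# bands before classification; a generic illustration only.
import torch
import torch.nn as nn

class SpectralAttention(nn.Module):
    def __init__(self, n_bands: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(n_bands, n_bands // reduction), nn.ReLU(),
            nn.Linear(n_bands // reduction, n_bands), nn.Sigmoid(),
        )

    def forward(self, x):              # x: (batch, bands, height, width)
        squeeze = x.mean(dim=(2, 3))   # global average over the spatial dims
        weights = self.gate(squeeze)   # one weight in (0, 1) per band
        return x * weights[:, :, None, None]

x = torch.rand(4, 200, 9, 9)           # a batch of 9x9 patches, 200 bands
print(SpectralAttention(200)(x).shape)  # torch.Size([4, 200, 9, 9])
```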
Despite these advancements, significant research gaps remain. For instance, GNN-based approaches, which are well suited for modeling relationships in graph-structured data, are underutilized in hyperspectral crop mapping. GNNs have the potential to naturally represent spatial relationships between pixels and spectral dimensions, offering a promising path for future research [105]. Similarly, GFMs, an emerging area of research in the hyperspectral domain, remain largely unexplored [100]. These models could revolutionize hyperspectral crop mapping by enabling transfer learning, scalability across diverse datasets, and integration of multi-modal data sources.
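As a minimal sketch of how a GNN can operate on hyperspectral pixels, the following code builds a 4-neighbour pixel graph and applies a single graph convolution. The grid size and feature widths are illustrative assumptions; the normalization follows the standard GCN formulation rather than the method of any reviewed study.

```python
# Minimal sketch of a graph convolution over a pixel graph, where nodes are
# pixel spectra and edges connect 4-neighbours; a generic GCN illustration.
import torch
import torch.nn as nn

def grid_adjacency(h: int, w: int) -> torch.Tensor:
    """Symmetrically normalized adjacency (with self-loops) of an
    h x w 4-neighbour pixel grid."""
    n = h * w
    A = torch.eye(n)  # self-loops on the diagonal
    for r in range(h):
        for c in range(w):
            i = r * w + c
            if r + 1 < h: A[i, i + w] = A[i + w, i] = 1.0
            if c + 1 < w: A[i, i + 1] = A[i + 1, i] = 1.0
    d = A.sum(1).rsqrt()               # D^{-1/2}
    return d[:, None] * A * d[None, :]

class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, A_hat):       # x: (nodes, features)
        return torch.relu(A_hat @ self.lin(x))

# Usage: an 8x8 image crop, 200 bands per pixel, embedded to 32-d features.
h, w, bands = 8, 8, 200
x = torch.rand(h * w, bands)
out = GCNLayer(bands, 32)(x, grid_adjacency(h, w))
print(out.shape)  # torch.Size([64, 32])
```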
Another limitation lies in the reliance on benchmark datasets for developing and testing advanced algorithms. While these datasets provide standardized and reliable inputs, they often lack the variability and scalability required for real-world agricultural applications. Future research should prioritize adapting these advanced algorithms for large-scale agricultural applications, particularly by exploiting hyperspectral satellite imagery from platforms such as PRISMA, EnMAP, and DESIS, which offer broader coverage.
Regarding input features, spatial–spectral inputs are employed in many studies and have been critical for achieving high classification accuracy, particularly with DL models. Furthermore, additional features, such as texture, which captures spatial patterns within pixel distributions, spectral indices like the Normalized Difference Vegetation Index (NDVI), and LiDAR-derived products, have been integrated in some studies [109,131] to improve classification performance. Although features such as phenological characteristics, climatic variables, topographic data, and additional modalities from sensors like radar and multispectral imagery hold great potential, they remain underutilized in crop mapping research. Their integration into deep learning-based multi-modal frameworks presents a promising opportunity to enhance the accuracy and reliability of crop classification. For instance, some studies [26,142] have successfully combined weather data with multispectral satellite imagery, demonstrating the value of integrating diverse data sources; however, such multi-modal approaches remain unexplored with HSI. Bridging this gap could lead to significant advances in generating more precise crop maps, thereby improving decision-making in precision agriculture and addressing critical challenges in sustainable farming.
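For concreteness, a spectral index such as NDVI can be derived directly from a hyperspectral cube and appended to the input features. The NumPy sketch below assumes illustrative band-centre wavelengths; it is not tied to any specific sensor from the reviewed studies.

```python
# Minimal sketch of deriving NDVI from a hyperspectral cube and stacking it
# with the raw bands as an extra input feature. Wavelengths are illustrative.
import numpy as np

def ndvi_from_cube(cube, wavelengths, red_nm=670.0, nir_nm=800.0):
    """cube: (bands, height, width); wavelengths: (bands,) in nm."""
    red = cube[np.argmin(np.abs(wavelengths - red_nm))]
    nir = cube[np.argmin(np.abs(wavelengths - nir_nm))]
    return (nir - red) / (nir + red + 1e-8)   # guard against division by zero

bands = 200
cube = np.random.rand(bands, 64, 64).astype(np.float32)
wavelengths = np.linspace(400, 2500, bands)   # nominal band centres (nm)
ndvi = ndvi_from_cube(cube, wavelengths)
features = np.concatenate([cube, ndvi[None]], axis=0)  # (bands + 1, h, w)
print(features.shape)  # (201, 64, 64)
```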

6.3. Synthesis of Key Challenges in Crop Type Mapping Using HSI

Crop type mapping using HSI faces several challenges. The first is the high dimensionality and computational complexity of hyperspectral data, which can comprise hundreds of contiguous narrow bands, leading to substantial processing and computational demands; this is further complicated by the significant financial investment required for high-performance computing infrastructure, making HSI less accessible to many researchers and creating a barrier to widespread adoption and innovation [25]. Addressing these challenges requires the development of computationally efficient algorithms and dimensionality reduction techniques, such as band selection and feature extraction [143]. Additionally, cloud computing represents an effective and promising solution for facilitating hyperspectral data analysis and storage: the increasing availability of hyperspectral datasets, tools, and algorithms on cloud platforms such as Google Earth Engine (GEE) enables researchers to access scalable computational resources and collaborate more effectively [144]. This democratization of cloud-based hyperspectral tools is expected to significantly enhance accessibility, reduce costs, and accelerate advancements in hyperspectral remote sensing research.

Another major challenge is the limited availability and quality of labeled training data, which are often constrained by proprietary rights or governmental restrictions that limit their use for research and model development [4]. Additionally, the high cost and time-intensive nature of collecting and labeling data hinder the development of robust classification algorithms. Recent advances, including transfer learning and SSL, have shown promise in mitigating data scarcity by employing pre-trained models [145].

Furthermore, the interpretability of ML/DL models remains a pressing challenge, particularly in hyperspectral crop mapping. Many advanced models, especially DL architectures, are often described as ‘black boxes’, as their internal mechanisms and decision-making processes are not easily understood [146]. Efforts to enhance model interpretability, such as feature importance analysis and the integration of explainable AI (XAI) techniques, are essential for improving the usability and accountability of these models in remote sensing applications [75,147].

Moreover, HSI suffers from low spatial resolution due to trade-offs in sensor design, leading to the issue of mixed pixels, where multiple spectral signatures are combined within a single pixel, hampering accurate crop type classification. Addressing this challenge requires advanced methodologies, such as super-resolution techniques to enhance spatial resolution [148] or spectral unmixing methods to separate mixed-pixel information [149] (see the sketch at the end of this subsection).

Lastly, despite the increasing interest in remote sensing for agricultural monitoring, studies focusing on crop type mapping in Africa, and particularly in Morocco, remain limited. Most studies in the region [150,151] have relied on multispectral imagery, often combined with radar imagery, and on traditional ML/DL algorithms, which struggle to differentiate similar crops such as cereal varieties. HSI, with its potential for detailed spectral analysis, has rarely been used, as existing research in the region [152,153] has primarily focused on soil property estimation rather than crop differentiation.
The scarcity of HSI-based crop mapping studies in Morocco and across Africa represents a critical research gap, especially as hyperspectral space-borne imagery becomes increasingly accessible. To address it, future research should focus on improving ground truth data collection, promoting data sharing, and applying advanced AI techniques to enhance crop classification and support sustainable agriculture.
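As minimal sketches of two of the mitigation techniques discussed in this subsection, the following Python code applies PCA-based dimensionality reduction to pixel spectra and then recovers abundances for a synthetic mixed pixel via non-negative least squares. The endmember spectra and abundances are illustrative assumptions, not measurements from any reviewed study.

```python
# Minimal sketches of dimensionality reduction (PCA over pixel spectra) and
# linear spectral unmixing of a mixed pixel. Synthetic data for illustration.
import numpy as np
from sklearn.decomposition import PCA
from scipy.optimize import nnls

bands, h, w = 200, 64, 64
cube = np.random.rand(h * w, bands)   # pixels as rows, bands as columns

# 1) Dimensionality reduction: project each spectrum onto 30 components.
reduced = PCA(n_components=30).fit_transform(cube)
print(reduced.shape)  # (4096, 30)

# 2) Linear unmixing: recover per-endmember abundances of one mixed pixel,
#    assuming known endmember spectra E (bands x endmembers).
E = np.random.rand(bands, 3)          # e.g., wheat, barley, bare soil
true_abund = np.array([0.6, 0.3, 0.1])
pixel = E @ true_abund                # the observed mixed-pixel spectrum
abund, _ = nnls(E, pixel)             # non-negative least squares
abund /= abund.sum()                  # enforce sum-to-one (optional)
print(np.round(abund, 2))             # approx. [0.6, 0.3, 0.1]
```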

7. Conclusions

This paper provided a comprehensive overview of studies on crop type classification models using hyperspectral imaging (HSI) data. It highlighted the current state-of-the-art approaches regarding platforms and sensors for data acquisition, input features, and the architectures and models developed for classification tasks in agricultural applications. The key findings of this review can be summarized in the following five points. First, HSI demonstrates significant potential for crop mapping due to its rich spectral information, which offers highly discriminative capabilities for classifying crops; the increasing availability and accessibility of hyperspectral data further underscore its growing importance in precision agriculture. Second, advanced artificial intelligence (AI) techniques and big data, supported by the rapid development of cloud and high-performance computing, have greatly enhanced the ability to process and analyze hyperspectral data for crop mapping applications. Third, most reviewed studies rely on benchmark datasets for algorithm development; however, these datasets often fail to represent the complexity and variability of real-world agricultural conditions, highlighting the need for studies that focus on large-scale and diverse environments. Fourth, the transferability of classification models across different datasets and regions remains an under-explored area of research, presenting a critical gap that requires further investigation. Fifth, there is a notable scarcity of studies integrating HSI and AI within Africa. This gap presents a significant opportunity to explore how these technologies can address the continent’s unique challenges, with a primary focus on enhancing crop monitoring and ensuring food security.

Author Contributions

All the authors have contributed substantially to this manuscript. Conceptualization, A.L., F.B. and A.C.; methodology, M.B., H.H. and N.E.; validation, M.B., H.H., A.L. and N.E.; formal analysis, M.B. and H.H.; investigation, M.B. and N.E.; writing—original draft preparation, M.B.; writing—review and editing, M.B., H.H., A.L., H.A.A. and A.E.-B.; supervision, A.L. and F.B.; co-supervision, A.E.-B., A.C., H.H., H.A.A. and A.A.; project administration, A.A. and A.L.; funding acquisition, A.C., A.L. and A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This project was financially supported by The Yield Gap project (agreement between OCP Foundation and the Mohammed VI Polytechnic University (UM6P)). The lead author received financial support from UM6P.

Data Availability Statement

No data were used for the research described in this article.

Acknowledgments

The authors acknowledge the technical support of all those who helped in conducting this study. We also thank UM6P for providing ChatGPT Edu, which was used to enhance the grammar, clarity, and linguistic proficiency of the text in this article. Finally, the authors thank the academic editor and the anonymous reviewers for reviewing earlier versions of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yailymova, H.; Yailymov, B.; Mazur, Y.; Kussul, N.; Shelestov, A. Sustainable Development Goal 2.4.1 for Ukraine Based on Geospatial Data. Proc. Int. Conf. Appl. Innov. IT 2023, 11, 67–73. [Google Scholar] [CrossRef]
  2. Calicioglu, O.; Flammini, A.; Bracco, S.; Bellù, L.; Sims, R. The Future Challenges of Food and Agriculture: An Integrated Analysis of Trends and Solutions. Sustainability 2019, 11, 222. [Google Scholar] [CrossRef]
  3. United Nations, Department of Economic and Social Affairs, Population Division. Population 2030: Demographic Challenges and Opportunities for Sustainable Development Planning; ST/ESA/SER.A/389; United Nations: New York, NY, USA, 2015. [Google Scholar]
  4. Choukri, M.; Laamrani, A.; Chehbouni, A. Use of Optical and Radar Imagery for Crop Type Classification in Africa: A Review. Sensors 2024, 24, 3618. [Google Scholar] [CrossRef] [PubMed]
  5. Devkota, K.P.; Bouasria, A.; Devkota, M.; Nangia, V. Predicting Wheat Yield Gap and Its Determinants Combining Remote Sensing, Machine Learning, and Survey Approaches in Rainfed Mediterranean Regions of Morocco. Eur. J. Agron. 2024, 158, 127195. [Google Scholar] [CrossRef]
  6. Khechba, K.; Laamrani, A.; Dhiba, D.; Misbah, K.; Chehbouni, A. Monitoring and Analyzing Yield Gap in Africa through Soil Attribute Best Management Using Remote Sensing Approaches: A Review. Remote Sens. 2021, 13, 4602. [Google Scholar] [CrossRef]
  7. Defourny, P.; Jarvis, I.; Blaes, X. JECAM Guidelines for Cropland and Crop Type Definition and Field Data Collection. 2014. Available online: http://jecam.org/wp-content/uploads/2018/10/JECAM_Guidelines_for_Field_Data_Collection_v1_0.pdf (accessed on 19 February 2025).
  8. Korotkova, I.; Efremova, N. AI for Agriculture: The Comparison of Semantic Segmentation Methods for Crop Mapping with Sentinel-2 Imagery. arXiv 2023, arXiv:2311.12993. [Google Scholar]
  9. Kamenova, I.; Chanev, M.; Dimitrov, P.; Filchev, L.; Bonchev, B.; Zhu, L.; Dong, Q. Crop Type Mapping and Winter Wheat Yield Prediction Utilizing Sentinel-2: A Case Study from Upper Thracian Lowland, Bulgaria. Remote Sens. 2024, 16, 1144. [Google Scholar] [CrossRef]
  10. Zhang, J.; You, S.; Liu, A.; Xie, L.; Huang, C.; Han, X.; Li, P.; Wu, Y.; Deng, J. Winter Wheat Mapping Method Based on Pseudo-Labels and U-Net Model for Training Sample Shortage. Remote Sens. 2024, 16, 2553. [Google Scholar] [CrossRef]
  11. Gao, M.; Lu, T.; Wang, L. Crop Mapping Based on Sentinel-2 Images Using Semantic Segmentation Model of Attention Mechanism. Sensors 2023, 23, 7008. [Google Scholar] [CrossRef]
  12. Lu, B.; Dao, P.D.; Liu, J.; He, Y.; Shang, J. Recent Advances of Hyperspectral Imaging Technology and Applications in Agriculture. Remote Sens. 2020, 12, 2659. [Google Scholar] [CrossRef]
  13. Alami Machichi, M.; El Mansouri, L.; Imani, Y.; Bourja, O.; Hadria, R.; Lahlou, O.; Benmansour, S.; Zennayi, Y.; Bourzeix, F. CerealNet: A Hybrid Deep Learning Architecture for Cereal Crop Mapping Using Sentinel-2 Time-Series. Informatics 2022, 9, 96. [Google Scholar] [CrossRef]
  14. Aneece, I.; Thenkabail, P. Accuracies Achieved in Classifying Five Leading World Crop Types and Their Growth Stages Using Optimal Earth Observing-1 Hyperion Hyperspectral Narrowbands on Google Earth Engine. Remote Sens. 2018, 10, 2027. [Google Scholar] [CrossRef]
  15. Hamzeh, S.; Hajeb, M.; Alavipanah, S.K.; Verrelst, J. Retrieval of Sugarcane Leaf Area Index from PRISMA Hyperspectral Data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, X-4-W1-2022, 271–277. [Google Scholar] [CrossRef]
  16. Aneece, I.; Foley, D.; Thenkabail, P.; Oliphant, A.; Teluguntla, P. New Generation Hyperspectral Data From DESIS Compared to High Spatial Resolution PlanetScope Data for Crop Type Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 7846–7858. [Google Scholar] [CrossRef]
  17. Meng, S.; Wang, X.; Hu, X.; Luo, C.; Zhong, Y. Deep Learning-Based Crop Mapping in the Cloudy Season Using One-Shot Hyperspectral Satellite Imagery. Comput. Electron. Agric. 2021, 186, 106188. [Google Scholar] [CrossRef]
  18. Thiele, S.T.; Bnoulkacem, Z.; Lorenz, S.; Bordenave, A.; Menegoni, N.; Madriz, Y.; Dujoncquoy, E.; Gloaguen, R.; Kenter, J. Mineralogical Mapping with Accurately Corrected Shortwave Infrared Hyperspectral Data Acquired Obliquely from UAVs. Remote Sens. 2022, 14, 5. [Google Scholar] [CrossRef]
  19. Alonso, K.; Bachmann, M.; Burch, K.; Carmona, E.; Cerra, D.; de los Reyes, R.; Dietrich, D.; Heiden, U.; Hölderlin, A.; Ickes, J.; et al. Data Products, Quality and Validation of the DLR Earth Sensing Imaging Spectrometer (DESIS). Sensors 2019, 19, 4471. [Google Scholar] [CrossRef] [PubMed]
  20. Cogliati, S.; Sarti, F.; Chiarantini, L.; Cosi, M.; Lorusso, R.; Lopinto, E.; Miglietta, F.; Genesio, L.; Guanter, L.; Damm, A.; et al. The PRISMA Imaging Spectroscopy Mission: Overview and First Performance Analysis. Remote Sens. Environ. 2021, 262, 112499. [Google Scholar] [CrossRef]
  21. Chabrillat, S.; Foerster, S.; Segl, K.; Beamish, A.; Brell, M.; Asadzadeh, S.; Milewski, R.; Ward, K.J.; Brosinsky, A.; Koch, K.; et al. The EnMAP Spaceborne Imaging Spectroscopy Mission: Initial Scientific Results Two Years after Launch. Remote Sens. Environ. 2024, 315, 114379. [Google Scholar] [CrossRef]
  22. Nieke, J.; Despoisse, L.; Gabriele, A.; Weber, H.; Strese, H.; Ghasemi, N.; Gascon, F.; Alonso, K.; Boccia, V.; Tsonevska, B.; et al. The Copernicus Hyperspectral Imaging Mission for the Environment (CHIME): An Overview of Its Mission, System and Planning Status. In Proceedings of the SPIE 12729, Sensors, Systems, and Next-Generation Satellites XXVII, Amsterdam, The Netherlands, 19 October 2023; p. 7. [Google Scholar] [CrossRef]
  23. Petropoulos, G.P.; Detsikas, S.E.; Lemesios, I.; Raj, R. Obtaining LULC Distribution at 30-m Resolution from Pixxel’s First Technology Demonstrator Hyperspectral Imagery. Int. J. Remote Sens. 2024, 45, 4883–4896. [Google Scholar] [CrossRef]
  24. Li, W.; Arundel, S.; Gao, S.; Goodchild, M.; Hu, Y.; Wang, S.; Zipf, A. GeoAI for Science and the Science of GeoAI. J. Spat. Inf. Sci. 2024, 29, 1–17. [Google Scholar] [CrossRef]
  25. Guerri, M.F.; Distante, C.; Spagnolo, P.; Bougourzi, F.; Taleb-Ahmed, A. Deep Learning Techniques for Hyperspectral Image Analysis in Agriculture: A Review. ISPRS Open J. Photogramm. Remote Sens. 2024, 12, 100062. [Google Scholar] [CrossRef]
  26. Addimando, N.; Engel, M.; Schwarz, F.; Batič, M. A Deep Learning Approach for Crop Type Mapping Based on Combined Time Series of Satellite and Weather Data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, XLIII-B3-2022, 1301–1308. [Google Scholar] [CrossRef]
  27. Snyder, H. Literature Review as a Research Methodology: An Overview and Guidelines. J. Bus. Res. 2019, 104, 333–339. [Google Scholar] [CrossRef]
  28. Ecarnot, F.; Seronde, M.F.; Chopard, R.; Schiele, F.; Meneveau, N. Writing a Scientific Article: A Step-by-Step Guide for Beginners. Eur. Geriatr. Med. 2015, 6, 573–579. [Google Scholar] [CrossRef]
  29. Alami Machichi, M.; El Mansouri, L.; Imani, Y.; Bourja, O.; Lahlou, O.; Zennayi, Y.; Bourzeix, F.; Hanadé Houmma, I.; Hadria, R. Crop Mapping Using Supervised Machine Learning and Deep Learning: A Systematic Literature Review. Int. J. Remote Sens. 2023, 44, 2717–2753. [Google Scholar] [CrossRef]
  30. Orynbaikyzy, A.; Gessner, U.; Conrad, C. Crop Type Classification Using a Combination of Optical and Radar Remote Sensing Data: A Review. Int. J. Remote Sens. 2019, 40, 6553–6595. [Google Scholar] [CrossRef]
  31. Karfi, K.E.; Fkihi, S.E.; Mansouri, L.E.; Naggar, O. Classification of Hyperspectral Remote Sensing Images for Crop Type Identification: State of the Art. In Proceedings of the 2nd International Conference on Advanced Technologies for Humanity—Volume 1: ICATH; SciTePress: Rabat, Morocco, 2020; pp. 11–18. [Google Scholar] [CrossRef]
  32. Ram, B.G.; Oduor, P.; Igathinathane, C.; Howatt, K.; Sun, X. A Systematic Review of Hyperspectral Imaging in Precision Agriculture: Analysis of Its Current State and Future Prospects. Comput. Electron. Agric. 2024, 222, 109037. [Google Scholar] [CrossRef]
  33. Moharram, M.A.; Sundaram, D.M. Land Use and Land Cover Classification with Hyperspectral Data: A Comprehensive Review of Methods, Challenges and Future Directions. Neurocomputing 2023, 536, 90–113. [Google Scholar] [CrossRef]
  34. Akewar, M.; Chandak, M. Hyperspectral Imaging Algorithms and Applications: A Review. TechRxiv 2024. [Google Scholar]
  35. Barbedo, J.G.A. A Review on the Combination of Deep Learning Techniques with Proximal Hyperspectral Images in Agriculture. Comput. Electron. Agric. 2023, 210, 107920. [Google Scholar] [CrossRef]
  36. Siegmund, A.; Menz, G. Fernes nah gebracht–Satelliten-und Luftbildeinsatz zur Analyse von Umweltveränderungen im Geographieunterricht. Geogr. Sch. 2005, 154, 2–10. [Google Scholar]
  37. Khan, A.; Vibhute, A.; Mali, S.; Patil, C. A Systematic Review on Hyperspectral Imaging Technology with a Machine and Deep Learning Methodology for Agricultural Applications. Ecol. Inform. 2022, 69, 101678. [Google Scholar] [CrossRef]
  38. Aiazzi, B.; Alparone, L.; Baronti, S.; Lastri, C.; Selva, M. Spectral Distortion in Lossy Compression of Hyperspectral Data. J. Electr. Comput. Eng. 2012, 2012, 850637. [Google Scholar] [CrossRef]
  39. Liu, N.; Li, W.; Wang, Y.; Tao, R.; Du, Q.; Chanussot, J. A Survey on Hyperspectral Image Restoration: From the View of Low-Rank Tensor Approximation. Sci. China Inf. Sci. 2023, 66, 140302. [Google Scholar] [CrossRef]
  40. Vuolo, F.; Neuwirth, M.; Immitzer, M.; Atzberger, C.; Ng, W.T. How much does multi-temporal Sentinel-2 data improve crop type classification? Int. J. Appl. Earth Obs. Geoinf. 2018, 72, 122–130. [Google Scholar]
  41. Wang, M.; Wang, J.; Cui, Y.; Liu, J.; Chen, L. Agricultural Field Boundary Delineation with Satellite Image Segmentation for High-Resolution Crop Mapping: A Case Study of Rice Paddy. Agronomy 2022, 12, 2342. [Google Scholar] [CrossRef]
  42. Liu, Y.; Diao, C.; Mei, W.; Zhang, C. CropSight: Towards a large-scale operational framework for object-based crop type ground truth retrieval using street view and PlanetScope satellite imagery. ISPRS J. Photogramm. Remote Sens. 2024, 216, 66–89. [Google Scholar]
  43. Kokhan, S.; Vostokov, A. Application of nanosatellites PlanetScope data to monitor crop growth. E3S Web Conf. 2020, 171, 02014. [Google Scholar]
  44. Bourriz, M.; Laamrani, A.; El-Battay, A.; Hajji, H.; Elbouanani, N.; Ait Abdelali, H.; Bourzeix, F.; Amazirh, A.; Chehbouni, A. An Intercomparison of Two Satellite-Based Hyperspectral Imagery (PRISMA & EnMAP) for Agricultural Mapping: Potential of These Sensors to Produce Hyperspectral Time-Series Essential for Tracking Crop Phenology and Enhancing Crop Type Mapping. 2025. Available online: https://meetingorganizer.copernicus.org/EGU25/EGU25-18417.html (accessed on 15 March 2025). [CrossRef]
  45. Chakraborty, R.; Rachdi, I.; Thiele, S.; Booysen, R.; Kirsch, M.; Lorenz, S.; Gloaguen, R.; Sebari, I. A Spectral and Spatial Comparison of Satellite-Based Hyperspectral Data for Geological Mapping. Remote Sens. 2024, 16, 2089. [Google Scholar] [CrossRef]
  46. Bostan, S.; Ortak, M.A.; Tuna, C.; Akoguz, A.; Sertel, E.; Berk Ustundag, B. Comparison of Classification Accuracy of Co-Located Hyperspectral & Multispectral Images for Agricultural Purposes. In Proceedings of the 2016 Fifth International Conference on Agro-Geoinformatics (Agro-Geoinformatics), Tianjin, China, 18–20 July 2016; pp. 1–4. [Google Scholar] [CrossRef]
  47. Surase, R.; Kale, K.; Solankar, M.; Hanumant, G.; Varpe, A.; Vibhute, A.; Gaikwad, S.; Nalawade, D. Assessment of EO-1 Hyperion Imagery for Crop Discrimination Using Spectral Analysis. In Microelectronics, Electromagnetics and Telecommunications; Springer: Singapore, 2018; pp. 505–515. [Google Scholar]
  48. Aneece, I.; Thenkabail, P. Classifying Crop Types Using Two Generations of Hyperspectral Sensors (Hyperion and DESIS) with Machine Learning on the Cloud. Remote Sens. 2021, 13, 4704. [Google Scholar] [CrossRef]
  49. Transon, J.; D’Andrimont, R.; Maugnard, A.; Defourny, P. Survey of Hyperspectral Earth Observation Applications from Space in the Sentinel-2 Context. Remote Sens. 2018, 10, 157. [Google Scholar] [CrossRef]
  50. Aneece, I.; Thenkabail, P. New Generation Hyperspectral Sensors DESIS and PRISMA Provide Improved Agricultural Crop Classifications. Photogramm. Eng. Remote Sens. 2022, 88, 715–729. [Google Scholar] [CrossRef]
  51. Aneece, I.; Thenkabail, P.S.; McCormick, R.; Alifu, H.; Foley, D.; Oliphant, A.J.; Teluguntla, P. Machine Learning and New-Generation Spaceborne Hyperspectral Data Advance Crop Type Mapping. Photogramm. Eng. Remote Sens. 2024, 90, 687–698. [Google Scholar] [CrossRef]
  52. Sedighi, A.; Hamzeh, S.; Firozjaei, M.; Goodarzi, H.; Naseri, A. Comparative Analysis of Multispectral and Hyperspectral Imagery for Mapping Sugarcane Varieties. PFG—J. Photogramm. Remote Sens. Geoinf. Sci. 2023, 91, 453–470. [Google Scholar] [CrossRef]
  53. Chakraborty, R.; Thiele, S.; Naik, P.; Gloaguen, R. Evaluation of Spectral and Spatial Reconstruction in Resolution-Enhanced Hyperspectral Data for Effective Mineral Mapping. 2025. Available online: https://www.researchgate.net/publication/389263998?channel=doi&linkId=67bb7129f5cb8f70d5bd618e&showFulltext=true (accessed on 15 February 2025). [CrossRef]
  54. Eltner, A.; Hoffmeister, D.; Kaiser, A.; Karrasch, P.; Klingbeil, L.; Stöcker, C.; Rovere, A. (Eds.) UAVs for the Environmental Sciences: Methods and Applications; wbg Academic: Darmstadt, Germany, 2022; ISBN 978-3-534-40588-6. [Google Scholar]
  55. Ambrosi, G. UAV4PrecisAg. Image Created by Ambrosi, Gianluca. Available online: https://lms.geoversity.io/course/view.php?id=4 (accessed on 23 January 2025).
  56. Spiller, D.; Ansalone, L.; Carotenuto, F.; Mathieu, P.P. Crop Type Mapping Using Prisma Hyperspectral Images and One-Dimensional Convolutional Neural Network. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 8166–8169. [Google Scholar] [CrossRef]
  57. Thoreau, R.; Risser, L.; Achard, V.; Berthelot, B.; Briottet, X. Toulouse Hyperspectral Data Set: A benchmark data set to assess semi-supervised spectral representation learning and pixel-wise classification techniques. ISPRS J. Photogramm. Remote Sens. 2024, 212, 323–337. [Google Scholar] [CrossRef]
  58. Roupioz, L.; Briottet, X.; Adeline, K.; Al Bitar, A.; Barbon-Dubosc, D.; Barda-Chatain, R.; Barillot, P.; Bridier, S.; Carroll, E.; Cassante, C.; et al. Multi-source datasets acquired over Toulouse (France) in 2021 for urban microclimate studies during the CAMCATT/AI4GEO field campaign. Data Brief 2023, 48, 109109. [Google Scholar] [CrossRef] [PubMed]
  59. Joshi, A.; Pradhan, B.; Gite, S.; Chakraborty, S. Remote-Sensing Data and Deep-Learning Techniques in Crop Mapping and Yield Prediction: A Systematic Review. Remote Sens. 2023, 15, 2014. [Google Scholar] [CrossRef]
  60. Misbah, K.; Laamrani, A.; Chehbouni, A.; Dhiba, D.; Ezzahar, J. Use of Hyperspectral Prisma Level-1 Data and ISDA Soil Fertility Map for Soil Macronutrient Availability Quantification in a Moroccan Agricultural Land. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 7051–7054. [Google Scholar] [CrossRef]
  61. Arroyo-Mora, J.P.; Kalacska, M.; Inamdar, D.; Soffer, R.; Lucanus, O.; Gorman, J.; Naprstek, T.; Schaaf, E.S.; Ifimov, G.; Elmer, K.; et al. Implementation of a UAV–Hyperspectral Pushbroom Imager for Ecological Monitoring. Drones 2019, 3, 12. [Google Scholar] [CrossRef]
  62. Yokoya, N.; Chan, J.C.W.; Segl, K. Potential of Resolution-Enhanced Hyperspectral Data for Mineral Mapping Using Simulated EnMAP and Sentinel-2 Images. Remote Sens. 2016, 8, 172. [Google Scholar] [CrossRef]
  63. Li, H.; Ye, W.; Liu, J.; Tan, W.; Pirasteh, S.; Fatholahi, S.N.; Li, J. High-Resolution Terrain Modeling Using Airborne LiDAR Data with Transfer Learning. Remote Sens. 2021, 13, 3448. [Google Scholar] [CrossRef]
  64. Cao, J.; Liu, K.; Zhuo, L.; Liu, L.; Zhu, Y.; Peng, L. Combining UAV-based hyperspectral and LiDAR data for mangrove species classification using the rotation forest algorithm. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102414. [Google Scholar]
  65. Stewart, A.J.; Robinson, C.; Corley, I.A.; Ortiz, A.; Ferres, J.M.L.; Banerjee, A. Torchgeo: Deep learning with geospatial data. In Proceedings of the 30th International Conference on Advances in Geographic Information Systems, Seattle, WA, USA, 1–4 November 2022; pp. 1–12. [Google Scholar]
  66. Ahmad, M.; Shabbir, S.; Roy, S.K.; Hong, D.; Wu, X.; Yao, J.; Khan, A.M.; Mazzara, M.; Distefano, S.; Chanussot, J. Hyperspectral Image Classification—Traditional to Deep Models: A Survey for Future Prospects. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 968–999. [Google Scholar] [CrossRef]
  67. Akewar, M.; Chandak, M. An Integration of Natural Language and Hyperspectral Imaging: A Review. IEEE Geosci. Remote Sens. Mag. 2024, 13, 32–54. [Google Scholar] [CrossRef]
  68. Lebrini, Y.; Boudhar, A.; Hadria, R.; Lionboui, H.; Elmansouri, L.; Arrach, R.; Ceccato, P.; Benabdelouahab, T. Identifying Agricultural Systems Using SVM Classification Approach Based on Phenological Metrics in a Semi-arid Region of Morocco. Earth Syst. Environ. 2019, 3, 277–288. [Google Scholar] [CrossRef]
  69. Jacon, A.; Galvão, L.; Santos, J.; Sano, E. Seasonal Characterization and Discrimination of Savannah Physiognomies in Brazil Using Hyperspectral Metrics from Hyperion/EO-1. Int. J. Remote Sens. 2017, 38, 4494–4516. [Google Scholar] [CrossRef]
  70. Reddy, B.S.; Sharma, S.; Shwetha, H.R. Crop Classification Based on Optimal Hyperspectral Narrow Bands Using Machine Learning and Hyperion Data. In Proceedings of the 2023 IEEE India Geoscience and Remote Sensing Symposium (InGARSS), Bangalore, India, 10–13 December 2023; pp. 1–4. [Google Scholar] [CrossRef]
  71. Erdanaev, E.; Kappas, M.; Wyss, D. Irrigated Crop Types Mapping in Tashkent Province of Uzbekistan with Remote Sensing-Based Classification Methods. Sensors 2022, 22, 5683. [Google Scholar] [CrossRef] [PubMed]
  72. Ustuner, M.; Sanli, F.B. Evaluating Training Data for Crop Type Classifıcation Using Support Vector Machine and Random Forests. Geod. Glas. 2017, 51, 125–133. [Google Scholar] [CrossRef]
  73. Monaco, M.; Sileo, A.; Orlandi, D.; Battagliere, M.; Candela, L.; Cimino, M.; Vivaldi, G.; Giannico, V. Semantic Segmentation of Crops via Hyperspectral PRISMA Satellite Images. In Proceedings of the 10th International Conference on Geographical Information Systems Theory, Applications and Management (GISTAM 2024), Angers, France, 2–4 May 2024; SciTePress: Setubal, Portugal, 2024; pp. 187–194. [Google Scholar] [CrossRef]
  74. Bhosle, K.; Musande, V. Evaluation of Deep Learning Convolutional Neural Network for Crop Classification. Int. J. Recent Technol. Eng. 2019, 8, 3960–3963. [Google Scholar] [CrossRef]
  75. Liu, N.; Zhao, Q.; Williams, R.; Barrett, B. Enhanced Crop Classification through Integrated Optical and SAR Data: A Deep Learning Approach for Multi-Source Image Fusion. Int. J. Remote Sens. 2023, 45, 7605–7633. [Google Scholar] [CrossRef]
  76. Shuai, L.; Li, Z.; Chen, Z.; Luo, D.; Mu, J. A Research Review on Deep Learning Combined with Hyperspectral Imaging in Multiscale Agricultural Sensing. Comput. Electron. Agric. 2024, 217, 108577. [Google Scholar] [CrossRef]
  77. LeCun, Y.; Kavukcuoglu, K.; Farabet, C. Convolutional networks and applications in vision. In Proceedings of the 2010 IEEE International Symposium on Circuits and Systems, Paris, France, 30 May–2 June 2010; pp. 253–256. [Google Scholar]
  78. Janga, B.; Asamani, G.P.; Sun, Z.; Cristea, N. A Review of Practical AI for Remote Sensing in Earth Sciences. Remote Sens. 2023, 15, 4112. [Google Scholar] [CrossRef]
  79. Kanthi, M.; Sarma, T.H.; Bindu, C.S. A 3d-Deep CNN Based Feature Extraction and Hyperspectral Image Classification. In Proceedings of the 2020 IEEE India Geoscience and Remote Sensing Symposium (InGARSS), Ahmedabad, India, 1–4 December 2020; pp. 229–232. [Google Scholar] [CrossRef]
  80. Mirzaei, S.; Pascucci, S.; Carfora, M.F.; Casa, R.; Rossi, F.; Santini, F.; Palombo, A.; Laneve, G.; Pignatti, S. Early-Season Crop Mapping by PRISMA Images Using Machine/Deep Learning Approaches: Italy and Iran Test Cases. Remote Sens. 2024, 16, 2431. [Google Scholar] [CrossRef]
  81. Ashraf, M.; Chen, L.; Innab, N.; Umer, M.; Baili, J.; Kim, T.H.; Ashraf, I. Novel 3-D Deep Neural Network Architecture for Crop Classification Using Remote Sensing-Based Hyperspectral Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 12649–12665. [Google Scholar] [CrossRef]
  82. Meghraoui, K.; Sebari, I.; Pilz, J.; Kadi, K.; Bensiali, S. Applied Deep Learning-Based Crop Yield Prediction: A Systematic Analysis of Current Developments and Potential Challenges. Technologies 2024, 12, 43. [Google Scholar] [CrossRef]
  83. Sahoo, A.R.; Chakraborty, P. Hybrid CNN Bi-LSTM Neural Network for Hyperspectral Image Classification. arXiv 2024, arXiv:2402.10026. [Google Scholar]
  84. Zhou, F.; Hang, R.; Liu, Q.; Yuan, X. Hyperspectral Image Classification Using Spectral-Spatial LSTMs. Neurocomputing 2019, 328, 39–47. [Google Scholar] [CrossRef]
  85. Ahmad, M.; Distefano, S.; Khan, A.M.; Mazzara, M.; Li, C.; Yao, J.; Li, H.; Aryal, J.; Vivone, G.; Hong, D. A Comprehensive Survey for Hyperspectral Image Classification: The Evolution from Conventional to Transformers. arXiv 2024, arXiv:2404.14955. [Google Scholar]
  86. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2023, arXiv:1706.03762. [Google Scholar]
  87. Xie, J.; Hua, J.; Chen, S.; Wu, P.; Gao, P.; Sun, D.; Lyu, Z.; Lyu, S.; Xue, X.; Lu, J. HyperSFormer: A Transformer-Based End-to-End Hyperspectral Image Classification Method for Crop Classification. Remote Sens. 2023, 15, 3491. [Google Scholar] [CrossRef]
  88. Zhang, H.; Feng, S.; Wu, D.; Zhao, C.; Liu, X.; Zhou, Y.; Wang, S.; Deng, H.; Zheng, S. Hyperspectral Image Classification on Large-Scale Agricultural Crops: The Heilongjiang Benchmark Dataset, Validation Procedure, and Baseline Results. Remote Sens. 2024, 16, 478. [Google Scholar] [CrossRef]
  89. Li, J.; Cai, Y.; Li, Q.; Kou, M.; Zhang, T. A Review of Remote Sensing Image Segmentation by Deep Learning Methods. Int. J. Digit. Earth 2024, 17, 2328827. [Google Scholar] [CrossRef]
  90. Guo, F.; Feng, Q.; Yang, S.; Yang, W. CMTNet: Convolutional Meets Transformer Network for Hyperspectral Images Classification. arXiv 2024, arXiv:2406.14080. [Google Scholar]
  91. Wang, A.; Xing, S.; Zhao, Y.; Wu, H.; Iwahori, Y. A Hyperspectral Image Classification Method Based on Adaptive Spectral Spatial Kernel Combined with Improved Vision Transformer. Remote Sens. 2022, 14, 3705. [Google Scholar] [CrossRef]
  92. Liu, Y.; Han, T.; Ma, S.; Zhang, J.; Yang, Y.; Tian, J.; He, H.; Li, A.; He, M.; Liu, Z.; et al. Summary of chatgpt-related research and perspective towards the future of large language models. Meta-Radiology 2023, 1, 100017. [Google Scholar] [CrossRef]
  93. Team, G.; Anil, R.; Borgeaud, S.; Alayrac, J.B.; Yu, J.; Soricut, R.; Schalkwyk, J.; Dai, A.M.; Hauth, A.; Millican, K.; et al. Gemini: A family of highly capable multimodal models. arXiv 2023, arXiv:2312.11805. [Google Scholar]
  94. Lu, S.; Guo, J.; Zimmer-Dauphinee, J.R.; Nieusma, J.M.; Wang, X.; VanValkenburgh, P.; Wernke, S.A.; Huo, Y. AI foundation models in remote sensing: A survey. arXiv 2024, arXiv:2408.03464. [Google Scholar]
  95. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.Y.; et al. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 4015–4026. [Google Scholar]
  96. Xiong, Z.; Wang, Y.; Zhang, F.; Zhu, X.X. One for All: Toward Unified Foundation Models for Earth Vision. In Proceedings of the IGARSS 2024—2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 7–12 July 2024. [Google Scholar]
  97. Xu, Y.; Ma, Y.; Zhang, Z. Self-Supervised Pre-Training for Large-Scale Crop Mapping Using Sentinel-2 Time Series. ISPRS J. Photogramm. Remote Sens. 2024, 207, 312–325. [Google Scholar] [CrossRef]
  98. Ghorbanzadeh, O.; Shahabi, H.; Piralilou, S.T.; Crivellari, A.; Rosa, L.E.C.l.; Atzberger, C.; Li, J.; Ghamisi, P. Contrastive Self-Supervised Learning for Globally Distributed Landslide Detection. IEEE Access 2024, 12, 118453–118466. [Google Scholar] [CrossRef]
  99. Wang, Y.; Albrecht, C.M.; Braham, N.A.A.; Mou, L.; Zhu, X.X. Self-Supervised Learning in Remote Sensing: A Review. IEEE Geosci. Remote Sens. Mag. 2022, 10, 213–247. [Google Scholar] [CrossRef]
  100. Wang, D.; Hu, M.; Jin, Y.; Miao, Y.; Yang, J.; Xu, Y.; Qin, X.; Ma, J.; Sun, L.; Li, C.; et al. HyperSIGMA: Hyperspectral Intelligence Comprehension Foundation Model. arXiv 2024, arXiv:2406.11519. [Google Scholar]
  101. Braham, N.A.A.; Albrecht, C.M.; Mairal, J.; Chanussot, J.; Wang, Y.; Zhu, X.X. SpectralEarth: Training Hyperspectral Foundation Models at Scale. arXiv 2024, arXiv:2408.08447. [Google Scholar]
  102. Xiong, Z.; Wang, Y.; Zhang, F.; Stewart, A.J.; Hanna, J.; Borth, D.; Papoutsis, I.; Le Saux, B.; Camps-Valls, G.; Zhu, X.X. Neural plasticity-inspired foundation model for observing the Earth crossing modalities. arXiv 2024, arXiv:2403.15356. [Google Scholar]
  103. Zhao, S.; Chen, Z.; Xiong, Z.; Shi, Y.; Saha, S.; Zhu, X.X. Beyond Grid Data: Exploring graph neural networks for Earth observation. IEEE Geosci. Remote Sens. Mag. 2024, 13, 175–208. [Google Scholar]
  104. Hong, D.; Gao, L.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph Convolutional Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5966–5978. [Google Scholar] [CrossRef]
  105. Wu, G.; Al-qaness, M.A.A.; Al-Alimi, D.; Dahou, A.; Abd Elaziz, M.; Ewees, A.A. Hyperspectral Image Classification Using Graph Convolutional Network: A Comprehensive Review. Expert Syst. Appl. 2024, 257, 125106. [Google Scholar] [CrossRef]
  106. Liu, J.; Guan, R.; Li, Z.; Zhang, J.; Hu, Y.; Wang, X. Adaptive Multi-Feature Fusion Graph Convolutional Network for Hyperspectral Image Classification. Remote Sens. 2023, 15, 5483. [Google Scholar] [CrossRef]
  107. Yang, A.; Li, M.; Ding, Y.; Fang, L.; Cai, Y.; He, Y. Graphmamba: An efficient graph structure learning vision mamba for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5537414. [Google Scholar]
  108. Estudillo, J.P.; Kijima, Y.; Sonobe, T. Introduction: Agricultural Development in Asia and Africa. In Agricultural Development in Asia and Africa: Essays in Honor of Keijiro Otsuka; Estudillo, J.P., Kijima, Y., Sonobe, T., Eds.; Springer: Singapore, 2023; pp. 1–17. ISBN 978-981-19-5542-6. [Google Scholar] [CrossRef]
  109. Farmonov, N.; Esmaeili, M.; Abbasi-Moghadam, D.; Sharifi, A.; Amankulova, K.; Mucsi, L. HypsLiDNet: 3-D–2-D CNN Model and Spatial–Spectral Morphological Attention for Crop Classification With DESIS and LiDAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 11969–11996. [Google Scholar] [CrossRef]
  110. Patel, H.; Bhagia, N.; Vyas, T.; Bhattacharya, B.; Dave, K. Crop Identification and Discrimination Using AVIRIS-NG Hyperspectral Data Based on Deep Learning Techniques. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July 2019–2 August 2019; pp. 3728–3731. [Google Scholar] [CrossRef]
  111. Wei, L.; Yu, M.; Zhong, Y.; Zhao, J.; Liang, Y.; Hu, X. Spatial–Spectral Fusion Based on Conditional Random Fields for the Fine Classification of Crops in UAV-borne Hyperspectral Remote Sensing Imagery. Remote Sens. 2019, 11, 780. [Google Scholar] [CrossRef]
  112. Zhao, J.; Zhong, Y.; Hu, X.; Wei, L.; Zhang, L. A robust spectral-spatial approach to identifying heterogeneous crops using remote sensing imagery with high spectral and spatial resolutions. Remote Sens. Environ. 2020, 239, 111605. [Google Scholar] [CrossRef]
  113. Wan, S.; Yeh, M.L.; Ma, H.L. An Innovative Intelligent System with Integrated CNN and SVM: Considering Various Crops through Hyperspectral Image Data. ISPRS Int. J. Geo-Inf. 2021, 10, 242. [Google Scholar] [CrossRef]
  114. Wei, L.; Wang, K.; Lu, Q.; Liang, Y.; Li, H.; Wang, Z.; Wang, R.; Cao, L. Crops Fine Classification in Airborne Hyperspectral Imagery Based on Multi-Feature Fusion and Deep Learning. Remote Sens. 2021, 13, 2917. [Google Scholar] [CrossRef]
  115. Shi, H.; Cao, G.; Ge, Z.; Zhang, Y.; Fu, P. Double-Branch Network with Pyramidal Convolution and Iterative Attention for Hyperspectral Image Classification. Remote Sens. 2021, 13, 1403. [Google Scholar] [CrossRef]
  116. Liu, S.; Shi, Q.; Zhang, L. Few-Shot Hyperspectral Image Classification With Unknown Classes Using Multitask Deep Learning. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5085–5102. [Google Scholar] [CrossRef]
  117. Cheng, J.; Xu, Y.; Kong, L. Hyperspectral Imaging Classification Based on LBP Feature Extraction and Multimodel Ensemble Learning. Comput. Electr. Eng. 2021, 92, 107199. [Google Scholar] [CrossRef]
  118. Liu, D.; Wang, Y.; Liu, P.; Li, Q.; Yang, H.; Chen, D.; Liu, Z.; Han, G. A Multibranch Crossover Feature Attention Network for Hyperspectral Image Classification. Remote Sens. 2022, 14, 5778. [Google Scholar] [CrossRef]
  119. Agilandeeswari, L.; Prabukumar, M.; Radhesyam, V.; Phaneendra, K.L.B.; Farhan, A. Crop Classification for Agricultural Applications in Hyperspectral Remote Sensing Images. Appl. Sci. 2022, 12, 1670. [Google Scholar] [CrossRef]
  120. Singh, P.; Srivastava, P.K.; Shah, D.; Pandey, M.K.; Anand, A.; Prasad, R.; Dave, R.; Verrelst, J.; Bhattacharya, B.K.; Raghubanshi, A. Crop Type Discrimination Using Geo-Stat Endmember Extraction and Machine Learning Algorithms. Adv. Space Res. 2022, 73, 1331–1348. [Google Scholar] [CrossRef]
  121. Galodha, A.; Vashisht, R.; Nidamanuri, R.; Ramiya, A. Deep Convolution Neural Networks with ResNet Architecture for Spectral–Spatial Classification of Drone Borne and Ground Based High Resolution Hyperspectral Imagery. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 43, 577–584. [Google Scholar] [CrossRef]
  122. Bhosle, K.; Musande, V. Evaluation of CNN Model by Comparing with Convolutional Autoencoder and Deep Neural Network for Crop Classification on Hyperspectral Imagery. Geocarto Int. 2022, 37, 813–827. [Google Scholar] [CrossRef]
  123. Niu, B.; Feng, Q.; Chen, B.; Ou, C.; Liu, Y.; Yang, J. HSI-TransUNet: A Transformer Based Semantic Segmentation Model for Crop Mapping from UAV Hyperspectral Imagery. Comput. Electron. Agric. 2022, 201, 107297. [Google Scholar] [CrossRef]
  124. Qing, Y.; Huang, Q.; Feng, L.; Qi, Y.; Liu, W. Multiscale Feature Fusion Network Incorporating 3D Self-Attention for Hyperspectral Image Classification. Remote Sens. 2022, 14, 742. [Google Scholar] [CrossRef]
  125. Wu, H.; Zhou, H.; Wang, A.; Iwahori, Y. Precise Crop Classification of Hyperspectral Images Using Multi-Branch Feature Fusion and Dilation-Based MLP. Remote Sens. 2022, 14, 2713. [Google Scholar] [CrossRef]
  126. Zheng, X.; Sun, H.; Lu, X.; Xie, W. Rotation-Invariant Attention Network for Hyperspectral Image Classification. IEEE Trans. Image Process. 2022, 31, 4251–4265. [Google Scholar] [CrossRef]
  127. Hamza, M.A.; Alrowais, F.; Alzahrani, J.S.; Mahgoub, H.; Salem, N.M.; Marzouk, R. Squirrel Search Optimization with Deep Transfer Learning-Enabled Crop Classification Model on Hyperspectral Remote Sensing Imagery. Appl. Sci. 2022, 12, 5650. [Google Scholar] [CrossRef]
  128. Chen, H.; Qiu, Y.; Yin, D.; Chen, J.; Chen, X.; Liu, S.; Liu, L. Stacked Spectral Feature Space Patch: An Advanced Spectral Representation for Precise Crop Classification Based on Convolutional Neural Network. Crop J. 2022, 10, 1460–1469. [Google Scholar]
  129. Chen, H.; Li, X.; Zhou, J.; Wang, Y. TPPI: A Novel Network Framework and Model for Efficient Hyperspectral Image Classification. Photogramm. Eng. Remote Sens. 2022, 88, 535–546. [Google Scholar] [CrossRef]
  130. Sarma, A.; Rao Nidamanuri, R. Evolutionary Optimisation Techniques for Band Selection in Drone-Based Hyperspectral Images for Vegetable Crops Mapping. In Proceedings of the IGARSS 2023—2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 16–21 July 2023; Volume 2023, pp. 7384–7387. [Google Scholar] [CrossRef]
  131. Farmonov, N.; Amankulova, K.; Szatmari, J.; Sharifi, A.; Abbasi-Moghadam, D.; Nejad, S.M.M.; Mucsi, L. Crop Type Classification by DESIS Hyperspectral Imagery and Machine Learning Algorithms. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 1576–1588. [Google Scholar] [CrossRef]
  132. Alajmi, M.; Mengash, H.; Eltahir, M.; Assiri, M.; Ibrahim, S.; Salama, A. Exploiting Hyperspectral Imaging and Optimal Deep Learning for Crop Type Detection and Classification. IEEE Access 2023, 11, 124985–124995. [Google Scholar] [CrossRef]
  133. Scheibenreif, L.; Mommert, M.; Borth, D. Masked Vision Transformers for Hyperspectral Image Classification. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver, BC, Canada, 17–24 June 2023; pp. 2166–2176. [Google Scholar] [CrossRef]
  134. Bai, Y.; Xu, M.; Zhang, L.; Liu, Y. Pruning Multi-Scale Multi-Branch Network for Small-Sample Hyperspectral Image Classification. Electronics 2023, 12, 674. [Google Scholar] [CrossRef]
  135. Huang, X.; Zhou, Y.; Yang, X.; Zhu, X.; Wang, K. SS-TMNet: Spatial–Spectral Transformer Network with Multi-Scale Convolution for Hyperspectral Image Classification. Remote Sens. 2023, 15, 1206. [Google Scholar] [CrossRef]
  136. Liang, J.; Yang, Z.; Bi, Y.; Qu, B.; Liu, M.; Xue, B.; Zhang, M. A Multitree Genetic Programming-Based Feature Construction Approach to Crop Classification Using Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–17. [Google Scholar] [CrossRef]
  137. Islam, M.R.; Islam, M.T.; Uddin, M.P.; Ulhaq, A. Improving Hyperspectral Image Classification with Compact Multi-Branch Deep Learning. Remote Sens. 2024, 16, 2069. [Google Scholar] [CrossRef]
  138. Zhang, B.; Chen, Y.; Li, Z.; Xiong, S.; Lu, X. SANet: A Self-Attention Network for Agricultural Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–15. [Google Scholar] [CrossRef]
  139. Sankararao, A.U.G.; Rajalakshmi, P.; Choudhary, S. UC-HSI: UAV-Based Crop Hyperspectral Imaging Datasets and Machine Learning Benchmark Results. IEEE Geosci. Remote Sens. Lett. 2024, 21, 1–5. [Google Scholar] [CrossRef]
  140. Chen, R.; Vivone, G.; Li, G.; Dai, C.; Hong, D.; Chanussot, J. Graph U-Net With Topology-Feature Awareness Pooling for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2025, 63, 1–14. [Google Scholar] [CrossRef]
  141. Cheng, W.; Ye, H.; Wen, X.; Su, Q.; Hu, H.; Zhang, J.; Zhang, F. A Crop’s Spectral Signature Is Worth a Compressive Text. Comput. Electron. Agric. 2024, 227, 109576. [Google Scholar] [CrossRef]
  142. Ravirathinam, P.; Ghosh, R.; Khandelwal, A.; Jia, X.; Mulla, D.; Kumar, V. Combining Satellite and Weather Data for Crop Type Mapping: An Inverse Modelling Approach. In Proceedings of the 2024 SIAM International Conference on Data Mining (SDM), SIAM, Houston, TX, USA, 18–20 April 2024; pp. 445–453. [Google Scholar]
  143. Elbouanani, N.; Laamrani, A.; El-Battay, A.; Hajji, H.; Bourriz, M.; Bourzeix, F.; Ait Abdelali, H.; Amazirh, A.; Chehbouni, A. Enhancing Soil Fertility Mapping with Hyperspectral Remote Sensing and Advanced AI: A Comparative Study of Dimensionality Reduction Techniques in Morocco. 2025. Available online: https://meetingorganizer.copernicus.org/EGU25/EGU25-18418.html (accessed on 15 March 2025). [CrossRef]
  144. Ghassemi, B.; Immitzer, M.; Atzberger, C.; Vuolo, F. Evaluation of Accuracy Enhancement in European-Wide Crop Type Mapping by Combining Optical and Microwave Time Series. Land 2022, 11, 1397. [Google Scholar] [CrossRef]
  145. Kakhani, N.; Rangzan, M.; Jamali, A.; Attarchi, S.; Alavipanah, S.K.; Mommert, M.; Tziolas, N.; Scholten, T. SSL-SoilNet: A Hybrid Transformer-based Framework with Self-Supervised Learning for Large-scale Soil Organic Carbon Prediction. IEEE Trans. Geosci. Remote Sens. 2024, 62, 4509915. [Google Scholar] [CrossRef]
  146. Höhl, A.; Obadic, I.; Torres, M.Á.F.; Najjar, H.; Oliveira, D.; Akata, Z.; Dengel, A.; Zhu, X.X. Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing. arXiv 2024, arXiv:2402.13791. [Google Scholar]
  147. Abbas, A.; Linardi, M.; Vareille, E.; Christophides, V.; Paris, C. Towards Explainable AI4EO: An Explainable Deep Learning Approach for Crop Type Mapping Using Satellite Images Time Series. In Proceedings of the IGARSS 2023—2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 16–21 July 2023; pp. 1088–1091. [Google Scholar] [CrossRef]
  148. Ghamisi, P.; Rasti, B.; Yokoya, N.; Wang, Q.; Hofle, B.; Bruzzone, L.; Bovolo, F.; Chi, M.; Anders, K.; Gloaguen, R.; et al. Multisource and Multitemporal Data Fusion in Remote Sensing: A Comprehensive Review of the State of the Art. IEEE Geosci. Remote Sens. Mag. 2019, 7, 6–39. [Google Scholar] [CrossRef]
  149. Villa, A.; Chanussot, J.; Benediktsson, J.A.; Jutten, C. Spectral Unmixing for the Classification of Hyperspectral Images at a Finer Spatial Resolution. IEEE J. Sel. Top. Signal Process. 2011, 5, 521–533. [Google Scholar] [CrossRef]
  150. Gumma, M.K.; Panjala, P.; Teluguntla, P. Mapping Heterogeneous Land Use/Land Cover and Crop Types in Senegal Using Sentinel-2 Data and Machine Learning Algorithms. Int. J. Digit. Earth 2024, 17, 2378815. [Google Scholar] [CrossRef]
  151. Ouzemou, J.E.; El Harti, A.; Lhissou, R.; El Moujahid, A.; Bouch, N.; El Ouazzani, R.; Bachaoui, E.; El Ghmari, A. Crop Type Mapping from Pansharpened Landsat 8 NDVI Data: A Case of a Highly Fragmented and Intensive Agricultural System. Remote Sens. Appl. Soc. Environ. 2018, 11, 94–103. [Google Scholar] [CrossRef]
  152. Misbah, K.; Laamrani, A.; Voroney, P.; Khechba, K.; Casa, R.; Chehbouni, A. Ensemble Band Selection for Quantification of Soil Total Nitrogen Levels from Hyperspectral Imagery. Remote Sens. 2024, 16, 2549. [Google Scholar] [CrossRef]
  153. Misbah, K.; Laamrani, A.; Casa, R.; Voroney, P.; Dhiba, D.; Ezzahar, J.; Chehbouni, A. Spatial Prediction of Soil Attributes from PRISMA Hyperspectral Imagery Using Wrapper Feature Selection and Ensemble Modeling. PFG—J. Photogramm. Remote Sens. Geoinf. Sci. 2024, 93, 197–215. [Google Scholar] [CrossRef]
Figure 1. The adopted review process.
Figure 2. Literature screening process.
Figure 3. Cloud tag of review.
Figure 4. Spectral reflectance characteristics of soil, green vegetation, and water across the electromagnetic spectrum [36].
Figure 5. Representation of a hyperspectral imaging (HSI) cube [39].
Figure 6. The spatial coverage of different data acquisition methods for agricultural monitoring [55].
Figure 7. A simplified workflow of machine learning models.
Figure 8. Architectures of several variants of CNN: (a) 1D-CNN, (b) 2D-CNN, and (c) 3D-CNN.
Figure 9. RNN architecture.
Figure 10. ViT architecture.
Figure 11. A simplified workflow of a hybrid structure-based model.
Figure 12. General pipeline of a foundation model.
Figure 13. A general GNN architecture.
Figure 14. The number of publications by type.
Figure 15. Distribution of reviewed studies per year.
Figure 16. Geographic distribution of publications considered in the literature review on crop type mapping using hyperspectral imaging.
Figure 17. Number of publications that used hyperspectral data.
Table 1. Main search queries.

Main Search Term | Search Query | Boolean Operator Between Queries
Crop Mapping | TITLE-ABS-KEY ("Crop Mapping" OR "Crop Types Classification" OR "Crop Identification") | AND
Hyperspectral Data | TITLE-ABS-KEY ("Hyperspectral Imaging" OR "Hyperspectral Remote Sensing" OR "Hyperspectral Satellite" OR "Hyperspectral UAV") | AND
AI Algorithms | TITLE-ABS-KEY ("Machine Learning" OR "Deep Learning" OR "Transformers" OR "Foundation Models") | -
Table 2. Comparison of hyperspectral satellite-based sensors.

Sensor | Hyperion | DESIS | PRISMA | EnMAP | EMIT
Spectral Range (nm) | 400–2500 | 400–1000 | 400–2500 | 420–2450 | 380–2500
Number of Bands | 220 | 235 | 240 | 246 | 285
Spectral Resolution, VNIR (nm) | 60 | 2.55 | 12 | 6.5 | 7.4
Spectral Resolution, SWIR (nm) | 10 | - | 12 | 10 | 7.4
Spatial Resolution (m) | 30 | 30 | 30 | 30 | 10
Temporal Resolution (days) | 16 | 10 | 29 | 27 | -
Operator | NASA | DLR/TBE | ASI | DLR | NASA
Lifetime | 2000–2017 | 2018–act. | 2019–act. | 2022–act. | 2022–act.
Note: This table includes a selection of hyperspectral satellite-based sensors that provide open-access data for research. It does not represent an exhaustive list of all available systems.
Table 3. Comparison of hyperspectral airborne-based sensors.

Dataset | Year | Source | Spectral Bands | Wavelengths (nm) | Spatial Resolution (m)
Indian Pines | 1992 | NASA AVIRIS | 220 | 400–2500 | 20
Salinas | 1998 | NASA AVIRIS | 224 | 360–2500 | 3.7
Xuzhou | 2014 | HySpex | 436 | 415–2508 | 0.73
Toulouse | 2021 | Aisa (FENIX) | 420 | 400–2500 | 1
Note: This table includes a selection of hyperspectral airborne-based sensors commonly used in research. It does not represent an exhaustive list of all existing systems.
Table 4. Comparison of hyperspectral UAV-based sensors.

Dataset | Year | Source | Spectral Bands | Wavelengths (nm) | Spatial Resolution (m)
WHU-Hi-HanChuan | 2016 | Headwall Nano-Hyperspec | 274 | 400–1000 | 0.109
WHU-Hi-HongHu | 2017 | Headwall Nano-Hyperspec | 270 | 400–1000 | 0.043
WHU-Hi-LongKou | 2018 | Headwall Nano-Hyperspec | 270 | 400–1000 | 0.463
UAV-borne Crop Hyperspectral Image (UC-HSI) | 2024 | Resonon’s Pika-L | 300 | 385–1021 | 0.011
Note: This table includes a selection of hyperspectral UAV-based sensors commonly used in research. It does not represent an exhaustive list of all existing systems.
Table 5. Summary of papers used in the study sorted by publication year (most performant models are written in bold characters).
Table 5. Summary of papers used in the study sorted by publication year (most performant models are written in bold characters).
| Year | Publication Topic | Models | Max Overall Accuracy (%) | Classes | Imagery Source | Features Used | Reference |
|---|---|---|---|---|---|---|---|
| 2019 | Developed a Parallel CNN architecture to classify crops and their growth stages using AVIRIS-NG hyperspectral data. | **PCNN**, ANN, SVM (Linear, RBF), KNN, SAM | 99.1 | 13 | Airborne Hyperspectral Imagery | Spatial–Spectral | Patel et al. [110] |
| 2019 | Developed a spatial–spectral fusion method based on conditional random fields (SSF-CRF) for fine crop classification using UAV-borne hyperspectral imagery. | **SSF-CRF**, SVM, MS, MSVC, SVRFMC, DPSCRF | WHU-Han: 94.60, WHU-Hon: 97.95 | 9; 18 | Hyperspectral Benchmark Dataset | Spatial–Spectral, Texture, Morphology | Wei et al. [111] |
| 2020 | Development of a novel 3D-Deep Feature Extraction CNN (3D-DFE-CNN) for hyperspectral image classification. | **3D-DFE-CNN**, SVM, 2D-CNN, 3D-CNN | IP: 99.94, SA: 99.99 | 16 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Kanthi et al. [79] |
| 2020 | Proposed a robust spectral–spatial classification method using Conditional Random Fields (SCRFs) to handle heterogeneous agricultural areas. | **SCRF**, SSVM, SVM, RF, OO-FNEA, SSRN | SA: 99.40, WHU-Han: 88.24 | 16 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Zhao et al. [112] |
| 2021 | Developed a two-stage classification system combining SVM and CNN for crop classification. | **SVM + CNN**, SVM, CNN | 100 | 7 | Airborne Hyperspectral Imagery | Spatial–Spectral | Wan et al. [113] |
| 2021 | Compared crop classification using two generations of hyperspectral sensors. | SVM, RF, Naive Bayes, WekaXMeans | EO-1: 100, DE: 85 | 3 | Satellite Hyperspectral Imagery | Spectral | Aneece and Thenkabail [48] |
| 2021 | Demonstrated the utility of PRISMA hyperspectral imagery for binary crop type classification using a 1D-CNN model. | **1D-CNN** | 100 | 2 | Satellite Hyperspectral Imagery, Airborne Hyperspectral Imagery | Spectral | Spiller et al. [56] |
| 2021 | Developed a fine classification method for airborne hyperspectral imagery using multi-feature fusion and a DNN-CRF framework. | **DNN + CRF with stacking fusion**, SVM | WHU-Hon: 98.71, WHU-Xio: 99.71 | 18; 20 | Hyperspectral Benchmark Dataset | Spectral, Texture, Spatial, Morphological | Wei et al. [114] |
| 2021 | Developed a deep learning-based framework using hyperspectral satellite imagery for large-area crop mapping in cloudy conditions. | **3D-CNN**, 2D-CNN, 1D-CNN, SVM, RF | 94.65 | 6 | Hyperspectral Satellite Imagery | Spatial–Spectral | Meng et al. [17] |
| 2021 | Proposed a novel HSI classification model integrating pyramidal convolution and iterative attention mechanisms to improve accuracy. | **Double-Branch Network with Pyramidal Convolution + Iterative Attention**, SVM, CDCNN, SSRN, FDSSC, DBMA, DBDA | IP: 95.90, SA: 98.33 | 16 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Shi et al. [115] |
| 2021 | Developed a multitask deep learning model for hyperspectral image classification. | **MDL4OW**, CROS2R, ResNet, WCRN, DCCNN, SVM, RF | SA: 94.34, IP: 82.61 | 8; 16 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Liu et al. [116] |
| 2021 | Developed a multi-model ensemble learning method integrating LBP features and sparse representation for hyperspectral image classification. | **LBP_SRPMC**, SVM, JSRC, CCJSR, JSACR | IP: 98.92 | 16 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Cheng et al. [117] |
| 2022 | Proposed a hybrid method combining an adaptive spectral–spatial kernel and an improved ViT with re-attention and locality mechanisms. | **Adaptive Spectral–Spatial Kernel + Improved Vision Transformer (ViT)**, RBF-SVM, CNN, HybridSN, PvResNet, SSRN, SSFTT, A2S2KResNet | IP: 98.81, Xu: 99.80, WHU-Lon: 99.89 | 9; 16 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Wang et al. [91] |
| 2022 | Developed a Multibranch Crossover Feature Attention Network (MCFANet) for hyperspectral image classification. | **MCFANet**, SVM, RF, KNN, GaussianNB, SSRN, FDSSC, 2D-CNN, 3D-CNN, HybridSN, MBDA, MSRN | IP: 98.91, SA: 99.54 | 16 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Liu et al. [118] |
| 2022 | Developed a partition-based band selection approach integrating entropy, NDVI, and MNDWI for crop mapping. | **Partition-Based Band Selection + CNN**, 3DGSVM, CNN+MFL, SS3FC, WEDCT-MI | IP: 97.62, SA: 96.08 | 16 | Hyperspectral Benchmark Dataset | Spectral Indices, Spatial–Spectral | Agilandeeswari et al. [119] |
| 2022 | Developed a Geo-Stat Endmember Extraction (GSEE) algorithm for pure pixel identification and applied advanced classifiers for crop type discrimination. | **CNN**, SAM, SID, SVM, RUSBoost, AdaBoost, Tree Bag | 89.07 | 10 | Airborne Hyperspectral Imagery | Spatial | Singh et al. [120] |
| 2022 | Developed a ResNet-based deep learning architecture for spatial–spectral crop classification using UAV and ground-based hyperspectral imagery. | **ResNet + DCNN** | 97.16 | 5 | Drone Hyperspectral Imagery, Terrestrial Hyperspectral Imagery | Spatial–Spectral | Galodha et al. [121] |
| 2022 | Evaluated a CNN against a Convolutional Autoencoder and a DNN for crop classification using hyperspectral data. | **Optimized CNN**, Convolutional Autoencoder, DNN | IP: 97, EO-1: 78 | 4; 16 | Hyperspectral Benchmark Dataset, Hyperspectral Satellite Imagery | Spatial | Bhosle and Musande [122] |
| 2022 | Developed HSI-TransUNet, a transformer-based semantic segmentation model for precise crop mapping using UAV hyperspectral imagery. | **HSI-TransUNet**, TransUNet, UNet, SegNet, SETR | 86.05 | 30 | Drone Hyperspectral Imagery | Spatial–Spectral | Niu et al. [123] |
| 2022 | Introduced a 3D Self-Attention Multiscale Feature Fusion Network for hyperspectral image classification. | **3DSA-MFN**, SVM, 3D-CNN, SSAN, SSRN, HSI-BERT, SAT | SA: 99.92, IP: 99.52 | 16 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Qing et al. [124] |
| 2022 | Compared classification performance and optimal band selections between PRISMA and DESIS hyperspectral sensors for crop mapping. | SVM, RF | PR: 90, DE: 83 | 7 | Hyperspectral Satellite Imagery | Spectral | Aneece and Thenkabail [50] |
| 2022 | Compared crop classification performance of hyperspectral DESIS data versus high-resolution PlanetScope data for major crops in California's Central Valley. | SVM, RF | DE: 85, PL: 79 | 8 | Hyperspectral Satellite Imagery | Spectral | Aneece et al. [16] |
| 2022 | Proposed a multi-branch feature fusion framework (DMLPFFN) using dilation-based MLP to enhance hyperspectral crop classification accuracy. | **DMLPFFN**, DMLP, DFFN, CNN, ResNet, RBF-SVM, EMP-SVM | SA: 99.05, WHU-Lon: 99.16, WHU-Han: 98.05 | 9; 16 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Wu et al. [125] |
| 2022 | Developed a Rotation-Invariant Attention Network (RIAN) to address rotation sensitivity in hyperspectral image classification. | **RIAN**, 1-D CNN, 2-D CNN, DHCNet, SpecAN, SSAN, MGCN | SA: 97.5 | 16 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Zheng et al. [126] |
| 2022 | Proposed a squirrel search optimization and deep transfer learning-enabled crop classification model for hyperspectral imaging. | **SSODTL-CC**, CNN, CNN-CRF, SVM, FNEA-OO, SVRFMC | WHU-Lon: 99.23, WHU-Han: 97.15 | 9; 16 | Hyperspectral Benchmark Dataset | Spatial | Hamza et al. [127] |
| 2022 | Proposed a framework using advanced spectral feature representation for precise crop classification in hyperspectral imagery. | **SSFSP-CNN**, TFP-CN | WHU-Hon: 99.21, SA: 99.24 | 16; 22 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Chen et al. [128] |
| 2022 | Proposed a novel framework for efficient pixel-level dense hyperspectral image classification. | **TPPI-Net**, HybridSN, pResNet, SSRN, 2D-CNN, 3D-CNN, SSAN | IP: 98.18, SA: 99.06 | 16 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Chen et al. [129] |
| 2023 | Developed an active learning-enhanced crop classification method using the Marine Predators Algorithm for band optimization. | **Optimized RF** | 99.69 | 3 | Drone Hyperspectral Imagery | Spectral | Sarma and Nidamarthi [130] |
| 2023 | Proposed an Adaptive Multi-Feature Fusion Graph Convolutional Network (AMF-GCN) for hyperspectral image classification. | **AMF-GCN**, HybridSN, DR-CNN, SAGE-A, MDGCN, MBCUT, JSDF | SA: 98.21 | 16 | Hyperspectral Benchmark Dataset | Spectral, Spatial, Texture | Liu et al. [106] |
| 2023 | Developed a method integrating wavelet transforms and attention mechanisms for crop type classification using DESIS imagery. | **WA-GCN**, MSRN, MDBRSSN, RF, SVM | 97.89 | 5 | Satellite Hyperspectral Imagery | Spatial–Spectral | Farmonov et al. [131] |
| 2023 | Proposed an optimization approach for hyperparameter tuning in a deep transfer learning model for crop type classification using HSI. | **DODTL-CTDC**, CNN, SVM, CNN-CRF, FNEA-OO, SVRFMC, SSODTL-CC | WHU-Lon: 99.47, WHU-Han: 97.66 | 9; 11 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Alajmi et al. [132] |
| 2023 | Proposed a transformer-based model for hyperspectral crop classification. | **HyperSFormer**, FPGA, SSDGL, SeqFormer | IP: 98.4, WHU-Han: 95.3, WHU-Hon: 98 | 16; 22 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Xie et al. [87] |
| 2023 | Developed a masked vision transformer leveraging spatial–spectral interactions for hyperspectral classification with improved label efficiency. | **Masked Spatial–Spectral Transformer**, 3D-CNN, ViT-RGB, Spectral T., SST | 82 | 8 | Satellite Hyperspectral Imagery | Spatial–Spectral | Scheibenreif et al. [133] |
| 2023 | Introduced a pruning multi-scale multi-branch hybrid convolutional network for small-sample hyperspectral image classification. | **PMSMBN**, 2D-CNN, 3D-CNN, SSRN, HybridSN, SSAN, DMCN | SA: 99.78, IP: 96.28 | 16 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Bai et al. [134] |
| 2023 | Proposed a spatial–spectral transformer network with multi-scale convolution for hyperspectral image classification. | **SS-TMNet**, ViT, 3D-CNN, HybridSN, CrossViT, LeViT | IP: 84.67 | 16 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Huang et al. [135] |
| 2024 | Proposed a multitree genetic programming-based feature selection for hyperspectral crop classification. | **AMTGP with SVM**, MLP, LSTM, 1D-CNN | SA: 96.86, WHU-Lon: 93.13 | 9; 16 | Hyperspectral Benchmark Dataset | Spectral, Texture, Spectral Indices | Liang et al. [136] |
| 2024 | Early-season crop mapping using PRISMA and Sentinel-2 satellite data with machine and deep learning algorithms. | 1D-CNN, 3D-CNN, RF, SVM, KNN, MNB | 89 (Winter), 91 (Summer), 92 (Perennial), WHU-Hon: 73.54 | 19 | Satellite Hyperspectral Imagery | Spectral, Spatial–Spectral | Mirzaei et al. [80] |
| 2024 | Introduced the Spectral–Spatial Feature Tokenization Transformer for large-scale agricultural crop classification. | **SSFTT**, SpectralFormer, ViT, SSRN, DBDA-MISH, 2D-Deform, SVM | HLJ-Raohe: 95.72, HLJ-Yan: 92.41 | 7; 8 | Satellite Hyperspectral Imagery | Spatial–Spectral | Zhang et al. [88] |
| 2024 | Introduced HypsLiDNet, a 3D-2D CNN model integrating hyperspectral and LiDAR data with spatial–spectral morphological attention for crop classification. | **HypsLiDNet**, AMSSE-Net, MFT, MCACN, DEGL, HLDC, SVM, RF, 2DCNN, 3DCNN, ViT | 98.86 | 6 | Satellite Hyperspectral Imagery, LiDAR | Spatial–Spectral, Morphological Operators | Farmonov et al. [109] |
| 2024 | Developed a multi-branch deep learning model integrating Factor Analysis and Minimum Noise Fraction for spectral–spatial HSI classification. | **CMD**, Fast 3D CNN, HybridSN, SpectralNET, MBDA, Tri-CNN | SA: 99.35, IP: 98.45 | 16 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Islam et al. [137] |
| 2024 | Investigated hyperspectral crop classification using DESIS and PRISMA with machine learning algorithms. | SVM, RF, SAM | DE+PR: 99, DE: 97, PR: 96 | 5 | Satellite Hyperspectral Imagery | Spectral | Aneece et al. [51] |
| 2024 | Developed a novel 3D-UNet architecture for hyperspectral image classification. | **3D-UNet**, HyperUNet, EddyNet, CEU-Net, UNet | IP: 99.60, XI: 99.81, WHU-Hon: 98.85, SA: 99.92 | 16; 20 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Ashraf et al. [81] |
| 2024 | Proposed SANet, a self-attention-based network for agricultural HSI classification. | **SANet**, ResNet, PyResNet, SSRN, A2S2KResNet, SSFTT, MATNet | WHU-Han: 95.81, WHU-Lon: 99.65, WHU-Hon: 97.40 | 9; 22 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Zhang et al. [138] |
| 2024 | Developed a two-stage semantic segmentation pipeline for agricultural crop classification. | 2D-CNN, 1D-CNN, Random Forest | 92 | 2 | Satellite Hyperspectral Imagery | Spectro-Temporal | Monaco et al. [73] |
| 2024 | Introduced a novel HyperConvFormer model for hyperspectral crop classification and growth stage analysis. | **HyperConvFormer**, 3D-CNN, ViT, 2D+1D-CNN, SpectralFormer | 95.26 | 10 | Drone Hyperspectral Imagery | Spatial–Spectral | Sankararao et al. [139] |
| 2024 | Introduced a novel graph U-Net with topology feature awareness pooling for hyperspectral image classification. | **TFAP graph U-Net**, 3D-CNN, SSRN, SSTN, 3DGAN | 97.19 | 16 | Hyperspectral Benchmark Dataset | Spatial–Spectral | Chen et al. [140] |
| 2024 | Introduced a lightweight, non-training framework for hyperspectral crop classification, featuring a Channel2Vec module for spectral embedding and a spectral tokenizer for local sequence extraction. | **Channel2Vec**, kNN, SVM, 1D-CNN, 2D-CNN, 3D-CNN, ViT, SpectralFormer, GCN | XJM: 95.15, XI: 99.11, SA: 99.17 | 6; 16 | Hyperspectral Benchmark Dataset | Spectral | Cheng et al. [141] |
IP: Indian Pines; SA: Salinas; Xu: Xuzhou; WHU-Hon: WHU-Hi-HongHu; WHU-Han: WHU-Hi-HanChuan; WHU-Lon: WHU-Hi-LongKou; XI: Xiongan; DE: DESIS; PR: PRISMA; EO-1: Hyperion; PL: PlanetScope.
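The "Max Overall Accuracy" column reports the standard overall accuracy (OA) metric, i.e., the fraction of labeled test pixels assigned their correct class. A minimal sketch follows, together with the per-class average accuracy (AA) that several of the surveyed papers report alongside OA.

```python
# Overall accuracy (OA): fraction of test pixels correctly classified.
# Average accuracy (AA): mean of per-class recall, which weights rare
# and common crop classes equally.
import numpy as np

def overall_accuracy(y_true, y_pred):
    return np.mean(np.asarray(y_true) == np.asarray(y_pred))

def average_accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return np.mean(per_class)

# Tiny worked example with three classes
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2]
print(f"OA = {overall_accuracy(y_true, y_pred):.2%}")  # 83.33%
print(f"AA = {average_accuracy(y_true, y_pred):.2%}")  # 83.33%
```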