Review

Synergy between Artificial Intelligence and Hyperspectral Imaging—A Review

by Svetlana N. Khonina 1, Nikolay L. Kazanskiy 1, Ivan V. Oseledets 2, Artem V. Nikonorov 1 and Muhammad A. Butt 1,*
1 Samara National Research University, 443086 Samara, Russia
2 Artificial Intelligence Research Institute (AIRI), 105064 Moscow, Russia
* Author to whom correspondence should be addressed.
Technologies 2024, 12(9), 163; https://doi.org/10.3390/technologies12090163
Submission received: 26 August 2024 / Revised: 10 September 2024 / Accepted: 11 September 2024 / Published: 13 September 2024
(This article belongs to the Section Assistive Technologies)

Abstract:
The synergy between artificial intelligence (AI) and hyperspectral imaging (HSI) holds tremendous potential across a wide array of fields. By leveraging AI, the processing and interpretation of the vast and complex data generated by HSI are significantly enhanced, allowing for more accurate, efficient, and insightful analysis. This powerful combination has the potential to revolutionize key areas such as agriculture, environmental monitoring, and medical diagnostics by providing precise, real-time insights that were previously unattainable. In agriculture, for instance, AI-driven HSI can enable more precise crop monitoring and disease detection, optimizing yields and reducing waste. In environmental monitoring, this technology can track changes in ecosystems with unprecedented detail, aiding in conservation efforts and disaster response. In medical diagnostics, AI-HSI could enable earlier and more accurate disease detection, improving patient outcomes. As AI algorithms advance, their integration with HSI is expected to drive innovations and enhance decision-making across various sectors. The continued development of these technologies is likely to open new frontiers in scientific research and practical applications, providing more powerful and accessible tools for a wider range of users.

1. Introduction

Initially developed for remote sensing in the mid-1980s [1], hyperspectral imaging (HSI) has, over the past twenty years, expanded into numerous research fields, notably agriculture and food science, to gather complementary data on both the visual and chemical attributes of study subjects [2,3,4]. This advanced method combines conventional imaging with spectroscopic techniques like infrared, Raman, and fluorescence, creating a 3D data structure called a hypercube. This hypercube integrates 2D digital images (X × Y) with a third dimension of spectral data (λ) [5]. Thus, HSIs consist of a stack of spatial image planes, each linked to a specific wavelength band, visually depicting pixel-level information at each wavelength. Subsequently, each pixel in an image is associated with a spectrum that reflects the light absorption, reflection, emission, and/or scattering properties of the spatial region it represents. Nevertheless, it should be noted that the spectrum of each pixel is susceptible to influence from neighboring pixels because of several factors, including optical, instrumental, and background effects. This spectrum acts as a distinctive fingerprint, enabling the accurate identification of the pixel’s composition [6].
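The hypercube structure described above can be sketched in a few lines of Python; the spatial and spectral dimensions below are illustrative and not tied to any particular sensor:

```python
import numpy as np

# Illustrative hypercube: X x Y spatial pixels, each carrying a full spectrum.
X, Y, LAMBDA = 64, 64, 200
rng = np.random.default_rng(0)
cube = rng.random((X, Y, LAMBDA))        # 3D data structure (X, Y, lambda)

# One spatial image plane: all pixels at a single wavelength band.
band_image = cube[:, :, 42]              # shape (64, 64)

# One pixel's spectrum: its distinctive "fingerprint" across all bands.
spectrum = cube[10, 20, :]               # shape (200,)

print(band_image.shape, spectrum.shape)
```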
The primary challenges with using HSI for the quality assessment of fruits, vegetables, and mushrooms are related to data availability and the reliability of existing models and results, particularly regarding their practical applicability [7,8,9]. Although numerous studies have been conducted on a wide range of cases [10,11,12], they generally lack a practical focus. Only a few studies have conducted true test set validations using data from different years or orchards rather than merely splitting a single dataset. To develop consistent prediction models and enhance knowledge transfer from researchers to horticultural practices, research should prioritize repeatability and the translation of research parameters (such as chlorophyll content, firmness, and soluble solids content (SSC)) into relevant industrial parameters. Additionally, effective non-destructive evaluation of quality parameters with HSI in the field, during storage and handling, and on the shelf can lead to the best possible use of production inputs, reduced food waste, and the assurance of safe, high-quality food [13,14].
The integration of artificial intelligence (AI) with HSI is crucial because it significantly enhances the ability to analyze and interpret complex hyperspectral (HS) data, which encompass hundreds of spectral bands [15]. AI algorithms, mainly machine learning (ML) and deep learning (DL) methods, can effectively manage the high dimensionality and vast volume of HSI data, enabling precise feature extraction and categorization [16,17]. The advantage of AI in hyperspectral image analysis lies in its ability to handle large datasets, reduce noise, perform real-time analysis, and improve the precision of tasks such as object detection, anomaly identification, and environmental monitoring. This makes AI essential for applications like agriculture, remote sensing, and medical diagnostics, where hyperspectral data are vital.
As AI technologies continue to evolve, their integration with HSI is expected to become more sophisticated and ubiquitous. These advancements will enable real-time processing capabilities, facilitating immediate decision-making in critical applications like environmental monitoring [18], disaster management, and healthcare diagnostics. Moreover, the combination of AI with emerging technologies such as edge computing and the Internet of Things (IoT) will democratize access to HSI, allowing for its deployment in remote and resource-limited settings. As AI-HSI becomes more accessible, it will foster innovations in personalized medicine, precision agriculture, and intelligent transportation systems. Additionally, the growing emphasis on sustainability and environmental conservation will spur the development of AI algorithms tailored to monitor and mitigate climate change impacts. Overall, the synergy between AI and HSI promises to revolutionize various industries, driving efficiency, accuracy, and transformative outcomes in ways previously unimaginable [19].
In this comprehensive review, we delve into the pivotal role of AI in revolutionizing HSI across a diverse array of applications. AI’s integration with HSI has not only streamlined data analysis but also unlocked new frontiers in fields such as agriculture, environmental monitoring, medical diagnostics, and materials science [20]. By harnessing AI’s capabilities in processing vast and intricate spectral data, researchers and practitioners can achieve unprecedented levels of precision in feature extraction, categorization, and interpretation. However, this transformative synergy is not without its challenges. The complexity and high dimensionality of HS data pose significant computational hurdles, demanding advanced algorithms and robust computing infrastructure. Moreover, the scarcity of labeled HS datasets complicates the training of AI models, necessitating innovative approaches in data augmentation and semi-supervised learning.
Looking ahead, the prospects for AI-enhanced HSI are promising [21]. Continued advancements in AI algorithms, coupled with improvements in hardware capabilities such as GPU acceleration and cloud computing, are set to overcome current limitations [22]. Standardization efforts and collaborative initiatives will further facilitate the seamless integration and interoperability of AI and HSI technologies, paving the way for broader adoption and impactful applications in precision agriculture, environmental sustainability, personalized medicine, and beyond [23,24].

2. Working Mechanism of HSI and Challenges

HSI operates by capturing and processing information from across the electromagnetic (EM) spectrum, extending beyond the capabilities of traditional imaging that typically captures data in just a few broad wavelength bands [25,26,27]. The process begins with illumination, where a light source irradiates the target scene. The interaction of light with the materials in the scene—through reflection, absorption, and scattering—varies according to the unique spectral properties of each material. HS sensors then capture the reflected light across numerous narrow spectral bands. These sensors can use different methods for data acquisition, including spatial scanning (push broom), where the sensor or scene moves to capture spatial dimensions sequentially; point scanning (whiskbroom), capturing the full spectrum for each spatial point in sequence; snapshot imaging, capturing the full HS data cube in a single instance; and spatiospectral scanning, which combines aspects of both spatial and spectral scanning.
The light collected by the sensor is dispersed into its constituent wavelengths using optical elements like prisms or diffraction gratings [28,29]. This dispersion process allows the system to separate light into fine spectral bands. The dispersed light is then detected using an array of sensors—typically charge-coupled devices or complementary metal-oxide-semiconductor sensors—where each sensor pixel captures light intensity at a specific wavelength, generating a detailed spectrum for each spatial point in the scene. The captured raw data form an HS data cube, with two spatial dimensions and one spectral dimension, containing rich spectral information for each pixel. The main components of the HSI system are shown in Figure 1 [30].
Following data acquisition, calibration is performed to correct for sensor response, lighting conditions, and atmospheric effects, ensuring the accuracy of the spectral information. Advanced data processing techniques are applied to the HS data cube, enabling spectral unmixing to decompose mixed pixels, categorization to identify and categorize materials based on their spectral signatures, feature extraction to pinpoint specific spectral characteristics, and dimensionality reduction to simplify the data while retaining essential information [31].
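The reflectance calibration mentioned above is often performed with white and dark reference frames; a minimal sketch of this common correction, with invented sensor values, is:

```python
import numpy as np

def to_reflectance(raw, white, dark, eps=1e-9):
    """Convert raw counts to relative reflectance using white and dark
    reference frames, a common radiometric calibration step."""
    return (raw - dark) / (white - dark + eps)

rng = np.random.default_rng(1)
raw = rng.uniform(100, 900, (32, 32, 50))    # raw HS data cube (illustrative)
dark = np.full_like(raw, 50.0)               # dark-current reference frame
white = np.full_like(raw, 1000.0)            # white-target reference frame

refl = to_reflectance(raw, white, dark)
print(refl.min() >= 0.0, refl.max() <= 1.0)  # reflectance lies in [0, 1]
```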
HSI presents several challenges due to its high data dimensionality and complexity. Each hyperspectral image contains hundreds of spectral bands for every pixel, resulting in massive datasets that require significant computational resources and storage. This large volume of data can lead to issues such as data redundancy, where many spectral bands carry overlapping information, complicating efficient analysis. Additionally, hyperspectral data are often noisy and sensitive to environmental conditions, requiring sophisticated preprocessing techniques like noise reduction, atmospheric correction, and calibration.
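The redundancy across bands can be made concrete with a small principal component analysis (PCA) sketch; the synthetic pixels below are mixtures of three latent spectral sources, an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_bands = 1000, 120

# Pixels built from 3 latent spectral sources -> heavy inter-band redundancy.
sources = rng.random((3, n_bands))
abundances = rng.random((n_pixels, 3))
pixels = abundances @ sources + 0.01 * rng.standard_normal((n_pixels, n_bands))

# PCA via SVD on mean-centred spectra.
centred = pixels - pixels.mean(axis=0)
_, s, _ = np.linalg.svd(centred, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()

# A handful of components capture nearly all variance across 120 bands.
print(float(explained[:3].sum()) > 0.95)
```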
AI can significantly alleviate the challenges of HSI analysis due to its ability to process large, complex datasets and detect intricate patterns that are not easily discernible by traditional methods [15]. AI, particularly through ML and DL techniques, excels at dimensionality reduction, feature extraction, and classification, making it well-suited for hyperspectral data analysis [32]. These methods can automatically learn from data, identify relevant spectral features, and improve the accuracy of tasks such as object detection, anomaly detection, and material classification [33]. AI models, especially deep learning architectures like convolutional Neural Networks (CNNs), are capable of learning hierarchical features from raw hyperspectral data, which reduces the need for manual feature engineering.
The advantages of AI in this field include its scalability, adaptability, and efficiency in handling large, complex, and non-linear data. AI algorithms can be trained on vast amounts of hyperspectral data, continually improving their performance as more data become available [34]. AI also provides faster processing times, enabling real-time or near-real-time analysis, which is critical in areas like remote sensing, agriculture, and environmental monitoring [23,35]. Additionally, AI can generalize well across different datasets, making it robust in handling varying conditions and noise in HSI.

3. Types of HSI

Different types of HSI systems and techniques are used depending on the application, data collection method, and the specific region of the spectrum being targeted. HS cameras can acquire detailed information, which is then transformed into a three-dimensional data cube using one of five primary methods.
(a)
Whiskbroom (point scanning): This method captures one single pixel at a time, gradually building the image as the camera scans across the sample [36]. Each pixel includes all of its spectral information, resulting in very high spectral resolution. However, the image acquisition process is time-consuming, making it less suitable for applications requiring rapid data collection [37]. Despite this limitation, whiskbroom scanning is valued for its precision in capturing detailed spectral data.
(b)
Push-broom (line scanning): Push-broom technology measures continuous spectra one line of pixels at a time, making it widely used in industrial quality control monitoring processes [38,39]. Its main limitation is the high losses caused by the entrance slit of the spectrometer, which can reduce the overall efficiency of light capture. Nonetheless, push-broom scanning is favored in many applications for its balance between speed and spectral resolution, offering a practical solution for real-time monitoring and analysis.
(c)
Fourier Transform (FT) spectroscopy: An alternative for measuring continuous spectra, FT spectroscopy combines a monochrome imaging sensor with an interferometer, providing higher light throughput compared with push-broom systems [40]. This method enhances efficiency and accuracy in spectral data collection, making it ideal for applications requiring high sensitivity and precision [41]. Additionally, FT spectroscopy can effectively handle a wide range of wavelengths, further broadening its applicability in various scientific and industrial fields.
(d)
Spectral scanning: This technique is capable of gathering the entirety of the spatial information associated with a given wavelength, with each wavelength being considered individually [42]. While the process is relatively rapid when considered on an image-by-image basis, the overall procedure is considerably slower due to the necessity of changing wavelengths. Nevertheless, spectral scanning is a powerful tool for applications that require high spatial resolution at specific wavelengths. Its ability to precisely isolate and capture data for individual wavelengths makes it particularly useful in fields such as fluorescence microscopy, where detailed spectral information is crucial. Additionally, spectral scanning can be optimized to focus on particular regions of interest, enhancing the efficiency of data collection in targeted studies [43].
(e)
HS snapshot cameras: These cameras capture HS video, making them ideal for imaging moving objects [44]. The method is rapid and effective, although it typically provides restricted spectral and spatial resolutions in comparison to alternative techniques [45]. Nonetheless, snapshot cameras are crucial in applications requiring real-time HS imaging.
High-resolution HSI is becoming crucial for accurately detecting spectral variations in complex, spatially detailed targets [46,47]. However, despite these advantages, current high-resolution setups are often extremely costly, notably limiting their accessibility and user base. These restrictions can have broader consequences, reducing opportunities for data collection and, thus, our understanding of various environments [48]. Stuart et al. introduced a cost-effective alternative to the existing equipment (Figure 2) [3]. This new instrument efficiently delivered HS datasets that resolve spectral variations in millimeter-scale targets, which many affordable HSI options currently cannot achieve. The detailed instrument metrology was provided, and its effectiveness in a mineralogy-based environmental surveillance application was demonstrated, showcasing its advantages in the realm of affordable HSI. Table 1 compares whiskbroom (point scanning), push broom (line scanning), Fourier Transform (FT) spectroscopy, spectral scanning, and HS snapshot cameras.

4. Synergy between AI and HSI

AI algorithms are particularly well-suited for HSI because they can efficiently handle and analyze the vast and complex data that these systems generate. Hyperspectral images consist of hundreds of narrow spectral bands, providing detailed information across a wide range of wavelengths. This high dimensionality offers rich data but also presents challenges such as redundancy [49], noise [50], and the “curse of dimensionality,” which can overwhelm traditional analysis techniques [51]. AI algorithms, particularly ML and DL models, are adept at extracting relevant features, identifying patterns, and performing tasks like classification, detection, and anomaly identification from these large datasets. They excel at leveraging the spectral and spatial relationships in hyperspectral data, which can be difficult for traditional statistical methods to capture. Furthermore, AI models can be trained to recognize subtle differences in spectral signatures that are critical for applications such as remote sensing [52], agriculture [53], medical diagnostics [54], and environmental monitoring [55]. Their ability to learn from data makes them highly adaptable to different HSI tasks, even in cases where annotated data are limited or the problem is non-linear, thus enhancing the overall efficiency and effectiveness of HSI systems [56].
Supervised learning algorithms use labeled data to train models that can classify or analyze new hyperspectral images. Common algorithms include Support Vector Machines (SVMs) [57], which are popular for classification tasks due to their ability to handle high-dimensional data and work with smaller training datasets. Random Forests (RFs) [58] are another widely used method known for their robustness and ability to handle noisy data. The K-Nearest Neighbors (KNN) algorithm is also employed for classification, particularly based on the proximity of pixel spectra to labeled training data [59]. In addition, Neural Networks (NNs), such as Multilayer Perceptrons (MLPs) [60], are used for pixel-level classification tasks [61].
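As a minimal illustration of supervised pixel classification, the sketch below implements KNN directly in NumPy on synthetic spectra; the two "material" classes and every parameter are invented for demonstration:

```python
import numpy as np

def knn_classify(train_X, train_y, query, k=3):
    """Label each query spectrum by majority vote among its k nearest
    training spectra (Euclidean distance in band space)."""
    d = np.linalg.norm(query[:, None, :] - train_X[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(train_y[row]).argmax() for row in nearest])

rng = np.random.default_rng(3)
bands = 50
# Two synthetic "materials" with well-separated mean spectra.
class0 = rng.normal(0.2, 0.02, (30, bands))
class1 = rng.normal(0.8, 0.02, (30, bands))
train_X = np.vstack([class0, class1])
train_y = np.array([0] * 30 + [1] * 30)

query = np.vstack([rng.normal(0.2, 0.02, (5, bands)),
                   rng.normal(0.8, 0.02, (5, bands))])
print(knn_classify(train_X, train_y, query))   # first five 0, last five 1
```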
In contrast, unsupervised learning algorithms, such as k-means clustering [62], hierarchical clustering [63], and principal component analysis (PCA) [64], are employed to find hidden patterns or groupings in data without predefined labels [65]. Neural Networks (NNs), particularly DL architectures like convolutional Neural Networks (CNNs) [66,67,68] and recurrent Neural Networks (RNNs) [69], have transformed fields like image and speech recognition due to their capability to learn complex, hierarchical representations of data. Reinforcement learning algorithms enable AI systems to learn optimal actions through trial and error, guided by rewards and penalties, making them suitable for dynamic and interactive environments, such as game-playing or robotics [70]. Independent Component Analysis (ICA) is applied for blind source separation and is useful in unmixing hyperspectral data into independent spectral components [71]. Additionally, autoencoders, a type of NN, are employed for unsupervised feature extraction and dimensionality reduction [72].
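A corresponding unsupervised sketch: k-means clustering of pixel spectra in plain NumPy, with two well-separated synthetic groups standing in for real materials:

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal k-means on pixel spectra (rows = pixels, columns = bands)."""
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), k, replace=False)].copy()
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):              # avoid empty-cluster update
                centres[j] = pixels[labels == j].mean(axis=0)
    return labels, centres

rng = np.random.default_rng(4)
group_a = rng.normal(0.1, 0.01, (40, 30))        # 40 spectra of "material A"
group_b = rng.normal(0.9, 0.01, (40, 30))        # 40 spectra of "material B"
labels, _ = kmeans(np.vstack([group_a, group_b]), k=2)

# Each group ends up in exactly one cluster.
print(len(set(labels[:40])), len(set(labels[40:])))
```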
Sigger et al. introduced a novel method called DiffSpectralNet, which integrates diffusion processes with transformer-based techniques for enhanced hyperspectral image classification [73]. The diffusion component effectively captures a broad range of spectral–spatial features, contributing to more accurate and diverse feature extraction. This approach utilized an unsupervised learning framework built on the diffusion model to identify both high-level and low-level spectral–spatial features. Intermediate hierarchical features were then extracted from multiple timestamps using a pre-trained denoising U-Net. These features were subsequently fed into a supervised transformer-based classifier to perform HSI classification. Extensive experiments conducted on three publicly available datasets validated the robustness of the method. The results show that DiffSpectralNet significantly outperformed existing models, achieving state-of-the-art accuracy while demonstrating consistent stability and reliability across various class types within each dataset [73].
DL techniques employ deeper artificial Neural Networks (ANNs) with more neurons, additional layers, and more complex connections between layers [74]. ANNs are computational models inspired by the human brain’s architecture and functioning. They consist of interconnected nodes or “neurons” organized in layers—input, hidden, and output. These networks are designed to recognize patterns, make decisions, and learn from data by adjusting the weights of connections based on the errors of their predictions. ANNs are widely used in various applications, from image and speech recognition to natural language processing and autonomous systems, thanks to their ability to model complex relationships and improve performance through training. The primary benefit of these deeper networks is their capability to learn features automatically, eliminating the need for manual feature engineering, as is often required in traditional ML techniques [75]. This capability enables DL systems to solve more complex problems, offering greater flexibility and the ability to capture more variability in the data, which is particularly beneficial for horticultural applications [76,77]. Generative Adversarial Networks (GANs) play a role in data augmentation, anomaly detection, and improving the resolution of hyperspectral images [78].
In a fully connected Neural Network (FCNN), a neuron in one layer is linked to all neurons in the next layer, leading to an overwhelming amount of input for multidimensional data [79]. This extensive connectivity can cause a dramatic increase in computational complexity and memory requirements [80]. Furthermore, information in these data is characteristically distributed across diverse dimensions, making linear processing suboptimal. When a pixel carries specific information, its surrounding neighborhood is likely to hold relevant spatial information as well. Accordingly, the neurons in a convolutional layer are interconnected via a localized system of receptive fields extending throughout the depth of the input data rather than in a linear fashion.
A CNN is characteristically composed of multiple convolutional layers, each of which is tailored to obtain specific types of features from the data [81]. The layers facilitate the generation of diverse representations of the input data, ranging from low-level forms in the initial layers to more abstract depictions in the deeper layers of the network [82,83]. The initial layers might detect basic features such as edges and textures, while the deeper layers recognize more complex structures like shapes and objects. This hierarchical feature extraction allows CNNs to effectively handle spatial hierarchies in data, making them highly suitable for tasks such as image and video recognition, where spatial patterns are crucial.
Additionally, CNNs often incorporate pooling layers, which reduce the dimensionality of the data and help in achieving translational invariance, further enhancing the model’s ability to generalize from training data. Techniques such as dropout and batch normalization are also commonly used in CNNs to prevent overfitting and improve training efficiency. This combination of localized connectivity, hierarchical feature extraction, and regularization techniques makes CNNs exceptionally powerful for processing and interpreting complex multidimensional data [84].
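The localized receptive fields and pooling described above can be sketched in NumPy (following the deep-learning convention in which "convolution" is implemented as unflipped cross-correlation; the tiny image and kernel are illustrative):

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2D convolution (DL convention, no kernel flip): each output
    value depends only on a local receptive field of the input."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(img, size=2):
    """Non-overlapping max pooling: reduces each spatial dimension."""
    h, w = img.shape
    trimmed = img[:h - h % size, :w - w % size]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.arange(36.0).reshape(6, 6)     # stand-in for one band image
edge = np.array([[1.0, -1.0]])          # simple horizontal edge detector
feat = conv2d(img, edge)                # shape (6, 5)
pooled = max_pool(feat)                 # shape (3, 2): dimensionality reduced
print(feat.shape, pooled.shape)
```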
Spectral unmixing is the process of decomposing a mixed pixel into its constituent pure spectral components, known as endmembers. Non-Negative Matrix Factorization (NMF) is commonly used for spectral unmixing by approximating the hyperspectral data matrix with two non-negative matrices, one representing the spectral signatures and the other representing the abundances [85,86]. Sparse Coding techniques are also employed for spectral unmixing, where each pixel is represented as a sparse linear combination of a dictionary of spectral signatures [87].
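A minimal sketch of NMF-based unmixing using the classic multiplicative updates of Lee and Seung; the endmember spectra, abundances, and error tolerance below are synthetic and illustrative:

```python
import numpy as np

def nmf_unmix(V, k, iters=200, seed=0, eps=1e-9):
    """Factor the pixel-by-band matrix V into non-negative abundances
    A (pixels x k) and endmember spectra S (k x bands) via Lee-Seung
    multiplicative updates."""
    rng = np.random.default_rng(seed)
    A = rng.random((V.shape[0], k))
    S = rng.random((k, V.shape[1]))
    for _ in range(iters):
        S *= (A.T @ V) / (A.T @ A @ S + eps)
        A *= (V @ S.T) / (A @ S @ S.T + eps)
    return A, S

rng = np.random.default_rng(5)
true_S = rng.random((2, 40))      # two synthetic endmember spectra
true_A = rng.random((100, 2))     # per-pixel abundances
V = true_A @ true_S               # noiseless mixed pixels

A, S = nmf_unmix(V, k=2)
err = np.linalg.norm(V - A @ S) / np.linalg.norm(V)
print((A >= 0).all() and (S >= 0).all(), err < 0.1)
```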
Hybrid models combine both spatial and spectral information to improve performance in classification, detection, and segmentation tasks. Spatial–spectral CNNs leverage this combination to enhance classification accuracy by processing spatial and spectral data together [88]. Graph Convolutional Networks (GCNs) have also emerged as useful tools for hyperspectral image classification and segmentation. GCNs treat pixels as nodes in a graph and apply convolutions on the graph to capture spatial relationships, which is particularly useful in hyperspectral data where neighboring pixels tend to be highly correlated [89].
Additionally, natural language processing (NLP) algorithms, including transformers and sequence-to-sequence models, empower AI to understand and generate human language, driving advancements in machine translation, sentiment analysis, and conversational agents. Each of these algorithms plays a fundamental role in permitting AI systems to perform a wide range of intelligent tasks by learning from data, recognizing patterns, and making informed decisions. The characteristics of different algorithms used in the interaction between AI and HSI are presented in Table 2.
Spectral imaging, essential for applications requiring high-resolution spectral and spatial information, has seen various technical advancements and commercial availability. However, computational spectral cameras, while being compact, lightweight, and cost-effective, face limitations due to the trade-off between spatial and spectral resolutions, restricted data volume, and environmental noise. Zhang et al. developed a deeply learned broadband encoding stochastic HS camera [90]. A simplified representation of the device is shown in Figure 3a. By utilizing advanced AI for filter design and spectrum reconstruction, they achieved signal processing speeds 7000 to 11,000 times faster and improved noise tolerance by approximately 10 times. These advancements allowed for the accurate and dynamic reconstruction of the spectra of the entire field of view, surpassing the capabilities of traditional compact computational spectral cameras.
To derive the precise spectrum from these random spectral filters with rich features, a DNN was employed for data management (Figure 3b). This approach offered two significant improvements over traditional compressed sensing (CS) algorithms. First, the spectrum reconstruction speed was dramatically faster. This acceleration was due to the DNN’s use of matrix multiplexing, which was particularly well-suited for parallel computing on a GPU. In contrast, the iterative CS algorithm solved an optimization problem for each pixel, resulting in reconstruction times that increased as the number of pixels grew. The DNN’s reconstruction speed, however, remained almost independent of the pixel count. While both methods performed similarly for images with a few thousand pixels or fewer, the DNN proved to be significantly more efficient for higher-resolution images.
Figure 3c–e compares the reconstructed results from the device (solid line) with reference curves from a commercial spectrometer (dashed line). To ensure objective measurements, two DNNs with identical architectures were trained on two separate datasets. The initial dataset comprised solely monochromatic spectrum data derived from a laser beam. In this “precise” mode, the output spectrum generated by the trained DNN was constrained by a narrowband envelope, resulting in an average center wavelength localization precision of 0.55 nm (Figure 3c). In the “general” mode, the training dataset was expanded to incorporate more wideband spectral data and eliminated all boundary conditions to accommodate arbitrary spectrum shapes. This mode achieved an equivalent spectral localization precision with an average of 0.63 nm (Figure 3d) and provided the camera with the flexibility to meet diverse operational needs. The trained DNN in “general” mode offered a spectral resolution of approximately 5.2 nm (Figure 3e), which was sufficient for most practical HSI applications [90].
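The measurement model behind such broadband-filter cameras can be illustrated with a classical regularized least-squares reconstruction; this is a far simpler baseline than the trained DNN described above, and the filter curves, band count, and test spectrum are all invented:

```python
import numpy as np

rng = np.random.default_rng(7)
n_filters, n_bands = 100, 80

# Random broadband transmission curves stand in for the stochastic filters.
filters = rng.random((n_filters, n_bands))
bands = np.linspace(0.0, 1.0, n_bands)
spectrum = np.exp(-((bands - 0.5) / 0.1) ** 2)   # smooth test spectrum

# Each detector reading is the filter-weighted sum of the incoming spectrum.
readings = filters @ spectrum

# Tikhonov-regularized least squares: a classical (non-learned) baseline.
lam = 1e-3
recon = np.linalg.solve(filters.T @ filters + lam * np.eye(n_bands),
                        filters.T @ readings)

err = np.linalg.norm(recon - spectrum) / np.linalg.norm(spectrum)
print(err < 0.1)
```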
One key area where AI is applied is super-resolution, where AI-driven algorithms can enhance the spatial resolution of hyperspectral images [91,92]. Traditional HSI systems may capture high spectral resolution but often at a lower spatial resolution due to hardware limitations [15]. By using AI-based models such as CNNs and Generative Adversarial Networks (GANs), it is possible to upscale low-resolution images to finer spatial details [92]. These super-resolution techniques enable more precise mapping and detection, which are critical in fields such as remote sensing, medical imaging, and industrial quality control [93]. AI-driven super-resolution methods not only improve the visual clarity of hyperspectral data but also provide more detailed information that can be leveraged for further analysis [94].
Classification is another important application of AI in hyperspectral image analysis. Hyperspectral images consist of hundreds of spectral bands, providing a wealth of information that can be challenging to process using traditional methods [95]. ML and DL techniques, including Support Vector Machines (SVMs), Random Forests, and deep CNNs, can be employed to classify different materials or objects in hyperspectral images. These models learn to distinguish between various classes, such as different types of vegetation, minerals, or tissues, based on their spectral signatures [96]. AI-driven classification techniques are particularly powerful because they can handle the high dimensionality of hyperspectral data, making them well-suited for tasks such as land cover classification, mineral mapping, and medical diagnostics. By automating the classification process, AI enables faster, more accurate interpretation of hyperspectral images [95].
In the realm of denoising, AI algorithms play a crucial role in improving the quality of hyperspectral images by reducing noise while preserving essential details. Hyperspectral images often suffer from noise due to environmental factors, sensor limitations, or transmission errors. AI-based approaches, such as autoencoders and DL-based noise reduction techniques, can effectively filter out noise without sacrificing important spectral and spatial information. By learning complex noise patterns, these algorithms can remove noise across multiple spectral bands, resulting in cleaner and more reliable data. This is particularly important in fields like medical imaging and remote sensing, where the accuracy of the data is paramount.
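As a simple (non-learned) baseline for the denoising discussed above, a moving average along the spectral axis already reduces band-wise noise; the cube, noise level, and window below are illustrative:

```python
import numpy as np

def spectral_smooth(cube, window=5):
    """Denoise each pixel spectrum with a moving average along the band
    axis; a baseline, not a learned DL denoiser."""
    kernel = np.ones(window) / window
    smooth = lambda s: np.convolve(s, kernel, mode="same")
    return np.apply_along_axis(smooth, 2, cube)

rng = np.random.default_rng(6)
bands = np.linspace(0.0, 1.0, 100)
clean = np.sin(2 * np.pi * bands)                 # smooth reference spectrum
cube = np.tile(clean, (8, 8, 1)) + 0.2 * rng.standard_normal((8, 8, 100))

smoothed = spectral_smooth(cube)
noise_before = np.abs(cube - clean).mean()
noise_after = np.abs(smoothed - clean).mean()
print(noise_after < noise_before)                 # smoothing reduces error
```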
AI also contributes significantly to the reconstruction of hyperspectral images. HSI systems may not always capture complete data due to sensor limitations or occlusions. AI-based reconstruction techniques, such as DNNs, can be used to fill in missing or corrupted parts of an image, ensuring a more complete and accurate representation. These methods are highly effective in recovering lost spectral and spatial details by learning the relationships between different bands and spatial features. In medical applications, for example, reconstructing hyperspectral images can improve the detection of subtle tissue abnormalities that may have been missed due to incomplete data.
Object detection is another area where AI excels. Unlike traditional images, where object detection is typically based on spatial features, hyperspectral images offer additional spectral information that can be leveraged for a more precise identification of objects [97]. DL techniques, such as region-based CNNs (R-CNNs) and YOLO (You Only Look Once), have been adapted to work with hyperspectral data to detect objects with a high degree of accuracy. These models are trained to identify spectral signatures that correspond to specific objects or materials, allowing for more refined detection in scenarios such as crop monitoring, mineral exploration, and environmental monitoring. The combination of spatial and spectral information provided by hyperspectral images makes AI-driven object detection a powerful tool for a variety of applications [97].
Lastly, the segmentation of hyperspectral images greatly benefits from AI techniques [98]. Segmentation involves partitioning an image into distinct regions or objects based on their spectral and spatial characteristics [99]. Traditional methods of segmentation often struggle with the high dimensionality and complexity of hyperspectral data. However, AI-driven methods, particularly deep learning architectures like fully convolutional networks (FCNs) and U-Nets, have been developed to handle these challenges effectively [100]. These models can accurately segment hyperspectral images into meaningful regions, such as different types of vegetation, water bodies, or urban areas, by analyzing both spatial and spectral features [101]. In medical imaging, segmentation is critical for tasks such as tumor detection, where AI models can distinguish between healthy and diseased tissues with high precision.
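As a simple unsupervised baseline for the FCN/U-Net segmenters discussed above, the sketch below clusters per-pixel spectra of a synthetic two-material scene with k-means; the scene layout, band count, and cluster count are illustrative assumptions:

```python
# Sketch: segmenting a hyperspectral scene by clustering per-pixel spectra.
# K-means on flattened pixels is a classical baseline; DL segmenters add
# learned spatial context on top of this purely spectral grouping.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
h, w, n_bands = 20, 20, 50
cube = rng.standard_normal((h, w, n_bands)) * 0.05  # sensor noise

# Left half of the scene carries one material signature; right half another.
cube[:, : w // 2] += np.linspace(1.0, 0.2, n_bands)
cube[:, w // 2 :] += np.linspace(0.2, 1.0, n_bands)

pixels = cube.reshape(-1, n_bands)               # flatten spatial dimensions
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
seg_map = labels.reshape(h, w)                   # back to a 2-D segment map
```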
Table 2. Characteristics of various algorithms used in the interaction between AI and HSI.
| Algorithm | Description | Strengths | Weaknesses | Applications |
|---|---|---|---|---|
| Convolutional Neural Networks (CNNs) [102,103] | DL models that use convolutional layers to capture spatial and spectral features. | High accuracy, ability to capture complex patterns, end-to-end learning. | Requires large datasets, computationally intensive. | Crop categorization, disease detection, yield prediction. |
| Support Vector Machines (SVMs) [104,105] | Supervised learning models that find the optimal hyperplane to classify data. | Effective in high-dimensional spaces, robust to overfitting. | Less effective with noisy data, requires careful parameter tuning. | Soil property categorization, crop health monitoring. |
| Random Forests (RF) [106,107] | Ensemble learning method that uses multiple decision trees for categorization and regression. | Handles large datasets, good at managing overfitting. | Can be less interpretable, may require extensive computation. | Crop type categorization, pest and disease identification. |
| Principal Component Analysis (PCA) [108,109] | Dimensionality reduction technique that transforms data into a set of orthogonal components. | Reduces computational load, highlights main spectral features. | May lose important information, not ideal for non-linear data. | Preprocessing for further analysis, noise reduction. |
| K-Nearest Neighbors (KNN) [110,111] | Simple algorithm that classifies based on the majority class among the k nearest neighbors. | Easy to implement, non-parametric. | Computationally expensive with large datasets, sensitive to irrelevant features. | Crop species classification, vegetation monitoring. |
| Artificial Neural Networks (ANNs) [99,112] | Computing systems inspired by biological neural networks, capable of pattern recognition. | Flexible, can model complex relationships. | Requires extensive training data, prone to overfitting. | Spectral unmixing, anomaly detection. |
| Recurrent Neural Networks (RNNs) [113] | NNs with loops, designed to recognize patterns in sequences of data. | Effective for temporal dependencies, can handle sequential data. | Can suffer from vanishing gradients, requires large memory. | Time-series crop monitoring, growth stage identification. |
| Graph Convolutional Networks (GCNs) [114,115] | NNs that operate on graph-structured data, capturing relationships in irregular data. | Can model complex dependencies, robust to varying data structures. | Complex implementation, requires extensive computational resources. | Crop disease spread modeling, soil nutrient mapping. |
| Sparse Representation [116,117] | Techniques that represent data as a sparse combination of basis functions. | Effective in capturing essential features, good for compressed sensing. | Requires careful selection of basis functions, computationally intensive. | HSI denoising, feature extraction. |
| Deep Residual Networks (ResNets) [118,119] | NNs that use skip connections to mitigate the vanishing gradient problem. | High accuracy, allows for very deep networks, improves training. | Requires significant computational power, complex architecture. | Crop health monitoring, detailed spectral analysis. |
| Generative Adversarial Networks (GANs) [120,121] | NNs that consist of a generator and a discriminator, used for data generation. | Can create high-quality synthetic data, effective for data augmentation. | Difficult to train, prone to instability. | Data augmentation, synthesis of HS images. |

5. Applications

AI-HSI is revolutionizing various fields by providing advanced analytical capabilities and detailed spectral information across numerous bands of the EM spectrum. AI excels across diverse fields like precision agriculture, environmental monitoring, mining and mineralogy, forensic science, medicine, and space operations due to its ability to process vast and complex datasets efficiently. In precision agriculture, AI helps optimize crop yields by analyzing soil health [122], weather patterns [123], and plant growth in real time [124]. For environmental monitoring, it automates the analysis of satellite and sensor data to track climate changes and ecosystems [125,126]. In mining and mineralogy, AI enhances resource identification and extraction processes with precision, reducing the environmental impact [127,128]. In forensic science, AI aids in pattern recognition and evidence analysis, speeding up investigations [129,130]. In the medical field, AI improves diagnostics, treatment planning, and personalized medicine through advanced data analysis and predictive models [131,132]. In space operations, AI supports mission planning, navigation, and autonomous spacecraft control, enabling faster decision-making in unpredictable environments [133,134]. Its versatility and precision make AI a critical tool for advancing efficiency and innovation across these sectors. In this section, a few key applications of AI-HSI are discussed.
(A)
Agriculture
Plant growth can be significantly impacted by factors such as disease, stress, and pest infestations, which may lead to reduced yields, diminished quality, or even a complete halt in growth. The visible and near-infrared (NIR) spectral region has proven effective in detecting specific issues, such as white tip disease and Orobanche cumana parasitism in sunflower plants [135]. Using hyperspectral data, logistic regression models can classify plant health and identify different stages of disease damage, even at pre-symptomatic stages [136]. Additionally, plant stress caused by heavy metal contamination, such as mercury (Hg), has been identified using the NIR spectral region [137].
In cases where diseased areas are small and surrounded by healthy tissue, such as charcoal rot disease in soybean stems, background removal and precise selection of regions of interest (ROIs) are crucial for improving data accuracy [138]. Vegetation indices are also applied in spectral domain studies to analyze the relationship between plant diseases and growth [139]. In recent advancements, 3D-CNN models have been trained on complete spectral bands, eliminating the need for labor-intensive feature selection. Handheld hyperspectral sensors, like the Specim IQ, have simplified data collection through line scan methods. Moreover, DCNNs trained on UAV-captured data have shown high accuracy in detecting diseases such as yellow rust in winter wheat [140]. While DL reduces some intermediate processing steps, many existing studies focus on training models for two primary purposes: (1) accurately detecting or classifying diseases and (2) minimizing preprocessing requirements.
Although real-time hyperspectral applications in plant disease and pest detection are limited, real-time defect detection combined with rapid actuation requires fewer spectral bands to meet processing and data transfer demands. The wavelength range of 400–1000 nm is commonly used in agricultural applications, but the ideal wavelength varies for different purposes [24]. DL offers the necessary speed for such applications. For example, variations of CNN models, such as fused 2D–3D CNNs, have achieved rapid analysis times, sorting 32 coffee beans per image in 0.03 s with the help of robotic arms [141].
Coffee beans are a vital agricultural commodity traded globally, and ensuring their quality is crucial before roasting. Screening for defective beans is a key process, with common defects including black, fermented, moldy, insect-damaged, shell, and broken beans. Among these, insect damage is the most prevalent. Traditionally, coffee bean sorting has been performed manually, which is labor-intensive and prone to errors caused by fatigue, leading to inconsistent quality. Chen et al. introduced a novel approach that integrates an NIR snapshot hyperspectral sensor with DL to develop a multimodal real-time coffee bean defect inspection algorithm (RT-CBDIA) for sorting defective green coffee beans [141]. Three types of CNNs were designed to enable real-time inspection: lean 2D-CNN, 3D-CNN, and a 2D–3D merged CNN. Principal component analysis was employed to select the most important spectral bands for the analysis.
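The PCA-based band-selection step mentioned above can be sketched as follows: bands are ranked by the magnitude of their loadings on the leading principal component. The data, band count, and the choice of keeping five bands are illustrative assumptions, not the parameters used by Chen et al.:

```python
# Sketch: selecting the most informative spectral bands via PCA loadings.
# Most of the variance is injected into a few known bands so the selection
# can be checked; real data would be measured bean spectra.
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_bands = 300, 80
spectra = rng.standard_normal((n_samples, n_bands)) * 0.05  # background noise
informative = [10, 11, 40, 41, 65]                          # illustrative bands
# A shared latent factor drives variance at the informative bands.
spectra[:, informative] += (rng.standard_normal((n_samples, 1))
                            * np.array([1.0, 0.9, 0.8, 0.8, 0.7]))

centered = spectra - spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)

# Score each band by its loading on the first principal component; keep top 5.
scores = np.abs(Vt[0])
selected_bands = np.argsort(scores)[::-1][:5]
```

Restricting the CNN input to such a subset of bands is what makes real-time sorting rates like those reported above feasible.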
AI-HSI is revolutionizing agriculture by providing precise and detailed insights into crop health, soil conditions, and pest infestations (Figure 4) [142,143]. HSI captures a wide spectrum of light beyond what is visible to the human eye, allowing for the detection of subtle variations in plant and soil properties [144]. Neri et al. presented the development and implementation of a real-time AI-assisted push-broom HS system for crop classification [145]. The push-broom HS technique, combined with AI, provided extraordinary detail and precision in crop examination. The system's precision in collecting spectral data was demonstrated through calibration and resolution analysis, with plant leaf classification serving as the test case. The neural-network back end enables continuous HS data analysis for up to 720 ground positions at 50 frames per second [145]. A modified spectral–spatial CNN with compensation for illumination variations was employed to classify high-resolution hyperspectral images of agricultural land in the applied problem of determining vegetation types [146].
(B)
Environmental monitoring
In environmental monitoring, AI enhances the detection of pollutants and the assessment of natural resources [147,148,149]. An example includes the analysis of water quality and the detection of algae blooms. ML algorithms have been trained to recognize the spectral characteristics of algae in HSIs, offering an effective and non-invasive method to monitor water quality [150]. Another application field has been the assessment of air pollution. ML techniques have analyzed HSIs to detect the presence of atmospheric pollutants, aiding in monitoring and assessing air pollution levels and providing valuable data for environmental policies. ML has also played a crucial role in natural disaster management. Predictive models have been trained to assess the impact of fires, floods, and other natural disasters based on HS imagery. This has aided in predicting the evolution of a disaster and planning relief operations [151]. Furthermore, forest analysis has been another area where ML has made a difference. ML algorithms have monitored forest health and prevented fires, contributing to the protection of these vital ecosystems and mitigating the impact of climate change.
Chen et al. proposed a method to detect air pollution by applying an HSI algorithm to visible-light, near-infrared, and far-infrared imagery [152]. Hyperspectral information was assigned to images from monocular, NIR, and thermal imaging, and principal component analysis was performed on hyperspectral images taken at different times to obtain the solar radiation intensity. The Beer–Lambert law and multivariate regression analysis were applied to calculate PM2.5 and PM10 concentrations over the period, which were then compared to the corresponding concentrations provided by the Taiwan Environmental Protection Agency to evaluate the method's accuracy. The study demonstrated that the accuracy in the visible-light band was higher than that in the near-infrared and far-infrared bands, and that visible light proved to be the most convenient band for data acquisition. Consequently, mobile phone cameras will be able to analyze PM2.5 and PM10 concentrations in real time using this algorithm by capturing images, enhancing the convenience and immediacy of air pollution detection.
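The Beer–Lambert step in such pipelines can be sketched as a round trip between aerosol concentration and measured intensity; the extinction coefficient and path length below are illustrative assumptions, not the study's calibration:

```python
# Sketch of the Beer-Lambert law: light traversing an absorbing medium is
# attenuated exponentially, so a measured intensity can be inverted to a
# concentration estimate. Coefficient and path length are illustrative.
import numpy as np

I0 = 1.0          # incident solar intensity (normalized)
epsilon = 0.02    # assumed extinction coefficient per (ug/m^3 * km)
path_km = 1.5     # assumed optical path length

def transmitted(concentration):
    """Beer-Lambert law: I = I0 * exp(-epsilon * c * L)."""
    return I0 * np.exp(-epsilon * concentration * path_km)

def concentration_from_intensity(intensity):
    """Invert Beer-Lambert to recover concentration from measured intensity."""
    return -np.log(intensity / I0) / (epsilon * path_km)

# Round trip: a 35 ug/m^3 aerosol load attenuates light, then is recovered.
I_meas = transmitted(35.0)
c_est = concentration_from_intensity(I_meas)
```

In practice, the per-band estimates from this relation are combined through multivariate regression against reference station data, as in the study above.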
Alboody et al. introduced an innovative Remote Hyperspectral Imaging System integrated into an Unmanned Aquatic Drone (UAD) for sensing and identifying plastic in coastal and freshwater environments (Figure 5) [153]. The system operated in a near-field of view, with the HS camera positioned approximately 45 cm from the water surface. An HSI database was constructed with HS data cubes of various plastics and polymers commonly found as litter in aquatic environments. Using this benchmark database, an in situ spectral analysis characterized the HS reflectance of these items, pinpointing the absorption feature wavelengths for each plastic type. The system's ability to automatically identify different plastic litter types was evaluated by applying several supervised ML approaches to a dataset of representative marine litter image patches. The evaluation showed that the system could classify plastic litter with an overall accuracy of nearly 90%. This research underscored that the Remote Hyperspectral Imaging System, when used alongside the UAD, presents a capable method for detecting plastic waste in aquatic settings.
The conversion of red, green, and blue (RGB) images to hyperspectral images involves transforming the limited color information from traditional red, green, and blue channels into a richer spectral dataset that spans a broader range of wavelengths [154]. While RGB images capture only three spectral bands, HSI records hundreds of narrow spectral bands across the electromagnetic spectrum, providing detailed spectral signatures for each pixel [155]. This conversion process typically uses specialized algorithms, such as DL models or ML techniques, which extract spectral information from RGB data and estimate the missing wavelengths [156]. One approach involves leveraging databases of hyperspectral data to train these algorithms, allowing them to predict hyperspectral signatures from RGB images [157,158]. Principal Component Analysis (PCA), along with other dimensionality reduction techniques, can also aid in estimating hyperspectral bands by identifying key spectral patterns in RGB data [108]. The conversion significantly enhances image analysis, as it enables the detection of subtle material properties and environmental changes that are not visible in standard RGB imaging, making it useful in fields like remote sensing, agriculture, and medical imaging [159].
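A minimal version of such a learned RGB-to-spectrum mapping is a ridge-regularized linear map fitted on paired data. The camera response curves and training spectra in the sketch below are synthetic, illustrative assumptions standing in for the DL models cited:

```python
# Sketch: learning a mapping from RGB triplets to full spectra using paired
# training data. A ridge-regularized linear map stands in for learned DL
# models; response curves and spectra are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_train, n_bands = 1000, 31          # e.g., 400-700 nm in 10 nm steps

# Synthetic smooth training spectra: random mixtures of three basis bumps.
grid = np.linspace(0.0, 1.0, n_bands)
basis = np.stack([np.exp(-((grid - c) / 0.15) ** 2) for c in (0.2, 0.5, 0.8)])
spectra = rng.random((n_train, 3)) @ basis            # (n_train, n_bands)

# Assumed RGB camera response curves: broad bumps centered on B, G, R.
responses = np.stack([np.exp(-((grid - c) / 0.12) ** 2) for c in (0.15, 0.5, 0.85)])
rgb = spectra @ responses.T                           # (n_train, 3)

# Ridge regression: W maps an RGB triplet to a full estimated spectrum.
lam = 1e-3
W = np.linalg.solve(rgb.T @ rgb + lam * np.eye(3), rgb.T @ spectra)

recovered = rgb @ W
rel_err = np.linalg.norm(recovered - spectra) / np.linalg.norm(spectra)
```

Real spectra are not confined to a three-dimensional subspace, which is why the cited DL approaches, trained on large hyperspectral databases, outperform linear maps; the sketch only illustrates the supervised-pairing idea.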
The study conducted by Leung et al. utilized spectral analysis to quantify water pollutants by analyzing images related to biological oxygen demand (BOD) [160]. A total of 2545 images depicting water quality pollution were generated due to the lack of a standardized method for water pollution detection. To address this, a novel snapshot HSI conversion algorithm was developed to perform spectral analysis on traditional RGB images, as shown in Figure 6. To demonstrate the effectiveness of the developed HSI algorithm, two distinct 3D-CNNs were employed to train two separate datasets. One dataset was based on the HSI conversion algorithm (HSI-3DCNN), while the other utilized a traditional RGB dataset (RGB-3DCNN). The images depicting water quality pollution were categorized into three groups—Good, Normal, and Severe—based on the severity of pollution. A comparison was conducted between the HSI and RGB models, focusing on precision, recall, F1-score, and accuracy metrics. The water pollution model’s accuracy was improved from 76% to 80% when the RGB-3DCNN was replaced by the HSI-3DCNN. These results suggested that the HSI conversion algorithm has the potential to significantly enhance the effectiveness of water pollution detection compared with the traditional RGB model [160].
(C)
Mining and mineralogy
Hyperspectral remote sensing has emerged as a crucial tool in mapping lithological variations and mineral alterations across a wide spectrum of ore mineralization settings [161]. This advanced imagery enables the detailed characterization and analysis of the Earth’s surface, offering extensive spectral and spatial data that prove invaluable during the early reconnaissance phases of mineral exploration. The use of hyperspectral data, gathered from both satellite and airborne platforms, has been particularly effective in overcoming common challenges associated with mineral exploration [162].
With the rapid growth in hyperspectral data acquisition from a variety of platforms, the scientific community has responded by developing advanced data processing techniques, many of which leverage AI. Moreover, recent research has highlighted the increasing integration of ML algorithms with traditional image-processing methods and geological surveys. This integration has underscored the growing importance of hyperspectral remote sensing in lithological mapping and mineral prospecting (HLM-MP) [163,164].
The global shift toward green energy has led to a significant rise in the demand for raw materials. Although recycling rates are steadily improving, the discovery of new resources is essential to meet the growing supply needs [165]. Mapping mineral abundances in an environmentally responsible manner calls for minimally invasive and cost-efficient exploration techniques. Geological remote sensing provides an effective solution for mineral resource exploration, with satellite and airborne imaging commonly used for large-scale exploration. However, a crucial gap persists when it comes to mineral mapping at the finer outcrop scale. Addressing this gap is vital for more detailed and precise resource targeting.
Booysen et al. presented a novel approach that combines multiple sensors and multi-scale data acquisition to better understand the complex mineralogy related to lithium and tin mineralization in the Uis pegmatite complex, Namibia [166]. The method was trained using hand samples and ultimately used to generate a three-dimensional (3D) point cloud to map lithium mineralization within an open pit. This enabled the successful identification and mapping of lithium-bearing minerals, such as cookeite and montebrasite, at the outcrop scale. Validation of the approach was conducted using drill-core data, X-ray diffraction (XRD) analysis, and laser-induced breakdown spectroscopy (LIBS) measurements. This technique offered an efficient means of mapping complex terrains while enhancing the monitoring and optimization of ore extraction processes. Additionally, the method was adaptable for application to other minerals of importance to the mining industry [166].
AI-HSI revolutionizes mining and mineralogy by enabling the precise recognition and analysis of minerals in complex geological formations [167]. AI algorithms enhance HSI data, capturing a wide range of wavelengths reflected from mineral surfaces far beyond the visible spectrum. This detailed spectral information allows for accurate mineral recognition and mapping, even in heterogeneous and visually indistinguishable samples [168]. Beyond mining, the same combination of AI and HSI has been applied in defense, where analyzing vast amounts of data across the EM spectrum provides enhanced surveillance and reconnaissance capabilities, including the detection and recognition of targets concealed by camouflage or environmental factors [169].
Mineral processing methods such as flotation are used in mining operations to separate ore into its constituent minerals. Prior knowledge of the types of minerals present in the ore enables these processes to be both more efficient and faster. Human perception can only identify information across three wavelength regions (red, green, and blue), whereas HSI can collect high-resolution spectral data from the visible region up to the NIR. DL techniques can extract and analyze features from HS data, identifying the unique spectral patterns of each mineral. Okada et al. proposed an automatic mineral identification system that combines HSI with DL to determine mineral types before the mineral processing stage [170]. This approach allowed for the rapid, non-destructive recognition of minerals in rocks. Figure 7a–e presents example RGB images of various minerals. The study investigated the feasibility of identifying these minerals using RGB images analyzed through DL, comparing the results with those obtained from HS data analysis. Experimental results demonstrated that applying DL to RGB images of minerals yields an identification accuracy of approximately 30%, whereas analyzing HS data with DL achieves a significantly higher accuracy of over 90% [170].
(D)
Forensic science
Analyzing evidence presents significant challenges due to the complexity, diversity, and often unknown nature of materials found at crime scenes. Forensic science plays a crucial role in examining such evidence, requiring a careful selection of chemical tests, analytical methods, and processing techniques by forensic experts. Ideally, the evidence should be interpreted, analyzed, and evaluated within the original context of the crime scene. In this regard, HSI has proven to be an effective analytical tool, preserving the integrity of samples and objects for repeated and sequential analyses, as well as for re-examination in counter-proof tests [171].
AI is increasingly being integrated into various forensic disciplines, offering valuable support across numerous areas [172,173]. In forensic anthropology, AI has been applied to assist in determining the sex of individuals by using CNNs to analyze dysmorphic sexual traits in 3D skull reconstructions obtained from computed tomography (CT) scans. In forensic dentistry, AI has proven effective in predicting age and gender based on dental features, as well as in the analysis of bite marks [174]. ANNs have demonstrated up to 90% accuracy in age and gender predictions. In forensic pathology, AI is used to enhance the rapid identification of diatom species, which can be crucial in cases involving drowning. CNNs have been employed to classify diatoms in both water samples and human tissue, with an impressive accuracy rate of 95%. Meanwhile, in forensic genetics, AI is harnessed to process the vast amounts of data generated during forensic DNA analysis [175]. One notable application involved using an ANN for DNA profiling from high throughput sequencing data, achieving remarkable precision, with 99.9% accuracy [176].
Forgery involves the creation, alteration, or imitation of writings, objects, or documents and is classified as a white-collar crime. Investigating cases such as forged checks, wills, or altered documents often requires the analysis of the inks used [177]. HSI offers a powerful tool for detecting various materials, including different types of inks. By combining this technology with advanced classifiers, it is possible to accurately identify the specific inks used in a document. The study conducted by Rastogi et al. utilized the UWA Writing Ink Hyperspectral Images (WIHSI) database to perform ink detection, applying three dimensionality-reduction algorithms: Principal Component Analysis (PCA), Factor Analysis, and Independent Component Analysis (ICA). A comparative analysis was then performed, evaluating the results of these methods against existing techniques [178].
AI-HSI significantly enhances forensic science by providing detailed, non-destructive analysis of materials and scenes. HSI captures a wide spectrum of light beyond visible wavelengths, permitting the detection of subtle differences in materials that are imperceptible to the human eye. When integrated with AI, this technology can efficiently analyze and interpret complex spectral data, identifying substances such as drugs, explosives, and biological materials with high precision. It can also be used to detect traces of blood, gunshot residue, or other forensic evidence on various surfaces [179].
Traditionally used in remote sensing [180], HSI is now applied in non-invasive substance categorization, with significant potential in forensic science for classifying substances at crime scenes. Ksiazek et al. focused on blood stain categorization as a case study for evaluating HS data processing methods [181]. Experiments were conducted on a previously untested dataset comprising images of blood and blood-like substances (e.g., ketchup, tomato concentrate, artificial blood). To assess categorization performance, two experimental setups were prepared: Hyperspectral Transductive Classification (HTC), where training and test sets came from the same image, and Hyperspectral Inductive Classification (HIC), where the test set came from a different image. The latter setup presented a more realistic and challenging scenario for forensic investigators. Several DL architectures, including 1D, 2D, and 3D CNNs, an RNN, and a multilayer perceptron (MLP), were tested, and their performance was compared against that of a baseline SVM. Model evaluation used t-SNE and confusion matrix analysis to identify and address model undertraining [181].
Results indicated that in the HTC setup, all models, including the MLP and SVM, performed comparably, achieving an overall accuracy of 98–100% for simpler images and 74–94% for more difficult ones. However, in the more challenging HIC setup, selected DL architectures demonstrated a noteworthy advantage, with the best overall accuracy ranging from 57% to 71%, improving on the baseline by up to 9 percentage points. A detailed analysis of the results for each architecture revealed that per-class errors varied significantly according to the model.
The best-performing models came from two diverse architecture families (3D CNN and RNN), suggesting that the optimal DNN design for HS data remains an open research area [181]. Furthermore, the study underscored the importance of selecting appropriate model architectures based on the specific features of the data and the categorization task. Future research may explore hybrid models or advanced ensembling techniques to further enhance categorization performance in forensic applications. This work contributes valuable insights into the application of DL techniques for HS data, highlighting both their potential and the challenges that remain.
Figure 8 illustrates the categorization maps for the various architectures within the context of the HTC scenario. The predictions for the HTC scenario are presented, with the 1D CNN model applied to the E(1) image and the 2D CNN model applied to the F(1) image. Due to the HTC scenario’s requirement to utilize only a subset of accessible pixels as the test set, the categorization maps exhibited areas of incomplete data. In the first row, misclassifications were identified within the artificial blood class, whereby some pixels were incorrectly designated as tomato concentrate and acrylic paint. The second row displayed more straightforward categorization, with errors predominantly occurring along substance borders, potentially due to the mixing of spectra [181].
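The HTC/HIC distinction above can be sketched with a nearest-centroid classifier on synthetic spectra: in HTC, held-out test pixels come from the same image as the training pixels, while in HIC the test image's acquisition conditions shift the spectra. All data and the shift magnitude here are illustrative assumptions:

```python
# Sketch: transductive (HTC) vs. inductive (HIC) evaluation of a simple
# nearest-centroid classifier on synthetic spectra. A spectral shift in the
# second "image" mimics changed acquisition conditions.
import numpy as np

rng = np.random.default_rng(5)
n_bands, n_per_class = 60, 150

c0 = rng.random(n_bands)               # class 0 reference signature ("blood")
c1 = rng.random(n_bands)               # class 1 signature ("blood-like")

def make_image(shift=0.0):
    """Pixels of both classes, optionally shifted along the class axis."""
    x0 = c0 + 0.05 * rng.standard_normal((n_per_class, n_bands)) + shift * (c1 - c0)
    x1 = c1 + 0.05 * rng.standard_normal((n_per_class, n_bands)) + shift * (c1 - c0)
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return np.vstack([x0, x1]), y

X_a, y_a = make_image()                 # image A: training conditions
X_b, y_b = make_image(shift=0.8)        # image B: shifted conditions

# "Train": class centroids from the first half of image A's pixels per class.
train_idx = np.r_[0:75, 150:225]
test_idx = np.r_[75:150, 225:300]
cents = np.stack([X_a[train_idx][y_a[train_idx] == k].mean(axis=0) for k in (0, 1)])

def accuracy(X, y):
    pred = np.argmin(((X[:, None, :] - cents) ** 2).sum(axis=2), axis=1)
    return (pred == y).mean()

htc_acc = accuracy(X_a[test_idx], y_a[test_idx])   # same-image test pixels
hic_acc = accuracy(X_b, y_b)                       # different-image test pixels
```

The gap between the two accuracies mirrors why the HIC setup in the study above is the harder, more realistic benchmark for forensic deployment.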
(E)
Medical field
In the medical field, AI-HSI contributes to non-invasive diagnostics and tissue analysis [182]. When light interacts with biological tissues, it undergoes complex processes such as scattering and absorption due to structural inhomogeneity and the presence of molecules like hemoglobin, melanin, and water. These interactions alter as diseases progress, affecting the tissue’s absorption, fluorescence, and scattering properties. HSI captures the reflected, fluorescent, and transmitted light from tissues, providing a wealth of quantitative data that can reveal pathological changes at the molecular level. This detailed spectral information enables clinicians to obtain valuable diagnostic insights, improving the accuracy of disease detection and treatment monitoring [183].
By automating complex data processing tasks, AI not only accelerates analysis but also improves accuracy, leading to more informed decision-making and innovative applications across diverse domains [184,185]. A recent multidisciplinary study involving 39 patients examined the potential of retinal imaging methods for analyzing Alzheimer's disease (AD) [186]. The study utilized a user-friendly HS snapshot camera featuring sixteen spectral bands ranging from 0.46 µm to 0.62 µm with a 10 nm bandwidth to quantify amyloid buildup. Additionally, optical coherence tomography (OCT) was used to evaluate the thickness of the retinal nerve fiber layer. Dedicated image pre-processing and ML enabled discrimination between AD patients and healthy subjects, and combining the HS and OCT data produced the best results [186].
For the application of HSI in an in vivo setting, particularly during surgery, enhanced real-time, in vivo, and label-free capabilities are highly desirable. Currently, traditional imaging methods like CT and MRI are used to confirm tumor locations prior to resection surgeries. However, the exact locations, number, and sizes of tumors hidden within organs can remain uncertain [187]. During surgery, the surgeon’s expertise in visual diagnosis, eye-hand coordination, and tactile feedback plays a crucial role. This has led to a growing demand for intraoperative imaging tools. HSI has gained significant attention due to its in vivo, real-time, and label-free nature, making it particularly useful for perioperative applications and therapy evaluation. HSI has demonstrated the ability to differentiate between healthy and cancerous tissues in near-real time during surgical oncology. Combined with advanced classification algorithms, HSI can monitor the surgical environment, delineate tumors [188], guide surgery, identify critical adjacent structures, detect residual tumor cells, and ultimately assist surgeons in making precise decisions during surgery [189].
Ji et al. introduced an HS learning method for snapshot HSI [190]. This method incorporated sampled HS data from a small subarea into a learning algorithm to reconstruct the hypercube. HS learning was based on the concept that a photograph contains detailed spectral information beyond its visual appearance. By sampling a small amount of HS data, the method enabled spectrally informed learning to reconstruct a hypercube from an RGB image without needing full HS measurements. This method attained complete spectroscopic resolution in the hypercube, similar to that of high-resolution scientific spectrometers. Furthermore, HS learning enabled rapid dynamic imaging by utilizing the slow-motion video functionality of regular smartphones, which produces a sequence of multiple RGB images over time [190].
To provide further evidence of the capabilities of dynamic imaging of peripheral microcirculation at a video frame rate, Ji et al. recorded a video at 60 fps for 180 s (temporal resolution = 0.0167 s) [190]. The authors employed the inner eyelid (palpebral conjunctiva) as a model system for peripheral microcirculation in humans, visualizing spatiotemporal hemodynamic variations in the microvessels (Figure 9a,b). The inner eyelid is a highly vascularized and readily accessible peripheral tissue site, receiving blood from the ophthalmic artery, and thus represents a feasible sensing site for a range of diseases and disorders. Figure 9c,d illustrates the peripheral hemodynamic maps of hemoglobin (Hb), oxyhemoglobin (HbO2), and oxygen saturation (sPO2) during the resting state of a healthy adult. These maps demonstrate the capability of the HS learning method to provide detailed, high-resolution spectral information from dynamic video data, highlighting its potential for non-invasive medical diagnostics and monitoring. The ability to capture and analyze such fine-grained spectral data in real time opens new possibilities for applications in medical imaging, environmental monitoring, and beyond, where rapid and accurate spectral analysis is crucial [190].
The aforementioned maps were calculated using two distinct ML frameworks: statistical learning and DL. The sPO2 maps displayed in Figure 9 reveal a spatially intricate network of perfusion patterns within the inner eyelid, a detail not discernible in the original RGB image. The hemodynamic maps of HbO2 and Hb generated through statistical learning and DL exhibited an exceptional level of concordance, indicating the efficacy of the employed hemodynamic parameter extraction techniques [190].
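An oxygen-saturation map in such analyses is conventionally derived pixel-wise from the two chromophore concentrations. A minimal sketch, using the standard oximetry ratio rather than anything quoted from the paper:

```python
def spo2(hbo2, hb):
    """Pixel-wise oxygen saturation from chromophore concentrations:
    sPO2 = HbO2 / (HbO2 + Hb). Returns a fraction in [0, 1]."""
    total = hbo2 + hb
    return hbo2 / total if total > 0 else 0.0

# Illustrative pixel: HbO2 = 95 uM, Hb = 25 uM
print(round(spo2(95.0, 25.0), 2))  # -> 0.79
```

Applying this ratio independently at every pixel of the Hb and HbO2 maps yields the sPO2 map.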
The future of AI-HSI holds immense potential for groundbreaking advancements and applications [56]. For instance, Tsai et al. leveraged HSI and a DL diagnostic model to recognize the stage of esophageal cancer and pinpoint its locations [56]. The model employed a novel algorithm developed in that work, which simulated spectral data from images and integrated it with DL for the classification and analysis of esophageal cancer using a single-shot multibox detector (SSD)-based recognition system. To evaluate the prediction model, a dataset comprising 155 white-light endoscopic (WLI) images and 153 narrow-band endoscopic (NBI) images of esophageal cancer was used. The algorithm performed efficiently, taking just 19 s to predict the outcomes for all 308 test images. The precision of the test results using spectral data was 88% for WLI and 91% for NBI images; using RGB images, it was 83% for WLI and 86% for NBI. The study found that incorporating HSI improved the accuracy of both WLI and NBI predictions by 5%, significantly enhancing the detection method’s predictive accuracy [56].
Figure 9. Illustrative peripheral hemodynamic maps of HbO2, Hb, and sPO2 from a healthy adult at rest. The inner eyelid is highly vascular and connected to the ophthalmic artery, making it a manageable site for imaging [190]. (a) A healthy adult takes a photo with a smartphone while pulling down the inner eyelid. (b) A high-resolution RGB image of the inner eyelid showing the field of view for HS learning-based imaging; microvessels are visible and unobscured by skin pigments. (c) Hemodynamic maps of the inner eyelid generated using statistical learning, based on tissue reflectance spectral modeling from the recovered hypercube. (d) Hemodynamic maps of the inner eyelid produced using DL; the DNN directly outputs HbO2 and Hb values from the RGB input [190].
(F) Space operations
Since the late 1990s, spectroscopy, spectral analysis, and color photometry have been proposed for characterizing the composition of space objects [191,192]. Early works focused on observing spectra from known objects and linking them to material types, with some initial classification methods based on color indexes and spectral features. Color photometry was also suggested for differentiating objects, but not for identifying surface materials or reconstructing attitude motion [193]. Meanwhile, hyperspectral technology and spectrometry have been applied in astronomy, for example in asteroid classification. Additionally, light-curve analysis has been used to reconstruct the attitude and shape of space objects [194]. More recently, ML has been introduced to classify objects based on light curves. HSI has also been explored for close-proximity navigation and target classification on Earth, with extensive use in Earth observation [194].
The rapid advancement of nanosatellite technologies has significantly expanded the availability of satellite-generated data and their applications. Satellite edge computing (SEC) has emerged as a promising solution for performing in-orbit processing of sensed data, reducing the need for extensive terrestrial-satellite communications and enabling mission-critical services. While most SEC research has focused on general computing tasks, Zhu et al. proposed a two-tier collaborative framework specifically tailored for HSI processing [195]. This approach involved the strategic selection of spectral bands from the collected HSI data, transmitting only the most relevant bands for further analysis. An in-depth data analysis was performed to uncover the intricate relationship between band selection and analytic performance. The band selection problem was then framed as a utility maximization challenge, balancing analytic accuracy, energy efficiency, and communication demands. To address this, a novel multi-agent reinforcement learning solution (MaHSI) was introduced, which adapts to the dynamic SEC environment. MaHSI models the complex interactions among spectral bands as collaborations among agents, reducing the exploration space and optimizing decision-making. Experimental results on real-world HSI datasets demonstrated that this method not only surpasses traditional band selection algorithms in accuracy and processing speed but also maximizes utility for satellite missions [195].
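The framing of band selection as utility maximization can be made concrete with a simple greedy baseline. This is not MaHSI (which uses multi-agent reinforcement learning); the utility here, per-band variance as an information proxy minus a transmission cost, is an illustrative assumption.

```python
# Greedy sketch of band selection as utility maximization (a simple
# baseline, NOT the multi-agent RL method MaHSI from the paper):
# score each band by an information proxy (per-band variance)
# penalized by a per-band transmission cost, then keep the best bands.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def greedy_band_selection(band_signals, cost_per_band, budget):
    """band_signals: {band_index: list of pixel values}.
    Keep bands whose utility (variance - cost) is positive, up to a
    budget on the number of bands transmitted to the ground."""
    scores = {b: variance(v) - cost_per_band for b, v in band_signals.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [b for b in ranked if scores[b] > 0][:budget]

bands = {
    0: [0.1, 0.1, 0.1, 0.1],      # flat band: little information
    1: [0.1, 0.9, 0.2, 0.8],      # high-variance band
    2: [0.3, 0.6, 0.4, 0.7],      # moderate variance
}
print(greedy_band_selection(bands, cost_per_band=0.01, budget=2))  # -> [1, 2]
```

A learned policy improves on this by modeling interactions among bands (two individually informative bands may be redundant together), which is precisely what the agent collaboration in MaHSI targets.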
The study conducted by Vasile et al. explored the use of HSI and ML to analyze the surface composition of space objects and estimate their attitude motion [196]. It demonstrated how hyperspectral and multispectral analysis of absorbed, emitted, and reflected light can accurately identify surface materials. A high-fidelity simulation model was introduced and validated through laboratory experiments to test this approach. The paper explains the process of spectral unmixing to estimate the materials on the object’s surface and proposes an ML method to reconstruct attitude motion based on spectral time series.
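The unmixing step can be sketched in its simplest linear form. This conveys the general idea only, not the authors' pipeline: a measured spectrum is modeled as a convex combination of two known endmember spectra, and the abundance of the first is recovered by least squares. The endmember names and values are hypothetical.

```python
# Minimal linear spectral-unmixing sketch: solve
#   spectrum ~= a * e1 + (1 - a) * e2
# for the abundance a of endmember e1 by projecting onto the line
# between the two endmembers, then clipping to the physical range.

def unmix_two(spectrum, e1, e2):
    """Least-squares abundance of e1 in a two-endmember mixture."""
    num = sum((s - b) * (a - b) for s, a, b in zip(spectrum, e1, e2))
    den = sum((a - b) ** 2 for a, b in zip(e1, e2))
    return min(1.0, max(0.0, num / den))

aluminum = [0.8, 0.8, 0.7, 0.7]     # hypothetical endmember spectra
solar_cell = [0.1, 0.2, 0.1, 0.2]
mixed = [0.45, 0.5, 0.4, 0.45]       # exact 50/50 mixture of the two
print(round(unmix_two(mixed, aluminum, solar_cell), 2))  # -> 0.5
```

Real surfaces involve many endmembers and noise, so practical unmixing uses constrained least squares or learned models over a full endmember library.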
Integrating AI into space operations can significantly enhance the speed of response to different events. By processing massive raw HSIs directly on board satellites, the images can be converted into valuable information quickly. This not only accelerates the transfer of information to the ground but also vastly improves the scalability of AI solutions across the globe [23]. Nevertheless, several challenges need to be addressed, including hardware and energy constraints, the efficiency of DL models, the accessibility of reference data, and the need to build trust in AI-based solutions. It is crucial to select AI applications that are unbiased, objective, and interpretable, as this choice affects all aspects of satellite design and operation, particularly for emerging missions. This problem has been addressed by introducing a quantifiable procedure for the empirical evaluation of candidate AI applications intended for on-board operation. To demonstrate its flexibility, the procedure was used to assess AI applications for two fundamentally different missions: the Copernicus Hyperspectral Imaging Mission for the Environment [European Union/European Space Agency (ESA)] and the 6U Intuition-1 satellite. This standardized process has the potential to become an essential means of maximizing the scientific and industrial returns of Earth observation missions through the selection of the most appropriate on-board AI applications [23].
HSI is of great importance for material recognition; however, conventional systems are frequently unwieldy, impeding the development of compact solutions. While prior metasurfaces (MSs) have addressed volume issues, their complex fabrication processes and large footprints remain significant obstacles to further development. Lin et al. introduced a compact snapshot HS imager that integrates meta-optics with the small-data convex/deep (CODE) DL theory (see Figure 10) [197]. The imager incorporates a single multi-wavelength MS chip operating in the visible range (500 nm to 650 nm), which significantly reduces the overall device size. To validate its performance, a 4-band multispectral imaging dataset was used as input. The CODE-driven imaging system was able to generate an 18-band HS data cube with high reliability, requiring only 18 training data points. The seamless integration of multi-resonant MSs with small-data learning theory is expected to pave the way for low-profile, advanced instruments applicable in both fundamental scientific research and real-world scenarios [197].
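The few-to-many nature of this reconstruction, four measured channels expanded into an 18-band cube, can be conveyed with a loose sketch. CODE couples convex optimization with deep learning; piecewise-linear interpolation below only illustrates the band-expansion idea, not the actual method, and the wavelengths and intensities are made up.

```python
# Loose illustration of expanding a few measured channels into a
# denser set of spectral bands by piecewise-linear interpolation.
# (The CODE framework instead reconstructs bands via convex
# optimization and DL; this only conveys the few-to-many idea.)

def interp_bands(wl_meas, vals, wl_out):
    """Linearly interpolate measured channel values onto target bands."""
    out = []
    for w in wl_out:
        pairs = list(zip(wl_meas, vals))
        for (w0, v0), (w1, v1) in zip(pairs, pairs[1:]):
            if w0 <= w <= w1:                 # bracketing channels found
                t = (w - w0) / (w1 - w0)
                out.append(v0 + t * (v1 - v0))
                break
    return out

wl_meas = [500, 550, 600, 650]          # 4 measured channels (nm)
vals = [0.2, 0.6, 0.5, 0.3]             # hypothetical per-channel intensities
wl_out = [500 + 150 * i / 17 for i in range(18)]  # 18 target bands
bands18 = interp_bands(wl_meas, vals, wl_out)
print(len(bands18))  # -> 18
```

Interpolation cannot recover spectral detail between the channels, which is why CODE's learned prior, trained on only 18 examples, is the substantive contribution.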
Figure 10. The off-axis focusing meta-mirror, composed of multi-resonant meta-atoms, images a color object and separates it into four wavelength channels in free space. These four channels are used to create an 18-band HS data cube using convex optimization and DL, following CODE’s small-data learning and imaging theory [197].

6. Challenges and Future Prospects

The integration of AI with HSI presents a series of significant challenges. Firstly, the sheer volume of data produced by HS sensors is enormous, often consisting of hundreds of spectral bands per pixel. This creates substantial storage and processing requirements, which can be a barrier to real-time or near-real-time analysis. The need for high computational power is a critical challenge, particularly when deploying AI models in resource-limited environments [198]. Another challenge lies in the complexity of HS data itself. HSIs capture detailed spectral information that can reveal subtle differences in materials that are not discernible in traditional imaging [199]. However, this complexity requires sophisticated algorithms for effective feature extraction and classification. Traditional ML techniques often struggle with the high dimensionality of HS data, leading to the “curse of dimensionality”, where the performance of models deteriorates as the number of dimensions increases [200].
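The storage pressure is easy to quantify with a back-of-the-envelope calculation; the sensor configuration below (1024 × 1024 pixels, 200 bands, 16-bit samples) is illustrative, not taken from any cited system.

```python
# Back-of-the-envelope hypercube size for an illustrative HS sensor:
# 1024 x 1024 spatial pixels, 200 spectral bands, 16-bit samples.
rows, cols, bands, bits = 1024, 1024, 200, 16
bytes_per_frame = rows * cols * bands * bits // 8
print(bytes_per_frame / 2**20, "MiB per frame")  # -> 400.0 MiB per frame

# At only 1 frame/s this already exceeds a terabyte per hour, which is
# why on-board band selection or compression usually precedes downlink.
per_hour_tib = bytes_per_frame * 3600 / 2**40
print(round(per_hour_tib, 2), "TiB per hour")  # -> 1.37 TiB per hour
```

Contrast this with an RGB frame of the same resolution (3 bands, ~6 MiB): the spectral dimension alone inflates the data rate by nearly two orders of magnitude.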
Furthermore, the lack of labeled HS data poses a significant challenge for training AI models [201]. Creating labeled datasets for HSIs is time-consuming and requires expertise in both the application domain and spectral analysis. This scarcity of annotated data limits the effectiveness of supervised learning techniques, necessitating the development of advanced semi-supervised or unsupervised learning methods. Finally, there is the issue of standardization and interoperability. HSI systems and AI platforms often come from various manufacturers and developers, leading to compatibility issues. The absence of standardized data formats and processing protocols hinders the seamless integration and sharing of data and models across different platforms and applications.
Despite these challenges, the synergy between AI and HSI holds immense potential for future advancements across various fields. One promising direction is the development of more efficient and scalable AI algorithms tailored for HS data. Advances in DL, particularly in CNNs and RNNs, are expected to play a vital role in overcoming the curse of dimensionality and improving the accuracy of HS data analysis [202,203]. Moreover, the evolution of hardware technology, such as graphics processing units (GPUs), specialized AI accelerators, and photonic neural networks (PNNs) [204,205], will facilitate the handling of large HS datasets, enabling faster and more efficient data processing. Cloud computing and edge computing solutions also present opportunities for deploying HSI and AI applications in real time, even in remote or resource-constrained environments.
The development of large-scale annotated HS datasets, along with improved data augmentation techniques, will enhance the training of AI models [206]. Crowdsourcing and citizen science initiatives could play a role in gathering and labeling HS data, expanding the available datasets and improving model robustness. Standardization efforts will be critical to the broader adoption of HSI and AI. Establishing common data formats, protocols, and best practices will facilitate interoperability and integration across different systems and applications. This will enable the creation of more versatile and reusable AI models, accelerating innovation and deployment.
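Data augmentation for spectra often amounts to simple, physically motivated perturbations. A minimal sketch, with illustrative parameter choices not drawn from the cited work: multiplicative brightness jitter mimics illumination changes, and additive Gaussian noise mimics sensor variation.

```python
# Simple spectral-augmentation sketch (illustrative parameter choices):
# generate extra training spectra by multiplicative brightness scaling
# plus additive Gaussian noise, clipping negatives to zero since
# reflectance cannot be negative.
import random

def augment(spectrum, scale_jitter=0.1, noise_sigma=0.02, rng=None):
    rng = rng or random.Random(0)
    scale = 1.0 + rng.uniform(-scale_jitter, scale_jitter)
    return [max(0.0, scale * v + rng.gauss(0.0, noise_sigma)) for v in spectrum]

base = [0.2, 0.5, 0.7, 0.4]                      # one labeled spectrum
augmented = [augment(base, rng=random.Random(s)) for s in range(5)]
print(len(augmented), len(augmented[0]))  # -> 5 4
```

Each augmented copy keeps the original label, so one annotated spectrum yields several training examples, directly easing the labeled-data scarcity discussed above.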
In application domains such as agriculture, environmental monitoring, medical diagnostics, and material science, the integration of AI with HSI is poised to revolutionize practices [207]. For instance, in agriculture, AI-driven HS analysis can lead to more precise crop monitoring and disease detection, significantly enhancing yield and sustainability. In medical diagnostics, it can improve the accuracy of non-invasive diagnostic techniques, leading to earlier and more reliable detection of diseases.

7. Conclusions

The synergy between AI and HSI represents a significant leap forward in data processing and material identification technologies. By integrating AI, particularly DL algorithms, with HSI systems, we can effectively overcome traditional limitations such as bulky hardware, slow processing speeds, and sensitivity to environmental noise. AI-driven methods enable faster and more accurate spectrum reconstruction, enhance noise tolerance, and allow for real-time, high-resolution imaging. This powerful combination not only optimizes existing processes but also unlocks new possibilities across a variety of fields. In remote sensing, it can enhance the precision of environmental monitoring, enabling the detection of subtle changes in land use or vegetation health. In medical diagnostics, this integration can lead to more accurate, non-invasive diagnostic tools capable of detecting diseases at earlier stages. In forensic analysis, it allows for more detailed examination of materials, leading to quicker and more reliable investigations. As AI and HSI technologies continue to evolve, their intersection is poised to drive further innovations, expanding their applications in both scientific research and practical scenarios. The future of this synergy holds the promise of revolutionizing industries, pushing the boundaries of what is possible, and offering precise, efficient, and versatile solutions that were previously unattainable with conventional approaches.

Author Contributions

Conceptualization, S.N.K., N.L.K., I.V.O., A.V.N. and M.A.B.; methodology, A.V.N.; software, I.V.O.; validation, A.V.N.; formal analysis, M.A.B.; investigation, S.N.K., N.L.K. and M.A.B.; resources, A.V.N.; data curation, I.V.O.; writing—original draft preparation, M.A.B. and S.N.K.; writing—review and editing, S.N.K. and N.L.K.; visualization, N.L.K.; supervision, S.N.K., N.L.K. and A.V.N.; project administration, A.V.N.; funding acquisition, A.V.N. All authors have read and agreed to the published version of the manuscript.

Funding

Grant No 70-2023-001317.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data was produced.

Acknowledgments

This work was supported by the Analytical Center for the Government of the Russian Federation (agreement identifier 000000D730324P540002, grant No 70-2023-001317 dated 28.12.2023).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Goetz, A.F.H.; Vane, G.; Solomon, J.E.; Rock, B.N. Imaging Spectrometry for Earth Remote Sensing. Science 1985, 228, 1147–1153. [Google Scholar] [CrossRef] [PubMed]
  2. Gowen, A.A.; O’Donnell, C.P.; Taghizadeh, M.; Cullen, P.J.; Frias, J.M.; Downey, G. Hyperspectral imaging combined with principal component analysis for bruise damage detection on white mushrooms (Agaricus bisporus). J. Chemom. 2008, 22, 259–267. [Google Scholar] [CrossRef]
  3. Stuart, M.B.; Davies, M.; Hobbs, M.J.; Pering, T.D.; McGonigle, A.J.S.; Willmott, J.R. High-Resolution Hyperspectral Imaging Using Low-Cost Components: Application within Environmental Monitoring Scenarios. Sensors 2022, 22, 4652. [Google Scholar] [CrossRef] [PubMed]
  4. Lu, B.; Dao, P.D.; Liu, J.; He, Y.; Shang, J. Recent Advances of Hyperspectral Imaging Technology and Applications in Agriculture. Remote Sens. 2020, 12, 2659. [Google Scholar] [CrossRef]
  5. Amigo, J.M.; Martí, I.; Gowen, A. Chapter 9—Hyperspectral Imaging and Chemometrics: A Perfect Combination for the Analysis of Food Structure, Composition and Quality. In Data Handling in Science and Technology; Marini, F., Ed.; Chemometrics in Food Chemistry; Elsevier: Amsterdam, The Netherlands, 2013; Volume 28, pp. 343–370. [Google Scholar] [CrossRef]
  6. Picot-Clémente, J. Hyperspectral Imaging’s Potential for Transforming the Imaging Field. Laser Focus World. Available online: https://www.laserfocusworld.com/detectors-imaging/article/14289712/hyperspectral-imagings-potential-for-transforming-the-imaging-field (accessed on 9 July 2024).
  7. Sun, D.-W.; Pu, H.; Yu, J. Applications of hyperspectral imaging technology in the food industry. Nat. Rev. Electr. Eng. 2024, 1, 251–263. [Google Scholar] [CrossRef]
  8. Liu, Y.; Pu, H.; Sun, D.-W. Hyperspectral imaging technique for evaluating food quality and safety during various processes: A review of recent applications. Trends Food Sci. Technol. 2017, 69, 25–35. [Google Scholar] [CrossRef]
  9. Lee, W.S. Plant Health Detection and Monitoring. In Hyperspectral Imaging Technology in Food and Agriculture; Park, B., Lu, R., Eds.; Springer: New York, NY, USA, 2015; pp. 275–288. [Google Scholar] [CrossRef]
  10. Li, J.; Rao, X.; Ying, Y. Detection of common defects on oranges using hyperspectral reflectance imaging. Comput. Electron. Agric. 2011, 78, 38–48. [Google Scholar] [CrossRef]
  11. Schmilovitch, Z.; Ignat, T.; Alchanatis, V.; Gatker, J.; Ostrovsky, V.; Felföldi, J. Hyperspectral imaging of intact bell peppers. Biosyst. Eng. 2014, 117, 83–93. [Google Scholar] [CrossRef]
  12. Riccioli, C.; Pérez-Marín, D.; Garrido-Varo, A. Optimizing spatial data reduction in hyperspectral imaging for the prediction of quality parameters in intact oranges. Postharvest Biol. Technol. 2021, 176, 111504. [Google Scholar] [CrossRef]
  13. Mendez, J.; Mendoza, L.; Cruz-Tirado, J.P.; Quevedo, R.; Siche, R. Trends in Application of NIR and Hyperspectral Imaging for Food Authentication. Sci. Agropecu. 2019, 10, 143–161. [Google Scholar] [CrossRef]
  14. Naganathan, G.K.; Cluff, K.; Samal, A.; Calkins, C.; Subbiah, J. Quality Evaluation of Beef and Pork. In Hyperspectral Imaging Technology in Food and Agriculture; Park, B., Lu, R., Eds.; Springer: New York, NY, USA, 2015; pp. 251–273. [Google Scholar] [CrossRef]
  15. Jia, J.; Zheng, X.; Wang, Y.; Chen, Y.; Karjalainen, M.; Dong, S.; Lu, R.; Wang, J.; Hyyppä, J. The effect of artificial intelligence evolving on hyperspectral imagery with different signal-to-noise ratio, spectral and spatial resolutions. Remote Sens. Environ. 2024, 311, 114291. [Google Scholar] [CrossRef]
  16. Malik, P.; Pathania, M.; Rathaur, V.K. Overview of artificial intelligence in medicine. J. Fam. Med. Prim. Care 2019, 8, 2328. [Google Scholar] [CrossRef]
  17. Khalifa, M.; Albadawy, M. AI in diagnostic imaging: Revolutionising accuracy and efficiency. Comput. Methods Programs Biomed. Update 2024, 5, 100146. [Google Scholar] [CrossRef]
  18. Tratt, D.M.; Keim, E.R.; Hall, J.L.; Buckland, K.N.; Volquarts, T.L.; Saad, K.M.; Johnson, P.D. High areal rate longwave-infrared hyperspectral imaging for environmental remote sensing. In Micro- and Nanotechnology Sensors, Systems, and Applications X; SPIE: Orlando, FL, USA, 2018; pp. 171–181. [Google Scholar] [CrossRef]
  19. Gul, N.; Muzaffar, K.; Shah, S.Z.A.; Assad, A.; Makroo, H.A.; Dar, B.N. Deep learning hyperspectral imaging: A rapid and reliable alternative to conventional techniques in the testing of food quality and safety. Qual. Assur. Saf. Crops Foods 2024, 16, 78–97. [Google Scholar] [CrossRef]
  20. Khonina, S.N.; Kazanskiy, N.L.; Efimov, A.R.; Nikonorov, A.V.; Oseledets, I.V.; Skidanov, R.V.; Butt, M.A. A Perspective on the Artificial Intelligence’s Transformative Role in Advancing Diffractive Optics. Iscience 2024, 27, 110270. [Google Scholar] [CrossRef]
  21. Hong, D.; He, W.; Yokoya, N.; Yao, J.; Gao, L.; Zhang, L.; Chanussot, J.; Zhu, X. Interpretable Hyperspectral Artificial Intelligence: When Nonconvex Modeling Meets Hyperspectral Remote Sensing. IEEE Geosci. Remote Sens. Mag. 2021, 9, 52–87. [Google Scholar] [CrossRef]
  22. Jin, H.; Kong, F.; Li, X.; Shen, J. Artificial intelligence in microplastic detection and pollution control. Environ. Res. 2024, 262, 119812. [Google Scholar] [CrossRef]
  23. Wijata, A.M.; Foulon, M.F.; Bobichon, Y.; Vitulli, R.; Celesti, M.; Camarero, R.; Di Cosimo, G.; Gascon, F.; Longépé, N.; Nieke, J.; et al. Taking Artificial Intelligence Into Space Through Objective Selection of Hyperspectral Earth Observation Applications: To bring the “brain” close to the “eyes” of satellite missions. IEEE Geosci. Remote Sens. Mag. 2023, 11, 10–39. [Google Scholar] [CrossRef]
  24. Centorame, L.; Ilari, A.; Del Gatto, A.; Pedretti, E.F. A Systematic Review on Precision Agriculture Applied to Sunflowers, the Role of Hyperspectral Imaging. Comput. Electron. Agric. 2024, 222, 109097. [Google Scholar] [CrossRef]
  25. Hyperspectral Imaging Technology: How It Works|Imec. Available online: https://www.imechyperspectral.com/en/hyperspectral-imaging-technology (accessed on 10 July 2024).
  26. Studier-Fischer, A.; Bressan, M.; Qasim, A.B.; Özdemir, B.; Sellner, J.; Seidlitz, S.; Haney, C.M.; Egen, L.; Michel, M.; Dietrich, M.; et al. Spectral characterization of intraoperative renal perfusion using hyperspectral imaging and artificial intelligence. Sci. Rep. 2024, 14, 17262. [Google Scholar] [CrossRef]
  27. Faqeerzada, M.A.; Kim, Y.N.; Kim, H.; Akter, T.; Kim, H.; Park, M.S.; Kim, M.S.; Baek, I.; Cho, B.K. Hyperspectral imaging system for pre- and post-harvest defect detection in paprika fruit. Postharvest Biol. Technol. 2024, 218, 113151. [Google Scholar] [CrossRef]
  28. Lee, J.-H.; Jang, T.-S.; Yang, H.-S.; Rhee, S.-W. Optical Design of A Compact Imaging Spectrometer for STSAT3. J. Opt. Soc. Korea 2008, 12, 262–268. [Google Scholar] [CrossRef]
  29. Kazanskiy, N.L.; Kharitonov, S.I.; Karsakov, A.V.; Khonina, S.N. Modeling action of a hyperspectrometer based on the Offner scheme within geometric optics. Comput. Opt. 2014, 38, 271–280. [Google Scholar] [CrossRef]
  30. Aviara, N.A.; Liberty, J.T.; Olatunbosun, O.S.; Shoyombo, H.A.; Oyeniyi, S.K. Potential application of hyperspectral imaging in food grain quality inspection, evaluation and control during bulk storage. J. Agric. Food Res. 2022, 8, 100288. [Google Scholar] [CrossRef]
  31. Yoon, J. Hyperspectral Imaging for Clinical Applications. BioChip J. 2022, 16, 1–12. [Google Scholar] [CrossRef]
  32. Gorji, H.T.; Kessel, J.A.S.V.; Haley, B.J.; Husarik, K.; Sonnier, J.; Shahabi, S.M.; Zadeh, H.K.; Chan, D.E.; Qin, J.; Baek, I.; et al. Deep Learning and Multiwavelength Fluorescence Imaging for Cleanliness Assessment and Disinfection in Food Services. Front. Sens. 2022, 3, 97770. [Google Scholar] [CrossRef]
  33. Wieme, J.; Mollazade, K.; Malounas, I.; Zude-Sasse, M.; Zhao, M.; Gowen, A.; Argyropoulos, D.; Fountas, S.; Van Beek, J. Application of hyperspectral imaging systems and artificial intelligence for quality assessment of fruit, vegetables and mushrooms: A review. Biosyst. Eng. 2022, 222, 156–176. [Google Scholar] [CrossRef]
  34. García-Vera, Y.E.; Polochè-Arango, A.; Mendivelso-Fajardo, C.A.; Gutiérrez-Bernal, F.J. Hyperspectral Image Analysis and Machine Learning Techniques for Crop Disease Detection and Identification: A Review. Sustainability 2024, 16, 6064. [Google Scholar] [CrossRef]
  35. Matese, A.; Czarnecki, J.M.P.; Samiappan, S.; Moorhead, R. Are unmanned aerial vehicle-based hyperspectral imaging and machine learning advancing crop science? Trends Plant Sci. 2024, 29, 196–209. [Google Scholar] [CrossRef]
  36. Bassler, M.C.; Stefanakis, M.; Sequeira, I.; Ostertag, E.; Wagner, A.; Bartsch, J.W.; Roeßler, M.; Mandic, R.; Reddmann, E.F.; Lorenz, A.; et al. Comparison of Whiskbroom and Pushbroom darkfield elastic light scattering spectroscopic imaging for head and neck cancer identification in a mouse model. Anal. Bioanal. Chem. 2021, 413, 7363–7383. [Google Scholar] [CrossRef]
  37. Funatomi, T.; Ogawa, T.; Tanaka, K.; Kubo, H.; Caron, G.; Mouaddib, E.M.; Matsushita, Y.; Mukaigawa, Y. Eliminating Temporal Illumination Variations in Whisk-broom Hyperspectral Imaging. Int. J. Comput. Vis. 2022, 130, 1310–1324. [Google Scholar] [CrossRef]
  38. Barbieux, K. Pushbroom Hyperspectral Data Orientation by Combining Feature-Based and Area-Based Co-Registration Techniques. Remote Sens. 2018, 10, 645. [Google Scholar] [CrossRef]
  39. Kim, J.-I.; Chi, J.; Masjedi, A.; Flatt, J.E.; Crawford, M.M.; Habib, A.F.; Lee, J.; Kim, H.C. High-resolution hyperspectral imagery from pushbroom scanners on unmanned aerial systems. Geosci. Data J. 2022, 9, 221–234. [Google Scholar] [CrossRef]
  40. Rakib, F.; Al-Saad, K.; Ustaoglu, S.G.; Ullah, E.; Mall, R.; Thompson, R.; Abdelalim, E.M.; Ahmed, T.; Severcan, F.; Ali, M.H. Fourier Transform Infrared Imaging—A Novel Approach to Monitor Bio Molecular Changes in Subacute Mild Traumatic Brain Injury. Brain Sci. 2021, 11, 918. [Google Scholar] [CrossRef]
  41. Govari, M.; Tryfinopoulou, P.; Panagou, E.Z.; Nychas, G.-J.E. Application of Fourier Transform Infrared (FT-IR) Spectroscopy, Multispectral Imaging (MSI) and Electronic Nose (E-Nose) for the Rapid Evaluation of the Microbiological Quality of Gilthead Sea Bream Fillets. Foods 2022, 11, 2356. [Google Scholar] [CrossRef]
  42. Liu, R.; Xiao, Y.; Wang, Z.; Ma, J.; Tian, X. Scanning-based compressive hyperspectral imaging via spectral-coded illumination. Opt. Lett. 2023, 48, 3387–3390. [Google Scholar] [CrossRef]
  43. Wang, Y.W.; Reder, N.P.; Kang, S.; Glaser, A.K.; Liu, J.T.C. Multiplexed Optical Imaging of Tumor-Directed Nanoparticles: A Review of Imaging Systems and Approaches. Nanotheranostics 2017, 1, 369–388. [Google Scholar] [CrossRef]
  44. Guenot, D.; Alexander, J.; Selvander, M. Compact snapshot hyperspectral camera for ophthalmology. Opt. Contin. 2024, 3, 565–573. [Google Scholar] [CrossRef]
  45. Pieters, O.; De Swaef, T.; Lootens, P.; Stock, M.; Roldán-Ruiz, I.; Wyffels, F. Limitations of snapshot hyperspectral cameras to monitor plant response dynamics in stress-free conditions. Comput. Electron. Agric. 2020, 179, 105825. [Google Scholar] [CrossRef]
  46. Kawakami, R.; Matsushita, Y.; Wright, J.; Ben-Ezra, M.; Tai, Y.W.; Ikeuchi, K. High-Resolution Hyperspectral Imaging via Matrix Factorization. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011. [Google Scholar]
  47. Jullien, A.; Pascal, R.; Bortolozzo, U.; Forget, N.; Residori, S. High-resolution hyperspectral imaging with cascaded liquid crystal cells. Optica 2017, 4, 400–405. [Google Scholar] [CrossRef]
  48. Tegdan, J.; Ekehaug, S.; Hansen, I.M.; Aas, L.M.S.; Steen, K.J.; Pettersen, R.; Beuchel, F.; Camus, L. Underwater hyperspectral imaging for environmental mapping and monitoring of seabed habitats. In Proceedings of the OCEANS 2015, Genova, Italy, 18–21 May 2015; pp. 1–6. [Google Scholar] [CrossRef]
  49. Li, J.; Li, Q.; Wang, F.; Liu, F. Hyperspectral Redundancy Detection and Modeling with Local Hurst Exponent. Phys. A Stat. Mech. Its Appl. 2022, 592, 126830. [Google Scholar] [CrossRef]
  50. Mishra, P.; Karami, A.; Nordon, A.; Rutledge, D.N.; Roger, J.-M. Automatic de-noising of close-range hyperspectral images with a wavelength-specific shearlet-based image noise reduction method. Sens. Actuators B Chem. 2019, 281, 1034–1044. [Google Scholar] [CrossRef]
  51. Ruiz Hidalgo, D.; Bacca Cortés, B.; Caicedo Bravo, E. Dimensionality reduction of hyperspectral images of vegetation and crops based on self-organized maps. Inf. Process. Agric. 2021, 8, 310–327. [Google Scholar] [CrossRef]
  52. Janga, B.; Asamani, G.P.; Sun, Z.; Cristea, N. A Review of Practical AI for Remote Sensing in Earth Sciences. Remote Sens. 2023, 15, 4112. [Google Scholar] [CrossRef]
  53. Talaviya, T.; Shah, D.; Patel, N.; Yagnik, H.; Shah, M. Implementation of artificial intelligence in agriculture for optimisation of irrigation and application of pesticides and herbicides. Artif. Intell. Agric. 2020, 4, 58–73. [Google Scholar] [CrossRef]
  54. Göndöcs, D.; Dörfler, V. AI in medical diagnosis: AI prediction & human judgment. Artif. Intell. Med. 2024, 149, 102769. [Google Scholar] [CrossRef]
  55. Tironi, M.; Rivera Lisboa, D.I. Artificial intelligence in the new forms of environmental governance in the Chilean State: Towards an eco-algorithmic governance. Technol. Soc. 2023, 74, 102264. [Google Scholar] [CrossRef]
  56. Tsai, C.-L.; Mukundan, A.; Chung, C.S.; Chen, Y.H.; Wang, Y.K.; Chen, T.H.; Tseng, Y.S.; Huang, C.W.; Wu, I.C.; Wang, H.C. Hyperspectral Imaging Combined with Artificial Intelligence in the Early Detection of Esophageal Cancer. Cancers 2021, 13, 4593. [Google Scholar] [CrossRef]
  57. Cervantes, J.; Garcia-Lamont, F.; Rodríguez-Mazahua, L.; Lopez, A. A comprehensive survey on support vector machine classification: Applications, challenges and trends. Neurocomputing 2020, 408, 189–215. [Google Scholar] [CrossRef]
  58. Simon, S.M.; Glaum, P.; Valdovinos, F.S. Interpreting random forest analysis of ecological models to move from prediction to explanation. Sci. Rep. 2023, 13, 3881. [Google Scholar] [CrossRef]
  59. Uddin, S.; Haque, I.; Lu, H.; Moni, M.A.; Gide, E. Comparative performance analysis of K-nearest neighbour (KNN) algorithm and its different variants for disease prediction. Sci. Rep. 2022, 12, 6256. [Google Scholar] [CrossRef] [PubMed]
  60. Murtagh, F. Multilayer Perceptrons for Classification and Regression. Neurocomputing 1991, 2, 183–197. [Google Scholar] [CrossRef]
  61. Gupta, V.; Mishra, V.K.; Singhal, P.; Kumar, A. An Overview of Supervised Machine Learning Algorithm. In Proceedings of the 2022 11th International Conference on System Modeling & Advancement in Research Trends (SMART), Moradabad, India, 16–17 December 2022; pp. 87–92. [Google Scholar] [CrossRef]
  62. Sinaga, K.P.; Yang, M.-S. Unsupervised K-Means Clustering Algorithm. IEEE Access 2020, 8, 80716–80727. [Google Scholar] [CrossRef]
  63. Gothane, S.; Raju, K.S.; Tara, S.; Devika, P. Unsupervised learning towards hierarchical clustering. AIP Conf. Proc. 2023, 2477, 030049. [Google Scholar] [CrossRef]
  64. Howley, T.; Madden, M.G.; O’Connell, M.-L.; Ryder, A.G. The effect of principal component analysis on machine learning accuracy with high-dimensional spectral data. Knowl.-Based Syst. 2006, 19, 363–370. [Google Scholar] [CrossRef]
  65. Sarker, I.H. Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Comput. Sci. 2021, 2, 160. [Google Scholar] [CrossRef]
  66. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef]
  67. Gao, Q.; Lim, S.; Jia, X. Hyperspectral Image Classification Using Convolutional Neural Networks and Multiple Feature Learning. Remote Sens. 2018, 10, 299. [Google Scholar] [CrossRef]
  68. Firsov, N.A.; Podlipnov, V.V.; Ivliev, N.A.; Ryskova, D.D.; Pirogov, A.V.; Muzyka, A.A.; Makarov, A.R.; Lobanov, V.E.; Platonov, V.I.; Babichev, A.N.; et al. Ensembles of spectral-spatial convolutional neural network models for classifying soil types in hyperspectral images. Comput. Opt. 2023, 47, 795–805. [Google Scholar] [CrossRef]
  69. Sherstinsky, A. Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) network. Phys. Nonlinear Phenom. 2020, 404, 132306. [Google Scholar] [CrossRef]
  70. Shakya, A.K.; Pillai, G.; Chakrabarty, S. Reinforcement learning algorithms: A brief survey. Expert Syst. Appl. 2023, 231, 120495. [Google Scholar] [CrossRef]
  71. Comon, P. Independent component analysis, A new concept? Signal Process. 1994, 36, 287–314. [Google Scholar] [CrossRef]
  72. Berahmand, K.; Daneshfar, F.; Salehi, E.S.; Li, Y.; Xu, Y. Autoencoders and their applications in machine learning: A survey. Artif. Intell. Rev. 2024, 57, 28. [Google Scholar] [CrossRef]
  73. Sigger, N.; Vien, Q.-T.; Nguyen, S.V.; Tozzi, G.; Nguyen, T.T. Unveiling the potential of diffusion model-based framework with transformer for hyperspectral image classification. Sci. Rep. 2024, 14, 8438. [Google Scholar] [CrossRef]
  74. Sarker, I.H. Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions. SN Comput. Sci. 2021, 2, 420. [Google Scholar] [CrossRef]
  75. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  76. Fink, O.; Wang, Q.; Svensén, M.; Dersin, P.; Lee, W.-J.; Ducoffe, M. Potential, challenges and future directions for deep learning in prognostics and health management applications. Eng. Appl. Artif. Intell. 2020, 92, 103678. [Google Scholar] [CrossRef]
  77. Joksimovic, S.; Ifenthaler, D.; Marrone, R.; De Laat, M.; Siemens, G. Opportunities of artificial intelligence for supporting complex problem-solving: Findings from a scoping review. Comput. Educ. Artif. Intell. 2023, 4, 100138. [Google Scholar] [CrossRef]
  78. Aggarwal, A.; Mittal, M.; Battineni, G. Generative adversarial network: An overview of theory and applications. Int. J. Inf. Manag. Data Insights 2021, 1, 100004. [Google Scholar] [CrossRef]
  79. Scabini, L.F.; Bruno, O.M. Structure and Performance of Fully Connected Neural Networks: Emerging Complex Network Properties. Phys. A Stat. Mech. Its Appl. 2023, 615, 128585. [Google Scholar] [CrossRef]
  80. Wang, C.; Yao, C.; Zhao, S.; Zhao, S.; Li, Y. A Comparative Study of a Fully-Connected Artificial Neural Network and a Convolutional Neural Network in Predicting Bridge Maintenance Costs. Appl. Sci. 2022, 12, 3595. [Google Scholar] [CrossRef]
  81. Ching, W.P.; Abdullah, S.S.; Shapiai, M.I.; Islam, A.K.M.M. Performance Enhancement of Alzheimer’s Disease Diagnosis Using Generative Adversarial Network. J. Adv. Res. Appl. Sci. Eng. Technol. 2025, 45, 191–201. [Google Scholar] [CrossRef]
  82. Pokkuluri, K.S.; NSSSN, U.D.N.; Margala, M.; Chakrabarti, P. Enhancing Image Segmentation Accuracy using Deep Learning Techniques. J. Adv. Res. Appl. Sci. Eng. Technol. 2025, 49, 139–148. [Google Scholar] [CrossRef]
  83. Duodu, N.Y.; Patel, W.D.; Koyuncu, H. Advancements in Telehealth: Enhancing Breast Cancer Detection and Health Automation through Smart Integration of IoT and CNN Deep Learning in Residential and Healthcare Settings. J. Adv. Res. Appl. Sci. Eng. Technol. 2025, 45, 214–226. [Google Scholar] [CrossRef]
  84. Duran, B.; Emory, D.; Azam, Y.E.; Linzell, D.G. A novel CNN architecture for robust structural damage identification via strain measurements and its validation via full-scale experiments. Measurement 2025, 239, 115393. [Google Scholar] [CrossRef]
  85. Guo, Y.-T.; Li, Q.-Q.; Liang, C.-S. The rise of nonnegative matrix factorization: Algorithms and applications. Inf. Syst. 2024, 123, 102379. [Google Scholar] [CrossRef]
  86. Lee, D.D.; Seung, H.S. Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401, 788–791. [Google Scholar] [CrossRef]
  87. Olshausen, B.A.; Field, D.J. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Res. 1997, 37, 3311–3325. [Google Scholar] [CrossRef]
  88. Yin, J.; Qi, C.; Chen, Q.; Qu, J. Spatial-Spectral Network for Hyperspectral Image Classification: A 3-D CNN and Bi-LSTM Framework. Remote Sens. 2021, 13, 2353. [Google Scholar] [CrossRef]
  89. Ma, D.; He, F.; Yue, Y.; Guo, R.; Zhao, T.; Wang, M. Graph convolutional networks for street network analysis with a case study of urban polycentricity in Chinese cities. Int. J. Geogr. Inf. Sci. 2024, 38, 931–955. [Google Scholar] [CrossRef]
  90. Zhang, W.; Song, H.; He, X.; Huang, L.; Zhang, X.; Zheng, J.; Shen, W.; Hao, X.; Liu, X. Deeply learned broadband encoding stochastic hyperspectral imaging. Light Sci. Appl. 2021, 10, 108. [Google Scholar] [CrossRef] [PubMed]
  91. Zou, C.; Huang, X. Hyperspectral image super-resolution combining with deep learning and spectral unmixing. Signal Process. Image Commun. 2020, 84, 115833. [Google Scholar] [CrossRef]
  92. Huang, Q.; Li, W.; Hu, T.; Tao, R. Hyperspectral Image Super-resolution Using Generative Adversarial Network and Residual Learning. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 3012–3016. [Google Scholar] [CrossRef]
  93. Bhattacharya, S.; Remane, K.; Kindel, B.; Tang, G. Spectral Super-Resolution for Hyperspectral Image Reconstruction Using Dictionary and Machine Learning. In Proceedings of the IGARSS 2022—2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 1764–1767. [Google Scholar] [CrossRef]
  94. Chen, C.; Wang, Y.; Zhang, N.; Zhang, Y.; Zhao, Z. A Review of Hyperspectral Image Super-Resolution Based on Deep Learning. Remote Sens. 2023, 15, 2853. [Google Scholar] [CrossRef]
  95. De Lucia, G.; Lapegna, M.; Romano, D. Towards explainable AI for hyperspectral image classification in Edge Computing environments. Comput. Electr. Eng. 2022, 103, 108381. [Google Scholar] [CrossRef]
  96. Kumar, D.; Kumar, D. Hyperspectral Image Classification Using Deep Learning Models: A Review. J. Phys. Conf. Ser. 2021, 1950, 012087. [Google Scholar] [CrossRef]
  97. Bhavatarini, N.; Akash, B.N.; Avinash, A.R.; Akshay, H.M. Object Detection and Classification of Hyperspectral Images Using K-NN. In Proceedings of the 2023 Second International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT), Trichirappalli, India, 5–7 April 2023. [Google Scholar]
  98. Makarenko, M.; Burguete-Lopez, A.; Wang, Q.; Getman, F.; Giancola, S.; Ghanem, B.; Fratalocchi, A. Real-time Hyperspectral Imaging in Hardware via Trained Metasurface Encoders. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 12682–12692. [Google Scholar] [CrossRef]
  99. Saha, D.; Manickavasagan, A. Machine learning techniques for analysis of hyperspectral images to determine quality of food products: A review. Curr. Res. Food Sci. 2021, 4, 28–44. [Google Scholar] [CrossRef]
  100. Li, J.; Wang, H.; Zhang, A.; Liu, Y. Semantic Segmentation of Hyperspectral Remote Sensing Images Based on PSE-UNet Model. Sensors 2022, 22, 9678. [Google Scholar] [CrossRef]
  101. Erturk, A.; Erturk, S. Unsupervised Segmentation of Hyperspectral Images Using Modified Phase Correlation. IEEE Geosci. Remote Sens. Lett. 2006, 3, 527–531. [Google Scholar] [CrossRef]
  102. Albawi, S.; Mohammed, T.A.; Al-Zawi, S. Understanding of a convolutional neural network. In Proceedings of the 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey, 21–23 August 2017; pp. 1–6. [Google Scholar] [CrossRef]
  103. Krichen, M. Convolutional Neural Networks: A Survey. Computers 2023, 12, 151. [Google Scholar] [CrossRef]
  104. Shmilovici, A. Support Vector Machines. In Data Mining and Knowledge Discovery Handbook; Maimon, O., Rokach, L., Eds.; Springer US: Boston, MA, USA, 2005; pp. 257–276. [Google Scholar] [CrossRef]
  105. Mercier, G.; Lennon, M. Support vector machines for hyperspectral image classification with spectral-based kernels. In Proceedings of the IGARSS 2003, 2003 IEEE International Geoscience and Remote Sensing Symposium. Proceedings (IEEE Cat. No.03CH37477), Toulouse, France, 21–25 July 2003; Volume 1, pp. 288–290. [Google Scholar] [CrossRef]
  106. Zhang, Y.; Cao, G.; Li, X.; Wang, B.; Fu, P. Active Semi-Supervised Random Forest for Hyperspectral Image Classification. Remote Sens. 2019, 11, 2974. [Google Scholar] [CrossRef]
  107. Joelsson, S.R.; Benediktsson, J.A.; Sveinsson, J.R. Random forest classifiers for hyperspectral data. In Proceedings of the 2005 IEEE International Geoscience and Remote Sensing Symposium, IGARSS ’05, Seoul, Republic of Korea, 29 July 2005; p. 4. [Google Scholar] [CrossRef]
  108. Farrugia, J.; Griffin, S.; Valdramidis, V.P.; Camilleri, K.; Falzon, O. Principal component analysis of hyperspectral data for early detection of mould in cheeselets. Curr. Res. Food Sci. 2021, 4, 18–27. [Google Scholar] [CrossRef] [PubMed]
  109. Deepa, P.; Thilagavathi, K. Feature extraction of hyperspectral image using principal component analysis and folded-principal component analysis. In Proceedings of the 2015 2nd International Conference on Electronics and Communication Systems (ICECS), Coimbatore, India, 26–27 February 2015; pp. 656–660. [Google Scholar] [CrossRef]
  110. Song, W.; Li, S.; Kang, X.; Huang, K. Hyperspectral image classification based on KNN sparse representation. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 2411–2414. [Google Scholar] [CrossRef]
  111. Cariou, C.; Moan, S.L.; Chehdi, K. Improving K-Nearest Neighbor Approaches for Density-Based Pixel Clustering in Hyperspectral Remote Sensing Images. Remote Sens. 2020, 12, 3745. [Google Scholar] [CrossRef]
  112. Dhingra, S.; Kumar, D. Hyperspectral image classification using meta-heuristics and artificial neural network. J. Inf. Optim. Sci. 2022, 43, 2167–2179. [Google Scholar] [CrossRef]
  113. Mou, L.; Ghamisi, P.; Zhu, X.X. Deep Recurrent Neural Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655. [Google Scholar] [CrossRef]
  114. Jia, S.; Jiang, S.; Zhang, S.; Xu, M.; Jia, X. Graph-in-Graph Convolutional Network for Hyperspectral Image Classification. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 1157–1171. [Google Scholar] [CrossRef] [PubMed]
  115. Hong, D.; Gao, L.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph Convolutional Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5966–5978. [Google Scholar] [CrossRef]
  116. Akhtar, N.; Shafait, F.; Mian, A. Sparse Spatio-spectral Representation for Hyperspectral Image Super-resolution. In Computer Vision—ECCV 2014; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 63–78. [Google Scholar] [CrossRef]
  117. Duan, Y.; Wang, N.; Zhang, Y.; Song, C. Tensor-Based Sparse Representation for Hyperspectral Image Reconstruction Using RGB Inputs. Mathematics 2024, 12, 708. [Google Scholar] [CrossRef]
  118. Zhong, Z.; Li, J.; Ma, L.; Jiang, H.; Zhao, H. Deep residual networks for hyperspectral image classification. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 1824–1827. [Google Scholar] [CrossRef]
  119. Tao, Y.; Bao, J.; Liu, Q.; Liu, L.; Zhu, J. Deep residual network enabled smart hyperspectral image analysis and its application to monitoring moisture, size distribution and contents of four bioactive compounds of granules in the fluid-bed granulation process of Guanxinning tablets. Spectrochim. Acta. A Mol. Biomol. Spectrosc. 2023, 287, 122083. [Google Scholar] [CrossRef]
  120. Yu, Z.; Cui, W. Robust hyperspectral image classification using generative adversarial networks. Inf. Sci. 2024, 666, 120452. [Google Scholar] [CrossRef]
  121. Zhan, Y.; Wang, Y.; Yu, X. Semisupervised hyperspectral image classification based on generative adversarial networks and spectral angle distance. Sci. Rep. 2023, 13, 22019. [Google Scholar] [CrossRef]
  122. Awais, M.; Naqvi, S.M.Z.A.; Zhang, H.; Li, L.; Zhang, W.; Awwad, F.A.; Ismail, E.A.; Khan, M.I.; Raghavan, V.; Hu, J. AI and machine learning for soil analysis: An assessment of sustainable agricultural practices. Bioresour. Bioprocess. 2023, 10, 90. [Google Scholar] [CrossRef] [PubMed]
  123. Dewitte, S.; Cornelis, J.P.; Müller, R.; Munteanu, A. Artificial Intelligence Revolutionises Weather Forecast, Climate Monitoring and Decadal Prediction. Remote Sens. 2021, 13, 3209. [Google Scholar] [CrossRef]
  124. Javaid, M.; Haleem, A.; Khan, I.H.; Suman, R. Understanding the potential applications of Artificial Intelligence in Agriculture Sector. Adv. Agrochem 2023, 2, 15–30. [Google Scholar] [CrossRef]
  125. Konya, A.; Nematzadeh, P. Recent applications of AI to environmental disciplines: A review. Sci. Total Environ. 2024, 906, 167705. [Google Scholar] [CrossRef]
  126. Yin, X.; Li, J.; Kadry, S.N.; Sanz-Prieto, I. Artificial intelligence assisted intelligent planning framework for environmental restoration of terrestrial ecosystems. Environ. Impact Assess. Rev. 2021, 86, 106493. [Google Scholar] [CrossRef]
  127. Gomez-Flores, A.; Ilyas, S.; Heyes, G.W.; Kim, H. A critical review of artificial intelligence in mineral concentration. Miner. Eng. 2022, 189, 107884. [Google Scholar] [CrossRef]
  128. Rangel Gavidia, J.C.; Chinelatto, G.F.; Basso, M.; da Ponte Souza, J.P.; Soltanmohammadi, R.; Vidal, A.C.; Goldstein, R.H.; Mohammadizadeh, S. Utilizing integrated artificial intelligence for characterizing mineralogy and facies in a pre-salt carbonate reservoir, Santos Basin, Brazil, using cores, wireline logs, and multi-mineral petrophysical evaluation. Geoenergy Sci. Eng. 2023, 231, 212303. [Google Scholar] [CrossRef]
  129. Sankhla, M.S.; Kumar, N. (Eds.) Artificial Intelligence in Forensic Science: An Emerging Technology in Criminal Investigation Systems; CRC Press: Boca Raton, FL, USA, 2024. [Google Scholar]
  130. Tynan, P. The Integration and Implications of Artificial Intelligence in Forensic Science. Forensic Sci. Med. Pathol. 2024. [Google Scholar] [CrossRef]
  131. Bindra, S.; Jain, R. Artificial intelligence in medical science: A review. Ir. J. Med. Sci. 2024, 193, 1419–1429. [Google Scholar] [CrossRef]
  132. Xu, N.; Yang, D.; Arikawa, K.; Bai, C. Application of Artificial Intelligence in Modern Medicine. Clin. eHealth 2023, 6, 130–137. [Google Scholar] [CrossRef]
  133. Venkataramanan, V.; Modi, A.; Mistry, K. AI and Robots Impact on Space Exploration. Adv. Astronaut. Sci. Technol. 2024. [Google Scholar] [CrossRef]
  134. Russo, A.; Lax, G. Using Artificial Intelligence for Space Challenges: A Survey. Appl. Sci. 2022, 12, 5106. [Google Scholar] [CrossRef]
  135. Atsmon, G.; Nehurai, O.; Kizel, F.; Eizenberg, H.; Nisim Lati, R. Hyperspectral imaging facilitates early detection of Orobanche cumana below-ground parasitism on sunflower under field conditions. Comput. Electron. Agric. 2022, 196, 106881. [Google Scholar] [CrossRef]
  136. Appeltans, S.; Pieters, J.G.; Mouazen, A.M. Detection of leek white tip disease under field conditions using hyperspectral proximal sensing and supervised machine learning. Comput. Electron. Agric. 2021, 190, 106453. [Google Scholar] [CrossRef]
  137. Yu, K.; Fang, S.; Zhao, Y. Heavy metal Hg stress detection in tobacco plant using hyperspectral sensing and data-driven machine learning methods. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2021, 245, 118917. [Google Scholar] [CrossRef]
  138. Nagasubramanian, K.; Jones, S.; Singh, A.K.; Sarkar, S.; Singh, A.; Ganapathysubramanian, B. Plant disease identification using explainable 3D deep learning on hyperspectral images. Plant Methods 2019, 15, 98. [Google Scholar] [CrossRef]
  139. Xue, B.; Tian, L.; Wang, Z.; Wang, X.; Yao, X.; Zhu, Y.; Cao, W.; Cheng, T. Quantification of rice spikelet rot disease severity at organ scale with proximal imaging spectroscopy. Precis. Agric. 2023, 24, 1049–1071. [Google Scholar] [CrossRef]
  140. Zhang, X.; Han, L.; Dong, Y.; Shi, Y.; Huang, W.; Han, L.; Gonzalez-Moreno, P.; Ma, H.; Ye, H.; Sobeih, T. A Deep Learning-Based Approach for Automated Yellow Rust Disease Detection from High-Resolution Hyperspectral UAV Images. Remote Sens. 2019, 11, 1554. [Google Scholar] [CrossRef]
  141. Chen, S.-Y.; Chiu, M.-F.; Zou, X.-W. Real-time defect inspection of green coffee beans using NIR snapshot hyperspectral imaging. Comput. Electron. Agric. 2022, 197, 106970. [Google Scholar] [CrossRef]
  142. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef]
  143. Khalid, M.M.; Karan, O. Deep Learning for Plant Disease Detection. Int. J. Math. Stat. Comput. Sci. 2024, 2, 75–84. [Google Scholar] [CrossRef]
  144. Moghadam, P.; Ward, D.; Goan, E.; Jayawardena, S.; Sikka, P.; Hernandez, E. Plant Disease Detection Using Hyperspectral Imaging. In Proceedings of the 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Sydney, NSW, Australia, 29 November–1 December 2017. [Google Scholar]
  145. Neri, I.; Caponi, S.; Bonacci, F.; Clementi, G.; Cottone, F.; Gammaitoni, L.; Figorilli, S.; Ortenzi, L.; Aisa, S.; Pallottino, F.; et al. Real-Time AI-Assisted Push-Broom Hyperspectral System for Precision Agriculture. Sensors 2024, 24, 344. [Google Scholar] [CrossRef] [PubMed]
  146. Firsov, N.A.; Podlipnov, V.; Ivliev, N.; Nikolaev, P.; Mashkov, S.; Ishkin, P.; Skidanov, R.; Nikonorov, A. Neural network-aided classification of hyperspectral vegetation images with a training sample generated using an adaptive vegetation index. Comput. Opt. 2021, 45, 887–896. [Google Scholar] [CrossRef]
  147. Faltynkova, A.; Johnsen, G.; Wagner, M. Hyperspectral imaging as an emerging tool to analyze microplastics: A systematic review and recommendations for future development. Microplastics Nanoplastics 2021, 1, 13. [Google Scholar] [CrossRef]
  148. Mukundan, A.; Huang, C.-C.; Men, T.-C.; Lin, F.-C.; Wang, H.-C. Air Pollution Detection Using a Novel Snap-Shot Hyperspectral Imaging Technique. Sensors 2022, 22, 6231. [Google Scholar] [CrossRef] [PubMed]
  149. Lechevallier, P.; Villez, K.; Felsheim, C.; Rieckermann, J. Towards non-contact pollution monitoring in sewers with hyperspectral imaging. Environ. Sci. Water Res. Technol. 2024, 10, 1160–1170. [Google Scholar] [CrossRef]
  150. Ma, T.; Zhang, D.; Li, X.; Huang, Y.; Zhang, L.; Zhu, Z.; Sun, X.; Lan, Z.; Guo, W. Hyperspectral Remote Sensing Technology for Water Quality Monitoring: Knowledge Graph Analysis and Frontier trend. Front. Environ. Sci. 2023, 11, 1133325. [Google Scholar] [CrossRef]
  151. Akhyar, A.; Zulkifley, M.A.; Lee, J.; Song, T.; Han, J.; Cho, C.; Hyun, S.; Son, Y.; Hong, B.W. Deep artificial intelligence applications for natural disaster management systems: A methodological review. Ecol. Indic. 2024, 163, 112067. [Google Scholar] [CrossRef]
  152. Chen, C.-W.; Tseng, Y.-S.; Mukundan, A.; Wang, H.-C. Air Pollution: Sensitive Detection of PM2.5 and PM10 Concentration Using Hyperspectral Imaging. Appl. Sci. 2021, 11, 4543. [Google Scholar] [CrossRef]
  153. Alboody, A.; Vandenbroucke, N.; Porebski, A.; Sawan, R.; Viudes, F.; Doyen, P.; Amara, R. A New Remote Hyperspectral Imaging System Embedded on an Unmanned Aquatic Drone for the Detection and Identification of Floating Plastic Litter Using Machine Learning. Remote Sens. 2023, 15, 3455. [Google Scholar] [CrossRef]
  154. Zhang, J.; Su, R.; Fu, Q.; Ren, W.; Heide, F.; Nie, Y. A survey on computational spectral reconstruction methods from RGB to hyperspectral imaging. Sci. Rep. 2022, 12, 11905. [Google Scholar] [CrossRef] [PubMed]
  155. Hu, S.; Hou, R.; Ming, L.; Meifang, S.; Chen, P. A hyperspectral image reconstruction algorithm based on RGB image using multi-scale atrous residual convolution network. Front. Mar. Sci. 2023, 9, 1006452. [Google Scholar] [CrossRef]
  156. Ahmed, M.T.; Monjur, O.; Kamruzzaman, M. Deep learning-based hyperspectral image reconstruction for quality assessment of agro-product. J. Food Eng. 2024, 382, 112223. [Google Scholar] [CrossRef]
  157. Zhang, L.; Luo, X.; Li, S.; Shi, X. R2H-CCD: Hyperspectral Imagery Generation from RGB Images Based on Conditional Cascade Diffusion Probabilistic Models. In Proceedings of the IGARSS 2023—2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 16–21 July 2023; pp. 7392–7395. [Google Scholar] [CrossRef]
  158. Li, C.; Xiao, Z.; Wang, S. Multi-scale hyperspectral recovery networks: RGB-hyperspectral imaging consistency empowered deep spectral super-resolution. Opt. Express 2024, 32, 23392–23403. [Google Scholar] [CrossRef]
  159. Sharma, N.; Hefeeda, M. Hyperspectral reconstruction from RGB images for vein visualization. In Proceedings of the 11th ACM Multimedia Systems Conference (MMSys ’20), Istanbul, Turkey, 8–11 June 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 77–87. [Google Scholar] [CrossRef]
  160. Leung, J.-H.; Tsao, Y.M.; Karmakar, R.; Mukundan, A.; Lu, S.C.; Huang, S.Y.; Saenprasarn, P.; Lo, C.H.; Wang, H.C. Water pollution classification and detection by hyperspectral imaging. Opt. Express 2024, 32, 23956–23965. [Google Scholar] [CrossRef]
  161. Li, Z.; Yang, R.; Dang, F.; Du, P.; Zhang, X.; Tan, B.; Zhao, H.; Su, H. A review on the geological applications of hyperspectral remote sensing technology. In Proceedings of the 2012 4th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Shanghai, China, 4–7 June 2012; pp. 1–4. [Google Scholar] [CrossRef]
  162. Qian, S.E. Overview of Hyperspectral Imaging Remote Sensing from Satellites. In Advances in Hyperspectral Image Processing Techniques; Wiley: Hoboken, NJ, USA, 2022. [Google Scholar]
  163. Hyperspectral Remote Sensing in Lithological Mapping, Mineral Exploration, and Environmental Geology: An Updated Review. J. Appl. Remote Sens. 2021, 15, 031501.
  164. Hajaj, S.; El Harti, A.; Pour, A.B.; Jellouli, A.; Adiri, Z.; Hashim, M. A review on hyperspectral imagery application for lithological mapping and mineral prospecting: Machine learning techniques and future prospects. Remote Sens. Appl. Soc. Environ. 2024, 35, 101218. [Google Scholar] [CrossRef]
  165. Ali, S.H.; Giurco, D.; Arndt, N.; Nickless, E.; Brown, G.; Demetriades, A.; Durrheim, R.; Enriquez, M.A.; Kinnaird, J.; Littleboy, A.; et al. Mineral supply for sustainable development requires resource governance. Nature 2017, 543, 367–372. [Google Scholar] [CrossRef]
  166. Booysen, R.; Lorenz, S.; Thiele, S.T.; Fuchsloch, W.C.; Marais, T.; Nex, P.A.; Gloaguen, R. Accurate hyperspectral imaging of mineralised outcrops: An example from lithium-bearing pegmatites at Uis, Namibia. Remote Sens. Environ. 2022, 269, 112790. [Google Scholar] [CrossRef]
  167. Vella, H. Mapping Minerals Remotely: How Hyperspectral Imaging Can Support Exploration. Mining Technology. Available online: https://www.mining-technology.com/features/mapping-minerals-remotely-how-hyperspectral-imaging-can-support-exploration/ (accessed on 9 July 2024).
  168. Long, T.; Zhou, Z.; Hancke, G.; Bai, Y.; Gao, Q. A Review of Artificial Intelligence Technologies in Mineral Identification: Classification and Visualization. J. Sens. Actuator Netw. 2022, 11, 50. [Google Scholar] [CrossRef]
  169. Shimoni, M.; Haelterman, R.; Perneel, C. Hyperspectral Imaging for Military and Security Applications: Combining Myriad Processing and Sensing Techniques. IEEE Geosci. Remote Sens. Mag. 2019, 7, 101–117. [Google Scholar] [CrossRef]
  170. Okada, N.; Maekawa, Y.; Owada, N.; Haga, K.; Shibayama, A.; Kawamura, Y. Automated Identification of Mineral Types and Grain Size Using Hyperspectral Imaging and Deep Learning for Mineral Processing. Minerals 2020, 10, 809. [Google Scholar] [CrossRef]
  171. de Cássia Mariotti, K.; Ortiz, R.S.; Ferrão, M.F. Hyperspectral imaging in forensic science: An overview of major application areas. Sci. Justice 2023, 63, 387–395. [Google Scholar] [CrossRef] [PubMed]
  172. Chango, X.; Flor-Unda, O.; Gil-Jiménez, P.; Gómez-Moreno, H. Technology in Forensic Sciences: Innovation and Precision. Technologies 2024, 12, 120. [Google Scholar] [CrossRef]
  173. Klasén, L.; Fock, N.; Forchheimer, R. The invisible evidence: Digital forensics as key to solving crimes in the digital age. Forensic Sci. Int. 2024, 362, 112133. [Google Scholar] [CrossRef]
  174. Ahmed, O.; Saleem, S.A.; Khan, A.A.; Daruwala, S.; Pettiwala, A. Artificial intelligence in forensic odontology—A review. Int. Dent. J. Stud. Res. 2023, 11, 54–60. [Google Scholar] [CrossRef]
  175. Zhou, Y.; Zhang, J.; Huang, J.; Deng, K.; Zhang, J.; Qin, Z.; Wang, Z.; Zhang, X.; Tuo, Y.; Chen, L.; et al. Digital Whole-Slide Image Analysis for Automated Diatom Test in Forensic Cases of Drowning Using a Convolutional Neural Network Algorithm. Forensic Sci. Int. 2019, 302, 109922. [Google Scholar] [CrossRef]
  176. Galante, N.; Cotroneo, R.; Furci, D.; Lodetti, G.; Casali, M.B. Applications of artificial intelligence in forensic sciences: Current potential benefits, limitations and perspectives. Int. J. Legal Med. 2023, 137, 445–458. [Google Scholar] [CrossRef]
  177. Luo, Z.; Shafait, F.; Mian, A. Localized forgery detection in hyperspectral document images. In Proceedings of the 2015 13th International Conference on Document Analysis and Recognition (ICDAR), Tunis, Tunisia, 23–26 August 2015; pp. 496–500. [Google Scholar] [CrossRef]
  178. Rastogi, V.; Srivastava, S.; Jaiswal, G.; Sharma, A. Detecting Document Forgery Using Hyperspectral Imaging and Machine Learning. In Computer Vision and Image Processing; Raman, B., Murala, S., Chowdhury, A., Dhall, A., Goyal, P., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 14–25. [Google Scholar] [CrossRef]
  179. Głomb, P.; Romaszewski, M.; Cholewa, M.; Domino, K. Application of hyperspectral imaging and machine learning methods for the detection of gunshot residue patterns. Forensic Sci. Int. 2018, 290, 227–237. [Google Scholar] [CrossRef]
  180. Li, J.; Hong, D.; Gao, L.; Yao, J.; Zheng, K.; Zhang, B.; Chanussot, J. Deep learning in multimodal remote sensing data fusion: A comprehensive review. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102926. [Google Scholar] [CrossRef]
  181. Książek, K.; Romaszewski, M.; Głomb, P.; Grabowski, B.; Cholewa, M. Blood Stain Classification with Hyperspectral Imaging and Deep Neural Networks. Sensors 2020, 20, 6666. [Google Scholar] [CrossRef] [PubMed]
  182. Grigoroiu, A.; Yoon, J.; Bohndiek, S.E. Deep learning applied to hyperspectral endoscopy for online spectral classification. Sci. Rep. 2020, 10, 3947. [Google Scholar] [CrossRef] [PubMed]
  183. Fei, B. Chapter 3.6—Hyperspectral imaging in medical applications. In Data Handling in Science and Technology; Amigo, J.M., Ed.; Hyperspectral Imaging; Elsevier: Amsterdam, The Netherlands, 2019; Volume 32, pp. 523–565. [Google Scholar] [CrossRef]
  184. Guraya, S.Y. Transforming simulation in healthcare to enhance interprofessional collaboration leveraging big data analytics and artificial intelligence. BMC Med. Educ. 2024, 24, 941. [Google Scholar] [CrossRef] [PubMed]
  185. Malešević, A.; Kolesárová, M.; Čartolovni, A. Encompassing trust in medical AI from the perspective of medical students: A quantitative comparative study. BMC Med. Ethics 2024, 25, 94. [Google Scholar] [CrossRef]
  186. Lemmens, S.; De Groef, L.; Charle, W.; Jayapala, M.; Theunis, J.; Moons, L.; De Boever, P.; Stalmans, I. How Hyperspectral Imaging and Artificial Intelligence Transform Alzheimer’s Diagnosis. Spectrosc. Eur. 2021, 33, 18–23. [Google Scholar] [CrossRef]
  187. Sullivan, R.; Alatise, O.I.; Anderson, B.O.; Audisio, R.; Autier, P.; Aggarwal, A.; Balch, C.; Brennan, M.F.; Dare, A.; D’Cruz, A.; et al. Global cancer surgery: Delivering safe, affordable, timely cancer surgery. Lancet Oncol. 2015, 16, 1193–1224. [Google Scholar] [CrossRef]
  188. Hwang, J.; Cheney, P.; Kanick, S.C.; Le, H.N.; McClatchy, D.M., III; Zhang, H.; Liu, N.; John Lu, Z.Q.; Cho, T.J.; Briggman, K.; et al. Hyperspectral dark-field microscopy of human breast lumpectomy samples for tumor margin detection in breast-conserving surgery. J. Biomed. Opt. 2024, 29, 093503. [Google Scholar] [CrossRef]
  189. Shapey, J.; Xie, Y.; Nabavi, E.; Bradford, R.; Saeed, S.R.; Ourselin, S.; Vercauteren, T. Intraoperative multispectral and hyperspectral label-free imaging: A systematic review of in vivo clinical studies. J. Biophotonics 2019, 12, e201800455. [Google Scholar] [CrossRef]
  190. Ji, Y.; Park, S.M.; Kwon, S.; Leem, J.W.; Nair, V.V.; Tong, Y.; Kim, Y.L. mHealth hyperspectral learning for instantaneous spatiospectral imaging of hemodynamics. PNAS Nexus 2023, 2, pgad111. [Google Scholar] [CrossRef]
  191. Vananti, A.; Schildknecht, T.; Krag, H. Reflectance Spectroscopy Characterization of Space Debris. Adv. Space Res. 2017, 59, 2488–2500. [Google Scholar] [CrossRef]
  192. Willison, A.; Bédard, D. A novel approach to modeling spacecraft spectral reflectance. Adv. Space Res. 2016, 58, 1318–1330. [Google Scholar] [CrossRef]
  193. Zhao, X.F.; Zhang, H.Y.; Yu, Y.; Mao, Y.D. Multicolor Photometry of Geosynchronous Satellites and Application on Feature Recognition. Adv. Space Res. 2016, 58, 2269–2279. [Google Scholar] [CrossRef]
  194. Yanagisawa, T.; Kurosaki, H. Shape and motion estimate of LEO debris using light curves. Adv. Space Res. 2012, 50, 136–145. [Google Scholar] [CrossRef]
  195. Zhu, B.; Lin, S.; Zhu, Y.; Wang, X. Collaborative Hyperspectral Image Processing Using Satellite Edge Computing. IEEE Trans. Mob. Comput. 2024, 23, 2241–2253. [Google Scholar] [CrossRef]
  196. Vasile, M.; Walker, L.; Dunphy, R.D.; Zabalza, J.; Murray, P.; Marshall, S.; Savitski, V. Intelligent characterisation of space objects with hyperspectral imaging. Acta Astronaut. 2023, 203, 510–534. [Google Scholar] [CrossRef]
  197. Lin, C.-H.; Huang, S.-H.; Lin, T.-H.; Wu, P.C. Metasurface-empowered snapshot hyperspectral imaging with convex/deep (CODE) small-data learning theory. Nat. Commun. 2023, 14, 6979. [Google Scholar] [CrossRef] [PubMed]
  198. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
  199. Yokoya, N.; Grohnfeldt, C.; Chanussot, J. Hyperspectral and Multispectral Data Fusion: A comparative review of the recent literature. IEEE Geosci. Remote Sens. Mag. 2017, 5, 29–56. [Google Scholar] [CrossRef]
  200. Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep Learning for Hyperspectral Image Classification: An Overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709. [Google Scholar] [CrossRef]
  201. Jia, S.; Jiang, S.; Lin, Z.; Li, N.; Xu, M.; Yu, S. A survey: Deep learning for hyperspectral image classification with few labeled samples. Neurocomputing 2021, 448, 179–204. [Google Scholar] [CrossRef]
  202. Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep Convolutional Neural Networks for Hyperspectral Image Classification. J. Sens. 2015, 2015, 258619. [Google Scholar] [CrossRef]
  203. Zhou, H.; Wang, X.; Xia, K.; Ma, Y.; Yuan, G. Transfer Learning-Based Hyperspectral Image Classification Using Residual Dense Connection Networks. Sensors 2024, 24, 2664. [Google Scholar] [CrossRef] [PubMed]
  204. Khonina, S.N.; Kazanskiy, N.L.; Skidanov, R.V.; Butt, M.A. Exploring Types of Photonic Neural Networks for Imaging and Computing—A Review. Nanomaterials 2024, 14, 697. [Google Scholar] [CrossRef] [PubMed]
  205. Woods, D.; Naughton, T.J. Photonic neural networks. Nat. Phys. 2012, 8, 257–259. [Google Scholar] [CrossRef]
  206. Plaza, A.; Benediktsson, J.A.; Boardman, J.W.; Brazile, J.; Bruzzone, L.; Camps-Valls, G.; Chanussot, J.; Fauvel, M.; Gamba, P.; Gualtieri, A.; et al. Recent advances in techniques for hyperspectral image processing. Remote Sens. Environ. 2009, 113, S110–S122. [Google Scholar] [CrossRef]
  207. Mangotra, H.; Srivastava, S.; Jaiswal, G.; Rani, R.; Sharma, A. Hyperspectral Imaging for Early Diagnosis of Diseases: A Review. Expert Syst. 2023, 40, e13311. [Google Scholar] [CrossRef]
Figure 1. Schematic of the main components of the HSI system [30].
Figure 2. The illustration demonstrates the passage of axial and marginal rays through the optical system of the low-cost, high-resolution HS imager. The blue, green, and red lines represent example wavelength rays after diffraction [3].
Figure 3. (a) Simplified diagram: The camera operates in active mode (top) or passive mode (bottom) based on where the light spectrum was encoded. (b) Overview of the DNN-based spectral rebuilding: The monochrome camera’s initial data are processed by the DNN to generate the restored 3D HS data cube. (c,d) Spectral profiles of narrow-band laser beams: In (c), the DNN was trained with “precise” data, while in (d), it was trained with “general” data. (e) The spectral profile of two peaks at 598.0 nm and 603.2 nm, with the peak-to-peak distance shown in black [90].
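The spectral rebuilding in Figure 3b can be pictured as a learned per-pixel mapping from M coded monochrome measurements to B spectral bands. The sketch below is purely illustrative and is not the network from [90]: all layer sizes are made-up placeholders and the weights are random rather than trained, so it only demonstrates the shape of the computation (coded frames in, hyperspectral cube out).

```python
import numpy as np

# Illustrative sketch only: a tiny randomly initialised per-pixel network that
# maps M coded monochrome measurements to B spectral bands, mimicking the
# structure of the DNN-based spectral rebuilding in Figure 3b. The real network
# in [90] is trained; every size and weight here is an assumed placeholder.

rng = np.random.default_rng(0)
H, W, M, B = 32, 32, 16, 64          # image height/width, measurements, bands

coded = rng.random((H, W, M))        # monochrome camera data: M coded frames
W1 = rng.standard_normal((M, 128)) * 0.1
W2 = rng.standard_normal((128, B)) * 0.1

hidden = np.maximum(coded @ W1, 0.0) # ReLU hidden layer, applied per pixel
cube = hidden @ W2                   # "restored" 3D HS data cube, (H, W, B)
```

The matrix products broadcast over the two spatial axes, so the same small network is applied independently at every pixel, which is how per-pixel spectral reconstruction is typically organized.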
Figure 4. AI-HSI for analyzing food quality [99].
Figure 5. Remote hyperspectral imaging system mounted on a UAD called the Jellyfishbot, which can be equipped with a net to gather macroplastics or microplastics from various water surfaces [153].
Figure 6. VIS-HSI conversion algorithm [160].
Figure 7. Five categories of minerals used for categorization: (a) hematite with large particles, (b) hematite with small particles, (c) hematite with very small particles, (d) galena, and (e) chalcopyrite [170].
Figure 8. Illustration of two examples of classification maps from the HTC scenario using 1D CNN and 2D CNN architectures. Most classes were correctly labeled, resulting in high classification precision. (a) Reference data for image E(1). (b) Forecast by 1D CNN. (c) Reference data for image F(1). (d) Forecast by 2D CNN [181].
Table 1. Characteristics of scanning methods used in HSI.
| Characteristics | Whiskbroom | Push Broom | Fourier Transform Spectroscopy | Spectral Scanning | HS Snapshot Cameras |
|---|---|---|---|---|---|
| Scanning method | Point-by-point | Line-by-line | Full spectrum at a single point | Wavelength-by-wavelength | Full image at once |
| Data acquisition | Sequential, point-by-point | Sequential, line-by-line | Simultaneous full spectrum | Sequential, band-by-band | Simultaneous, full spatial-spectral |
| Speed | Slow | Moderate | Moderate | Slow to moderate | Fast |
| Spatial resolution | High | Moderate to high | High (single point) | High | Moderate |
| Spectral resolution | High | High | Very high | High | Moderate |
| Complexity | High (moving parts) | Moderate (moving parts) | High (requires precise optics) | Moderate (requires precise optics) | Low to moderate (no moving parts) |
| Portability | Low (bulky and heavy) | Moderate | Low (bulky and sensitive) | Moderate | High (compact and lightweight) |
| Applications | Laboratory, field spectroscopy | Remote sensing, environmental | Laboratory, chemical analysis | Laboratory, remote sensing | Real-time imaging, medical diagnostics |
| Advantages | High accuracy and detail | Efficient for large areas | High spectral resolution | Detailed spectral information | Fast and efficient data capture |
| Disadvantages | Slow, not suitable for dynamic scenes | Moderate speed, complex calibration | Bulky, complex, and expensive | Time-consuming, sensitive to motion | Lower spectral resolution compared with others |
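The speed differences in Table 1 follow directly from how many exposures each scanning strategy needs to fill an X × Y × B hyperspectral cube. The short sketch below (an illustration of the table, not code from the reviewed works) makes this concrete; the function name and example dimensions are assumptions chosen for demonstration.

```python
# Illustrative comparison of the scanning methods in Table 1: the number of
# exposures required to fill an x * y * bands hyperspectral data cube.

def exposures_needed(x: int, y: int, bands: int) -> dict:
    """Return exposure counts per scanning method for an x*y*bands cube."""
    return {
        "whiskbroom": x * y,         # point-by-point: one exposure per pixel
        "push_broom": y,             # line-by-line: one exposure per scan line
        "spectral_scanning": bands,  # band-by-band: one exposure per wavelength
        "snapshot": 1,               # full spatial-spectral cube in one shot
    }

# Example: a 640 x 480 scene with 200 spectral bands.
counts = exposures_needed(640, 480, 200)
```

For this example the whiskbroom system needs 307,200 exposures while a snapshot camera needs one, which is why Table 1 rates whiskbroom "slow" and snapshot "fast" despite the snapshot camera's lower spectral resolution.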

Share and Cite

MDPI and ACS Style

Khonina, S.N.; Kazanskiy, N.L.; Oseledets, I.V.; Nikonorov, A.V.; Butt, M.A. Synergy between Artificial Intelligence and Hyperspectral Imagining—A Review. Technologies 2024, 12, 163. https://doi.org/10.3390/technologies12090163