Abstract
Over the past two decades, hyperspectral imaging (HSI) systems have shown significant potential in agriculture, from disease detection to the assessment of plant and fruit nutritional status. However, most applications remain confined to laboratory analyses under controlled conditions, with only a limited fraction implemented in field environments. In this scenario, spectral reconstruction techniques may serve as a bridge between the high accuracy of HSI and the challenges of on-field or even real-time applications. This review outlines the current state of the art of on-field HSI in the agrifood sector, highlighting existing limitations and potential advantages. It then introduces the problem of spectral reconstruction and reviews current techniques used to address it, considering both laboratory and on-field studies. The final section offers our perspective on the limitations of HSI and the promising potential of spectral super-resolution to overcome current barriers and enable broader adoption of hyperspectral technology in precision agriculture.
1. Introduction
Spectral reconstruction from RGB images—that is, generating multispectral or hyperspectral images from simple three-channel images (red, green, and blue)—is a topic that has received increasing attention in the last decade. Native hyperspectral images, acquired with hyperspectral cameras, provide richer information than traditional RGB images, capturing data across multiple narrow spectral bands, from tens to hundreds of bands. This richness allows chemical and physical properties of targets in the scene to be detected through their spectral signatures, without any damage or invasive sampling. This capability enables advanced applications in various sectors, including remote sensing, agriculture, geology, medicine, environmental monitoring, and cultural heritage preservation []. Within environmental monitoring, for instance, hyperspectral images are a particularly valuable source of information for air-pollution studies. However, hyperspectral cameras are generally complex, expensive, and often impractical for large-scale applications or consumer contexts. To overcome these limitations, a line of research has developed aimed at reconstructing detailed spectral information from RGB images by leveraging advanced computational techniques and machine learning models [].

In general, the reconstruction of hyperspectral images from RGB images is a complex challenge, due to information loss and limitations in data quality. First, the solution is non-unique: different spectral distributions can correspond to the same RGB values, so, given the sensor response, multiple input spectra can lead to identical integral values. Second, the spectral response of the RGB camera is often unknown, meaning that we do not know exactly how the sensor integrates the incoming spectrum; this is an important limitation, since many models require knowledge of the sensor’s spectral response (e.g., attention-block models) [], which is rarely available for common RGB cameras. Third, we must cope with data degradation: raw pixel values are directly related to the incoming electromagnetic radiation, but JPEG compression, white balance, and gamma correction degrade the original spectral information. Despite these difficulties, recent studies (from regression models to deep learning models) demonstrate promising results, including predictions of spectral information beyond the visible spectrum, such as in the NIR (near-infrared) region.

The methods proposed in the literature can primarily be divided into two general categories: prior-based methods and data-driven methods. Prior-based methods exploit statistical information and spatial or spectral characteristics, such as sparsity or spectral correlation []. Data-driven methods, on the other hand, mainly rely on deep neural networks, which have demonstrated a great ability to model complex relationships between RGB data and the corresponding spectral information []. Recent research has focused significantly on deep learning techniques for spectral reconstruction: approaches based on deep convolutional neural networks (CNNs), as well as more advanced architectures such as Generative Adversarial Networks (GANs), have shown remarkable effectiveness, especially when trained on large datasets []. A critical issue in this field concerns the physical accuracy of reconstructed images.
Many current methods, although achieving good spectral accuracy, may generate results that do not respect physical plausibility—meaning that reconstructed images do not precisely yield the original RGB values when reintegrated with the camera’s spectral sensitivities. To address this challenge, recent studies have developed physically plausible reconstruction methods, introducing constraints based on the decomposition of spectral information into fundamental metameric components. This allows for a reconstruction that is physically coherent and robust to exposure variations []. A further advancement in research involves the introduction of category-specific prior information—additional details describing particular object types or surfaces present in the images—to further improve reconstruction quality []. Finally, an important driver for developing these approaches has been the creation of new datasets, such as ARAD-HS [], which offers a wide variety of natural hyperspectral images. These datasets enable rigorous validation and performance comparisons among the various developed methods []. It is important to note that in the analyzed studies, RGB and hyperspectral images can be treated either as directly measured luminance data or as reflectance data, where the acquired signal is normalized against the light source and environmental conditions. However, it should be emphasized that many commonly cited datasets, such as ARAD-HS, CAVE, ICVL, and BGU-HS [], do not provide detailed calibration procedures for precise conversion to reflectance, which is a key step for extracting numerical information related to the status of plants and fruits from the images. The only dataset explicitly described as rigorously calibrated via a standard white panel is the KAUST-HS dataset [,]. Consequently, the use of reflectance data in studies based on these datasets should be approached cautiously, recognizing potential limitations in their accuracy and generalizability. Applying these models to images acquired under uncontrolled conditions remains an open challenge but represents a significant area of research for food-quality analysis and other natural material assessments. This study presents a detailed analysis of the topics previously introduced, beginning with the current state of the art in conventional hyperspectral imaging (HSI), with particular attention to methodologies, algorithms, and their limitations. On this basis, we then examine spectral reconstruction, highlighting the advantages it provides as well as the constraints that characterize it, with specific reference to the earliest applications reported in the agri-food domain.
2. Literature Search and Study Selection Methodology
2.1. Databases, Publisher Platforms, and Coverage
We conducted structured searches on two citation databases—Web of Science Core Collection and Scopus—and complemented them with Google Scholar to broaden recall, as reported in Table 1. Full texts were retrieved on publisher platforms when available, with a focus on ScienceDirect for Elsevier journals, and on MDPI, IEEE Xplore, SpringerLink, and Wiley Online Library for non-Elsevier titles. These choices align with the venues represented in our reference list (e.g., Elsevier journals such as Information Fusion, Computers and Electronics in Agriculture, and the Microchemical Journal).

Table 1.
Databases/platforms, context-specific keywords used in queries, and applied filters.
2.2. Search Strategy and Keywords
Query strings combined controlled and free-text terms using Boolean operators and truncation. Core terms were derived from the review scope and keywords of the manuscript (“Spectral super-resolution”, “Multispectral Imaging”, “Hyperspectral Imaging”, “Deep Learning”, “Precision Agriculture”, “Proximal sensing”, “On-field”) and were specialized per context (HSI general, on-field HSI, SSR, algorithms/models). Queries were iteratively refined to balance precision (algorithm/dataset-specific terms) and recall (broader agrifood terms).
2.3. Eligibility Criteria
Inclusion criteria: (i) peer-reviewed journal articles or full conference proceedings; (ii) relevance to hyperspectral imaging in agrifood or to spectral super-resolution (RGB-to-HSI/HSR) with agrifood applicability; (iii) English language; (iv) publication year 1998–2025. Exclusion criteria: non-peer-reviewed items; purely satellite-only remote sensing without proximal/on-field relevance; studies lacking spectral/agrifood focus; duplicates.
2.4. Screening Workflow
Titles/abstracts were screened to remove off-topic items. The remaining records underwent full-text assessment; reasons for exclusion (e.g., missing agrifood context, insufficient spectral content) were logged. Reference chaining (backward/forward) was used to capture seminal works (e.g., calibration, band selection, classic HSI datasets) and recent SSR applications.
3. Traditional HSI On-Field
HSI is emerging as a key technology across a wide range of agri-food applications. The main advantage of this technology lies in its non-invasive approach, as it enables the analysis of objects within a scene by simultaneously combining imaging and spectroscopy. The result of this measurement is a three-dimensional data cube, or hypercube, consisting of two spatial dimensions and one spectral dimension. Each acquired pixel is characterized by a complete spectrum—reflected, transmitted, or absorbed by the sample—commonly referred to as its spectral signature. Beyond laboratory and industrial systems, hyperspectral imaging can also be implemented on different platforms, including ground-based setups, UAVs, airborne systems, and satellites, thus covering a wide range of spatial scales and application domains.
The development of regression and classification models based on hyperspectral imaging (HSI) in agriculture generally follows a consolidated methodological pipeline, which, despite its modularity, remains consistent across most studies, as shown in Figure 1. The first critical decision concerns the acquisition system, where the choice of the hyperspectral camera and platform determines the spectral range, resolution, and signal-to-noise characteristics of the resulting datacube. Indoor gantry systems and imaging boxes are frequently employed for controlled experiments and calibration datasets, while ground-based vehicles, such as tractors or unmanned ground vehicles (UGVs), allow for direct deployment in the field. Unmanned aerial vehicles (UAVs) and manned airborne platforms extend coverage to larger areas, with satellites providing long-term and wide-scale monitoring. Selecting the sensor technology (e.g., pushbroom, whiskbroom, snapshot, or spectral scanning cameras) is equally decisive, as each modality entails specific trade-offs in terms of spatial resolution, acquisition speed, and robustness to motion.

Once data are acquired, extensive pre-processing is typically required to ensure reliability. This includes radiometric corrections, such as white/dark reference calibration and spectral normalization, geometric adjustments for distortion removal and georeferencing, and, in the case of airborne acquisitions, atmospheric corrections. Following pre-processing, a crucial step is segmentation and region-of-interest (ROI) extraction, where the objects of interest (leaf, fruit, canopy, or plot) are separated from background elements, such as soil, shadows, or crop residues. This is achieved through a variety of techniques, including morphological operations, thresholding, vegetation indices, spectral clustering approaches such as Principal Component Analysis (PCA), or, more recently, deep neural segmentation methods.

A critical aspect at this stage is the choice of the analytical strategy, which generally follows two main approaches []. The first relies on the ROI average spectrum, where the mean spectral signature of a region is used to represent the entire sample. This method is computationally efficient and suitable for relatively homogeneous materials or when only global classification is required, yet it inevitably overlooks local heterogeneity and may obscure subtle adulterations or localized defects. Conversely, the pixel-wise approach exploits the full spatial resolution by analyzing each pixel spectrum individually, thus enabling the visualization of chemical maps and the detection of spatially localized anomalies. While this strategy preserves the distinctive advantage of hyperspectral imaging, it also introduces substantial computational demands, higher sensitivity to noise, and the need for advanced chemometric or deep learning models. Ultimately, the choice between the two reflects a trade-off between simplicity and representativeness, with recent studies often favoring the ROI approach for its practicality, even at the cost of underutilizing the spatial richness of hyperspectral data. The next phase concerns dimensionality reduction and feature extraction, necessary to condense the large spectral–spatial information into more manageable and discriminative representations.
Beyond PCA, other methods, such as Minimum Noise Fraction (MNF), Independent Component Analysis (ICA), Partial Least Squares (PLS), or band selection strategies, are employed, while deep learning-based embeddings, such as autoencoders and Convolutional Neural Networks (CNNs), have gained increasing prominence.

Building a robust dataset requires linking hyperspectral measurements to ground-truth labels obtained through expert annotation, field sampling, and laboratory analysis—for instance, by measuring biochemical or physiological parameters (e.g., chlorophyll, nitrogen, soluble solids) through standard methods such as spectrophotometry, refractometry, or HPLC (High-Performance Liquid Chromatography). This is complemented by careful definition of training, validation, and test splits, with data augmentation techniques, such as geometric transformations (e.g., rotation, flipping, cropping), spectral perturbations (e.g., noise addition, spectral shifts), or synthetic sample generation via generative models, used to increase dataset variability and reduce overfitting in limited datasets.

For modeling, regression tasks typically rely on algorithms such as Partial Least Squares Regression (PLSR), Random Forests (RF), or CNNs, targeting biochemical and physiological parameters. Classification problems—such as crop/weed discrimination or stress detection—are addressed with Support Vector Machines (SVMs), two- or three-dimensional CNNs, or hybrid models. Lightweight versions of these algorithms are increasingly adopted for real-time, on-board inference on GPUs (Graphics Processing Units) or FPGAs (Field-Programmable Gate Arrays).

Model evaluation combines classical statistical measures with domain-specific metrics. Regression models are commonly assessed with the coefficient of determination (R²), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), or the Ratio of Performance to Deviation (RPD). Classification models are evaluated through accuracy, precision, recall, F1-score, and confusion matrices. For spectral fidelity and image reconstruction, metrics such as the Spectral Angle Mapper (SAM), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM) are also employed.

Finally, deployment translates model outputs into actionable tools. In agriculture, this includes the production of operational maps for nutrients or water stress, the integration of real-time predictions into precision spraying or micro-dosing systems, or the use of embedded pipelines for sorting and grading applications. Comparable workflows are reported in other domains—such as food quality and safety, medical imaging, and water or flood monitoring—where analogous sequences of acquisition, segmentation, feature extraction, and modeling are applied with domain-specific adaptations. For example, indoor calibration protocols are optimized for food safety, tissue- or biomarker-driven feature sets guide medical imaging, and water–soil–vegetation spectral separation is exploited in environmental monitoring [,].

Figure 1.
Traditional HSI workflow.
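To make the radiometric pre-processing and ROI steps of the workflow in Figure 1 concrete, the sketch below shows the standard white/dark reference correction and ROI mean-spectrum extraction for a hypercube stored as a NumPy array. It is a minimal illustration under the stated assumptions (array shapes, band index, and variable names are ours), not a reference implementation of any specific system.

```python
import numpy as np

def calibrate_reflectance(raw, white, dark):
    """White/dark reference correction: convert raw digital numbers
    (H, W, B) to relative reflectance, band by band. `white` and `dark`
    are reference acquisitions of a ~99% panel and of the closed shutter."""
    num = raw.astype(np.float64) - dark
    den = np.clip(white.astype(np.float64) - dark, 1e-6, None)
    return np.clip(num / den, 0.0, None)

def roi_mean_spectrum(cube, mask):
    """ROI-average strategy: mean spectrum (B,) over a boolean mask (H, W)."""
    return cube[mask].mean(axis=0)

# Usage sketch: segment by thresholding a near-infrared band (index assumed),
# then summarize the object with a single spectral signature.
# refl = calibrate_reflectance(raw, white, dark)
# mask = refl[:, :, 120] > 0.35
# signature = roi_mean_spectrum(refl, mask)
```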
Table 2 compares the main platforms employed for hyperspectral data acquisition in precision agriculture and related domains. Ground-based systems (gantries, imaging boxes, UGVs) stand out for their high stability and controlled illumination, making them particularly suitable for calibration studies and reference dataset collection. UAVs offer a balance between flexibility and spatial detail, with established applications in vineyards, rice fields, and horticultural crops. Manned airborne systems provide regional coverage but are constrained by high costs and logistical complexity, whereas satellite missions deliver long-term time series on a global scale, albeit with a ground sampling distance (GSD) inadequate for micro-plots. Multi-UAV systems enable simultaneous coverage of multiple plots but require sophisticated coordination and data management strategies [].

Table 2.
HSI acquisition platforms in precision agriculture and representative studies [,].
Table 3 highlights how the choice of acquisition modality directly affects data quality and suitability for specific scenarios. Pushbroom imagers remain the most widely adopted in agriculture due to their compact design and high signal-to-noise ratio (SNR), although they require careful motion correction. Whiskbroom scanners, despite their high spectral accuracy, are less suited to dynamic scenes or UAV deployment. Spectral scanning and snapshot systems are typically used in indoor or industrial sorting contexts, where rapid acquisition and reduced costs are prioritized.

Table 3.
HSI acquisition technologies/modes and use cases.
Table 4 synthesizes the main families of algorithms applied to HSI analysis. Traditional machine learning approaches (PLSR, SVM, and RF) remain highly relevant for biochemical parameter estimation thanks to their efficiency with limited datasets and selected bands. Deep learning models (1D, 2D, 3D CNNs) leverage both spectral and spatial information, achieving superior accuracy in discrimination tasks such as weed detection or stress identification. Generative approaches (GANs, autoencoders) have emerged more recently to mitigate overfitting and generate synthetic samples. This convergence suggests that methodological innovations are increasingly transferable across application domains.

Table 4.
HSI analysis methods (ML/DL) with operational characteristics.
Table 5 illustrates the wide spectrum of applications reported in the literature, ranging from nutrient and water stress monitoring to crop/weed discrimination and pest detection. The availability of software tools (e.g., MATLAB, Python libraries, such as SPy and PlantCV, and embedded SDKs) is shown to play a decisive role, often requiring custom pipelines tailored to specific tasks. In sorting and quality control, embedded software is critical for enabling real-time implementation. The same software frameworks can be adapted across domains with different performance requirements—for instance, rapid decision-making in surgical guidance versus high-throughput processing in food sorting.

Table 5.
HSI applications in precision agriculture.
Table 6 emphasizes the trade-offs among computing platforms for managing hyperspectral datasets. GPUs remain the standard for training and inference of deep learning models, whereas FPGAs offer extremely low latency and high throughput in sorting and compression tasks, albeit at the expense of design complexity. Embedded systems (e.g., Jetson TX1/TX2/Xavier) are emerging solutions for UAVs and UGVs, enabling on-board processing but constrained by limited memory. Broader challenges are linked to costs and massive dataset management, where HPC clusters and dedicated storage infrastructures become necessary, particularly in cross-domain applications such as medical imaging or water quality monitoring.

Table 6.
Hardware and real-time implementation for HSI with examples/performances.
Beyond vineyard use, detailed in numerous recent studies [,,,,,], ground-based HSI has been successfully applied to wheat, vegetables, stone fruit, and citrus, as well as to quality assessment and defect detection in fresh produce, such as apples, potatoes, and tomatoes [,]. For example, the review by Dale [] and the overview provided by Benelli [] report multiple sub-studies documenting the use of HSI for apple ripeness classification, tomato damage detection, and non-destructive evaluation of quality parameters in kiwis and bell peppers.
A wide range of predictive and classification approaches were employed in the studies reported in Table 7. Among regression techniques, Partial Least Squares Regression (PLSR) was by far the most commonly applied method, often combined with preprocessing or variable selection procedures. Other linear approaches, such as multiple linear regression and optimized vegetation indices, were also adopted, while non-linear methods, including convolutional neural networks (CNNs), support vector machines (SVMs), multilayer perceptrons (MLPs), and discriminant analysis, were implemented particularly for classification tasks, such as variety discrimination, disease detection, and weed recognition. With respect to predictive performance, most studies reported moderate to good accuracy, with determination coefficients (R²) typically ranging from 0.70 to 0.88 for chlorophyll and nitrogen estimation, and classification accuracies above 90% in tasks such as fruit maturity assessment and weed resistance detection. Even though environmental factors (e.g., illumination variability, shadows, canopy structure, or wind) often affected data quality, the majority of models achieved acceptable and reproducible results, demonstrating that ground-based hyperspectral imaging can reliably support both quantitative predictions and qualitative classifications in real field conditions.

Table 7.
On-field ground based applications of hyperspectral imaging in the agri-food sector.
Table 7 lists a selection of on-field agricultural studies based on hyperspectral technology. To broaden the overview of these applications, we include one hardware-related detail: cost. The equipment required for hyperspectral image acquisition in the field remains expensive: the acquisition systems listed in Table 7 cost on average around €20,000, and the camera body alone can reach €40,000, excluding the lens/objective.
Despite its great potential, the use of HSI on-field entails several practical and technical challenges. Below, we analyze the most important ones, focusing on those least tied to the specific hardware in use. Table 8 summarizes the issues investigated. The first three are mainly tied to the type of technology currently available in research centers and on the market; prices and technical specifications may well improve in the coming years, approaching those of current RGB cameras. The remaining issues concern data acquisition and processing methods for raw field data. In what follows, each issue is analyzed separately.

Table 8.
Main practical issues of on-field HSI in agriculture, related causes, effects, and references.
3.1. Limited Acquisition Speed
Current HSI systems are often characterized by low acquisition speeds, especially when using push-broom or line-scan spectroradiometers, which acquire one spatial line at a time and may rely on internal moving parts to generate the spectral bands [,,]. This strongly limits the extent of monitoring campaigns, particularly when large areas need to be covered or when working with moving platforms. For instance, studies reported by [] highlight that scanning a complete tree canopy may take several minutes, making large-scale use, or use in contexts with high temporal variability, difficult. Limited acquisition speed is not only a technological bottleneck but also a major barrier to operational scalability. In UAV-based applications, line-scanning systems require precise synchronization with platform velocity, which increases operational complexity and makes long campaigns prone to incomplete or misaligned coverage []. However, the use of portable NIR spectrometers allows for faster data collection, making monitoring more efficient, as shown by []. On-the-go HSI systems also enable data acquisition on entire vineyard rows while in motion, further reducing the operational time []. Beyond agriculture, similar constraints are observed in food quality inspection and medical imaging, where rapid decision-making is critical, and pushbroom systems are often replaced by snapshot cameras, despite their lower spectral fidelity []. Another limiting factor for acquisition speed, in addition to the intrinsic behavior of the hyperspectral camera, is illumination. Under field conditions, reduced light levels—caused, for example, by cloud cover—often require longer exposure times to achieve an acceptable signal. Even snapshot cameras, which are designed for rapid cube reconstruction, suffer from low light sensitivity and reduced signal-to-noise ratios, thus constraining their effective speed in outdoor deployments. Furthermore, when the target is located several meters away from the sensor, as in many ground-based applications, increasing the exposure time inevitably leads to motion blur. This issue is commonly mitigated by reducing the platform’s travel speed, which in turn further decreases the overall acquisition efficiency.
3.2. Low Robustness in Outdoor Conditions
This lack of robustness is consistently reported across the literature as one of the key obstacles to field deployment. According to Ram et al. [], the combination of platform vibrations and atmospheric interference severely impacts the reliability of outdoor acquisitions, particularly for UAVs. For ground-based applications, vibrations due to ground unevenness can compromise acquisition quality, causing blurring, distortions, and misalignment between spectral bands [,]. For example, in vineyard campaigns, it has been observed that even minor oscillations in the mounts lead to registration errors, which are difficult to correct in post-processing. Some authors suggest using anti-vibration supports or active stabilization systems [].
3.3. Low Spatial Resolution
Despite the high spectral detail, many portable HSI systems offer lower spatial resolution compared to conventional cameras [,].
Spatial resolution represents a critical parameter in hyperspectral imaging, as it directly affects the ability to capture meaningful spatial and spectral information. Four main challenges are typically encountered: (i) the trade-off between pixel size and optical quality, where small pixels may be limited by diffraction or optical aberrations, while large pixels reduce spatial detail; (ii) the balance between resolution and noise, since operating near the resolution limit often increases noise, requiring strategies such as spatial or spectral binning; (iii) the need to match the spatial resolution with the size of the target particles or contaminants, as insufficient resolution produces mixed pixels that dilute relevant signals; and (iv) the influence of surface irregularities and geometry, which introduce reflections, shadowing, and other artifacts that degrade effective resolution. When these limitations prevent reliable pixel-wise analysis, many studies have resorted to using the ROI average spectrum []. This approach reduces noise, simplifies data structures, and enhances computational feasibility, making it particularly attractive in applied and industrial contexts. Over the last five years, this trade-off has resulted in more than 80% of published studies adopting the ROI-based strategy, as it compensates for spatial resolution constraints and improves the robustness of classification models []. Nevertheless, this choice inevitably limits the exploitation of hyperspectral imaging’s unique capacity to map spatial heterogeneity, thereby reducing its potential for detecting localized defects, adulterants, or contaminants.
3.4. HSI Data Management and Complexity
Another critical issue relates to the size and complexity of HSI data. Each acquisition generates large volumes of multi-band data, often tens or hundreds of MB per image, posing significant challenges in terms of storage, transfer, and processing time [,,]. Embedded platforms such as Jetson boards struggle to process full hypercubes in real time, necessitating dimensionality reduction or band selection prior to modeling; this highlights that data management is often a greater challenge than data acquisition itself []. Das et al. [] emphasize that efficient data management is one of the key aspects for operational deployment of HSI, suggesting the adoption of automated pipelines for data cleaning and compression. In some cases, the cited literature suggests that machine learning techniques for feature selection and dimensionality reduction are essential for handling large datasets (a PCA-based example is sketched below) []. Moreover, advanced processing techniques, such as parallel and distributed architectures and high-performance computing, can help reduce computational and storage costs [].
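As an illustration of the dimensionality reduction step advocated in these studies, the sketch below compresses a hypercube with PCA before modeling; the array shapes and the 99.5% variance threshold are illustrative assumptions, not values prescribed by the cited works.

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_cube(cube, variance=0.995):
    """Flatten an (H, W, B) hypercube to pixel spectra, keep the number of
    principal components needed to retain `variance` of the total variance,
    and reshape the scores back to image form (H, W, k), with k << B."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b)
    pca = PCA(n_components=variance, svd_solver="full")
    scores = pca.fit_transform(pixels)
    return scores.reshape(h, w, -1), pca
```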
3.5. Lighting Control
Natural light, variable in intensity and direction, is one of the main sources of error in outdoor HSI imaging [,,,]. The presence of clouds, shadows, reflections, or rapid light variations can alter the sensor’s spectral response, making it difficult to compare repeated acquisitions. Various methods (e.g., use of reference targets for reflectance calibration or normalization algorithms) have been proposed in the literature, but standardizing conditions remains problematic [,].
3.6. Radiometric Image Calibration
Radiometric calibration is an essential phase in the acquisition and analysis of hyperspectral and multispectral images. This refers to the set of procedures that transform raw digital values (digital numbers, DN) produced by a sensor into absolute radiometric measurements, such as reflectance or scene radiance [,,], considering both sensor response and lighting conditions. In the case of hyperspectral and multispectral images, calibration serves two purposes: ensuring data comparability across different acquisitions, sensors, and lighting conditions; and enabling reliable quantitative information extraction, essential for biophysical monitoring, species or material identification, diagnostics, and automatic classification.
Calibration requires stable, controlled conditions, which are difficult to maintain in the field [,]. Calibration errors can propagate throughout the data-processing chain, significantly affecting the accuracy of qualitative and quantitative estimates. Benelli et al. [] underline that the lack of standardization in calibration procedures among research groups still limits widespread outdoor HSI use. The evolution of methods, from classic reference-based strategies to computational and AI-based solutions, has expanded operational capabilities, allowing for use across a wide range of scenarios. However, the choice of calibration procedure must be carefully adapted to operating conditions, sensor type, and application objectives, always considering the challenges and limitations reported in the recent literature.
The most widespread method, using physical references (on-field and laboratory-based), involves reference panels with known reflectance (Spectralon or similar materials), acquired before, during, and after the acquisition session [,,,,]. This strategy can be applied both in the lab and directly in the field and includes the following steps: acquisition of panel images under the same operating conditions as the scene; estimation of the sensor’s radiometric response function for each band, typically through linear regression between panel values and their certified reflectances []; application of the calibration curve to all operational images to obtain radiometrically corrected data [,,].
Panel choice (pure white, gray, black, multiple) and operational handling strongly affect calibration quality and final measurement accuracy [,].
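A minimal sketch of this reference-panel workflow is given below: per-band linear regression (the "empirical line" approach) fitted from a set of panels with certified reflectances and then applied to the scene. Array shapes and variable names are assumptions for illustration.

```python
import numpy as np

def fit_empirical_line(panel_dn, panel_refl):
    """Fit per-band gain/offset from P reference panels.
    panel_dn:   (P, B) mean digital numbers of each panel per band
    panel_refl: (P, B) certified reflectances of each panel per band
    Returns gain (B,) and offset (B,)."""
    bands = panel_dn.shape[1]
    gain, offset = np.empty(bands), np.empty(bands)
    for b in range(bands):
        # degree-1 polyfit returns (slope, intercept) for each band
        gain[b], offset[b] = np.polyfit(panel_dn[:, b], panel_refl[:, b], 1)
    return gain, offset

def apply_empirical_line(cube, gain, offset):
    """Apply the calibration curve to an (H, W, B) scene."""
    return cube * gain + offset
```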
3.6.1. Calibration Based on Illumination Estimation and Compensation
Techniques that estimate the spectral distribution of the light source directly from the images, and then compensate for its effects on the data, have recently become widespread [,,,,]. These methods include using reference patches in the scene to extract the SPD (Spectral Power Distribution) of the light [,]; applying color constancy algorithms adapted to multispectral data, such as gray world, max-RGB, shades of gray, or gray edge [,]; and using statistical or machine learning (deep learning) methods for automatic, per-pixel estimation of the illuminant [].
These approaches are particularly useful where physical references cannot be included, or in dynamic scenes and complex operational environments [,,].
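As an example of the color-constancy family just mentioned, the sketch below adapts the shades-of-gray estimator (gray world for p = 1, tending towards max-RGB as p grows) to an arbitrary number of bands. It is a simplified illustration assuming linear sensor values, not a substitute for the cited algorithms.

```python
import numpy as np

def shades_of_gray_illuminant(cube, p=6):
    """Estimate a per-band illuminant from an (H, W, B) cube via the
    Minkowski p-norm of the pixel values (p=1 reduces to gray world)."""
    flat = cube.reshape(-1, cube.shape[-1]).astype(np.float64)
    e = np.mean(flat ** p, axis=0) ** (1.0 / p)
    return e / np.linalg.norm(e)

def correct_illumination(cube, e):
    """Von Kries-style diagonal correction; the global scale is arbitrary,
    so the mean band gain is used to keep values in a comparable range."""
    return cube * (e.mean() / e)
```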
3.6.2. Calibration for Non-Conventional or Consumer Instruments
The spread of consumer digital cameras has stimulated the development of calibration procedures adapted to RGB sensors and other non-scientific devices, especially under low-light conditions [,]. Adopted methods include the use of reference panels, grayscale charts, and careful management of acquisition parameters (ISO, exposure time, dark correction) [], as well as spectral calibration strategies based on diffraction filters and simplified physical models []. The analyzed literature highlights several recurring issues in the radiometric calibration of multispectral/hyperspectral images: variations in natural light [,,,]; errors due to panel misplacement, shadowing, surface contamination, or non-optimal angles [,]; non-linear sensor response and temporal drift [,,]; absence of physical references [,,]; multiple or non-uniform illumination effects [,,]; and application requirements and sector-specific constraints [,,].
In Table 9, several of the radiometric calibration methods previously discussed are summarized. Particular attention is given to the last two columns, which concern the practical aspects of their implementation. In many cases, the lack of knowledge of the sensor spectral response represents a major limitation, especially when non-scientific or consumer-grade cameras are employed. Conversely, methods requiring a reference target in the field may complicate certain in situ applications, where the deployment of calibration panels is not always feasible.

Table 9.
Summary of radiometric calibration methods for hyperspectral and multispectral images.
4. Spectral Super-Resolution: Methods, Challenges, and Perspectives
Spectral Super-Resolution (SSR), that is, the reconstruction of hyperspectral images from RGB images, is a central topic in modern computer vision, with applications ranging from precision agriculture to medical diagnostics and cultural heritage preservation.
Nevertheless, it is important to emphasize that spectral reconstruction only replaces the traditional acquisition stage: once the hyperspectral cube is reconstructed, all subsequent steps—such as data preprocessing, dimensionality reduction, and the development of predictive models—remain essentially the same as in the traditional HSI pipeline, along with their inherent difficulties.
A phenomenon with direct implications for spectral reconstruction is metamerism. Metamerism arises from the fact that imaging devices reduce the continuous spectral information of light to a small number of channels (red, green, and blue). As a result, two different spectral reflectance functions may produce identical RGB values and thus appear as the same color, despite their underlying physical differences. This phenomenon highlights the inherent information loss in color imaging systems, where a high-dimensional spectral signal is projected onto a low-dimensional color space. Formally, the response of a given channel $k$ can be expressed as a weighted integral over the visible spectrum:

$$\rho_k = \int_{\lambda} R(\lambda)\, E(\lambda)\, S_k(\lambda)\, \mathrm{d}\lambda, \qquad k \in \{R, G, B\},$$

where $R(\lambda)$ denotes the surface spectral reflectance, $E(\lambda)$ the spectral power distribution of the illuminant, and $S_k(\lambda)$ the spectral sensitivity function of the channel. Two distinct reflectance spectra, $R_1(\lambda)$ and $R_2(\lambda)$, are said to be metameric under a given illuminant if they produce identical responses in all channels:

$$\int_{\lambda} R_1(\lambda)\, E(\lambda)\, S_k(\lambda)\, \mathrm{d}\lambda = \int_{\lambda} R_2(\lambda)\, E(\lambda)\, S_k(\lambda)\, \mathrm{d}\lambda \qquad \forall k \in \{R, G, B\}.$$

In this case, although $R_1(\lambda) \neq R_2(\lambda)$, the recorded color will be indistinguishable, which is the essence of the metamerism problem. Recovering the full incident spectrum from only three RGB responses is intrinsically an ill-posed inverse problem: since infinitely many different spectra can lead to the same set of RGB values, the mapping from RGB space back to the hyperspectral domain is not unique.
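A small numerical illustration of this non-uniqueness: with toy Gaussian sensitivities standing in for a real camera (an assumption made purely for demonstration), any component in the null space of the sensitivity matrix can be added to a spectrum without changing its RGB response.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.linspace(400, 700, 31)            # 31 bands, 400-700 nm

def gauss(mu, sig):
    return np.exp(-0.5 * ((lam - mu) / sig) ** 2)

# Toy RGB sensitivity matrix S (3 x 31) -- illustrative, not a real camera
S = np.stack([gauss(600, 40), gauss(540, 40), gauss(450, 30)])

r1 = 0.5 + 0.3 * np.sin(lam / 50.0)        # an arbitrary smooth reflectance

# Rows 3: of V^T span the null space of S: spectral variation the camera
# cannot see. Adding it changes the spectrum but not the RGB response.
_, _, Vt = np.linalg.svd(S)
metameric_black = 0.05 * (Vt[3:].T @ rng.normal(size=len(lam) - 3))
r2 = r1 + metameric_black                  # a different spectrum...

print(np.allclose(S @ r1, S @ r2))         # ...with identical RGB: True
```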
Despite this problem, the recent literature highlights the significant progress enabled by statistical models and, especially, deep learning approaches [,].
Regression [], one of the earliest families of techniques, has become popular because it is straightforward and fast and admits an accurate closed-form solution. In the most basic “linear regression” [], RGB values and their spectral estimates are related by a single linear transformation matrix. Polynomial and root-polynomial regression [,] expand the RGB values into polynomial/root-polynomial terms, which are then mapped to spectra by a linear transform, thereby introducing non-linearity. Regressions that minimize the mean squared error (MSE) on the training set are sometimes referred to as “least-squares” regressions. Li et al. [] proposed a locally linear embedded sparse coding approach for spectral reconstruction from RGB images, achieving competitive results on various benchmarks. Similarly, the A+ method based on Adjusted Anchored Neighborhood Regression, developed by Timofte et al. [], is a reference in traditional super-resolution and is often used as a baseline in SSR applications.
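The following sketch illustrates this family of methods: a ridge-regularized least-squares map from (root-polynomial) RGB features to spectra. The degree-2 root-polynomial expansion is the standard one from the cited literature; the regularization strength and variable names are illustrative assumptions.

```python
import numpy as np

def root_poly_features(rgb):
    """Degree-2 root-polynomial expansion of (N, 3) RGB values.
    Every term scales linearly with exposure, hence the exposure invariance."""
    r, g, b = rgb.T
    return np.stack(
        [r, g, b, np.sqrt(r * g), np.sqrt(r * b), np.sqrt(g * b)], axis=1)

def fit_least_squares(features, spectra, ridge=1e-4):
    """Closed-form MSE-minimizing map M: features (N, F) -> spectra (N, B)."""
    A = features.T @ features + ridge * np.eye(features.shape[1])
    return np.linalg.solve(A, features.T @ spectra)

# Usage sketch (rgb_train, hsi_train, rgb_test are hypothetical arrays):
# M = fit_least_squares(root_poly_features(rgb_train), hsi_train)
# hsi_est = root_poly_features(rgb_test) @ M
```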
Several advanced architectures have been proposed in recent years, such as HSCNN+ [], reported in Figure 2, attention-based models, adversarial networks [,], and techniques that exploit prior knowledge (e.g., the camera’s spectral response) [,]. In particular, Yan et al. [] suggest using informative priors on material categories, meta-learning, and dimensionality reduction techniques to improve generalization and model efficiency.

Figure 2.
Spectral reconstruction workflow using deep learning.
The New Trends in Image Restoration and Enhancement (NTIRE) challenges, held annually alongside the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), provide standardized benchmarks and competitive venues for low-level vision problems. They foster rapid progress by releasing carefully curated datasets and clear evaluation protocols, enabling fair, reproducible comparisons among state-of-the-art architectures in spectral reconstruction.
As shown in Table 10, most of the submitted solutions rely on deep learning architectures, predominantly convolutional and transformer-based models. In the earliest editions (e.g., NTIRE 2018), convolutional neural networks rapidly supplanted traditional sparse coding and regression-based techniques, which had previously been considered effective for spectral recovery. From NTIRE 2020 onwards, virtually all competitive entries relied on increasingly deeper CNNs, often incorporating residual or dense connections to improve stability and representation capacity. More recent challenges (e.g., NTIRE 2022) show a further shift towards transformer-based designs, such as MST++ and hybrid CNN–Transformer frameworks, reflecting the field’s convergence with broader trends in image restoration. Another noticeable trend in recent NTIRE challenges is the increasing integration of architectures that are sensitive not only to spectral dependencies but also to spatial features: transformer-based models and hybrid CNN–attention frameworks explicitly leverage spatial context to improve reconstruction fidelity, marking a shift from purely spectral, pixel-wise mappings to more context-aware designs. Notably, dictionary learning and sparse recovery approaches are almost entirely absent from these recent competitions, confirming that they no longer represent competitive baselines when large-scale datasets and efficient GPU implementations enable the training of complex neural models. This progression underscores the decisive role of deep learning in setting the state of the art for spectral reconstruction.

Table 10.
Architectures reported in the NTIRE challenges under consideration (representative/ backbone families used by participants).
The summary reported in Table 11 shows not only the technical diversity of the methods submitted to NTIRE competitions but also the gradual evolution of the benchmarking protocols. A distinctive element of these challenges is the division into tracks, each designed to emulate different acquisition conditions or levels of task difficulty. For instance, the NTIRE 2018 and 2020 spectral recovery competitions defined two parallel tracks: the Clean track, which considered RGB images generated from hyperspectral data through a known and noise-free camera response function, and the real-world track, which simulated the practical scenario of JPEG-compressed RGB images obtained with an unknown response function. This dual setup provided insight into both the theoretical upper bound of algorithmic performance and the more challenging case of uncontrolled acquisition. In the NTIRE 2022 spectral recovery challenge, the organizers opted for a single track with a more realistic camera simulation pipeline, which incorporated automatic exposure, sensor noise, basic in-camera processing, and compression. This choice reflects a shift in focus from idealized conditions to scenarios that better approximate real-world usage. Overall, the design of tracks across NTIRE competitions illustrates the organizers’ intention to balance scientific rigor with practical relevance. By progressively moving from clean synthetic scenarios to complex, noise-affected, and compression-degraded data, the NTIRE challenges have established themselves as a benchmark series that both advances methodological innovation and pushes models closer to real-world applicability.

Table 11.
Summary of reported methods, hardware/software configurations, and performance in NTIRE challenges.
Table 12 summarizes both the official NTIRE datasets and the most widely used public hyperspectral benchmarks. A clear distinction can be observed between the NTIRE collections and traditional datasets. The NTIRE datasets (BGU HS, ARAD 1K, and LDV 2.0) were specifically curated to support competitive benchmarking, providing standardized train/validation/test splits and evaluation protocols. They are generally smaller in spatial resolution (e.g., ARAD 1K at 482 × 512 px) but offer carefully controlled acquisition pipelines, such as realistic camera simulations and JPEG compression, which are crucial for fair comparison across methods. In contrast, public hyperspectral datasets such as ICVL, CAVE, and Harvard remain indispensable for algorithm development due to their higher spatial resolution and scene diversity, though they lack standardized challenge settings. Remote sensing datasets (e.g., Chikusei, Houston, Pavia, Washington DC Mall, Botswana, and Cuprite) typically consist of large single-scene acquisitions with high spectral dimensionality, providing opportunities to test scalability but posing challenges for generalization. More recent UAV-based benchmarks, such as WHU-Hi, reflect a growing interest in domain-specific applications, particularly agriculture and crop monitoring. Overall, while NTIRE datasets drive methodological innovation through controlled and competitive benchmarking, the broader public datasets remain essential for testing the robustness and adaptability of spectral reconstruction methods across varied imaging conditions and application domains. Many public datasets used for training and testing models do not include standardized calibration procedures, further complicating the generalization of results [,]. Chen et al. [] highlight that deep networks achieve excellent results on benchmarks, but also stress the challenge of maintaining high performance on unseen data or under different acquisition conditions.

Table 12.
Public and NTIRE hyperspectral datasets commonly used in spectral recovery and super-resolution.
It is important to note that the NTIRE challenges are not specifically oriented toward agri-food applications. Rather, they are conceived as general-purpose benchmarks for low-level vision tasks, such as spectral recovery, image restoration, and super-resolution, using standardized datasets and evaluation protocols. While the methodological advances achieved in NTIRE competitions are of potential relevance for agri-food imaging—particularly in terms of spectral fidelity and reconstruction accuracy—their datasets are typically composed of generic natural scenes or video content. Consequently, transferring these approaches to agri-food requires additional validation under domain-specific conditions, including outdoor acquisition variability, crop heterogeneity, and application-driven performance metrics.
In this regard, it is also important to acknowledge recent contributions that go beyond the NTIRE framework and explicitly address some of the physical shortcomings of current spectral reconstruction methods. Lin and Finlayson [] demonstrated that most deep learning architectures, while highly accurate on fixed benchmark datasets, are strongly dependent on exposure conditions. Their analysis revealed that leading CNN-based models fail to generalize when illumination or camera settings vary, a scenario that is unavoidable in field acquisitions. To mitigate this limitation, they revisited regression-based approaches and introduced root-polynomial regression, a model that is inherently exposure-invariant. Although simpler than deep neural networks, this method was shown to maintain stable performance across a range of exposure conditions, thus highlighting the importance of robustness over architectural complexity.

Building on this line of work, Lin and Finlayson [] proposed a physically plausible formulation of spectral reconstruction, directly addressing another fundamental limitation of existing methods. In conventional pipelines, reconstructed hyperspectral signals often fail to reproduce the RGB values that were originally captured, meaning that the mapping from RGB to spectra is not physically consistent with the image formation process. To resolve this, the authors decompose each spectrum into two components: a fundamental metamer, which is uniquely determined by the measured RGB values, and a metameric black, which represents the residual spectral degrees of freedom invisible to the camera (a minimal sketch of this decomposition is given below). This guarantees that reconstructed spectra always integrate back to the exact RGB measurements, ensuring zero colorimetric error while still allowing learning-based methods to predict the hidden spectral variation. Beyond improved spectral fidelity, this approach enhances robustness to changes in illumination and camera response, thereby aligning reconstruction with the physical constraints of image capture.

Taken together, these studies suggest that for application domains such as agri-food, where imaging is frequently conducted in uncontrolled outdoor conditions, advances in exposure invariance and physical plausibility may be as critical as improvements in benchmark accuracy. Consequently, progress in this field cannot rely solely on NTIRE-driven deep learning pipelines, but must also integrate physics-based constraints and robustness criteria to ensure reliable deployment in real-world agricultural monitoring and food quality assessment.

A critical aspect concerns model portability: the output of RGB cameras heavily depends on the sensor’s spectral response, calibration, white balance, and compression processes [,,]. As noted by Koundinya et al. [], models trained on data from a specific camera may degrade significantly when applied to data acquired with different devices. Normalization using a white reference and the incorporation of physical constraints on the plausibility of the reconstructed response are promising strategies. Overall, spectral super-resolution remains an open challenge: excellent results have been achieved in controlled conditions and on standard datasets, but operational application requires robust pipelines for calibration, validation, and critical selection of data and models [,]. All these factors make reconstruction highly dependent on, among other things, the dataset, the camera and its sensor, the resolution, and the FOV [,,].
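The decomposition just described can be written compactly. Below is a minimal sketch assuming the camera sensitivity matrix S is known and that some learned model supplies the null-space (metameric black) estimate; both are assumptions for illustration rather than the authors’ implementation.

```python
import numpy as np

def plausible_reconstruction(rgb, S, predict_black):
    """Physically plausible spectral reconstruction:
    spectrum = fundamental metamer (fixed by the RGB measurement)
             + metameric black (null-space component, learned).

    rgb: (3,) measured values; S: (3, B) camera sensitivities;
    predict_black: any callable rgb -> (B,) spectral estimate."""
    S_pinv = np.linalg.pinv(S)                    # (B, 3)
    fundamental = S_pinv @ rgb                    # S @ fundamental == rgb
    P_null = np.eye(S.shape[1]) - S_pinv @ S      # projector onto null(S)
    black = P_null @ predict_black(rgb)           # invisible to the camera
    return fundamental + black                    # reintegrates to rgb exactly
```

Because S @ (fundamental + black) equals the measured rgb by construction, the colorimetric error of the reconstruction is zero regardless of how the black component is predicted.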
The validity of an SSR model on cameras/datasets different from those used in training is often limited. In particular, the SSR model depends on the spectral response function of the specific camera used, in addition to the nature of the scene and the acquisition methodology. In datasets acquired in reflectance (where the response is normalized to a white standard), this dependence is mitigated, but considering the RGB response of each system remains fundamental [].
The main challenges of spectral super-resolution concern the need for large and heterogeneous datasets for effective training, the limited physical interpretability of deep learning models, and the difficulty of ensuring generalization across different cameras and datasets. In summary, spectral super-resolution represents a promising pathway to broaden spectral analysis applications in real-world scenarios; however, it remains an open scientific and technological problem with several hurdles still to be addressed. The following section examines recent applications of spectral reconstruction and super-resolution, both in controlled laboratory settings and under uncontrolled field conditions.
5. Regression Models from RGB via Spectral Reconstruction
HSI is a highly powerful tool for non-destructive characterization of food products and biological systems. However, its widespread application has been hindered by the high cost of equipment, the complexity of measurement procedures, and the low portability of traditional systems. In this context, spectral reconstruction (SR)—the ability to obtain hyperspectral data from simple RGB images captured with consumer cameras or smartphones—has recently emerged as a promising strategy to enable new forms of low-cost, highly accessible quantitative analysis. Through deep learning models trained to predict spectral information (or selected bands) from RGB photographs, it is theoretically possible to estimate chemical–physical and qualitative parameters directly on real products, both in the lab and potentially under field conditions.
Despite this theoretical potential, practical applications of spectral reconstruction for building quantitative regression models in the agri-food domain are still rare. Most studies focus on validating pipelines in laboratory settings as can be seen in Table 13, and only a few report real-world or in-field applications (Table 14). The main challenges include the standardization of RGB image acquisition and calibration procedures, model transferability across different matrices and lighting conditions, the management of instrumental and environmental variability, and the robustness of models when applied to real rather than simulated RGB images.

Table 13.
Applications of hyperspectral image reconstruction in laboratory settings.

Table 14.
Applications of hyperspectral image reconstruction in field conditions.
This section presents a critical discussion of some of the few existing examples in the literature on the construction and validation of quantitative regression models from RGB images via spectral reconstruction, applied to fruits and food products.
5.1. Tomato Quality Assessment
One of the earliest examples of laboratory-based spectral reconstruction applied to agri-food products is reported by Zhao et al. []. The authors developed and tested a deep learning pipeline for reconstructing hyperspectral images from single RGB images of tomatoes, aiming to rapidly and non-destructively predict several key quality parameters, including soluble solids content, titratable acidity, sugar-to-acid ratio, and anthocyanin/lycopene index. The study used tomato samples at different ripeness stages, which were analyzed using both hyperspectral imaging and RGB capture—partially simulated and partially real. RGB images were either simulated from hyperspectral reflectance data using standard illumination (CIE Standard Illuminant D65) and camera response models (CIE 1931 Standard Observer with gamma correction 1.4) or acquired directly by smartphone (Samsung Galaxy S9+). The reference parameters were measured through laboratory analyses. The reconstruction approach was based on a residual convolutional neural network (HSCNN-R), trained in a supervised manner with RGB inputs and hyperspectral outputs. Training was performed using only simulated RGB images, which ensured pixel-wise alignment between the RGB and hyperspectral data. This network reconstructs 31-band spectra spanning 400 to 1000 nm with an RMSE of 6.22, MRAE of 0.05, and SAM of 1.68. The trained model was then validated on both simulated and real RGB images, in order to evaluate its robustness and transferability. The results showed that HSCNN-R was able to reconstruct hyperspectral pixel spectra with good accuracy—especially in the visible range—and enabled the estimation of quality parameters with R² values ranging from 0.5 to 0.9 (Table 13). The pipeline combined automatic fruit segmentation with regression on reconstructed spectra. However, one major methodological limitation was that training relied on perfectly standardized RGB simulations, making the model highly effective—but only under conditions identical to those used for training. When applied to real RGB images (captured via smartphone, with different lighting, sensor response, noise, and compression), the accuracy of both spectral reconstruction and quantitative predictions tended to decrease, although some robustness was still observed. Notably, the pipeline did not include explicit calibration of real RGB images using color charts or white references, limiting the model’s transferability under variable conditions. In essence, the network links reflectance HSI data to simulated, uncalibrated RGB images. The authors themselves highlighted that a model trained on tomatoes under lab conditions cannot be directly applied to other products, matrices, or lighting setups without retraining or including more diverse data in the training set. This study represents a milestone in validating spectral reconstruction for fresh product analysis in the lab, demonstrating that reliable, non-destructive estimates of quality parameters can be obtained from simple RGB images.
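A sketch of the RGB-simulation step used in this kind of pipeline is given below. It assumes the color-matching functions and the D65 SPD have already been resampled to the cube’s wavelengths (e.g., from published CIE tables), and it applies an identity XYZ-to-RGB mapping for brevity, where a full pipeline would insert the camera or sRGB primaries matrix.

```python
import numpy as np

def simulate_rgb(cube, d65, cmf, gamma=1.4):
    """Render gamma-encoded RGB from a reflectance hypercube.

    cube: (H, W, B) reflectance; d65: (B,) illuminant SPD;
    cmf:  (B, 3) CIE 1931 color-matching functions, resampled to
          the same B wavelengths as the cube."""
    radiance = cube * d65                 # reflected spectrum per pixel
    xyz = radiance @ cmf                  # integrate against the observer
    xyz /= (d65 @ cmf)[1]                 # normalize: perfect white -> Y = 1
    rgb_linear = np.clip(xyz, 0.0, 1.0)   # identity XYZ->RGB (simplification)
    return rgb_linear ** (1.0 / gamma)    # gamma encoding (1.4 in the study)
```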
5.2. Sweet Potato Quality Evaluation
A relevant example of spectral reconstruction applied to quantitative regression in agri-food is the work by Ahmed et al. [], which focused on sweet potatoes. This study showed how an advanced deep learning pipeline can reconstruct hyperspectral data from simulated RGB images, aiming to predict the soluble solids content (SSC), a key industrial and nutritional quality parameter for this crop. The authors used 141 sweet potatoes of three cultivars, acquired in a laboratory setting. Each sample was imaged using a Specim IQ hyperspectral camera, with two acquisitions at different angles; symmetrically positioned 750 W tungsten halogen lamps ensured even illumination of the field of view. RGB images were then simulated from hyperspectral reflectance data using a standard rendering process and later converted to JPEG format for model input. The pipeline included careful segmentation to isolate the sweet potato from the background based on spectral differences. Only pixels from the region of interest (ROI) were used in both the HSI and RGB datasets. Spectra from the ROI were preprocessed using methods such as Multiplicative Scatter Correction (MSC), Standard Normal Variate (SNV), and Savitzky–Golay smoothing. Informative bands were selected using a genetic algorithm to optimize SSC prediction. For training, each sample consisted of a pair: a simulated RGB image (not calibrated in absolute reflectance) and its corresponding HSI cube (reduced to the selected bands and limited to the ROI). No real RGB images were used. The model employed was HSCNN-D, a dense hyperspectral convolutional neural network, which achieved an RMSE of 0.05, MRAE of 0.86, and PSNR of 26 dB (Table 15). The model achieved more faithful spectral reconstructions than the previous study, particularly in the visible range, thanks to a deeper network (HSCNN-D versus HSCNN-R), at the cost of a higher computational load.

Table 15.
Evaluation metrics used in spectral reconstruction studies. In the formulas: $i$ = generic pixel, $y_i$ = ground truth value, $\hat{y}_i$ = predicted value, $n$ = number of samples, $\mathbf{s}_1$ and $\mathbf{s}_2$ = spectral vectors, $MAX$ = maximum signal value.
RMSE | $\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$ |
MRAE | $\mathrm{MRAE} = \frac{1}{n}\sum_{i=1}^{n}\frac{\left|y_i - \hat{y}_i\right|}{y_i}$ |
PSNR | $\mathrm{PSNR} = 20\log_{10}\left(\frac{MAX}{\mathrm{RMSE}}\right)$ |
SAM | $\mathrm{SAM} = \arccos\left(\frac{\mathbf{s}_1 \cdot \mathbf{s}_2}{\left\|\mathbf{s}_1\right\|\left\|\mathbf{s}_2\right\|}\right)$ |
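As a reference implementation, the metrics in Table 15 can be computed as follows (a minimal NumPy sketch; the eps term guarding MRAE against division by zero is our addition).

```python
import numpy as np

def rmse(y, y_hat):
    return np.sqrt(np.mean((y - y_hat) ** 2))

def mrae(y, y_hat, eps=1e-8):
    return np.mean(np.abs(y - y_hat) / (np.abs(y) + eps))

def psnr(y, y_hat, max_val=1.0):
    return 20.0 * np.log10(max_val / rmse(y, y_hat))

def sam(s1, s2):
    """Spectral Angle Mapper between two spectral vectors, in degrees."""
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```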
As described in the study, the spectral reconstruction step is followed by the traditional HSI processing workflow in order to obtain a predictive model. Quantitative regression using Partial Least Squares Regression (PLSR) on the reconstructed data yielded performance comparable to—and, in some cases, better than—that obtained using full-band real HSI, especially when only the selected informative bands were used. Feature selection helped reduce collinearity and noise compared to full-spectrum analysis.
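A minimal sketch of this regression step is shown below, using scikit-learn's PLSRegression on stand-in arrays; the random data, the 10 latent variables, and the 70/30 split are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# X: reconstructed spectra restricted to the selected informative bands
# y: reference SSC values from laboratory analysis (stand-in data here)
rng = np.random.default_rng(0)
X = rng.random((141, 31))        # (samples, selected bands)
y = rng.random(141) * 10 + 5     # hypothetical SSC values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
print("R2 on held-out samples:", r2_score(y_te, pls.predict(X_te).ravel()))
```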
This work extends spectral reconstruction to a complex matrix like sweet potato, showing that a segmented, feature-driven deep learning pipeline can deliver accurate regression models from simulated RGB inputs. However, the absence of real RGB data and field-calibrated imaging protocols remains a key challenge for practical deployment in production environments.
5.3. Glutamate Estimation in Beef
A recent study by Dong et al. [] provides an advanced benchmark for spectral reconstruction applied to animal-based products. The authors explored the feasibility of reconstructing hyperspectral data from RGB images to predict glutamic acid content in beef, comparing the performance of eleven state-of-the-art deep learning models, including the Multi-stage Spectral-wise Transformer (MST) and MST++ [], trained on simulated RGB-HSI pairs with separate subsets for training/testing and regression calibration/validation. The dataset included 360 beef samples from three cuts and four regions in China. Hyperspectral images were acquired in the visible-NIR range using a 125-band system, and corresponding RGB images were captured using the same device. The pipeline involved white/dark correction of the HSI data and feature selection via competitive adaptive reweighted sampling (CARS), reducing the dataset to 31 informative bands. Regions of interest were segmented to isolate meat pixels, and multivariate analysis included correlation assessments and t-SNE visualizations. All deep learning models delivered strong results. MST++ and MST showed the highest consistency with real HSI data, with an RMSE of 0.02, an MRAE of 0.01, and a PSNR of 36.97 dB, improving on the figures reported in the previous studies. For the regression task, the MST++ model performed nearly as well on the reconstructed HSI as on the real HSI, achieving prediction R2 values of 0.842 vs. 0.853, respectively. In contrast, RGB images alone yielded significantly lower accuracy, with an R2 of around 0.60. This study demonstrates the robustness of cutting-edge spectral reconstruction networks for quantitative prediction in meat products.
5.4. Early Detection of Maize Diseases
The study by Fu et al. [] proposes a low-cost pipeline for early maize disease detection under field conditions, using hyperspectral data reconstructed from RGB images acquired in real operational settings. The dataset was collected at Jilin University (China) using the Xianyu 335 maize variety. Images were acquired with a Specim IQ camera (204 bands, 400–1000 nm, 512 × 512 pixels) from a distance of 1–1.5 m. Each image contained both healthy and diseased areas, with pixel-wise annotations. While the hyperspectral images were calibrated using a white reference panel (99% reflectance), the RGB images—captured simultaneously and perfectly aligned—were not calibrated in reflectance. Image patches of 128 × 128 pixels were extracted and augmented through flipping and rotation. The HSI cubes were reduced to 31 bands spanning 400–700 nm. Spectral recovery was performed using an HSCNN+ architecture, a deep CNN with dense blocks, trained to reconstruct hyperspectral outputs from RGB inputs. The network was trained on 90% of the data and tested on the remaining 10%. A second CNN was used to classify disease status at the pixel level (healthy, infected, other), applied to both real and reconstructed HSI data. The results showed that HSCNN+ outperformed competing models such as MST++, MIRNet, and HRNet, achieving an MRAE of 0.0713 and an RMSE of 0.1204, with the lowest errors in the central bands and higher errors at the spectral extremes. In real field scenarios, disease detection based on reconstructed HSI outperformed pure RGB-based detection by 6–7 percentage points in overall accuracy, particularly in complex cases. The performance on the reconstructed HSI was very close to that on the real HSI, and robustness was validated via five-fold cross-validation.
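The patch-extraction and augmentation steps can be illustrated with a short NumPy sketch (ours, not the authors' code); the essential points are the fixed patch size and the identical transformation applied to the aligned RGB and HSI patches.

```python
import numpy as np

def extract_patches(rgb, hsi, size=128, stride=128):
    """Cut aligned RGB/HSI patch pairs; rgb (H, W, 3), hsi (H, W, B)."""
    pairs = []
    h, w = rgb.shape[:2]
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            pairs.append((rgb[r:r+size, c:c+size], hsi[r:r+size, c:c+size]))
    return pairs

def augment(rgb_patch, hsi_patch):
    """Flips and 90-degree rotations, applied identically to both modalities."""
    out = []
    for k in range(4):                       # 0/90/180/270 degree rotations
        r = np.rot90(rgb_patch, k)
        h = np.rot90(hsi_patch, k)
        out.append((r, h))
        out.append((np.flip(r, axis=1), np.flip(h, axis=1)))  # horizontal flip
    return out
```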
5.5. UAV-Based In-Field Phenotyping
A recent study by Zhao et al. [] presents one of the first applications of spectral reconstruction in open-field conditions for high-throughput phenotyping using UAVs. The goal was to reconstruct multispectral images from simple RGB images acquired by commercial drones to evaluate vegetation indices and crop parameters across large areas in a fast and cost-effective manner. The experiments were conducted on maize fields comprising 216 plots across three German sites, with image acquisitions on five dates during the growing season. UAVs equipped with both consumer-grade RGB cameras (DJI Phantom 4 Pro) and multispectral sensors (MicaSense RedEdge MX) were flown simultaneously. While the multispectral images were calibrated using reflectance panels, RGB images were not calibrated in absolute reflectance but were geometrically aligned with the reference images. The spectral reconstruction model was based on a modified DeepLabv3+ architecture, with RGB inputs of 256 × 256 pixels and outputs consisting of five multispectral bands (Blue, Green, Red, RedEdge, and NIR). The model was trained and validated using supervised learning with data augmentation and tested across sites using a cross-site validation strategy. The results were promising: RMSE values for band reconstruction ranged from 0.025 to 0.034, and pixel-level R2 values were between 0.93 and 0.97. The NIR band proved the most difficult to reconstruct. Vegetation indices such as NDVI, GNDVI, and NDRE calculated from reconstructed data showed excellent agreement with reference data, with errors typically below 5%. This study confirms that UAV-RGB spectral reconstruction enables multispectral mapping and phenotyping in the field with an accuracy comparable to dedicated multispectral sensors. The approach offers an accessible solution for both research and precision agriculture, although future work must address absolute calibration and robustness under more extreme conditions.
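The vegetation indices named above follow their standard normalized-difference definitions and can be computed directly from the five reconstructed bands, as in this minimal sketch (the eps guard against division by zero is our addition).

```python
import numpy as np

def indices_from_bands(blue, green, red, rededge, nir, eps=1e-8):
    """NDVI, GNDVI, and NDRE from the five reconstructed bands.
    All inputs are 2-D reflectance arrays of equal shape."""
    ndvi  = (nir - red)     / (nir + red + eps)
    gndvi = (nir - green)   / (nir + green + eps)
    ndre  = (nir - rededge) / (nir + rededge + eps)
    return ndvi, gndvi, ndre
```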
6. Discussion
The first limitation observed in the analyzed studies concerns the very nature of the RGB inputs: rather than being acquired in parallel with the hyperspectral data, RGB images are often mathematically derived from the hyperspectral cube. This raises questions about the realism of the experimental setup, as the neural networks are effectively trained on radiometrically uncalibrated inputs that do not reflect the effects of diverse camera spectral responses, image compression, or preprocessing. Moreover, the spectral reconstruction itself is typically performed on evenly spaced bands within the 400–1000 nm range, without prioritizing agronomically meaningful regions such as the red edge. In this sense, many approaches appear to reproduce strategies developed for generic spectral reconstruction challenges (e.g., NTIRE) rather than being specifically tailored to agricultural applications. The number and density of spectral bands, however, are not the only limitation. To date, no study has demonstrated the feasibility of reconstructing the full VIS–NIR interval (400–2500 nm), which would be of greater agronomic interest than the 400–1000 nm range.
Moreover, all of the analyzed studies rely exclusively on proprietary datasets collected for the specific experiment, without leveraging publicly available repositories that could provide broader coverage or crop-specific annotations. The pivotal role of datasets lies not only in their size and diversity but also in the type of input data (e.g., reflectance vs. radiance, laboratory vs. in-field acquisitions) and the level of calibration provided. Most deep learning models for SSR rely heavily on large, well-annotated datasets. However, many commonly used datasets lack proper calibration, particularly with respect to reflectance standards. This introduces a substantial source of variability, limiting the transferability and generalizability of trained models. When SSR is applied to field conditions—where lighting is highly variable and the environment is complex—the choice of dataset becomes even more consequential. Models trained exclusively on laboratory-acquired, perfectly standardized data often fail to maintain performance when exposed to real-world variability, such as non-uniform illumination, different backgrounds, and the presence of occlusions or shadows. The issue of “domain shift” between training and deployment environments is particularly acute in agriculture, where seasonal changes, crop types, and imaging hardware may all differ from those represented in the training data.
7. Conclusions
In summary, field hyperspectral imaging remains constrained by high costs, complex instrumentation, and limited portability, which restrict its use largely to research contexts. Spectral super-resolution (SSR), leveraging low-cost RGB cameras, offers a more accessible alternative with potential to democratize spectral analytics in agriculture. While SSR methods usually deliver lower accuracy than direct HSI acquisition, they provide clear advantages in scalability, affordability, and the feasibility of near real-time applications. To move from laboratory prototypes to field-ready solutions, it is essential to establish standardized pipelines covering image acquisition, calibration, segmentation, and inference. Equally critical is the creation of large, open, and crop-specific datasets that capture environmental variability, along with robust calibration references to ensure transferability. Advances in adaptive and physically grounded SSR models will be key to overcoming domain shift across sites, devices, and crop types. Although SSR cannot yet replace hyperspectral imaging in all scenarios, it represents a pragmatic pathway toward the wider adoption of spectral tools in precision agriculture. Real progress will depend on coordinated efforts across academia, industry, and stakeholders to translate current advances into robust, user-friendly, and impactful solutions.
Author Contributions
Conceptualization, M.M., C.C., and M.S.; methodology, M.M.; writing—original draft preparation, M.M.; writing—review and editing, M.M., C.C., and M.S.; supervision, C.C. and M.S.; funding acquisition, C.C. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the EU-NextGenerationEU-Piano Nazionale Ripresa e Resilienza (PNRR) Mission 4, Component 2, Investment 3.3 (DM 352/2022) and by CNH Industrial Italia SPA.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
HSI | Hyperspectral Imaging |
CNN | Convolutional Neural Network |
GAN | Generative Adversarial Network |
MLP | Multilayer Perceptron |
MSC | Multiplicative Scatter Correction |
MST | Multi-stage Spectral-wise Transformer |
MRAE | Mean Relative Absolute Error |
NIR | Near-Infrared |
NDVI | Normalized Difference Vegetation Index |
GNDVI | Green Normalized Difference Vegetation Index |
NDRE | Normalized Difference Red Edge |
PSNR | Peak Signal-to-Noise Ratio |
SNV | Standard Normal Variate |
SVM | Support Vector Machine |
SPD | Spectral Power Distribution |
SSR | Spectral Super Resolution |
SSC | Soluble Solids Content |
ROI | Region of Interest |
R2 | Coefficient of Determination |
RMSE | Root Mean Squared Error |
UAV | Unmanned Aerial Vehicle |
References
- Zhang, J.; Su, R.; Fu, Q.; Ren, W.; Heide, F.; Nie, Y. A survey on computational spectral reconstruction methods from RGB to hyperspectral imaging. Sci. Rep. 2022, 12, 11905. [Google Scholar] [CrossRef]
- Fsian, A.N.; Thomas, J.B.; Hardeberg, J.Y.; Gouton, P. Spectral reconstruction from RGB imagery: A potential option for infinite spectral data? Sensors 2024, 24, 3666. [Google Scholar] [CrossRef]
- He, J.; Yuan, Q.; Li, J.; Xiao, Y.; Liu, D.; Shen, H.; Zhang, L. Spectral super-resolution meets deep learning: Achievements and challenges. Inf. Fusion 2023, 97, 101812. [Google Scholar] [CrossRef]
- Lin, Y.T.; Finlayson, G.D. Physically Plausible Spectral Reconstruction. Sensors 2020, 20, 6399. [Google Scholar] [CrossRef]
- Yan, L.; Wang, X.; Zhao, M.; Kaloorazi, M.; Chen, J.; Rahardja, S. Reconstruction of hyperspectral data from RGB images with prior category information. IEEE Trans. Comput. Imaging 2020, 6, 1070–1081. [Google Scholar] [CrossRef]
- Arad, B.; Timofte, R.; Ben-Shahar, O.; Lin, Y.T.; Finlayson, G.D. NTIRE 2020 Challenge on Spectral Reconstruction from an RGB Image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 882–883. [Google Scholar]
- Arad, B.; Timofte, R.; Yahel, R.; Morag, N.; Bernat, A.; Cai, Y.; Lin, J.; Lin, Z.; Wang, H.; Zhang, Y.; et al. NTIRE 2022 Spectral Recovery Challenge and Data Set. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), New Orleans, LA, USA, 19–20 June 2022; pp. 863–864. [Google Scholar]
- Medina-García, M.; Amigo, J.M.; Martínez-Domingo, M.A.; Valero, E.M.; Jiménez-Carvelo, A.M. Strategies for analysing hyperspectral imaging data for food quality and safety issues—A critical review of the last 5 years. Microchem. J. 2025, 214, 113994. [Google Scholar] [CrossRef]
- Ram, B.G.; Oduor, P.; Igathinathane, C.; Howatt, K.; Sun, X. A systematic review of hyperspectral imaging in precision agriculture: Analysis of its current state and future prospects. Comput. Electron. Agric. 2024, 222, 109037. [Google Scholar] [CrossRef]
- Bhargava, A.; Sachdeva, A.; Sharma, K.; Alsharif, M.H.; Uthansakul, P.; Uthansakul, M. Hyperspectral imaging and its applications: A review. Heliyon 2024, 10, e33208. [Google Scholar] [CrossRef]
- Polder, G.; Blok, P.M.; de Villiers, H.A.C.; van der Wolf, J.M.; Kamp, J. Hyperspectral imaging for detection of plant diseases with deep learning. Plant Pathol. 2019, 68, 1017–1024. [Google Scholar] [CrossRef]
- Mérida-García, R.; Gálvez, S.; Solís, I.; Martínez-Moreno, F.; Camino, C.; Soriano, J.M.; Sansaloni, C.; Ammar, K.; Bentley, A.R.; Gonzalez-Dugo, V.; et al. High-throughput phenotyping using hyperspectral indicators supports the genetic dissection of yield in durum wheat grown under heat and drought stress. Front. Plant Sci. 2024, 15, 1470520. [Google Scholar] [CrossRef]
- Virlet, N.; Sabermanesh, K.; Sadeghi-Tehran, P.; Hawkesford, M.J. Field Scanalyzer: An automated robotic field phenotyping platform for detailed crop monitoring. Funct. Plant Biol. 2017, 44, 143–153. [Google Scholar] [CrossRef]
- Eddy, D.; Gerhards, R.; Kühbauch, W. Precision agriculture using hyperspectral imaging: Early results. Precis. Agric. 2008, 9, 123–135. [Google Scholar] [CrossRef]
- Zhang, J.; Pu, R.; Wang, J. Hyperspectral band selection for crop disease detection. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 22–27 July 2012; pp. 2456–2459. [Google Scholar] [CrossRef]
- Zhang, J.; Pu, R.; Wang, J. Detection of weeds in maize fields using hyperspectral imaging. Comput. Electron. Agric. 2012, 80, 44–54. [Google Scholar] [CrossRef]
- Pantazi, X.E.; Moshou, D.; Bravo, C. Active learning system for weed species recognition based on hyperspectral sensing. Biosyst. Eng. 2016, 146, 193–202. [Google Scholar] [CrossRef]
- Williams, M.; Verhoeven, S.; Roelofs, J. Hyperspectral weed detection using tractor-mounted imaging. In Proceedings of the European Conference on Precision Agriculture (ECPA), Edinburgh, UK, 16–20 July 2017; pp. 121–128. [Google Scholar]
- Horstrand, P.; Guerra, R.; Aranda, J. Deep learning for weed detection in UAV-based hyperspectral imagery. Remote Sens. 2019, 11, 1559. [Google Scholar] [CrossRef]
- Horstrand, P.; Guerra, R.; Aranda, J. On-board hyperspectral data processing with UAVs for real-time applications. ISPRS J. Photogramm. Remote Sens. 2019, 158, 36–48. [Google Scholar] [CrossRef]
- Sousa, J.J.; Pádua, L.; Adão, T. Hyperspectral UAV imaging for vineyard monitoring. Remote Sens. 2022, 14, 352. [Google Scholar] [CrossRef]
- Feng, L.; Wu, B.; He, Y.; Zhang, C. Hyperspectral Imaging Combined With Deep Transfer Learning for Rice Disease Detection. Front. Plant Sci. 2021, 12, 693521. [Google Scholar] [CrossRef]
- Lu, B.; Dao, P.; Liu, J.; He, Y.; Shang, J. Recent advances of hyperspectral imaging technology and applications in agriculture. Remote Sens. 2020, 12, 2659. [Google Scholar] [CrossRef]
- Maes, W.; Steppe, K. Perspectives for hyperspectral sensing in precision agriculture. Trends Plant Sci. 2019, 24, 152–164. [Google Scholar] [CrossRef]
- Chen, Y.; An, X.; Gao, S.; Li, S.; Kang, H. A Deep Learning-Based Vision System Combining Detection and Tracking for Fast On-Line Citrus Sorting. Front. Plant Sci. 2021, 12, 622062. [Google Scholar] [CrossRef]
- Eddy, D.; Gerhards, R. Machine learning approaches for hyperspectral weed detection. Comput. Electron. Agric. 2014, 105, 173–182. [Google Scholar] [CrossRef]
- Herrmann, I.; Shapira, U.; Kinast, S.; Karnieli, A.; Bonfil, D.J. Ground-level hyperspectral imagery for detecting weeds in wheat fields. Precis. Agric. 2013, 14, 637–659. [Google Scholar] [CrossRef]
- Zhao, Y.; Zhang, L.; Li, J. Advances in hyperspectral band selection for crop monitoring. ISPRS J. Photogramm. Remote Sens. 2022, 188, 38–54. [Google Scholar] [CrossRef]
- Upadhyay, A.; Chandel, N.S.; Singh, K.P.; Chakraborty, S.K.; Nandede, B.M.; Kumar, M.; Subeesh, A.; Upendar, K.; Salem, A.; Elbeltagi, A. Deep learning and computer vision in plant disease detection: A comprehensive review of techniques, models, and trends in precision agriculture. Artif. Intell. Rev. 2025, 58, 92. [Google Scholar] [CrossRef]
- Zhang, X.; Chen, Y.; Wang, Z. Generative adversarial networks for hyperspectral data augmentation in crop classification. Remote Sens. 2022, 14, 2894. [Google Scholar] [CrossRef]
- Zheng, H.; Cheng, T.; Li, D.; Yao, X.; Tian, Y.; Cao, W.; Zhu, Y. Combining Unmanned Aerial Vehicle (UAV)-Based Multispectral Imagery and Ground-Based Hyperspectral Data for Plant Nitrogen Concentration Estimation in Rice. Front. Plant Sci. 2018, 9, 936. [Google Scholar] [CrossRef]
- Rubio, A.; Rueda-Ayala, V.; Gerhards, R. Hyperspectral and machine learning approaches for crop nutrient assessment. Comput. Electron. Agric. 2021, 185, 106157. [Google Scholar] [CrossRef]
- Corti, M.; Cavalli, D.; Colombo, R. Assessment of crop water stress with UAV hyperspectral data. Agric. Water Manag. 2017, 193, 150–160. [Google Scholar] [CrossRef]
- Sanchez, S.; Plaza, A. GPU implementation of endmember extraction algorithms for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4157–4171. [Google Scholar] [CrossRef]
- Aragón, M.; González, J.; Plaza, A. Efficient hyperspectral PCA on embedded GPU platforms for precision agriculture. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 2636–2640. [Google Scholar] [CrossRef]
- Cervero, T.; Lopez, S.; Callico, G.M.; Lopez, J.F.; Sarmiento, R. Scalable architectures for real-time hyperspectral unmixing. Microelectron. J. 2014, 45, 1138–1150. [Google Scholar] [CrossRef]
- Gyaneshwar, D.; Nidamanuri, R.R. A real-time FPGA accelerated stream processing for hyperspectral image classification. Geocarto Int. 2022, 37, 52–69. [Google Scholar] [CrossRef]
- Rosario, D.; López, S.; Plaza, A. Real-time FPGA implementation of the Vertex Component Analysis algorithm for hyperspectral unmixing. In Proceedings of the SPIE 9244, Image and Signal Processing for Remote Sensing XX, Amsterdam, The Netherlands, 22–24 September 2014. [Google Scholar] [CrossRef]
- Benelli, A.; Cevoli, C.; Fabbri, A. In-field Vis/NIR hyperspectral imaging to measure soluble solids content of wine grape berries during ripening. In Proceedings of the 2020 IEEE International Workshop on Metrology for Agriculture and Forestry (MetroAgriFor), Trento, Italy, 4–6 November 2020; pp. 227–231. [Google Scholar] [CrossRef]
- Benelli, A.; Cevoli, C.; Ragni, L.; Fabbri, A. In-field and non-destructive monitoring of grapes maturity by hyperspectral imaging. Biosyst. Eng. 2021, 207, 59–67. [Google Scholar] [CrossRef]
- Gutiérrez, S.; Fernández-Novales, J.; Diago, M.P.; Tardaguila, J. On-The-Go hyperspectral imaging under field conditions and machine learning for the classification of grapevine varieties. Front. Plant Sci. 2018, 9, 1102. [Google Scholar] [CrossRef]
- Fernández-Novales, J.; Tardáguila, J.; Gutiérrez, S.; Diago, M.P. On The Go VIS SW NIR spectroscopy as a reliable monitoring tool for grape composition within the vineyard. Molecules 2019, 24, 2795. [Google Scholar] [CrossRef]
- Ferrara, G.; Melle, A.; Marcotuli, V.; Botturi, D.; Fawole, O.A.; Mazzeo, A. The prediction of ripening parameters in Primitivo wine grape cultivar using a portable NIR device. J. Food Compos. Anal. 2022, 114, 104836. [Google Scholar] [CrossRef]
- Gutiérrez, S.; Tardáguila, J.; Fernández-Novales, J.; Diago, M.P. On the go hyperspectral imaging for the in field estimation of grape berry soluble solids and anthocyanin concentration. Aust. J. Grape Wine Res. 2018, 24, 127–133. [Google Scholar] [CrossRef]
- Dale, L.M.; Thewis, A.; Boudry, C.; Rotar, I.; Dardenne, P.; Baeten, V.; Fernandez Pierna, J.A. Hyperspectral imaging applications in agriculture and agro-food product quality and safety control: A review. Appl. Spectrosc. Rev. 2013, 48, 142–159. [Google Scholar] [CrossRef]
- Benelli, A.; Cevoli, C.; Fabbri, A. In-field hyperspectral imaging: An overview on the ground-based applications in agriculture. J. Agric. Eng. 2020, 51, 1030. [Google Scholar] [CrossRef]
- Underwood, J.; Wendel, A.; Schofield, B.; McMurray, L.; Kimber, R. Efficient in-field plant phenomics for row-crops with an autonomous ground vehicle. J. Field Robot. 2017, 34, 1061–1083. [Google Scholar] [CrossRef]
- Jiang, Y.; Li, C.; Robertson, J.S.; Sun, S.; Xu, R.; Paterson, A.H. GPhenoVision: A ground mobile system with multi-modal imaging for field-based high throughput phenotyping of cotton. Sci. Rep. 2018, 8, 1213. [Google Scholar] [CrossRef]
- Wendel, A.; Underwood, J.; Walsh, K. Maturity estimation of mangoes using hyperspectral imaging from a ground based mobile platform. Comput. Electron. Agric. 2018, 155, 298–313. [Google Scholar] [CrossRef]
- Wang, B.; Zhang, X.; Dong, Y.; Zhang, J.; Zhang, J.; Zhou, X. Retrieval of leaf chlorophyll content of paddy rice with extracted foliar hyperspectral imagery. In Proceedings of the 2018 7th International Conference on Agro-Geoinformatics (Agro-Geoinformatics), Hangzhou, China, 6–9 August 2018; pp. 1–5. [Google Scholar]
- Wu, Q.; Wang, C.; Fang, J.J.; Ji, J.W. Field monitoring of wheat seedling stage with hyperspectral imaging. Int. J. Agric. Biol. Eng. 2016, 9, 143–148. [Google Scholar] [CrossRef]
- Jay, S.; Gorretta, N.; Morel, J.; Maupas, F.; Bendoula, R.; Rabatel, G.; Dutartre, D.; Comar, A.; Baret, F. Estimating leaf chlorophyll content in sugar beet canopies using millimeter- to centimeter-scale reflectance imagery. Remote Sens. Environ. 2017, 198, 173–186. [Google Scholar] [CrossRef]
- Onoyama, H.; Ryu, C.; Suguri, M.; Iida, M. Nitrogen prediction model of rice plant at panicle initiation stage using ground-based hyperspectral imaging: Growing degree-days integrated model. Precis. Agric. 2015, 16, 558–570. [Google Scholar] [CrossRef]
- Vigneau, N.; Ecarnot, M.; Rabatel, G.; Roumet, P. Potential of field hyperspectral imaging as a non destructive method to assess leaf nitrogen content in Wheat. Field Crop. Res. 2011, 122, 25–31. [Google Scholar] [CrossRef]
- Whetton, R.L.; Waine, T.W.; Mouazen, A.M. Hyperspectral measurements of yellow rust and fusarium head blight in cereal crops: Part 2: On-line field measurement. Biosyst. Eng. 2018, 167, 144–158. [Google Scholar] [CrossRef]
- Zhao, J.; Zhang, D.; Huang, L.; Zhang, Q.; Liu, W.; Yang, H. Vertical features of yellow rust infestation on winter wheat using hyperspectral imaging measurements. In Proceedings of the 2016 5th International Conference on Agro-Geoinformatics, Tianjin, China, 18–20 July 2016; pp. 1–4. [Google Scholar]
- Römer, C.; Wahabzada, M.; Ballvora, A.; Pinto, F.; Rossini, M.; Panigada, C.; Behmann, J.; Léon, J.; Thurau, C.; Bauckhage, C.; et al. Early drought stress detection in cereals: Simplex volume maximisation for hyperspectral image analysis. Funct. Plant Biol. 2012, 39, 878–890. [Google Scholar] [CrossRef]
- Reddy, K.N.; Huang, Y.; Lee, M.A.; Nandula, V.K.; Fletcher, R.S.; Thomson, S.J.; Zhao, F. Glyphosate-resistant and glyphosate-susceptible Palmer amaranth (Amaranthus palmeri S. Wats.): Hyperspectral reflectance properties of plants and potential for classification. Pest Manag. Sci. 2014, 70, 1910–1917. [Google Scholar] [CrossRef]
- Chen, W.; Li, H.; Niu, Q.; Hongnan, H. Hyperspectral image segmentation for maize stubble in no-till field. In Proceedings of the 2017 ASABE Annual International Meeting. American Society of Agricultural and Biological Engineers, Spokane, WA, USA, 16–19 July 2017; p. 6. [Google Scholar]
- Das, M.; Yeo, W.S.; Saptoro, A. A review of machine learning in hyperspectral imaging for food safety. Vib. Spectrosc. 2025, 139, 103828. [Google Scholar] [CrossRef]
- Torres, I.; Amigo, J.M. An overview of regression methods in hyperspectral and multispectral imaging. Data Handl. Sci. Technol. 2019, 32, 205–230. [Google Scholar] [CrossRef]
- Zhang, C.; Zhao, Y.; Yan, T.; Bai, X.; Xiao, Q.; Gao, P.; Li, M.; Huang, W.; Bao, Y.; He, Y. Application of near-infrared hyperspectral imaging for variety identification of coated maize kernels with deep learning. Infrared Phys. Technol. 2020, 111, 103550. [Google Scholar] [CrossRef]
- Alom, M.Z.; Taha, T.M.; Yakopcic, C.; Westberg, S.; Sidike, P.; Nasrin, M.S.; Hasan, M.; Van Essen, B.C.; Awwal, A.A.S.; Asari, V.K. A state-of-the-art survey on deep learning theory and architectures. Electronics 2019, 8, 292. [Google Scholar] [CrossRef]
- Khanna, R.; Sa, I.; Nieto, J.; Siegwart, R. On Field Radiometric Calibration for Multispectral Cameras. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 6503–6509. [Google Scholar] [CrossRef]
- Deng, L.; Hao, X.; Mao, Z.; Yan, Y.; Sun, J.; Zhang, A. A Subband Radiometric Calibration Method for UAV-Based Multispectral Remote Sensing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 8, 2869–2880. [Google Scholar] [CrossRef]
- Simoneau, A.; Aubé, M. Methods to Calibrate a Digital Colour Camera as a Multispectral Imaging Sensor in Low Light Conditions. Remote Sens. 2023, 15, 3634. [Google Scholar] [CrossRef]
- Poncet, A.M.; Knappenberger, T.; Brodbeck, C.; Fogle, M., Jr.; Shaw, J.N.; Ortiz, B.V. Multispectral UAS Data Accuracy for Different Radiometric Calibration Methods. Remote Sens. 2019, 11, 1917. [Google Scholar] [CrossRef]
- Wang, S.; Baum, A.; Zarco-Tejada, P.J.; Dam-Hansen, C.; Thorseth, A.; Bauer-Gottwein, P.; Bandini, F.; Garcia, M. Unmanned Aerial System multispectral mapping for low and variable solar irradiance conditions: Potential of tensor decomposition. ISPRS J. Photogramm. Remote Sens. 2019, 155, 58–71. [Google Scholar] [CrossRef]
- Barker, J.B.; Woldt, W.E.; Wardlow, B.D.; Neale, C.M.U.; Maguire, M.S.; Leavitt, B.C.; Heeren, D.M. Calibration of a Common Shortwave Multispectral Camera System for Quantitative Agricultural Applications. Precis. Agric. 2020, 21, 922–935. [Google Scholar] [CrossRef]
- Qin, Z.; Li, X.; Gu, Y. An Illumination Estimation and Compensation Method for Radiometric Correction of Multispectral Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5545012. [Google Scholar] [CrossRef]
- Kitanovski, V.; Thomas, J.B.; Hardeberg, J.Y. Reflectance Estimation from Snapshot Multispectral Images Captured under Unknown Illumination. In Proceedings of the 29th Color and Imaging Conference Final Program and Proceedings, Online, 1–4 November 2021; pp. 264–269. [Google Scholar] [CrossRef]
- Khan, H.A.; Thomas, J.B.; Hardeberg, J.Y.; Laligant, O. Illuminant Estimation in Multispectral Imaging. J. Opt. Soc. Am. A 2017, 34, 1085–1099. [Google Scholar] [CrossRef]
- Thomas, J.B. Illuminant Estimation from Uncalibrated Multispectral Images. In Proceedings of the 2015 Colour and Visual Computing Symposium (CVCS), Gjovik, Norway, 25–26 August 2015; pp. 1–6. [Google Scholar] [CrossRef]
- Li, Y.; Fu, Q.; Heidrich, W. Multispectral illumination estimation using deep unrolling network. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 2672–2681. [Google Scholar] [CrossRef]
- Amziane, A.; Losson, O.; Mathon, B.; Dumenil, A.; Macaire, L. Frame-based Reflectance Estimation from Multispectral Images for Weed Identification in Varying Illumination Conditions. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Paris, France, 9–12 November 2020; pp. 1966–1970. [Google Scholar] [CrossRef]
- Alvarez-Cortes, S.; Kunkel, T.; Masia, B. Practical Low-Cost Recovery of Spectral Power Distributions. Comput. Graph. Forum 2016, 35, 166–178. [Google Scholar] [CrossRef]
- Dinguirard, M.; Slater, P.N. Calibration of Space-Multispectral Imaging Sensors: A Review. Remote Sens. Environ. 1999, 68, 194–205. [Google Scholar] [CrossRef]
- Ayala, L.; Seidlitz, S.; Vemuri, A.; Wirkert, S.J.; Kirchner, T.; Adler, T.J.; Engels, C.; Teber, D.; Maier-Hein, L. Light Source Calibration for Multispectral Imaging in Surgery. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1117–1125. [Google Scholar] [CrossRef] [PubMed]
- Moya, I.; Camenen, L.; Evain, S.; Goulas, Y.; Cerovic, Z.G.; Latouche, G.; Flexas, J.; Ounis, A. A new instrument for passive remote sensing Measurements of sunlight-induced chlorophyll fluorescence. Remote Sens. Environ. 2004, 91, 186–197. [Google Scholar] [CrossRef]
- Heikkinen, V.; Lenz, R.; Jetsu, T.; Parkkinen, J.; Hauta-Kasari, M.; Jääskeläinen, T. Evaluation and unification of some methods for estimating reflectance spectra from RGB images. J. Opt. Soc. Am. A 2008, 25, 2444–2458. [Google Scholar] [CrossRef]
- Lin, Y.T.; Finlayson, G.D. Exposure Invariance in Spectral Reconstruction from RGB Images. In Proceedings of the IS&T International Symposium on Electronic Imaging, Burlingame, CA, USA, 13–17 January 2019; p. 284. [Google Scholar] [CrossRef]
- Connah, D.R.; Hardeberg, J.Y. Spectral recovery using polynomial models. In Proceedings of the Color Imaging X: Processing, Hardcopy, and Applications, Bellingham, WA, USA, 17–20 January 2005; Volume 5667, pp. 65–75. [Google Scholar] [CrossRef]
- Li, Y.; Wang, C.; Zhao, J. Locally Linear Embedded Sparse Coding for Spectral Reconstruction From RGB Images. IEEE Signal Process. Lett. 2018, 25, 363–367. [Google Scholar] [CrossRef]
- Timofte, R.; De Smet, V.; Van Gool, L. A+: Adjusted Anchored Neighborhood Regression for Fast Super-Resolution. In Proceedings of the Asian Conference on Computer Vision (ACCV), Singapore, 1–5 November 2014; Springer: Cham, Switzerland, 2015; Volume 9006, pp. 111–126. [Google Scholar]
- Shi, Z.; Chen, C.; Xiong, Z.; Liu, D.; Wu, F. HSCNN+: Advanced CNN-Based Hyperspectral Recovery from RGB Images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 11052–11060. [Google Scholar]
- Liu, P.; Zhao, H. Adversarial Networks for Scale Feature-Attention Spectral Image Reconstruction from a Single RGB. Sensors 2020, 20, 2426. [Google Scholar] [CrossRef]
- Alvarez-Gila, A.; van de Weijer, J.; Garrote, E. Adversarial Networks for Spatial Context-Aware Spectral Image Reconstruction from RGB. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, Italy, 22–29 October 2017; pp. 480–489. [Google Scholar] [CrossRef]
- Arad, B.; Ben-Shahar, O.; Timofte, R.; Gool, L.V.; Zhang, L.; Yang, M.; Xiong, Z.; Chen, C.; Shi, Z.; Liu, D.; et al. NTIRE 2018 Challenge on Spectral Reconstruction from RGB Images. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar]
- Chen, C.; Wang, Y.; Zhang, N.; Zhang, Y.; Zhao, Z. A review of hyperspectral image super-resolution based on deep learning. Remote Sens. 2023, 15, 2853. [Google Scholar] [CrossRef]
- Arad, B.; Ben-Shahar, O. Sparse Recovery of Hyperspectral Signal from Natural RGB Images. In Proceedings of the Computer Vision—ECCV 2016, Amsterdam, The Netherlands, 8–16 October 2016; Springer: Cham, Switzerland, 2016. [Google Scholar]
- Yasuma, F.; Mitsunaga, T.; Iso, D.; Nayar, S.K. Generalized Assorted Pixel Camera: Postcapture Control of Resolution, Dynamic Range, and Spectrum. IEEE Trans. Image Process. 2010, 19, 2241–2253. [Google Scholar] [CrossRef]
- Chakrabarti, A.; Zickler, T. Statistics of Real-World Hyperspectral Images. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 193–200. [Google Scholar]
- Shi, G.; Huang, H.; Li, Z.; Duan, Y. Multi-manifold locality graph preserving analysis for hyperspectral image classification. Neurocomputing 2020, 388, 45–59. [Google Scholar] [CrossRef]
- Ghamisi, P.; Benediktsson, J.A.; Phinn, S. Fusion of Hyperspectral and LiDAR Data in Classification of Urban Areas. In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014. [Google Scholar]
- Pacifici, F.; Chanussot, J.; Du, Q. 2011 GRSS Data Fusion Contest: Exploiting WorldView-2 Multi-Angular Acquisitions. In Proceedings of the 2011 IEEE International Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 24–29 July 2011. [Google Scholar] [CrossRef]
- Landgrebe, D. Hyperspectral Image Data Analysis. IEEE Signal Process. Mag. 2002, 19, 17–28. [Google Scholar] [CrossRef]
- Katkovsky, L.V.; Martinov, A.O.; Siliuk, V.A.; Ivanov, D.A.; Kokhanovsky, A.A. Fast Atmospheric Correction Method for Hyperspectral Data. Remote Sens. 2018, 10, 1698. [Google Scholar] [CrossRef]
- Green, R.O.; Sarture, C.M.; Chrien, T.G.; Aronsson, M.; Chippendale, B.J.; Faust, J.A.; Pavri, B.E.; Chovit, C.J.; Solis, M.; Olah, M.R.; et al. Imaging Spectroscopy and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Remote Sens. Environ. 1998, 65, 227–248. [Google Scholar] [CrossRef]
- Zhong, Y.; Hu, X.; Luo, C.; Wang, X.; Zhao, J.; Zhang, L. WHU-Hi: UAV-borne hyperspectral with high spatial resolution (H2) benchmark datasets and classifier for precise crop identification based on deep convolutional neural network with CRF. Remote Sens. Environ. 2020, 250, 112012. [Google Scholar] [CrossRef]
- Koundinya, S.; Sharma, H.; Sharma, M.; Upadhyay, A.; Manekar, R.; Mukhopadhyay, R.; Karmakar, A.; Chaudhury, S. 2D-3D CNN based architectures for spectral reconstruction from RGB images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 957–958. [Google Scholar]
- Zhao, J.; Kechasov, D.; Rewald, B.; Bodner, G.; Verheul, M.; Clarke, N.; Clarke, J. Deep learning in hyperspectral image reconstruction from single RGB images—A case study on tomato quality parameters. Remote Sens. 2020, 12, 3258. [Google Scholar] [CrossRef]
- Lailyshofa, N.; Saputro, A.H. Hyperspectral Rice Grain Image Reconstruction Using HR-ResNet Algorithm to Construct Rice Spectral Reflectance Profile. In Proceedings of the 2023 International Conference on Information Technology Research and Innovation (ICITRI), Jakarta, Indonesia, 16 August 2023; pp. 54–59. [Google Scholar] [CrossRef]
- Ahmed, M.T.; Monjur, O.; Kamruzzaman, M. Deep learning-based hyperspectral image reconstruction for quality assessment of agro-product. J. Food Eng. 2024, 382, 112223. [Google Scholar] [CrossRef]
- Ahmed, M.; Villordon, A.; Kamruzzaman, M. Comparative analysis of Hyperspectral Image Reconstruction using Deep Learning for Agricultural and Biological Applications. Results Eng. 2024, 23, 102623. [Google Scholar] [CrossRef]
- Ahmed, M.; Ahmed, M.; Monjur, O.; Emmert, J.L.; Chowdhary, G.; Kamruzzaman, M. Hyperspectral image reconstruction for predicting chick embryo mortality towards advancing egg and hatchery industry. Smart Agric. Technol. 2024, 9, 100533. [Google Scholar] [CrossRef]
- Dong, F.; Xu, Y.; Shi, Y.; Feng, Y.; Ma, Z.; Li, H.; Zhang, Z.; Wang, G.; Chen, Y.; Xian, J.; et al. Spectral reconstruction from RGB image to hyperspectral image: Take the detection of glutamic acid index in beef as an example. Food Chem. 2025, 463, 141543. [Google Scholar] [CrossRef]
- Fu, J.; Liu, J.; Zhao, R.; Chen, Z.; Qiao, Y.; Li, D. Maize disease detection based on spectral recovery from RGB images. Front. Plant Sci. 2022, 13, 1056842. [Google Scholar] [CrossRef]
- Yang, W.; Zhang, B.; Xu, W.; Liu, S.; Lan, Y.; Zhang, L. Investigating the impact of hyperspectral reconstruction techniques on the quantitative inversion of rice physiological parameters: A case study using the MST++ model. J. Integr. Agric. 2024, 24, 2540–2557. [Google Scholar] [CrossRef]
- Zhao, J.; Kumar, A.; Banoth, B.N.; Marathi, B.; Rajalakshmi, P.; Rewald, B.; Ninomiya, S.; Guo, W. Deep-Learning-Based Multispectral Image Reconstruction from Single Natural Color RGB Image—Enhancing UAV-Based Phenotyping. Remote Sens. 2022, 14, 1272. [Google Scholar] [CrossRef]
- Cai, Y.; Lin, J.; Lin, Z.; Wang, H.; Zhang, Y.; Pfister, H.; Van Gool, L. MST++: Multi-stage spectral-wise transformer for efficient spectral reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), New Orleans, LA, USA, 21–24 June 2022; pp. 744–754. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).