Article

Adaptive Shadow Compensation Method in Hyperspectral Images via Multi-Exposure Fusion and Edge Fusion

by Yan Meng, Guanyi Li and Wei Huang *
Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, Shanghai Institute of Advanced Communication and Data Science, Shanghai University, Shangda Road 99, Shanghai 200444, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(9), 3890; https://doi.org/10.3390/app14093890
Submission received: 2 April 2024 / Revised: 22 April 2024 / Accepted: 29 April 2024 / Published: 1 May 2024

Abstract:
Shadows in hyperspectral images reduce spectral intensity and alter spectral characteristics, significantly hindering analysis and applications. However, current shadow compensation methods struggle with nonlinear attenuation at different wavelengths and unnatural transitions at the shadow boundary. To address these challenges, we propose a two-stage shadow compensation method based on multi-exposure fusion and edge fusion. Initially, shadow regions are identified through color space conversion and an adaptive threshold. The first stage employs multi-exposure fusion, generating a series of exposure images through adaptive exposure coefficients that reflect spatial variations in shadow intensity. Fusion weights for the exposure images are determined from exposure, contrast, and spectral variance. The exposure sequence and fusion weights are then constructed as Laplacian pyramids and Gaussian pyramids, respectively, to obtain a weighted fused exposure sequence. In the second stage, the previously identified shadow regions are smoothly reintegrated into the original image using edge fusion based on the p-Laplacian operator. To further validate the effectiveness and spectral fidelity of our method, we introduce a new hyperspectral image dataset. Experimental results on both the public and the proposed datasets demonstrate that our method surpasses other mainstream shadow compensation methods.

1. Introduction

Hyperspectral imaging is a sophisticated imaging technology [1] that collects information from hundreds of contiguous spectral bands across the spectrum, including ultraviolet, visible, and infrared light. Each pixel in hyperspectral images contains complete spectral information, essentially providing a unique spectral curve for materials present in the scene [2]. The ability of hyperspectral imaging to provide detailed spectral characteristics of different materials enables its wide application in fields such as environmental monitoring [3], agriculture [4], mineralogy [5], and surveillance [6]. Shadows in hyperspectral images significantly reduce the spectral intensity of materials and change their spectral characteristics [7]. The high spatial resolution of hyperspectral images leads to complex material compositions and a blurred shadow boundary [8]. To address these issues, current shadow processing methods generally include two stages: accurate shadow detection and effective shadow compensation.
Hyperspectral shadow detection requires accurate identification of shadow areas. Fredembach et al. [9] distinguished between shadowed and non-shadowed regions by analyzing the darkness in visible and near-infrared (NIR) spectra. Richter et al. [10] used spectral data from NIR and short-wave infrared bands, applying covariance matrices and matched filters to distinguish shadowed from non-shadowed areas. This approach involves generating a shadow function and identifying core shadow regions through histogram thresholding. Yet, the limitation of these approaches often lies in the incorrect identification of environmental elements [11], such as confusing sunlit dark surfaces with shadows or mistaking shadowed bright areas for sunlit ones. Furthermore, because the shadow component decreases gradually in the transition areas between shadowed and non-shadowed surfaces, it is not accurate to simply classify them as shadowed or non-shadowed. Zhang et al. [12] utilized the Inner–Outer Outline Profile Line (IOOPL) method, which extends analysis both inward and outward from shadow edges, aiding in the precise delineation of shadow boundaries, particularly in transitional areas. Wang et al. [13] effectively handled the transition between shadowed and non-shadowed areas in high-resolution remote sensing images using a matting method. By minimizing an energy function that exploits relationships between local pixels, this method provides the shadow probability for each pixel, ensuring smooth transitions at shadow edges. However, even if the shadow edges are meticulously processed pixel by pixel, existing methods still cannot handle transition areas well [14].
Hyperspectral shadow compensation aims to address the spectral attenuation and distortion caused by shadows and to restore the spectral characteristics of materials under well-lit conditions. Zhao et al. [15] utilized nonlinear unmixing to resolve the nonlinear spectral shifts induced by shadows. They achieved this by extracting spectra from both shadowed and non-shadowed areas and using a nonlinear model to evaluate pixel abundance, then reconstructing pixels in shadowed regions. Zhao et al. [16] developed a network based on the CycleGAN (Cycle-Consistent Generative Adversarial Network), named SC-CycleGAN, eliminating the need for paired training data and shadow detection. The network, comprising two generators and two discriminators, learns to map between shadowed and non-shadowed domains, effectively compensating shadows while preserving the integrity of non-shadowed areas. Although these methods restore the spectra of materials, they overlook the spatial details of the surface. Duan et al. [8] compensated for shadows by adjusting the spectral intensity of the image, thereby maintaining consistency in spatial and visual details before and after processing. However, a single adjustment parameter cannot solve the problem of nonlinear spectral attenuation caused by shadows. The limitations of hyperspectral shadow compensation methods are that model-based methods require assumptions about the interaction between ground objects and spectra, while generative deep learning methods are not suitable for hyperspectral data with high spatial resolution and cannot maintain spatial details well [17,18]. Moreover, current methods cannot effectively address the nonlinear spectral attenuation caused by shadows and lack consideration of the transition regions at the edges of shadows.
In this work, we propose an adaptive shadow compensation method in hyperspectral images via multi-exposure fusion and edge fusion. Shadow detection is performed by obtaining shadow feature maps through color space conversion and applying adaptive threshold segmentation to identify shadow regions. In the first stage of shadow compensation, because shadow intensity varies spatially, we adaptively compute exposure coefficients by contrasting spectral reflectance under shadowed and non-shadowed conditions. Fusion weights are derived from single-channel exposure, contrast, and inter-channel spectral variance, where the inter-channel spectral variance effectively reduces the nonlinear attenuation caused by shadows. These weights drive the merging of Laplacian and Gaussian pyramids, which are constructed from the exposure sequences and fusion weights, respectively. In the second stage, smooth transitions between shadowed and non-shadowed areas are achieved by using edge fusion based on the p-Laplacian operator. Specifically, the p-Laplacian operator is obtained by calculating the gradient of the image, and pixel-wise fusion is performed based on the confidence of edge pixels. This resolves the edge spectral distortions caused by variations in shadow components at the boundary. Additionally, we introduce a hyperspectral image dataset to evaluate spectral fidelity after shadow compensation and validate our method using the new dataset. In summary, the primary contributions of this work can be summarized as follows:
  • We introduce a shadow compensation method for hyperspectral images utilizing multi-exposure fusion. This method adaptively computes exposure coefficients and effectively merges exposure sequences by employing spectral variance as the fusion weight, thus mitigating nonlinear attenuation in shadow regions.
  • We propose an edge fusion method using the p-Laplacian operator to achieve smooth transitions and seamless merging between shadowed and non-shadowed areas.
  • To address the issue that existing datasets cannot evaluate spectral fidelity after shadow compensation, we develop a hyperspectral image dataset with a uniform background to validate different methods using spectral similarity metrics.
  • Experimental results from both the public and the proposed datasets demonstrate that our method can effectively compensate shadows, improving classification performance in hyperspectral images.

2. Related Work

In recent decades, shadow detection and compensation techniques in hyperspectral images have been widely researched. Shadow detection methods are categorized into property-based, model-based, and machine learning methods. Shadow compensation mainly employs methods based on physical models and deep learning.

2.1. Shadow Detection

Property-based methods do not require any prior knowledge and directly extract features that can recognize shadows from image data. Arévalo et al. [19] utilized a region-growing process in a specific color space and assessed pixel saturation, intensity, and edge gradients to detect shadows. Tian et al. [20] developed a shadow detection method using the Tricolor Attenuation Model (TAM) for single outdoor images. The method employs Planck’s blackbody irradiance theory to estimate the spectral power distributions of daylight and skylight, allowing shadows to be detected without prior knowledge. Huang et al. [21] introduced a model that identifies shadows based on their high hue values compared to those of non-shadowed areas. This method utilizes a thresholding strategy within the hue, saturation, and intensity (HSI) color space to accurately differentiate shadowed regions from their surroundings. Similarly, Tsai et al. [22] and Zhang et al. [12] provided simple and effective approaches for extracting shadow features by utilizing shadow attributes in invariant color spaces.
Model-based methods depend on prior knowledge, such as atmospheric lighting conditions and radiation, to simulate the physical interactions between light and the Earth’s surface and atmosphere. Tolt et al. [23] used a four-step shadow detection process, which included using the Digital Surface Model (DSM) for initial shadow estimation, training a supervised classifier to identify shadow regions, using the support vector machine (SVM) for shadow detection, and refining the classification results through post-processing. Li et al. [24] introduced a shadow detection approach that merges photogrammetry with image analysis, using shadow simulation via DSM and sun position, alongside ray tracing and histogram-based segmentation to accurately identify shadows.
Machine learning includes unsupervised learning to discover data patterns and supervised learning using labeled samples for prediction. Martel-Brisson et al. [25] developed a dynamic shadow detection method using Gaussian Mixture Models (GMM) for identification. This method utilizes color space conversion for shadow differentiation and minimizes segmentation errors and false positives. Wu et al. [26] used a Bayesian framework to extract shadows from a single image. By using Poisson’s equation and Bayesian optimization, this method accurately distinguishes between shadowed and non-shadowed regions.
In summary, the features extracted by property-based methods are insufficient to characterize shadows. Model-based methods require assumptions about the interaction between light and matter, combined with prior knowledge [27] to identify shadows, which does not apply to complex scenes [11,28]. Machine learning-based methods require a large amount of data, while hyperspectral data collection is difficult, and labeled data are limited.

2.2. Shadow Compensation

Physical model-based shadow compensation methods rely on radiometric measurements and geometric information. These methods compensate for the spectral quality of shadows by analyzing the basic principles of shadow formation. Liu et al. [29] used spectral decomposition to compensate for shadows. This method optimizes the linear mixture model to better distinguish and compensate for shadow regions. Zhang et al. [30] proposed a method to compensate for shadows in hyperspectral images through nonlinear spectral decomposition, which distinguishes between non-shadowed and shadowed endmembers and uses weighted non-shadowed spectra to compensate for shadow spectra. Yamazaki et al. [31] developed a shadow correction method that adjusts radiance values based on variations in sunlight strength and wavelength. This method calculates the radiance ratio between non-shadowed and shadowed areas, applying linear regression to compensate for the radiance of shadowed pixels across different spectral bands. Finlayson et al. [32] introduced a method for shadow compensation, progressing from a 1-D illuminant-invariant representation to a full-color 3-D image. This process includes shadow edge recognition and repair, effectively compensating for shadows while preserving texture details.
Deep learning methods extract shadow features from large datasets, and the trained model effectively preserves the spectral features of materials. Windrim et al. [17] developed a method to generate shadow-invariant hyperspectral image features using deep learning and physics-based illumination modeling, eliminating the need for labeled data or extra sensors. Zhao et al. [16] introduced an unsupervised method for shadow compensation in hyperspectral images, effectively transforming shadowed regions to non-shadowed areas without the need for paired samples or prior shadow detection.
Most of the existing hyperspectral shadow compensation methods are based on physical models [33]. This is because deep learning-based methods require a large amount of training data [34,35], and hyperspectral datasets are generally collected in a single pass, without shadow-free reference data, making them difficult to train. The methods of Friman et al. [36] and Uezato et al. [37] inspire us to compensate for shadow spectra while maintaining the original ground material details when constructing physical models.

3. Method

3.1. Overall Architecture

The proposed adaptive shadow compensation method, involving multi-exposure fusion and edge fusion in hyperspectral images and shown in Figure 1, consists of shadow detection and two-stage shadow compensation. The process of shadow detection is depicted in Figure 1a. First, the RGB channels are extracted from the hyperspectral image. Next, a conversion to the HSI color space is carried out, and the differences in hue and intensity values are used to obtain a shadow feature map. Finally, the shadow mask is obtained adaptively using the Otsu threshold segmentation method. The spectral compensation of shadow areas using the multi-exposure fusion method is depicted in Figure 1b; it uses the shadow detection result combined with spectral intensity to identify the direction of shadow component attenuation, which determines the coefficients used to generate an exposure sequence. For each exposure image, the fusion weight is composed of the exposure, contrast, and spectral variance. The fusion weights are constructed into a Gaussian pyramid to perform weighted averaging on the Laplacian pyramid of the exposure images, and the result is combined with the shadow detection result to obtain the shadow-compensated image. The edge fusion method is depicted in Figure 1c. The shadow edge mask is generated based on the detection result. By iteratively applying Laplacian filters, edge pixels are continuously selected for processing. The obtained edge pixels are processed according to their priority, which is composed of a confidence item and a data item, where the data item refers to the edge feature map obtained by applying the p-Laplacian operator to the original image. To process each pixel, we select the best-matching pixel from a fixed-size patch centered on that pixel to perform spectral correction. After every pixel in the shadow edge mask has been processed, the final shadow compensation image is obtained.

3.2. Shadow Detection

Previous research [22] has demonstrated that shadowed regions in hyperspectral images often have characteristics distinct from non-shadowed areas, such as reduced intensity and increased hue values. In this work, we utilize the RGB channels selected from the original hyperspectral image and apply an existing color space transformation method [38] for shadow region identification. This process involves extracting the RGB channels and converting them into the HSI color space. Subsequently, a shadow feature map is generated
$$F(x,y) = \frac{H(x,y)}{I(x,y)} \qquad (1)$$
where $F(x,y)$ represents the shadow feature for pixel $(x,y)$ in the HSI color space, and $H(x,y)$ and $I(x,y)$ represent the hue and intensity values of the pixel. Subsequently, a binary detection map is derived by applying a threshold to the feature image
$$A(x,y) = \begin{cases} 1, & \text{if } F(x,y) > \text{threshold} \\ 0, & \text{otherwise} \end{cases} \qquad (2)$$
where the threshold is computed by Otsu’s algorithm [39]. Because threshold segmentation leaves isolated pixels, morphological operations are applied to remove them, yielding the final shadow detection map $A$.
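To make the detection step concrete, a minimal Python sketch of Equations (1) and (2) is given below; the RGB band indices, the use of OpenCV’s HSV conversion in place of a true HSI transform, and the morphological parameters are illustrative assumptions rather than settings from this work.

```python
import numpy as np
import cv2
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects, binary_closing, disk

def detect_shadows(hsi_cube, rgb_bands=(29, 19, 9)):
    """Shadow detection via a hue/intensity feature map and Otsu thresholding.

    hsi_cube  : (H, W, B) hyperspectral cube scaled to [0, 1]
    rgb_bands : band indices used as an RGB composite (assumed values)
    """
    rgb = hsi_cube[:, :, rgb_bands].astype(np.float32)
    # OpenCV's HSV conversion is used here as a stand-in for the HSI transform.
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)
    hue = hsv[:, :, 0] / 360.0           # hue normalized to [0, 1]
    intensity = rgb.mean(axis=2)         # intensity as the mean of R, G, B

    feature = hue / (intensity + 1e-6)   # Equation (1): high hue, low intensity -> shadow
    mask = feature > threshold_otsu(feature)   # Equation (2)

    # Morphological cleanup of isolated pixels (structuring-element size assumed).
    mask = binary_closing(mask, disk(2))
    mask = remove_small_objects(mask, min_size=20)
    return mask.astype(np.uint8)
```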

3.3. Shadow Compensation

3.3.1. Multi-Exposure Fusion

Based on the shadow detection result, the shadowed and non-shadowed areas can be represented as $R_s$ and $R_{ns}$, respectively. By calculating the average spectral values of the two areas, we obtain the spectral reflectance ratio $\rho$, which serves as a basic parameter for exposure, representing the average spectral reflectance difference between non-shadowed and shadowed areas. Because the shadow component decreases gradually at the edges of the shadowed area, different exposure coefficients are generated adaptively when constructing the exposure sequence by considering the spatial variation of the shadow component. By performing $k$ erosion operations on $R_s$, the average spectrum of each changed region is represented as $\tau_i$, $i = 1, \ldots, k$, and thus we obtain the exposure reference image $\gamma$
$$\gamma_i = \rho\,\tau_i, \quad i = 1, \ldots, k \qquad (3)$$
The final exposure sequence is created by multiplying the elements in the exposure base map $\gamma$ with the corresponding pixels in the original hyperspectral image $I$. The exposure sequence at stage $k$ is represented as
$$I_k = \gamma_k \cdot I \qquad (4)$$
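The snippet below illustrates one possible reading of Equations (3) and (4): the shadow mask is eroded k times, and each peeled ring contributes an exposure coefficient derived from the reflectance ratio and the ring’s mean spectrum. Treating each coefficient as a scalar and the default erosion kernel are assumptions made for the sketch.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def build_exposure_sequence(hsi_cube, shadow_mask, k=3):
    """Sketch of Equations (3) and (4): k exposure images whose coefficients
    follow the spatial attenuation of the shadow component.

    hsi_cube    : (H, W, B) hyperspectral cube
    shadow_mask : boolean shadow detection map A
    k           : number of erosion stages (dataset-dependent, e.g. 3 or 4)
    """
    non_shadow = hsi_cube[~shadow_mask].mean()
    shadow = hsi_cube[shadow_mask].mean()
    rho = non_shadow / (shadow + 1e-6)        # average reflectance ratio

    exposures = []
    current = shadow_mask.copy()
    for _ in range(k):
        eroded = binary_erosion(current)
        ring = current & ~eroded              # region removed by this erosion
        if not ring.any():
            break
        tau = hsi_cube[ring].mean() / (non_shadow + 1e-6)  # normalized ring spectrum
        gamma = rho * tau                     # Equation (3), scalar coefficient here
        exposures.append(gamma * hsi_cube)    # Equation (4)
        current = eroded
    return exposures
```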
After obtaining the exposure sequence $I_k$, the shadowed areas can be compensated by weighting these sequences. The weights are composed of the exposure, contrast, and spectral variance of each exposure image. The exposure $E_k(x,y)$ is calculated by applying a Gaussian function to each spectral band
$$E_k(x,y) = \exp\!\left(-\frac{(I_k(x,y) - \mu_0)^2}{2\sigma_0^2}\right) \qquad (5)$$
where $(x,y)$ denotes the pixel location, $\mu_0$ is 0.5, and $\sigma_0$ is 0.2. The contrast $C_k(x,y)$ is expressed as the absolute value of the response of a Laplacian filter applied to each spectral band
$$C_k(x,y) = |LF_k(x,y)| \qquad (6)$$
where $LF_k(x,y)$ is the result of applying the Laplacian filter to the image $I_k$ at the pixel coordinates $(x,y)$. The spectral variance $V_k(x,y)$ is used to quantify the spectral intensity variation of each pixel
$$V_k(x,y) = \frac{1}{n}\sum_{i=1}^{n}\bigl(I_{k,b_i}(x,y) - \mu\bigr)^2 \qquad (7)$$
where $I_{k,b_i}(x,y)$ represents the value at pixel coordinates $(x,y)$ for the $i$-th band, $n$ is the total number of bands considered, and $\mu$ is the mean spectral value of the pixel over these bands. Considering the correlation between bands, we assign higher weights to bands with large variations in the spectral dimension based on the spectral variance. The final weights are calculated as
$$W_k = C_k \cdot E_k \cdot V_k \qquad (8)$$
After normalization, $\hat{W}_k$ is obtained
$$\hat{W}_k(x,y) = \left[\sum_{k'} W_{k'}(x,y)\right]^{-1} W_k(x,y) \qquad (9)$$
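A minimal NumPy sketch of the weight terms in Equations (5)–(9) might look as follows; the Laplacian kernel, the collapse of the per-band exposure and contrast terms into a single map by averaging, and the stabilizing epsilon are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=np.float32)

def fusion_weights(exposure_seq, mu0=0.5, sigma0=0.2, eps=1e-12):
    """Per-pixel weights combining exposure, contrast, and spectral variance."""
    weights = []
    for I_k in exposure_seq:                      # each I_k has shape (H, W, B)
        # Equation (5): well-exposedness per band, then averaged across bands.
        E = np.exp(-(I_k - mu0) ** 2 / (2 * sigma0 ** 2)).mean(axis=2)
        # Equation (6): absolute Laplacian response as contrast, averaged across bands.
        C = np.mean([np.abs(convolve(I_k[:, :, b], LAPLACIAN))
                     for b in range(I_k.shape[2])], axis=0)
        # Equation (7): spectral variance across bands.
        V = I_k.var(axis=2)
        weights.append(C * E * V)                 # Equation (8)
    W = np.stack(weights, axis=0)
    return W / (W.sum(axis=0, keepdims=True) + eps)   # Equation (9)
```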
Inspired by the work of Mertens et al. [40], we utilize an algorithm that combines two key components: a Laplacian pyramid constructed from the exposure sequence $I_k$ and a Gaussian pyramid constructed from the weight map $\hat{W}_k$. By integrating these elements, we generate a weighted average exposure image within the framework of a Laplacian pyramid. The formula is
$$L\{I_{\text{Exposure}}(x,y)\}^{l} = \sum_{k} G\{\hat{W}_k(x,y)\}^{l} \cdot L\{I_k(x,y)\}^{l} \qquad (10)$$
Collapsing the Laplacian pyramid $L\{I_{\text{Exposure}}\}$ yields the shadow-compensated image $I_{\text{Exposure}}$. Based on the binary shadow detection map $A$ obtained in Section 3.2, the compensated shadow areas are merged back into the original image $I$
$$I_{\text{Output1}} = A \cdot I_{\text{Exposure}} + (1 - A) \cdot I \qquad (11)$$
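The pyramid blending of Equation (10) and the compositing of Equation (11) could be sketched as follows; the pyramid depth and the use of simple Gaussian smoothing with bilinear resampling for the pyramid construction are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def _down(img):
    # Smooth, then subsample by 2 in the spatial dimensions only.
    return gaussian_filter(img, sigma=(1, 1, 0))[::2, ::2, :]

def _up(img, shape):
    factors = (shape[0] / img.shape[0], shape[1] / img.shape[1], 1)
    return zoom(img, factors, order=1)

def fuse_pyramids(exposure_seq, weights, levels=4):
    """Weighted Laplacian-pyramid fusion of the exposure sequence, Equation (10)."""
    fused = None
    for I_k, W_k in zip(exposure_seq, weights):
        W_k = W_k[..., None]                      # broadcast the weight map over bands
        gauss_I, gauss_W = [I_k.astype(np.float32)], [W_k.astype(np.float32)]
        for _ in range(levels - 1):
            gauss_I.append(_down(gauss_I[-1]))
            gauss_W.append(_down(gauss_W[-1]))
        lap_I = [gauss_I[l] - _up(gauss_I[l + 1], gauss_I[l].shape)
                 for l in range(levels - 1)] + [gauss_I[-1]]
        terms = [gw * li for gw, li in zip(gauss_W, lap_I)]
        fused = terms if fused is None else [f + t for f, t in zip(fused, terms)]

    # Collapse the fused Laplacian pyramid back into the compensated image.
    out = fused[-1]
    for l in range(levels - 2, -1, -1):
        out = _up(out, fused[l].shape) + fused[l]
    return out

# Equation (11): composite with the binary shadow mask A of shape (H, W).
# I_output1 = A[..., None] * I_exposure + (1 - A[..., None]) * I
```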

3.3.2. Edge Fusion

Directly pasting the compensated shadow areas back into the original image can lead to visible seams at the boundary. Additionally, due to the attenuation of the shadow component at the edges, the binary shadow detection map cannot fully encompass areas with fewer shadow components. To address this issue, we propose an edge fusion method based on the p-Laplacian operator, ensuring a smooth transition between shadowed and non-shadowed areas and maintaining consistency in spectral characteristics.
To tackle the issue of transitions at the shadow boundary, we use a preprocessing step, which applies a Gaussian blur to the shadow mask $A$:
$$A_{\text{blurred}} = G_{\sigma} * A \qquad (12)$$
where $G_{\sigma}$ represents a Gaussian filter with a standard deviation of $\sigma$, and $*$ denotes the convolution operation. The input image $I_{\text{Input}}$ for the edge fusion method is
$$I_{\text{Input}} = A_{\text{blurred}} \cdot I_{\text{Exposure}} + (1 - A_{\text{blurred}}) \cdot I \qquad (13)$$
The p-Laplacian operator is constructed from the first-order gradients $U_x, U_y$ and the second-order gradients $U_{xx}, U_{xy}, U_{yx}, U_{yy}$ of the image.
$$U_{zz} = \frac{U_y^2 \cdot U_{xx} - 2 \cdot U_x \cdot U_y \cdot U_{xy} + U_x^2 \cdot U_{yy}}{U_x^2 + U_y^2} \qquad (14)$$
$$U_{nn} = \frac{U_x^2 \cdot U_{xx} + 2 \cdot U_x \cdot U_y \cdot U_{xy} + U_y^2 \cdot U_{yy}}{U_x^2 + U_y^2} \qquad (15)$$
The p-Laplacian operator is expressed as follows
$$PL_{op} = (U_x^2 + U_y^2)^{0.5} \cdot (U_{zz} + 0.5 \cdot U_{nn}) \qquad (16)$$
For the input image $I_{\text{Input}}$, the p-Laplacian operator $PL_{op}$ is applied channel-wise, resulting in the edge feature map. Areas with high values in the edge feature map indicate rapid changes in spectral values, probably representing regions in which the shadow boundary is not coherently fused. To identify areas requiring edge fusion, we perform dilation and erosion operations on the shadow mask $A$ to obtain a pixel region $A_p$ that covers the shadow edges. Subsequently, we execute fusion operations in this area.
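One way to compute the edge feature map of Equations (14)–(16) is sketched below; np.gradient serves as a simple finite-difference stand-in, and averaging the absolute channel-wise responses into a single map is an assumption.

```python
import numpy as np

def p_laplacian_feature(band, eps=1e-8):
    """Equations (14)-(16) for a single band (2D array)."""
    Uy, Ux = np.gradient(band)           # first-order gradients (row = y, column = x)
    Uyy, Uyx = np.gradient(Uy)           # second-order gradients
    Uxy, Uxx = np.gradient(Ux)

    denom = Ux**2 + Uy**2 + eps
    Uzz = (Uy**2 * Uxx - 2 * Ux * Uy * Uxy + Ux**2 * Uyy) / denom   # Equation (14)
    Unn = (Ux**2 * Uxx + 2 * Ux * Uy * Uxy + Uy**2 * Uyy) / denom   # Equation (15)
    return (Ux**2 + Uy**2) ** 0.5 * (Uzz + 0.5 * Unn)               # Equation (16)

def edge_feature_map(hsi_cube):
    """Apply the operator channel-wise and average into one feature map (assumption)."""
    return np.mean([np.abs(p_laplacian_feature(hsi_cube[:, :, b]))
                    for b in range(hsi_cube.shape[2])], axis=0)
```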
First, we initialize the pixel confidence, setting the confidence of pixels to be processed to 1 and that of pixels that do not require processing to 0. The initial expression of the confidence term $Conf$ is
$$Conf(x,y) = \begin{cases} 1, & \text{if } (x,y) \in A_p \\ 0, & \text{otherwise} \end{cases} \qquad (17)$$
The fusion operation starts from the outer edge pixels and gradually progresses towards the center of the area. By applying a 3 × 3 Laplacian filter to $A_p$, pixels near the edge are identified, and they are selected for fusion in order of priority. The priority of a pixel is composed of a data item $D$ and a confidence item $Conf$, where the data item $D$ is the value of the corresponding pixel in the edge feature map generated by the p-Laplacian operator. The expression is as follows
$$Priority(x,y) = Conf(x,y) \cdot D(x,y) \qquad (18)$$
After identifying the pixel with the highest priority, the best-matching pixel is chosen from a 9 × 9 patch centered around it, based on the input image $I_{\text{Input}}$. This optimal match takes into account spectral differences and spatial distance when replacing the highest-priority pixel. After updating the pixel, the confidence term $Conf$ and the data term $D$ must also be updated, and the pixel is no longer considered a candidate. To update the confidence $Conf$ of a given pixel $(p_x, p_y)$, we take a 9 × 9 patch centered on this pixel and calculate the average confidence of certain pixels in this patch as the new confidence for $(p_x, p_y)$. Only pixels that do not require processing or have already been processed are used in this calculation. The formula is as follows
$$Conf(p_x, p_y) = \frac{\sum_{(x,y) \in Patch(p_x, p_y)} Conf(x,y)}{N_p} \qquad (19)$$
where $Patch(p_x, p_y)$ represents the set of pixels within the 9 × 9 patch centered on pixel $(p_x, p_y)$ that do not require processing or have already been processed, and $N_p$ denotes the number of pixels within this set. The update strategy for the data term $D$ is to directly use the data term at the matching pixel location.
The iterative process sequentially marks the pixels in the $A_p$ region, producing the edge fusion result $I_{\text{Output2}}$. The procedure of the edge fusion algorithm is provided in pseudocode in Algorithm 1.
Algorithm 1 Edge fusion algorithm
Require: Hyperspectral image $I_{\text{Input}}$, area to be processed $A_p$.
Ensure: Edge fusion image $I_{\text{Output2}}$.
 1: Compute the $I_{\text{Input}}$ gradients $U_x, U_y, U_{xx}, U_{xy}, U_{yx}, U_{yy}$.
 2: Compute the p-Laplacian operator:
 3:    Compute $U_{zz}$ and $U_{nn}$ using the gradients, as shown in Equations (14) and (15).
 4:    Compute $PL_{op}$ from $U_{zz}$ and $U_{nn}$, as shown in Equation (16).
 5: Initialize the confidence ($Conf$) and data ($D$) terms:
 6:    Set $Conf$ to 1 in $A_p$ and 0 elsewhere, as shown in Equation (17).
 7:    Set $D$ as the output of applying $PL_{op}$ to $I_{\text{Input}}$.
 8: Edge fusion process:
 9: while $A_p$ contains pixels to process do
10:    Identify boundary pixels of $A_p$ by applying a 3 × 3 Laplacian filter.
11:    while boundary pixels remain unprocessed do
12:       Select the highest-priority pixel using $Conf$ and $D$, as shown in Equation (18).
13:       Select the best-matching pixel within a 9 × 9 patch centered on the highest-priority pixel.
14:       Update $A_p$, $Conf$, $D$, $I_{\text{Input}}$.
15:    end while
16: end while
17: Assign the processed $I_{\text{Input}}$ to $I_{\text{Output2}}$.
18: Output: $I_{\text{Output2}}$.
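As a rough illustration of the priority-driven loop in Algorithm 1 and Equations (17)–(19), a heavily condensed Python sketch is given below; the boundary test, the cost weighting in the patch search, and the stopping logic are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import laplace

def edge_fusion(image, A_p, data_term, patch=9):
    """Simplified priority-driven edge fusion; image is an (H, W, B) cube,
    A_p a boolean mask of edge pixels, data_term the p-Laplacian feature map."""
    conf = A_p.astype(np.float32)          # Equation (17): 1 inside A_p, 0 elsewhere
    remaining = A_p.copy()
    half = patch // 2
    H, W, _ = image.shape

    while remaining.any():
        # Boundary pixels of the remaining region (nonzero Laplacian response).
        boundary = (np.abs(laplace(remaining.astype(np.float32))) > 0) & remaining
        if not boundary.any():
            boundary = remaining
        # Equation (18): pick the boundary pixel with the highest priority.
        priority = np.where(boundary, conf * data_term, -np.inf)
        py, px = np.unravel_index(np.argmax(priority), priority.shape)

        # Search the surrounding patch for the best match among known pixels,
        # trading off spectral difference and spatial distance (weighting assumed).
        y0, y1 = max(py - half, 0), min(py + half + 1, H)
        x0, x1 = max(px - half, 0), min(px + half + 1, W)
        best, best_cost = None, np.inf
        for y in range(y0, y1):
            for x in range(x0, x1):
                if remaining[y, x]:
                    continue
                cost = (np.linalg.norm(image[y, x] - image[py, px])
                        + 0.1 * np.hypot(y - py, x - px))
                if cost < best_cost:
                    best, best_cost = (y, x), cost
        if best is not None:
            image[py, px] = image[best]
            data_term[py, px] = data_term[best]   # data term copied from the match

        # Equation (19): update confidence from already-known pixels in the patch.
        known = ~remaining[y0:y1, x0:x1]
        if known.any():
            conf[py, px] = conf[y0:y1, x0:x1][known].mean()
        remaining[py, px] = False
    return image
```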

4. Experiment

In this section, we first introduce the datasets used in our experiments, which include both the dataset we propose and a public dataset, as well as the evaluation metrics employed. We then describe the experimental details, which encompass the comparison methods and the proposed method. Subsequently, we design multiple comparative experiments and utilize downstream classification tasks to validate the performance of different methods using the support vector machine (SVM [41]). Finally, we discuss the limitations of the proposed method and then provide ideas for future work through experimental verification.

4.1. Dataset

4.1.1. Airport Dataset

The Airport dataset used in this work is a real airport scene. The image size of this dataset is 400 × 400 pixels. The data cover a spectral range from 400 to 950 nm, with each of the 63 bands having a spectral resolution of 2.34 nm. The Airport dataset is specifically designed for shadow analysis, which facilitates the evaluation of different shadow compensation methods. Figure 2 shows the false color image of the Airport dataset, depicting the airplane and its shadows cast on the ground.
This dataset has a uniform background, and to some extent, the spectrum of the same material can be considered identical at high spatial resolution. Therefore, this dataset allows for the calculation of spectral similarity values between shadowed and non-shadowed areas, evaluating the spectral fidelity of different shadow compensation methods.

4.1.2. Houston Dataset

The Houston dataset offers a real scene located at the University of Houston campus and its surrounding urban areas, as illustrated in Figure 3. The data were captured using the ITRES-CASI 1500 sensor on June 23, 2012, from 17:37:10 to 17:39:50 UTC. The image size is 349 × 1905 pixels, presenting large areas of shadow due to cloud cover. The dataset has a spectral range from 380 nm to 1050 nm, providing 144 bands and a spectral resolution of up to 4.65 nm. The data were acquired from an average altitude of about 1676 m, ensuring a spatial resolution of 2.5 m. This dataset was made available by the 2013 Data Fusion Contest, organized by the IEEE Geoscience and Remote Sensing Society (GRSS), and includes 15 classes of interest.

4.2. Evaluation Metrics

4.2.1. Classification Evaluation

To evaluate the impact of different shadow compensation results on classification accuracy, the Overall Accuracy (OA), Average Accuracy (AA), and Kappa coefficient are used. OA is the ratio of the number of correctly classified samples to the total number of samples; it is the most direct performance metric, representing the ability to classify correctly. AA is the average of the per-class accuracies. It accounts for the classification accuracy of each category, making it fairer for imbalanced data. The Kappa coefficient measures classification accuracy while accounting for the possibility of agreement occurring by chance. Higher values of OA, AA, and Kappa represent better classification performance in hyperspectral images. By randomly selecting different training samples for 10 repeated experiments, the average values of OA, AA, and Kappa are obtained, providing a detailed assessment of the classification result after shadow compensation.
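For reference, OA, AA, and Kappa can be obtained from a confusion matrix as in the short sketch below (using scikit-learn); the SVM settings shown are illustrative rather than the ones used in the experiments.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def classification_scores(y_true, y_pred):
    """Overall Accuracy, Average Accuracy, and Kappa from predicted labels."""
    cm = confusion_matrix(y_true, y_pred)
    oa = np.trace(cm) / cm.sum()                  # correctly classified / total samples
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))    # mean of per-class accuracies
    kappa = cohen_kappa_score(y_true, y_pred)
    return oa, aa, kappa

# Illustrative usage: train an SVM on non-shadowed samples, test on shadowed ones.
# from sklearn.svm import SVC
# clf = SVC(kernel='rbf').fit(X_train, y_train)
# print(classification_scores(y_test, clf.predict(X_test)))
```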

4.2.2. Spectral Similarity

To quantify the spectral fidelity of shadow compensation methods, three metrics are employed: Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Spectral Angle Mapper (SAM [42]). MAE measures the average absolute deviation between the compensated and the original image pixel values. RMSE computes the standard deviation of the differences between the compensated image and the original image. SAM assesses spectral similarity by computing the angle between the spectral vectors of the compensated image and the original image. For materials in shadowed conditions, we select the same materials under non-shadowed conditions as references to calculate the metrics.
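A minimal sketch of the three metrics, computed between matched compensated and non-shadowed reference spectra, is given below.

```python
import numpy as np

def spectral_similarity(compensated, reference, eps=1e-12):
    """MAE, RMSE, and SAM between two sets of spectra of shape (N, B)."""
    mae = np.mean(np.abs(compensated - reference))
    rmse = np.sqrt(np.mean((compensated - reference) ** 2))
    cos = np.sum(compensated * reference, axis=1) / (
        np.linalg.norm(compensated, axis=1) * np.linalg.norm(reference, axis=1) + eps)
    sam = np.mean(np.arccos(np.clip(cos, -1.0, 1.0)))   # mean spectral angle (radians)
    return mae, rmse, sam
```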

4.3. Implementation Details

In this work, four mainstream shadow compensation methods are used for comparison with the proposed method: MSR [43] (Multi-Scale Retinex), SC-CycleGAN [16] (Shadow Compensation via Cycle-Consistent Adversarial Networks), MF [8] (Multi-exposure Fusion), and ISR [44] (Interactive Shadow Removal). The MSR processes images with Gaussian blur and then calculates the logarithmic difference between the original image and the Gaussian-blurred image to obtain the Retinex image. The MSR is achieved by applying multiple σ values to a single image, where the σ sequences are [10, 15, 200] for the Airport dataset and [15, 80, 200] for the Houston dataset. The SC-CycleGAN model uses default parameters such as the learning rate and batch size. The MF employs the exposure settings provided in the original publication, where the exposure coefficients for the Airport dataset and the Houston dataset are [0.8, 1, 1.2] and [0.5, 1, 1.5], respectively. The ISR distinguishes between shadowed and non-shadowed areas through manual annotation, requiring only the selection of a portion of pixels from both shadowed and non-shadowed areas during implementation, without strict requirements.
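As a point of reference for the MSR baseline described above, a minimal single-band sketch is given below; the equal weighting of the scales is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(band, sigmas=(10, 15, 200), eps=1e-6):
    """Multi-Scale Retinex for one band: log difference against Gaussian-blurred copies."""
    band = band.astype(np.float64) + eps
    return np.mean([np.log(band) - np.log(gaussian_filter(band, s) + eps)
                    for s in sigmas], axis=0)
```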
In our method, after threshold segmentation of the shadow feature map, morphological operations are required to address isolated pixels and small pixel regions. Specifically, this is accomplished through basic operations such as dilation, erosion, and removal of small connected regions. During the exposure fusion stage, the exposure coefficients are derived from the degree of attenuation at shadow edges. The number of shadow attenuation stages k is 3 for the Airport dataset and 4 for the Houston dataset. In the edge fusion stage, the area $A_p$ is obtained by expanding p pixels inward and outward from the shadow edges, with p being 3 for both the Airport and the Houston datasets.

4.4. Results Discussion

The shadow detection results for the Airport and Houston datasets are shown in Figure 4. By comparing the shadow detection results with the false-color images, it can be observed that the shadow detection method is effective in identifying shadowed areas.
The results of different shadow compensation methods on the Airport dataset are shown in Figure 5. The MSR globally increases spectral intensity to minimize the spectral intensity difference between shadowed and non-shadowed areas, but this compensation causes overexposure, distorts spectral features, and blurs the details of ground objects. The SC-CycleGAN demonstrates strong spectral compensation in shadowed areas. However, it is extremely reliant on the distribution of training data, which leads to incorrect reconstruction of spectra for small-sample materials and blurred details. The MF relies entirely on exposure to compensate for shadowed areas and cannot solve the problem of nonlinear spectral attenuation. Moreover, due to the gradual reduction of shadow components near non-shadowed areas, this method produces an unnatural transition at the shadow boundary. ISR introduces spectral distortion in the compensated images, which is caused by the complexity of material composition in high-resolution scenes. In contrast, our method exhibits superior performance in both spectral similarity and visual realism, avoiding issues such as overexposure, blurred details, and unnatural transitions.
Table 1 presents the spectral similarity of different compensation methods on the Airport dataset using MAE, RMSE, and SAM. The spectral compensation effect of materials in shadowed areas can only be measured based on the spectra of the same material in non-shadowed areas. For this purpose, 20, 50, and 100 pixels are randomly selected from the shadowed and non-shadowed areas to calculate these metrics and their average values. Our method achieves the best values across all metrics, demonstrating its superior performance in shadow compensation on real-world hyperspectral imagery.
Figure 6 displays the result of different shadow compensation methods on the Houston dataset. The MSR increases the overall spectral intensity, to some extent compensating for material details in shadowed areas, thereby diminishing the difference between shadowed and non-shadowed regions. The SC-CycleGAN alters the spectral characteristics of different materials, resulting in unreasonable visual effects. Although MF preserves the original features of ground object details, the compensation area still exhibits a certain degree of spectral distortion overall. The ISR causes significant spectral distortion, which is primarily attributed to the diversity and complexity of materials in the scene. In contrast, our method effectively compensates shadowed areas without introducing spectral distortion, and the overall false-color image shows consistent and reasonable colors. Unlike MF, our fusion method takes into account spectral variance to better capture spectral features. Figure 7 demonstrates a detailed comparison of shadow compensation in different channels of the Houston dataset, and our method demonstrates superior visual results. In channel 108, for example, the distinction between shadowed and non-shadowed areas is nearly imperceptible, indicating that the proposed method effectively maintains spectral continuity and visual consistency in hyperspectral images.
To further explore the spectral fidelity of shadow compensation methods for different materials, we analyze the spectral similarity between the shadow materials processed by different compensation methods and the non-shadow materials in the original image. Figure 8 shows the spectral similarity metrics between the compensated shadow pixels and the non-shadowed pixels in the original image for different materials in the Houston dataset. The computation of the metrics is based on Figure 3b and Figure 4b, where the average spectral value of all pixels of the same material and state represents the spectrum of the material in that state. Except for the slightly higher SAM of our method in Figure 8j compared to the MF method, we achieve the best spectral similarity metrics for all other materials. Compared with other methods, our method shows more stable spectral fidelity across different materials.
Shadow compensation, as an upstream task, aims to improve the quality of hyperspectral images and facilitate the development of higher-level visual analysis of downstream tasks. To demonstrate the spectral fidelity of our method in both non-shadowed and shadowed regions, we validate it using an improved downstream classification task. We randomly select 15% of samples from each category located in the non-shadowed area of the original image to form a training set. The SVM method is utilized to test samples in shadowed and non-shadowed areas separately. Table 2 shows the classification performance of the SVM on the original image and the images processed by MSR, SC-CycleGAN, MF, ISR, and our method, separately, including the classification performance of non-shadowed areas and shadowed areas. The MSR causes severe spectral distortion in both non-shadowed and shadowed areas. SC-CycleGAN, MF, and our method avoid spectral distortion in non-shadowed areas. In shadowed areas, the classification results of the original image are not ideal. Compared with other methods, the shadow categories processed by our method perform the best in terms of classification performance. Other methods performed poorly, especially in the challenging ‘Water’ category, while our method improves the classification performance. Similarly, in the categories of ‘Residential’, ‘Road’, and ‘Railway’, our method significantly improves the classification accuracy in comparison to other methods. Compared with the original image and the suboptimal method, our method improves the OA of the shadowed area by 210.14% and 14%, respectively. Overall, our method can effectively compensate for shadowed areas while ensuring that non-shadowed areas are not distorted. This is precisely because the spectral characteristics of the shadowed materials processed by our method are more similar to those of well-lit materials, which contributes to a better classification performance and proves the effectiveness of our method.
To deeply analyze the reasons for the superiority of our shadow compensation method in solving the nonlinear spectral attenuation problem, we conducted ablation experiments on the fusion weights in the multi-exposure fusion method, as shown in Figure 9. Two pixels located at (1729, 76) and (1738, 68) are selected to represent the same material in shadowed and non-shadowed states, respectively. A portion of the image from column 1557 to column 1905 in the Houston dataset is cropped for display, with the two pixel positions highlighted with a blue dot and a red dot, respectively. As shown in Figure 9a,c,f, without considering spectral variance, the spectral curves of the compensated shadowed areas are only linearly enhanced compared to the original spectra. This enhancement ignores the spectral characteristics of materials at some peaks. In fact, for materials affected by shadows, nonlinear attenuation is more severe in spectral bands with high values and large fluctuations. The experimental results using spectral variance indicate that the spectral curve of the material in the compensated shadowed area more accurately reflects the spectral characteristics of the materials, especially at several peaks with severe spectral attenuation. In summary, our compensation method utilizes the correlation between bands to better address the nonlinear attenuation caused by shadows and restore the spectral characteristics of the material.

4.5. Discussion of Limitations and Future Analysis

It is necessary to analyze the limitations of the proposed method, which can provide ideas for future work. There are two limitations worthy of further discussion. The first point is that the performance of our detection method is affected by the contextual sensitivity of the background. This is because the inter-class variance between the foreground and background affects the effective segmentation of Otsu’s method. Inaccurate shadow detection results lead to subsequent compensation methods being unable to handle missed and false detection pixels. Therefore, it is necessary to improve the adaptability of shadow detection methods to background information. The second point is that when processing hyperspectral data with a large amount of information, it is necessary to improve computational efficiency to promote practical applications. The exposure fusion stage in the compensation method processes the entire image, but in reality, only the shadowed areas need to be processed. Excess data calculations reduce efficiency and require further improvement.
To explore and overcome the limitations of the proposed method, we conduct experimental analysis and improvement with regard to the two points mentioned above. By cropping non-shadowed areas in the original image, we investigate the impact of contextual sensitivity of the background on shadow detection performance. Figure 10 shows the false color images and shadow detection images under different backgrounds based on the Airport and Houston datasets, indicating that the shadow detection results are sensitive to changes in the background. To improve the adaptability of the shadow detection method, we explore Formula (1) in Section 3.2. This formula constructs the shadow feature map, which is the key to detecting shadows. It is improved as follows
$$F = \frac{H + \alpha}{I + \beta} \cdot \delta \qquad (20)$$
where $\alpha$, $\beta$, and $\delta$ are fine-tuning parameters. Figure 11 compares the effect of the fine-tuning parameters on the shadow detection result. We use the same fine-tuning parameters for images in the same dataset, and the detection performance is significantly improved compared to that achieved without fine-tuning parameters. This indicates that using fine-tuning parameters can improve the shadow feature map, thereby reducing the limitation that background contextual sensitivity imposes on the shadow detection method. However, improving adaptability can degrade some detection results. Balancing the adaptability and detection accuracy of the detection method will be a key research direction in the future.
By comparing the running time of our method with those of other methods, and considering the characteristics of our method, we improve its computational efficiency. Because the shadow area is already identified when compensating for shadows in our method, we can improve computational efficiency by compensating only the necessary shadow areas. We optimize the input of the compensation method based on the shadow detection result, cropping the Airport dataset from 400 × 400 to 180 × 185 pixels with (125, 65) as the top-left corner and (305, 250) as the bottom-right corner, and cropping the Houston dataset from 349 × 1905 to 349 × 800 pixels between columns 985 and 1785. Table 3 shows the runtime of the different methods on the two datasets. Because SC-CycleGAN is a training-based deep learning method, its runtime includes the training duration. Our improved method increases computational efficiency by 57.56% and 56.8% for the Airport and Houston datasets, respectively. Compared to MF, the most efficient method before our optimization, efficiency increases by 38.2% and 41.34%, respectively. With only rough optimization of the input data volume, the computational efficiency is greatly improved. A future direction for performance improvement is to fully utilize the detection results, avoid unnecessary resource waste, and improve practical applicability.

5. Conclusions

In this work, we propose an adaptive shadow compensation method for hyperspectral images based on multi-exposure fusion and edge fusion. The method converts images to HSI color space to map shadows based on their unique high hue and low intensity, then uses an adaptive threshold to identify shadow regions. In the compensation phase, we use multi-exposure fusion for overall improvement and merge compensated shadows into the original image through edge fusion, which is guided by shadow detection results. Exposure coefficients for multi-exposure fusion are adaptively derived from the reflectance differences between shadowed and non-shadowed areas. Fusion weights consider the correlation between bands, utilizing spectral variance as the weight to reflect intensity variations within the spectral neighborhood, which to some extent addresses the issue of nonlinear attenuation caused by shadows across different bands. Edge fusion uses a p-Laplacian operator, based on image gradients, to identify pixel transitions near shadow edges. It selects optimal pixels for replacement, guided by spectral similarity and spatial distance, seamlessly merging shadowed and non-shadowed regions. To validate the effectiveness of our method and the spectral fidelity of various approaches, a new hyperspectral image dataset is proposed. The experimental results indicate that our method has higher spectral fidelity. Our method demonstrates competitive results in the improved downstream classification task. Adaptability and computational efficiency are the focus of our future work.

Author Contributions

Y.M. and G.L. proposed the method and performed the experiments. W.H. guided the research and processed the dataset. All authors wrote the article. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 62372284).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The Houston dataset for hyperspectral image shadow compensation is available at: https://hyperspectral.ee.uh.edu/?page_id=459 (accessed on 22 September 2022). The Airport dataset is available at: https://github.com/yan-m1014/Airport-Dataset (accessed on 20 April 2024).

Acknowledgments

Our appreciation goes to Yueming Wang from the Key Laboratory of Space Active Opto-Electronics Technology, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, for providing the Airport dataset utilized in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
NIR	Near-infrared
CycleGAN	Cycle-Consistent Generative Adversarial Network
IOOPL	Inner–Outer Outline Profile Line
SC-CycleGAN	Shadow Compensation via Cycle-Consistent Adversarial Networks
HSI	Hue, Saturation, and Intensity
TAM	Tricolor Attenuation Model
DSM	Digital Surface Model
SVM	Support Vector Machine
GMM	Gaussian Mixture Models
GRSS	IEEE Geoscience and Remote Sensing Society
OA	Overall Accuracy
AA	Average Accuracy
MAE	Mean Absolute Error
RMSE	Root Mean Squared Error
SAM	Spectral Angle Mapper
MSR	Multi-Scale Retinex
MF	Multi-exposure Fusion
ISR	Interactive Shadow Removal

References

  1. ElMasry, G.; Sun, D.W. Principles of hyperspectral imaging technology. In Hyperspectral Imaging for Food Quality Analysis and Control; Elsevier: Amsterdam, The Netherlands, 2010; pp. 3–43. [Google Scholar] [CrossRef]
  2. Landgrebe, D. Hyperspectral image data analysis. IEEE Signal Process. Mag. 2002, 19, 17–28. [Google Scholar] [CrossRef]
  3. Goetz, A.F.; Vane, G.; Solomon, J.E.; Rock, B.N. Imaging spectrometry for earth remote sensing. Science 1985, 228, 1147–1153. [Google Scholar] [CrossRef]
  4. Dale, L.M.; Thewis, A.; Boudry, C.; Rotar, I.; Dardenne, P.; Baeten, V.; Pierna, J.A.F. Hyperspectral imaging applications in agriculture and agro-food product quality and safety control: A review. Appl. Spectrosc. Rev. 2013, 48, 142–159. [Google Scholar] [CrossRef]
  5. Zhang, B.; Wu, D.; Zhang, L.; Jiao, Q.; Li, Q. Application of hyperspectral remote sensing for environment monitoring in mining areas. Environ. Earth Sci. 2012, 65, 649–658. [Google Scholar] [CrossRef]
  6. Yuen, P.W.; Richardson, M. An introduction to hyperspectral imaging and its application for security, surveillance and target acquisition. Imaging Sci. J. 2010, 58, 241–253. [Google Scholar] [CrossRef]
  7. Qiao, X.; Yuan, D.; Li, H. Urban shadow detection and classification using hyperspectral image. J. Indian Soc. Remote Sens. 2017, 45, 945–952. [Google Scholar] [CrossRef]
  8. Duan, P.; Hu, S.; Kang, X.; Li, S. Shadow removal of hyperspectral remote sensing images with multiexposure fusion. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–11. [Google Scholar] [CrossRef]
  9. Fredembach, C.; Süsstrunk, S. Automatic and Accurate Shadow Detection from (Potentially) a Single Image Using Near-Infrared Information. 2010. Available online: https://infoscience.epfl.ch/record/165527 (accessed on 28 October 2023).
  10. Richter, R.; Müller, A. De-shadowing of satellite/airborne imagery. Int. J. Remote Sens. 2005, 26, 3137–3148. [Google Scholar] [CrossRef]
  11. Dare, P.M. Shadow analysis in high-resolution satellite imagery of urban areas. Photogramm. Eng. Remote Sens. 2005, 71, 169–177. [Google Scholar] [CrossRef]
  12. Zhang, H.; Sun, K.; Li, W. Object-oriented shadow detection and removal from urban high-resolution remote sensing images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6972–6982. [Google Scholar] [CrossRef]
  13. Wang, Q.; Yan, L.; Yuan, Q.; Ma, Z. An automatic shadow detection method for VHR remote sensing orthoimagery. Remote Sens. 2017, 9, 469. [Google Scholar] [CrossRef]
  14. Zhou, T.; Fu, H.; Sun, C.; Wang, S. Shadow detection and compensation from remote sensing images under complex urban conditions. Remote Sens. 2021, 13, 699. [Google Scholar] [CrossRef]
  15. Zhao, M.; Chen, J.; Rahardja, S. Hyperspectral shadow removal via nonlinear unmixing. IEEE Geosci. Remote Sens. Lett. 2020, 18, 881–885. [Google Scholar] [CrossRef]
  16. Zhao, M.; Yan, L.; Chen, J. Hyperspectral image shadow compensation via cycle-consistent adversarial networks. Neurocomputing 2021, 450, 61–69. [Google Scholar] [CrossRef]
  17. Windrim, L.; Ramakrishnan, R.; Melkumyan, A.; Murphy, R.J. A physics-based deep learning approach to shadow invariant representations of hyperspectral images. IEEE Trans. Image Process. 2017, 27, 665–677. [Google Scholar] [CrossRef] [PubMed]
  18. Roper, T.; Andrews, M. Shadow modelling and correction techniques in hyperspectral imaging. Electron. Lett. 2013, 49, 458–460. [Google Scholar] [CrossRef]
  19. Arévalo, V.; González, J.; Ambrosio, G. Shadow detection in colour high-resolution satellite images. Int. J. Remote Sens. 2008, 29, 1945–1963. [Google Scholar] [CrossRef]
  20. Tian, J.; Sun, J.; Tang, Y. Tricolor attenuation model for shadow detection. IEEE Trans. Image Process. 2009, 18, 2355–2363. [Google Scholar] [CrossRef]
  21. Huang, J.; Xie, W.; Tang, L. Detection of and compensation for shadows in colored urban aerial images. In Proceedings of the Fifth World Congress on Intelligent Control and Automation (IEEE Cat. No. 04EX788), Hangzhou, China, 15–19 June 2004; IEEE: New York, NY, USA, 2004; Volume 4, pp. 3098–3100. [Google Scholar] [CrossRef]
  22. Tsai, V.J. A comparative study on shadow compensation of color aerial images in invariant color models. IEEE Trans. Geosci. Remote Sens. 2006, 44, 1661–1671. [Google Scholar] [CrossRef]
  23. Tolt, G.; Shimoni, M.; Ahlberg, J. A shadow detection method for remote sensing images using VHR hyperspectral and LIDAR data. In Proceedings of the 2011 IEEE International Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 24–29 July 2011; IEEE: New York, NY, USA, 2011; pp. 4423–4426. [Google Scholar] [CrossRef]
  24. Li, Y.; Gong, P.; Sasagawa, T. Integrated shadow removal based on photogrammetry and image analysis. Int. J. Remote Sens. 2005, 26, 3911–3929. [Google Scholar] [CrossRef]
  25. Martel-Brisson, N.; Zaccarin, A. Moving cast shadow detection from a gaussian mixture shadow model. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; IEEE: New York, NY, USA, 2005; Volume 2, pp. 643–648. [Google Scholar]
  26. Wu, T.P.; Tang, C.K. A bayesian approach for shadow extraction from a single image. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1, Beijing, China, 17–21 October 2005; IEEE: New York, NY, USA, 2005; Volume 1, pp. 480–487. [Google Scholar] [CrossRef]
  27. Zhan, Q.; Shi, W.; Xiao, Y. Quantitative analysis of shadow effects in high-resolution images of urban areas. Int. Arch. Photogramm. Remote Sens. 2005, 36, 1–6. [Google Scholar]
  28. Zhou, W.; Huang, G.; Troy, A.; Cadenasso, M. Object-based land cover classification of shaded areas in high spatial resolution imagery of urban areas: A comparison study. Remote Sens. Environ. 2009, 113, 1769–1777. [Google Scholar] [CrossRef]
  29. Liu, Y.; Bioucas-Dias, J.; Li, J.; Plaza, A. Hyperspectral cloud shadow removal based on linear unmixing. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; IEEE: New York, NY, USA, 2017; pp. 1000–1003. [Google Scholar] [CrossRef]
  30. Zhang, G.; Cerra, D.; Müller, R. Shadow detection and restoration for hyperspectral images based on nonlinear spectral unmixing. Remote Sens. 2020, 12, 3985. [Google Scholar] [CrossRef]
  31. Yamazaki, F.; Liu, W.; Takasaki, M. Characteristics of shadow and removal of its effects for remote sensing imagery. In Proceedings of the 2009 IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009; IEEE: New York, NY, USA, 2009; Volume 4, pp. IV–426. [Google Scholar] [CrossRef]
  32. Finlayson, G.D.; Hordley, S.D.; Lu, C.; Drew, M.S. On the removal of shadows from images. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 28, 59–68. [Google Scholar] [CrossRef] [PubMed]
  33. Hartzell, P.; Glennie, C.; Khan, S. Terrestrial hyperspectral image shadow restoration through lidar fusion. Remote Sens. 2017, 9, 421. [Google Scholar] [CrossRef]
  34. Jia, S.; Jiang, S.; Lin, Z.; Li, N.; Xu, M.; Yu, S. A survey: Deep learning for hyperspectral image classification with few labeled samples. Neurocomputing 2021, 448, 179–204. [Google Scholar] [CrossRef]
  35. He, J.; Yuan, Q.; Li, J.; Xiao, Y.; Liu, D.; Shen, H.; Zhang, L. Spectral super-resolution meets deep learning: Achievements and challenges. Inf. Fusion 2023, 97, 101812. [Google Scholar] [CrossRef]
  36. Friman, O.; Tolt, G.; Ahlberg, J. Illumination and shadow compensation of hyperspectral images using a digital surface model and non-linear least squares estimation. In Proceedings of the Image and Signal Processing for Remote Sensing XVII; SPIE: Bellingham, WA, USA, 2011; Volume 8180, pp. 183–190. [Google Scholar] [CrossRef]
  37. Uezato, T.; Yokoya, N.; He, W. Illumination invariant hyperspectral image unmixing based on a digital surface model. IEEE Trans. Image Process. 2020, 29, 3652–3664. [Google Scholar] [CrossRef] [PubMed]
  38. Kang, X.; Huang, Y.; Li, S.; Lin, H.; Benediktsson, J.A. Extended random walker for shadow detection in very high resolution remote sensing images. IEEE Trans. Geosci. Remote Sens. 2017, 56, 867–876. [Google Scholar] [CrossRef]
  39. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  40. Mertens, T.; Kautz, J.; Van Reeth, F. Exposure fusion: A simple and practical alternative to high dynamic range photography. In Proceedings of the Computer Graphics Forum; Wiley Online Library: Oxford, UK, 2009; Volume 28, pp. 161–171. [Google Scholar] [CrossRef]
  41. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
  42. Kruse, F.A.; Lefkoff, A.; Boardman, J.; Heidebrecht, K.; Shapiro, A.; Barloon, P.; Goetz, A. The spectral image processing system (SIPS)—interactive visualization and analysis of imaging spectrometer data. Remote Sens. Environ. 1993, 44, 145–163. [Google Scholar] [CrossRef]
  43. Lin, H.; Shi, Z. Multi-scale retinex improvement for nighttime image enhancement. Optik 2014, 125, 7143–7148. [Google Scholar] [CrossRef]
  44. Gong, H.; Cosker, D. Interactive removal and ground truth for difficult shadow scenes. JOSA A 2016, 33, 1798–1811. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Overall flowchart of shadow detection and compensation in hyperspectral images. (a) Shadow detection. (b) Shadow compensation stage one: multi-exposure fusion. (c) Shadow compensation stage two: edge fusion.
Figure 2. The Airport dataset.
Figure 3. The Houston dataset. (a) Hyperspectral image. (b) Classification map. (c) Class name.
Figure 4. The shadow detection results. (a) Airport dataset. (b) Houston dataset.
Figure 5. Spectral reflectance comparison before and after shadow compensation on the Airport dataset. (a) Original image; (b–f) results from MSR [43], SC-CycleGAN [16], MF [8], ISR [44], and our method. The red and blue dots mark the selected pixels in the shadowed and non-shadowed areas, respectively. * The spectral values of MSR are divided by 5 so that they fit the same axis range as the other methods.
Figure 6. Shadow compensation results of different methods on the Houston dataset. (a) Original image; (b–f) results from MSR [43], SC-CycleGAN [16], MF [8], ISR [44], and our method.
Figure 7. Shadow compensation results on the Houston dataset, shown as pseudocolor images of channels 36, 72, 108, and 144, for the original image and the results of MSR [43], SC-CycleGAN [16], MF [8], ISR [44], and our method.
Figure 8. Spectral fidelity of different materials processed by different compensation methods on the Houston dataset. (a) Healthy grass. (b) Stressed grass. (c) Trees. (d) Water. (e) Residential. (f) Commercial. (g) Road. (h) Highway. (i) Railway. (j) Parking lot 2. The SAM values are divided by 4 for display purposes. Lower MAE, RMSE, and SAM indicate better performance.
Figure 9. Fusion weight ablation study based on the Houston dataset. (a) Only contrast. (b) Only variance. (c) Only exposure. (d) Contrast and variance. (e) Variance and exposure. (f) Contrast and exposure. (g) Contrast, variance, and exposure. The red and blue dots represent the selected pixels in the non-shadowed and shadowed areas, respectively.
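Figure 9 ablates the three per-pixel quality measures (contrast, spectral variance, and exposure) that drive the fusion weights. As a rough illustration only, in the spirit of exposure fusion [40] rather than a reproduction of the exact formulation in this paper, a per-exposure weight map could be formed as the product of the three measures, as sketched below; the Laplacian contrast term, the Gaussian well-exposedness term with sigma = 0.2, and their multiplicative combination are all assumptions made for the sketch.

```python
import numpy as np
from scipy.ndimage import laplace

def fusion_weight(image: np.ndarray, sigma: float = 0.2) -> np.ndarray:
    """Per-pixel fusion weight for one exposure of a hyperspectral cube.

    image: array of shape (H, W, B) with values scaled to [0, 1].
    Returns an (H, W) map combining contrast, spectral variance, and
    well-exposedness (an illustrative combination, not the paper's formula).
    """
    gray = image.mean(axis=2)                      # band-averaged intensity
    contrast = np.abs(laplace(gray))               # local contrast measure
    variance = image.std(axis=2)                   # spectral variance per pixel
    # Well-exposedness: closeness of band values to mid-intensity 0.5,
    # averaged over bands to avoid underflow when there are many bands.
    exposure = np.exp(-((image - 0.5) ** 2) / (2.0 * sigma ** 2)).mean(axis=2)
    return contrast * variance * exposure + 1e-12  # avoid all-zero weights
```

The weight maps of the whole exposure sequence would then be normalized pixel-wise before the Gaussian/Laplacian pyramid blending outlined in the caption of Figure 1.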
Figure 10. The impact of the background context on shadow detection. Shadow detection results on the Airport dataset for (a) the original image; (b) the image with a minority of the background removed; (c) the image with the majority of the background removed. Shadow detection results on the Houston dataset for (d) the original image; (e) the image with a minority of the background removed; (f) the image with the majority of the background removed.
Figure 11. The impact of fine-tuning the parameters on shadow detection. Changes in the shadow detection result on the Airport dataset for (a) the original image; (b) the image with a minority of the background removed; (c) the image with the majority of the background removed. Changes in the shadow detection result on the Houston dataset for (d) the original image; (e) the image with a minority of the background removed; (f) the image with the majority of the background removed.
Table 1. Spectral similarity of different compensation methods on the Airport dataset.
Metrics        | Original | MSR    | SC-Cyc. | MF     | ISR    | Ours
MAE (20 px)    | 0.0344   | 0.1299 | 0.0114  | 0.0150 | 0.0113 | 0.0082
RMSE (20 px)   | 0.0382   | 0.1383 | 0.0119  | 0.0159 | 0.0131 | 0.0099
SAM (20 px)    | 0.6867   | 0.0856 | 0.0598  | 0.1789 | 0.1655 | 0.0490
MAE (50 px)    | 0.0333   | 0.1362 | 0.0089  | 0.0137 | 0.0103 | 0.0077
RMSE (50 px)   | 0.0369   | 0.1462 | 0.0094  | 0.0146 | 0.0122 | 0.0094
SAM (50 px)    | 0.6702   | 0.0984 | 0.0513  | 0.1725 | 0.1615 | 0.0500
MAE (100 px)   | 0.0346   | 0.1394 | 0.0101  | 0.0140 | 0.0102 | 0.0078
RMSE (100 px)  | 0.0383   | 0.1497 | 0.0106  | 0.0149 | 0.0121 | 0.0095
SAM (100 px)   | 0.6760   | 0.0988 | 0.0550  | 0.1749 | 0.1628 | 0.0511
Lower MAE, RMSE, and SAM mean better performance, with optimal values in bold.
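For reference, the spectral similarity metrics reported in Table 1 (MAE, RMSE, and SAM) follow their standard definitions over pairs of compensated and reference spectra. The sketch below shows one plausible way to compute them, assuming the spectra are stored as NumPy arrays of shape (num_pixels, num_bands); the function and variable names are illustrative and not taken from the authors' code.

```python
import numpy as np

def spectral_similarity(compensated: np.ndarray, reference: np.ndarray):
    """Return (MAE, RMSE, SAM) for spectra arrays of shape (N, B).

    MAE and RMSE are averaged over all pixels and bands; SAM is the mean
    spectral angle (in radians) between corresponding pixel spectra.
    """
    diff = compensated - reference
    mae = float(np.mean(np.abs(diff)))
    rmse = float(np.sqrt(np.mean(diff ** 2)))

    # Spectral Angle Mapper: angle between each pair of spectral vectors.
    dot = np.sum(compensated * reference, axis=1)
    norms = np.linalg.norm(compensated, axis=1) * np.linalg.norm(reference, axis=1)
    cos_angle = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
    sam = float(np.mean(np.arccos(cos_angle)))
    return mae, rmse, sam
```

Clipping the cosine guards against floating-point values falling slightly outside [-1, 1] before the arccosine is taken.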
Table 2. Classification accuracy of different methods in both shadowed and non-shadowed states after training on non-shadowed samples on the Houston dataset.
Class Name      | Accuracies of Non-Shadowed (%)                  | Accuracies of Shadowed (%)
                | Origi. | MSR   | SC-Cyc. | MF    | ISR   | Ours  | Origi. | MSR   | SC-Cyc. | MF    | ISR   | Ours
Healthy grass   | 98.08  | 0.00  | 97.75   | 97.81 | 97.91 | 97.86 | 12.92  | 0.00  | 25.28   | 54.49 | 16.85 | 51.12
Stressed grass  | 98.27  | 0.00  | 98.32   | 98.43 | 98.38 | 98.22 | 18.29  | 0.00  | 32.93   | 54.88 | 47.56 | 75.00
Synthetic grass | 88.46  | 0.00  | 88.49   | 88.79 | 88.46 | 88.03 | -      | -     | -       | -     | -     | -
Trees           | 98.12  | 0.00  | 97.96   | 98.37 | 98.38 | 98.17 | 41.13  | 0.00  | 42.74   | 75.00 | 37.10 | 67.74
Soil            | 91.42  | 0.00  | 91.46   | 86.15 | 85.25 | 91.82 | -      | -     | -       | -     | -     | -
Water           | 98.11  | 0.00  | 98.11   | 97.67 | 97.67 | 98.53 | 12.50  | 0.00  | 5.00    | 2.50  | 0.00  | 22.50
Residential     | 72.73  | 0.00  | 72.76   | 75.27 | 74.73 | 71.79 | 10.00  | 0.00  | 16.25   | 51.25 | 18.75 | 71.25
Commercial      | 55.34  | 11.19 | 54.51   | 55.84 | 56.14 | 55.72 | 53.73  | 0.00  | 69.96   | 71.49 | 70.61 | 74.78
Road            | 67.48  | 0.00  | 67.39   | 66.86 | 66.69 | 68.19 | 8.11   | 18.92 | 2.70    | 29.73 | 16.22 | 56.76
Highway         | 43.68  | 0.00  | 43.34   | 45.11 | 43.93 | 43.55 | 4.29   | 0.00  | 3.07    | 78.53 | 6.13  | 85.28
Railway         | 60.61  | 0.00  | 61.01   | 53.22 | 51.34 | 60.34 | 3.91   | 0.00  | 3.26    | 47.88 | 8.14  | 67.10
Parking lot 1   | 48.16  | 0.00  | 48.17   | 49.13 | 47.80 | 47.88 | -      | -     | -       | -     | -     | -
Parking lot 2   | 15.46  | 0.00  | 13.20   | 16.35 | 17.23 | 16.35 | 0.00   | 5.00  | 0.00    | 10.00 | 5.00  | 15.00
Tennis court    | 85.64  | 0.00  | 85.19   | 85.89 | 85.61 | 85.08 | -      | -     | -       | -     | -     | -
Running track   | 99.10  | 0.00  | 99.02   | 89.97 | 98.31 | 99.11 | -      | -     | -       | -     | -     | -
OA              | 77.12  | 5.93  | 77.05   | 76.43 | 75.96 | 77.09 | 22.58  | 0.46  | 29.27   | 61.43 | 31.35 | 70.03
AA              | 71.30  | 6.25  | 71.18   | 70.65 | 70.30 | 71.32 | 16.49  | 2.39  | 20.12   | 47.58 | 22.64 | 58.65
Kappa           | 75.22  | 0.00  | 75.14   | 74.47 | 73.96 | 75.18 | 7.38   | 0.00  | 15.39   | 53.86 | 17.88 | 64.15
Higher OA, AA, and Kappa values indicate better performance, with optimal values in bold. '-' indicates that the category does not occur in the shadowed area.
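The overall accuracy (OA), average accuracy (AA), and Kappa coefficient in Table 2 follow their usual definitions over a confusion matrix. A minimal sketch is given below, assuming integer class labels; classes absent from the ground truth (the '-' entries above) are excluded from AA. The function name and interface are illustrative, not the authors' implementation.

```python
import numpy as np

def classification_scores(y_true: np.ndarray, y_pred: np.ndarray, num_classes: int):
    """Return (OA, AA, Kappa) computed from a confusion matrix."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1

    total = cm.sum()
    oa = np.trace(cm) / total

    # Average accuracy: mean per-class recall over classes present in y_true.
    present = cm.sum(axis=1) > 0
    per_class = np.diag(cm)[present] / cm.sum(axis=1)[present]
    aa = per_class.mean()

    # Kappa: agreement corrected for chance, from row/column marginals.
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / total ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```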
Table 3. The runtime (s) of different methods based on the Airport and Houston datasets.
Datasets | MSR    | SC-Cyc.    | MF     | ISR     | Ours 1 | Ours 2
Airport  | 68.21  | 2375.14    | 13.22  | 921.35  | 19.25  | 8.17
Houston  | 237.31 | 101,723.52 | 172.48 | 6632.19 | 234.23 | 101.18
Lower runtime indicates better performance, with optimal values in bold. 1 Before improvement. 2 After improvement.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
