Article

Enhanced Color Nighttime Light Remote Sensing Imagery Using Dual-Sampling Adjustment

1 College of Geomatics and Geoinformation, Guilin University of Technology, Guilin 541004, China
2 Guangxi Key Laboratory of Ecological Spatio-Temporal Big Data Perception, Guilin 541004, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(7), 2002; https://doi.org/10.3390/s25072002
Submission received: 1 February 2025 / Revised: 2 March 2025 / Accepted: 19 March 2025 / Published: 22 March 2025
(This article belongs to the Special Issue Smart Image Recognition and Detection Sensors)

Abstract
Nighttime light remote sensing imagery is limited by its single band and low spatial resolution, hindering its ability to accurately capture ground information. To address this, a dual-sampling adjustment method is proposed to enhance nighttime light remote sensing imagery by fusing daytime optical images with nighttime light remote sensing imagery, generating high-quality color nighttime light remote sensing imagery. The results are as follows: (1) Compared to traditional nighttime light remote sensing imagery, the spatial resolution of the fusion images is improved from 500 m to 15 m while better retaining the ground features of daytime optical images and the distribution of nighttime light. (2) Quality evaluations confirm that color nighttime light remote sensing imagery enhanced by dual-sampling adjustment can effectively balance optical fidelity and spatial texture features. (3) In Beijing’s central business district, color nighttime light brightness exhibits the strongest correlation with business, especially in Dongcheng District, with r = 0.7221, providing a visual tool for assessing urban economic vitality at night. This study overcomes the limitations of fusing day–night remote sensing imagery, expanding the application field of color nighttime light remote sensing imagery and providing critical decision support for refined urban management.

1. Introduction

Nighttime light remote sensing plays a significant role in various fields because the intensity of nighttime light reflects characteristics of human activity [1,2,3]. Following research based on DMSP/OLS (Defense Meteorological Satellite Program/Operational Linescan System) and NPP/VIIRS (National Polar-orbiting Partnership/Visible Infrared Imaging Radiometer Suite) data, China's launch of the Luojia-1 satellite marked a significant breakthrough in obtaining high-resolution nighttime light remote sensing data. Because it is continuous and objective, nighttime light remote sensing is closely related to human activities on the Earth's surface, and its imagery therefore provides an important data source for earthquake disaster assessment, power consumption estimation, light pollution studies, carbon emission assessment, and other fields [4,5,6,7]. However, traditional nighttime light remote sensing imagery is limited to a single band and low resolution, so it cannot faithfully capture the ground surface. Color nighttime light remote sensing was developed in response: spectral imaging technology has improved spatial resolution from the kilometer to the meter level, and spectral information has developed from single-band black-and-white images to multi-band color images. This enables color nighttime light remote sensing to play an important role in surface environmental monitoring.
In 2017, China's independently developed Jilin-1 satellite (JL1-3B) began to provide high-resolution (1 m) color nighttime light remote sensing imagery, which allows urban nighttime light to be monitored and analyzed in greater detail [8,9]. For example, Watson et al. conducted a correlation analysis between Jilin-1 data and economic development, finding that the brightness of urban nighttime lights reflects the level of economic development [10]. Guk et al. analyzed urban nighttime light types using different spectral indices and found that, compared to the red and green bands, the blue band has the lowest correlation with light intensity [11]. To better support the Sustainable Development Goals, China launched the Sustainable Development Goals Satellite 1 (SDGSAT-1) in 2021, a satellite specifically designed to collect data related to sustainable development [12,13]. Its onboard low-light imager can capture the distribution of color nighttime light on the Earth's surface. Guo et al. used SDGSAT-1 to study the spatial differences between spectral bands and light intensity across land uses, revealing that spectral response varies with land use conditions [14]. Zhang et al. developed a denoising algorithm for SDGSAT-1 that effectively removes striping noise and salt-and-pepper noise from images [15]. However, color nighttime light satellites have high design and operation costs, and issues such as limited data coverage, inconvenient access, and high prices seriously restrict their global application. Meanwhile, nighttime light remote sensing can fully reflect the distribution of regional light, and daytime optical images can reflect ground information, so combining the two can provide a more comprehensive analysis of urban functional types [16,17]. Image fusion technology thus offers a practical solution.
Image fusion is the process of combining data from different sensors to create a new, fused image. Different fusion algorithms have their own strengths and weaknesses; common methods include super-resolution Bayesian pansharpening, intensity-hue-saturation (IHS), and principal component analysis (PCA) [18,19,20,21,22]. However, there is a large resolution gap between daytime and nighttime remote sensing imagery, and images must be sampled to an appropriate resolution to obtain the best fusion result. Effectively sampling daytime and nighttime remote sensing imagery is therefore of great significance.
Based on the above problems, a dual-sampling adjustment enhancement method is proposed based on the existing fusion algorithm. The method generates color nighttime light remote sensing imagery by fusing daytime optical images with nighttime light remote sensing imagery. Up–down dual sampling modes are used for comparative analysis, which verifies the effectiveness of the dual-sampling adjustment in enhancing color nighttime light remote sensing imagery. This provides an important reference basis for the study of urban functional types.

2. Research Area and Data Processing

2.1. Research Area

As the capital of the People’s Republic of China, Beijing is an important center for politics, culture, and the economy. It is also a key transportation hub and a popular tourist destination in the country. Located on the northern edge of the North China Plain, its geographical coordinates range from 115°20′ to 117°30′ East and from 39°28′ to 41°05′ North. The region is densely populated and is one of the representative cities of high-level urbanization development.

2.2. Data Processing

(1)
NPP/VIIRS data: These data come from the long-term nighttime light dataset for China published by the Global Change Science Research Data Publishing System [23] and from the National Geophysical Data Center (NGDC) under the National Oceanic and Atmospheric Administration (NOAA) of the United States. Compared to DMSP/OLS nighttime light remote sensing, the cloud-free monthly NPP/VIIRS data at 500 m spatial resolution do not suffer from pixel value saturation. Moreover, the monthly composites have the influence of moonlight, aurora, and other stray light sources removed, allowing nighttime light on the ground to be monitored effectively. The data were processed as follows: nighttime light remote sensing images of Beijing for 2013–2021 were extracted by masking with Chinese administrative division vector data, then projected onto the WGS 1984 geographic coordinate system and the Lambert Equal Area projected coordinate system. The processed multi-year data are shown in Figure 1.
(2)
Landsat 8 data: These data were sourced from the Geospatial Data Cloud and consist of Landsat 8 OLI_TIRS (OLI: Operational Land Imager; TIRS: Thermal Infrared Sensor) images covering Beijing for 2013–2021, where OLI covers mainly the visible to short-wave infrared bands and TIRS covers the thermal infrared bands. This study uses the OLI data, which contain multispectral images with a resolution of 30 m and panchromatic images with a resolution of 15 m. Radiometric calibration removes errors caused by differences in sensor response; atmospheric correction further removes atmospheric scattering and absorption effects to recover the true surface reflectance; and the panchromatic images are radiometrically calibrated to correct radiometric distortion caused by atmospheric effects and preserve the clarity of spatial details. Pre-processing greatly improves image fidelity and reduces color distortion during fusion. Finally, the images are mosaicked and clipped to the study area to obtain the multispectral and panchromatic remote sensing images for 2013–2021. The results are shown in Figure 2 and Figure 3.

3. Dual-Sampling Adjustment to Generate Color Nighttime Light Remote Sensing Imagery

3.1. Dual-Sampling Adjustment Method

(1)
Down-sampling adjustment
Due to the differences in imaging time and sensors between daytime and nighttime remote sensing imagery, the 30 m multispectral image (Landsat_MS) is first down-sampled to 500 m to match the nighttime light remote sensing imagery. The red and green bands (R, G) are fused with NPP-VIIRS (B) to obtain the 500 m composite image 1 (R, G, B) by using the band stacking method. To maintain consistency with the original daytime optical image, the bicubic convolution method is used to up-sample it to a 30 m resolution to obtain composite image 2, ensuring it maintains an integer multiple relationship with the 15 m panchromatic image.
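The down-sampling adjustment above can be sketched in a few lines. The sketch below is a minimal illustration using `scipy.ndimage.zoom` on synthetic arrays; the array shapes are hypothetical, real rasters would be read with a geospatial library such as rasterio, and cubic spline interpolation stands in for the bicubic convolution resampler.

```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical stand-ins for the real rasters (shapes are illustrative:
# a 500 x 500 grid of 30 m pixels covers roughly the same area as a
# 30 x 30 grid of 500 m pixels).
landsat_r = np.random.rand(500, 500)   # 30 m red band
landsat_g = np.random.rand(500, 500)   # 30 m green band
npp_viirs = np.random.rand(30, 30)     # 500 m nighttime light band

# Step 1: down-sample the 30 m bands to 500 m.
factor_down = 30 / 500
r_500 = zoom(landsat_r, factor_down, order=3)
g_500 = zoom(landsat_g, factor_down, order=3)

# Step 2: band stacking -> 500 m composite image 1 (R, G, NPP-VIIRS as B).
composite_1 = np.dstack([r_500, g_500, npp_viirs])

# Step 3: up-sample back to 30 m (order=3 cubic interpolation as a
# stand-in for bicubic convolution) -> composite image 2, whose 30 m grid
# is an integer multiple (2x) of the 15 m panchromatic grid.
composite_2 = zoom(composite_1, (500 / 30, 500 / 30, 1), order=3)
```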
(2)
GS transformation
Gram–Schmidt (GS) pan sharpening uses multidimensional linear orthogonal transformations to process images, reducing information redundancy while preserving spectral characteristics and detailed texture information [24,25]. In essence, GS orthogonalization decomposes the information of composite image 2 (R, G, B; 30 m) into a linear combination of orthogonal basis vectors. Applying the GS transformation to composite image 2 together with the 15 m panchromatic image generates a 15 m fused image.
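A simplified GS-style pan-sharpening step can be sketched as below. It follows the common fast-GS formulation, where the band mean simulates the low-resolution intensity and per-band gains come from covariances; this is an illustrative approximation under those assumptions, not the exact implementation used in the paper.

```python
import numpy as np

def gs_pansharpen(ms, pan):
    """Simplified GS-style pan sharpening.

    ms  : (H, W, B) multispectral image resampled to the panchromatic grid
    pan : (H, W) panchromatic band
    """
    synthetic = ms.mean(axis=2)      # simulated low-resolution intensity
    detail = pan - synthetic         # high-frequency spatial detail
    out = np.empty_like(ms)
    for b in range(ms.shape[2]):
        # per-band gain from the covariance with the simulated intensity
        c = np.cov(ms[..., b].ravel(), synthetic.ravel())
        out[..., b] = ms[..., b] + (c[0, 1] / c[1, 1]) * detail
    return out
```

When the panchromatic band equals the simulated intensity there is no detail to inject, so the multispectral bands pass through unchanged, which is a useful sanity check on the gain formulation.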
(3)
High-pass filter enhancement
High-pass filter enhancement allows high-frequency signals to pass while attenuating low-frequency signals, enhancing specific frequencies in an image. Its advantage lies in the ability to control image information precisely according to frequency and specific needs, thereby optimizing image quality and highlighting detailed features. The formula is as follows [26,27]:
H(u, v) = \begin{cases} 0, & D(u, v) \le D_0 \\ 1, & D(u, v) > D_0 \end{cases}
where H(u, v) is the transfer function of the high-pass filter, D(u, v) = (u^2 + v^2)^{1/2} denotes the distance from the point (u, v) to the origin of the frequency plane, and D_0 denotes the cutoff frequency, i.e., the distance from the cutoff point to the origin. When D(u, v) is greater than D_0, the filter coefficient is 1, which passes and enhances the high-frequency information; when D(u, v) is less than or equal to D_0, the filter coefficient is 0, which suppresses the low-frequency information. The specific process is shown in Figure 4.
(4)
Based on the above methods, color nighttime light remote sensing imagery of Beijing for 2013, 2015, 2017, 2019, and 2021 was generated. The results are shown in Figure 5 and Figure 6.
From 2013 to 2021, the distribution of nighttime light in the central area of Beijing remained relatively stable. Not only can one directly observe the distribution of nighttime lights in the city, represented by the purple-covered areas, but one can also see the information about the objects covered by the nighttime light. For example, the most central urban area usually shows a significant brightness of light, while the old city areas may exhibit different light distribution due to the renovation of lighting facilities.
As shown in Figure 6, the brightness of light in central Beijing gradually weakens outward from the center: commercial centers and transportation facilities exhibit brighter light, while residential areas and surrounding green spaces show weaker light. The farther from the city center, the weaker the light brightness becomes.

3.2. Sampling Comparison Method

To evaluate the effectiveness of dual-sampling adjustment on image quality, this paper compares it with single up-sampling and down-sampling using NND transformation, Brovey transformation, PCA transformation, and GS transformation for fusion to obtain color nighttime light remote sensing imagery. The specific methods are discussed below.
(1)
Down-sampling multispectral image
To match the resolution of the 500 m nighttime light remote sensing image, the 30 m multispectral image is down-sampled, and its red and green bands are fused with the nighttime light remote sensing image to obtain a 500 m color nighttime light remote sensing image. This image is then fused with the 15 m panchromatic image using each fusion algorithm.
(2)
Up-sampling adjustment
Bicubic convolution resampling is performed on the 500 m spatial resolution nighttime light remote sensing image to match the 30 m spatial resolution multispectral image. The red and green bands are extracted and fused with the nighttime light remote sensing image to obtain the 30 m color nighttime light remote sensing image, and then it is fused with a 15 m panchromatic image.
(3)
Dual-sampling multispectral image
Based on the down-sampling multispectral image, the 500 m color nighttime light remote sensing image is up-sampled to the 30 m level matching the original multispectral image and then fused with the 15 m panchromatic image to generate the color nighttime light remote sensing image. The results of the processing are shown in Figure 7.

3.3. Image Quality Evaluation

(1)
Subjective Evaluation
Visual inspection of Figure 7 shows that the down-sampling method retains spectral information better than the other methods, but it loses more light information at regional edges during band fusion, and its feature detail is less clear than in the images obtained by the other two methods. Up-sampling fuses the 500 m color nighttime light remote sensing image directly to 15 m using the panchromatic image, which is inconsistent with the original Landsat 8 pairing of the 30 m multispectral image with the 15 m panchromatic image. The dual-sampling adjustment method improves multi-resolution image fusion while retaining the edge lighting information of the 500 m nighttime light remote sensing image. To further verify the advantages of the dual-sampling adjustment method, its third band is compared, as shown in Figure 8.
The red square area clearly shows that the spatial detail of NND, Brovey, PCA, GS, and GS_High has been enhanced by the dual-sampling adjustment. This is especially true for the edges of buildings, residential areas, and river water bodies, where the contours are clearer. Compared with GS and GS_High, the images transformed using NND, Brovey, and PCA are blurred in low-light areas, which makes it impossible to visualize the detailed textures of the features. In terms of spectral retention effect, the texture of GS_High features is more obvious than that of GS.
(2)
Objective Evaluation
Standard Deviation (STD), Information Entropy (EN), Average Gradient (AG), Correlation Coefficient (CC), Spectral Distortion (SD), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity (SSIM) are used as the main evaluation indicators, as shown in Table 1.
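Several of these indicators are simple to compute directly. The sketch below implements EN, AG, CC, and PSNR for images scaled to [0, 1]; definitions of these metrics vary slightly across the literature, so the exact variants here are assumptions.

```python
import numpy as np

def entropy(img, bins=256):
    """Information entropy (EN) of an image with intensities in [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before the log
    return float(-np.sum(p * np.log2(p)))

def avg_gradient(img):
    """Average gradient (AG): mean magnitude of local intensity change."""
    gx, gy = np.gradient(img.astype(float))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2)))

def correlation(a, b):
    """Correlation coefficient (CC) between two images."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio (PSNR) for intensities in [0, peak]."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float(10 * np.log10(peak ** 2 / mse))
```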
The images generated by the three sampling methods were evaluated using the above indexes, with the results shown in Figure 9.
The down-sampled image has the highest AG, with GS performing best overall and Brovey performing best in terms of STD; however, its CC and SSIM are the lowest, and its distortion is the largest. The up-sampled image has the highest CC and PSNR but is lower in MEAN, STD, EN, and AG, although the GS and PCA transforms perform better in these respects. The images generated using dual-sampling adjustment, and in particular their third band, perform best in MEAN, STD, EN, CC, and PSNR; the evaluation indicators of the NND, Brovey, PCA, and GS transforms all improve. This indicates that the dual-sampling-adjusted image performs better in average brightness and pixel dispersion and best preserves spectral features. Among the transforms, the indicators of GS are the most stable, so a high-pass filter algorithm is applied for enhancement, as shown in Figure 10.
Compared with the GS transformation, the color nighttime light remote sensing imagery obtained by GS_High shows obvious improvement in every index of the third band, so GS_High is selected for image fusion to generate the color nighttime light remote sensing imagery used for the analysis of urban functional types.

4. Analysis of Urban Functional Types Based on Color Nighttime Light Remote Sensing Imagery

4.1. Research Data Analysis

Chaoyang District, Dongcheng District, and Xicheng District of Beijing were selected as the research areas. These three areas are located in the central urban districts of Beijing, covering different socio-economic characteristics, including commercial centers, historical and cultural heritage areas, and high-end residential areas. This provides a deeper understanding of the living conditions of urban residents and the level of economic development, as shown in Figure 11a–c.
To show the distribution of urban functional types more clearly, this paper represents the urban functions of the three districts using points. Because it is difficult to distinguish many functional types by color alone, the fourteen urban functional types were ranked and the top six were extracted to obtain the distribution of urban functional types in the city center. The urban function statistics for the three central districts are shown in Figure 12.
Shopping is the main functional type in the city center, occupying a quarter of the city's area, and it is mainly distributed in the southwestern region. Chaoyang District, in the northeastern part of Beijing, has a large number of shopping malls, office buildings, and corporate enterprises, so transportation facilities rank relatively lower there. Dongcheng District and Xicheng District lie in the central urban area of Beijing, adjacent to Tiananmen Square and the Forbidden City; as the political and cultural centers of Beijing, traffic facilities there outrank companies and businesses.

4.2. Kernel Density Analysis of POIs in Urban Functional Areas

Kernel density analysis is an assessment index that represents the probability of spatial geographic events occurring within a certain area [28]. It can intuitively reflect the distribution of discrete values within a continuous area and can be used in research fields such as analyzing urban functional types and urban spatial structure division [29]. The formula is as follows:
p(x) = \frac{1}{n} \sum_{i=1}^{n} k_h(x - x_i)
In the formula, p(x) is the kernel density estimate and k_h(x) = \frac{1}{h} k\left(\frac{x}{h}\right) is the scaled kernel function. h is the bandwidth (smoothing parameter), and n is the total number of samples. An appropriate bandwidth h has a significant impact on the research results; therefore, bandwidths of 50–1000 m were tested, as shown in Figure 13.
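A minimal grid-based version of this estimate might look as follows. The Gaussian kernel and regular grid are illustrative choices (GIS software typically uses other kernels and optimized implementations).

```python
import numpy as np

def kernel_density(points, grid_x, grid_y, h):
    """Gaussian kernel density estimate of POI points on a regular grid.

    points : (n, 2) array of POI (x, y) coordinates
    h      : bandwidth, in the same units as the coordinates
    """
    gx, gy = np.meshgrid(grid_x, grid_y)
    density = np.zeros_like(gx, dtype=float)
    for px, py in points:
        d2 = (gx - px) ** 2 + (gy - py) ** 2
        density += np.exp(-d2 / (2 * h ** 2))
    # 1 / (n * 2*pi*h^2) normalizes the 2-D Gaussian kernel
    return density / (len(points) * 2 * np.pi * h ** 2)
```

A smaller h produces sharper, noisier density peaks around each POI, while a larger h smooths them together, which is exactly the trade-off behind the 50–1000 m bandwidth comparison above.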
By comparing Figure 13, it was found that the density center is more stable between bandwidths of 100–150 m. Therefore, 150 m was chosen as the bandwidth for kernel density, with a pixel size of 15 m. Then, the six urban functional types were analyzed, and the results are shown in Figure 14.

4.3. Urban Functional Correlation Analysis Based on Color Light Values

Before conducting the Pearson correlation analysis, the image was log-transformed. The method from reference [30] was used to process the fused imagery and POI kernel density analysis results. Then, Pearson correlation analysis was performed separately on the nighttime light values and kernel density results of the three regions. The formula is as follows:
r = \frac{\sum_{i=1}^{m} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{m} (x_i - \bar{x})^2 \sum_{i=1}^{m} (y_i - \bar{y})^2}}
where r is the Pearson correlation coefficient, with a value range of [−1, 1]; m is the total number of samples; x_i and y_i are the observed values of the two variables; and \bar{x} and \bar{y} are their mean values.
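The coefficient can be computed directly from this formula. The sketch below also applies a log transform to the light values first, as described above; the input arrays are hypothetical, and log1p is used here as an assumed way to avoid log(0) on dark pixels.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient of two 1-D samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

# Hypothetical inputs standing in for pixel samples of the fused imagery
# and the POI kernel density surface.
light = np.random.rand(1000) * 100
density = np.log1p(light) + 0.1 * np.random.randn(1000)
r = pearson_r(np.log1p(light), density)
```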
Current studies predominantly focus on correlating nighttime light values with overall urban POI data, yet limited attention has been given to urban functional types at finer spatial scales. To address this gap, we conducted a correlation analysis between color nighttime light values across three distinct zones and six urban functional types. This approach aims to expand the applicability of high-resolution color nighttime light remote sensing imagery in small-scale urban planning research. The results are shown in Figure 15.
As shown in Figure 15, the nighttime light brightness in the three study areas exhibits positive correlations with the kernel density analysis results, indicating that color nighttime light remote sensing imagery effectively reflects urban nighttime economic activity. Specifically, color nighttime light values show the highest correlation with businesses (up to r = 0.7221), followed by companies (r = 0.6120). This can be attributed to the economic centrality of these regions, where frequent nighttime business activity drives strong correlations. In Chaoyang District, a high-end commercial zone of Beijing, the relatively dispersed distribution of dining areas results in a weaker correlation with color nighttime light values. Conversely, traffic in Dongcheng and Xicheng Districts shows lower correlations with light brightness (r = 0.4217 and 0.3027, respectively), primarily because their status as historical and cultural heritage zones restricts infrastructure development and reduces these interdependencies.
In summary, the dual-sampling adjustment method not only enhances image quality and analytical precision but also strengthens the practicality and validity of this study. By enabling a more accurate identification of urban functional types, urban planners can allocate resources rationally and formulate targeted policies to improve residents’ quality of life and promote sustainable urban development.

5. Conclusions

This study addresses the resolution disparity between daytime and nighttime light remote sensing imagery by proposing a dual-sampling adjustment method to enhance color nighttime light imagery. The effectiveness of the method in preserving image details and improving spectral quality is validated through comprehensive subjective and objective evaluations. Key findings are summarized as follows:
(1)
The dual-sampling adjustment method generates color nighttime light imagery with a spatial resolution improved from 500 m to 15 m and spectral bands expanded from single-band to three bands (red, green, blue). This approach retains nighttime light distribution while incorporating daytime surface features, significantly improving the comprehensive performance of nighttime light remote sensing imagery.
(2)
Subjective and objective evaluations demonstrate that the dual-sampling adjustment image, particularly its third band, achieves the best performance in indicators such as MEAN, STD, EN, CC, and PSNR. These results confirm the method’s superiority in preserving spatial textures, enhancing information capacity, and maintaining spectral fidelity.
(3)
Urban functional type analysis reveals that the enhanced color nighttime light remote sensing imagery accurately captures urban spatial features. The brightness of color nighttime light exhibits the strongest correlation with businesses, followed by companies, providing novel data support for dynamic monitoring of urban functions.

6. Discussion

This study addresses the challenges of low-resolution nighttime light remote sensing imagery and the significant resolution discrepancies in daytime and nighttime image fusion by proposing an innovative dual-sampling adjustment method to enhance color nighttime light remote sensing imagery. This provides a novel perspective for multi-source remote sensing fusion and urban planning applications. However, this study has limitations. First, the study area (Dongcheng, Xicheng, and Chaoyang Districts in Beijing) may constrain the generalizability of the findings. Second, the lack of a standard for image quality assessment introduces some variability in different evaluation indicators; additionally, seasonal variations in light intensity and the dynamic nature of urban economies may influence the analysis of urban functional types. Future work will expand the study range, adopt comprehensive evaluation indicators, and investigate urban functional types from spatiotemporal perspectives. Furthermore, the dual-sampling adjustment method will be optimized to enhance robustness in complex environments, and its application potential will be explored in urban planning, ecological surveys, and environmental monitoring.

Author Contributions

Conceptualization, Y.H.; methodology, Y.L.; software, Y.H.; validation, Y.H.; formal analysis, Y.L.; investigation, Y.H.; resources, Y.L.; data curation, L.Z.; writing—original draft preparation, Y.H.; writing—review and editing, Y.L.; visualization, Y.H. and M.Y.; supervision, Y.H.; project administration, Y.L.; funding acquisition, L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (grant No. 41961063).

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank the reviewers for their constructive comments and suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Combs, C.L.; Miller, S.D. A review of the far-reaching usage of low-light nighttime data. Remote Sens. 2023, 15, 623. [Google Scholar] [CrossRef]
  2. Zheng, Y.; Fan, M.; Cai, Y.; Fu, M.; Yang, K.; Wei, C. Spatiotemporal pattern evolution of carbon emissions at the city-county-town scale in Fujian Province based on DMSP/OLS and NPP/VIIRS nighttime light data. J. Clean. Prod. 2024, 442, 140958. [Google Scholar] [CrossRef]
  3. Yu, B.L.; Wang, C.X.; Gong, W.K.; Chen, Z.; Shi, K.; Wu, B.; Hong, Y.; Li, Q.; Wu, J. Nighttime light remote sensing and urban studies: Data, methods, applications, and prospects. Natl. Remote Sens. Bull. 2021, 25, 342–364. [Google Scholar]
  4. Li, X.; Cao, H.R.; Gong, Y. Turkey-Syria Earthquake Assessment Using High-Resolution Night-time Light Images. Geomat. Inf. Sci. Wuhan Univ. 2023, 48, 1697–1705. [Google Scholar]
  5. Ye, Y.; Tong, C.; Dong, B.; Huang, C.; Bao, H.; Deng, J. Alleviate light pollution by recognizing urban night-time light control area based on computer vision techniques and remote sensing imagery. Ecol. Indic. 2024, 158, 111591. [Google Scholar]
  6. Lin, Y.; Gao, C.; Zhang, T.H. Analysis of Russia-Ukraine Conflicts Situation and Its Economic Impact Using Night-Time Light Remote Sensing. J. Tongji Univ. (Nat. Sci.) 2024, 52, 1975–1984. [Google Scholar]
  7. Li, W.; Wu, M.; Niu, Z. Spatialization and Analysis of China’s GDP Based on NPP/VIIRS Data from 2013 to 2023. Appl. Sci. 2024, 14, 8599. [Google Scholar] [CrossRef]
  8. Yu, L.; Zheng, Y.; Jiang, J.Q. Development and Innovative Applications of Night Light Remote Sensing Satellites. Satell. Appl. 2024, 4, 46–53. [Google Scholar]
  9. Zheng, Q.M.; Weng, Q.H.; Huang, L.Y.; Wang, K.; Deng, J.; Jiang, R.; Ye, Z.; Gan, M. A new source of multi-spectral high spatial resolution night-time light imagery—JL1-3B. Remote Sens. Environ. 2018, 215, 300–312. [Google Scholar] [CrossRef]
  10. Watson, C.S.; Elliott, J.R.; Córdova, M.; Menoscal, J.; Bonilla-Bedoya, S. Evaluating night-time light sources and correlation with socio-economic development using high-resolution multi-spectral Jilin-1 satellite imagery of Quito, Ecuador. Int. J. Remote Sens. 2023, 44, 2691–2716. [Google Scholar] [CrossRef]
  11. Guk, E.; Levin, N. Analyzing spatial variability in night-time lights using a high spatial resolution color Jilin-1 image–Jerusalem as a case study. ISPRS J. Photogramm. Remote Sens. 2020, 163, 121–136. [Google Scholar]
  12. Yan, L.; Hu, Y.; Dou, C.; Li, X.-M. Radiometric Calibration of SDGSAT-1 Nighttime Light Payload. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1000715. [Google Scholar] [CrossRef]
Figure 1. Nighttime light remote sensing imagery dataset.
Figure 2. Panchromatic image dataset.
Figure 3. Multispectral image dataset.
Figure 4. Implementation of the dual-sampling adjustment image fusion method.
Figure 5. Dual-sampling adjustment method for fusion generation of color nighttime light remote sensing imagery.
Figure 6. The third band of color nighttime light remote sensing imagery.
Figure 7. Comparison of color nighttime light remote sensing images with different sampling methods.
Figure 8. Comparison of third-band images with different sampling methods.
Figure 9. Comparison of quality evaluation of ‘up–down dual sampling’ fusion image.
Figure 10. Quality evaluation before and after image enhancement.
Figure 11. Study area data types.
Figure 12. Distribution of urban functional types and statistics.
Figure 13. Kernel density analysis results.
Figure 14. Distribution of urban functional types.
Figure 15. Correlation analysis between color nighttime light values and POIs.
Table 1. Description of the evaluation metrics.

Evaluation Metric: Performance
MEAN: Reflects the overall brightness level of the image, characterizing the distribution of sensitivity.
STD: Represents contrast by quantifying pixel-value dispersion; higher values indicate stronger contrast.
EN: Measures the richness and integrity of spectral information; higher entropy denotes greater informational diversity.
AG: Evaluates detail clarity; higher average gradients correspond to sharper edges and textures.
CC: Assesses spectral consistency between the image and a reference; values approaching 1 indicate superior fidelity.
SD: Quantifies the severity of spectral distortion; lower values signify higher fidelity.
PSNR: Integrates spectral fidelity and detail preservation; higher PSNR reflects improved image quality.
SSIM: Holistically evaluates brightness, contrast, and texture against a reference image; values closer to 1 denote closer agreement.
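Several of the metrics in Table 1 can be computed directly from pixel statistics. The sketch below, written with NumPy under the assumption of 8-bit grayscale arrays, uses common textbook definitions of MEAN, STD, EN, AG, CC, and PSNR; the paper's exact formulations (and its SD and SSIM implementations) may differ, so this is an illustration rather than the authors' method.

```python
import numpy as np

def quality_metrics(img, ref):
    """Compute common image-quality metrics for 8-bit grayscale arrays.

    img, ref: 2-D arrays of identical shape with values in [0, 255].
    """
    img = img.astype(np.float64)
    ref = ref.astype(np.float64)

    # MEAN and STD: overall brightness level and contrast (pixel dispersion).
    mean = img.mean()
    std = img.std()

    # EN (entropy): information richness from the 256-bin gray-level histogram.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    en = -(p * np.log2(p)).sum()

    # AG (average gradient): mean magnitude of local intensity change,
    # a proxy for edge and texture sharpness.
    gy, gx = np.gradient(img)
    ag = np.sqrt((gx**2 + gy**2) / 2).mean()

    # CC (correlation coefficient): spectral consistency with the reference.
    cc = np.corrcoef(img.ravel(), ref.ravel())[0, 1]

    # PSNR: fidelity relative to the reference (8-bit peak value 255).
    mse = ((img - ref) ** 2).mean()
    psnr = 10 * np.log10(255**2 / mse) if mse > 0 else float("inf")

    return {"MEAN": mean, "STD": std, "EN": en, "AG": ag, "CC": cc, "PSNR": psnr}
```

As a sanity check, evaluating an image against itself should yield CC = 1 and an unbounded PSNR, while EN stays within [0, 8] bits for 256 gray levels.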
Huang, Y.; Lu, Y.; Zhang, L.; Yin, M. Enhanced Color Nighttime Light Remote Sensing Imagery Using Dual-Sampling Adjustment. Sensors 2025, 25, 2002. https://doi.org/10.3390/s25072002