Article

Mapping Spectral Composition of Nighttime Lighting in Urban Green Spaces Using SDGSAT-1 NTL Data and Google Earth Imagery

1 School of Geospatial Engineering and Science, Sun Yat-sen University, Zhuhai 519082, China
2 Key Laboratory of Comprehensive Observation of Polar Environment (Sun Yat-sen University), Ministry of Education, Zhuhai 519082, China
3 Zhejiang Academy of Surveying and Mapping, Hangzhou 310012, China
* Author to whom correspondence should be addressed.
Remote Sens. 2026, 18(5), 732; https://doi.org/10.3390/rs18050732
Submission received: 30 January 2026 / Revised: 23 February 2026 / Accepted: 27 February 2026 / Published: 28 February 2026

Highlights

What are the main findings?
  • A Swin Transformer-based encoder–decoder framework, UGS-STUNet, was developed to classify five distinct urban green space (UGS) typologies from Google Earth imagery. The proposed UGS-STUNet outperformed state-of-the-art models across multiple evaluation metrics, achieving a precision of 85.72% and an F1 score of 83.73%.
  • Blue-to-green (B/G) and green-to-red (G/R) ratios were proposed to map the spectral composition of lighting across different UGS typologies using SDGSAT-1 NTL data. We identified stark spectral heterogeneity across UGS typologies in Shanghai: street trees show the highest red-light exposure, while forest patches, forest belts, and other green spaces exhibit a blue-rich lighting environment.
What are the implications of the main findings?
  • This research provides a scalable method for monitoring the spectral quality of urban nightscapes, offering critical evidence to inform sustainable urban planning and the design of light-mitigation strategies to support global biodiversity and public health.
  • The urban ecological health of Shanghai’s UGS is differentially impacted by nighttime lighting, and the high-red-intensity exposure identified in street trees suggests a high risk of shifting the phytochrome photoequilibrium.

Abstract

Characterizing the spectral composition of artificial light at night (ALAN) within urban green spaces (UGS) is vital for ecological conservation, yet traditional sensors often lack the requisite spatial and spectral resolution for fine-scale analysis. To address this gap, this study leverages high-resolution multispectral nighttime light (NTL) data from SDGSAT-1 to perform a fine-scale characterization of lighting across diverse UGS typologies. We developed UGS-STUNet, a semantic segmentation framework based on the Swin Transformer architecture, to accurately extract five UGS categories from Google Earth imagery. Two specialized spectral indices, blue-to-green (B/G) and green-to-red (G/R) ratios, were derived from SDGSAT-1 NTL data to quantify the lighting’s spectral composition. Application in Shanghai demonstrated that UGS-STUNet achieved a precision of 85.72%, significantly outperforming existing methods. Our findings reveal that street trees are subjected to the highest red-light intensity and the lowest B/G and G/R ratios due to their proximity to roadway illumination. In contrast, forest patches and belts exhibit higher spectral ratios, indicating relatively higher exposure to blue and green wavelengths. This study provides a robust and scalable method for monitoring the spectral quality of urban nightscapes, offering critical insights for sustainable urban planning and lighting mitigation strategies to safeguard global biodiversity and public health.

1. Introduction

The proliferation of artificial light at night (ALAN) represents one of the most pervasive anthropogenic alterations to the Earth’s environment [1,2,3]. Over the past century, the transition from natural darkness to a globally illuminated nocturnal landscape has proceeded at an unprecedented pace [4], particularly within urban centers that host over half of the global population [5,6]. While artificial lighting is an indispensable driver of modern socio-economic activity, including enhancing public safety, extending commercial hours, and facilitating transportation, its environmental externalities have become increasingly critical [7]. Urban light pollution, characterized by skyglow, light trespass, and glare, disrupts circadian rhythms and has recently gained significant attention [8,9].
Urban green space (UGS), comprising vegetated areas such as parks and green buffers [10], constitutes a fundamental pillar of the urban ecosystem. UGS provides pivotal ecosystem services [11,12], ranging from environmental benefits like urban heat island mitigation [13,14], air quality improvement [15], and water purification [16], to socio-economic benefits via recreational provisioning [17,18] and public health promotion [19,20]. However, rapid urbanization exerts multifaceted pressures on UGS, leading to habitat fragmentation, biodiversity loss, and intensified exposure to ALAN [21,22]. Specifically, excessive and spectrally suboptimal ALAN can alter plant phenology (e.g., delaying leaf senescence) [23,24], disrupt pollinator behavior [25], and fragment habitats for nocturnal fauna [26,27]. Consequently, as municipalities transition toward sustainable lighting paradigms, characterizing the spectral composition of ALAN within UGS has emerged as a priority for biodiversity conservation and evidence-based urban planning [21,28].
For decades, satellite-based Nighttime Light (NTL) remote sensing has offered a unique perspective for monitoring human activities globally [29]. The utility of NTL data is well-documented, with applications spanning urban expansion dynamics, socio-economic proxy mapping, and light pollution assessment [30,31]. Accordingly, NTL data has transitioned from a proxy for socio-economic development to a vital tool for environmental impact assessments. Previous studies have predominantly relied on NTL intensity to evaluate impacts on ecosystems, vegetation types, species distributions, and designated conservation areas such as biodiversity hotspots and protected areas [32,33,34]. For example, Bennie et al. [35], analyzing DMSP/OLS data from 1992 to 2012, demonstrated a global increase in ecosystem exposure to ALAN. Similarly, Gaston et al. [36] reported that a substantial proportion of protected areas have experienced significant recent increases in nighttime lighting. Zheng et al. [37] utilized a combined DMSP/OLS and NPP-VIIRS dataset (1992–2018) to reveal that protected areas across Africa are subject to intensifying levels of NTL intensity.
Despite these advancements, a significant gap remains in characterizing the spectral composition of urban lighting, particularly within the fragmented and heterogeneous landscape of UGS [21]. This limitation is rooted in two primary constraints. First, most widely used NTL sensors (DMSP/OLS and NPP-VIIRS) are panchromatic; they integrate all visible light into a single intensity value, providing no information on the spectral composition of NTL [32]. This is a critical shortcoming because the ecological impact of ALAN is highly dependent on its spectral distribution, yet these sensors lack sensitivity to the ecologically crucial blue-light spectrum (380–450 nm). For instance, the DMSP/OLS was sensitive from 450 to 1000 nm, while the NPP/VIIRS-DNB is sensitive from 480 to 920 nm [38]. Second, the coarse spatial resolution of NTL data is fundamentally mismatched to the scale of UGS. On the one hand, the coarse resolution (750 m for NPP-VIIRS NTL data and 1000 m for DMSP/OLS NTL data) is inadequate for characterizing intra-park lighting conditions; on the other hand, it results in a significant mixed-pixel effect. For example, the radiance of a dark park interior is often averaged with the bright illumination of adjacent roads. This aggregation fundamentally distorts the light levels and spectral signatures that organisms within the green space actually perceive. Therefore, the absence of multi-spectral, high-resolution NTL data precludes the characterization of the spectral composition of ALAN within UGS.
The launch of the Sustainable Development Science Satellite 1 (SDGSAT-1) in 2021 marks a transformative shift in nocturnal environment monitoring [39,40]. SDGSAT-1 is equipped with three primary sensors: a thermal infrared spectrometer, a Glimmer Imager (GLI), and a multispectral imager [39,41]. Notably, the GLI sensor provides NTL data featuring a rare combination of multi-spectral capabilities (Red, Green, and Blue bands) and high spatial resolution (10 m panchromatic and 40 m multi-spectral), coupled with the distinct advantage of lower geolocation uncertainty [42]. SDGSAT-1 NTL data holds two significant advantages for fine-scale monitoring of UGS-related indicators: one is high spatial resolution, and the other is the inclusion of multi-spectral measures [43]. This configuration allows for unprecedented research opportunities in UGS analysis [44]. This study leverages SDGSAT-1 multi-spectral NTL imagery alongside Google Earth imagery to conduct a comprehensive analysis of the spectral composition of lighting within UGS. We developed a deep learning model for UGS mapping from Google Earth imagery, and then constructed two spectral indices to quantify the spectral composition of nighttime lighting within UGS. To the best of our knowledge, this study represents the first attempt to systematically map the spectral composition of nighttime lighting across distinct UGS typologies using SDGSAT-1, establishing a new methodological pathway for urban nightscape ecology and sustainable urban environments.

2. Materials and Methods

2.1. Study Area

Shanghai, a preeminent global megacity and a pivotal economic hub in East China, serves as the study area for this research (Figure 1a). Shanghai has undergone accelerated urbanization, resulting in a dense and complex urban fabric characterized by intensive land-use patterns. Within this high-pressure environment, UGS function as critical ecological infrastructure, providing vital refuges for biodiversity and essential public health benefits [45,46]. Concurrently, the city’s nocturnal environment is defined by an exceptionally high density and diversity of ALAN, driven by a complex transition in lighting technology, where legacy high-pressure sodium (HPS) lamps coexist with the rapid proliferation of broad-spectrum light-emitting diodes (LEDs) [47]. This combination of intense, spectrally diverse lighting adjacent to clearly delineated UGS produces pronounced radiance gradients and significant spectral encroachment into park interiors. Consequently, Shanghai represents an ideal and representative case study for investigating the spectral composition of ALAN within UGS, as it exemplifies the acute lighting-driven ecological challenges faced by rapidly developing metropolises worldwide.

2.2. Data Sources and Preprocessing

SDGSAT-1 NTL data. A high-quality, cloud-free SDGSAT-1 GLI scene of Shanghai was acquired from the SDGSAT-1 Open Science Program website (www.sdgsat.ac.cn; Figure 1). The scene, captured on 23 October 2022, was manually selected to ensure optimal imaging quality. The SDGSAT-1 GLI sensor provides data in two panchromatic bands (i.e., panchromatic low (PL) and panchromatic high (PH)) and three visible spectral bands: a blue band (424–526 nm), a green band (506–612 nm), and a red band (600–894 nm). To address the severe stripe noise present in SDGSAT-1 NTL images, we employed the Linear Structure-Constrained Denoising (LSCD) method [48] to enhance the data quality. The LSCD procedure involves: (1) Contrast enhancement via improved histogram equalization to amplify signal-to-noise ratios; (2) Artifact identification, which couples gradient magnitude detection with region-growing algorithms to isolate linear stripe structures based on their directional anisotropy; and (3) Radiometric reconstruction, where affected pixels are interpolated using spatially correlated neighboring values. This approach effectively suppresses systematic noise while preserving critical image details.
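As an illustration of step (3), the sketch below interpolates stripe-flagged pixels from their valid neighbors, assuming the stripe mask from step (2) is given. The 8-neighbor mean is our simplification of the spatially correlated interpolation described in [48], not the exact LSCD procedure.

```python
import numpy as np

def reconstruct_stripes(img, stripe_mask):
    """Replace stripe-affected pixels with the mean of valid 8-neighbours.

    Hypothetical simplification of LSCD step (3): `stripe_mask` is assumed
    to come from the gradient/region-growing detection in step (2).
    """
    out = img.astype(float).copy()
    H, W = img.shape
    for r, c in zip(*np.nonzero(stripe_mask)):
        r0, r1 = max(r - 1, 0), min(r + 2, H)
        c0, c1 = max(c - 1, 0), min(c + 2, W)
        win = out[r0:r1, c0:c1]
        valid = ~stripe_mask[r0:r1, c0:c1]   # only use pixels outside the stripe
        if valid.any():
            out[r, c] = win[valid].mean()
    return out
```

In a full implementation, the interpolation would run along the stripe's dominant direction rather than over an isotropic window.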
Google Earth Imagery. To achieve fine-grained UGS classification, we utilized very-high-resolution optical imagery from Google Earth with a spatial resolution of approximately 1 m (Figure 1c). Due to quality issues with the Shanghai imagery for 2021 and 2023, we selected the 2020 imagery, which provided superior visual quality for UGS delineation. Given the relatively stable spatial extent of major UGS over short intervals, this temporal gap is expected to have negligible impacts on UGS delineation. To meet the input requirements of the deep learning architecture and optimize computational efficiency, the Google Earth imagery was partitioned into 256 × 256 pixel patches (tiles) prior to model inference, ensuring a balance between local contextual information and GPU memory throughput.
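The 256 × 256 tiling step can be sketched as follows; zero-padding of ragged right/bottom edges is our assumption, as the paper does not state how non-divisible extents are handled.

```python
import numpy as np

def tile_image(img, tile=256):
    """Split an H×W×C array into non-overlapping tile×tile patches,
    zero-padding the right/bottom edges (padding strategy is assumed)."""
    H, W, C = img.shape
    ph = (-H) % tile                         # rows of padding needed
    pw = (-W) % tile                         # cols of padding needed
    padded = np.pad(img, ((0, ph), (0, pw), (0, 0)))
    Hn, Wn = padded.shape[0] // tile, padded.shape[1] // tile
    return (padded.reshape(Hn, tile, Wn, tile, C)
                  .transpose(0, 2, 1, 3, 4)   # group row/col blocks together
                  .reshape(Hn * Wn, tile, tile, C))
```

For example, a 600 × 520 RGB scene yields 9 tiles of 256 × 256 × 3.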
Before calculating spectral indices, the raw SDGSAT-1 data must be converted into physical units of radiance [41,49]. The conversion of digital numbers ($DN$) to radiance ($L$, in units of nW/cm2/sr) is performed using the following linear equation:
$L_{\lambda} = (\mathrm{Gain}_{\lambda} \times DN_{\lambda} + \mathrm{Offset}_{\lambda}) \times W_{\lambda} \times 10^{5}$
where $\lambda$ denotes the spectral band (Blue, Green, or Red), and $\mathrm{Gain}$, $\mathrm{Offset}$, and $W$ represent the amplification factor, offset coefficient, and band width, respectively. The related parameters are extracted from the image metadata and summarized in Table 1.
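A minimal implementation of the calibration equation; the parenthesization (Gain × DN + Offset) × W × 10^5 and the coefficient values in the test are our assumptions (illustrative, not the actual metadata values from Table 1).

```python
def dn_to_radiance(dn, gain, offset, bandwidth):
    """Convert SDGSAT-1 GLI digital numbers to radiance (nW/cm^2/sr).

    Assumed reading: (gain*DN + offset) is spectral radiance in W/m^2/sr/um;
    multiplying by the band width (um) integrates over the band, and the
    factor 1e5 converts W/m^2 to nW/cm^2.
    """
    return (gain * dn + offset) * bandwidth * 1e5
```

The 10^5 factor follows from 1 W/m^2 = 10^9 nW / 10^4 cm^2.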

2.3. UGS Extraction

2.3.1. UGS Sample Labels Generation

UGS sample labels were generated through expert manual visual interpretation. Following the UGS classification system defined by the Shanghai Landscape and City Appearance Administration Information Center, we grouped green spaces into five categories (Figure 2): green land, forest patches, forest belts, street trees, and other green spaces. Green land is characterized by open lawns and low-lying vegetation, whereas forest patches represent densely wooded areas located within parks or at urban fringes. Forest belts are defined as linear tree strips serving shielding or esthetic functions. Street trees designate vegetation along road corridors, while other green spaces consist of mixed-use or transitional vegetation. In this experiment, each green space type constitutes one class, and non-green areas are treated as the background class. Subsequently, the Google Earth images were cropped into patches of size 256 × 256 pixels. The resulting dataset, consisting of 2000 high-quality samples, was randomly split into training, validation, and testing sets in a 7:2:1 ratio.
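The 7:2:1 random split can be sketched as below; the fixed seed is our addition for reproducibility and is not stated in the paper.

```python
import random

def split_samples(paths, seed=42):
    """Randomly split sample tiles into train/val/test at a 7:2:1 ratio."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)       # seeded shuffle for reproducibility
    n = len(paths)
    n_train, n_val = int(0.7 * n), int(0.2 * n)
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])
```

With the paper's 2000 samples this yields 1400/400/200 tiles.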

2.3.2. UGS-STUNet Framework

Overall architecture. Inspired by the DS-TransUNet [50] and ST-Unet [51], we developed UGS-STUNet, a semantic segmentation encoder–decoder framework based on the Swin Transformer (ST) [52] for UGS classification. UGS-STUNet adopts a U-shaped architecture featuring a dual-branch encoder (Figure 3). This design consists of a primary ST-based encoder and an auxiliary convolutional neural network (CNN)-based encoder, aimed at synergizing their respective strengths. UGS-STUNet leverages self-attention mechanisms and a shifted windowing scheme to model global context and long-range spatial dependencies, while utilizing the CNN to model relationships between neighboring pixels for local feature extraction. UGS-STUNet uses a feature fusion module (FFM) to integrate global and local features, improving segmentation accuracy for spectrally similar UGS types. Additionally, a feature enhancement module (FEM) suppresses the influence of invalid channels on UGS classification, enhancing the model’s utilization of detailed features.
UGS-STUNet follows a symmetrical U-shaped topology comprising a main encoder, an auxiliary encoder, and a decoder, interconnected via skip connections to preserve spatial integrity. In the main encoder, the input Google Earth imagery $I \in \mathbb{R}^{H \times W \times 3}$ (where $H$ and $W$ denote the image height and width, respectively) is first processed by a patch partitioning module, which segments the image into non-overlapping patches. These patches are then mapped into a $C$-dimensional embedding space (48 in this study) via a linear projection layer. The embedded representations pass through three hierarchical feature extraction stages, each comprising two ST modules and one patch merging layer. The ST modules learn and extract image features, while the patch merging layer performs downsampling to halve the spatial resolution and double the feature dimension. The output feature map of the $i$-th stage ($i = 1, 2, 3$) can be expressed as $X_{ST}^{i} \in \mathbb{R}^{\frac{H}{2^{i+1}} \times \frac{W}{2^{i+1}} \times 2^{i-1}C_1}$, where $\mathbb{R}$ denotes the set of real numbers and $C_1 = 96$ denotes the channel depth. To complement the global context with local inductive biases, the original Google Earth imagery is simultaneously processed by an auxiliary encoder composed of residual blocks, whose output at the $i$-th stage is $X_{CNN}^{i} \in \mathbb{R}^{\frac{H}{2^{i+1}} \times \frac{W}{2^{i+1}} \times 2^{i-1}C_2}$, with $C_2 = 128$. To mitigate the semantic gap between the dual-branch features, the FFM integrates $X_{ST}^{i}$ and $X_{CNN}^{i}$ at each corresponding stage. This fused information is then propagated back into the primary encoder to refine the feature hierarchy. The decoder performs spatial reconstruction through a sequence of ST modules and patch expanding layers. Skip connections concatenate the refined encoder features with the decoder’s upsampled maps, effectively compensating for the information loss incurred during downsampling.
To prioritize informative features and suppress noise, the FEM is integrated within the decoding path to attenuate inter-channel redundancy. The patch expanding layer reshapes the feature maps of adjacent dimensions into a new feature map via 2 × 2 upsampling. The final patch expanding layer performs 4 × 4 upsampling to restore the feature map resolution to the input size. Finally, a linear projection layer is applied to generate the pixel-wise classification map across the predefined UGS categories.
Encoder. The encoder of UGS-STUNet comprises a ST-based main encoder and a CNN-based residual network auxiliary encoder. As shown in Figure 4a, the ST module leverages a shifted window to build a hierarchical representation. Central to this module are the Window-based Multi-head Self-Attention (W-MSA) and Shifted Window-based Multi-head Self-Attention (SW-MSA) mechanisms. By implementing W-MSA, the ST achieves linear computational complexity relative to image size, a significant optimization over the quadratic scaling of traditional MSA. Furthermore, the integration of SW-MSA addresses the limitations of isolated window processing by re-establishing inter-window correlations [52]. The structure of the residual block [53] is illustrated in Figure 4b. The residual block utilizes convolutional layers to learn features and employs 1 × 1 convolution kernels for dimensionality reduction and expansion. Adding the original feature matrix $x$ to the learned feature matrix $F(x, w)$ yields the final learned feature, a process known as residual learning. The residual unit can be expressed as $x_{l+1} = \mathrm{ReLU}[x_l + F(x_l, w_l)]$, where $x_{l+1}$ and $x_l$ denote the output and input of the residual unit, respectively; $\mathrm{ReLU}(\cdot)$ represents the ReLU activation function; and $F(x_l, w_l)$ represents the residual function with weights $w_l$.
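The residual unit reduces to a few lines; in this sketch `residual_fn` stands in for the block's learned convolutions, with the weights folded into the callable.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_unit(x, residual_fn):
    """x_{l+1} = ReLU[x_l + F(x_l, w_l)]: the identity shortcut is added to
    the learned residual before the final activation."""
    return relu(x + residual_fn(x))
```

Because the shortcut bypasses `residual_fn`, gradients can flow through the identity path even when the residual branch saturates, which is the motivation for residual learning.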
The FFM structure depicted in Figure 5a concatenates the global features from the main encoder with the local features from the auxiliary encoder, followed by sequential encoding of channel and spatial information to derive the final fused features. The concatenated feature map is $F_i \in \mathbb{R}^{\frac{H}{2^{i+1}} \times \frac{W}{2^{i+1}} \times C_3}$, where $C_3 = 2^{i-1}C_1 + 2^{i-1}C_2$. Average pooling and max pooling layers compute the statistical features of the feature map across channels, which are then forwarded to a shared fully connected layer. The channel importance weights are derived by summing the outputs of the two paths, specifically expressed as $F_i' = \mathrm{Sigmoid}(\mathrm{MLP}(\mathrm{AvgPool}(F_i)) + \mathrm{MLP}(\mathrm{MaxPool}(F_i)))$, where $\mathrm{Sigmoid}$ is the activation function, and $\mathrm{MLP}$ denotes a Multi-Layer Perceptron consisting of two neural network layers with ReLU as the activation function.
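A numpy sketch of the FFM channel-weight computation, reading the shared fully connected layer as a two-layer CBAM-style MLP. The weight matrices `w1`/`w2` and their sizes are our assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ffm_channel_weights(feat, w1, w2):
    """Channel weights: Sigmoid(MLP(AvgPool(F)) + MLP(MaxPool(F))).

    feat is an H×W×C map; the shared MLP is reduced to matrices
    w1 (C×C') and w2 (C'×C) with ReLU in between."""
    avg = feat.mean(axis=(0, 1))              # C-vector from average pooling
    mx = feat.max(axis=(0, 1))                # C-vector from max pooling
    mlp = lambda v: np.maximum(v @ w1, 0.0) @ w2
    return sigmoid(mlp(avg) + mlp(mx))        # per-channel importance in (0, 1)
```

The resulting weights would then scale the channels of the concatenated map before fusion.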
As illustrated in Figure 5b, the FEM is incorporated after the skip connections to emphasize the critical features required by the model and suppress the influence of invalid channels on UGS classification, thereby facilitating improved utilization of detailed features. First, channel-wise global average pooling and global max pooling are performed on the input feature map to obtain important information from the different channel feature maps. The kernel size $k$ is then adaptively determined from the channel dimension of the feature vector, and a 1D convolution with kernel size $k$ followed by an activation function extracts the inter-channel dependencies. Finally, the resulting per-channel weights are multiplied with the input feature map to obtain the final enhanced feature map. This process is expressed as $X_i' = X_i \times \mathrm{Sigmoid}(f_{1\times k}(\mathrm{AvgPool}(X_i)) + f_{1\times k}(\mathrm{MaxPool}(X_i)))$, where $X_i'$ and $X_i$ represent the output and input features of the FEM at the $i$-th stage, respectively, and $f_{1\times k}$ denotes a 1D convolutional layer with a kernel size of $1 \times k$.
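The FEM re-weighting can be sketched as follows; the learned 1 × k convolution kernel is replaced by a simple averaging kernel for illustration, and the fixed k=3 stands in for the adaptive kernel-size selection.

```python
import numpy as np

def fem(feat, k=3):
    """FEM sketch: X' = X * Sigmoid(f_{1xk}(AvgPool(X)) + f_{1xk}(MaxPool(X))).

    The 1-D convolution over the channel descriptor is emulated with an
    averaging kernel of size k (the real module learns these weights)."""
    avg = feat.mean(axis=(0, 1))              # channel descriptor (avg pool)
    mx = feat.max(axis=(0, 1))                # channel descriptor (max pool)
    kernel = np.ones(k) / k                   # stand-in for learned 1-D conv
    conv = lambda v: np.convolve(v, kernel, mode="same")
    w = 1.0 / (1.0 + np.exp(-(conv(avg) + conv(mx))))
    return feat * w                           # re-weight channels of the input
```

Operating on the pooled channel vector keeps the attention cost independent of spatial resolution, which is the appeal of this ECA-style design.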
Loss function. Finally, a joint loss function combining the Weighted Cross-Entropy (WCE) [54] loss and the Dice loss function is employed to supervise the model. The WCE loss measures pixel-level similarity between sample labels and prediction results for different classes using corresponding weights, whereas the Dice loss evaluates the overall similarity between ground truth samples and prediction results. The WCE loss function is defined as $l_{WCE} = -\frac{1}{N}\sum_{i}\sum_{C=1}^{M} \omega_C\, y_i^C \log P_i^C$, with $\omega_C = \mathrm{Median}(F_s)/F_C$, where $N$ denotes the total number of pixels in the input image; $M$ represents the number of classes; $P_i^C$ represents the probability that pixel $i$ belongs to class $C$; $y_i^C$ denotes the ground-truth indicator that pixel $i$ belongs to class $C$; $\omega_C$ represents the weight of class $C$; $F_C$ represents the frequency of class $C$; and $F_s$ represents the set of frequencies for all classes. The Dice loss function is often calculated using a confusion matrix, with the formula $l_{Dice} = 1 - \frac{2N_{TP}}{2N_{TP} + N_{FN} + N_{FP}}$, where $N_{TP}$, $N_{FN}$, and $N_{FP}$ represent the number of true positive pixels, false negative pixels, and false positive pixels, respectively. The final joint loss function is expressed as $l = l_{WCE} + l_{Dice}$.
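A minimal numpy version of the joint loss for a flattened pixel array, shown for the binary case; the epsilon guard against log(0) and division by zero is our addition.

```python
import numpy as np

def joint_loss(probs, labels):
    """Joint WCE + Dice loss (binary case, flat N×M arrays).

    probs: predicted class probabilities per pixel (N×M); labels: one-hot
    ground truth (N×M). Class weights follow w_C = median(F_s)/F_C with
    F_C the frequency of class C in the labels."""
    eps = 1e-7
    freq = labels.mean(axis=0)                     # per-class frequency F_C
    w = np.median(freq) / (freq + eps)             # WCE class weights
    wce = -np.mean(np.sum(w * labels * np.log(probs + eps), axis=1))
    pred = probs.argmax(axis=1) == 1               # positives for Dice term
    true = labels.argmax(axis=1) == 1
    tp = np.sum(pred & true)
    fp = np.sum(pred & ~true)
    fn = np.sum(~pred & true)
    dice = 1.0 - 2.0 * tp / (2.0 * tp + fn + fp + eps)
    return wce + dice
```

The frequency-based weighting counteracts class imbalance (e.g., the small areal share of street trees relative to forest patches), while the Dice term supervises region-level overlap.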

2.4. Spectral Indices Construction

To quantitatively characterize the spectral composition of ALAN, we formulated two spectral color ratios. These metrics act as diagnostic proxies for the coolness or warmth of the light, effectively reflecting the underlying lighting technologies and their potential ecological impacts.
Blue-to-Green Ratio ($R_{BG}$). The $R_{BG}$ serves as a primary indicator of short-wavelength emissions. High $R_{BG}$ values are typically associated with broad-spectrum white LEDs, which possess a significant blue-light component known to be highly disruptive to nocturnal circadian rhythms and phototropic responses in vegetation. It is calculated as
$R_{BG} = L_{Blue} / L_{Green}$,
Green-to-Red Ratio ($R_{GR}$). $R_{GR}$ is designed to differentiate modern lighting from legacy infrastructure such as HPS lamps, which emit a predominantly amber-to-red spectrum and thus yield lower $R_{GR}$ values. This ratio helps identify “warm” lighting zones:
$R_{GR} = L_{Green} / L_{Red}$,
The selection of these specific ratios is grounded in two scientific considerations. First, the B/G and G/R ratios capture the spectral slope of ALAN, which is essential for discriminating between dominant urban light sources (e.g., the distinct red-peak of HPS vs. the blue-peak of cold-white LEDs). Second, since this study focuses on UGS, the green band is strategically utilized as a common denominator. In vegetated environments, the spectral reflectance of the canopy (which reflects green light while absorbing blue and red) interacts with downward ALAN, making these ratios highly sensitive to the synergistic effect between light quality and vegetation structure.
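Given calibrated band radiances, both ratios follow directly; returning NaN for zero denominators (unlit pixels) is our handling choice, not one specified in the paper.

```python
import numpy as np

def spectral_ratios(blue, green, red):
    """Per-pixel B/G and G/R ratios from calibrated SDGSAT-1 radiances.

    Pixels with a zero denominator are returned as NaN so they can be
    excluded from histograms and zonal averages."""
    b, g, r = (np.asarray(a, dtype=float) for a in (blue, green, red))
    with np.errstate(divide="ignore", invalid="ignore"):
        bg = np.where(g > 0, b / g, np.nan)    # R_BG = L_Blue / L_Green
        gr = np.where(r > 0, g / r, np.nan)    # R_GR = L_Green / L_Red
    return bg, gr
```

In practice these arrays would then be masked by the UGS classification map before computing per-typology statistics.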

2.5. Evaluation Metrics

To rigorously assess the performance of the UGS-STUNet classification, we employed three standard quantitative metrics: recall, precision, and F1 score. The F1 score, calculated as the harmonic mean of precision and recall, provides a robust measure of model accuracy by balancing commission errors (false positives) and omission errors (false negatives). These metrics are mathematically defined as follows:
$\mathrm{Precision} = N_{TP} / (N_{TP} + N_{FP})$,
$\mathrm{Recall} = N_{TP} / (N_{TP} + N_{FN})$,
$F1\ \mathrm{score} = 2 \times (\mathrm{Precision} \times \mathrm{Recall}) / (\mathrm{Precision} + \mathrm{Recall})$,
where $N_{TP}$, $N_{FN}$, and $N_{FP}$ represent the number of true positive pixels, false negative pixels, and false positive pixels, respectively.
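The three metrics in compact form, computed from the pixel counts defined above:

```python
def prf1(tp, fp, fn):
    """Precision, recall and F1 from true-positive, false-positive and
    false-negative pixel counts (assumes tp+fp and tp+fn are nonzero)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

As the harmonic mean, F1 is pulled toward the weaker of the two components, so a high F1 requires both low commission and low omission error.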

3. Results

3.1. Accuracy Evaluation of UGS Extraction

The spatial distribution of the extracted UGS across Shanghai is depicted in Figure 6. To better evaluate the model’s performance in high-density urban environments, two representative sub-regions were magnified to highlight local details (Figure 6, insets). Qualitative visual inspection reveals a high degree of spatial congruence between the UGS-STUNet predictions and the high-resolution imagery, with the model effectively capturing the complex geometries of both large-scale parks and fragmented linear corridors. While visual comparisons suggest robust performance, a quantitative validation was conducted to ensure objective reliability. In the absence of an exhaustive ground-truth database for the entire area, accuracy was benchmarked against two representative sample tiles, with reference data generated via expert interpretation. The UGS-STUNet achieved a precision of 85.72% and an F1-score of 83.73%, demonstrating satisfactory performance. Notably, a recall of 81.83% indicates a relatively low omission rate, which is critical for ensuring the comprehensive mapping of UGS amidst complex urban spectral signatures. Based on the extracted UGS, we quantified the areal extent of the five main types of UGS: green land (540.88 km2), forest patches (786.27 km2), forest belts (64.59 km2), street trees (23.59 km2, the smallest), and other green spaces (24.76 km2). These findings underscore Shanghai’s ongoing efforts to integrate diverse ecological structural elements into its dense urban fabric.

3.2. Distinct Spectral Composition of Nighttime Lighting in UGS

The spectral composition of NTL in UGS, characterized by the intensity histograms (Figure 7) and average NTL intensity values, exhibits distinct patterns across different UGS types. In the red band, street trees exhibit the most pronounced radiance profile, characterized by a broad distribution peaking between 120 and 150 nW/cm2/sr and a high average intensity of 189.99 nW/cm2/sr. This signature indicates maximal exposure to ALAN, primarily stemming from their immediate proximity to high-intensity roadway luminaires and urban infrastructure.
In stark contrast, forest patches, forest belts, and other green spaces display highly constrained, low-intensity distributions, typically concentrated below 30 nW/cm2/sr. The mean red-band radiance for these categories ranges from 10.87 to 33.04 nW/cm2/sr, suggesting either a lack of direct lighting or significant radiometric attenuation provided by dense canopy structures. Green land, while maintaining a moderate average intensity of 98.90 nW/cm2/sr, exhibits a more dispersed spectral distribution with several secondary peaks. This reflects the highly heterogeneous lighting environments inherent in expansive and often fragmented UGS, where open lawns may be intermittently exposed to varying intensities of spill light from adjacent urban zones.
The green and blue bands exhibit spatial trends consistent with those observed in the red spectrum, albeit with markedly lower absolute intensities, thereby confirming the radiometric dominance of long-wavelength (red) emissions in Shanghai’s nocturnal landscape. Street trees consistently exhibit the highest radiance levels in both green (29.08 nW/cm2/sr) and blue (15.62 nW/cm2/sr) channels. However, their probability density distributions in these bands are notably narrower than in the red band, suggesting that while red light is most prominent, green and blue components are still notable. For green land, a sharp attenuation in radiance is observed for both green and blue bands beyond 30 nW/cm2/sr, with average values recorded at 15.42 and 8.18 nW/cm2/sr, respectively. This trend suggests that lower-wavelength radiation is either less prevalent in expansive open spaces or undergoes more intensive absorption/scattering within the near-surface environment. Conversely, forest patches, forest belts, and other green spaces maintain minimal NTL intensities in these bands (averages below 6 nW/cm2/sr), further corroborating the minimal penetration of ALAN into these dense ecological zones. The consistency in low-intensity dominance for green and blue bands across most UGS typologies underscores the spectral selectivity of nighttime lighting, where red wavelengths dominate due to common light source spectra and vegetation’s differential reflectance properties.
Figure 8 shows the maps of the spatial distributions of B/G (a) and G/R (b) ratios across Shanghai’s UGS. In the B/G ratio map (a), higher values are concentrated within forest patches and other green spaces characterized by minimal ALAN exposure, while lower values dominate street trees and urbanized areas, reflecting the influence of red-dominated street lighting. Similarly, the G/R ratio map (b) reveals that elevated ratios are observed in vegetation-dense or less illuminated zones. In contrast, street trees exhibit significantly lower ratios, a direct consequence of the intense red-spectral bias emitted by prevalent roadway luminaires (e.g., HPS lamps). These spatial patterns highlight the heterogeneity of nighttime lighting spectra across UGS, driven by UGS type, proximity to urban infrastructure, and vegetation cover, underscoring the need for targeted lighting strategies to mitigate ecological impacts.
The B/G ratio histograms (Figure 9) reveal distinct spectral characteristics across different UGS typologies. Street trees exhibit a concentrated B/G distribution with a peak between 0.3 and 0.6 and the lowest average B/G value (0.46), indicating that blue light intensity is significantly lower than that of green light in these areas. This is likely attributed to the dominance of street lighting (e.g., HPS lamps), which emits minimal blue light, combined with the proximity of street trees to such light sources. In contrast, forest patches, forest belts, and other green spaces show B/G distributions shifted toward higher values, with peaks concentrated between 1.1 and 1.4 and average values ranging from 1.19 to 1.42, suggesting relatively higher blue light intensity compared to green light. This could result from their greater distance from direct ALAN, where natural or diffuse lighting (with a higher blue component) or vegetation reflectance (e.g., leaves reflecting more blue light) play a more prominent role. Green land serves as a transitional category, exhibiting a broad B/G distribution with an average of 0.56. This wide spectral variance reflects the inherent landscape heterogeneity of Shanghai’s open green spaces, which encompass both high-exposure areas adjacent to urban arterials (low B/G) and more isolated interior lawns that mirror the spectral characteristics of forested zones (higher B/G).
The G/R ratio provides insights into the relative dominance of green versus red wavelengths within the urban nocturnal environment. Street trees have the lowest average G/R value (0.18), with a histogram peaking sharply between 0.1 and 0.3. This observation is highly consistent with the strongly red-dominated spectrum of street lamps, the primary light source in these areas. Other green spaces, however, show the highest average G/R ratio (0.37), with a distribution peak shifted toward 0.2–0.4, indicating a relatively greater contribution of green light. This may be due to higher vegetation cover, which reflects more green light, or to lighting with a more balanced spectrum (e.g., some LED fixtures) in these spaces. Forest patches and forest belts yield intermediate G/R averages (0.32–0.33), with histograms peaking at 0.2–0.4, suggesting a moderate green light component, likely a combination of vegetation reflectance and limited ALAN influence. Green land, with an average G/R ratio of 0.24 and a histogram peaking at 0.2–0.3, bridges the gap between street trees (low G/R) and other green spaces (high G/R), reflecting its mixed composition of areas exposed to street lighting and more natural, vegetation-rich zones.
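The band-ratio mapping described above can be sketched in a few lines; the following is a minimal illustration assuming calibrated SDGSAT-1 GLI band radiances held as NumPy arrays (the function names, toy values, and label raster are our own illustrative assumptions, not taken from the study):

```python
import numpy as np

def spectral_ratios(blue, green, red, eps=1e-6):
    """Per-pixel blue-to-green (B/G) and green-to-red (G/R) radiance ratios.

    blue, green, red: 2D arrays of calibrated band radiance (nW/cm^2/sr).
    Pixels with near-zero denominators are masked as NaN.
    """
    blue, green, red = (a.astype(float) for a in (blue, green, red))
    bg = np.where(green > eps, blue / np.maximum(green, eps), np.nan)
    gr = np.where(red > eps, green / np.maximum(red, eps), np.nan)
    return bg, gr

def mean_ratio_by_type(ratio, ugs_mask, type_ids):
    """Average a ratio map within each UGS typology given a label raster."""
    return {t: np.nanmean(ratio[ugs_mask == t]) for t in type_ids}

# Toy 2x2 scene with illustrative radiances and hypothetical UGS labels
blue  = np.array([[10.0, 5.0], [1.0, 2.0]])
green = np.array([[20.0, 5.0], [1.0, 2.0]])
red   = np.array([[110.0, 10.0], [1.0, 2.0]])
ugs_mask = np.array([[1, 1], [2, 2]])  # 1 and 2 are hypothetical typology IDs

bg, gr = spectral_ratios(blue, green, red)
means = mean_ratio_by_type(bg, ugs_mask, [1, 2])
```

Zonal statistics of this kind, aggregated over the classified UGS polygons, yield the per-typology averages and histograms reported in Figures 8 and 9.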
To further elucidate the intra-urban spectral heterogeneity, Figure 10 presents a comparative analysis of the B/G and G/R ratios for four parks in Shanghai. Each site exhibits a unique spectral fingerprint, reflecting the interplay between park-specific management and surrounding urban infrastructure. The Shanghai Botanical Garden yielded a B/G of 0.68 (relatively high blue intensity) and G/R of 0.17 (high red intensity). The relatively high B/G ratio suggests a notable contribution of short-wavelength emissions, potentially originating from cold-white LED landscape lighting aimed at enhancing nighttime esthetics. Shanghai Century Park displayed a B/G of 0.58 and a G/R of 0.14. Both metrics are lower than those of the Botanical Garden, indicating a shift toward longer wavelengths and higher radiometric dominance of the red spectrum. Binjiang Forest Park exhibited moderate values (B/G: 0.49; G/R: 0.21). Situated in a more peripheral location, its spectral signature may reflect a mixture of attenuated urban skyglow and localized low-intensity illumination. Xijiao Manor recorded the lowest spectral ratios (B/G: 0.38; G/R: 0.13), signifying the most pronounced red-light dominance among the four sites. This low-ratio profile indicates a nocturnal environment heavily influenced by direct spectral intrusion from adjacent HPS roadway lighting. These site-specific variations underscore that the spectral composition within UGS is not uniform but is tightly coupled with lighting functional zones. For instance, the prevalence of blue-toned landscape lighting in botanical gardens contrasts with the amber-dominated environments of parks adjacent to major transit corridors. Such spatiospectral divergence emphasizes the necessity of considering the ecological context and the specific luminaire typologies when assessing the environmental impacts of ALAN on urban biodiversity.

4. Discussion

4.1. Comparison Between the Proposed Method and Existing Methods

To rigorously evaluate the efficacy of the proposed UGS-STUNet, we conducted a comparative analysis against several benchmark semantic segmentation architectures, including FCN [55], U-Net [56], and DeeplabV3+ [57]. The quantitative results, summarized in Table 2, demonstrate that UGS-STUNet consistently outperforms the existing models across all evaluation metrics. Its superior performance can be attributed to the dual-branch architecture, which synergistically integrates global contextual modeling via the ST branch with local feature refinement via the CNN branch. FCN replaces fully connected layers with convolutional operations but lacks a mechanism for modeling long-range dependencies between pixels; it largely ignores spatial consistency information, a critical factor for delineating fragmented UGS, and consequently yields the lowest precision (75.69%). U-Net leverages skip connections to fuse low-level spatial details with high-level semantic features; while it maintains better boundary integrity and achieves a respectable precision of 78.03%, its fixed-size receptive field limits its ability to capture the diverse scales of UGS. DeeplabV3+ effectively captures multi-scale features, leading to a significant performance gain over FCN with a precision of 80.12%.
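For reference, the reported precision, recall, and F1 score can be derived from a per-class confusion matrix over the flattened label maps. The sketch below assumes macro averaging over classes, since the exact averaging scheme is not restated in this section:

```python
import numpy as np

def segmentation_metrics(y_true, y_pred, num_classes):
    """Macro-averaged precision, recall, and F1 from flattened label maps.

    Assumption: macro averaging over classes; the paper's exact
    averaging scheme may differ.
    """
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true.ravel(), y_pred.ravel()):
        cm[t, p] += 1                     # rows: reference, cols: predicted
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)   # per predicted class
    recall = tp / np.maximum(cm.sum(axis=1), 1)      # per reference class
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return precision.mean(), recall.mean(), f1.mean()

# Toy example with two classes
p, r, f = segmentation_metrics(np.array([0, 0, 1, 1]),
                               np.array([0, 1, 1, 1]), 2)
```

In practice these statistics are accumulated over all test tiles before averaging, so that sparsely represented UGS classes are not dominated by tile-level noise.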
To further validate the robustness of UGS-STUNet in extracting diverse UGS typologies from high-resolution Google Earth imagery, we conducted a qualitative comparison against the benchmark models, with representative segmentation results visualized in Figure 11. Owing to its inability to capture inter-pixel correlations, FCN fails to maintain spatial continuity, leading to severe misclassification between spectrally similar UGS types. U-Net produces good overall classification results, but without global context modeling it still confuses similar UGS types. DeeplabV3+ employs atrous convolution to obtain a larger receptive field and shows a substantial improvement over FCN; however, it does not effectively resolve shadow occlusion, and some impervious surfaces obscured by building shadows are misclassified as UGS.

4.2. Ablation Study and Module Contribution Analysis

To systematically evaluate the individual contributions of the proposed structural components within UGS-STUNet, we conducted a series of ablation experiments. The baseline architecture was defined as a U-Net structure utilizing ST modules as the core encoder–decoder backbone. All comparative configurations maintained identical hyperparameters to ensure consistency. The quantitative results of the ablation study are summarized in Table 3. The integration of the auxiliary residual encoder into the baseline yielded a significant performance boost, with recall and precision increasing by 1.91% and 1.22%, respectively. This enhancement demonstrates the necessity of the CNN branch in recovering fine-grained spatial textures that may be partially smoothed by the Transformer’s self-attention mechanism. Building upon the dual-branch backbone, the incorporation of the FFM resulted in a further gain of 1.10% in precision. The FFM acts as a bridge for multiscale information, effectively synergizing the global contextual cues from the ST-based main encoder with the localized spatial hierarchies extracted by the auxiliary encoder. This synergy is pivotal for reducing inter-class spectral ambiguity in complex urban scenes. Similarly, the independent addition of the FEM improved recall and precision by 0.22% and 1.12%, respectively. By implementing a channel-wise attention recalibration, the FEM successfully attenuates redundant spectral features while amplifying channels that carry highly discriminative information for UGS delineation. The comprehensive integration of both FFM and FEM culminated in the full UGS-STUNet model, which exhibited the highest overall performance—surpassing the baseline with a 1.35% increase in precision and a 0.59% increase in recall. 
These experimental findings rigorously validate that the FFM and FEM are not merely incremental additions but are functionally complementary, collectively ensuring high-accuracy UGS segmentation by balancing global reasoning with local detail preservation.
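The channel-wise attention recalibration attributed to the FEM can be illustrated generically. The sketch below is a squeeze-and-excitation-style stand-in with made-up weight shapes, not the paper's exact FEM formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Generic squeeze-and-excitation-style channel recalibration.

    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are
    learned bottleneck weights (reduction ratio r). This is only a
    sketch of the channel-wise attention idea, not the FEM itself.
    """
    squeeze = feat.mean(axis=(1, 2))                       # global pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # weights in (0, 1)
    return feat * excite[:, None, None]                    # rescale channels

# Toy usage with random features and weights (C = 4, reduction r = 2)
rng = np.random.default_rng(0)
feat = rng.normal(size=(4, 3, 3))
out = channel_attention(feat, rng.normal(size=(2, 4)), rng.normal(size=(4, 2)))
```

Because the excitation weights lie in (0, 1), uninformative channels are attenuated while discriminative ones are preserved, which matches the redundancy-suppression role the ablation attributes to the FEM.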

4.3. Implications for Urban Ecosystems and Urban Planning

Our spectral analysis uncovers distinct nocturnal light environments across Shanghai’s UGS network. The most critical finding is the extreme red-light dominance localized within street tree corridors: our data show that these areas are subjected to a mean red intensity of 189.99 nW/cm²/sr, nearly six times that of forest belts (33.04 nW/cm²/sr). This quantitative disparity suggests that street trees do not merely experience more light but an entirely different photobiological regime. Such excessive red-light exposure is known to shift the phytochrome photoequilibrium toward its active form, potentially disrupting the photoperiodic sensitivity of roadway vegetation more severely than in other UGS typologies [58]. Unlike natural twilight, which features a specific red to far-red ratio, the artificial red dominance identified here may inhibit the induction of winter dormancy, thereby compromising the seasonal acclimation and frost hardiness of urban street trees.
Furthermore, the spatial heterogeneity revealed in our B/G and G/R ratio maps (Figure 8) indicates that urban geometry functions as a spectral filter. While street trees exhibit a suppressed B/G ratio of 0.46, forest patches maintain a significantly higher ratio of 1.42. This is an essential distinction: it reveals that even in a high-density city like Shanghai, the structural density of forest canopies is sufficient to block direct, long-wavelength roadway lighting, leaving the forest interior exposed primarily to diffuse, blue-enriched skyglow. This creates spectral niches within the city; our findings imply that urban forests may act as refugia for species that are sensitive to red light but may conversely face unique pressures from blue-light-induced disruptions to their circadian rhythms or stomatal regulation [59,60]. However, we must acknowledge the radiometric complexity inherent in these indices. The derived ratios represent a convolution of the downward ALAN spectrum and the directional reflectance of the vegetation canopy. Since chlorophyll strongly absorbs blue and red photons while reflecting green light, the observed G/R and B/G values are modulated by the biological properties of the canopy [61]; under identical illumination, this can yield higher G/R (and lower B/G) ratios over vegetation than over spectrally neutral bare surfaces. Therefore, the spectral indices in this study reflect the lighting environment of UGS rather than purely the attributes of the light sources. Future research should utilize spectral unmixing or radiative transfer modeling to decouple the vegetation reflectance component from the absolute properties of the ALAN source.
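This convolution of source spectrum and canopy reflectance can be made concrete with a toy calculation, assuming Lambertian reflection and purely illustrative radiance and reflectance values (none of these numbers come from the study):

```python
# Toy illustration: observed band ratio = source ratio x reflectance ratio.
# All numbers are illustrative; a Lambertian surface is assumed.
source = {"B": 10.0, "G": 30.0, "R": 100.0}   # hypothetical downward ALAN radiance

# Hypothetical band reflectances: chlorophyll absorbs blue and red while
# reflecting green; bare ground is taken as spectrally flat.
veg = {"B": 0.05, "G": 0.15, "R": 0.05}
bare = {"B": 0.20, "G": 0.20, "R": 0.20}

def observed_ratio(source, rho, num, den):
    """Satellite-observed ratio of reflected radiance in two bands."""
    return (source[num] * rho[num]) / (source[den] * rho[den])

gr_veg = observed_ratio(source, veg, "G", "R")    # 0.3 * (0.15 / 0.05) = 0.9
gr_bare = observed_ratio(source, bare, "G", "R")  # 0.3 * (0.20 / 0.20) = 0.3
```

Under the same illumination, the two surfaces differ threefold in observed G/R, which is why these indices characterize the lighting environment of the canopy rather than the source spectrum alone.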
From a biodiversity perspective, the identification of blue-light hotspots within UGS categories (as shown in our spectral maps) is particularly concerning. Blue light is a potent disruptor of insect navigation and deciduous tree phenology [62,63]. Our findings will allow ecologists to identify specific spectral corridors that may be hindering the movement of light-sensitive species [64]. From a sustainable planning perspective, the ability of SDGSAT-1 to monitor the spectral distribution of city lights represents a significant advancement for SDG 11 (Sustainable Cities and Communities). Our spectral maps provide urban planners with a precise evidence-based tool to prioritize the retrofitting of warm-correlated color temperature LEDs or the implementation of shielding geometries in high-red-intensity zones. Beyond mere energy-efficiency metrics, this method offers an objective framework for cities to evaluate the success of dark-sky initiatives, integrating ecological health as a key performance indicator [65]. By aligning urban lighting design with the physiological requirements of UGS, municipalities can foster more resilient and ecologically sustainable urban ecosystems.

4.4. Limitations

While the UGS-STUNet framework demonstrates robust efficacy in mapping UGS, several limitations must be acknowledged, beginning with annotation completeness for small UGS. With the increasing availability of high-resolution satellite imagery, the integration of multi-source data for UGS extraction remains an open avenue for research, and future studies will aim to leverage data with higher temporal resolution to characterize the spatiotemporal dynamics of UGS. In addition, because the SDGSAT-1 NTL data have a spatial resolution of 40 m, analysis of the spectral composition of nighttime lighting is inadequate for very small UGS. Small UGS, such as street-side green belts, pocket parks, or individual street trees, often occupy areas comparable to or smaller than a single pixel. At 40 m resolution, these small UGS may therefore be mixed with surrounding non-green elements (such as roads, buildings, or other land covers) within one pixel. This mixing causes spectral contamination: the recorded spectral signal of a pixel is a composite of the UGS and adjacent land covers rather than the pure spectral signature of the UGS itself. Future studies should explore sub-pixel unmixing algorithms or the integration of higher-resolution commercial NTL data to refine the spectral signatures of these fine-grained ecological components. Moreover, the SDGSAT-1 GLI imagery exhibits non-negligible salt-and-pepper and striping noise, particularly in low-light areas such as park green spaces. This noise can propagate through the calculation of spectral indices, introducing pixel-level uncertainties. There is a clear need for advanced denoising algorithms tailored to NTL data to enhance signal-to-noise ratios in sparsely lit areas.
Beyond sensor specifications, we must acknowledge the limitations of top-down remote sensing, such as the shielding effect of tree canopies, which may mask ground-level lighting. Future research should integrate SDGSAT-1 data with ground-based hemispherical photography to create a three-dimensional model of the urban light field. Additionally, the sensitivity of NTL spectral indices to observation angle and atmospheric conditions was not fully accounted for in this study. Previous studies have demonstrated that angular effects and atmospheric conditions can cause inconsistencies in multi-temporal NTL observations of the same area [66,67], introducing uncertainty into NTL data. Consequently, although the SDGSAT-1 data were acquired under specific conditions, the influence of these factors on spectral composition may vary across time and locations. Future work should employ angle-corrected and atmospherically corrected NTL datasets to quantify the differences caused by varying observation angles and atmospheric conditions. Finally, this study was constrained by the availability of a single-epoch SDGSAT-1 acquisition, which limits its ability to capture the temporal evolution of lighting spectral characteristics in UGS. Future work should therefore incorporate multi-temporal SDGSAT-1 NTL data to explore the long-term spectral evolution of urban ecosystems. Such efforts will be pivotal for transitioning from static mapping to dynamic ecological monitoring.

5. Conclusions

As we strive toward more sustainable cities, maintaining the spectral integrity of UGS will be essential for the health of both urban biodiversity and human residents. The integration of SDGSAT-1 NTL data into urban ecology provides a vital instrument for balancing human safety and economic activity with the preservation of the natural nocturnal environment. This study developed and validated UGS-STUNet, an ST-based encoder–decoder framework optimized for the accurate extraction of UGS. By synergizing the global modeling capabilities of Transformers with the local feature extraction of CNNs, the proposed model achieved a precision of 85.72% and an F1 score of 83.73%, significantly outperforming traditional benchmark architectures. More importantly, this research leverages the multispectral capabilities of SDGSAT-1 NTL data to uncover the spectral heterogeneity across Shanghai’s UGS. Our findings reveal a stark dichotomy in light exposure across UGS typologies: street trees are subjected to the most intense artificial lighting, characterized by the highest red intensity and the lowest B/G and G/R ratios, whereas forest patches, forest belts, and other green spaces exhibit a more natural spectral profile with lower red intensity and higher B/G and G/R ratios. Overall, the spectral composition of nighttime lighting in Shanghai’s green spaces is highly heterogeneous, highlighting the need for spectrum-conscious urban planning to mitigate ecological impacts on vegetation.
The transition from panchromatic to multispectral nighttime remote sensing represents a fundamental paradigm shift in urban ecological monitoring. This study demonstrates that we have moved beyond simply quantifying how bright a city is to a nuanced understanding of what color its nocturnal environment has become. Future research will focus on integrating SDGSAT-1 NTL data with ground-based hemispherical photography to construct a three-dimensional urban light field model and incorporate multi-period SDGSAT-1 NTL data to investigate spectral changes in UGS. Ultimately, these advancements will provide an objective, evidence-based instrument for fostering more resilient and ecologically sustainable metropolitan environments.

Author Contributions

Conceptualization, Y.Y. and B.W. (Bin Wu); methodology, Y.Y., Z.L., H.L. and B.W. (Bin Wu); software, Z.Z. and J.L.; validation, Y.Y., Z.L., H.L., B.W. (Boyang Wang) and Y.X.; formal analysis, Y.Y., Z.L., H.L., B.W. (Boyang Wang), Y.X., Z.Z., J.L. and B.W. (Bin Wu); investigation, Y.Y., Z.L. and H.L.; data curation, Z.Z., J.L. and B.W. (Bin Wu); writing—original draft preparation, Y.Y.; writing—review and editing, B.W. (Bin Wu); visualization, Y.Y., Z.L., H.L. and B.W. (Bin Wu); supervision, B.W. (Bin Wu); funding acquisition, B.W. (Bin Wu). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 42571531 and 42274226) and the Guangdong Basic and Applied Basic Research Foundation (No. 2023A1515012487).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The research findings are a component of the SDGSAT-1 Open Science Program, conducted by the International Research Center of Big Data for Sustainable Development Goals (CBAS). The data utilized in this study are sourced from SDGSAT-1 and provided by CBAS.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bennie, J.; Davies, T.W.; Duffy, J.P.; Inger, R.; Gaston, K.J. Contrasting trends in light pollution across Europe based on satellite observed night time lights. Sci. Rep. 2014, 4, 3789. [Google Scholar] [CrossRef]
  2. Zielinska-Dabkowska, K.M.; Schernhammer, E.S.; Hanifin, J.P.; Brainard, G.C. Reducing nighttime light exposure in the urban environment to benefit human health and society. Science 2023, 380, 1130–1135. [Google Scholar] [CrossRef]
  3. Johnston, A.S.A.; Kim, J.; Harris, J.A. Widespread influence of artificial light at night on ecosystem metabolism. Nat. Clim. Change 2025, 15, 1371–1377. [Google Scholar] [CrossRef]
  4. Hu, Y.; Zhang, Y. Global nighttime light change from 1992 to 2017: Brighter and more uniform. Sustainability 2020, 12, 4905. [Google Scholar] [CrossRef]
  5. Li, D.; Zhao, X.; Li, X. Remote sensing of human beings—A perspective from nighttime light. Geo-Spat. Inf. Sci. 2016, 19, 69–79. [Google Scholar] [CrossRef]
  6. Li, G.; Cao, Y.; Fang, C.; Sun, S.; Qi, W.; Wang, Z.; He, S.; Yang, Z. Global urban greening and its implication for urban heat mitigation. Proc. Natl. Acad. Sci. USA 2025, 122, e2417179122. [Google Scholar] [CrossRef]
  7. Falchi, F.; Cinzano, P.; Duriscoe, D.; Kyba, C.C.M.; Elvidge, C.D.; Baugh, K.; Portnov, B.A.; Rybnikova, N.A.; Furgoni, R. The new world atlas of artificial night sky brightness. Sci. Adv. 2016, 2, e1600377. [Google Scholar] [CrossRef] [PubMed]
  8. Katabaro, J.M.; Yan, Y.; Hu, T.; Yu, Q.; Cheng, X. A review of the effects of artificial light at night in urban areas on the ecosystem level and the remedial measures. Front. Public Health 2022, 10, 969945. [Google Scholar] [CrossRef] [PubMed]
  9. Friulla, L.; Varone, L. Artificial Light at Night (ALAN) as an emerging urban stressor for tree phenology and physiology: A review. Urban Sci. 2025, 9, 14. [Google Scholar] [CrossRef]
  10. Kuang, W.; Dou, Y. Investigating the patterns and dynamics of urban green space in China’s 70 major cities using satellite remote sensing. Remote Sens. 2020, 12, 1929. [Google Scholar] [CrossRef]
  11. Zhang, B.; Xie, G.-d.; Li, N.; Wang, S. Effect of urban green space changes on the role of rainwater runoff reduction in Beijing, China. Landsc. Urban Plan. 2015, 140, 8–16. [Google Scholar] [CrossRef]
  12. Derdouri, A.; Murayama, Y.; Morimoto, T.; Wang, R.; Haji Mirza Aghasi, N. Urban green space in transition: A cross-continental perspective from eight Global North and South cities. Landsc. Urban Plan. 2025, 253, 105220. [Google Scholar] [CrossRef]
  13. Georgescu, M.; Morefield, P.E.; Bierwagen, B.G.; Weaver, C.P. Urban adaptation can roll back warming of emerging megapolitan regions. Proc. Natl. Acad. Sci. USA 2014, 111, 2909–2914. [Google Scholar] [CrossRef]
  14. Zhang, Y.; Wang, Y.; Ding, N.; Yang, X. Assessing the contributions of urban green space indices and spatial structure in mitigating urban thermal environment. Remote Sens. 2023, 15, 2414. [Google Scholar] [CrossRef]
  15. Yao, Z.; Liu, J.; Zhao, X.; Long, D.; Wang, L. Spatial dynamics of aboveground carbon stock in urban green space: A case study of Xi’an, China. J. Arid. Land 2015, 7, 350–360. [Google Scholar] [CrossRef]
  16. Liu, Z.; Liu, L.; Li, Y.; Li, X. Influence of urban green space landscape pattern on river water quality in a highly urbanized river network of Hangzhou city. J. Hydrol. 2023, 621, 129602. [Google Scholar] [CrossRef]
  17. Ward Thompson, C.; Roe, J.; Aspinall, P.; Mitchell, R.; Clow, A.; Miller, D. More green space is linked to less stress in deprived communities: Evidence from salivary cortisol patterns. Landsc. Urban Plan. 2012, 105, 221–229. [Google Scholar] [CrossRef]
  18. Hong, X.-C.; Zhang, D.-Y.; Hu, F.-B.; Guo, L.-H.; Liu, J.; Guo, H. Does urban green space form influence the spatial pattern of noise complaints? Sustain. Cities Soc. 2025, 130, 106506. [Google Scholar] [CrossRef]
  19. Fuller, R.A.; Irvine, K.N.; Devine-Wright, P.; Warren, P.H.; Gaston, K.J. Psychological benefits of greenspace increase with biodiversity. Biol. Lett. 2007, 3, 390–394. [Google Scholar] [CrossRef] [PubMed]
  20. De Ridder, K.; Adamec, V.; Bañuelos, A.; Bruse, M.; Bürger, M.; Damsgaard, O.; Dufek, J.; Hirsch, J.; Lefebre, F.; Pérez-Lacorzana, J.M.; et al. An integrated methodology to assess the benefits of urban green space. Sci. Total Environ. 2004, 334–335, 489–497. [Google Scholar] [CrossRef] [PubMed]
  21. Iwanicki, G.; Ściężor, T.; Tabaka, P.; Kotarba, A.Z.; Kunz, M.; Daab, D.; Kołton, A.; Kołomański, S.; Dłużewska, A.; Skorb, K. Integrating sustainable lighting into urban green space management: A case study of light pollution in Polish urban parks. Sustainability 2025, 17, 7833. [Google Scholar] [CrossRef]
  22. Ye, Y.; Tong, C.; Dong, B.; Huang, C.; Bao, H.; Deng, J. Alleviate light pollution by recognizing urban night-time light control area based on computer vision techniques and remote sensing imagery. Ecol. Indic. 2024, 158, 111591. [Google Scholar] [CrossRef]
  23. Lian, X.; Jiao, L.; Zhong, J.; Jia, Q.; Liu, J.; Liu, Z. Artificial light pollution inhibits plant phenology advance induced by climate warming. Environ. Pollut. 2021, 291, 118110. [Google Scholar] [CrossRef] [PubMed]
  24. Zheng, Q.; Teo, H.C.; Koh, L.P. Artificial light at night advances spring phenology in the united states. Remote Sens. 2021, 13, 399. [Google Scholar] [CrossRef]
  25. Shivanna, K.R. Impact of light pollution on nocturnal pollinators and their pollination services. Proc. Ind. Nat. Sci. Acad. 2022, 88, 626–633. [Google Scholar] [CrossRef]
  26. Ditmer, M.A.; Stoner, D.C.; Carter, N.H. Estimating the loss and fragmentation of dark environments in mammal ranges from light pollution. Biol. Conserv. 2021, 257, 109135. [Google Scholar] [CrossRef]
  27. Le Tallec, T.; Hozer, C.; Perret, M.; Théry, M. Light pollution and habitat fragmentation in the grey mouse lemur. Sci. Rep. 2024, 14, 1662. [Google Scholar] [CrossRef]
  28. Chen, B.; Wu, S.; Song, Y.; Webster, C.; Xu, B.; Gong, P. Contrasting inequality in human exposure to greenspace between cities of Global North and Global South. Nat. Commun. 2022, 13, 4636. [Google Scholar] [CrossRef]
  29. Wu, B.; Huang, H.; Wang, Y.; Shi, S.; Wu, J.; Yu, B. Global spatial patterns between nighttime light intensity and urban building morphology. Int. J. Appl. Earth Obs. Geoinf. 2023, 124, 103495. [Google Scholar] [CrossRef]
  30. Zheng, Q.; Seto, K.C.; Zhou, Y.; You, S.; Weng, Q. Nighttime light remote sensing for urban applications: Progress, challenges, and prospects. ISPRS J. Photogramm. Remote Sens. 2023, 202, 125–141. [Google Scholar] [CrossRef]
  31. Wu, B.; Song, Z.; Wu, Q.; Wu, J.; Yu, B. A vegetation nighttime condition index derived from the triangular feature space between nighttime light intensity and vegetation index. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5618115. [Google Scholar] [CrossRef]
  32. Sánchez de Miguel, A.; Bennie, J.; Rosenfeld, E.; Dzurjak, S.; Gaston, K.J. Environmental risks from artificial nighttime lighting widespread and increasing across Europe. Sci. Adv. 2022, 8, eabl6891. [Google Scholar] [CrossRef]
  33. Garrett, J.K.; Donald, P.F.; Gaston, K.J. Skyglow extends into the world’s Key Biodiversity Areas. Anim. Conserv. 2020, 23, 153–159. [Google Scholar] [CrossRef]
  34. Koen, E.L.; Minnaar, C.; Roever, C.L.; Boyles, J.G. Emerging threat of the 21st century lightscape to global biodiversity. Glob. Change Biol. 2018, 24, 2315–2324. [Google Scholar] [CrossRef] [PubMed]
  35. Bennie, J.; Duffy, J.P.; Davies, T.W.; Correa-Cano, M.E.; Gaston, K.J. Global trends in exposure to light pollution in natural terrestrial ecosystems. Remote Sens. 2015, 7, 2715–2730. [Google Scholar] [CrossRef]
  36. Gaston, K.J.; Duffy, J.P.; Bennie, J. Quantifying the erosion of natural darkness in the global protected area system. Conserv. Biol. 2015, 29, 1132–1141. [Google Scholar] [CrossRef]
  37. Zheng, Z.; Wu, Z.; Chen, Y.; Guo, G.; Cao, Z.; Yang, Z.; Marinello, F. Africa’s protected areas are brightening at night: A long-term light pollution monitor based on nighttime light imagery. Glob. Environ. Change 2021, 69, 102318. [Google Scholar] [CrossRef]
  38. Levin, N.; Kyba, C.C.M.; Zhang, Q.; Sánchez de Miguel, A.; Román, M.O.; Li, X.; Portnov, B.A.; Molthan, A.L.; Jechow, A.; Miller, S.D.; et al. Remote sensing of night lights: A review and an outlook for the future. Remote Sens. Environ. 2020, 237, 111443. [Google Scholar] [CrossRef]
  39. Guo, H.; Dou, C.; Liang, D.; Fu, B.; Chen, H.; Zou, Z.; Huang, P.; Li, X.; Chen, F.; Han, C.; et al. The SDGSAT-1 mission and its role in monitoring SDG indicators. Remote Sens. Environ. 2025, 328, 114885. [Google Scholar] [CrossRef]
  40. Huang, H.; Wu, B.; Wang, Y.; Yu, B.; Huang, H.; Zhang, W. Towards building floor-level nighttime light exposure assessment using SDGSAT-1 GLI data. ISPRS J. Photogramm. Remote Sens. 2025, 223, 375–397. [Google Scholar] [CrossRef]
  41. Wang, Y.; Huang, H.; Wu, B. Evaluating the potential of SDGSAT-1 glimmer imagery for urban road detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 785–794. [Google Scholar] [CrossRef]
  42. Guo, H.; Dou, C.; Chen, H.; Liu, J.; Fu, B.; Li, X.; Zou, Z.; Liang, D. SDGSAT-1: The world’s first scientific satellite for sustainable development goals. Sci. Bull. 2023, 68, 34–38. [Google Scholar] [CrossRef]
  43. Li, J.; Wang, Y.; Huang, H.; Zhang, Z.; Wu, B. Low and uneven rural road lighting coverage in Africa. Commun. Earth Environ. 2025, 6, 960. [Google Scholar] [CrossRef]
  44. Liu, S.; Zhou, Y.; Wang, F.; Wang, S.; Wang, Z.; Wang, Y.; Qin, G.; Wang, P.; Liu, M.; Huang, L. Lighting characteristics of public space in urban functional areas based on SDGSAT-1 glimmer imagery: A case study in Beijing, China. Remote Sens. Environ. 2024, 306, 114137. [Google Scholar] [CrossRef]
  45. Shen, Y.; Sun, F.; Che, Y. Public green spaces and human wellbeing: Mapping the spatial inequity and mismatching status of public green space in the Central City of Shanghai. Urban For. Urban Green. 2017, 27, 59–68. [Google Scholar] [CrossRef]
  46. Fan, P.; Xu, L.; Yue, W.; Chen, J. Accessibility of public urban green space in an urban periphery: The case of Shanghai. Landsc. Urban Plan. 2017, 165, 177–192. [Google Scholar] [CrossRef]
  47. Liu, S.; Wang, C.; Chen, Z.; Li, W.; Zhang, L.; Wu, B.; Huang, Y.; Li, Y.; Ni, J.; Wu, J.; et al. Efficacy of the SDGSAT-1 Glimmer Imagery in measuring sustainable development goal indicators 7.1.1, 11.5.2, and target 7.3. Remote Sens. Environ. 2024, 305, 114079. [Google Scholar] [CrossRef]
  48. Huang, H.; Wang, Y.; Li, J.; Zhang, Z.; Zhang, W.; Huang, H.; Wu, B. A linear structure-constrained denoising method for enhancing SDGSAT-1 GLI data. Remote Sens. Environ. 2026; in revision. [Google Scholar]
  49. Wu, B.; Wang, Y.; Huang, H.; Liu, S.; Yu, B. Potential of SDGSAT-1 nighttime light data in extracting urban main roads. Remote Sens. Environ. 2024, 315, 114448. [Google Scholar] [CrossRef]
  50. Lin, A.; Chen, B.; Xu, J.; Zhang, Z.; Lu, G.; Zhang, D. DS-TransUNet: Dual swin transformer U-Net for medical image segmentation. IEEE Trans. Instrum. Meas. 2022, 71, 4005615. [Google Scholar] [CrossRef]
  51. Zhang, J.; Qin, Q.; Ye, Q.; Ruan, T. ST-Unet: Swin Transformer boosted U-Net with Cross-Layer Feature Enhancement for medical image segmentation. Comput. Biol. Med. 2023, 153, 106516. [Google Scholar] [CrossRef]
  52. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 9992–10002.
  53. Shafiq, M.; Gu, Z. Deep residual learning for image recognition: A survey. Appl. Sci. 2022, 12, 8972.
  54. Özdemir, Ö.; Sönmez, E.B. Weighted cross-entropy for unbalanced data with application on COVID X-ray images. In Proceedings of the 2020 Innovations in Intelligent Systems and Applications Conference (ASYU), Istanbul, Turkey, 15–17 October 2020; pp. 1–6.
  55. Shelhamer, E.; Long, J.; Darrell, T. Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651.
  56. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241.
  57. Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder–decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818.
  58. Sheerin, D.J.; Hiltbrunner, A. Molecular mechanisms and ecological function of far-red light signalling. Plant Cell Environ. 2017, 40, 2509–2529.
  59. Chibani, K.; Gherli, H.; Fan, M. The role of blue light in plant stress responses: Modulation through photoreceptors and antioxidant mechanisms. Front. Plant Sci. 2025, 16, 1554281.
  60. Li, P.; Cheng, H.; Kumar, V.; Lupala, C.S.; Li, X.; Shi, Y.; Ma, C.; Joo, K.; Lee, J.; Liu, H.; et al. Direct experimental observation of blue-light-induced conformational change and intermolecular interactions of cryptochrome. Commun. Biol. 2022, 5, 1103.
  61. Kaiser, E.; Weerheim, K.; Schipper, R.; Dieleman, J.A. Partial replacement of red and blue by green light increases biomass and yield in tomato. Sci. Hortic. 2019, 249, 271–279.
  62. Brelsford, C.C.; Trasser, M.; Paris, T.; Hartikainen, S.M.; Robson, T.M. Understorey light quality affects leaf pigments and leaf phenology in different plant functional types. Physiol. Plant 2022, 174, e13723.
  63. Athanasiadou, M.; Schulz, M.; Meyhöfer, R. The effect of blue and UV light-emitting diodes (LEDs) on the disturbance of the whitefly natural enemies Macrolophus pygmaeus and Encarsia formosa. Biol. Control 2024, 199, 105663.
  64. Owens, A.C.S.; Lewis, S.M. The impact of artificial light at night on nocturnal insects: A review and synthesis. Ecol. Evol. 2018, 8, 11337–11358.
  65. Alva, A.; Brown, E.; Evans, A.; Morris, D.; Dunning, K. Dark Sky Parks: Public policy that turns off the lights. J. Environ. Plan. Manag. 2025, 68, 907–934.
  66. Shi, X.; Kocifaj, M.; Li, X.; Li, D.; Li, J. Impact of atmospheric effect and observation geometry on the directional distribution of blooming effect in VIIRS night-time light images. Remote Sens. Environ. 2025, 331, 115017.
  67. Tan, X.; Zhu, X.; Chen, J.; Chen, R. Modeling the direction and magnitude of angular effects in nighttime light remote sensing. Remote Sens. Environ. 2022, 269, 112834.
Figure 1. Study area. (a) The SDGSAT-1 GLI data covering Shanghai. (b,c) Zoomed-in views of the SDGSAT-1 GLI and Google Earth images, respectively, corresponding to the red box in (a).
Figure 2. Example of the image–label samples. Each column shows the different UGS types of samples, including (a) green land, (b) forest patches, (c) forest belts, (d) street trees, and (e) other green spaces.
Figure 3. The overall architecture of UGS-STUNet.
Figure 4. ST block (a) and residual block (b).
Figure 5. FFM (a) and FEM (b).
Figure 6. The extracted UGS in Shanghai. The red boxes mark the two magnified regions.
Figure 7. The intensity histograms of RGB bands for each type of UGS.
Figure 8. The distributions of B/G ratio (a) and G/R ratio (b).
Figure 9. The histograms of B/G and G/R ratios for each type of UGS.
Figure 10. Nighttime light maps of four large green spaces in Shanghai. (a) Botanical Garden, (b) Shanghai Century Park, (c) Binjiang Forest Park, and (d) Xijiao Manor. The green outlines denote the boundaries of the respective parks.
Figure 11. Visualization results of UGS segmentation for different methods. (a) Original image, (b) ground truth via visual interpretation, (c) FCN, (d) U-Net, (e) DeeplabV3+, and (f) UGS-STUNet.
Table 1. Radiometric calibration coefficients.
Band    Gain            Offset          Band Width (μm)
R       0.0000102744    0.0000099253    0.294
G       0.0000041779    0.0000060840    0.106
B       0.0000070119    0.0000136754    0.102
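Assuming the Gain/Offset pairs in Table 1 are applied with the usual linear calibration model (radiance L = DN × Gain + Offset), the per-pixel B/G and G/R ratios analyzed in Figures 8 and 9 can be computed directly from the calibrated bands. A minimal sketch (the function names are my own; the linear form is an assumption inferred from the table's layout):

```python
import numpy as np

# Radiometric calibration coefficients from Table 1 (SDGSAT-1 GLI).
COEFFS = {
    "R": (0.0000102744, 0.0000099253),
    "G": (0.0000041779, 0.0000060840),
    "B": (0.0000070119, 0.0000136754),
}

def dn_to_radiance(dn, band):
    """Convert raw digital numbers to radiance: L = DN * gain + offset."""
    gain, offset = COEFFS[band]
    return np.asarray(dn, dtype=np.float64) * gain + offset

def spectral_ratios(dn_r, dn_g, dn_b):
    """Per-pixel B/G and G/R radiance ratios from raw DN arrays."""
    r = dn_to_radiance(dn_r, "R")
    g = dn_to_radiance(dn_g, "G")
    b = dn_to_radiance(dn_b, "B")
    return b / g, g / r
```

For equal DN values across the three bands, the B/G ratio exceeds 1 simply because the blue band's gain is larger than the green band's, which is why the ratios should always be computed on calibrated radiance rather than raw DN.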
Table 2. Comparison of UGS segmentation results of different models.
Method        Precision (%)   Recall (%)   F1 Score (%)
FCN           75.69           67.87        71.57
U-Net         78.03           76.69        77.35
DeeplabV3+    80.12           78.24        79.17
UGS-STUNet    85.72           81.83        83.73
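The metrics reported in Tables 2 and 3 follow the standard pixel-level definitions; as a reference, a minimal sketch computing precision, recall, and F1 from hypothetical confusion-matrix counts (function names are my own):

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def segmentation_scores(tp, fp, fn):
    """Pixel-level precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall, f1_score(precision, recall)

# Cross-check: UGS-STUNet's reported precision and recall in Table 2
# reproduce its reported F1 score.
print(round(f1_score(85.72, 81.83), 2))  # → 83.73
```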
Table 3. Ablation study of different modules.
Method                                   Precision (%)   Recall (%)   F1 Score (%)
Baseline                                 83.15           79.33        81.19
Baseline + Residual block                84.37           81.24        82.77
Baseline + Residual block + FFM          85.47           81.42        83.39
Baseline + Residual block + FEM          85.49           81.46        83.42
Baseline + Residual block + FFM + FEM    85.72           81.83        83.73

Share and Cite

MDPI and ACS Style

Yuan, Y.; Lu, Z.; Liu, H.; Wang, B.; Xu, Y.; Zhang, Z.; Li, J.; Wu, B. Mapping Spectral Composition of Nighttime Lighting in Urban Green Spaces Using SDGSAT-1 NTL Data and Google Earth Imagery. Remote Sens. 2026, 18, 732. https://doi.org/10.3390/rs18050732
