Deep Feature Migration for Real-Time Mapping of Urban Street Shading Coverage Index Based on Street-Level Panorama Images

Abstract: Urban street shadows can provide essential information for many applications, such as the assessment and protection of the ecological environment and the evaluation of livability. In this research, we propose an effective and rapid method to quantify the diurnal and spatial changes of urban street shadows, taking Beijing as an example. In the method, we explore a novel way of transferring street characteristics to semantically segment street-level panoramic images of Beijing using DeepLabv3+. Based on the segmentation results, the shading situation is further estimated by projecting the sun's daily path onto the semantically segmented fisheye photos and applying our newly defined shading coverage index formula. Experimental results show that in several randomly selected sampling regions in Beijing, our method successfully detects more than 83% of the shading changes compared to the ground truth. The results of this method contribute to the study of urban livability and the evaluation of human life comfort. The quantitative evaluation method of the shading coverage index proposed in this research is readily generalizable and can be applied to shading-related research in other cities.


Introduction
Shadows are dark areas formed due to the occlusion of light from a light source [1]. There are often shadows on city streets, mainly generated by roadside buildings, trees and other urban landscape elements blocking sunlight. Urban street shadows are of great significance in ecological environment evaluation and protection, livability evaluation and the assessment of human life comfort, and studying them provides important parameters for these applications. For example, people engage in many social and leisure activities on the streets [2]. In summer, excess heat can affect pedestrians' comfort and lead to serious health risks [3,4]. Studies have found that extremely high temperatures increase mortality and morbidity in cities around the world [3,5,6], while shade can effectively alleviate the effects of high temperatures [7,8]. An accurate understanding of street shading and its changes is therefore important for reducing the harm caused by extreme heat and extreme weather to humans. Furthermore, the street is also a necessary element of urban space [9], and exploring the shadow situation of a street helps us better understand the characteristics of a city [10].
Street shadows affect cities in many respects, such as street thermal environments [7,10], residents' health [5], tourism [11,12], urban planning [2,13] and architectural design [13], and researchers have explored several of these aspects. For instance, Sun et al. [7] explored the relationship between urban street shadows and thermal vulnerability and proposed a research framework for assessing urban thermal vulnerability. Several other studies have confirmed the health effects of outdoor shade: increasing urban shade can reduce prolonged exposure to heat and ultraviolet rays, thereby reducing the risk of skin cancer [14,15], and Chen and Ng [10] verified that increasing urban shaded areas and public spaces leads to more outdoor exercise. One study showed that in tourist cities, foreign tourists are more sensitive to the thermal environment and more eager for shaded spaces in hot summer than residents [16]. From the perspective of student travel, urban sidewalk shadows have been shown to be positively correlated with students' willingness to travel in summer [17]. Similarly, Rodríguez-Algeciras et al. [18] used the level of shading as one of the research variables to assess urban livability and showed that shaded public open spaces can enhance living comfort and livability in cities.
The methods of shadow detection have been well explored. In small scenes such as a single street, researchers have made direct experimental observations of shadowed areas [19], which can capture the area and variation of shadows in detail. Some researchers use professional cameras to take fisheye photographs and calculate sky view factors to measure shadows [20]. The shading conditions obtained by these two methods can only represent a specific small area, and exploring the shading of a large area requires a large amount of manually collected data or fisheye photos [21]. In addition, the sky view factor can be used to calculate shading coverage, but it cannot capture the dynamic changes of shadows: since the sun's path in the sky changes from day to day, shading changes even at the same time of day [22]. Numerous two-dimensional [10,23] and three-dimensional [24] models have been developed to calculate urban-scale shadow conditions [25,26]. For example, researchers quantified the height/width (H/W) ratio of urban streets to evaluate shadows, and street canyons with a larger H/W ratio generally have higher shading levels [23,27]. However, these calculations usually simplify the structure of the street canyon by assuming certain parameters and conditions, without taking into account the effects of trees. Moreover, satellite imagery has also been used to quantify shading conditions. For instance, Moro et al. [28] integrated satellite imagery and geographic information system (GIS) simulation tools for the analysis of shading profiles. Nonetheless, images captured by satellites cannot clearly capture the finer details of streets on the ground [29,30]. Light detection and ranging (LiDAR) technology can effectively describe the geometrical details of urban areas [21,31] for calculating shading conditions, but its cost is relatively high, and it is not suitable for application over a large space [31].
With the rapid development of computer vision and sensor techniques, researchers have begun to use street-level imagery and deep learning-based methods to explore shadows. Some scholars have proposed a method to automatically calculate the sky view factor using street view images, which effectively avoids the labor cost of manual calculation [25]. Sky view factors obtained from street view images have been shown to have high accuracy [32,33], providing a solid basis for city-level shading analysis. By overlaying the sun's path at a given location at different times with the corresponding fisheye images, it is possible to simulate whether the light reaching the ground is blocked by objects such as buildings [12,34,35]. Currently, the evaluation of solar duration [32,35] and solar radiation [34,35] can be achieved using a combination of deep learning and imagery. However, these studies did not pay attention to the fine-grained variation of shadows at diurnal and spatial scales and lacked the precise time, location and temporal variation of shadows within a given spatial range. To the best of our knowledge, shade coverage usually refers to objects in space that block sunlight (such as buildings or trees) [19,35], but there is no index that directly quantifies shade coverage. Moreover, studies so far that use semantic segmentation for shading estimation require manual shadow labeling of data and training of models to achieve a certain segmentation accuracy [31,32,34], which is time-consuming and expensive [36,37]. In this regard, we explore a way of transferring street characteristics to semantically segment street-level panoramic images using DeepLabv3+ and quantify the diurnal and spatial changes of shading coverage in our research.
Specifically, the contributions of our research are as follows: (1) For the first time, we design an efficient shadow extraction method based on deep feature transfer, which can save the time overhead of semantic segmentation model retraining and achieve stable shadow extraction results.
(2) We innovatively define and calculate the shading coverage index to provide the foundation for the follow-up research.
(3) For the first time, we map the diurnal and spatial distribution of shading coverage in Beijing at the scales of kilometer grid and road sections, respectively.

Study Area
Our study area is Beijing, China, with a latitude range of 39°36′4.32″N–41°02′26.91″N and a longitude range of 115°47′55.78″E–117°19′58.49″E. Beijing has a warm temperate semi-humid continental monsoon climate, with high temperatures and rainfall concentrated in summer [36]. As the capital of China, Beijing is the center of Chinese politics, economy, culture, education, technological innovation and international exchange. As of the end of November 2020, the permanent population of Beijing was about 21.893 million. We study the area within the sixth ring road of Beijing (Figure 1), which has a total area of 2267 km². This area includes most of the urban built-up area and a small part of the urban-rural fringe; 12 of the 16 administrative divisions in Beijing are partially or entirely located within the sixth ring road [37].

Data Source
Three main types of data sources were collected and used in this study. The road centerline within the sixth ring road in Beijing is extracted from AutoNavi's road data. Baidu Street View (BSV) panoramas were obtained from the application program interface (API) of the Baidu Web Service. Administrative boundary vector data were provided by the Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences (http://www.aircas.cas.cn, 5 January 2021). The spatial data coordinate system adopts the unified geographic coordinate system GCS_WGS_1984.

Research Framework
The framework of this research is shown in Figure 2, which is divided into three parts. Firstly, we obtain the road network data and BSV panoramas as data sources. On this basis, a series of processes are carried out, such as semantic segmentation based on deep feature migration, fisheye image conversion, and the projection of the sun path. This series of processes can determine whether a place is in shadow. Secondly, we conduct experiments to verify the accuracy of the sky pixels' extraction and solar obstruction determination. Finally, according to our proposed formula, we calculate shading coverage in the study area and visualize its diurnal and spatial distribution by kilometer grid and the road section dimension.

Identifying Shadows
Along the road network in the study area, street view sampling points were generated at a distance interval of 50 m, and the latitude and longitude of all the sampling points were recorded. We use the API of the Baidu map service to obtain panoramic images at the sampling points. Each Baidu panoramic street view image is stitched from eight original images captured by the cameras on a mobile vehicle [38].
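The 50 m sampling step can be sketched as follows. This is a minimal illustration that assumes planar, metre-based coordinates for the road centerline vertices; the actual pipeline samples geographic road centerlines and records latitude/longitude.

```python
from math import hypot

def sample_along_polyline(vertices, interval=50.0):
    """Walk a polyline (list of (x, y) vertices, in metres) and emit a
    sampling point every `interval` metres, starting at the first vertex."""
    points = [vertices[0]]
    carried = 0.0  # distance walked since the last emitted point
    for (x0, y0), (x1, y1) in zip(vertices, vertices[1:]):
        seg_len = hypot(x1 - x0, y1 - y0)
        d = interval - carried  # offset into this segment of the next point
        while d <= seg_len:
            t = d / seg_len
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += interval
        carried = (carried + seg_len) % interval
    return points
```

For a 120 m straight road section this yields sampling points at 0 m, 50 m and 100 m; each point is then used to request a BSV panorama.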
Sky extraction is a necessary step in the shadow identification of street canyons using panoramas. We use the DeepLabv3+ model [39] trained on the Cityscapes dataset [40] to segment panoramic images and extract the sky; Figure 3 displays the network structure based on DeepLabv3+. DeepLabv3+ combines a spatial pyramid pooling (SPP) module with an encoder-decoder structure [39]. The SPP module encodes multi-scale contextual information, while the encoder-decoder captures sharper object boundaries by gradually recovering spatial information [41]. Different from previous semantic segmentation datasets, the Cityscapes dataset consists of street scenes with a high diversity of land object categories. It includes about 50 different urban street scenes to increase the variety of street types, and images were collected over several months, with seasons spanning spring, summer and fall [40]. Due to the wide coverage of scene types in the Cityscapes dataset, the transferability and generality of the model for extracting deep features are improved.
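As a sketch of how the network's per-pixel class scores become sky pixels: assuming the output follows the Cityscapes trainId convention (where sky is class 10), the sky mask can be extracted with an argmax over the class axis. The function names here are illustrative, not from any library.

```python
import numpy as np

SKY_CLASS = 10  # "sky" in the Cityscapes trainId scheme (assumption)

def sky_mask_from_logits(logits):
    """logits: array of shape (num_classes, H, W) holding per-pixel class
    scores. Returns a boolean (H, W) mask of pixels classified as sky."""
    labels = np.argmax(logits, axis=0)  # per-pixel predicted class
    return labels == SKY_CLASS

def sky_ratio(mask):
    """Fraction of pixels classified as sky in the segmented image."""
    return float(mask.mean())
```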

Since shadows from objects of low height (e.g., below a person's height) hardly affect human comfort, we crop each panorama to preserve the portion above the street view vehicle's horizontal viewing angle. This study needs to determine the positional relationship between the sun and the occluding pixels, so for convenience we convert the panoramic image into a fisheye image. Essentially, a fisheye image is a representation of an azimuthal projection onto a plane tangent to the hemisphere [32]. As shown in Figure 4, projecting a panoramic image from a cylinder to an azimuthal coordinate system to generate a fisheye image is a common geometric transformation [42,43].

In the geometric transformation, each pixel in the panoramic image uniquely corresponds to a pixel in the fisheye image. This research uses the method of [35] to convert the image. Let (x_p, y_p) be the coordinates of a pixel in the panoramic image, and let W and H be the width and height of the cropped panorama, respectively. The radius of the fisheye image is then r_0 = W/(2π), the height and width of the fisheye image are both W/π, and the coordinates of the center of the fisheye image, (C_x, C_y), are calculated with Equation (1):

C_x = C_y = W/(2π)    (1)

The coordinates (x_f, y_f) and polar coordinates (r, θ) of each pixel in the fisheye image are related to the corresponding panoramic image coordinates (x_p, y_p) as follows; x_p, θ and y_p are calculated by Equations (2)-(4), respectively:

x_p = Wθ/(2π)    (2)

θ = arctan2(y_f − C_y, x_f − C_x)    (3)

y_p = Hr/r_0    (4)

where r = √((x_f − C_x)² + (y_f − C_y)²). Since the center of the panorama corresponds to the true north direction in the real scene, the fisheye image obtained by the above geometric transformation is mirrored with respect to the direction of the sun path. In order to project the sun path, we therefore flip each fisheye image horizontally.

We convert the sun's altitude and azimuth to a polar representation and project the sun's path onto the fisheye image, as shown in Figure 5. The sun's position is estimated by the algorithm at https://www.pvlighthouse.com.au/calculators (accessed on 1 March 2022). The center of the fisheye image corresponds to a 90° sun elevation, and the top pixel has both a 0° sun elevation and azimuth [34]. If the sun's position falls on a sky pixel, the area is directly exposed to sunlight; otherwise, the area is in shadow.
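Under one consistent reading of Equations (1)-(4) and the equidistant sun projection, the fisheye mapping and the sunlit/shaded test can be sketched as follows. The exact axis conventions and the handling of the horizontal flip are simplifying assumptions of this sketch.

```python
import math

def fisheye_to_panorama(x_f, y_f, pano_w, pano_h):
    """Map a fisheye pixel back to the cropped panorama. Per Equation (1)
    the fisheye centre C_x = C_y equals the fisheye radius r0 = W/(2*pi)."""
    r0 = pano_w / (2 * math.pi)
    cx = cy = r0
    r = math.hypot(x_f - cx, y_f - cy)                       # polar radius
    theta = math.atan2(y_f - cy, x_f - cx) % (2 * math.pi)   # polar angle
    x_p = pano_w * theta / (2 * math.pi)  # Equation (2)
    y_p = pano_h * r / r0                 # Equation (4): rim -> horizon row
    return x_p, y_p

def sun_to_fisheye(elevation_deg, azimuth_deg, pano_w):
    """Project the sun onto the fisheye image: the centre is 90 deg sun
    elevation, the rim 0 deg (equidistant), with north at the top."""
    r0 = pano_w / (2 * math.pi)
    r = r0 * (90.0 - elevation_deg) / 90.0
    a = math.radians(azimuth_deg)
    return r0 + r * math.sin(a), r0 - r * math.cos(a)

def in_shadow(sky_mask, sun_xy):
    """A site is sunlit when the sun's pixel lands on sky; shaded otherwise."""
    x, y = int(round(sun_xy[0])), int(round(sun_xy[1]))
    return not sky_mask[y][x]
```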

Shading Coverage Index
We propose the shading coverage index (SCI) to evaluate shadow coverage conditions in a spatial range, as described in Equation (5):

SCI = A_s / (A_s + A_e)    (5)

where SCI denotes the shadow intensity at a certain moment within a spatial range, A_s is the total area in shadow, and A_e is the total area exposed to the sun. This research calculates the shading coverage at the scales of the kilometer grid, the administrative district and the road section, respectively.
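Reading Equation (5) as the shaded fraction of a spatial unit, a minimal implementation is:

```python
def shading_coverage_index(shaded_area, exposed_area):
    """Equation (5): SCI = A_s / (A_s + A_e), the shadow intensity of a
    spatial unit at a given moment (0 = fully sunlit, 1 = fully shaded)."""
    total = shaded_area + exposed_area
    if total == 0:
        raise ValueError("spatial unit contains no measured area")
    return shaded_area / total
```

For a grid cell with 30 m² in shadow and 70 m² exposed to the sun, SCI = 0.3.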

Experiments
For a more precise extraction of the sky pixels, we compare the performance of different basic deep convolutional neural network (DCNN) backbones of DeepLabv3+ trained on the Cityscapes dataset, including MobileNet-v3, MobileNet-v2, Xception65 and Xception71. Two datasets are used to verify the effectiveness of feature migration: we manually labeled 15 panoramas as ground truth, and we use the Cambridge-driving Labeled Video Database (CamVid) to test the transferability of the features extracted by the model. The class pixel accuracy (CPA) and intersection-over-union (IoU) are used to assess the segmentation results. As shown in Equations (6) and (7), the CPA is the proportion of pixels assigned to a category that are labeled correctly, and the IoU is the ratio of correctly classified pixels to the union of the ground-truth pixels and the pixels predicted to belong to that class [41], namely:

CPA = TP / (TP + FP)    (6)

IoU = TP / (TP + FP + FN)    (7)

where TP, FP and FN are the numbers of true positives, false positives and false negatives, respectively.

The DeepLabv3+ models with MobileNet-v2 and Xception65 backbones have higher segmentation accuracies (Table 1). In addition to segmentation quality, it is also important to consider the time cost per panorama image: the model with the MobileNet-v2 backbone is about 7 s faster than the one with Xception65. In the end, we use the DeepLabv3+ model based on MobileNet-v2 to segment the panoramic images. From Table 1, we can see that the DeepLabv3+ model with the MobileNet-v2 backbone trained on the Cityscapes dataset has good generalization ability; in other words, even on a completely new dataset, the model transfers the features of individual classes well.

To test the reliability and usefulness of our method, we randomly select 100 results to compare with the shading situation in BSV. The experiment shows that our method's results are more than 83% consistent with the shading situation in BSV. As shown in Figure 6, the correct prediction results are divided into four cases.
In the first case, the panorama was collected in winter, there are no leaves on the trees to block the light, and BSV shows the site exposed to sunlight; because we explore street shading in summer, we assume such light would be blocked by tree canopies. Sixteen sample sites belong to this case. In the second case, our predicted results match the actual shadow situation in BSV, with the site exposed to the sun. Fourteen sample sites belong to the third case, in which the weather was cloudy when the panorama was collected; the data collection time cannot be determined from the position of the sun or from the angle between true north and the shadows of buildings, tree trunks, fences and other landscape elements, so these 14 sites are removed. In the fourth case, the ground truth of shadows in BSV agrees with our predictions. Overall, after comparison, the shadow extraction results of 72 sites are consistent with the ground truth of shadows in BSV.
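The CPA and IoU used in this evaluation can be computed from per-class boolean masks. CPA is taken here as TP/(TP + FP), one common reading of "pixels correctly labeled in a category"; the function name is illustrative.

```python
def cpa_and_iou(pred_mask, true_mask):
    """Per-class CPA and IoU from boolean masks (rows of pixels) for one
    class: CPA = TP/(TP+FP), IoU = TP/(TP+FP+FN), cf. Equations (6)-(7)."""
    tp = fp = fn = 0
    for pred_row, true_row in zip(pred_mask, true_mask):
        for p, t in zip(pred_row, true_row):
            tp += p and t          # predicted sky, truly sky
            fp += p and not t      # predicted sky, not sky
            fn += (not p) and t    # missed sky pixel
    return tp / (tp + fp), tp / (tp + fp + fn)
```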

Diurnal and Spatial Distribution of Shading Coverage Index at 1-km Grid
The positions without panoramic images were excluded, and a total of 163,433 sample points were calculated in the entire study area. The model we used for feature migration can restore deciduous trees captured leafless in winter to the dense segmentation results expected in summer, so we keep panoramas of leafless trees taken in winter. We map the site-level diurnal and spatial shadow areas at 1-h intervals on 1 August 2018. This study proposes an index to measure shadow intensity: within a certain spatial range, the ratio of the shadow area to the total area is defined as the shading coverage index at a certain moment. With the help of Equation (5), we compute the shading coverage index on a 1 km grid from the shadow points calculated every hour and visualize it in three dimensions to explore the diurnal and spatial variation of shadow intensity. We set the spatial unit to 1 km × 1 km, which is approximately the average activity range of pedestrians in a city [44]. Figure 7 shows the distribution of the shading coverage index on the 1 km grid for each hour from 8:00 to 18:00. The shading coverage index of the central region is higher than that of the periphery. This result is reasonable for the following reasons. First, the road network in the city center is denser (Figure 1), which results in more calculated samples and shadows. Second, areas with higher building density and plot ratio generally have greater shadow intensity, keeping the shading coverage index in the central area at a high level. As can be seen from Figure 7 and Table 2 (right), from 10:00 to 15:00 most of the cells in the study area are "red", with index values between 0 and 0.4, indicating that more than half of the sample points are exposed to sunlight. The shading coverage index after 17:00 differs significantly from that of earlier hours.
This has to do with sunset: at 17:00 and 18:00 the sun is low, i.e., the solar zenith angle (the angle between the sun's rays and the vertical [45]) is large, and most of the light is blocked by tall buildings, creating shadows. As a result, the extent of shadows greatly increases. This situation is also verified by the statistics in Table 2 (right): the shading coverage index at 16:00 had a mean of 0.403 and a median of 0.395, while at 17:00 the mean and median rose to 0.848 and 0.938, respectively.

Figure 7. The three-dimensional map of the shading coverage index in the 1 km grid. The x- and y-axes represent the latitude and longitude, respectively; the z-axis represents the value of the shading coverage index.

The shading index can also be evaluated at the scale of road segments. The sample sites in shadow at each moment are aggregated to the nearest road section, and the number of shaded sites is divided by the total number of sites on that road section to give the shading coverage index of the section. Figure 8 shows the shading coverage index values of all road sections in the study area, calculated by Equation (5). From the visualization results, shadow coverage on the streets first decreases and then increases during the day: the index values are higher at 8:00-9:00 and 17:00-18:00 than at other times. The zenith angles of the sun at 8:00 and 18:00 are 57.045° and 77.581°, respectively, and "dark pink" areas are more likely to appear in shadows cast by buildings and vegetation. When the solar zenith angle is low (such as 22.129°), buildings block little light because of their upright form, the shading coverage index contributed by buildings is small, and only the area under the vegetation canopy is shaded. As can be seen from the statistics in Table 2 (left), both the mean and median values of the shading coverage index are then at their smallest. The shading coverage index also shows significant differences among the different grades of roads: highways and arterial roads appear "lighter pink" than slip roads, because highways and urban arterial roads are generally wider than slip roads.
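The aggregation of per-site shadow flags to road sections described above can be sketched as follows (section identifiers and flag encoding are illustrative):

```python
from collections import defaultdict

def sci_by_road_section(site_section_ids, site_in_shadow):
    """Road-section SCI: for each section, the number of shaded sample
    sites divided by the total number of sample sites on that section."""
    shaded = defaultdict(int)
    total = defaultdict(int)
    for section, flag in zip(site_section_ids, site_in_shadow):
        total[section] += 1
        shaded[section] += 1 if flag else 0
    return {section: shaded[section] / total[section] for section in total}
```

The same aggregation works for the 1 km grid by using the grid-cell index of each site as the identifier.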

Shading Coverage Index at the Scale of Administrative District
Taking each administrative region of Beijing as a unit, we count the shading coverage index changes of each region over time, to examine the distribution of the index at the scale of the administrative district (Figure 9). The shading coverage index is significantly higher in Dongcheng and Xicheng districts than in other administrative regions. These results are consistent with those of the 1 km grid, indicating that the shading coverage index is higher in the central urban area than in the outer areas. The shading coverage index curve of Changping District differs markedly from those of other administrative regions, because Changping District not only has relatively sparse roads but also has more streets in the east-west direction than in the north-south direction, and east-west streets are more likely to be exposed to direct sunlight between 17:00 and 18:00.
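The district-level statistics can be produced the same way as the segment-level ones: treat each sample site's shadow state as a 0/1 flag and average it per district and hour. A pandas sketch with a hypothetical miniature table (the real study aggregates all sample sites in each administrative region):

```python
import pandas as pd

# Hypothetical sample-site table: one row per site and hour,
# with a 0/1 flag for whether the site lies in shadow.
df = pd.DataFrame({
    "district":  ["Dongcheng", "Dongcheng", "Changping", "Changping"],
    "hour":      [12, 12, 12, 12],
    "in_shadow": [1, 0, 0, 0],
})

# The mean of the flag per (district, hour) is the shading
# coverage index at the administrative-district scale.
index_by_district = df.groupby(["district", "hour"])["in_shadow"].mean()
print(index_by_district)
```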

Shading Coverage Index Applications
In this study, we explore a way of transferring street characteristics learned from the Cityscapes dataset to the semantic segmentation of street-level panoramic images of Beijing using DeepLabv3+, and we propose an effective and rapid method to quantify the diurnal and spatial changes in shadows. The research has the following characteristics. First, unlike other deep learning-based methods, this study examines the feasibility and reliability of applying a deep learning transfer strategy to shadow mapping of urban street scenes. Once the shadow mapping meets the required accuracy, this saves substantial sample-labeling and model-training cost compared with non-transfer strategies. The shadow recognition method based on feature migration applied in this research can therefore be transferred to other cities covered by street view images. Experimental results on the CamVid dataset preliminarily confirm this: even for entirely new city scenes, the DeepLabv3+ model we selected responds well to the sky class (Table 1).
Second, this study provides an approach for estimating shadows from urban street panoramas, supporting related macro- and micro-scale research. The results at the grid scale and the administrative-area scale are consistent with research on the urban built environment, such as land use [46,47] and the distribution of street trees [48]. Moreover, our results on shading levels and changes agree with previous findings on the shading effect in local areas of Beijing [49]. For example, the shading changes of the Central Business District (CBD) in our results follow the same spatial and diurnal patterns as research on the shading effect and radiation in the CBD based on remote sensing data [49,50]. From a macro perspective, the shading coverage distribution in the 1 km grid and administrative areas can help urban planners and managers make relevant decisions: they can, for instance, design greening schemes according to the shading coverage index and its distribution to further alleviate the urban heat island effect. The method also offers researchers in the fields of environment, health, tourism, transportation and urban planning a way to quantify urban shadow characteristics, helping them complete related research faster and more efficiently. From a micro perspective, the diurnal and spatial distribution of shading coverage at the road-segment scale can be used for optimal route recommendations. According to our results, when the shading coverage index at the 1 km grid scale is higher than at the corresponding road-segment scale, road segments with high shadow coverage may exist within that 1 km². Research shows that when alternative routes, such as a best-shade route, are available, the shortest route is not necessarily the best choice for pedestrians [51].
Distance and time are indeed important factors in route choice, but the choice depends even more on the characteristics of the alternative routes [52]. Pedestrians are generally willing to take a safer, more comfortable, or more interesting route, as long as it remains within a reasonable range relative to the shortest route [53,54]. The results of our method can thus help pedestrians find the route with the most shadow coverage between a given origin and destination, providing a better travel experience in hot weather.
To be specific, as shown in Figure 10, we compare the shadow amount and shading coverage index of two paths at noon. Path #1 has a higher shadow amount and shading coverage index than Path #2, the shortest path. There are 36 and 8 sample sites in shadow on Path #1 and Path #2, respectively, and their shading coverage indexes are 0.54 and 0.13. Path #1 is about 230 m longer than Path #2. In other words, by walking only about 230 m farther, a pedestrian gains 28 additional shaded sites compared with the shorter path. The higher-shading-coverage path provided by our method can thus guide pedestrians to a more comfortable route and, in particular, reduce the harm that hot weather causes to pedestrians.
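One way to turn the segment-level index into route recommendations is to penalize unshaded length in a shortest-path search. The sketch below uses networkx with a hypothetical three-node graph loosely mirroring the Path #1/Path #2 comparison; the penalty weight `beta` is an assumed tuning parameter, not part of this study's method:

```python
import networkx as nx

def shade_aware_path(G, src, dst, beta=3.0):
    """Shortest path where each metre walked in the sun costs
    (1 + beta) times a shaded metre. Edges need 'length' (m) and
    'shade' (shading coverage index in [0, 1])."""
    def cost(u, v, data):
        return data["length"] * (1.0 + beta * (1.0 - data["shade"]))
    return nx.shortest_path(G, src, dst, weight=cost)

G = nx.Graph()
G.add_edge("A", "B", length=500, shade=0.13)  # short but mostly sunny
G.add_edge("A", "C", length=360, shade=0.54)
G.add_edge("C", "B", length=370, shade=0.54)  # ~230 m detour, well shaded

print(nx.shortest_path(G, "A", "B", weight="length"))  # ['A', 'B']
print(shade_aware_path(G, "A", "B"))                   # ['A', 'C', 'B']
```

With a pure distance weight the router picks the direct sunny edge; once unshaded metres are penalized, the modest detour through the shaded segments wins.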

Limitations and Future Consideration
There are still some limitations that should be addressed in future applications. First, this method is only applicable to areas where BSV can be obtained and the majority of BSV panoramas analyzed in this study were taken in 2013, 2017, 2018, and 2020, which may lead to outdated data. Second, as this is a study on the quantity of shadows, no assessment of shadow quality can be derived from this study, for example, evaluating the light transmittance of different shading elements [55]. In the future, researchers can focus on the combination of the "quality" and "quantity" of shading. Future research can also combine multiple image data sources to mine shadow features that are not limited to urban streets.

Conclusions
This research proposes a novel and effective shadow estimation method: an index that quantifies urban street shadows, built on transferring street features learned from the Cityscapes dataset to the semantic segmentation of Beijing street panoramas with DeepLabv3+. The diurnal and spatial changes of shadow are then visualized at different scales. The experimental results show that the diurnal and spatial consistency between the shadow estimation method based on deep feature transfer and the actual shadow distribution exceeds 83%. The diurnal and spatial variation of the shading coverage index provides an important reference for evaluating the construction of shading facilities. Experiments comparing the distribution of shading indices across road sections also show that our method can provide an important basis for pedestrians to find routes with better shading conditions, thereby reducing harm to people in hot weather.