Exploring Relationships between Boltzmann Entropy of Images and Building Classification Accuracy in Land Cover Mapping

Remote sensing images are important data sources for land cover mapping. As one of the most important artificial features in remote sensing images, buildings play a critical role in many applications, such as population estimation and urban planning. Classifying buildings quickly and accurately ensures the reliability of the above applications. It is known that the classification accuracy of buildings (usually indicated by a comprehensive index called F1) is greatly affected by image quality. However, how image quality affects building classification accuracy is still unclear. In this study, Boltzmann entropy (an index considering both compositional and configurational information, simply called BE) is employed to describe image quality, and the potential relationships between BE and F1 are explored based on images from two open-source building datasets (i.e., the WHU and Inria datasets) in three cities (i.e., Christchurch, Chicago and Austin). Experimental results show that (1) F1 fluctuates greatly in images where building proportions are small (especially in images with building proportions smaller than 1%) and (2) BE has a negative relationship with F1 (i.e., when BE becomes larger, F1 tends to become smaller). The negative relationships are confirmed using Spearman correlation coefficients (SCCs) and various confidence intervals via bootstrapping (i.e., a nonparametric statistical method). Such discoveries are helpful in deepening our understanding of how image quality affects building classification accuracy.


Introduction
Land cover datasets are important for studies on various subjects, including climate change [1], biodiversity conservation [2], ecosystem assessment [3] and urbanization assessment [4]. In recent decades, a variety of land cover datasets have been developed from remote sensing images with resolutions ranging from meters to kilometers, such as FROM-GLC10 [5], GLC 30 [6], MODIS [7] and GLC-2000 [8]. As one of the most common artificial features in land cover datasets, buildings play a critical role in population estimation [9], urban planning [10] and many other applications [11][12][13]. High-quality building data ensure the reliability of such applications. Usually, land cover building data are acquired from remote sensing images using automatic classification methods, and the classification accuracy of buildings is greatly affected by image quality. Therefore, researchers have conducted a variety of studies on how image quality affects the applications of images [14][15][16][17]. More precisely, Roberts et al. [14] point out that remote sensing image quality plays an important role in assessing the image fusion process and can help in exploring which fusion method incorporates more texture information while retaining spectral information. Xia et al. [15] noticed that image quality is the key to successful image application, providing the basis for classification, segmentation, etc. Li et al. [16] found that image quality assessment is widely used in applications such as image denoising, image deblurring and image fusion, and they conducted factor analysis and cluster analysis to assess the robustness of 21 commonly used no-reference image quality metrics in terms of accuracy, monotonicity and consistency. Bishop et al. [17] noted that image quality can be used to assess detection and classification performance. However, no explicit or quantitative relationships have been found between image quality and building classification accuracy.
There are a variety of no-reference metrics describing image quality. Among them, some metrics (e.g., root mean square error (RMSE) [18], average gradient (AG) [19], signal-to-noise ratio (SNR) [20], Shannon entropy [21]) consider compositional information, while others (e.g., metrics based on the gray-level co-occurrence matrix [22], metrics based on the Sobel gradient [23], Boltzmann entropy [24]) consider configurational information. It should be explained here that the compositional information of a remote sensing image captures what the image contains (i.e., the combination of pixel values, regardless of their arrangement), while configurational information captures how pixels are distributed across the image (i.e., their spatial permutation). Recently, some studies have explored relationships between image properties and classification accuracy [25][26][27][28], such as those between landscape pattern and accuracy. It is reported that configurational information is very important to image quality [29]. Some researchers have evaluated the existing metrics for describing configurational information, and a mathematical model between Boltzmann entropy (simply called BE in this study) and lossless compression ratio has been constructed [30]. In fact, Boltzmann entropy has full thermodynamic consistency [31], and it has been regarded as the most powerful metric for describing the configurational information of remote sensing images [32].
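The composition/configuration distinction above can be made concrete with Shannon entropy, one of the compositional metrics cited. The following minimal Python sketch (the function name `shannon_entropy` is ours, not from the cited works) shows that Shannon entropy of the gray-level histogram is blind to configuration: shuffling the pixels of an image leaves it unchanged.

```python
import numpy as np

def shannon_entropy(image):
    """Shannon entropy (bits) of the gray-level histogram. Purely
    compositional: permuting the pixels leaves the histogram, and
    hence the entropy, unchanged."""
    counts = np.bincount(image.ravel(), minlength=256)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p))) + 0.0  # + 0.0 normalizes -0.0

flat = np.zeros((8, 8), dtype=np.uint8)                           # one gray level
checker = (np.indices((8, 8)).sum(0) % 2 * 255).astype(np.uint8)  # two levels
shuffled = np.random.default_rng(0).permutation(checker.ravel()).reshape(8, 8)

print(shannon_entropy(flat))      # 0.0 (no compositional variety)
print(shannon_entropy(checker))   # 1.0 (two equally frequent levels)
print(shannon_entropy(shuffled))  # 1.0 again: configuration is invisible
```

A highly ordered checkerboard and a random shuffle of the same pixels score identically, which is why configurational metrics such as Boltzmann entropy are needed.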
Regarding the methods for the building classification of remote sensing images, many traditional methods are available, such as K-Nearest Neighbors (KNN) [33], support vector machines (SVMs) [34], random forest (RF) [35], etc. However, with the improvement in the spatial resolution of remote sensing images, these traditional methods are gradually being replaced by deep-learning-based methods, such as U-Net [36], SegNet [37] and DeepLabV3+ [38]. In this study, Boltzmann entropy is selected as the most appropriate metric for image quality, while DeepLabV3+ is employed as a suitable method for building classification. In addition, F1 (i.e., a popular and widely used metric for evaluating building classification) is employed as the accuracy index. We aim to explore the potential relationships between the Boltzmann entropy of land cover remote sensing images and F1. The remainder of this article is as follows. Section 2 introduces the experimental data and the metrics/methods for exploring the relationships. Section 3 presents and analyzes the experimental results. Some discussions and conclusions are given in Section 4.

Building Image Datasets
Benefiting from the rapid development of remote sensing techniques and methods, some aerial very-high-resolution (VHR) building image datasets have been developed. Among them, four open-source datasets can be easily accessed, i.e., the Massachusetts [39], ISPRS [40], Inria [41] and WHU [42] datasets. The Massachusetts dataset includes 151 images with a size of 1500*1500 pixels and a spatial resolution of 1 m. It has been reported that the quality of this dataset is relatively poor (e.g., the average noise level of its images is relatively high) [43]. The ISPRS dataset is small and only covers a 13 km2 area, which leads to few building samples. The Inria and WHU datasets are newly developed ones with a spatial resolution of around 0.3 m, covering more than 400 km2. To better explore the potential relationships between the Boltzmann entropy of images and building classification accuracy, the Massachusetts and ISPRS datasets are not employed. In this study, building images of three cities (i.e., Christchurch, Chicago and Austin) from the WHU and Inria datasets are employed for the experiments. An overview of the images of these three cities is shown in Figure 1. All the large images are clipped into small images with a size of 512*512 pixels for better training and testing results.
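The clipping step above can be sketched as follows. This is a minimal illustration (the helper name `clip_tiles` is ours); it assumes non-overlapping tiles and discards any partial border strip, one common convention that the paper does not spell out.

```python
import numpy as np

def clip_tiles(image, size=512):
    """Split a large image array (H, W, C) into non-overlapping
    size x size tiles, discarding the partial border (assumed
    convention; the paper does not state how borders are handled)."""
    h, w = image.shape[:2]
    return [image[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

big = np.zeros((1500, 1500, 3), dtype=np.uint8)  # a toy large image
tiles = clip_tiles(big)
print(len(tiles))        # 2 x 2 = 4 full 512 x 512 tiles
print(tiles[0].shape)    # (512, 512, 3)
```

For a 1500*1500 image, only the top-left 1024*1024 region yields full tiles; padding or overlapping strides would be alternative conventions.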

Boltzmann Entropy as a Metric for Image Quality
Entropy is a notion originating from thermodynamics to describe the lack of order in a thermodynamic system [44,45], and Boltzmann proposed an explanation of entropy based on statistical physics. More precisely, a quantitative relationship between entropy and the possible number of microstates in a system was constructed [46]:

S = k_B ln W,

where S represents the Boltzmann entropy of a system, W is the possible number of microstates, and k_B is the Boltzmann constant (1.38*10^−23 J/K).
Although the theoretical basis for using Boltzmann entropy to describe the configurational information of remote sensing images has been proven to exist, no feasible method for calculating this entropy was available for a long period of time. This is because macrostates and their corresponding microstates are hard to define [47,48]. Recently, researchers have proposed a feasible method based on a landscape mosaic represented by gradients [29]. The key objective of such a method is to determine the macrostates of images and calculate the number of microstates.
Before illustrating the details of macrostates and microstates, the size of the original images should be explained. In this study, C and L are two variables representing the height and width of an image (i.e., the numbers of rows and columns of pixels).
In terms of macrostates, Gao et al. [29] gave two detailed definitions: (1) images with reduced resolution; and (2) images with a resolution of (C − 1) * (L − 1). Generally, the macrostate of an image can be described by three parameters: the maximum value, the minimum value and the average. In practice, the size of a remote sensing image (i.e., C * L) is large. One solution is to separate a large image into small macroscopic units. For example, in Figure 2a, an original image is resampled by a moving 2*2 window, and in this way, a variety of macroscopic units can be generated.
Under a macroscopic unit, a macrostate can be generated via upscaling, and then various microstates can be acquired via downscaling (see Figure 2b). In the example shown in Figure 2b, two multisets of microstates with the same maximum value, minimum value and average (i.e., 5, 2 and 15/4) can be generated via downscaling. Of all these microstates, only one is identical to the original macroscopic unit. The details of determining the multisets and the corresponding permutations for each multiset can be found in the paper by Gao et al. [29].
The total number of microstates for an original image (W) is the product of the numbers of microstates (W_u) for all individual macroscopic units:

W = ∏_j W_u,j,

where W_u,j is the W_u calculated for the jth macroscopic unit. The formula for calculating W_u is:

W_u = ∑_{i=1}^{k} M_i,

where k is the number of multisets for a given macrostate and M_i is the number of permutations for the ith multiset. Therefore, the Boltzmann entropy of a remote sensing image is calculated as:

S = ln W,

where S is the relative Boltzmann entropy (the Boltzmann constant is omitted). In this study, the values of Boltzmann entropy refer to relative values, also called configurational entropy by Cushman [49].
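For the 2*2 example above (maximum 5, minimum 2, average 15/4, i.e., sum 15), the microstate count can be verified by brute force. The sketch below (the helper name `unit_microstates` is ours; Gao et al. [29] use an analytical counting formula rather than enumeration) lists every arrangement sharing the macrostate:

```python
from itertools import product
from math import log

def unit_microstates(max_v, min_v, total, n=4):
    """Brute-force count of the microstates of an n-pixel macroscopic
    unit sharing a macrostate (maximum, minimum and sum/average)."""
    count, multisets = 0, set()
    for cells in product(range(min_v, max_v + 1), repeat=n):
        if max(cells) == max_v and min(cells) == min_v and sum(cells) == total:
            count += 1                       # one spatial arrangement (microstate)
            multisets.add(tuple(sorted(cells)))
    return count, multisets

# The 2x2 example from Figure 2b: max 5, min 2, average 15/4 (sum 15)
w_u, ms = unit_microstates(5, 2, 15)
print(len(ms), w_u)   # 2 multisets, 24 microstates in total
s_unit = log(w_u)     # this unit's contribution to the relative entropy
```

It recovers the two multisets mentioned in the text ({2,3,5,5} and {2,4,4,5}, 12 permutations each, so W_u = 24); summing ln W_u over all units gives the image's relative Boltzmann entropy.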

DeepLabV3+ and F1 for the Classification of Buildings
DeepLabV3+ is a semantic segmentation model based on a convolutional neural network proposed by Google Brain [38]. This model improves segmentation accuracy and computational efficiency by introducing new modules and techniques (e.g., an encoder-decoder structure and conditional random fields) while retaining the dilated convolution and multi-scale feature fusion of the original DeepLab series models (see Figure 3).
In terms of specific implementation, DeepLabV3+ uses deep neural networks such as ResNet as its encoder and employs the ASPP (Atrous Spatial Pyramid Pooling) module to further enlarge the receptive field. In the decoder, DeepLabV3+ uses a series of deconvolution layers and bilinear interpolation layers combined with skip connections to fuse multi-scale features. In addition, DeepLabV3+ uses conditional random fields to further refine the segmentation, producing more accurate pixel-level results. The main contribution of DeepLabV3+ is an efficient and accurate semantic segmentation model: its atrous convolution and encoder-decoder design enable the model to capture the semantic information of the image, while the innovations in deconvolution and bilinear interpolation allow it to better incorporate multi-scale features.
F1 is a comprehensive index commonly used to evaluate classification models, jointly considering the model's precision and recall. In the binary classification problem, precision (p) is the proportion of correctly classified positive samples among all samples predicted as positive by the model, while recall (r) is the proportion of correctly classified positive samples among all actual positive samples. The formula for calculating F1 is as follows [50]:

F1 = 2pr / (p + r).
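The pixel-wise F1 computation just described can be sketched as below (the helper name `f1_score` is ours; building pixels are taken as the positive class):

```python
import numpy as np

def f1_score(pred, truth):
    """Pixel-wise F1 for binary building masks (1 = building)."""
    tp = np.sum((pred == 1) & (truth == 1))   # true positives
    fp = np.sum((pred == 1) & (truth == 0))   # false positives
    fn = np.sum((pred == 0) & (truth == 1))   # false negatives
    p = tp / (tp + fp) if tp + fp else 0.0    # precision
    r = tp / (tp + fn) if tp + fn else 0.0    # recall
    return 2 * p * r / (p + r) if p + r else 0.0

pred  = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [1, 0]])
print(f1_score(pred, truth))   # p = 0.5, r = 0.5 -> F1 = 0.5
```

Note that when the positive class is tiny, a handful of misclassified pixels swings p and r sharply, which is exactly the instability at small building proportions analyzed in Section 3.1.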

A Strategy for Exploring the Potential Relationships between Boltzmann Entropy and F1 of Images
F1 for building classification is mainly affected by the differences within buildings themselves (intra-class differences) and the differences between buildings and non-buildings (inter-class differences). Intra-class differences are mainly due to the diverse spectra and textures, various scales and complex spatial structures of buildings. For example, building roofs have a variety of colors and shapes and may be covered by water towers, signal stations, vegetation and other objects. Inter-class differences refer to the spectral and spatial structure differences between buildings and non-buildings, such as the relatively regular shapes of buildings compared with natural features. When the proportion of buildings in an image is low, the spectral and spatial structure differences within buildings are small, which means that the classification accuracy is more affected by inter-class differences (see Figure 4a). When the proportion of buildings in an image is high, the classification accuracy is more affected by intra-class differences. Overall, when exploring potential relationships between the Boltzmann entropy of images and F1, the proportions of buildings in images should be considered. Based on this, we propose a two-step strategy: (1) analyzing the effects of building proportions on F1; (2) exploring relationships between Boltzmann entropy and F1.

Analyzing Effects of Building Proportions on F1
Based on images of three cities (i.e., Christchurch, Chicago and Austin) from the WHU and Inria datasets, the image frequencies by building proportion are shown in Figure 5. It is found that most images have small building proportions (<=35%), while those with large building proportions are few. This is because buildings are often surrounded by artificial and natural elements. In downtown areas, buildings are often separated by roads and other facilities (e.g., greenbelts), while in the suburbs, farmlands and forests dominate.
In Figure 6, the relationships between building proportions and F1 are given. It is found that when the proportions are small, F1 fluctuates between 0 and 1. With the gradual increase in building proportion, the range of fluctuation narrows quickly. In particular, when the proportion is larger than 20%, the F1 of images is larger than 0.8. In fact, some factors (e.g., image classifier performance and inaccurate image labeling) lead to the misclassification of buildings. Regardless of whether the proportion of buildings in an image is large or small, pixels incorrectly classified due to these factors exist. In images with small building proportions, the total number of building pixels is small, so these incorrectly classified pixels have more impact on F1.
Therefore, to better explore potential relationships between Boltzmann entropy and F1, obvious fluctuations in F1 at a given building proportion should be avoided. To this end, images with very small building proportions are discarded in the subsequent experiments. The analysis of how images with small building proportions affect the relationships between BE and F1 is presented in Section 4.2.
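The filtering step can be sketched as follows (the helper names `building_proportion` and `keep_for_analysis` are ours; the 1% cut-off follows the threshold discussed in the paper):

```python
import numpy as np

def building_proportion(mask):
    """Fraction of building pixels in a binary label mask (1 = building)."""
    return float(np.mean(mask == 1))

def keep_for_analysis(masks, threshold=0.01):
    """Indices of images whose building proportion meets the threshold
    (1% here, following the cut-off used in the paper)."""
    return [i for i, m in enumerate(masks)
            if building_proportion(m) >= threshold]

sparse = np.zeros((512, 512), dtype=np.uint8)   # 0% buildings: discarded
dense = np.ones((512, 512), dtype=np.uint8)     # 100% buildings: kept
print(keep_for_analysis([sparse, dense]))       # [1]
```

The proportion is computed from the reference labels, not the predictions, so the filter is independent of classifier performance.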

Relationships between Boltzmann Entropy and F1: Experimental Results
All the scattered data between Boltzmann entropy and F1 based on images of the three cities are given in Figure 7. As mentioned in Section 3.1, building proportions affect F1. To better analyze the relationships, the trends between Boltzmann entropy and F1 under different building proportions are highlighted in different colors in Figure 7. Correlation analysis is employed to analyze the overall trends, and the Spearman correlation coefficients (SCCs) between BE and F1 in the three cities are −0.448, −0.259 and −0.252, respectively. As the pairs (BE, F1) are not independent observations, standard significance tests are not suitable for this study. To further confirm the possible relationships, various confidence intervals of the SCCs are calculated using bootstrapping (i.e., a nonparametric statistical method). When calculating confidence intervals, the number of iterations is set to 1000 (i.e., 1000 sets of paired resamples are generated from the original paired observations), the size of each paired resample equals that of the original paired observations, and sampling is performed with replacement. The results are presented in Table 1. The width of a confidence interval reflects the degree of uncertainty in the estimate of the SCC. The calculated confidence intervals are not very wide (the maximum width is around 0.1), and all lower and upper bounds are negative. Such results imply that BE does have a negative relationship with F1.
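The bootstrap procedure just described can be sketched as below. This is an illustrative implementation under stated assumptions: the rank function has no tie correction (exact only for distinct values), the function names are ours, and the toy data stand in for the real (BE, F1) pairs.

```python
import numpy as np

def spearman(x, y):
    """Spearman correlation as the Pearson correlation of ranks.
    Naive ranking without tie correction (exact for distinct values)."""
    def rank(v):
        r = np.empty(len(v))
        r[np.argsort(v)] = np.arange(len(v))
        return r
    return float(np.corrcoef(rank(x), rank(y))[0, 1])

def bootstrap_ci(x, y, n_iter=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the SCC: resample (x, y) PAIRS with
    replacement, each resample as large as the original sample."""
    rng = np.random.default_rng(seed)
    n = len(x)
    boot = []
    for _ in range(n_iter):
        i = rng.integers(0, n, n)        # indices drawn with replacement
        boot.append(spearman(x[i], y[i]))
    return np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])

be = np.arange(10.0)       # toy stand-in for Boltzmann entropy values
f1 = -be + 0.5             # perfectly decreasing toy F1 values
print(spearman(be, f1))    # ≈ -1 for a monotone decreasing relationship
lo, hi = bootstrap_ci(be, f1)
```

An entirely negative interval [lo, hi], as reported in Table 1, is what supports the negative BE-F1 relationship without assuming independence of the observations.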

Effects of Neglecting Images with Small Building Proportions
The scattered data of images with very small building proportions (i.e., smaller than 1%) are shown in Figure 9, and the SCCs and various confidence intervals are calculated (see Table 2). In terms of the SCCs presenting the overall trends, they change from −0.395, −0.217 and −0.221 to −0.448, −0.259 and −0.252 in the three cities when images with building proportions smaller than 1% are removed. The change is not very large, and in both situations, negative relationships are found under various confidence intervals.

Conclusions
Buildings are important artificial features in remote sensing images and play important roles in many applications. Classifying buildings quickly and accurately is the first step for such applications. It has been noticed by many researchers that building classification accuracy is greatly affected by image quality. However, the potential relationships between image quality and building classification accuracy have not been modeled.
In this study, remote sensing images from two open-source datasets (i.e., WHU and Inria) in three cities (i.e., Christchurch, Chicago and Austin) are employed, and potential relationships between Boltzmann entropy (an image quality index considering both compositional and configurational information) and F1 (a comprehensive metric considering both precision and recall) are explored. Experimental results show that F1 fluctuates greatly in images with small building proportions (especially in images with building proportions smaller than 1%). In addition, negative relationships between BE and F1 are found using correlation analysis (the SCCs are −0.448, −0.259 and −0.252, respectively, in the three cities). Such negative relationships are confirmed by various confidence intervals calculated using bootstrapping in both situations (i.e., with and without images that have building proportions smaller than 1%). The maximum width of all confidence intervals is around 0.1, and all lower and upper bounds of the confidence intervals are negative. From the above results, we may conclude that the Boltzmann entropy of remote sensing images does have a negative relationship with F1 (i.e., when BE becomes larger, F1 tends to become smaller). These discoveries are helpful in improving our understanding of how image quality affects building classification accuracy.
There are some limitations to this study. First, the employed images are only very-high-resolution ones, and the image object type is restricted to buildings; thus, we cannot conclude whether such relationships exist for images with different resolutions and objects (e.g., roads). Second, the effects of the size of the clipped images on classification are not considered, which may affect the exploration of the relationships.

Figure 1. Overview of building images from the WHU and Inria datasets.

Figure 2. Macrostates, microstates and macroscopic units of an example image (modified from [29]). (a) The resampling process for acquiring macroscopic units; (b) the macrostate and all possible microstates for a macroscopic unit.

Figure 4. Two example images with different proportions of buildings. (a) The proportion of buildings is 2%; (b) the proportion of buildings is 27%.

Figure 5. Frequencies of images with different building proportions.

Figure 7. Scattered data of images with building proportions larger than 1%.

Figure 8. Comparison of modeled relationships based on Boltzmann entropy. (a) Relationships between Boltzmann entropy according to resampling (S_R) and compression ratio (C_R) [30]; (b) relationships between BE and F1 (Christchurch); (c) relationships between BE and F1 (Chicago); (d) relationships between BE and F1 (Austin).

Table 2. Results for SCCs and various confidence intervals of images with building proportions smaller than 1%.