Spatial Perspectives toward the Recommendation of Remote Sensing Images Using the INDEX Indicator, Based on Principal Component Analysis

Abstract: Progress in the development of sensor technology has increased the speed and convenience of remote sensing (RS) image acquisition. As the volume of RS images steadily increases, the challenge is no longer in producing and acquiring an RS image, but in finding a particular image from numerous RS images that precisely meets user application needs. Some spatial measuring methods specific to the recommendation of RS images have been proposed and could be used to score and sort RS images according to users' requests. Our previous study introduced two measuring methods, namely, available space (AS) and image extension (IE), which have similar results but complementary effects for spatially ranking recommended images. The AS indicator can cover the inadequacies of the IE indicator in some cases, and vice versa. The current study combines these two indicators using principal component analysis and produces a new indicator called INDEX, which we used in RS image spatial recommendation. The ranking results were measured using the normalized discounted cumulative gain (NDCG) and several other statistical criteria. The results indicate that users are more satisfied with the recommendations of the INDEX indicator than with those of AS, IE, and Hausdorff distance for single RS image type selections, which is the most common scenario for RS image applications. When dealing with hybrid RS image types, the INDEX indicator performs very closely to the dominant IE indicator while maintaining the characteristics of the AS indicator.


Introduction
In recent years, the ability to acquire remote sensing data has improved to an unprecedented level. Evidently, handling, storing, managing, and making the best use of this tremendous volume of data is a massive challenge [1]. The correct use of remote sensing images has proven to be an effective solution for resolving real-world problems. For example, Liu and Di [2] attempted to introduce the latest theory, methods, and applications to manage, exploit, and analyze remote sensing big data. To optimize the efficiency of the geospatial service in a flood response decision-making system, a parallel agent-as-a-service (P-AaaS) method was proposed and implemented in a cloud computing platform [3]. The P-AaaS method includes a parallel architecture and a mechanism for adjusting the computational resources and the execution algorithm. To account for multi-scale and dynamic-state characteristics, sub-pixel land-cover change detection using images of different resolutions is addressed in [4], where a novel approach based on a back propagation neural network is proposed to solve the problem of mixed pixels in change detection.
Ding [5] presents the association rules-based coastal land use spatial sequence model (ARCLUSSM) to mine the sequential pattern of land use with interesting associations in the sea-land direction of a coastal zone. ARCLUSSM is a good application of remote sensing big data and focuses on land use in the sea-land direction and the sequential relationships between land-use types. As the variety and volume of remote sensing (RS) images are growing at an overwhelming speed, users need an efficient and effective way to find the RS images that are best for their applications, and the design of recommendation methods therefore becomes an important topic. Despite the many factors to consider, an essential and necessary requirement is the spatial perspective, i.e., the area of interest (AOI) must be visible in the RS image. This location-based constraint narrows the data range according to the location of the demand. The studies in [6,7] are location-based recommendations and use point patterns to represent user preferences. Going beyond simple point patterns, Wang [8] further considers the sequential influence of locations.
Two primary challenges in the supply of RS imagery are the selection of candidate images that match the spatial relationships given by the search constraints based on an AOI and the recommendation of optimal spatial rankings for the candidate images. These issues were addressed in our previous study, which proposed the location-based RS image finding engine (LIFE) search engine framework for RS images [9]. This framework includes RS image metadata databases and a recommendation engine. Figure 1 shows that the recommendation is based on the results of the two proposed quantitative parameters, namely, the available space (AS) and image extension (IE), according to a user-defined AOI. All the qualified images are ranked by each of the proposed indicators.
For given RS image datasets, the AS and IE indicators can effectively compare the spatial conditions between the AOI and RS images and separately rank the candidate images. As the ranking behaviors of these two parameters differ, the representative indicator proposed in this research intends to combine the qualities of the AS and IE indicators, such that their supplementary characteristics are retained to facilitate user selection among the RS image search results. Models that can merge multiple variables must be developed to merge the highly correlated AS and IE spatial ranking parameters. Given the experimental platform, AOIs, and candidate image databases from our previous study, we augmented the LIFE framework by introducing a method to linearly merge the AS and IE spatial ranking parameters, thereby deriving a new spatial ranking indicator named INDEX. After comparing the models of factor analysis (FA) and principal component analysis (PCA), PCA was chosen as the method to merge the AS and IE indicators.
More discussion about the reasons for choosing PCA is included in the next section.
This paper focuses on the recommendation of remote sensing images from the spatial perspective. In addition to the required topological constraint, i.e., contains, the selection of RS images may also be based on other factors, e.g., spectrum, imagery ground resolution, cloud coverage, and temporal constraints. By specifying an acceptable range of values according to the application requirements, e.g., a specific spectrum band, ground resolution higher than 2 m, an acquisition time between 2017 and 2019, and cloud coverage less than 30%, we can easily use the database management system to reduce the number of candidate images. As the spatial coverage of RS images is usually arbitrary and an area may be covered by multiple RS images, the spatial recommendation strategies must take the "difference in degree" between the AOI and candidate RS images into consideration, not only the topological relationship of "contains." The proposed indicators were therefore developed for ranking candidate images from a spatial perspective. The recommendation strategies for other factors are beyond the scope of this research, but can easily be added to the RS imagery selection module.
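For illustration, the non-spatial pre-filtering described above can be sketched as a simple metadata query. The field names (resolution_m, cloud_pct, year) are hypothetical and not part of any particular catalog schema; a real system would issue the equivalent query to its database management system.

```python
# Hypothetical sketch: reducing candidate RS images by non-spatial
# constraints before spatial ranking. Field names are illustrative.

def filter_candidates(images, max_resolution_m, max_cloud_pct, year_range):
    """Keep images whose metadata fall inside the requested ranges."""
    lo, hi = year_range
    return [img for img in images
            if img["resolution_m"] <= max_resolution_m
            and img["cloud_pct"] < max_cloud_pct
            and lo <= img["year"] <= hi]

catalog = [
    {"id": "A", "resolution_m": 0.5, "cloud_pct": 10, "year": 2018},
    {"id": "B", "resolution_m": 5.0, "cloud_pct": 5,  "year": 2018},
    {"id": "C", "resolution_m": 1.0, "cloud_pct": 60, "year": 2019},
    {"id": "D", "resolution_m": 2.0, "cloud_pct": 25, "year": 2016},
]
kept = filter_candidates(catalog, max_resolution_m=2.0,
                         max_cloud_pct=30, year_range=(2017, 2019))
# Only image "A" satisfies every constraint in this toy catalog.
```

The surviving candidates then proceed to the spatial ranking stage, where the proposed indicators apply.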
The remainder of this paper is organized as follows. Section 2 reviews and compares models that can be used to merge the AS and IE indicators. Section 3 states the problem examined in this research and explains our proposed solution. Section 4 introduces the definition of the INDEX indicator and the procedure designed. Section 5 presents our major findings of the experimental data. Section 6 presents the discussions about findings and future works. Lastly, Section 7 provides the conclusions of this study.

Related Work
The objective of this paper is to combine the indicators proposed in our previous study [9], using a variable reduction method to endow a single indicator with the characteristics of both indicators. For RS image users without expertise and experience in handling complicated platforms or archives, an effective spatial ranking and recommendation indicator may help in finding useful images, a task that often consumes the most time in RS image applications. This section describes the definitions of the proposed spatial ranking indicators and the candidate variable reduction methods.

RS Image Ranking Indicators
Despite the fact that many ranking mechanisms for spatial data have been proposed, the majority of these proposals rank images only according to the combinations and calculations from predefined geographical location attributes of the data [10][11][12]. In the LIFE framework, we proposed the AS and IE indicators, which measure the extensibility and centrality of data, respectively. These indicators are defined as follows.

AS Indicator
The idea of the AS indicator was inspired by the consideration of the neighborhood area of the AOI, which is often neglected by conventional ordering methods but may be very useful for providing additional reference. Originating from the idea of distance buffers [13], the AS indicator uses the maximum buffering distance (MBD) as the basis for measuring the additional spatial coverage adjacent to the AOI in each RS image (see Definition 1). Instead of choosing a fixed distance buffer, our proposed method uses a dynamically defined boundary and MBD, meaning the distance buffer is determined according to the relative location between an AOI and an RS image, and every image has its own AS indicator value after the AOI is specified.
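The dynamic-buffer idea can be sketched for the simplest case. The functions below assume axis-aligned rectangular footprints given as (xmin, ymin, xmax, ymax); the actual AS definition in [9] operates on general AOI and image geometries, so this is an illustration of the concept, not the published formula.

```python
# Illustrative sketch only: one way to derive a maximum buffering
# distance (MBD) for an AOI fully contained in an RS image footprint,
# assuming both are axis-aligned rectangles (xmin, ymin, xmax, ymax).

def max_buffer_distance(aoi, image):
    """Largest buffer that keeps the buffered AOI inside the image."""
    gaps = (aoi[0] - image[0],   # west gap
            aoi[1] - image[1],   # south gap
            image[2] - aoi[2],   # east gap
            image[3] - aoi[3])   # north gap
    return min(gaps)

def available_space(aoi, image):
    """Extra area gained by buffering the AOI out to its MBD."""
    d = max_buffer_distance(aoi, image)
    w, h = aoi[2] - aoi[0], aoi[3] - aoi[1]
    return (w + 2 * d) * (h + 2 * d) - w * h

aoi = (2.0, 2.0, 4.0, 4.0)          # a 2 x 2 AOI
image = (0.0, 0.0, 10.0, 5.0)       # an image fully containing the AOI
# MBD = min(2, 2, 6, 1) = 1; the buffered AOI is 4 x 4, so AS = 16 - 4 = 12
```

Because the MBD depends on where the AOI sits inside each image, every candidate image yields a different AS value for the same AOI, which is exactly what makes the indicator usable for ranking.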
The values of the AS and IE indicators are based on the respective measurements of the extensibility and centrality of a candidate image relative to the AOI. Our previous study shows that each of the AS and IE indicators presents a unique set of behaviors and characteristics for the ranking of RS images, and that they can supplement each other. For example, IE can be used to rank two RS images whose AS values are indistinguishable, and vice versa (Figure 2).

Figure 2. Image extension (IE)-based ranking is necessary for RS images that have the same area of interest (AOI) and available space (AS).

Exploratory Factor Analysis

Exploratory factor analysis (EFA) is a technique for finding the essential structure of diversified observational variables and performing dimensionality reduction. EFA can therefore synthesize variables with complicated relationships into a few core factors. Spearman [19] proposed a single intellectual factor for EFA. However, further research indicated that Spearman's single-factor theory was inadequate due to its lack of diversity and applicability. Thurstone [20] challenged the then-popular single-factor assumption and proposed multiple factor analysis, which broke through the limitation on the number of factors. The purpose of EFA is therefore to reduce the dimensions of the original space and turn the original data into a space with fewer, more compact variables [21].

Confirmatory Factor Analysis
Confirmatory factor analysis (CFA) is a statistical approach to verify the known factor structures or assumptive theory. The purpose of CFA is to use the actual data collected to verify the consistency of previously assumed factor structures [22].
CFA and EFA are similar in many aspects, because both are linear statistical models that assume a normal distribution in the sample space. Both methods also involve the discovery of latent structures and variable measurement. However, the two methods differ in several important aspects. EFA is mainly used to search for unknown structures and factors in data, whereas CFA is used to validate theoretically deduced factors. For example, Featherman and Pavlou [23] used a CFA model that was deduced theoretically instead of derived from their data. For CFA, a model must be predefined before factor validation can be performed, whereas EFA probes for structures that connect the factors within a data set [24].

Principal Component Analysis

PCA is a technique for analyzing and simplifying data sets. The formal description of PCA, as proposed by Pearson [25], is to find the line or surface nearest to a sample in the sample space, thereby effectively reducing the data dimension. The main idea of PCA is to analyze the characteristic properties of a covariance matrix to obtain the principal components of the data (eigenvectors) and their weights (eigenvalues). The lower-order principal components (corresponding to the largest eigenvalues) are retained, and the higher-order components (corresponding to the smallest eigenvalues) are abandoned, reducing the dimension of the data set while preserving the maximum variation. More discussion of this approach can be found in [26,27].
The PCA process is as follows.
1. Suppose that we have n independent observations of the p-element random vector x, denoted by x1, x2, ..., xn. The deviation matrix X is formed from the deviations of the observations from the mean vector, X = [x1 − x̄, x2 − x̄, ..., xn − x̄].
2. The covariance matrix S is defined as S = XXᵀ/(n − 1). The eigenvalues of S are obtained by solving |S − λIp| = 0, where λ is the eigenvalue; p eigenvalues λ1 ≥ λ2 ≥ ... ≥ λp are obtained.
3. The number of retained components r is the smallest value satisfying (λ1 + ... + λr)/(λ1 + ... + λp) ≥ 0.85, so that at least 85% of the data variation is retained and the r value is confirmed. For each λj, j = 1, 2, ..., r, the equation Scj = λjcj is solved to obtain the eigenvector cj.
4. The principal component matrix C = [c1, c2, ..., cr] is formed, and the coefficient matrix Z of the principal components is Z = CᵀX.
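The four steps above can be sketched in code. The following is a minimal, pure-Python version for the two-variable case (p = 2), which is all the AS/IE merge requires; a production implementation would use a linear-algebra library rather than the closed-form 2 × 2 solution shown here.

```python
import math

# Minimal PCA sketch for two variables: deviations, covariance,
# eigenvalues from |S - lambda*I| = 0 (a quadratic when p = 2),
# retained-variance ratio, and the leading unit eigenvector.

def pca_2d(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Step 1: deviations of each observation from the mean vector.
    dx = [x - mx for x in xs]
    dy = [y - my for y in ys]
    # Step 2: covariance matrix S = [[sxx, sxy], [sxy, syy]].
    sxx = sum(d * d for d in dx) / (n - 1)
    syy = sum(d * d for d in dy) / (n - 1)
    sxy = sum(a * b for a, b in zip(dx, dy)) / (n - 1)
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc      # lam1 >= lam2
    # Step 3: fraction of the variance retained by the first component.
    ratio = lam1 / (lam1 + lam2)
    # Step 4: unit eigenvector for the leading eigenvalue.
    if sxy != 0:
        v = (lam1 - syy, sxy)
    else:
        v = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(v[0], v[1])
    return lam1, lam2, ratio, (v[0] / norm, v[1] / norm)

# Perfectly correlated demo data: one component carries all variance.
lam1, lam2, ratio, vec = pca_2d([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```

For perfectly correlated inputs, as in the demo, the second eigenvalue vanishes and the retained-variance ratio of the first component is 1, illustrating why a single merged component can stand in for two highly correlated indicators.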

Comparison of FA and PCA
According to the FA model defined in [28], Figure 3 shows the different roles that FA and PCA play in our work. FA (left part of Figure 3) discovers the latent factors and their weights (denoted by b1, b2, b3, and b4), namely centrality and extensibility for RS image spatial selection, which formed the foundations of the AS and IE indicators in our previous study. PCA (right part of Figure 3), adopted in this research, combines the two observed indicators with calculated weights (denoted by w1 and w2) into a new indicator for more efficient and effective RS image spatial recommendation.
This study aims to integrate the two indicators proposed by the LIFE framework from our previous study through variable reduction. EFA is used to determine the latent variables that have an overall effect on the data, thereby summarizing and simplifying the data. CFA is subsequently used to validate the known variable structures and models.

Problem Statement
The core idea of the INDEX indicator is inspired by the shortcomings of the AS and IE indicators when used for ranking recommendations. Our previous study found that the ranking behaviors of AS and IE differ completely from each other under certain AOI/RS image conditions. If the two indicators give contradicting suggestions, a dilemma arises as to which one should be chosen for ranking. The two indicators have their respective significance; previous evaluations using the normalized discounted cumulative gain (NDCG) indicate that both can be given a certain degree of approval. Therefore, we do not attempt to recommend which of the two indicators a user should choose. Instead, we propose an integrated solution that provides a new indicator combining the characteristics of both, thereby enabling users to manage only a single consideration. The growing demand for RS images in various domains and applications has also resulted in the establishment of cloud-based RS image platforms such as Google Earth Engine (GEE). The addition of RS image recommendation capability enables such platforms to provide more precise recommendations and reduce the time and effort domain users spend in finding the best images for their applications.
The selection of remote sensing images may involve multiple types of constraint, and spatial constraints are a necessary consideration. By using the proposed ranking indicators, users can acquire a sorted list of recommended results that meet their spatial conditions for specific areas of interest. For increasingly complex remote sensing image products, a comprehensive and effective spatial recommendation can facilitate the acquisition of potentially suitable images for users without expert knowledge, which is the first step in starting remote sensing image applications.

New Indicator Design Based on PCA
This section describes the design of the INDEX indicator and its role in the LIFE framework. According to the discussions in the previous sections, the INDEX indicator must retain the characteristics of the AS and IE indicators, yet deliver better spatial recommendations for RS image users. In order to properly merge the AS and IE indicators discussed in Section 2.1, PCA was adopted as the method for producing the INDEX indicator, per the discussion in Section 2.2. The fundamental definition and the calculation workflow of the INDEX indicator are described in Section 4.1. In Section 4.2, the expanded LIFE framework is explored.

The INDEX Indicator
Our objective is to combine the AS and IE indicators from our previous study. However, the obtained AS and IE values should first be normalized, because the domains of the non-normalized AS and IE ranges are quite divergent and difficult to control. Therefore, we normalized the AS and IE indicators and used PCA to merge them.
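The normalization step can be sketched as a standard min-max rescaling, mapping each indicator into [0, 1] so that their divergent value ranges become comparable. The paper does not spell out the exact normalization formula, so this is an assumed, conventional form.

```python
# Sketch of min-max normalization applied to the raw AS and IE values
# before merging: each indicator is rescaled into [0, 1].

def min_max_normalize(values):
    lo, hi = min(values), max(values)
    if hi == lo:                        # degenerate case: all values equal
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

as_raw = [120.0, 450.0, 300.0]          # illustrative raw AS values
n_as = min_max_normalize(as_raw)        # smallest -> 0.0, largest -> 1.0
```

The same function is applied independently to the IE values, producing the N_AS and N_IE vectors referenced in the pseudo-code that follows.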
PCA seeks to reduce the number of variables while simultaneously reflecting inter-variable relationships. To use PCA to merge the AS and IE indicators, the AS and IE eigenvectors were first obtained. These eigenvectors were subsequently multiplied by their normalized values and combined to form the INDEX indicator. The detailed procedure for combining the AS and IE indicators into the INDEX indicator is described in Figure 4 and the pseudo-code below.

3. Obtain the MEA of rsi for aoi
5. C = WX, where C is the vector of the principal components, W denotes the transformation matrix, and X denotes the vector of the original data.
6. Select c1 and c2 as the eigenvectors of N_AS and N_IE. INDEX = c1 × N_AS + c2 × N_IE.
END
Steps 1 to 3 define AS and IE.
Step 4 normalizes AS and IE and their normalized values are denoted as N_AS and N_IE, respectively.
Step 6 identifies the eigenvectors of AS and IE through Step 5. Moreover, the INDEX is subsequently formed by the multiplication of N_AS and N_IE with their corresponding eigenvectors.
Let x1 and x2 be the vectors of N_AS and N_IE, respectively. By PCA calculation, we obtain the eigenvectors c1 and c2.
Therefore, INDEX = (w1x1 + w2x2) × N_AS + (w3x1 + w4x2) × N_IE. A demonstration of the PCA-related statistics is shown in Figure 5. After the AOI (no. 62, the orange polygon in Figure 5) was selected, the image database returned nine images (the red and blue rectangles) that satisfy the condition of containing the AOI. For each pair of AOI and candidate image, the values of the AS, IE, Hausdorff distance, and INDEX indicators were calculated.
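The final combination step can be sketched as follows: weight the normalized indicators by the components (c1, c2) of the leading PCA eigenvector and sum them, as in INDEX = c1 × N_AS + c2 × N_IE. The eigenvector components below are illustrative placeholders; in the actual workflow they come from the PCA of the candidate set itself.

```python
# Sketch of the combination step: a weighted sum of the normalized
# indicators, with weights taken from the leading PCA eigenvector.

def index_scores(n_as, n_ie, c1, c2):
    return [c1 * a + c2 * e for a, e in zip(n_as, n_ie)]

n_as = [0.0, 1.0, 0.6]                  # normalized AS per image
n_ie = [1.0, 0.2, 0.9]                  # normalized IE per image
c1, c2 = 0.82, 0.57                     # hypothetical eigenvector weights
scores = index_scores(n_as, n_ie, c1, c2)
ranking = sorted(range(len(scores)), key=lambda i: -scores[i])
```

Note how the image with balanced N_AS and N_IE values can outrank an image that excels on only one indicator, which is exactly the behavior the merged INDEX is designed to deliver.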

The parameter calculation data for AOI (62) are shown in Figure 6. These data include nine records with attributes such as aoiId, imageId, normalized AS, normalized IE, Hausdorff distance, the INDEX indicator, and the eigenvectors of the PCA calculation (c1, c2). Figure 7 shows the relevant statistics of the PCA calculations for the INDEX indicator for the AOI in Figure 5, including the covariance matrix, eigenvalues, eigenvectors, factor loadings, factor scores, and cumulative variances (%). The eigenvalues of the AS and IE indicators were 0.170 and 0.064, respectively.
Figure 6. PCA calculation data of the AOI example in Figure 5.

Figure 8 shows the three modules in this system. The first module is the RS image database module, which contains six data sets with varying sizes of RS images. The second module is the LIFE framework, which performs RS image spatial ranking according to the AOI input from the user and generates two different types of spatial recommendations that correspond to the AS and IE indicators. The third module is the INDEX generator described in this study.

Experimental Evaluations
In order to deliver comprehensive discussion and comparison among the AS, IE, INDEX, and Hausdorff distance indicators, four sets of experiments are discussed in this section. In Section 5.1, we compare the ranking behaviors of the AS, IE, and INDEX indicators with 1000 simulated images of different sizes. In Section 5.2, we developed a user score collection platform with 10,000 simulated images of six types of RS image format and discuss user preferences for the four indicators based on the collected user scores for each image/AOI pair. In Section 5.3, we use the NDCG method to compare the results of the four indicators among the six simulated RS image formats, and we calculate the improvement rates of the INDEX indicator with respect to the other three indicators for the six RS image formats from the NDCG results. In Section 5.4, we use five ranking evaluation criteria based on the user scores and the corresponding precision and recall to compare the performance of the four indicators.
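The NDCG measure used in Sections 5.2 to 5.4 can be sketched briefly: DCG discounts each user relevance score by log2(rank + 1), and NDCG divides by the DCG of the ideal (score-sorted) ordering, so a perfect ranking scores exactly 1.0. This is the standard formulation; the paper's exact gain and discount choices may differ.

```python
import math

# Minimal NDCG sketch for comparing an indicator's ranking against
# user-assigned relevance scores.

def dcg(relevances):
    """Discounted cumulative gain of scores listed in ranked order."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    """DCG normalized by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# User scores listed in the order one indicator ranked the images:
quality = ndcg([3, 2, 3, 0, 1])   # < 1.0, since images 2 and 3 are swapped
```

Higher NDCG means the indicator placed highly scored images nearer the top, which is how user satisfaction with each indicator's ranking is quantified.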

Analyses of Spatial Recommendation Indicator Behaviors
This section uses a single AOI to compare the ranking behaviors of the three proposed indicators (AS, IE, and INDEX). All the simulated images conform to the setting conditions, in which the spatial extent of each image entirely contains the spatial extent of the AOI (refer to Figure 9). As AS, IE, and INDEX are correlated, their ranking behaviors are separately evaluated in the following three subsections. The ranking results of the Hausdorff distance are further added in Sections 5.2 to 5.4.
A 10 km × 10 km AOI was first specified; then, 1000 images of different sizes completely containing the AOI were randomly simulated. The magnification of the simulated image sizes ranged from 1.1 to 3.0 at intervals of 0.1. Three indicators (AS, IE, and INDEX) were calculated for each of the 1000 simulated images (Figure 9). Figure 10 summarizes the figures and tables and their major focuses in the following discussions.
5.1.1. AS Indicator Score According to the Sorting Pattern

Figure 11 shows the distribution of IE after AS indicator score-based sorting. The AS indicator score of the tested images maintains flat growth at lower values and climbs at the end (Figure 11), mainly because the AS indicator score is normalized with respect to the maximum and minimum AS indicator scores of the images. When a larger image provides a higher AS indicator score, the AS indicator scores of smaller images are compressed and do not change significantly, thereby resulting in a gradual change. Further analysis shows that the average size ratio of the top 100 AS-ranked images among the 1000 tested images is 2.45. Thus, a larger image is considerably favored in AS-based sorting. By contrast, Figure 11 also shows that many images with lower AS indicator scores have excellent IE counterparts. Consequently, the two indicators may produce different ranking results regarding optimal image recommendation.

Figure 12 limits the results to only the top 100 AS-ranked images. The IE indicator score of the topmost AS-ranked image does not rank first, and the IE values of two adjacent AS-ranked images (with only a minor discrepancy in AS indicator score) can fluctuate dramatically. Only 35 of the top 100 AS-ranked images are among the top 100 IE-ranked images, and only three of the top 10 AS-ranked images are on the list of the top 100 IE-ranked images. Moreover, none of the top 10 AS-ranked images are also top 10 IE-ranked images. Since there is no consistency in the top-ranked results based on the sorting of the two indicators, image recommendation based on the AS indicator alone cannot guarantee that images with the optimal IE indicator score will be recommended, and vice versa. That is, a mechanism is necessary to assist in the selection of images by considering both types of ranking preference.
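The top-k agreement used in this comparison (e.g., 35 of the top 100 AS-ranked images appearing among the top 100 IE-ranked images) reduces to a set intersection over the two ranked lists. The identifiers below are made up for illustration.

```python
# Sketch of the top-k agreement measure: how many of the top-k images
# under one ranking also appear in the top-k of another ranking.

def top_k_overlap(ranking_a, ranking_b, k):
    """Rankings are lists of image ids, best first."""
    return len(set(ranking_a[:k]) & set(ranking_b[:k]))

as_rank = [5, 3, 9, 1, 7, 2]    # hypothetical AS-based ordering
ie_rank = [9, 2, 8, 5, 4, 6]    # hypothetical IE-based ordering
shared = top_k_overlap(as_rank, ie_rank, k=3)   # only image 9 is shared
```

A low overlap at small k, as observed between AS and IE, is precisely the motivation for a merged indicator that respects both orderings.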
Hausdorff distance indicators, four parts of experiments were discussed in this section. In Section 5.1, we compared the ranking behaviors of AS, IE and the INDEX indicators, with 1000 simulated images of different image sizes. In Section 5.2, we developed a user score collection platform with 10,000 simulated images of six types of RS image format. We discussed the user preferences for four indicators based on collected user scores for each pair of image and AOI. In Section 5.3, we used the NDCG method to compare the results for 4 indicators among six simulated RS image formats. We also calculated the improvement rates of the INDEX indicator with respect to the other three indicators for the six RS image formats by NDCG results. In Section 5.4, we used five ranking evaluation criteria based on the user scores and corresponding precision and recall, comparing the performance of the four indicators.

Analyses of Spatial Recommendation Indicator Behaviors
This section chooses a single AOI to compare the ranking behaviors of the three proposed indicators (AS, IE and INDEX). All the simulated images conform to the setting conditions, where the spatial extent of images entirely contain the spatial extent of the AOI (refer to Figure 9). As the AS, IE and INDEX are correlated, their ranking behaviors are separately evaluated in the following three subsections. The ranking results of Hausdorff distance will be further added in Sections 5.2 to 5.4.
A 10 km ×10 km AOI was first specified, then 1000 images of different sizes completely containing the AOI were randomly simulated. The magnification of the size of the simulated images ranged from 1.1 to 3.0, with the interval of 0.1. Four indicators, AS, IE and INDEX indicator, were calculated for each of 1000 simulated images ( Figure 9).    of different image sizes. In Section 5.2, we developed a user score collection platform with 10,000 simulated images of six types of RS image format. We discussed the user preferences for four indicators based on collected user scores for each pair of image and AOI. In Section 5.3, we used the NDCG method to compare the results for 4 indicators among six simulated RS image formats. We also calculated the improvement rates of the INDEX indicator with respect to the other three indicators for the six RS image formats by NDCG results. In Section 5.4, we used five ranking evaluation criteria based on the user scores and corresponding precision and recall, comparing the performance of the four indicators.

Figure 11 shows the distribution of IE after the AS indicator score-based sorting.
The AS indicator score of the tested images maintains flat growth at lower values and climbs at the end (Figure 11), mainly because the AS indicator score is normalized with respect to the maximum and minimum AS indicator scores of the images. When a larger image provides a higher AS indicator score, the AS indicator scores of smaller images are compressed and do not change significantly, resulting in a gradual change. Further analysis shows that the average size ratio of the top 100 AS-ranked images from the 1000 tested images is 2.45; thus, a larger image is considerably favored in AS-based sorting. By contrast, Figure 11 also shows that many images with lower AS indicator scores have excellent IE scores. Consequently, the two indicators may produce different ranking results regarding optimal image recommendation. Figure 12 limits the results to the top 100 AS-ranked images. The IE indicator score of the topmost AS-ranked image does not rank first and, due to fluctuations, the IE indicator values of two adjacent AS-ranked images (with only a minor discrepancy in AS indicator score) may change dramatically. Only 35 of the top 100 AS-ranked images are among the top 100 IE-ranked images, and only three of the top 10 AS-ranked images are on the list of the top 100 IE-ranked images. Moreover, none of the top 10 AS-ranked images are also top 10 IE-ranked images. Since there is no consistency in the top-ranked results based on the sorting of the two indicators, image recommendation based on the AS indicator alone cannot guarantee that images with the optimal IE indicator score are recommended, and vice versa. That is, a mechanism is necessary to assist the selection of images by considering both types of ranking preference. After the addition of the INDEX, the INDEX and IE line charts show similar changing tendencies with evidently small amplitudes (Figure 13).
However, a clear difference from the IE line chart is that a significant climb occurs at the end of the INDEX line chart, suggesting that a greater AS indicator score has a positive impact on the sorting results of the INDEX. Such a change also implies that the INDEX-based recommendation results will differ from the IE-based results. Table 1 compares the top 10 AS-ranked images and their corresponding IE and INDEX scores and sorting orders. Although the INDEX and IE are evidently similar in the change trends of their line charts, the INDEX-based sorting results considerably differ from the IE-based sorting results by considering the impacts of the AS indicator score. Thus, the INDEX can be regarded as a recommendation reference that combines the behaviors of both the AS and IE indicators, an outcome consistent with the design viewpoint of this study.
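The top-k overlap counts reported above (e.g., 35 of the top 100 AS-ranked images appearing among the top 100 IE-ranked images) reduce to a simple set intersection between two rankings; the function name is ours:

```python
def topk_overlap(scores_a, scores_b, k=100):
    # scores_a, scores_b: dicts mapping image id -> indicator score.
    # Returns how many of the top-k images under indicator A are also
    # among the top-k images under indicator B.
    top_a = sorted(scores_a, key=scores_a.get, reverse=True)[:k]
    top_b = set(sorted(scores_b, key=scores_b.get, reverse=True)[:k])
    return sum(1 for img in top_a if img in top_b)
```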
Remote Sens. 2020, 11, x FOR PEER REVIEW
In Figure 14, as the IE score rises, the AS indicator score fluctuates, which means that two images with similar IE indicator scores may show a sizable fall in their AS indicator scores, thereby again verifying the inconsistency between the recommendations based on the two indicators.
As for the distribution pattern, two significant stages of climb in the IE distribution occur. The climb at the end indicates that higher IE indicator scores can provide good recommendations based on the consideration of centrality. However, a notable difference between Figures 11 and 14 is that many images have higher IE indicator scores while their AS indicator scores are close to 0. Typical scenarios are images whose size is close to the size of the AOI. The average size ratio of the top 100 IE-ranked images is 1.706, indicating that the IE-based recommendations are not significantly affected by image size. A total of 37 of the top 100 IE-ranked images have their AS indicators on the list of the top 100 AS-ranked images (Figure 15). Further evaluation reveals that only one of the top 10 IE-ranked images is on the list of the top 100 AS-ranked images, and none of the top 10 IE-ranked images are on the list of the top 10 AS-ranked images.
Figure 17 shows the changes of the three indicators for the top 100 INDEX-ranked images. As the INDEX indicator score increases, the scores of AS and IE fluctuate, and the AS indicator score shows a tendency to increase, indicating that the higher the INDEX indicator score, the higher the AS indicator score. Comparatively, the increase of the IE score is less obvious, but both converge at the tail. Thus, the top INDEX-ranked images may satisfy the recommendation conditions by considering the two indicators simultaneously. In Figure 14, images on the left side mostly have one indicator score higher and the other lower, while those on the right side (i.e., top-ranked images according to INDEX) have better AS and IE scores. That is, the INDEX-based sorting results can effectively and automatically exclude images with only one optimal indicator score, and additional reasonable image recommendations are given by simultaneously considering the IE and AS indicators. Table 3 shows the top 10 INDEX-ranked images and their scoring and sorting. The sorting range of the AS indicator rank is from 1 to 33, while that of the IE indicator is from 12 to 192. The INDEX score is significantly affected by the change of the AS indicator score. Relative to the flaws in the recommendation results in the previous two tables, the INDEX-based sorting results may compensate for the deficiency of ranking by a single indicator, thereby finding a better combination of the AS and IE indicators.
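The PCA-based fusion of the two indicators can be sketched roughly as follows (a minimal illustration, not the paper's exact procedure; `index_scores`, the z-score standardization and the sign convention are our assumptions). For two standardized variables, the covariance matrix is [[1, r], [r, 1]], whose leading eigenvector is (1, 1)/sqrt(2) when the correlation r is positive, so the first principal component reduces to an equally weighted sum of z-scores:

```python
import math

def index_scores(as_scores, ie_scores):
    # Fuse AS and IE into one INDEX score via the first principal component
    # of the standardized score pairs (valid when AS and IE are positively
    # correlated, as stated in the text).
    def zscores(xs):
        mean = sum(xs) / len(xs)
        std = math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))
        return [(x - mean) / std for x in xs]
    za, zi = zscores(as_scores), zscores(ie_scores)
    # projection onto the leading eigenvector (1, 1)/sqrt(2)
    return [(a + i) / math.sqrt(2) for a, i in zip(za, zi)]
```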

User Scores Analysis
To apply effective evaluation criteria such as the NDCG, a user score collection platform was implemented. The collected data are used to evaluate whether the scores of the four indicators match human subjects' image selection preferences. In Section 5.2.1, the specifications of the simulated images are discussed. In Section 5.2.2, the system UI and workflow of the platform are introduced. Finally, the comparisons of the four indicators and the user scores are presented in Section 5.2.3.

Image Simulation for User Scoring
The test image database is simulated according to the specifications of six RS image products, namely, FORMOSAT-2, GEOEYE-1, IKONOS, QUICKBIRD, GEOES, and WORLDVIEW-1. The spatial coverage ranges from 11.3 km × 11.3 km to 30 km × 30 km. Overall, 10,000 images were simulated based on the geographic extent of Taiwan (119.9E to 122.1E and 21.8N to 25.4N), as shown in Figure 18.

User Score Data Collecting Platform
To evaluate the ranking behaviors of the AS, IE, Hausdorff distance and INDEX indicators, an online web-based system (http://demo4.gips.com.tw:8080/aoi/) was developed to guide testers through the scoring procedure via a map interface that illustrates the spatial layout of both the candidate images and the AOI. The system UI is shown in Figure 19.


1. Each tester is prompted with 30 AOIs randomly selected by the system (from the 100 AOIs in the database). A search of qualified images is performed after the tester selects an AOI. The system returns the candidate images whose spatial extent satisfies the constraint of containing the spatial extent of the selected AOI.

2. The testers are required to give a score (from 1 to 10) to each pair of candidate image and AOI (the red frame in Figure 20), with the scores representing their preference among the candidate images. For example, Figure 20 shows the scores of four images with respect to the same AOI from one tester. The scores of the candidate images of each AOI need not be in sequence, and two candidate images may be given the same score. For example, the AOI is located at the center of the first image, so the tester gives it 9 points; in the latter cases, the position of the AOI is relatively close to the boundary of the image, so only 6 and 4 points are given.

3. The scores of the candidate images of each AOI are stored in the system for further analyses.


Comparisons of the User Scores for Four Indicators
For a randomly selected AOI, Table 4 shows the top 10 images ranked by the average scores given by multiple testers. The scores of the four indicators and the testers' scores are listed separately. Four of the top five IE/INDEX-ranked images are consistent with the top five images ranked by the testers' scores, although the two slightly differ in sorting order. The INDEX-ranked images are obviously affected by the AS indicator score. However, only two of the top five AS-ranked images are consistent with the recommendations from the IE and INDEX. When the number of recommended images increases to six, five of the top six IE/INDEX-ranked images are consistent with the testers' choices; when it further increases to 10, nine of the top 10 images recommended by the IE and INDEX are consistent with the testers' choices. Thus, the recommendation results based on these two indicators are extremely close to those based on the testers' selection behavior.
The inference can be made that, although the testers may not select images by only considering the additional available space as the AS indicator suggests, it is still favorable to consider both the demands of centrality and additional space, as the results of the INDEX indicator suggest.

Figure 20. Screenshot of the Scoring System.

Figure 21 shows the relationships between the scores of the four indicators and the testers' scores for all images, with the x- and y-axes respectively representing the testers' score and the average scores of the four indicators. The scores of the four indicators increase with the testers' scores, indicating that, in principle, all four indicators can provide a reasonable recommendation reference for image selection. However, further analysis shows that the outcome can be subdivided into three stages as the testers' scores increase.

In the first stage (testers' scores between 1 and 4), the four indicator scores are positively correlated with the testers' scores, suggesting that the indicator scores can serve as a sorting reference. In the second stage (testers' scores between 4 and 7), the changes of all four indicator scores are gradual; the four indicators therefore do not provide recommendation results similar to those ranked by the testers' scores. In the third stage (testers' scores between 7 and 10), the four indicator scores are positively correlated again. Moreover, the increases of the IE and INDEX indicator scores are relatively evident, suggesting that the testers' preferences may be better represented by these two indicators. As the results of the INDEX are also close to the testers' behaviors, we argue that the INDEX indicator is an appropriate choice that combines the advantages of its two counterparts. Although the AS indicator scores appear to increase gradually with the testers' scores, the change in the third stage is not significant, which may suggest that the testers' decisions do not consider the size of the additional available space.

Evaluation Methodology
Discounted cumulative gain (DCG) is a method for measuring the quality of a ranking. The NDCG is obtained via the normalization of the DCG and is a common method for gauging search engine performance [29].
The equation for calculating the G vector in DCG calculations is shown in Equation (5), where the parameter b defines the index at which the relevance reduction begins. In this experiment, b was set to 2. For example, if the relevance vector is <1, 3, 5, 7> and b is set to 3, then the result of DCG[4] will be 1 + 3 + 5/log₃(3) + 7/log₃(4).
The NDCG calculates the relevance of the top k items, as shown in Equation (6). The ideal discounted cumulative gain (IDCG) refers to the DCG values of the ideal ranking list.
For example, the ideal ranking list of a ranking result with three items and a vector <4,1,3> is <4,3,1>, and the NDCG [3] of this ranking result is shown in Equation (7).
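Equations (5) to (7) can be sketched in Python as follows (a minimal illustration; the function names are ours):

```python
import math

def dcg(rel, b=2):
    # Equation (5): relevance at 1-based position i is kept as-is for i < b
    # and discounted by log_b(i) once i reaches the parameter b.
    return sum(r if i < b else r / math.log(i, b)
               for i, r in enumerate(rel, start=1))

def ndcg(rel, b=2):
    # Equations (6)-(7): DCG normalized by the DCG of the ideal
    # (descending) ordering of the same relevance values.
    return dcg(rel, b) / dcg(sorted(rel, reverse=True), b)
```

For the worked examples in the text: `dcg([1, 3, 5, 7], b=3)` yields 1 + 3 + 5/log₃(3) + 7/log₃(4) ≈ 14.547, and `ndcg([4, 1, 3])` divides the DCG of <4, 1, 3> by the DCG of the ideal list <4, 3, 1>, giving approximately 0.903.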

Data Preprocessing
To ensure that the scores of each tester have a consistent benchmark and remove unreasonable scores, the collected scores were preprocessed according to the following procedure.

Normalization of User Scoring Data
The system allows a scoring range of 1 to 10, but the scoring standards of individual testers may not be equal. For example, Table 5 shows that Tester A may score images from 2 to 9, whereas Tester B may score images from 3 to 7. If the original scoring data were directly used in the NDCG calculations, the results might be inaccurate and insufficiently objective.

Definition 5. Normalization of Tester Scores.
The original score of the tester is denoted as Origin(tester, score). The minimum and maximum scores of the tester are denoted as Min(tester, score) and Max(tester, score), respectively. The normalized score of the tester, denoted as Normalization(tester, score), is defined in Equation (8):

Normalization(tester, score) = (Origin(tester, score) - Min(tester, score)) / (Max(tester, score) - Min(tester, score))
For example, Image 3 corresponding to the same AOI received scores of 8 and 6 from Testers A and B, respectively (Table 5). After normalization, the scores became 0.86 and 0.75 (Table 6).
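Equation (8) is plain per-tester min-max normalization and can be sketched as (function name is ours):

```python
def normalize_score(score, min_score, max_score):
    # Equation (8): min-max normalization of one tester's raw score,
    # using that tester's own minimum and maximum scores.
    return (score - min_score) / (max_score - min_score)
```

Reproducing the worked example: Tester A (range 2 to 9) gives Image 3 a score of 8, so `normalize_score(8, 2, 9)` ≈ 0.86; Tester B (range 3 to 7) gives it a 6, so `normalize_score(6, 3, 7)` = 0.75.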

Deletion of Unreasonable Scoring Data
After the testers' data were normalized, unreasonable scoring data were filtered out. The average and standard deviation of the scores on each AOI-image pair were calculated. Given a 95% confidence level, a score is removed if it exceeds the mean by more than 1.96 times the standard deviation, i.e., score > mean + 1.96 × stdev. All unreasonable scores for each image were deleted using this rule.

Figure 22 is an NDCG line chart of the AS, IE, INDEX and Hausdorff distance indicators at k values of 1 to 9. Users are most satisfied with the IE recommendation results, followed by the INDEX, then the AS, and the Hausdorff distance last. Previous studies have revealed that users are more interested in the degree of centering of the image than in neighboring information; we are therefore convinced that this result is reasonable. The INDEX is a combination of the IE and AS indicators, and the degree of satisfaction with its recommended images lies in between, an outcome that we believe to be acceptable. Compared with using the AS alone, users are more satisfied with the INDEX because it simultaneously considers the degree of centering, which is the aspect users are most interested in.
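The score-filtering rule in the preprocessing step above can be sketched as follows (a minimal illustration; `filter_scores` is our name, and the use of the population standard deviation is an assumption):

```python
import statistics

def filter_scores(scores, z=1.96):
    # Keep a score only if it does not exceed mean + z * stdev
    # (95% confidence level, one-sided, as described in the text).
    mean = statistics.mean(scores)
    stdev = statistics.pstdev(scores)  # assumption: population stdev
    return [s for s in scores if s <= mean + z * stdev]
```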

Relationship between NDCG and RS Image Types
Only when an image completely contains the entire AOI will it be marked as a candidate image. Relative to a counterpart with lower coverage, an RS image type with larger coverage has a higher chance of containing the target AOI, and thus more candidate images would be selected. Figure 23 shows the number of NDCG evaluations of 100 AOIs on each type of image when k = 3; each NDCG(k = 3) result indicates that the AOI has at least three candidate images from which to calculate the NDCG. The numbers of NDCG(k = 3) evaluations on GEOES (30 km × 30 km) and FORMOSAT-2 (24 km × 24 km) are evidently greater than those of RS image types with smaller coverage, such as WorldView-1 (3 qualified), QUICKBIRD (6 qualified), GEOEYE-1 (2 qualified) and IKONOS (0 qualified). For RS image types with relatively small coverage, the number of candidate images is inevitably minimal, and the recommendation mechanism may be of little help because the user has very few choices. With larger coverage, however, the number of candidate images is relatively large, and the recommendation mechanism becomes effective for users to select proper RS images quickly. Therefore, the proposed INDEX indicator can fully demonstrate its value for RS image types with larger coverage.
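The candidate-image constraint above reduces to a bounding-box containment test; the (min_x, min_y, max_x, max_y) coordinate layout and function name are our assumptions:

```python
def contains_aoi(image, aoi):
    # Boxes as (min_x, min_y, max_x, max_y); True only if the image's
    # spatial extent completely contains the AOI's spatial extent.
    ix0, iy0, ix1, iy1 = image
    ax0, ay0, ax1, ay1 = aoi
    return ix0 <= ax0 and iy0 <= ay0 and ix1 >= ax1 and iy1 >= ay1
```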

Relationship between NDCG and RS Image Types
Only when an image completely contains the entire AOI will it be marked as a candidate image. Relative to a counterpart with lower image coverage, an RS image type with larger coverage has a higher chance of containing the target AOI, so more candidate images are selected. Figure 23 shows the number of NDCG evaluations of 100 AOIs on each type of image when k = 3; each NDCG result indicates that the AOI has at least three candidate images for calculating the NDCG. The numbers of NDCG (k = 3) evaluations on GEOES (30 km × 30 km) and FORMOSAT-2 (24 km × 24 km) are evidently greater than those of RS image types with smaller coverage, such as WorldView-1 (3 qualified), QUICKBIRD (6 qualified), GEOEYE-1 (2 qualified) and IKONOS (0 qualified). For an RS image type with relatively small coverage, the number of candidate images is inevitably minimal, and the recommendation mechanism may be of little help because the user has very few choices. With larger coverage, however, the number of candidate images is relatively large, and the recommendation mechanism becomes effective in helping users select proper RS images quickly. Therefore, the proposed INDEX indicator can fully demonstrate its value for RS image types with larger coverage.
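As a concrete illustration of the candidate-selection rule above, the containment test can be sketched as follows (a minimal sketch with hypothetical names, treating image footprints and the AOI as axis-aligned bounding boxes; the actual system may work with full image geometries):

```python
# Hypothetical sketch: an image is a candidate only if its footprint
# completely contains the AOI. Rectangles are (xmin, ymin, xmax, ymax).

def contains(image_bbox, aoi_bbox):
    """True if image_bbox fully contains aoi_bbox."""
    ix0, iy0, ix1, iy1 = image_bbox
    ax0, ay0, ax1, ay1 = aoi_bbox
    return ix0 <= ax0 and iy0 <= ay0 and ix1 >= ax1 and iy1 >= ay1

def candidate_images(images, aoi_bbox):
    """Keep only images whose footprint fully contains the AOI."""
    return [img for img in images if contains(img["bbox"], aoi_bbox)]

images = [
    {"id": "A", "bbox": (0, 0, 30, 30)},  # large coverage, e.g. 30 km x 30 km
    {"id": "B", "bbox": (6, 4, 9, 9)},    # small footprint that clips the AOI
]
aoi = (5, 5, 8, 8)
print([img["id"] for img in candidate_images(images, aoi)])  # ['A']
```

This also illustrates why larger-coverage image types yield more candidates: a bigger footprint satisfies the containment test for more AOIs.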

Comparison of NDCG of Each Type of RS Image
The NDCG results discussed in Section 5.3.3 were computed with 10,000 simulated images of six merged RS image types. In this section, the recommendations for each RS image type are evaluated individually for calculating the NDCG, and the differences are discussed below. The y-axis and the x-axis in Figure 24 indicate the NDCG and the k value, respectively. Figure 24a shows the NDCG of the four indicators for GEOES, the largest coverage of the six RS image types; the NDCG of AS and INDEX are stably superior to those of IE and the Hausdorff distance. Figure 24b shows the NDCG of the four indicators for the WorldView-1 RS image type; the NDCG of the four indicators differ when k = 1. Figure 24c shows the NDCG of the four indicators for QUICKBIRD; the NDCG of the INDEX is the best among the four indicators, and when k is 1~4, the NDCG of the INDEX is superior to those of the other three indicators. Figure 24d shows the NDCG of the four indicators for GEOEYE-1. As the coverage of this image type is relatively small (15.2 km × 15.2 km), the k value of the NDCG is at most 3, and only a few candidate images are available for calculating the NDCG as k increases. When k = 1, the NDCG of the INDEX is superior to those of the other three indicators; otherwise, it is not inferior to them. Figure 24e shows the NDCG of the four indicators on FORMOSAT-2, which has the second largest image size of the six image types; the NDCG of the INDEX remains the best regardless of the k value. Figure 24f shows the IKONOS image type, which has the smallest image coverage of the six RS image types; with very few candidate images, the four indicators yield identical NDCG.
The major reason the AS indicator performs well on the {GEOES, FORMOSAT-2} datasets is their relatively large coverage, which provides more additional information for a specific AOI. Hence, AS performs well on large-coverage RS image datasets. When the INDEX is added to the comparison, we find that it performs better than AS: the INDEX retains the advantages of AS on large-coverage RS image types while simultaneously considering centrality. This design makes the INDEX better than the AS.
The IE performs better than the AS on the {QUICKBIRD, GEOEYE-1} datasets. The reason is that users prefer to select an image that just contains the AOI, and such a situation is more likely to occur for RS images with small coverage. At this point, centrality greatly dominates the users' choice preferences, so the IE performs better than the AS on medium- and small-coverage RS image types. Even so, the INDEX outperforms the IE on QUICKBIRD; on GEOEYE-1, the INDEX and the IE have similar performance. This shows that the INDEX successfully retains the characteristics of the IE and, after adding considerations of additional information, performs better than the IE on QUICKBIRD. Overall, the NDCG of the INDEX with images recommended separately for each RS image type is superior to that with images combined for recommendation across all the included RS image types. The NDCG of the INDEX is best on three of the six image types, is acceptable on the other three, and never yields the worst results.
When the images of all the participating RS image types are combined for recommendation, the NDCG of the IE is the best. However, when an individual RS image type is recommended separately, the NDCG of the INDEX is generally superior to those of the IE and the other indicators.
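For reference, the NDCG@k computation used throughout these comparisons can be sketched as follows (a minimal sketch; the ranked scores below are illustrative user ratings, not values from the experiment):

```python
import math

def dcg_at_k(scores, k):
    # Discounted cumulative gain over the top-k relevance scores:
    # each score is discounted by log2 of its 1-based rank + 1.
    return sum(s / math.log2(i + 2) for i, s in enumerate(scores[:k]))

def ndcg_at_k(ranked_scores, k):
    """NDCG@k: DCG of the ranking divided by the DCG of the ideal ordering."""
    ideal_dcg = dcg_at_k(sorted(ranked_scores, reverse=True), k)
    return dcg_at_k(ranked_scores, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Ground-truth user scores in the order an indicator ranked three images:
print(round(ndcg_at_k([8, 6, 9], 3), 3))  # 0.955 (imperfect ordering)
print(ndcg_at_k([9, 8, 6], 3))            # 1.0   (ideal ordering)
```

A perfectly ordered ranking yields NDCG = 1, which is why only AOIs with at least k candidate images contribute an NDCG(k) evaluation.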

Evaluations of the INDEX Improvement Rates According to RS Image Type
A total of 100 AOIs with candidate images were used for calculating the average NDCG of the four indicators on each of the five RS image types (the IKONOS images, with smaller coverage, can only support k = 2 NDCG for the 100 AOIs). The observation indicated that the NDCG of the INDEX is better than those of AS, IE and the Hausdorff distance for every RS image type (k = 3). The average INDEX-based improvement rate over AS can reach 0.76% on the GEOEYE-1 type (Figure 25).

Other Evaluation Criteria Demonstrations
According to the definitions of precision and recall in previous studies, we considered user scores of 6 or higher as correct recommendations. A confusion matrix is shown in Figure 26, where |R| represents the images with a rating of 6 or higher for each AOI in the simulated RS image database. Accordingly, we calculated the precision and recall for each AOI and obtained a graphical data output for the P-R curve, AP, and mAP for each indicator (i.e., AS, IE, Hausdorff distance, and INDEX), according to the specified k values.

To illustrate the recommendation effect of each indicator, we calculated the precision@k and recall@k of each AOI and averaged the results with respect to k to obtain the P-R curve and AP for each recommendation indicator. According to the definitions provided in previous studies [30], after acquiring the P-R curve, the AP for each recommendation indicator was determined by calculating the area under the curve (AUC). The mAP was obtained by averaging the AP values for a single query [31]; in this study, the AP values of 100 AOIs were averaged. Figure 28 shows the process for calculating the P-R curve, AP, and mAP. According to Figure 28, the four leftmost recommendation indicators (AS, IE, Hausdorff distance, and INDEX) in the comparison were tested against the 100 AOIs and 10,000 simulated remote sensing images, covering six specifications in the simulated image database. The number of candidate images for scoring was determined based on the condition of fully containing the AOI (e.g., AOI1 was contained in nine images). Next, the precision and recall values were calculated for all k values and expressed as Precision@k and Recall@k, respectively. For AOI1, which was contained in nine images, we calculated up to Precision@9 and Recall@9. Because k was limited to a maximum of nine in the scope provided in a previous study, this condition was followed; for example, although AOI100 was contained in 13 candidate images, only Precision@9 and Recall@9 were calculated. The AP values for AOI1-AOI100 were calculated and averaged to determine the mAP value. To calculate the AP values of the four recommendation indicators for each AOI query condition, we averaged the precision and recall values for the various k values. Averaged precision and averaged recall are defined in Figure 27.

Averaged precision and averaged recall were expressed as P@k and R@k, respectively, as shown in Figure 28 ((P@1, R@1), (P@2, R@2), (P@3, R@3), ..., (P@9, R@9)). The averaged precision and averaged recall values were used to plot the P-R curves of the four recommendation indicators for the 100 AOI query conditions, and the corresponding AP values were obtained by calculating the AUC (area under the curve).

In order to enrich the indicator evaluations and comparisons, we applied five evaluation criteria in our experiment. Figure 29 illustrates the experimental system architecture and steps. The architecture includes a user rating collection module and an evaluation module. In the user rating collection module, the ground truth of each AOI was averaged from the total number of times each AOI was scored in the system (e.g., AOI1 was tested by nine users). The evaluation module calculated AS, IE, the Hausdorff distance, and the INDEX indicator. The Precision, Recall, P-R curve, AP, and mAP values were derived for each tested AOI and spatial recommendation parameter (i.e., AS, IE, Hausdorff distance, INDEX), as shown in Figure 29.

Figure 30 shows the averaged precision curves for the four recommendation indicators, based on 100 AOIs for k = 1-9. According to the figure, INDEX and IE exhibited equally optimal performance, followed by AS and then the Hausdorff distance. Figure 31 shows the averaged recall curves for the four recommendation indicators based on 100 AOIs for k = 1-9; here, IE exhibited the optimal performance, followed by INDEX, AS, and then the Hausdorff distance. Figure 32 shows the averaged precision-recall curves for the four recommendation indicators, based on 100 AOIs for k = 1-9. According to the referenced studies, the higher the AUC of the P-R curve, the more favorable the performance. The figure indicates that IE and INDEX exhibited similar performance and were superior to AS and the Hausdorff distance; the Hausdorff distance had the lowest AUC, representing the worst performance among the four indicators in the P-R curve evaluations.
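The Precision@k / Recall@k bookkeeping and the AUC-based AP described above can be sketched as follows (an illustrative sketch: relevance is a Boolean for a user score of 6 or higher, and the trapezoidal rule approximates the AUC; the paper's exact implementation may differ):

```python
def precision_recall_at_k(ranked_relevant, total_relevant, k):
    """ranked_relevant: Booleans (user score >= 6) in ranked order."""
    hits = sum(ranked_relevant[:k])
    precision = hits / k
    recall = hits / total_relevant if total_relevant else 0.0
    return precision, recall

def average_precision(ranked_relevant, total_relevant, k_max=9):
    # Build the P-R points for k = 1..k_max (capped at 9 as in the text),
    # then integrate with the trapezoidal rule to approximate the AUC.
    points = [precision_recall_at_k(ranked_relevant, total_relevant, k)
              for k in range(1, min(k_max, len(ranked_relevant)) + 1)]
    points.sort(key=lambda pr: pr[1])          # order points by recall
    auc = 0.0
    for (p0, r0), (p1, r1) in zip(points, points[1:]):
        auc += (r1 - r0) * (p0 + p1) / 2.0     # trapezoid between points
    return auc

ranked = [True, True, False, True, False]       # relevance of a top-5 ranking
print(precision_recall_at_k(ranked, total_relevant=3, k=3))  # both 2/3
```

Averaging these per-AOI P-R points over the 100 AOIs yields the averaged P@k / R@k curves of Figures 30-32.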

Average Precision Evaluation
The averaged AUC of the P-R curves based on 100 AOIs was obtained for each of the four recommendation indicators. Figure 33 shows a histogram of the AP values for the four recommendation indicators. According to the definition provided in a previous study, AP is calculated from the AUC of the P-R curve; a higher AP value indicates more favorable discrimination. The AP results revealed that the performance of INDEX was superior to those of IE, AS, and the Hausdorff distance.

mean Average Precision Evaluation
The AP values of the four recommendation indicators were computed over 100 AOIs, after which the results were averaged. Figure 34 shows a histogram of the mAP values of the four recommendation indicators. The results of mAP and AP were similar: the performance ranking orders of the indicators were the same, differing only in magnitude. According to the mAP results, INDEX was the best indicator, followed by IE, AS, and the Hausdorff distance.
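The aggregation here is simply the mean of the per-AOI AP values; a one-line sketch with hypothetical AP numbers:

```python
def mean_average_precision(ap_by_aoi):
    """mAP: average of the AP values obtained for each AOI query."""
    return sum(ap_by_aoi) / len(ap_by_aoi)

# Hypothetical per-AOI AP values for one indicator over four AOIs:
print(round(mean_average_precision([0.82, 0.75, 0.90, 0.77]), 2))  # 0.81
```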


Discussions
In this study, a new indicator, INDEX, was proposed to provide a single ranking recommendation based on the AS and IE indicators proposed in our previous work. To demonstrate its integrated characteristics inherited from the AS and IE indicators and its relatively comprehensive spatial recommendation ability, four parts of experiments and evaluations from three perspectives, namely, fundamental behaviors, user preference scores and ranking evaluation criteria, were developed and discussed in Section 5.
In the first part of the experiment, the INDEX indicator demonstrated its superiority in providing better recommendations compared to those based on only the AS or IE indicators. While both AS and IE have their shortcomings, the distinguishing characteristic of the INDEX indicator is its integrated capability for finding images that meet both the preference of extensibility (AS) and centrality (IE), especially among highly ranked RS images.
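To make the fusion step concrete, combining AS and IE into a single INDEX score via the first principal component can be sketched as follows (a minimal NumPy sketch under the assumptions of standardized inputs and a sign convention where higher is better; the paper's exact PCA configuration may differ):

```python
import numpy as np

def index_scores(as_scores, ie_scores):
    """Fuse AS and IE into one score along the first principal component
    (a sketch of the PCA-based combination; the standardization and sign
    choices here are assumptions, not the paper's exact settings)."""
    X = np.column_stack([as_scores, ie_scores]).astype(float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each indicator
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    pc1 = eigvecs[:, np.argmax(eigvals)]       # first principal component
    if pc1.sum() < 0:                          # fix sign so higher = better
        pc1 = -pc1
    return X @ pc1                             # projection = INDEX score

as_s = [0.2, 0.9, 0.5, 0.7]                    # illustrative AS scores
ie_s = [0.3, 0.8, 0.6, 0.9]                    # illustrative IE scores
ranked = np.argsort(-index_scores(as_s, ie_s))
print(ranked)  # image indices from highest to lowest INDEX: [1 3 2 0]
```

Because the two indicators are positively correlated, the first component captures most of their shared variance, which is why a single INDEX ranking can retain the characteristics of both AS and IE.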
In the second part of the experiment, the recommendations from the proposed indicators were compared with the quantitative scores collected from human subjects. The results in Table 4 demonstrate the high consistency between users' image selection preferences and the recommendations of the INDEX and IE indicators (80% consistency with the user score ranking among the top five images). According to the conclusions of the previous study, users are more satisfied with the IE-based recommendations. However, the results show that images with a high IE score may have an extremely poor AS score. To avoid this shortcoming, the INDEX indicator can neutralize the dominant effect of the IE indicator and deliver balanced recommendations by incorporating the AS characteristics.
In the third part of the experiment, NDCG and other well-known evaluation methods were introduced for evaluating the performance of the four indicators. The NDCG results show that the INDEX indicator outperforms AS, IE and the Hausdorff distance. We also evaluated the performance of the four indicators for each type of simulated image, using the NDCG of the available k values. The NDCG of the INDEX indicator led in three of the six image types and was equivalent to the IE indicator for the other three. The improvement rates of the average NDCG were also calculated for the INDEX indicator over the other three indicators for each simulated image type. Five evaluation methods, including precision, recall, the P-R curve (precision-recall curve), AP (average precision) and mAP (mean average precision), were further used to evaluate the performance of the four indicators. In the precision, recall and P-R curve evaluations, the INDEX indicator performs very close to the IE indicator and outperforms the AS indicator and the Hausdorff distance. In the AP and mAP evaluations, the histograms demonstrate results consistent with NDCG and the other evaluations.
To deliver a consistent and solid foundation for comparison with the previous study, the results of the INDEX indicator were compared with those of the AS and IE indicators using identical datasets (10,000 simulated RS images) and methods (precision, recall and NDCG) from our previous study. We intended to deliver unbiased comparisons of the INDEX indicator with the AS and IE indicators.
The following lists the major findings and evaluation in this research: 1.
In terms of the spatial consideration, the centrality appears to be the dominant factor when considering the selection of RS images, but only considering that the IE indicator may result in recommending images with superior centrality, but poor extensibility. The INDEX indicator demonstrates its capability to resolve this issue by providing ranking recommendation that consider both spatial perspectives.

2.
RS image coverage has large impacts on the scores of the AS and INDEX indicator. When images with relatively different levels of spatial coverages are simultaneously considered, images with larger spatial coverages have higher chances of receiving better ranking scores. This may bring negative impacts to the recommendation. To avoid such dominant influence, further examination on the impacts of spatial coverage is necessary in the future.

3.
In the operation model of GEE, users are provided with a variety of powerful tools for their applications from the abundant volume of remote sensing images archives. It enables the domain users to quickly develop their workflows based on their knowledge of remote sensing images, including the search, interpretation, processing and visualization. With the fast variety and volume growth of remote sensing images that practically accumulate every day, one of the criteria of using remote sensing images begins with efficiently finding the right images. From spatial perspectives, the ranking mechanism proposed in this paper presents a way to provide a single ranking result based on the formalization of RS image selection knowledge. We believe this ranking and recommendation mechanism can serve as good complimentary aids to current RS image cloud-based platforms like GEE. With the use of catalogue service based on standardized metadata, such platforms can further expand their capability of services.

4.
Only spatial perspectives are considered in this research. Other constraints are absolutely necessary to be integrated with spatial constraints for narrowing down the list of candidate images and even provide reasonable recommendations. Similar to what has been addressed in this research, some of the constraints, e.g., temporal constraints, can also be examined from the perspective of "degree of difference". The integrated model of spatial and temporal perspectives would require more in-depth examination in the future.

5. Machine learning has demonstrated outstanding ability in big-data applications related to remote sensing, such as feature detection and classification [32]. For the RS image ranking and recommendation problem, machine learning may become another effective approach in the future.

Conclusions
Given the rapid increase in the quantity of RS images, the ability to search for the most appropriate RS images deserves more research attention, and a ranking mechanism is required to help users rank RS images. In our previous study, we proposed the LIFE framework with the AS and IE indicators for ranking and recommending RS images according to user-defined AOIs. In this work, we refined our previous efforts by combining the strengths of the AS and IE indicators into a single indicator called INDEX. PCA reduces the number of variables while retaining their features; we therefore used PCA to combine AS and IE into a meaningful composite score. The experiment indicated that the INDEX indicator is positively correlated with user scores: when users give a candidate RS image a high score, the corresponding INDEX value is also relatively high. The statistical analysis in Section 5.3.3 reveals that INDEX compensates for IE's weak pattern recognition in the intermediate score range and improves the performance of AS in the last ascending period. We then used NDCG calculations to verify our work. The current outcomes confirm those of previous studies: users' interest in centrality is higher than their interest in neighbor information. The NDCG results indicated that users' satisfaction with the indicators ranks in the following order: IE, INDEX, and AS. However, we found that INDEX performs best when images come from the same platform. If the user selects an image across all platforms, IE is the best of the four indicators as judged by NDCG; if the user works with a single image platform, INDEX is better in most cases.
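The PCA-based merging of the two indicators can be sketched as follows. This is an illustrative outline only: the AS and IE scores below are made-up values, and the first-principal-component loadings stand in for the merging coefficients; the paper's actual coefficients come from its own experimental data.

```python
import numpy as np

# Hypothetical AS and IE scores for six candidate images (illustrative values,
# not from the paper's experiments).
as_scores = np.array([0.82, 0.55, 0.91, 0.40, 0.73, 0.66])
ie_scores = np.array([0.78, 0.60, 0.88, 0.35, 0.70, 0.72])

# Stack and standardize the two indicators so neither dominates by scale.
X = np.column_stack([as_scores, ie_scores])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA via eigen-decomposition of the covariance matrix; the loadings of the
# first principal component act as the coefficients of the merging equation.
cov = np.cov(Xs, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
w = eigvecs[:, np.argmax(eigvals)]   # loading vector of the first component
if w.sum() < 0:                      # fix the sign so higher scores rank higher
    w = -w

index_scores = Xs @ w                # combined INDEX-style score per image
ranking = np.argsort(-index_scores)  # indices of images, best first
```

Because AS and IE are highly correlated, the first component captures most of the shared variance, which is why a single combined score can retain the features of both indicators.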
The contributions of this study are as follows.

1. We merged the highly correlated AS and IE ranking parameters for RS images through PCA, which provided reasonable values for the coefficients of the merging equation.

2.
The high rank part (best 10%) of AS and IE diverse drastically for different considerations behind each of two indicators. The INDEX indicator can effectively converge the high rank part which users care most and deliver better recommendations than either AS or IE indicators.

3.
We performed the NDCG and other ranking evaluation methods from user scores representing selection preferences for RS image applications. The results verified the applicability and superiority of the INDEX indicator and demonstrate better spatial recommendations for the entire LIFE mechanism.
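For reference, the NDCG metric used in the evaluation can be sketched as below. The relevance values are hypothetical stand-ins for user scores of a ranked image list, not the study's data.

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: graded relevance discounted by
    # log2 of the (1-based) rank position plus one.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    # Normalize by the DCG of the ideal (descending-relevance) ordering,
    # so a perfect ranking scores exactly 1.0.
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Hypothetical user relevance scores, listed in the order an
# indicator ranked the candidate images.
ranked_by_indicator = [3, 2, 3, 0, 1, 2]
score = ndcg(ranked_by_indicator)
```

A higher NDCG means the indicator's ordering agrees more closely with the users' own preference ordering, which is how the IE, INDEX, and AS indicators were compared.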