Following the guidelines developed for the analysis of multispectral remotely sensed images, the classification issue can be approached using different principles, depending on the method for measuring spectral matching or spectral similarity: deterministic-empirical methods and stochastic approaches [23]. The deterministic measures include the spectral angle, the Euclidean distance and the cross-correlation of spectral vectors in the hyperspectral space. The stochastic measures evaluate the statistical distributions of the spectral reflectance values of the targeted regions of interest. Within this framework, a large variety of classification methods can be grouped from different perspectives [24].

#### 2.3.3. Spectral Similarity

The approach proposed in this paper is based on measuring the spectral variations in a 3D color space where the reference endmembers are a theoretical “white” snow and a theoretical “black” target. The parameters estimated in this vector system are the spectral angle defined by [28] and the Euclidean distance [21], calculated with respect to the white and black references, respectively. While the parameter based on the Spectral Similarity (SS) represents a brightness-independent spectral feature, the Euclidean distance of the vector can be defined as a brightness-dependent feature. Involving all three color components increases the number of surface types that can be discriminated: snow, shadowed snow and not snow. The proposed approach (Figure 1) was developed in the R programming environment [29].

The first step consists of rearranging the three color components of each pixel into a new two-dimensional vector space, mathematically defined as follows:
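A plausible form for the two parameters, consistent with the descriptions below (angle measured against the white reference $(1, 1, 1)$, distance measured from the black reference $(0, 0, 0)$, maximum distance $\sqrt{3} \approx 1.73$), is the following sketch:

```latex
% Hypothetical reconstruction of Equations (3) and (4) from the prose
\theta = \arccos\!\left( \frac{P_R + P_G + P_B}{\sqrt{3}\,\sqrt{P_R^2 + P_G^2 + P_B^2}} \right)
\qquad \text{(3)}

\Delta = \sqrt{P_R^2 + P_G^2 + P_B^2}
\qquad \text{(4)}
```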

The spectral angle $\theta$ in Equation (3) represents the relative proportion of the three pixel components ($P_R$, $P_G$ and $P_B$) in relation to the reference composition ($R_R = 1$, $R_G = 1$ and $R_B = 1$). The angle varies from zero, which can be associated with a “flat” behavior of colors (R = G = B), to $\frac{\pi}{2}$, which indicates behavior very dissimilar from the theoretical “white” reference.

The spectral distance $\Delta$ in Equation (4) is, conversely, an estimation of the vector length in the RGB space. It can range from 0 (black) to 1.73 (white, i.e., $\sqrt{3}$) and can be associated with the Euclidean distance from a “black” reference RGB composition ($R_R = 0$, $R_G = 0$ and $R_B = 0$). While this parameter is sensitive to the brightness of colors, the spectral angle is invariant with brightness [23]. The outcome of this step is a frequency count of pixels over the two spectral components with a resolution of 0.05. Furthermore, the total number of included pixels ($f_{tot}$) and the area enclosed by the cluster perimeter ($P_f$) were estimated.
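As a concrete illustration of this first step, the per-pixel computation of $\theta$ and $\Delta$ and the 0.05-resolution frequency counting can be sketched as follows (in Python rather than the authors' R implementation; the function names and the handling of pure-black pixels are assumptions):

```python
import math
from collections import Counter

SQRT3 = math.sqrt(3.0)

def spectral_features(pixel):
    """Return (theta, delta) for an RGB pixel with components in [0, 1].

    theta: angle to the theoretical "white" reference (1, 1, 1);
           0 for flat colors (R = G = B), at most pi/2.
    delta: Euclidean distance from the "black" reference (0, 0, 0);
           ranges from 0 (black) to sqrt(3) ~ 1.73 (white).
    """
    r, g, b = pixel
    delta = math.sqrt(r * r + g * g + b * b)
    if delta == 0.0:
        return 0.0, 0.0  # pure black: the angle is undefined, 0 is assumed
    cos_theta = (r + g + b) / (SQRT3 * delta)
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))  # clamp rounding noise
    return theta, delta

def frequency_table(pixels, res=0.05):
    """Count pixels on a (delta, theta) grid at the stated 0.05 resolution;
    keys are integer bin indices (delta_bin, theta_bin)."""
    counts = Counter()
    for p in pixels:
        theta, delta = spectral_features(p)
        counts[(int(delta // res), int(theta // res))] += 1
    return counts
```

The total $f_{tot}$ is then the sum of the counts, and the cluster perimeter $P_f$ is derived from the occupied grid cells in the subsequent step.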

The second step of the procedure consists of discriminating clusters in the obtained frequency distribution; a watershed algorithm [30] can support this segmentation phase. Each cluster was fitted with a normal distribution in order to retrieve its modes (defined by $\mu_\Delta$ and $\mu_\theta$) and deviations ($\sigma_\Delta$ and $\sigma_\theta$). If clusters are very close to each other, they can be combined into one larger group depending on the probability of discriminating them using the Mahalanobis distance. The criterion adopted for the definition of the cluster perimeter was that the pixel frequency $f(\Delta', \theta')$ must be higher than the Poisson error of the adjacent pixel $f(\Delta, \theta)$ (Equation (5)).
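This second step can be sketched in Python under stated assumptions: a diagonal covariance built from the per-axis deviations, a merge threshold of 2.0 (not given in the text), and a reading of the Poisson error of a count $f$ as $\sqrt{f}$:

```python
import math

def mahalanobis_2d(c1, c2):
    """Mahalanobis-like distance between two clusters, each given as
    (mu_delta, mu_theta, sigma_delta, sigma_theta).  A diagonal covariance
    built from the pooled per-axis deviations is assumed."""
    m1d, m1t, s1d, s1t = c1
    m2d, m2t, s2d, s2t = c2
    var_d = s1d ** 2 + s2d ** 2
    var_t = s1t ** 2 + s2t ** 2
    return math.sqrt((m1d - m2d) ** 2 / var_d + (m1t - m2t) ** 2 / var_t)

def merge_close_clusters(clusters, threshold=2.0):
    """Greedily merge clusters whose distance falls below `threshold`
    (the value 2.0 is an assumption, not taken from the paper)."""
    merged = []
    for c in clusters:
        for i, m in enumerate(merged):
            if mahalanobis_2d(c, m) < threshold:
                # naive merge: average the modes, keep the larger deviations
                merged[i] = (
                    (c[0] + m[0]) / 2, (c[1] + m[1]) / 2,
                    max(c[2], m[2]), max(c[3], m[3]),
                )
                break
        else:
            merged.append(c)
    return merged

def in_perimeter(f_neighbor, f_pixel):
    """Perimeter criterion (Equation (5)): keep a grid cell when its
    frequency exceeds the Poisson error (square root of the count) of
    the adjacent cell."""
    return f_neighbor > math.sqrt(f_pixel)
```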

The procedure for the delimitation of the cluster perimeter was implemented using a per-pixel method following [31].

The final step consists of identifying the surface type (snow, not snow and shadowed snow). This step was defined by observing the frequency distributions of pixels in the defined spectral space (Figure 3). Snow covers were generally characterized by higher $\theta$ angles and lower $\Delta$ values than not-snow covers. Snow centroids (defined by $\mu_\Delta$ and $\mu_\theta$) were generally positioned where angles were higher than 0.9 and distances were lower than 0.1.

Furthermore, the ranges of cluster values ($\Delta_{max}$, $\Delta_{min}$, $\theta_{max}$ and $\theta_{min}$) showed small distance variations compared to angle variations in the case of snow-covered surfaces. From this point of view, clusters with limited perimeters ($P_f < 0.04$) and a high number of included pixels ($f_{tot} > 50\%$ of the analyzed pixels) described surfaces with a homogeneous reflective behavior, as expected for snow-covered surfaces. The second rule that can be considered includes clusters with limited perimeters ($P_f < 0.04$) and a consistent number of included pixels ($10\% < f_{tot} < 50\%$ of the analyzed pixels). The optical behavior of those clusters must be coupled with their centroid position, which must have a low spectral distance ($\mu_\Delta < 0.5$). These constraints also describe clusters characterized by a homogeneous spectral behavior coherent with a snow-covered surface. The third rule, which completes the classification procedure, consisted of estimating the range of $\Delta$ between the defined clusters in the image and defining a threshold ($T_\Delta$) that discriminates snow from other surface types. Two situations can occur when defining the clusters above the threshold as snow-covered surface: one with multiple clusters (Equation (6)) and one with a single polygon (Equation (7)).
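The three rules can be condensed into a single decision function; a minimal Python sketch, quoting the thresholds from the text and treating "above the threshold" as $\mu_\Delta > T_\Delta$ (the function name, layout and that reading are illustrative assumptions):

```python
def classify_cluster(p_f, f_tot_pct, mu_delta, t_delta=None):
    """Label one cluster as "snow" or "not snow".

    p_f       -- cluster perimeter
    f_tot_pct -- included pixels as a percentage of the analyzed pixels
    mu_delta  -- distance component of the cluster centroid
    t_delta   -- optional image-wide threshold T_delta (rule 3)
    """
    # Rule 1: compact cluster holding most of the analyzed pixels
    if p_f < 0.04 and f_tot_pct > 50:
        return "snow"
    # Rule 2: compact cluster, consistent pixel share, low centroid distance
    if p_f < 0.04 and 10 < f_tot_pct < 50 and mu_delta < 0.5:
        return "snow"
    # Rule 3: cluster above the image-wide threshold T_delta
    if t_delta is not None and mu_delta > t_delta:
        return "snow"
    return "not snow"
```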

Once the classification was performed, the amount of snow-covered surface was obtained by adding the contributions of each cluster identified as snow-covered. Furthermore, the quality of the final output was checked for the target area over the 10-year series of images. From this perspective, the ground control points were used to estimate any induced shifting of the target view, and also to check for the occurrence of adverse meteorological conditions (fog, clouds, intense raining/snowing) that could affect the images. Finally, the dataset was filtered for artifacts by coupling this analysis with basic tests for file corruption and image resolution.