Communication

Unraveling Meteorological Dynamics: A Two-Level Clustering Algorithm for Time Series Pattern Recognition with Missing Data Handling

by Ekaterini Skamnia, Eleni S. Bekri and Polychronis Economou *

Department of Civil Engineering, University of Patras, 265 04 Patras, Greece

* Author to whom correspondence should be addressed.

These authors contributed equally to this work.
Stats 2025, 8(2), 36; https://doi.org/10.3390/stats8020036
Submission received: 29 March 2025 / Revised: 1 May 2025 / Accepted: 5 May 2025 / Published: 9 May 2025

Abstract: Identifying regions with similar meteorological features is of both socioeconomic and ecological importance. Towards that direction, useful information can be drawn from meteorological stations spread across a broader area. In this work, a time series clustering procedure composed of two levels is proposed, focusing on clustering spatial units (meteorological stations) based on their temporal patterns, rather than clustering time periods. It is capable of handling univariate or multivariate time series, with missing data or different lengths, as long as they share a common seasonal time period. The first level clusters the dominant features of the time series (e.g., similar seasonal patterns) by employing K-means, while the second produces clusters based on secondary features. Hierarchical clustering with Dynamic Time Warping for the univariate case, and with multivariate Dynamic Time Warping for the multivariate scenario, is employed for the second level. Principal component analysis or Classic Multidimensional Scaling is applied before the first level, while an imputation technique is applied to the raw data in the second level to address missing values in the dataset. This step is particularly important given that missing data are a frequent issue in measurements obtained from meteorological stations. The method is applied first to the available precipitation time series and then to the mean temperature time series obtained by the automated weather stations network in Greece. Finally, both characteristics are employed together to cover the multivariate scenario.

1. Introduction

With recent developments in technology and the many sources of information available, time series are among the most common forms of collected data [1]. These time series can be highly dimensional, possibly correlated, and, even if they measure similar phenomena, they may differ in length [2,3,4]. Additionally, they are usually of considerable length, since data are collected regularly and often at high frequency. Such data arise in a variety of scientific fields, such as meteorology [5], medicine [6], finance [7], and epidemiology [8], underscoring the importance of analyzing them.
Over the years, time series data have been employed in a variety of approaches and techniques. These include, among others, curve smoothing and fitting [9,10], the identification of patterns like long-term trends, cycles [11] or seasonal variation [12], forecasting [13,14], and change point or outlier detection [15,16]. Additionally, time series data are used for intervention analysis, where the effects of significant events on the studied phenomena are identified, and for clustering by identifying categories of data in a time series, or groups of time series, with similar characteristics [17].
Through the clustering of the available spatiotemporal data, which is the main interest of this paper, we aim to identify groups of time series with similar temporal patterns, i.e., to cluster spatial units that exhibit similar temporal patterns, rather than to model dependencies or forecast future values. This distinction is critical, as clustering prioritizes exploratory analysis of inherent structures in the data, such as seasonal or trend similarities, without requiring explicit parametric assumptions about time-dependent processes. With this approach, important patterns and/or anomalies can be discovered in the structure of the data, and valuable information can be extracted [18]. Clustering is an unsupervised data-mining technique that organizes similar data objects into groups. The objective is that these groups (i.e., clusters) are as dissimilar from each other as possible, while the objects within the same cluster are as similar as possible [19]. The identified clusters may represent groups of data points/objects, or time series observations, as in the present paper, collected by different sensors or at various locations, exhibiting similar temporal patterns and dominant features, irrespective of spatial proximity.
In the latter case, $n$ different univariate time series, $ts_i$, $i = 1, \ldots, n$ (or $\mathbf{ts}_i$, $i = 1, \ldots, n$, for multivariate), of possibly different lengths, i.e., with a possibly different number of observations, are grouped into $K$ different clusters, namely, $C_1, C_2, \ldots, C_K$, according to a similarity criterion. Considering that $TS$ is the set of the $n$ time series, i.e., $TS = \{ts_1, ts_2, \ldots, ts_n\}$, these clusters, under a hard clustering algorithm, should satisfy the following relationships:
$$TS = \bigcup_{i=1}^{K} C_i \quad \text{and} \quad C_i \cap C_j = \emptyset \ \text{for } i \neq j.$$
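The two hard-clustering conditions above can be checked mechanically. The following Python sketch (with hypothetical string labels standing in for time series) verifies that a proposed set of clusters covers the whole collection and is pairwise disjoint.

```python
# Illustrative check of the hard-clustering conditions: the clusters must
# cover the whole set TS and be pairwise disjoint.

def is_hard_partition(TS, clusters):
    """Return True if `clusters` is a partition of the set `TS`."""
    union = set().union(*clusters)
    total = sum(len(c) for c in clusters)
    # Disjointness holds iff no element is counted twice across clusters.
    return union == set(TS) and total == len(union)

TS = {"ts1", "ts2", "ts3", "ts4"}
assert is_hard_partition(TS, [{"ts1", "ts2"}, {"ts3", "ts4"}])
assert not is_hard_partition(TS, [{"ts1", "ts2"}, {"ts2", "ts3", "ts4"}])
```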
Over the years, numerous applications of time series clustering have been presented in a diversity of domains: for instance, in medicine, to detect brain activity based on image time series [20] or for diabetes prediction [21]; in biology and psychology, to identify related genes [22] or for a deeper analysis of human behavior [23]. Concerning the climate and the environment, time series clustering has been employed to discover climate indices [24] and weather patterns [25], or to analyze the low-frequency variability of climate, as in [26].
Regarding clustering algorithms, several techniques are available, although more of them target static data than time series data. More specifically, there is a diversity of algorithm types in the literature for static data, which can be summed up into five major categories: partitioning methods, hierarchical methods, density-based methods, grid-based methods, and model-based methods [27].
Briefly, the partitioning methods attempt to partition the objects into k distinct groups. The general idea is that a relocation method iteratively reassigns data points across clusters to optimize the cluster structure. This optimization is guided by a specific criterion, the most common choice for the partition procedure being the within-cluster sum of squares (WCSS). Another alternative for deriving the partitions is the so-called Graph-Theoretic Clustering [28]. Hierarchical methods, on the contrary, define a tree-like structure depicting the hierarchical relationship between objects [29]. More specifically, they can be divisive or agglomerative based on the merging direction (i.e., a top-down or a bottom-up merge). In density-based clustering, clusters are formed based on the idea that a cluster in a data space is a contiguous region of high point density; clusters are separated by contiguous subspaces of points with low density [30]. Grid-based algorithms divide the data space into a number of cells so that a grid structure is formed, and clusters are afterwards formed from these cells in the grid structure [31]. Lastly, model-based clustering is a more statistical approach: data are assumed to have been generated from a probabilistic model, and to arrive at a clustering, the parameters and components of the distribution have to be determined [32,33,34].
On the contrary, time series clustering methods are scarcer and, in most cases, actually rely on methods employed for static data. According to the study of [19], time series methods can be divided into those that are raw data based, those that are feature based, and those that are model based. Methods belonging to the first category employ the raw time series along with an appropriate distance measure. These methods can be considered analogous to those employed for static data, such as hierarchical clustering. The other two categories involve the conversion of the raw time series into a lower-dimensional feature vector or into model parameters, respectively. Afterwards, conventional clustering algorithms (e.g., hierarchical, partitioning, and grid based) are applied to the extracted feature vectors or model parameters.
Recently, advanced multivariate time series (MTS) clustering approaches have emerged, including LSTM-DTW hybrid models [35], as well as shape-based clustering and graph-based methods [36] that capture complex temporal dependencies. Similarly, MTS imputation has seen the development of powerful frameworks such as GAIN [37], BRITS [38], and temporal graph-based models, which address missing data by leveraging deep learning architectures and temporal graph representations. An extended time series clustering review is presented in [18] as well as in [39], while a review of deep time series clustering is given in [40]. Recent additions to the multivariate time series clustering literature are [41,42,43], while multivariate time series imputation techniques, a critical preprocessing step since missing values are common, are discussed in [44,45].
It is also worth mentioning that most clustering algorithms rely on a properly selected distance measure. As a result, it is no surprise that the selection of a distance measure plays a profound role in time series clustering as well. According to [18], deciding on the optimal measure is a controversial issue among researchers because it generally depends on the structure of the time series, their length, and the clustering method that is employed. It also depends on the aim of clustering, and more precisely, whether it aims to find similarity in time, in shape, or in change [46]. Similarity in time signifies that time series vary in a similar way at each time point. Clustering based on similarity in shape aims to group time series that share common shape features, while in the third case, time series are clustered based on the manner in which they vary from time point to time point.
In the present work, a two-level clustering algorithm for time series, relying on univariate but also multivariate measurements, is proposed. The algorithm aims to identify similar patterns in different time series that may differ in length but share a common seasonal time period. More precisely, in the first level of the clustering procedure, the data of all the available time series are employed to identify groups of time series with similar seasonal and, if present, trend patterns. The raw data of each time series in each cluster identified in the first level are then used to further separate the available data. In this step, imputation methods are also employed to handle the missing values in each characteristic of the available time series [47]. These characteristics could be, for instance, measurements of precipitation levels, mean temperature levels, etc.
The proposed two-level algorithm is particularly designed for clustering time series with missing data and seasonal patterns. Unlike modeling techniques that focus on error-term dynamics (such as ARIMA or SARIMA), our approach emphasizes the extraction of dominant temporal features (seasonality and trends) and employs distance metrics specifically designed to handle time series misalignments (e.g., Dynamic Time Warping). This approach ensures that the clusters reflect temporal coherence rather than relying on spatial intuition or parametric relationships.
The rest of the paper is organized as follows. Section 2 presents the motivation behind this work along with its importance. In Section 3, the proposed methodology is presented in detail for both scenarios of univariate or multivariate time series. In Section 4, the proposed algorithm is applied, in the first place, to the precipitation data of Greece and to the mean temperature data. Afterwards, it is applied to both precipitation and mean temperature data from the same region, in order to cover the scenario of multivariate time series. The resulting clusters are derived purely from temporal features, enabling data-driven identification of meteorological regions without relying on predefined spatial or parametric assumptions. Finally, some concluding remarks are presented in Section 5.

2. Motivation

Identifying regions that share common meteorological features is of great socioeconomic and ecological importance for decision and policy making. Additionally, identifying regions with similar meteorological characteristics enables better management of available resources such as water, energy, or agricultural productivity, allowing the adoption of more efficient and sustainable policies and planning. Meteorological variables, such as wind and temperature, also play an essential role in renewable resources such as water. As mentioned in [13], more than 75% of global greenhouse gas emissions and approximately 90% of all carbon dioxide emissions are caused by human activities, leading to a constantly changing climate environment. Thus, with the aid of renewable sources of energy, these percentages can be decreased in an attempt to stabilize climate change as much as possible and prevent its further effects on the Earth.
Under global climate change and the constantly increasing human pressures on aquatic ecosystems in terms of water quantity and quality, studying and modeling freshwater resources plays a key role in sustainable water and energy management and efficient decision-making [48,49,50]. A key step is to recognize similar regions, i.e., regions with similar meteorological characteristics, since dealing with individual weather stations is not only time-consuming but also prone to more variation than dealing with a group of homogeneous stations [51,52,53]; individual stations are, moreover, often insufficient due to the poor coverage of an area they provide and their frequently missing data.
The limited coverage of meteorological stations in certain areas, along with the frequent occurrence of missing data, presents challenges that are difficult to overcome, particularly in remote regions or those with complex topography. For example, Greece's varied and complicated topography, its long and complicated coastline, and its remarkably extensive island complex create a demanding environment to cover extensively. At the same time, these same topographic characteristics play a considerable role in the spatial distribution of precipitation and other meteorological characteristics across the country, making an adequate and reliable description of the climate and weather conditions a demanding task. Due to all these particular hydroclimatic characteristics, the division of Greece into climatologically homogeneous regions [54] (with similar precipitation and temperature characteristics), each comprising a number of meteorological stations and based on the similarity of the measured monthly precipitation or other relevant meteorological time series, is of great interest. Indeed, the high altitude of several mountainous remote areas, i.e., areas with limited accessibility, together with the extensive island complex, results in insufficient, totally absent, or inaccurate (i.e., with many missing observations) data for many areas.

3. Methodology

In this section, a two-stage clustering technique is proposed to identify clusters of (multivariate) time series with similar patterns that share a common seasonal time period (e.g., yearly) but may be different in length (i.e., number of observations) and/or may contain different percentages of missing values. The first stage relies on the dominant features, i.e., the trend, if present, and the seasonality, of the time series while the second makes use of the raw data and the imputation of any missing value to further enhance the clustering process, providing deeper insights into the internal structure and characteristics of the data.
In terms of notation, let $X^{(1)}, \ldots, X^{(m)}$ denote the $m$ available $\ell$-attribute time series (with a common period $d$ for seasonality). For each $X^{(j)}$, $j = 1, \ldots, m$, there are $n_j$ available observations, a part of which may be missing. As a result, at each time point $i \in 1, \ldots, n_j$ for the $X^{(j)}$ time series, the following $\ell$-dimensional observation vector is available:
$$X_i^{(j)} = \left[x_{1,i}^{(j)}, x_{2,i}^{(j)}, \ldots, x_{\ell,i}^{(j)}\right],$$
where $x_{b,i}^{(j)}$, $b = 1, \ldots, \ell$, is the $i$th value (possibly missing) of the $b$th attribute of the $X^{(j)}$ time series.
It should be mentioned that the employed clustering methods were selected initially for their interpretability and computational efficiency. The decision of not including advanced approaches for clustering or imputation was primarily driven by the computational demands of such models, which are often resource intensive and require extensive hyperparameter tuning. Additionally, a key priority of the present study was to maintain interpretability and methodological transparency, especially in the context of environmental data analysis.

3.1. Dominant Features Clustering

The first stage of clustering relies on the dominant features of the time series, i.e., the trend and the seasonal variation. The trend represents the long-term change in the values of a time series, while seasonal variation reflects regular cycles of the phenomena. In the present study, it is assumed, as already mentioned, that the available time series presents a seasonal variation with a common period. On the other hand, the trend, if present, is assumed to be adequately described by a family of functions that is capable of capturing a long-term, gradual change. Such families may include polynomial, exponential, and logistic functions.

3.1.1. Extracting Dominant Features

The trend, if present, is assumed to follow a common functional form fitted across all available time series. The estimated coefficients are stored for subsequent analysis. For instance, if the trend follows an S-shaped curve, the Pearl–Reed logistic model,
$$\tilde{x}_{ji}(t_i) = \frac{10^{\gamma_{ji}}}{\beta_{0ji} + \beta_{1ji}^{\,t_i}},$$
is used, and the parameters $(\gamma_{ji}, \beta_{0ji}, \beta_{1ji})$ are estimated for $i = 1, 2, \ldots, m$ and $j = 1, 2, \ldots, k$.
Several other models capture different growth dynamics. The Gompertz curve, originally developed for human mortality [55], is widely used for growth data and belongs to the Richards family of three-parameter sigmoidal models [56]:
$$y(t) = \alpha e^{-b e^{-ct}},$$
where α is the asymptotic upper bound, b is a scaling parameter, and c is a growth-rate coefficient. A more general form is the Richards Growth Model [57]:
$$y(t) = \alpha \left(1 + b e^{-kt}\right)^{1/(1-m)}, \quad m > 1,\ b > 1,\ k > 0,$$
which extends logistic and Gompertz models to accommodate more flexible growth patterns [58].
Simpler trends include the Linear Trend Model, based on linear regression:
$$y(t) = \beta_0 + \beta_1 t + \epsilon_t,$$
where $\epsilon_t$ is a random error term, and Polynomial Trend Models, which introduce higher-order terms to capture acceleration or deceleration:
$$y(t) = \beta_0 + \beta_1 t + \beta_2 t^2 + \cdots + \epsilon_t.$$
While applying a common trend model across all series may seem restrictive, these models provide considerable flexibility in capturing asymmetric growth, decay, and both linear and nonlinear behaviors. Moreover, long-term environmental trends are primarily influenced by climate change, which tends to exert a similar regional impact. Consequently, adopting a common family of functions may be a reasonable and effective approach.
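As an illustration of how such a common trend family can be fitted, the sketch below fits the Gompertz curve to a synthetic series with `scipy.optimize.curve_fit`. The paper does not prescribe a specific fitting routine, so the optimizer, the starting values, and the synthetic data here are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Gompertz trend y(t) = alpha * exp(-b * exp(-c*t)); in the proposed method
# the same functional family would be fitted to every series and the
# estimated coefficients stored as dominant features for the first level.
def gompertz(t, alpha, b, c):
    return alpha * np.exp(-b * np.exp(-c * t))

t = np.arange(60, dtype=float)
rng = np.random.default_rng(0)
y = gompertz(t, 10.0, 5.0, 0.1) + rng.normal(0.0, 0.05, t.size)

# Starting values (p0) matter for sigmoidal models; these are rough guesses.
params, _ = curve_fit(gompertz, t, y, p0=(8.0, 4.0, 0.05))
alpha_hat, b_hat, c_hat = params
```

The estimated triple (alpha_hat, b_hat, c_hat) would then feed the distance computations of the next step, alongside the seasonal indices.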
For the seasonal variation, the seasonal indices $\underline{s}_{ji} = (s_{ji1}, s_{ji2}, \ldots, s_{jid})$ for the $j$th attribute of the $i$th time series can be computed, for example, by averaging over all of the available values for the specific time period. For monthly recorded data, for instance, these seasonal indices, assuming a 12-month period, can simply be calculated by averaging over all the available values for that particular month, i.e., by calculating the following quantities:
$$s_{jik} = \frac{x_{jik} + x_{ji(12+k)} + x_{ji(2 \times 12 + k)} + \cdots}{\text{number of available addends}}$$
for $k = 1, 2, \ldots, 12$, each representing one of the 12 months (i.e., $k = 1$ representing the seasonal index for the measurements taken in January, and $k = 2$ in February). This procedure aggregates all available data for each month to capture recurring seasonal patterns, even if the time series has missing values or varying lengths. Again, these indices are stored and, along with the coefficients of the trend analysis, if present, are used in the following steps of the procedure.
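A minimal Python sketch of this averaging over available values, with NaN marking missing observations and a shortened period of 4 (instead of 12) to keep the example small:

```python
import numpy as np

def seasonal_indices(x, period=12):
    """Mean of all available (non-missing) values per position of the period."""
    x = np.asarray(x, dtype=float)
    n_periods = int(np.ceil(len(x) / period))
    padded = np.full(n_periods * period, np.nan)
    padded[:len(x)] = x
    # Rows are full periods, columns the positions (e.g. months) within one;
    # nanmean ignores missing values, i.e. it divides by the addends present.
    return np.nanmean(padded.reshape(n_periods, period), axis=0)

# 24 observations with period 4 and two missing values; each index is the
# average over the values actually observed for that position.
x = np.array([1.0, 2.0, 3.0, 4.0] * 6)
x[2] = np.nan
x[5] = np.nan
idx = seasonal_indices(x, period=4)
```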

3.1.2. Dimensionality Reduction

The coefficients of the trend analysis, if present, and the seasonal indices can be used to determine the distance matrix between all the available k-attribute time series. These distances can then be fed, for example, to the K-means clustering algorithm to identify time series with similar dominant features. However, applying K-means to large and high-dimensional datasets is not only challenging but also less efficient than applying it to lower-dimensional data. To overcome this problem, high-dimensional data are often first projected into a low-dimensional space, and the K-means algorithm is then applied in the reduced space [59,60,61]. Two of the most frequently used dimensionality reduction methods are principal component analysis (PCA) and Classical Multidimensional Scaling (CMDS), also known as principal coordinates analysis [62]. These two methods are closely related and are differentiated mainly by the type of input they take.
PCA is a statistical method that transforms a set of observations of possibly correlated variables into a smaller set of linearly uncorrelated variables without losing a large amount of information. More specifically, from the $p$ original standardized variables, denoted for example as $X_i$, $i = 1, 2, \ldots, p$ (in our case, these variables correspond to the standardized versions of the trend coefficients, if present, and the seasonal indices extracted in the previous step), PCA creates $p$ new variables, the so-called principal components (PCs), denoted as $PC_i$, $i = 1, 2, \ldots, p$, which are linearly uncorrelated and may each be written as a linear combination of the original variables. Specifically, the $j$th PC can be written in the following form:
$$PC_j = a_{j1} X_1 + a_{j2} X_2 + \cdots + a_{jp} X_p,$$
where $a_{ju}$ ($u = 1, 2, \ldots, p$) are appropriate weights that quantify the contribution of the $u$th original variable to the $j$th PC. For more details on PCA and its use in various applications, the reader is referred, among others, to [63,64,65].
It is of note that the principal components are constructed in such a manner that the first principal component accounts for the largest possible variance in the dataset (i.e., as much information as possible), the second principal component accounts for the next highest variance with the constraint that it is orthogonal to the preceding one, and so on. As a result, a smaller number of components than original variables can be retained while preserving as much information as possible. The number of principal components that are retained is usually decided by keeping those that explain a high percentage of the variance of the initial data, for example, at least 85%.
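A sketch of this retention rule with scikit-learn follows; the feature matrix here is synthetic, whereas in the method it would hold the standardized trend coefficients and seasonal indices, one row per time series.

```python
import numpy as np
from sklearn.decomposition import PCA

# Select the smallest number of PCs whose cumulative explained-variance
# ratio reaches the 85% threshold mentioned in the text.
rng = np.random.default_rng(1)
features = rng.normal(size=(40, 12))         # hypothetical feature matrix
features += rng.normal(size=(40, 1)) * 3.0   # inject one dominant direction

std = (features - features.mean(axis=0)) / features.std(axis=0)
pca = PCA().fit(std)
cumvar = np.cumsum(pca.explained_variance_ratio_)
n_keep = int(np.searchsorted(cumvar, 0.85) + 1)

# Scores on the retained components are the input for K-means later on.
scores = PCA(n_components=n_keep).fit_transform(std)
```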
Classical Multidimensional Scaling is a member of the family of Multidimensional Scaling (MDS) methods [66], which aim to discover underlying structures based on distance measures between objects or cases. The input for an MDS algorithm is item-item similarity (or, equivalently, dissimilarity) information, measured by the pairwise distances between every pair of points [67], while the output is a reduced-dimensional space in which the distances among the points reflect the proximities in the original data. For this reason, MDS is frequently used as a 2D or 3D data visualization technique, but it can also be interpreted as a dimensionality reduction technique. However, it should be noted that MDS relies on the similarities or dissimilarities between the data points, while PCA relies on the data points themselves.
More particularly, Classical MDS attempts to find an isometry between points distributed in a higher-dimensional space and points in a low-dimensional space. In short, it creates projections of the high-dimensional points (of dimension p) in an r-dimensional linear space, with r < p, arranging the projections so that the Euclidean distances between pairs of them resemble the dissimilarities between the high-dimensional points. More extensively, MDS starts with a table of dissimilarities or distances, which is converted into a proximity matrix. It then constructs a centered Gram matrix G and performs a spectral decomposition of G. Finally, the appropriate number of dimensions is decided. Further information on the procedure of CMDS (and MDS in general) can be sought in the corresponding sections of [68,69,70] as well as in [71]. As noted and proven by [70], there is a duality between principal components analysis and principal coordinates analysis, i.e., classical MDS where the dissimilarities are given by the Euclidean distance.
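The double-centering construction can be written down in a few lines. This is a minimal NumPy sketch of Classical MDS, not the exact implementation used in the paper; for Euclidean input distances it recovers the configuration up to rotation and reflection.

```python
import numpy as np

def classical_mds(D, r=2):
    """Classical MDS: double-center the squared distances, eigendecompose."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    G = -0.5 * J @ (D ** 2) @ J              # centered Gram matrix
    w, V = np.linalg.eigh(G)                 # eigenvalues in ascending order
    order = np.argsort(w)[::-1][:r]          # keep the r leading eigenvalues
    w_top = np.clip(w[order], 0.0, None)     # guard tiny negative values
    return V[:, order] * np.sqrt(w_top)

# Sanity check on synthetic 2-D points: the recovered 2-D configuration
# reproduces the original Euclidean distances.
rng = np.random.default_rng(2)
X = rng.normal(size=(10, 2))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = classical_mds(D, r=2)
D_hat = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
```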
Thus, the Euclidean distance can be employed on the dominant features of the available time series to calculate the necessary input distance matrix and discover similar underlying structures. For example, if only seasonal indices are extracted from the time series, the distance between the $r$th ($k$-attribute) seasonal index vector and the corresponding $q$th one is defined as follows:
$$d(r, q) = \sqrt{\sum_{t=1}^{d} \sum_{j=1}^{k} \left(s_{jrt} - s_{jqt}\right)^2},$$
with $k$ denoting the number of available attributes and $d$ the total number of distinct time points over a time period (for example, 12 for monthly measurements with yearly seasonality).
Regarding the choice between the two aforementioned dimensionality reduction techniques, it is of note that PCA can be adopted when single-attribute time series and their dominant features are under study. This is because PCA projects the data points onto the most advantageous subspace, retaining the majority of the information in the first PCs. On the other hand, MDS can be used when a larger number of attributes is available for each time series, in order to maintain, as much as possible, the relative distances between the points and to take into account the similarities or dissimilarities of the attributes themselves rather than in a projected space.

3.1.3. Clustering Algorithm

The final step of the first level of the proposed algorithm is to perform the cluster analysis. The K-means clustering algorithm was selected for this purpose. It is one of the most common clustering algorithms (also applicable to time series data) and aims to split the original data into groups. It is a centroid-based clustering algorithm that has its origins in signal processing. In K-means, clusters are represented by their center (the so-called centroid), which corresponds to the arithmetic mean of the data points assigned to the cluster, meaning that it is not necessarily a member of the dataset. Every observation in the dataset is assigned to a cluster by minimizing the within-cluster sum of squares [19].
In this work, the K-means clustering algorithm is performed by using the principal components selected as mentioned (i.e., selecting those explaining at least 85% of the variance of the initial data) concerning the univariate case, or the matrix with w columns whose rows give the coordinates of the points chosen to represent the dissimilarities (multivariate case). This procedure reveals the (first level) clusters based on the dominant features of the time series, i.e., the trend, if present, and the seasonal variation.
The w columns forming the matrix that is employed as input to the K-means algorithm are derived by first finding the appropriate number of dimensions of the lower-dimensional subspace. The optimal number of dimensions can be derived from the stress values. The stress function is a measure of the discrepancy between the original distances and the distances in the reduced space. According to [72], it is reasonable to choose a number of dimensions that makes the stress acceptably small and for which a further increase in the number of dimensions does not significantly reduce the stress. It is also mentioned that a stress value of 5% can be considered good optimization in terms of goodness of fit and 2.5% excellent. Another frequently used approach is the examination of the scree plot, which depicts the eigenvalues against the number of dimensions considered; the well-known "elbow" criterion is then used to find the appropriate number of dimensions. Usually, the lower-dimensional space ends up being 2-dimensional or 3-dimensional, and thus a matrix with coordinates in the represented space (dimension 1 in column 1, dimension 2 in column 2, etc.) is formed.
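The stress criterion can be sketched as follows. Kruskal's stress-1 formula is assumed here, and the reduced configurations are obtained by truncating a centered, SVD-rotated version of the points, which for Euclidean distances is equivalent to truncating the CMDS solution.

```python
import numpy as np

def stress1(D, D_hat):
    """Kruskal's stress-1 between original and reduced-space distances."""
    mask = ~np.eye(D.shape[0], dtype=bool)
    return np.sqrt(np.sum((D[mask] - D_hat[mask]) ** 2) / np.sum(D[mask] ** 2))

rng = np.random.default_rng(3)
X = rng.normal(size=(15, 3))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
stresses = []
for r in (1, 2, 3):
    Y = Xc @ Vt[:r].T                      # first r coordinates only
    D_hat = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    stresses.append(stress1(D, D_hat))
# Stress decreases as r grows; below 0.05 is conventionally "good", and
# r = 3 reproduces the 3-D distances exactly (numerically zero stress).
```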
Lastly, the optimal number of clusters is determined by the average silhouette criterion [73] in both the univariate and the multivariate case. In order to have a more robust choice, however, the Davies–Bouldin index was computed as well. The gap statistic [74] or the total within-cluster sum of squares could also be adopted as alternatives.
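A small scikit-learn sketch of this model-selection step on synthetic 2-D points (standing in for the reduced-space coordinates): the average silhouette is maximized while the Davies–Bouldin index is minimized, and agreement between the two supports the chosen K.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, davies_bouldin_score

# Three well-separated synthetic groups, so the criteria should agree on K=3.
rng = np.random.default_rng(4)
pts = np.vstack([rng.normal(loc=c, scale=0.3, size=(30, 2))
                 for c in ((0, 0), (4, 0), (0, 4))])

results = {}
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pts)
    results[k] = (silhouette_score(pts, labels),       # higher is better
                  davies_bouldin_score(pts, labels))   # lower is better

best_k_sil = max(results, key=lambda k: results[k][0])
best_k_db = min(results, key=lambda k: results[k][1])
```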

3.2. Secondary Features Clustering

The second level of clustering consists of two steps and relies on the use of the available raw data. Initially, an imputation technique is applied to fill in the missing values in the raw data, and afterwards, one more clustering level is applied to the clusters resulting from the first level. In the following subsections, details of these two steps are provided.

3.2.1. Imputation Technique

While no special handling or imputation of missing values is applied during the first stage of clustering, this does not hold for the second level. Missing-value imputation is crucial for the second-stage clustering because Dynamic Time Warping, which is employed at this level, does not inherently handle missing data. At this point, the "Seasonally Splitted Missing Value Imputation" (SSMVI) method is suggested for imputing the missing values in the raw time series. This method is included in the imputeTS package on CRAN, a package that specializes in time series imputation and offers several algorithms [75].
The SSMVI method takes into account the seasonality of the time series during imputation of the missing values. In particular, SSMVI initially divides the time series into seasons, which can be considered a preprocessing step, and then performs imputation separately for each of the resulting time series datasets. The missing values within each season are estimated using interpolation, i.e., by estimating the missing values based on their surrounding points. In this way, the unique characteristics of each season are preserved.
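The idea can be sketched in a few lines of Python. This mimics the seasonal-split logic only; the actual SSMVI implementation is the R function in imputeTS and may differ in details such as the interpolation variant and edge handling.

```python
import numpy as np

def seasplit_interpolate(x, period=12):
    """Split by season, then linearly interpolate within each sub-series."""
    x = np.asarray(x, dtype=float).copy()
    for s in range(period):
        sub = x[s::period]                 # all observations of one season
        miss = np.isnan(sub)
        if miss.any() and (~miss).any():
            pos = np.arange(len(sub))
            # np.interp clamps at the ends (carries the nearest observed
            # value) rather than extrapolating beyond the data.
            sub[miss] = np.interp(pos[miss], pos[~miss], sub[~miss])
            x[s::period] = sub
    return x

# Period 3 toy series with two gaps: each gap is filled only from values
# of the same season, preserving that season's own level.
x = np.array([10.0, 20.0, 30.0, 12.0, np.nan, 32.0, 14.0, 24.0, np.nan])
filled = seasplit_interpolate(x, period=3)
```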

3.2.2. Clustering Algorithm

Following the first level of clustering and the imputation step, the resulting time series objects are grouped according to the clusters identified in the first level. Then, agglomerative hierarchical cluster analysis of the time series objects is performed within every single first-level cluster, comprising the second stage of the method. The linkage criterion employed is the average agglomeration method.
Hierarchical clustering is another example of an unsupervised technique. It can be either agglomerative or divisive. The first variant treats each data point (in our case, a time series) as a separate cluster and iteratively merges clusters until all are merged into one. On the contrary, divisive hierarchical clustering begins with all data points in a single cluster and iteratively divides it until each data point is located in its own cluster [29]. The above general procedure can be represented by a dendrogram.
One of the main challenges in clustering time series is dealing with time shifts and distortions. The Euclidean distance, while simple and computationally efficient, does not account for such misalignments, meaning that similar time series with slight delays would be considered distant. Dynamic Time Warping (DTW) overcomes this issue, ensuring that patterns occurring at different time points can still be recognized as similar. For this reason, DTW was employed in the univariate scenario, as it provides robustness against time shifts and distortions, improving clustering accuracy in cases where the Euclidean distance may fail. Dynamic Time Warping is a commonly accepted technique for finding an optimal alignment between two given (time-dependent) sequences under certain restrictions [76]. As the name of the technique suggests, the sequences, say X = (x_1, …, x_N) and Y = (y_1, …, y_M), N, M ∈ ℕ, are warped to match each other. The optimal alignment is the warping path p between X and Y with minimal overall cost, where the total cost is the sum of local cost measures between the paired elements of the two sequences. A warping path is a sequence p = (p_1, …, p_L), with p_l = (n_l, m_l) ∈ [1 : N] × [1 : M] for l ∈ [1 : L], that satisfies certain conditions.
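The minimal overall cost of a warping path can be computed by dynamic programming. The sketch below (Python, for illustration only; the study uses the TSclust R package) accumulates, for each pair (x_n, y_m), the local cost |x_n − y_m| plus the cheapest of the three admissible predecessor cells:

```python
def dtw_distance(x, y):
    """Total cost of the optimal warping path between 1-D sequences x and y,
    with the absolute difference as the local cost measure."""
    n, m = len(x), len(y)
    inf = float("inf")
    # D[i][j] = minimal cumulative cost of aligning x[:i] with y[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # admissible steps: diagonal match, and warps along either axis
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]
```

Note how a series and its slightly stretched copy, e.g. `[1, 2, 3]` and `[1, 2, 2, 3]`, obtain a DTW cost of zero, whereas the Euclidean distance is not even defined for sequences of unequal length.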
A key limitation of standard DTW is its computational cost, which makes it inefficient for long sequences. In this study, DTW with the Euclidean distance as the local cost function is applied, using the “DTWARP” method from the TSclust package on CRAN. Alternative modifications exist (see, for example, [77,78,79]) which can reduce the computational burden and improve alignment in certain cases. While these alternatives are not implemented in this study, future research could explore their potential benefits.
In the multivariate case, the choice of distance metric becomes more complex because multiple attributes (e.g., precipitation and temperature) must be compared simultaneously. Multivariate Dynamic Time Warping (mDTW) is employed, as it compares multivariate time series by aligning all dimensions jointly. However, it comes with a higher computational cost, making it less practical for large datasets. If computational constraints allow, mDTW can be implemented using the dtwclust package on CRAN, which extends the functionality of proxy::dist by providing custom distance functions, including DTW for time series. Applying DTW independently to each attribute and summing the resulting costs is a possible alternative; however, this approach may discard inter-attribute dependencies without reducing the overall computational complexity. Lastly, the Euclidean timed and spaced method selected in [80] could also serve as an alternative.
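The "dependent" variant of mDTW differs from the univariate recurrence only in the local cost: each time point is a vector of attributes (e.g., [precipitation, temperature]), and the local cost is the Euclidean norm across all attributes, so the dimensions are warped jointly. A minimal Python sketch (the function name is ours):

```python
import math

def mdtw_distance(X, Y):
    """Dependent multivariate DTW: X and Y are sequences of equal-length
    vectors; the local cost is the Euclidean norm across attributes, so
    the attributes share one common warping path."""
    n, m = len(X), len(Y)
    inf = float("inf")
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(X[i - 1], Y[j - 1])  # joint local distance
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]
```

Because all attributes share one warping path, a delayed precipitation peak and the accompanying temperature dip are shifted together, which is exactly the inter-attribute dependency lost when DTW is applied to each attribute independently.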
As the final step of the clustering, an appropriate cutting point of the produced dendrogram must be found to determine the number of clusters. The average silhouette width method is again employed to identify the optimal number of (sub-)clusters within every first-level cluster. These sub-clusters, obtained for each first-level cluster, are the final groups identified by the proposed method.
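The cut-selection step can be sketched as follows, assuming a precomputed DTW dissimilarity matrix for the stations of one first-level cluster. SciPy and scikit-learn stand in here for the R tooling actually used, and the search range `k_max` is an arbitrary choice made for the sketch.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import silhouette_score

def best_cut(dmat, k_max=6):
    """Cut an average-linkage dendrogram at the number of clusters that
    maximizes the average silhouette width on the precomputed matrix."""
    Z = linkage(squareform(dmat), method="average")  # average agglomeration
    best_k, best_s = None, -1.0
    for k in range(2, min(k_max, len(dmat) - 1) + 1):
        labels = fcluster(Z, t=k, criterion="maxclust")
        if len(set(labels)) < 2:
            continue
        s = silhouette_score(dmat, labels, metric="precomputed")
        if s > best_s:
            best_k, best_s = k, s
    return best_k, best_s
```

Running the search once per first-level cluster yields the per-cluster optimal number of sub-clusters, mirroring the procedure described above.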

4. Application

The proposed two-level clustering method is applied to monthly time series (2006–2020) covering the entire Greek territory (see Section 4.2) for the purpose of identifying homogeneous (meteorological) regions in this area. In the simple univariate case, the method is applied to the available precipitation time series and to the mean temperature time series, the latter derived as the average of the two extreme (i.e., minimum and maximum) registered temperatures of each station. In the multivariate case, both of the aforementioned characteristics are employed at the same time.

4.1. Data Description

The Institute for Environmental Research and Sustainable Development of the National Observatory of Athens has developed a network of more than 500 automated weather stations that have operated across Greece during the last 15–20 years [81]. In this study, we use the registered data from this network, preprocessed as thoroughly described in Appendix A in order to ensure adequate data quality and suitability for the proposed analysis. Summary statistics for the final precipitation and mean temperature data are presented in Table 1. Further, it should be mentioned that a preliminary analysis using visual inspection and Mann–Kendall tests confirmed no significant trends in the 15-year dataset; the trend models discussed in Section 3.1.1 are therefore omitted.

4.2. Univariate Case

For the univariate case, the proposed methodology is applied twice: initially, to the available precipitation data and then to the mean temperature calculated by averaging the two extreme values of the temperature. In accordance with Section 3, a clustering on the dominant features is applied first and, afterwards, another clustering procedure is employed, based on the secondary features and the clusters of the first-level clustering.

4.2.1. First-Level Clustering

To proceed to the first level of clustering, the seasonal indices are first computed for each station by averaging the available values of the corresponding month over all years. Principal component analysis is then used to reduce the dimensionality of the seasonal indices after standardizing them. The first three principal components account for 87.888% of the total variance in the precipitation data, while the first two principal components account for 96.366% of the total variance in the temperature case; these components are therefore retained to perform the first-level clustering based on the K-means algorithm.
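The first-level pipeline (standardization, PCA, K-means) can be sketched as follows on a synthetic stations-by-months matrix of seasonal indices. The data and the retained number of components below are illustrative only; in the study, the number of components was chosen from the explained-variance ratios reported above.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic seasonal indices: 40 "stations" x 12 monthly indices,
# built from two artificial climate profiles for illustration.
rng = np.random.default_rng(0)
seasonal = np.vstack([rng.normal(0, 1, (20, 12)),   # profile A stations
                      rng.normal(4, 1, (20, 12))])  # profile B stations

Z = StandardScaler().fit_transform(seasonal)        # standardize indices
pca = PCA(n_components=3).fit(Z)                    # dominant features
scores = pca.transform(Z)                           # reduced coordinates
labels = KMeans(n_clusters=2, n_init=10,
                random_state=0).fit_predict(scores)
print(pca.explained_variance_ratio_.sum(), np.bincount(labels))
```

On real seasonal indices, the cumulative explained-variance ratio guides how many components to retain, and the average silhouette width (as in Section 3) selects the number of K-means clusters.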
Considering the average silhouette width, the optimal number of clusters is found to be two for both cases (Figure 1 and Figure 2); the Davies–Bouldin index leads to the same optimal number. In the first case, i.e., that of the precipitation data, the first cluster consists of 313 stations, and the remaining 57 stations are assigned to the second cluster. Figure 3 presents the mean seasonal indices of these two clusters for the precipitation case. From the figure, it is clear that the first cluster consists of areas with lower rainfall in all months. For further insight into the clusters, the boxplots of the monthly seasonal indices are presented in Figure 4.
In the mean temperature case, the first cluster consists of 96 stations, while the second consists of 274 stations. Figure 5 presents the corresponding mean seasonal indices of these two groups, while Figure 6 depicts the boxplots of the monthly seasonal indices. In addition, for both cases, plots of the average silhouette width per cluster for this level of clustering are provided in Appendix C.2.
The first level of clustering, using both meteorological features, identifies two climatic regions determined by orography and the passage of depressions from the west: the Pindos mountain range, extending from the NW edge of the country to the island of Crete, divides the country into two major hydroclimatic areas, the water-rich western one and the semi-arid eastern one [82]. More specifically, cluster 1 based on precipitation includes stations of medium to low precipitation rates in Eastern and Southern Greece, with mean annual precipitation between 219.16 and 861.29 mm and a mean altitude of 126.52 m above sea level. On the other hand, cluster 2 includes stations of high to very high precipitation rates in Northern Greece, with mean annual precipitation between 924.42 and 2494.14 mm and a mean altitude of 741.14 m above sea level. It is worth mentioning that 84.59% of all stations are included in the “dry” cluster 1, while the remaining 15.41% are embodied within the “wet” cluster 2.
It is of note that, while these clusters align more or less geographically with the Pindos mountain range, this division emerges solely from the temporal patterns of the data, such as contrasting winter precipitation peaks (see for example the 3rd subgroup with 181.33 mm in November vs. the 1st subgroup with 75.48 mm). Spatial attributes (e.g., elevation or proximity) are not inputs to the algorithm.
Cluster 2, based on temperature, includes circa 74% of stations with high mean annual temperature (18.5 °C), ranging between 15.84 °C and 22.25 °C and a mean altitude of 144 m above sea level. On the other hand, cluster 1 includes stations of low mean annual temperature (13.1 °C) ranging between 3.84 °C and 15.77 °C and a mean altitude of 759.1 m above sea level, located mainly in mountainous areas.

4.2.2. Second-Level Clustering

Regarding the second-level clustering, the missing values of each time series are imputed by splitting the time series into seasons and then performing interpolation separately for each resulting sub-series. The percentage of missing values for precipitation and mean temperature ranges from 0% to 85%, with a mean of 45%, across the available stations. The in-cluster dissimilarity matrices are then computed (based on the Dynamic Time Warping distance computed for each pair of objects) for each cluster determined by the first-level clustering. Using hierarchical cluster analysis, each cluster is split into sub-clusters, and the optimal number of sub-clusters is again determined by the silhouette method.
Each first-level cluster is again split into two sub-clusters (see Appendix C.1). More specifically, in the case of precipitation, the stations of the second main cluster are split into groups of 50 and 7 stations, while the stations of the first main cluster are divided into groups of 291 and 22. In the mean temperature case, the stations of the first main cluster are split into 90 and 6 stations, while those of the second main cluster are divided into sub-clusters of 270 and 4 stations. Indicatively, for this level of clustering and for the second case, plots of the average silhouette width per cluster are provided in Appendix C.2.
A rain gauge provides the rainfall depth only at the point where the station is located, so areal rainfall must be estimated from such point measurements using surface integration methods. In this study, the Thiessen polygons method [83], implemented in ArcGIS Pro 2.5, is applied to divide the study area into regions (polygons) in which every point is closer to the polygon's associated station than to any other. Given the set of weather stations, the Thiessen polygons delineate the area most likely influenced by each station. This method is still popular and widely applied because of its high calculation accuracy, fast computation, and simplicity, requiring only the area data of the sample points to estimate average rainfall (and other hydrometeorological factors, e.g., solar radiation) over a bounded area [84,85].
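Because Thiessen membership is simply nearest-station assignment, the areal estimate can be sketched with a nearest-neighbour query. Python/SciPy is shown here purely for illustration of the construction; the study used ArcGIS Pro, and the station coordinates and rainfall values below are made up.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical station locations (x, y) and their mean annual rainfall (mm)
stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
rain = np.array([600.0, 900.0, 1400.0])

tree = cKDTree(stations)
# Discretize the study area; each grid cell belongs to the Thiessen
# polygon of its nearest station and inherits that station's value.
xs, ys = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 8, 50))
grid = np.column_stack([xs.ravel(), ys.ravel()])
_, nearest = tree.query(grid)          # Thiessen membership per cell
areal_mean = rain[nearest].mean()      # area-weighted rainfall estimate
```

On a regular grid, averaging the inherited values is equivalent to weighting each station by its Thiessen polygon area, which is the surface-integration idea behind the method.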
Figure 7 depicts the four sub-cluster regions (using Thiessen polygons) for the precipitation time series, and Figure 8 the four sub-cluster regions for the mean temperature time series. The second-level clustering divides each of the above-mentioned groups into two subgroups, resulting in four final groups of similar mean annual precipitation rates. More precisely, the previous “dry” cluster 1 covers almost all of Eastern Greece and most of the coastal and low-altitude regions of Western Greece. It is split into (a) a “very dry” cluster (including 92.97% of the stations of the first cluster) with a mean annual precipitation of 589.68 mm, including among others the Region of Attica (the densely populated metropolitan region of Athens) as well as Thessaloniki, the second biggest city of Greece, and (b) a “less dry” cluster (including 7.03% of the stations) with a mean annual precipitation of 792.65 mm, including regions located in Crete, some islands in the Aegean Sea (e.g., Chios, Ikaria, Lesvos, Thassos, Rhodos, and Kastelorizo), and a few regions in continental Greece (e.g., Kalamata in Messinia, Geraki in Lakonia, and Tatoi in Attica). Similarly, the previous “wet” cluster, with mainly mountainous areas, is divided into (a) a “wet” cluster, including 87.72% of the stations, with a mean annual precipitation of 1342.27 mm, covering regions in Western Greece around the mountainous systems extending from north to south (Pindus, Panachaiko, Erymanthos, Chelmos, Taygetos, etc.), and (b) a “very wet” cluster, including the remaining 12.28% of the stations, with a mean annual precipitation of 2099.75 mm, including the rainiest areas (>1845 mm) in Northern Greece, located in Epirus (i.e., Ioannina, Thesprotia, and Arta) and a small area of Evrytania.
Further, the second-level clustering divides each of the above-mentioned temperature groups into two subgroups, resulting in four final groups of similar mean annual temperature. More precisely, the previous “warm” cluster 2 is further split into (a) a “very warm” cluster (including 72.97% of all stations) with a mean annual temperature of 18.51 °C, covering most of Eastern Greece, all islands, and a wide coastal area of Western Greece, and (b) a “less warm” cluster (including 1.08% of all stations) with a mean annual temperature of 16.91 °C, comprising only four stations in Achaia, Chania, Chios, and Veroia. Similarly, the previous “cold” cluster is divided into (a) a “cold” cluster, including 24.32% of the stations, with a mean annual temperature of 13.57 °C, with most regions located in Northern Greece (e.g., Alexandroupoli, Ioannina, and Metsovo) and in mountainous areas (e.g., Panachaiko), and (b) a “very cold” cluster, including the remaining 1.62% of the stations, with a mean annual temperature of 6.09 °C and altitudes above 1500 m (e.g., Kaimaktsalan and Parnassos). The above-mentioned insights for both scenarios are summarized for comparison purposes in Table 2 and Table 3, whereas Table 4 and Table 5 provide descriptive statistics for the four regions identified through the second-level clustering. This stage thus divides each of the two initial clusters into two sub-clusters, resulting in four distinct groups per scenario and clearly highlighting the unique characteristics of each cluster.

4.3. Multivariate Case

The same meteorological stations employed in the previous section are considered once again. This time, however, the precipitation and mean temperature levels are considered simultaneously.

4.3.1. First-Level Clustering

Following the procedure described in Section 3, for the first level of clustering, the seasonal indices of both the precipitation levels and the mean temperatures are calculated. Next, multidimensional scaling is applied to these seasonal indices, with the aid of the distance matrix given by relation (9). After executing the K-means algorithm and computing the Davies–Bouldin index, the optimal number of clusters is found to be two (Figure 9). More precisely, the first cluster contains 296 stations, while the second consists of 74 stations. In Figure 10, the mean seasonal indices of both characteristics are presented for these two clusters; the upper plots concern the mean precipitation levels, while the lower ones concern the mean of the average temperature levels. As an illustrative example, a plot of the average silhouette width per cluster for this clustering level is provided in Appendix C.2.
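Classic (Torgerson) multidimensional scaling recovers coordinates from a distance matrix by double centering and eigendecomposition. The sketch below illustrates that computation; the actual combined distance matrix of relation (9) is not reproduced here, so any symmetric dissimilarity matrix with a zero diagonal can be passed in.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS: embed n objects with pairwise dissimilarities D
    into k dimensions via double centering and eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]       # top-k eigenvalues
    # coordinates: eigenvectors scaled by sqrt of (non-negative) eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))
```

The resulting low-dimensional coordinates can then be fed to K-means exactly as the PCA scores are in the univariate case.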
Generally, the first-level clustering identifies two climatic regions, mainly determined by the great mountain chains along the central part extending from the north to the south and other mountainous bodies. The two groups embody, on one side, the dry Eastern Greece, including also the dry metropolitan area of Athens (the Attica Region, hosting more than half of the Greek population), and, on the other side, the wet Northern and Western Greece.

4.3.2. Second-Level Clustering

In accordance with the procedure of the univariate case, the “Seasonally Splitted Missing Value Imputation” method is applied to impute the missing values in both sets of raw time series, i.e., the precipitation series as well as the mean temperature series of each available meteorological station. The resulting time series objects are then grouped according to the clusters obtained from the first level of clustering. To proceed to the second level of clustering (i.e., hierarchical clustering), the Euclidean timed and spaced distance is calculated.
The average silhouette width criterion revealed that the optimal number of sub-clusters is 9 for the first cluster of the first-level clustering and 10 for the second. Due to the large number of sub-clusters, the tables with the descriptive statistics are omitted from the main text but are available in Appendix C.1. The resulting sub-clusters are depicted in Figure 11 and Figure 12.
The second-level clustering identifies nineteen sub-clusters driven by multivariate temporal patterns (e.g., synchronized precipitation–temperature cycles). These clusters are consistent with the great spatial weather variety of the country. Situated at the southern end of the Balkan Peninsula, Greece presents a complex and fragmented topography with strong relief, resulting in high geomorphological diversity [86]. More precisely, the western side of the country is mainly mountainous, including only a few plains, while the eastern side of the mainland presents the opposite topography, with many plains and subdued relief near the coastline. This complex topography, both horizontal (a long coastline and many islands) and vertical (many mountain ranges and individual mountains up to 2904 m), contributes to the creation of a mosaic of climates in the country. While this results in the well-known spatial weather diversity of Greece, the identified sub-clusters reflect temporal coherence, not spatial intuition. For example, high-altitude stations in Crete (in the south) are grouped with northern mountainous areas due to shared winter rainfall spikes, despite their geographical separation.
Moving from sub-cluster 1 to sub-cluster 9 (of the “drier” cluster 1), the mean annual precipitation generally increases and the mean annual temperature decreases. The same holds for sub-clusters 10–19 of the second main cluster. Nevertheless, some sub-clusters present a much more complex pattern, including stations that combine high precipitation (relative to the rest of sub-clusters 1–9) with a low temperature (e.g., sub-cluster 5) or the opposite (e.g., sub-cluster 11). The regions of sub-cluster 1 have a mean annual precipitation of 570.33 mm and a mean annual temperature of 18.58 °C. The regions of sub-clusters 2 and 3 have progressively higher mean annual precipitation rates, i.e., 582.3 mm and 640.12 mm, and lower temperatures, i.e., 15.27 °C and 14.13 °C, respectively. The regions of sub-cluster 4 have a lower mean annual precipitation (463.72 mm) combined with a higher mean annual temperature of 16.83 °C. On the contrary, the regions of sub-cluster 5 have a significantly higher mean annual precipitation (774.27 mm) and a much lower mean annual temperature of 6.37 °C. The regions of sub-cluster 6 follow the general pattern, with a mean annual precipitation of 491.71 mm and a mean annual temperature of 16.83 °C. Sub-cluster 7 is characterized by a combination of moderate mean annual precipitation (695.42 mm) and a very low mean annual temperature (6.37 °C); it includes only one station (at 1515 m altitude) at the Lailias mountains, situated in the north of Greece and belonging to the mountain range of Eastern Macedonia.
The next sub-clusters follow the general pattern of increasing precipitation and variable temperature, with some exceptions, in compliance with the complex morphology of Greece. Table 6 presents the mean annual precipitation along with the mean annual temperature for each sub-cluster. Sub-cluster 19 includes only one station (at 1240 m altitude) at Plikati, Ioannina, in Western Greece. Sub-cluster 12 represents regions with very high annual precipitation (1758 mm), mainly in Midwestern Greece. A few regions of Eastern Greece with high precipitation rates (which have also been identified by the National Observatory of Athens in its annual lists of the meteorological stations with the highest rainfall) are included in sub-clusters 11, 14, and 17. Characteristic examples are Setta in Evoia and Zagora in Pilio, the latter having experienced devastating flooding following the extreme rainfall associated with storm Daniel [87].
Overall, the 19 distinct sub-clusters, identified entirely from the temporal patterns of precipitation and temperature and depicted in Figure 11 and Figure 12, are broadly in accordance with the few previous studies (e.g., [86]), allowing for differences in time periods, scope, and methodological procedures.

5. Conclusions

Considering the first case of univariate time series, meaning the case of precipitation time series, the proposed method identified four regions with similar precipitation characteristics and patterns in Greece. These clusters could result from the division of Greece by the Pindos Mountain range, where regions located along the mountain passage (i.e., the most mountainous) receive higher precipitation levels. A similar result was derived from the mean temperature time series, where once again, four regions of Greece were identified as having similar mean temperatures. Regions with higher mean altitudes consistently maintained lower temperatures compared to those at lower altitudes or near the sea.
In the multivariate scenario, the proposed method identified 19 distinct sub-clusters based entirely on temporal patterns in precipitation and temperature. While some clusters align with known geographical features (e.g., the Pindos mountain range), their formation was strictly data driven through seasonal indices and DTW-aligned raw series, prioritizing temporal patterns over spatial intuition. This approach proved particularly effective in Greece’s complex topography, where proximate areas often exhibit markedly different meteorological behaviors. The method revealed meaningful, data-driven subgroups that transcend geography (e.g., high-altitude stations in Crete clustering with northern mountainous areas), demonstrating that temporal patterns can capture climatic relationships that spatial correlations alone might miss. While spatial information could enhance interpretability in some contexts, our results show that temporal features can effectively reveal Greece’s diverse climatic regions.
Finally, it should be taken into consideration that there are parts of Greece where the coverage of automated weather stations is insufficient (e.g., East Peloponnese, and North or Central Greece). For that reason, along with the complexity of topography that controls both temperature and precipitation, any conclusions should be extracted with caution.
The proposed two-level clustering methodology, however, offers both theoretical and practical implications. Theoretically, it contributes to the study of time series clustering by addressing challenges associated with missing data, unequal series lengths, and seasonal structures, particularly in spatiotemporal environmental data. Practically, the framework provides a data-driven approach to identify meteorologically homogeneous regions. This can support national and regional stakeholders—such as environmental agencies, water resource planners, and climate policy advisors—in identifying climate-sensitive zones for more effective infrastructure planning, risk assessment, and resource allocation. Furthermore, the general structure of the methodology ensures that it can be easily applied to other regions or climatic variables, making it a versatile tool for broader geospatial applications under varying environmental contexts.
As in any methodology, limitations exist and are presented together with possible suggestions that could also be part of future work. The method assumes a common seasonal time period across all time series, which may not hold for geographically diverse meteorological stations with varying local climates. For cases where a common seasonality does not exist, adaptive seasonal decomposition techniques (e.g., STL or seasonal autoencoder models) could be an alternative. Extensive or non-random missingness of data could, in general, distort the original temporal patterns when imputation is applied. In this case, however, the original temporal and seasonal patterns are derived from the first-level clustering, where no imputation is performed. Model-based or probabilistic imputation techniques, such as Gaussian Process imputation, could be an option.
As already mentioned, the clustering methods in this work were initially selected for their interpretability, computational efficiency, and widespread use in environmental time series clustering, in order to establish a robust baseline for future method comparisons. A point of interest, however, could be a comparison of the proposed methodology with more flexible clustering techniques (e.g., DBSCAN) or deep learning-based clustering, options that capture complex relationships. Other comparisons could involve basic imputation approaches (KNN imputation, pattern-based DTW matching) or other state-of-the-art methods such as GAIN, DTW-based LSTMs, etc. A promising direction for future research is to utilize temporal modeling techniques, such as HMMs or RNNs, that might provide a richer representation of the dynamic evolution of weather patterns. It should be noted that the focus on dominant seasonal patterns was driven by the data's high proportion of missing values and the prominent seasonal dynamics of meteorological data. Incorporating clustering based on extreme or recurrent events would nevertheless add important insights.
Further, an interesting direction for future work is to explore nonlinear dimensionality reduction techniques (e.g., t-SNE and UMAP) and compare the results. Our preliminary applications of t-SNE (see Appendix C) produced clusters inconsistent with the known climatic regions of Greece; this discrepancy is likely due to the sensitivity of these methods to high dimensionality and parameter tuning, and is thus suggested for further study. In addition, it is acknowledged that the present study was conducted on a meteorological dataset from Greece and that its generalizability to other geographical regions and climatic conditions has yet to be empirically demonstrated. Nevertheless, the proposed methodology has been developed within a broadly applicable framework, designed to be easily transferable to datasets from other locations. Thus, it is intended to broaden its application to other climatic variables, regions, or climatic zones of global datasets. To enhance usability, an R-Shiny dashboard is planned for future work, allowing users to easily explore the clustering results, upload new data, or dynamically view seasonal and spatial patterns.

Author Contributions

Conceptualization, E.S., E.S.B. and P.E.; methodology, E.S., E.S.B. and P.E.; software, E.S.; validation, E.S., E.S.B. and P.E.; formal analysis, E.S.; resources, E.S.B.; data curation, E.S. and E.S.B.; writing—original draft preparation, E.S. and E.S.B.; writing—review and editing, E.S., E.S.B. and P.E.; visualization, E.S. and E.S.B.; supervision, P.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original data presented in the study are openly available on GitHub at https://github.com/EkateriniSk/Data (accessed on 24 March 2025).

Acknowledgments

The authors thank Konstantinos Lagouvardos for providing the data from the Institute for Environmental Research and Sustainable Development of the National Observatory of Athens.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Data Preprocessing

Data preprocessing is, as in most cases, a necessary part of this work in order to obtain precise information and a suitable dataset to work with. Specifically, the data were received as Excel sheets covering the period 2006–2021, with each sheet referring to one year and containing the monthly rainfall together with the minimum and maximum monthly temperatures for every station in operation that year. Station names are given both in Greek and in Latin characters. However, due to the resemblance of some letters between the two alphabets and the volume of measurements recorded every day, various inconsistencies appear in these sheets: for instance, Greek letters misplaced in Latin words and vice versa, extra spaces at the start or end of station names, spelling errors, etc. Further, a station might be referred to in different ways over the years.
All the above-mentioned points hinder the automated retrieval of the available data for each station and each measurement, so unifying the station names is of crucial importance in order to proceed with the analysis. To this end, a dictionary is created from all the different station names found across the sheets, with the aim that it no longer contains these inconsistencies. To create the dictionary, spaces at the beginning and end of each record are eliminated, special characters (-, (, ), *) are removed, longer runs of spaces are replaced by single spaces, and accent marks are ignored. Moreover, all Latin letters in station names written in Greek are replaced by the corresponding Greek letters and conversely. Finally, the letters are converted to uppercase, duplicates are dropped, and the stations are sorted by the Greek alphabet. During this procedure, the 1007 seemingly different stations that were derived automatically are reduced to 836 unique entries or, more precisely, 836 uniquely written but not necessarily distinct stations.
However, different formats for the same station still remain. That is, over the years a station may be named (‘name_in_Greek, name_in_English’) or (‘name_in_Greek_name_of_district, name_in_English’), etc. For that reason, only one format is selected (whenever available), specifically (‘name_in_Greek_name_of_district, name_in_English’), leading to a remainder of 539 stations.
After this procedure, a dictionary of 539 different stations is obtained. This dictionary is later used to unify all the different formats of each station across the sheets and thus obtain all measurements over the years. For that reason, fuzzy string matching is employed, and all station names in the data sheets are compared with those in the dictionary. When a high match is obtained, that is, a similarity greater than 92%, the station name in the raw data is replaced with its dictionary format.
In some cases, however, the required similarity is not reached even though the stations are in fact the same. This can happen because (‘name_in_Greek,name_in_English’) is sometimes much shorter than the selected format (‘name_in_Greek_name_of_district,name_in_English’), or because of inconsistencies in the initial raw data (e.g., misplaced letters). Whenever a station name fails to reach the threshold against every entry of the dictionary, a second search is conducted. For this search, a similar dictionary is created from the 836 instances obtained after dropping duplicates but before unifying the formats, with a second column holding the corresponding unified format (‘name_in_Greek_name_of_district,name_in_English’) decided previously. The unmatched names are compared against the first column, and when a high match exists (a similarity greater than 90%), the station in the sheets takes the unified format from the second column. For instance, if a station in the Excel sheets closely matches (ΦΑΛΗΡΟ, FALIRO) in the first column of the second dictionary, its name is replaced by (ΦΑΛΗΡΟ ΑΤΤΙΚΗΣ, FALIRO) from the second column, because this is the format of interest.
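The two-pass matching described above can be sketched as follows. The paper does not specify the similarity measure used, so this illustrative sketch substitutes `difflib.SequenceMatcher` from the Python standard library, scaled to a 0–100 percentage; the function names and dictionary structures are assumptions.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # difflib's ratio scaled to 0-100, standing in for the (unspecified)
    # fuzzy similarity of the paper
    return SequenceMatcher(None, a, b).ratio() * 100.0

def match_station(raw, unified_names, alias_to_unified,
                  first_threshold=92.0, second_threshold=90.0):
    """Return the unified format for a raw station name, or None if unmatched."""
    # First pass: compare against the 539 unified dictionary entries.
    best = max(unified_names, key=lambda name: similarity(raw, name))
    if similarity(raw, best) > first_threshold:
        return best
    # Second pass: compare against the 836 pre-unification aliases and
    # map the best alias to its unified format.
    alias = max(alias_to_unified, key=lambda name: similarity(raw, name))
    if similarity(raw, alias) > second_threshold:
        return alias_to_unified[alias]
    return None  # left for manual inspection
```

In the ΦΑΛΗΡΟ example, the short format fails the 92% first pass against the longer district format but matches an alias exactly in the second pass, which returns the unified name.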
Eventually, after the station names in the Excel sheets have been replaced with those from the dictionaries, each sheet is split into three partitions, one per available measurement. The available data for each station and measurement are then collected automatically, forming a time series. Of course, not all of the 539 stations derived in the preprocessing provide information on both precipitation levels and the temperature extremes, so each of the three partitions ends up with a different number of time series: 425 stations (and hence an analogous number of time series) for precipitation, and 429 and 430 for minimum and maximum temperature, respectively. It should be noted, however, that not all of these stations have operated continuously over the years, nor do they all appear in the observatory's current station list, so coordinates, altitude, and ID are not available for every station.
Accordingly, a final cleaning step is necessary. Stations with no available coordinates are excluded. Also excluded are stations that did not operate in the last year, stations with missing data for more than one and a half consecutive years, and stations lacking monthly information on any of the three meteorological characteristics considered, i.e., precipitation, minimum temperature, and maximum temperature. The data of the resulting 370 stations are employed for this work.
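A minimal sketch of these exclusion rules, assuming monthly series stored as lists with `None` for missing months; the data layout, the function names, and the encoding of "one and a half consecutive years" as 18 monthly values are illustrative assumptions:

```python
def longest_gap(series):
    """Length of the longest run of consecutive missing (None) monthly values."""
    longest = run = 0
    for v in series:
        run = run + 1 if v is None else 0
        longest = max(longest, run)
    return longest

def keep_station(station, max_gap_months=18):
    # station: dict with 'coords' and one monthly series per variable,
    # ordered in time and ending at the last month of the study period
    if station.get("coords") is None:            # no coordinate information
        return False
    for key in ("precip", "tmin", "tmax"):       # all three variables required
        s = station.get(key)
        if not s:
            return False
        if all(v is None for v in s[-12:]):      # no operation in the last year
            return False
        if longest_gap(s) > max_gap_months:      # >1.5 consecutive years missing
            return False
    return True
```

Filtering the station dictionaries with `keep_station` reproduces the kind of final selection described above, leaving only stations with coordinates, recent operation, and tolerable gaps in all three variables.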

Appendix B. Tables

Table A1. Descriptive statistics for the two identified initial cluster regions for the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Mean | SD | Median | Min | Max
1st group (members: 296)
Jan | 86.71 | 36.21 | 76.97 | 27.93 | 197.32 | 7.75 | 3.79 | 8.79 | −5.61 | 13.74
Feb | 61.24 | 27.44 | 58.67 | 6.07 | 142.10 | 9.71 | 3.15 | 10.31 | −4.50 | 15.16
Mar | 56.88 | 19.43 | 52.80 | 23.25 | 126.24 | 12.08 | 2.75 | 12.60 | −1.64 | 16.74
Apr | 34.24 | 15.40 | 32.29 | 3.60 | 110.33 | 15.85 | 2.65 | 16.38 | 2.30 | 20.31
May | 28.13 | 18.65 | 23.65 | 0.30 | 100.75 | 20.72 | 2.75 | 21.23 | 6.28 | 25.80
Jun | 29.14 | 21.24 | 26.73 | 0.27 | 120.95 | 24.28 | 2.72 | 25.03 | 10.40 | 29.24
Jul | 17.58 | 16.46 | 11.92 | 0.00 | 78.63 | 26.92 | 2.75 | 27.56 | 12.80 | 31.89
Aug | 13.40 | 14.58 | 7.43 | 0.00 | 69.25 | 26.84 | 2.93 | 27.48 | 12.28 | 32.27
Sep | 35.18 | 23.67 | 32.50 | 0.00 | 196.55 | 23.05 | 3.11 | 23.91 | 8.93 | 28.91
Oct | 55.60 | 23.31 | 52.86 | 12.60 | 147.40 | 18.23 | 3.37 | 19.05 | 4.59 | 24.68
Nov | 72.26 | 26.90 | 69.03 | 20.40 | 162.57 | 14.21 | 3.31 | 14.96 | 0.54 | 20.38
Dec | 90.50 | 34.06 | 83.78 | 28.60 | 195.24 | 9.69 | 3.49 | 10.55 | −3.63 | 15.76
2nd group (members: 74)
Jan | 224.24 | 66.95 | 210.90 | 131.29 | 398.42 | 6.17 | 3.00 | 6.30 | −0.29 | 10.80
Feb | 163.49 | 66.46 | 147.52 | 75.67 | 449.70 | 8.45 | 2.44 | 8.45 | 2.06 | 12.82
Mar | 127.74 | 41.58 | 117.59 | 72.05 | 241.55 | 10.63 | 2.38 | 10.56 | 3.21 | 14.97
Apr | 64.24 | 28.00 | 59.81 | 21.60 | 142.87 | 14.51 | 2.38 | 14.74 | 6.71 | 18.26
May | 58.77 | 33.67 | 50.79 | 8.13 | 136.52 | 18.33 | 2.63 | 18.79 | 9.67 | 22.34
Jun | 43.22 | 27.70 | 40.36 | 3.13 | 121.46 | 22.03 | 2.66 | 22.46 | 11.88 | 26.39
Jul | 23.10 | 17.25 | 21.22 | 0.00 | 58.02 | 24.72 | 2.40 | 24.83 | 18.58 | 29.15
Aug | 24.18 | 17.26 | 23.77 | 0.00 | 68.24 | 24.88 | 2.51 | 24.97 | 19.06 | 29.35
Sep | 75.39 | 31.25 | 75.03 | 9.53 | 158.03 | 20.95 | 2.75 | 21.13 | 12.86 | 25.20
Oct | 129.37 | 40.97 | 126.33 | 40.93 | 243.12 | 16.58 | 2.63 | 16.75 | 9.89 | 20.56
Nov | 190.19 | 61.10 | 180.93 | 88.66 | 367.08 | 12.55 | 2.57 | 12.70 | 6.30 | 16.94
Dec | 212.70 | 71.62 | 193.33 | 131.57 | 463.56 | 8.02 | 2.83 | 8.21 | 1.92 | 12.45
Table A2. Descriptive statistics of the 1st sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
1st subgroup (members: 214)Jan94.0636.3886.8827.93197.32Jan9.392.469.82−0.8213.74
Feb68.7525.9065.8917.40142.10Feb10.972.0811.352.0515.16
Mar57.3120.7251.7823.25126.24Mar13.051.9013.464.7716.74
Apr29.8113.4627.033.6066.87Apr16.801.8317.189.1220.31
May20.8912.2918.560.3054.43May21.621.8021.9113.0625.80
Jun20.9415.9717.850.2765.00Jun25.071.8325.3516.0429.24
Jul10.4310.137.460.0045.60Jul27.771.8427.9819.4131.89
Aug7.399.374.360.0069.25Aug27.712.0027.9818.5532.27
Sep32.1522.1330.300.00132.97Sep24.251.9224.6815.2128.91
Oct53.7724.4150.4112.60131.88Oct19.612.1919.9710.3424.68
Nov75.3527.0672.0028.31162.57Nov15.582.2415.907.1620.38
Dec99.4831.9893.1447.00195.24Dec11.122.4111.502.1315.76
Table A3. Descriptive statistics of the 2nd sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
2nd subgroup (members: 23)Jan75.4935.0759.0540.15161.00Jan4.072.994.55−0.659.22
Feb32.4320.7026.736.0787.07Feb6.992.578.071.6011.00
Mar47.4015.6945.8023.8096.80Mar10.322.1410.984.9513.30
Apr48.0020.0445.7316.60110.33Apr13.991.9014.459.2116.79
May39.4119.5030.3517.6582.53May19.372.4120.5813.3422.49
Jun59.2322.5955.8530.75120.95Jun22.892.5323.7716.2726.70
Jul43.6019.9741.955.3578.63Jul25.752.5826.0019.0128.95
Aug33.3618.2927.043.8564.75Aug25.422.9425.9819.6829.82
Sep29.7115.4224.7513.8082.90Sep20.402.8120.5914.2125.15
Oct51.4116.2844.5233.7299.25Oct15.332.8515.508.8120.29
Nov57.8526.0251.6426.10140.64Nov11.742.3511.737.3216.45
Dec64.4133.4857.1528.72174.65Dec7.002.607.242.3111.96
Table A4. Descriptive statistics of the 3rd sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
3rd subgroup (members: 45)Jan63.6822.9661.4228.42126.00Jan2.962.863.06−5.619.44
Feb48.5517.5045.9217.03113.49Feb6.282.936.85−4.509.97
Mar61.4913.9358.4633.4096.55Mar9.362.999.91−1.6412.75
Apr44.7110.5943.4420.8379.33Apr13.342.9613.892.3016.35
May53.4815.3652.9026.31100.75May18.053.3219.046.2821.56
Jun49.3614.7346.5521.6088.43Jun22.133.4423.2810.4025.99
Jul34.948.8334.7019.8670.49Jul24.323.3025.1112.8027.91
Aug28.9811.1830.787.7054.94Aug24.163.5624.6012.2827.87
Sep51.9526.6248.6019.37196.55Sep19.663.3221.128.9323.46
Oct66.4419.0763.3336.95147.40Oct14.092.9214.864.5918.20
Nov69.0720.1467.2236.49127.12Nov9.962.7110.690.5414.26
Dec67.4722.2363.6535.17138.72Dec5.262.836.19−3.6310.60
Table A5. Descriptive statistics of the 4th sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
4th subgroup (members: 7)Jan53.6925.1543.6037.00108.30Jan7.301.067.575.328.45
Feb21.5015.4015.009.1351.70Feb8.311.078.436.959.60
Mar49.2113.9546.0034.4077.70Mar10.891.4910.329.3013.33
Apr48.0012.8949.0731.8069.50Apr14.550.6814.5713.5715.35
May25.096.3926.6017.0734.00May20.530.6820.7019.6721.50
Jun36.719.1038.1020.0048.30Jun24.221.6124.0221.7026.42
Jul21.8610.0820.008.9338.80Jul27.621.0928.0525.4228.62
Aug17.4910.3619.000.0732.60Aug27.851.3027.3526.0229.42
Sep28.7521.8828.204.7370.40Sep22.630.7122.4021.8523.62
Oct51.269.1453.7332.1558.73Oct17.781.3317.5016.1319.35
Nov44.0318.8147.2720.4070.73Nov13.211.5612.8711.4215.28
Dec66.1330.3561.3334.47129.93Dec9.021.169.787.3710.23
Table A6. Descriptive statistics of the 5th sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
5th subgroup (members: 2)Jan65.5218.9865.5252.1078.94Jan−3.100.70−3.10−3.60−2.61
Feb52.3915.1452.3941.6863.10Feb−1.250.35−1.25−1.50−1.00
Mar59.0613.1859.0649.7468.38Mar1.430.001.431.431.43
Apr41.029.5641.0234.2547.78Apr5.750.115.755.675.83
May74.254.9074.2570.7877.71May9.000.459.008.689.31
Jun44.223.7944.2241.5446.89Jun12.500.5512.5012.1112.89
Jul22.6011.0122.6014.8230.38Jul15.631.3015.6314.7216.55
Aug26.361.3826.3625.3827.33Aug15.010.5915.0114.5915.43
Sep68.2223.3168.2251.7484.70Sep11.170.0911.1711.1111.24
Oct92.979.1492.9786.5199.44Oct6.950.216.956.817.10
Nov125.2644.78125.2693.60156.92Nov4.110.114.114.034.19
Dec102.4037.21102.4076.08128.71Dec−0.650.66−0.65−1.11−0.18
Table A7. Descriptive statistics of the 6th sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
6th subgroup (members: 2)Jan64.477.9264.4758.8770.07Jan1.093.761.09−1.573.75
Feb17.407.1717.4012.3322.47Feb4.424.424.421.307.55
Mar38.404.6238.4035.1341.67Mar9.381.799.388.1210.65
Apr46.932.3646.9345.2748.60Apr12.482.2612.4810.8814.08
May34.204.4334.2031.0737.33May18.042.5118.0416.2719.82
Jun38.030.6138.0337.6038.47Jun22.182.1822.1820.6323.72
Jul48.242.1148.2446.7549.73Jul25.592.6725.5923.7027.48
Aug40.928.7340.9234.7547.10Aug24.872.9124.8722.8126.93
Sep22.4711.9122.4714.0530.90Sep19.773.3919.7717.3822.16
Oct53.658.3453.6547.7559.55Oct14.282.6714.2812.3916.16
Nov47.683.7147.6845.0550.30Nov10.802.4010.809.1012.50
Dec39.3315.1739.3328.6050.05Dec5.452.105.453.966.94
Table A8. Descriptive statistics of the 7th sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
7th subgroup (members: 1)Jan84.02 84.0284.0284.02Jan−2.48 −2.48−2.48−2.48
Feb45.92 45.9245.9245.92Feb−0.01 −0.01−0.01−0.01
Mar45.46 45.4645.4645.46Mar2.56 2.562.562.56
Apr69.88 69.8869.8869.88Apr7.54 7.547.547.54
May84.02 84.0284.0284.02May11.72 11.7211.7211.72
Jun76.86 76.8676.8676.86Jun14.55 14.5514.5514.55
Jul67.10 67.1067.1067.10Jul17.41 17.4117.4117.41
Aug35.51 35.5135.5135.51Aug18.34 18.3418.3418.34
Sep47.02 47.0247.0247.02Sep13.00 13.0013.0013.00
Oct40.47 40.4740.4740.47Oct8.57 8.578.578.57
Nov44.38 44.3844.3844.38Nov4.33 4.334.334.33
Dec54.78 54.7854.7854.78Dec0.06 0.060.060.06
Table A9. Descriptive statistics of the 8th sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
8th subgroup (members: 1)Jan78.80 78.8078.8078.80Jan4.41 4.414.414.41
Feb89.97 89.9789.9789.97Feb7.51 7.517.517.51
Mar75.05 75.0575.0575.05Mar10.23 10.2310.2310.23
Apr30.27 30.2730.2730.27Apr11.45 11.4511.4511.45
May52.30 52.3052.3052.30May17.50 17.5017.5017.50
Jun47.52 47.5247.5247.52Jun20.67 20.6720.6720.67
Jul33.02 33.0233.0233.02Jul24.44 24.4424.4424.44
Aug20.90 20.9020.9020.90Aug25.39 25.3925.3925.39
Sep77.53 77.5377.5377.53Sep21.91 21.9121.9121.91
Oct65.22 65.2265.2265.22Oct16.77 16.7716.7716.77
Nov57.04 57.0457.0457.04Nov11.09 11.0911.0911.09
Dec50.60 50.6050.6050.60Dec6.91 6.916.916.91
Table A10. Descriptive statistics of the 9th sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
9th subgroup (members: 1)Jan136.20 136.20136.20136.20Jan9.49 9.499.499.49
Feb57.10 57.1057.1057.10Feb10.74 10.7410.7410.74
Mar54.03 54.0354.0354.03Mar13.26 13.2613.2613.26
Apr27.47 27.4727.4727.47Apr17.52 17.5217.5217.52
May15.07 15.0715.0715.07May22.38 22.3822.3822.38
Jun14.10 14.1014.1014.10Jun25.36 25.3625.3625.36
Jul1.60 1.601.601.60Jul22.94 22.9422.9422.94
Aug1.20 1.201.201.20Aug23.46 23.4623.4623.46
Sep4.77 4.774.774.77Sep24.94 24.9424.9424.94
Oct21.30 21.3021.3021.30Oct20.29 20.2920.2920.29
Nov68.13 68.1368.1368.13Nov17.06 17.0617.0617.06
Dec129.80 129.80129.80129.80Dec11.95 11.9511.9511.95
Table A11. Descriptive statistics of the 10th sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
10th subgroup (members: 44)Jan213.1260.92195.99131.29398.42Jan5.563.074.72−0.2910.61
Feb151.2551.34139.8875.67325.23Feb8.072.657.522.0612.82
Mar126.7938.22119.4772.05238.28Mar10.282.599.853.2114.97
Apr66.3229.5764.9721.60142.87Apr14.232.6513.886.7118.26
May66.0132.6360.3815.02136.52May17.972.7817.719.6722.27
Jun49.1320.8750.1117.5392.89Jun21.782.8821.4111.8825.98
Jul26.1815.2128.273.3850.27Jul24.572.4924.4818.5828.91
Aug25.0214.6328.261.8148.75Aug24.882.7424.9719.0629.35
Sep81.8821.4279.7044.89129.43Sep20.602.8919.9712.8625.20
Oct140.8736.16137.6490.44241.15Oct16.092.7616.069.8920.56
Nov199.2548.30189.68127.78355.34Nov12.082.7011.576.3016.94
Dec192.1753.49179.52133.29388.96Dec7.483.006.561.9212.38
Table A12. Descriptive statistics of the 11th sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
11th subgroup (members: 16)Jan258.1471.22232.61168.04386.90Jan8.581.678.925.3210.80
Feb205.5390.69172.50108.70449.70Feb10.071.2810.267.4811.74
Mar123.1942.59111.6481.69220.05Mar12.171.5012.278.8214.44
Apr56.8728.1644.7127.20115.33Apr15.791.3016.1613.2017.45
May31.0918.3625.468.1381.51May19.761.8520.1716.1022.34
Jun10.758.607.673.1339.13Jun23.091.7423.6019.3725.31
Jul4.895.093.390.0017.54Jul25.651.8026.3020.9828.24
Aug9.6711.845.320.0044.69Aug25.281.8525.5620.4328.07
Sep43.0531.9333.749.53127.53Sep22.601.5622.7618.7224.57
Oct95.2934.3095.2740.93154.50Oct18.431.6318.6614.5220.21
Nov155.7573.76126.4988.66367.08Nov14.371.4914.5611.2916.27
Dec257.5081.04227.52177.73463.56Dec10.001.449.797.1412.10
Table A13. Descriptive statistics of the 12th sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
12th subgroup (members: 4)Jan257.6765.47249.82194.26336.77Jan4.612.624.052.048.28
Feb174.7584.12157.51102.20281.77Feb7.932.417.195.9811.36
Mar142.0239.55146.9497.73176.46Mar10.302.429.578.3313.74
Apr59.4522.0752.1643.3090.17Apr14.362.6013.5612.2418.10
May77.2142.7176.3337.07119.10May18.502.8818.2715.2222.24
Jun87.4926.5684.0960.30121.46Jun22.222.9921.4619.5826.39
Jul35.3118.3338.6310.2953.70Jul25.163.0424.7521.9929.15
Aug40.998.0239.4033.2951.89Aug24.343.2823.1121.9929.16
Sep101.4939.1186.5074.91158.03Sep20.853.2820.4417.3625.14
Oct132.2444.36131.7178.71186.81Oct16.312.3515.7414.2019.57
Nov242.4392.39242.60150.31334.23Nov11.622.4011.079.3814.95
Dec225.4964.01221.77162.57295.86Dec6.872.366.115.0610.17
Table A14. Descriptive statistics of the 13th sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
13th subgroup (members: 2)Jan305.1579.97305.15248.60361.70Jan4.681.204.683.835.53
Feb180.7557.56180.75140.05221.45Feb7.450.377.457.197.71
Mar133.2831.57133.28110.95155.60Mar9.681.019.688.9610.39
Apr65.208.4965.2059.2071.20Apr14.410.2614.4114.2214.59
May100.101.44100.1099.08101.12May18.110.2618.1117.9318.29
Jun87.2635.2787.2662.32112.20Jun21.700.1121.7021.6321.78
Jul46.806.7346.8042.0451.56Jul22.162.9422.1620.0824.24
Aug47.6829.0847.6827.1268.24Aug25.230.8625.2324.6225.84
Sep48.540.7648.5448.0049.08Sep20.120.4620.1219.7920.44
Oct136.7227.61136.72117.20156.24Oct15.612.0415.6114.1717.06
Nov267.8773.61267.87215.82319.92Nov11.731.3411.7310.7812.67
Dec338.84128.64338.84247.88429.80Dec6.651.216.655.807.51
Table A15. Descriptive statistics of the 14th sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
14th subgroup (members: 2)Jan155.4413.38155.44145.97164.90Jan6.250.746.255.726.77
Feb188.4181.99188.41130.43246.38Feb8.150.748.157.638.68
Mar175.8992.85175.89110.23241.55Mar10.430.1310.4310.3410.53
Apr88.829.6088.8282.0395.61Apr14.620.3914.6214.3514.90
May55.6928.2255.6935.7375.65May19.410.3519.4119.1619.65
Jun49.4129.7549.4128.3770.45Jun22.910.3122.9122.6923.12
Jul42.6121.7942.6127.2058.02Jul25.110.6025.1124.6925.54
Aug37.4225.4837.4219.4055.44Aug25.370.7625.3724.8425.91
Sep116.6749.04116.6782.00151.35Sep21.280.5121.2820.9321.64
Oct163.58112.50163.5884.03243.12Oct16.170.3116.1715.9516.39
Nov156.8261.65156.82113.23200.42Nov12.450.3912.4512.1712.72
Dec173.4239.22173.42145.69201.15Dec8.560.448.568.258.87
Table A16. Descriptive statistics of the 15th sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
15th subgroup (members: 1)Jan233.33 233.33233.33233.33Jan1.88 1.881.881.88
Feb97.13 97.1397.1397.13Feb5.25 5.255.255.25
Mar103.80 103.80103.80103.80Mar7.55 7.557.557.55
Apr68.33 68.3368.3368.33Apr11.35 11.3511.3511.35
May36.40 36.4036.4036.40May15.55 15.5515.5515.55
Jun39.40 39.4039.4039.40Jun18.35 18.3518.3518.35
Jul27.33 27.3327.3327.33Jul21.35 21.3521.3521.35
Aug32.53 32.5332.5332.53Aug21.45 21.4521.4521.45
Sep88.33 88.3388.3388.33Sep17.08 17.0817.0817.08
Oct110.20 110.20110.20110.20Oct13.65 13.6513.6513.65
Nov167.87 167.87167.87167.87Nov9.38 9.389.389.38
Dec229.73 229.73229.73229.73Dec4.67 4.674.674.67
Table A17. Descriptive statistics of the 16th sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
16th subgroup (members: 1)Jan155.91 155.91155.91155.91Jan8.20 8.208.208.20
Feb96.83 96.8396.8396.83Feb10.15 10.1510.1510.15
Mar91.49 91.4991.4991.49Mar10.86 10.8610.8610.86
Apr41.23 41.2341.2341.23Apr14.73 14.7314.7314.73
May14.83 14.8314.8314.83May14.60 14.6014.6014.60
Jun27.11 27.1127.1127.11Jun22.27 22.2722.2722.27
Jul8.63 8.638.638.63Jul26.91 26.9126.9126.91
Aug9.88 9.889.889.88Aug22.37 22.3722.3722.37
Sep60.44 60.4460.4460.44Sep18.77 18.7718.7718.77
Oct91.83 91.8391.8391.83Oct18.64 18.6418.6418.64
Nov184.90 184.90184.90184.90Nov14.36 14.3614.3614.36
Dec131.57 131.57131.57131.57Dec9.53 9.539.539.53
Table A18. Descriptive statistics of the 17th sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
17th subgroup (members: 2)Jan230.5951.65230.59194.07267.11Jan6.723.256.724.439.02
Feb147.9748.12147.97113.95182.00Feb8.582.878.586.5510.61
Mar175.2377.86175.23120.17230.29Mar10.622.7210.628.7012.54
Apr94.334.7894.3390.9597.71Apr14.621.6914.6213.4315.82
May76.6635.8076.6651.35101.97May19.271.4819.2718.2220.32
Jun43.691.1143.6942.9044.48Jun22.472.3322.4720.8224.12
Jul36.032.2336.0334.4537.60Jul25.342.0525.3423.8926.79
Aug52.3317.9352.3339.6565.00Aug25.032.3225.0323.3926.67
Sep102.296.77102.2997.50107.08Sep19.854.3419.8516.7822.92
Oct121.2013.83121.20111.42130.97Oct16.233.7016.2313.6118.84
Nov150.9560.02150.95108.51193.39Nov12.622.3312.6210.9714.27
Dec278.28119.13278.28194.04362.53Dec8.502.378.506.8310.18
Table A19. Descriptive statistics of the 18th sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
18th subgroup (members: 1)Jan131.62 131.62131.62131.62Jan9.45 9.459.459.45
Feb131.95 131.95131.95131.95Feb9.47 9.479.479.47
Mar85.05 85.0585.0585.05Mar11.23 11.2311.2311.23
Apr28.58 28.5828.5828.58Apr14.80 14.8014.8014.80
May29.80 29.8029.8029.80May19.03 19.0319.0319.03
Jun16.75 16.7516.7516.75Jun23.23 23.2323.2323.23
Jul2.47 2.472.472.47Jul23.33 23.3323.3323.33
Aug9.43 9.439.439.43Aug28.36 28.3628.3628.36
Sep90.45 90.4590.4590.45Sep24.37 24.3724.3724.37
Oct148.82 148.82148.82148.82Oct18.12 18.1218.1218.12
Nov169.56 169.56169.56169.56Nov15.72 15.7215.7215.72
Dec183.16 183.16183.16183.16Dec12.45 12.4512.4512.45
Table A20. Descriptive statistics of the 19th sub-cluster region of the multivariate case.
Precipitation | Mean Temperature
Month | Mean | SD | Median | Min | Max | Month | Mean | SD | Median | Min | Max
19th subgroup (members: 1)Jan152.59 152.59152.59152.59Jan1.63 1.631.631.63
Feb95.46 95.4695.4695.46Feb4.17 4.174.174.17
Mar86.25 86.2586.2586.25Mar7.34 7.347.347.34
Apr53.10 53.1053.1053.10Apr9.60 9.609.609.60
May92.17 92.1792.1792.17May12.94 12.9412.9412.94
Jun70.55 70.5570.5570.55Jun15.37 15.3715.3715.37
Jul48.80 48.8048.8048.80Jul20.39 20.3920.3920.39
Aug43.33 43.3343.3343.33Aug21.17 21.1721.1721.17
Sep106.97 106.97106.97106.97Sep16.36 16.3616.3616.36
Oct128.08 128.08128.08128.08Oct12.31 12.3112.3112.31
Nov172.25 172.25172.25172.25Nov7.92 7.927.927.92
Dec136.60 136.60136.60136.60Dec3.07 3.073.073.07

Appendix C. Plots

Appendix C.1. Second Level Clustering

Figure A1. Plot of average silhouette width against number of clusters for the precipitation data (2nd level, cluster 1).
Figure A2. Plot of average silhouette width against number of clusters for the precipitation data (2nd level, cluster 2).
Figure A3. Plot of average silhouette width against number of clusters for the mean temperature data (2nd level, cluster 1).
Figure A4. Plot of average silhouette width against number of clusters for the mean temperature data (2nd level, cluster 2).

Appendix C.2. Silhouette per Cluster

Figure A5. Plot of average silhouette width per cluster for the precipitation data (1st Level).
Figure A6. Plot of average silhouette width per cluster for the mean temperature data (1st Level).
Figure A7. Plot of average silhouette width per cluster for the mean temperature data (2nd level, cluster 1).
Figure A8. Plot of average silhouette width per cluster for the mean temperature data (2nd level, cluster 2).
Figure A9. Plot of average silhouette width per cluster for the multivariate case (1st Level).

Appendix C.3. t-SNE

Figure A10. Plot of average silhouette width against number of clusters for the precipitation data (t-SNE).
Figure A11. Plot of average silhouette width against number of clusters for the mean temperature data (t-SNE).
Figure A12. Plot of average silhouette width against number of clusters for the multivariate case (t-SNE).

  48. Kokkoris, I.P.; Bekri, E.S.; Skuras, D.; Vlami, V.; Zogaris, S.; Maroulis, G.; Dimopoulos, D.; Dimopoulos, P. Integrating MAES implementation into protected area management under climate change: A fine-scale application in Greece. Sci. Total Environ. 2019, 695, 133530. [Google Scholar] [CrossRef]
  49. Bekri, E.S.; Economou, P.; Yannopoulos, P.C.; Demetracopoulos, A.C. Reassessing existing reservoir supply capacity and management resilience under climate change and sediment deposition. Water 2021, 13, 1819. [Google Scholar] [CrossRef]
  50. Bekri, E.S.; Kokkoris, I.P.; Skuras, D.; Hein, L.; Dimopoulos, P. Ecosystem accounting for water resources at the catchment scale, a case study for the Peloponnisos, Greece. Ecosyst. Serv. 2024, 65, 101586. [Google Scholar] [CrossRef]
  51. Alam, M.S.; Paul, S. A comparative analysis of clustering algorithms to identify the homogeneous rainfall gauge stations of Bangladesh. J. Appl. Stat. 2020, 47, 1460–1481. [Google Scholar] [CrossRef]
  52. Raziei, T.; Bordi, I.; Pereira, L. A precipitation-based regionalization for Western Iran and regional drought variability. Hydrol. Earth Syst. Sci. 2008, 12, 1309–1321. [Google Scholar] [CrossRef]
  53. Shirin, A.S.; Thomas, R. Regionalization of rainfall in Kerala state. Procedia Technol. 2016, 24, 15–22. [Google Scholar] [CrossRef]
  54. Lolis, C.J.; Kotsias, G.; Bartzokas, A. Objective definition of climatologically homogeneous areas in the Southern Balkans based on the ERA5 data set. Climate 2018, 6, 96. [Google Scholar] [CrossRef]
  55. Gompertz, B. XXIV. On the nature of the function expressive of the law of human mortality, and on a new mode of determining the value of life contingencies. In a letter to Francis Baily, Esq. FRS &c. Philos. Trans. R. Soc. Lond. 1825, 115, 513–583. [Google Scholar]
  56. Tjørve, K.M.; Tjørve, E. The use of Gompertz models in growth analyses, and new Gompertz-model approach: An addition to the Unified-Richards family. PLoS ONE 2017, 12, e0178691. [Google Scholar] [CrossRef] [PubMed]
  57. Richards, F.J. A flexible growth function for empirical use. J. Exp. Bot. 1959, 10, 290–301. [Google Scholar] [CrossRef]
  58. Gregorczyk, A. Richards plant growth model. J. Agron. Crop Sci. 1998, 181, 243–247. [Google Scholar] [CrossRef]
  59. Ding, C.; Li, T. Adaptive dimension reduction using discriminant analysis and k-means clustering. In Proceedings of the 24th International Conference on Machine Learning, Corvalis, OR, USA, 20–24 June 2007; pp. 521–528. [Google Scholar]
  60. Hou, C.; Nie, F.; Jiao, Y.; Zhang, C.; Wu, Y. Learning a subspace for clustering via pattern shrinking. Inf. Process. Manag. 2013, 49, 871–883. [Google Scholar] [CrossRef]
  61. Wang, X.D.; Chen, R.C.; Yan, F. High-dimensional Data Clustering Using K-means Subspace Feature Selection. J. Netw. Intell. 2019, 4, 80–87. [Google Scholar]
  62. Gower, J.C. Some distance properties of latent root and vector methods used in multivariate analysis. Biometrika 1966, 53, 325–338. [Google Scholar] [CrossRef]
  63. Labrín, C.; Urdinez, F. Principal component analysis. In R for Political Data Science; Chapman and Hall/CRC: Boca Raton, FL, USA, 2020; pp. 375–393. [Google Scholar]
  64. Skarlatos, K.; Fousteris, A.; Georgakellos, D.; Economou, P.; Bersimis, S. Assessing Ships’ Environmental Performance Using Machine Learning. Energies 2023, 16, 2544. [Google Scholar] [CrossRef]
  65. Wang, T.; Zhang, F.; Gu, H.; Hu, H.; Kaur, M. A research study on new energy brand users based on principal component analysis (PCA) and fusion target planning model for sustainable environment of smart cities. Sustain. Energy Technol. Assess. 2023, 57, 103262. [Google Scholar] [CrossRef]
  66. Wang, J.; Wang, J. Classical multidimensional scaling. In Geometric Structure of High-Dimensional Data and Dimensionality Reduction; Springer: Berlin/Heidelberg, Germany, 2012; pp. 115–129. [Google Scholar]
  67. Borg, I.; Groenen, P.J. Modern Multidimensional Scaling: Theory and Applications; Springer Science & Business Media: New York, NY, USA, 2005. [Google Scholar]
  68. Wang, J.; Wang, J. Geometric Structure of High-Dimensional Data; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  69. Everitt, B.; Hothorn, T. An Introduction to Applied Multivariate Analysis with R; Springer Science & Business Media: New York, NY, USA, 2011. [Google Scholar]
  70. Cox, T.F.; Cox, M.A. Multidimensional Scaling; CRC Press: lBoca Raton, FL, USA, 2000. [Google Scholar]
  71. Saeed, N.; Nam, H.; Haq, M.I.U.; Muhammad Saqib, D.B. A survey on multidimensional scaling. ACM Comput. Surv. (CSUR) 2018, 51, 1–25. [Google Scholar] [CrossRef]
  72. Kruskal, J.B. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika 1964, 29, 1–27. [Google Scholar] [CrossRef]
  73. Rousseeuw, P.J. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math. 1987, 20, 53–65. [Google Scholar] [CrossRef]
  74. Tibshirani, R.; Walther, G.; Hastie, T. Estimating the number of clusters in a data set via the gap statistic. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2001, 63, 411–423. [Google Scholar] [CrossRef]
  75. Moritz, S.; Bartz-Beielstein, T. imputeTS: Time series missing value imputation in R. R J. 2017, 9, 207. [Google Scholar] [CrossRef]
  76. Müller, M. Dynamic Time Warping. In Information Retrieval for Music and Motion; Springer: Berlin/Heidelberg, Germany, 2007; pp. 69–84. [Google Scholar]
  77. Zhao, J.; Itti, L. shapedtw: Shape dynamic time warping. Pattern Recognit. 2018, 74, 171–184. [Google Scholar] [CrossRef]
  78. Jeong, Y.S.; Jeong, M.K.; Omitaomu, O.A. Weighted dynamic time warping for time series classification. Pattern Recognit. 2011, 44, 2231–2240. [Google Scholar] [CrossRef]
  79. Hsu, C.J.; Huang, K.S.; Yang, C.B.; Guo, Y.P. Flexible dynamic time warping for time series classification. Procedia Comput. Sci. 2015, 51, 2838–2842. [Google Scholar] [CrossRef]
  80. Dechpichai, P.; Jinapang, N.; Yamphli, P.; Polamnuay, S.; Injan, S.; Humphries, U. Multivariable Panel Data Cluster Analysis of Meteorological Stations in Thailand for ENSO Phenomenon. Math. Comput. Appl. 2022, 27, 37. [Google Scholar] [CrossRef]
  81. Lagouvardos, K.; Kotroni, V.; Bezes, A.; Koletsis, I.; Kopania, T.; Lykoudis, S.; Mazarakis, N.; Papagiannaki, K.; Vougioukas, S. The automatic weather stations NOANN network of the National Observatory of Athens: Operation and database. Geosci. Data J. 2017, 4, 4–16. [Google Scholar] [CrossRef]
  82. Koutsoyiannis, D.; Mamassis, N.; Efstratiadis, A.; Zarkadoulas, N.; Markonis, Y. Floods in Greece. In Changes of Flood Risk in Europe; IAHS: Wallingford, UK, 2012; pp. 238–256. [Google Scholar]
  83. Thiessen, A.H. District no. 10, great basin. Mon. Weather Rev. 1911, 39, 1248–1254. [Google Scholar] [CrossRef]
  84. Han, D.; Bray, M. Automated Thiessen polygon generation. Water Resour. Res. 2006, 42. [Google Scholar] [CrossRef]
  85. Grant, R.; Hollinger, S.; Hubbard, K.; Hoogenboom, G.; Vanderlip, R. Ability to predict daily solar radiation values from interpolated climate records for use in crop simulation models. Agric. For. Meteorol. 2004, 127, 65–75. [Google Scholar] [CrossRef]
  86. Feidas, H.; Karagiannidis, A.; Keppas, S.; Vaitis, M.; Kontos, T.; Zanis, P.; Melas, D.; Anadranistakis, E. Modeling and mapping temperature and precipitation climate data in Greece using topographical and geographical parameters. Theor. Appl. Climatol. 2014, 118, 133–146. [Google Scholar] [CrossRef]
  87. Qiu, J.; Zhao, W.; Brocca, L.; Tarolli, P. Storm Daniel revealed the fragility of the Mediterranean region. Innov. Geosci 2023, 1, 100036. [Google Scholar] [CrossRef]
Figure 1. Plot of average silhouette width against number of clusters using the precipitation data.
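The average silhouette width plotted against the number of clusters (as in Figures 1 and 2) is a standard device for choosing K in the first-level K-means step. The sketch below, on synthetic data rather than the station data, shows the computation from scratch; the restart count, toy blobs, and helper names (`kmeans`, `avg_silhouette`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def kmeans(X, k, iters=50, restarts=5, seed=0):
    """Plain Lloyd's algorithm with random restarts; returns the labels
    of the restart with the lowest within-cluster sum of squares."""
    rng = np.random.default_rng(seed)
    best_labels, best_inertia = None, np.inf
    for _ in range(restarts):
        centers = X[rng.choice(len(X), size=k, replace=False)].copy()
        for _ in range(iters):
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        inertia = ((X - centers[labels]) ** 2).sum()
        if inertia < best_inertia:
            best_labels, best_inertia = labels, inertia
    return best_labels

def avg_silhouette(X, labels):
    """Mean silhouette width s(i) = (b - a) / max(a, b) over all points,
    where a is the mean intra-cluster distance and b the mean distance
    to the nearest other cluster (Rousseeuw, 1987)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    scores = []
    for i in range(len(X)):
        same = labels == labels[i]
        if same.sum() <= 1:
            scores.append(0.0)  # common convention for singleton clusters
            continue
        a = D[i, same].sum() / (same.sum() - 1)
        b = min(D[i, labels == c].mean()
                for c in set(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Toy data: two well-separated blobs, so k = 2 should maximise the width.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(5, 0.3, (30, 2))])
widths = {k: avg_silhouette(X, kmeans(X, k)) for k in range(2, 6)}
best_k = max(widths, key=widths.get)
```

The number of clusters is then taken at the peak of the silhouette curve, exactly as read off from the figures.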
Figure 2. Plot of average silhouette width against number of clusters using the temperature data.
Figure 3. Monthly seasonal indices (mean) for the two first level clusters under the 1st case (precipitation levels).
Figure 4. Boxplots of the monthly seasonal indices for both clusters of the 1st case.
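The monthly seasonal indices summarized in Figures 3 and 4 can be illustrated with a simple ratio-to-overall-mean computation: the mean of each calendar month divided by the grand mean, so an index above 1 marks a wetter (or warmer) than average month. This is a minimal sketch on synthetic data; the paper's exact index definition may differ.

```python
import numpy as np

def monthly_seasonal_indices(values, months):
    """Ratio-to-overall-mean seasonal indices for a monthly series.
    `months` holds the calendar month (1-12) of each observation."""
    values, months = np.asarray(values, float), np.asarray(months)
    grand_mean = values.mean()
    return {m: values[months == m].mean() / grand_mean for m in range(1, 13)}

# Toy series: 10 complete years with a sinusoidal seasonal cycle peaking
# in January, mimicking a winter-dominated precipitation regime.
months = np.tile(np.arange(1, 13), 10)
values = 100 + 40 * np.cos(2 * np.pi * (months - 1) / 12)
idx = monthly_seasonal_indices(values, months)
```

For this toy cycle the January index is 1.4 and the July index 0.6, reproducing the winter-wet/summer-dry shape visible in the first-level clusters.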
Figure 5. Monthly seasonal indices (mean) for the two first level clusters under the 2nd case (mean temperature levels).
Figure 6. Boxplots of the monthly seasonal indices for both clusters of the 2nd case.
Figure 7. Thiessen polygons of the four stations subgroups (sub-clusters regions) for the precipitation time series.
Figure 8. Thiessen polygons of the four stations subgroups (sub-cluster regions) for the mean temperature time series.
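The Thiessen polygons in Figures 7 and 8 are simply nearest-station regions: every location inherits the cluster label of its closest station. A minimal rasterized sketch of that assignment is shown below; the station coordinates are hypothetical, not the actual network locations.

```python
import numpy as np

def thiessen_labels(stations, points):
    """Assign each point to its nearest station (Euclidean distance).
    The resulting regions are exactly the Thiessen (Voronoi) polygons."""
    stations = np.asarray(stations, float)
    points = np.asarray(points, float)
    d = np.linalg.norm(points[:, None, :] - stations[None, :, :], axis=2)
    return d.argmin(axis=1)

# Three hypothetical station coordinates.
stations = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]

# Rasterise a small grid; each cell inherits its nearest station's label,
# which is what the coloured polygon maps display for the sub-clusters.
xs, ys = np.meshgrid(np.linspace(-1, 5, 61), np.linspace(-1, 4, 51))
grid = np.column_stack([xs.ravel(), ys.ravel()])
labels = thiessen_labels(stations, grid)
```

Exact polygon boundaries (rather than a raster) can be obtained with a Voronoi tessellation routine, but the nearest-station rule above defines the same partition.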
Figure 9. Plot of average silhouette width against number of clusters for the multivariate case.
Figure 10. Monthly seasonal indices (mean) for the two first-level clusters under the multivariate case. The upper panels correspond to precipitation levels and the lower panels to mean temperatures.
Figure 11. Thiessen polygons of the 19 station groups defined by the second level (precipitation-based coloring).
Figure 12. Thiessen polygons of the 19 station groups defined by the second level (mean temperature-based coloring).
Table 1. Summary statistics for the precipitation and mean temperature data.

| Data | Total Observations | Mean | SD | Min | Max |
|---|---|---|---|---|---|
| Precipitation | 71,040 | 63.11 mm | 78.18 mm | 0.00 mm | 1225.00 mm |
| Mean temperature | 71,040 | 17.02 °C | 7.50 °C | −10.25 °C | 36.10 °C |
Table 2. Summarized results of the four identified sub-cluster regions for the precipitation-based clusters.

| Cluster | Sub-Cluster | Percentage of Stations | Mean Annual Precipitation | Notable Regions |
|---|---|---|---|---|
| Dry (Cluster 1) | Very Dry | 92.97% | 589.68 mm | Eastern Greece, Attica (Athens), Thessaloniki |
| | Less Dry | 7.03% | 792.65 mm | Crete, Aegean islands (Chios, Ikaria, Lesvos, etc.), continental Greece (e.g., Kalamata, Geraki, and Tatoi) |
| Wet (Cluster 2) | Wet | 87.72% | 1342.27 mm | Mountainous areas in Western Greece (Pindus, Panachaiko, Erymanthos, etc.) |
| | Very Wet | 12.28% | 2099.75 mm | Rainy areas in Northern Greece, located in Epirus (Ioannina, Thesprotia, and Arta), Evrytania |
Table 3. Summarized results of the four identified sub-cluster regions for the temperature-based clusters.

| Cluster | Sub-Cluster | Percentage of Stations | Mean Annual Temperature | Notable Regions |
|---|---|---|---|---|
| Warm (Cluster 3) | Very Warm | 72.97% | 18.51 °C | Eastern Greece, islands, coastal Western Greece |
| | Less Warm | 1.08% | 16.91 °C | Four stations: Achaia, Chania, Chios, and Veroia |
| Cold (Cluster 4) | Cold | 24.32% | 13.57 °C | Northern Greece (e.g., Alexandroupoli, Ioannina, and Metsovo), mountainous areas |
| | Very Cold | 1.62% | 6.09 °C | High-altitude areas (>1500 m), e.g., Kaimaktsalan and Parnassos |
Table 4. Descriptive statistics (mm) for the four identified sub-cluster regions for the precipitation time series.

1st subgroup (members: 291)

| Month | Mean | SD | Median | Min | Max |
|---|---|---|---|---|---|
| Jan | 88.83 | 40.45 | 76.90 | 27.93 | 245.78 |
| Feb | 63.16 | 29.93 | 59.07 | 6.07 | 161.13 |
| Mar | 57.66 | 19.68 | 53.47 | 23.25 | 123.80 |
| Apr | 33.69 | 13.91 | 32.65 | 3.60 | 69.88 |
| May | 27.74 | 17.65 | 24.09 | 0.30 | 84.11 |
| Jun | 28.20 | 20.03 | 25.76 | 0.27 | 120.95 |
| Jul | 16.89 | 15.81 | 11.22 | 0.00 | 78.63 |
| Aug | 12.64 | 13.66 | 7.40 | 0.00 | 69.25 |
| Sep | 35.67 | 21.51 | 33.20 | 0.00 | 106.81 |
| Oct | 58.01 | 26.40 | 53.40 | 12.60 | 157.04 |
| Nov | 75.48 | 33.37 | 69.13 | 20.40 | 197.53 |
| Dec | 91.71 | 36.61 | 83.70 | 28.60 | 229.17 |

2nd subgroup (members: 22)

| Month | Mean | SD | Median | Min | Max |
|---|---|---|---|---|---|
| Jan | 170.82 | 59.60 | 161.93 | 77.97 | 329.20 |
| Feb | 106.19 | 49.83 | 96.81 | 41.13 | 205.77 |
| Mar | 74.25 | 26.08 | 75.19 | 38.13 | 122.10 |
| Apr | 36.09 | 20.73 | 31.80 | 14.30 | 111.80 |
| May | 21.53 | 11.27 | 19.66 | 0.80 | 48.94 |
| Jun | 14.66 | 13.66 | 7.64 | 0.33 | 52.49 |
| Jul | 5.26 | 5.80 | 3.96 | 0.00 | 21.00 |
| Aug | 7.12 | 10.47 | 3.52 | 0.00 | 46.90 |
| Sep | 26.51 | 16.48 | 25.00 | 1.60 | 64.62 |
| Oct | 62.94 | 28.11 | 62.02 | 20.92 | 120.43 |
| Nov | 103.35 | 34.30 | 98.22 | 47.90 | 181.53 |
| Dec | 163.93 | 48.01 | 154.75 | 82.71 | 255.93 |

3rd subgroup (members: 50)

| Month | Mean | SD | Median | Min | Max |
|---|---|---|---|---|---|
| Jan | 204.74 | 69.48 | 201.76 | 63.60 | 386.90 |
| Feb | 154.11 | 71.23 | 141.43 | 40.05 | 449.70 |
| Mar | 130.75 | 38.37 | 120.16 | 61.45 | 241.55 |
| Apr | 71.40 | 20.61 | 68.36 | 37.37 | 115.33 |
| May | 66.51 | 26.13 | 61.79 | 20.75 | 118.63 |
| Jun | 53.94 | 22.20 | 55.10 | 12.20 | 121.46 |
| Jul | 31.49 | 15.74 | 32.58 | 2.07 | 70.49 |
| Aug | 32.43 | 14.71 | 33.19 | 4.91 | 65.00 |
| Sep | 85.91 | 29.94 | 85.72 | 28.40 | 196.55 |
| Oct | 129.35 | 36.52 | 126.80 | 67.60 | 243.12 |
| Nov | 181.33 | 55.01 | 180.17 | 71.55 | 367.08 |
| Dec | 200.31 | 74.55 | 182.61 | 80.60 | 463.56 |

4th subgroup (members: 7)

| Month | Mean | SD | Median | Min | Max |
|---|---|---|---|---|---|
| Jan | 345.18 | 34.17 | 340.54 | 284.87 | 398.42 |
| Feb | 257.64 | 42.62 | 271.20 | 201.77 | 325.23 |
| Mar | 191.30 | 26.80 | 188.18 | 155.60 | 238.28 |
| Apr | 102.92 | 30.83 | 109.49 | 60.83 | 142.87 |
| May | 114.97 | 12.10 | 112.74 | 99.08 | 136.52 |
| Jun | 85.68 | 15.07 | 82.90 | 69.80 | 112.20 |
| Jul | 44.00 | 8.94 | 48.00 | 29.78 | 53.70 |
| Aug | 42.93 | 11.62 | 39.57 | 33.11 | 68.24 |
| Sep | 104.67 | 36.86 | 101.16 | 48.00 | 158.03 |
| Oct | 185.18 | 39.01 | 186.81 | 137.13 | 241.15 |
| Nov | 308.28 | 41.57 | 319.92 | 241.09 | 355.34 |
| Dec | 317.00 | 72.24 | 295.86 | 234.58 | 429.80 |
Table 5. Descriptive statistics (°C) for the four identified sub-cluster regions for the mean temperature time series.

1st subgroup (members: 90)

| Month | Mean | SD | Median | Min | Max |
|---|---|---|---|---|---|
| Jan | 2.90 | 1.86 | 2.95 | −1.57 | 8.00 |
| Feb | 5.97 | 1.72 | 6.45 | 1.30 | 8.61 |
| Mar | 8.78 | 1.55 | 8.95 | 3.21 | 11.32 |
| Apr | 12.70 | 1.62 | 13.04 | 6.71 | 15.28 |
| May | 17.00 | 1.93 | 17.24 | 9.67 | 20.92 |
| Jun | 20.73 | 2.12 | 20.85 | 11.88 | 24.58 |
| Jul | 23.47 | 1.90 | 23.73 | 18.58 | 28.16 |
| Aug | 23.19 | 1.94 | 23.31 | 18.49 | 27.69 |
| Sep | 18.87 | 1.85 | 19.15 | 12.86 | 21.91 |
| Oct | 13.98 | 1.72 | 14.17 | 8.81 | 17.37 |
| Nov | 10.02 | 1.44 | 10.24 | 5.86 | 13.14 |
| Dec | 5.21 | 1.57 | 5.33 | 1.53 | 8.31 |

2nd subgroup (members: 6)

| Month | Mean | SD | Median | Min | Max |
|---|---|---|---|---|---|
| Jan | −3.63 | 1.54 | −3.10 | −5.61 | −2.07 |
| Feb | −1.60 | 1.60 | −1.25 | −4.50 | −0.01 |
| Mar | 0.97 | 1.88 | 1.43 | −1.64 | 3.01 |
| Apr | 5.30 | 2.28 | 5.75 | 2.30 | 7.62 |
| May | 9.13 | 2.30 | 9.00 | 6.28 | 11.77 |
| Jun | 12.73 | 1.99 | 12.50 | 10.40 | 15.48 |
| Jul | 15.41 | 2.22 | 15.63 | 12.80 | 17.94 |
| Aug | 15.17 | 2.34 | 15.01 | 12.28 | 18.34 |
| Sep | 10.98 | 1.76 | 11.17 | 8.92 | 13.00 |
| Oct | 6.64 | 1.60 | 6.95 | 4.59 | 8.57 |
| Nov | 3.32 | 1.86 | 4.11 | 0.54 | 5.34 |
| Dec | −1.30 | 1.69 | −0.65 | −3.63 | 0.21 |

3rd subgroup (members: 270)

| Month | Mean | SD | Median | Min | Max |
|---|---|---|---|---|---|
| Jan | 9.19 | 2.19 | 9.42 | 3.75 | 13.74 |
| Feb | 10.86 | 1.74 | 10.89 | 6.82 | 15.16 |
| Mar | 13.03 | 1.50 | 13.28 | 9.30 | 16.74 |
| Apr | 16.77 | 1.45 | 16.81 | 12.19 | 20.31 |
| May | 21.59 | 1.36 | 21.57 | 18.33 | 25.80 |
| Jun | 25.10 | 1.40 | 25.21 | 16.04 | 29.24 |
| Jul | 27.76 | 1.45 | 27.81 | 20.08 | 31.89 |
| Aug | 27.81 | 1.50 | 27.80 | 23.74 | 32.27 |
| Sep | 24.16 | 1.51 | 24.36 | 19.38 | 28.91 |
| Oct | 19.45 | 1.86 | 19.65 | 15.42 | 24.68 |
| Nov | 15.39 | 1.97 | 15.62 | 10.93 | 20.38 |
| Dec | 10.97 | 2.08 | 11.02 | 6.30 | 15.76 |

4th subgroup (members: 4)

| Month | Mean | SD | Median | Min | Max |
|---|---|---|---|---|---|
| Jan | 7.62 | 1.99 | 8.08 | 4.80 | 9.49 |
| Feb | 9.98 | 0.73 | 10.10 | 8.98 | 10.74 |
| Mar | 12.07 | 0.98 | 12.08 | 10.86 | 13.26 |
| Apr | 16.04 | 1.15 | 15.95 | 14.73 | 17.52 |
| May | 18.83 | 3.65 | 19.17 | 14.60 | 22.38 |
| Jun | 24.20 | 1.61 | 24.41 | 22.27 | 25.68 |
| Jul | 24.70 | 2.06 | 24.47 | 22.93 | 26.91 |
| Aug | 24.50 | 1.96 | 24.45 | 22.37 | 26.73 |
| Sep | 22.03 | 2.57 | 22.21 | 18.77 | 24.94 |
| Oct | 18.45 | 1.68 | 18.65 | 16.21 | 20.29 |
| Nov | 14.61 | 1.99 | 14.57 | 12.22 | 17.06 |
| Dec | 9.63 | 1.78 | 9.49 | 7.60 | 11.95 |
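The sub-cluster memberships summarized in Tables 4 and 5 result from the second-level hierarchical clustering, whose dissimilarity measure is Dynamic Time Warping (DTW). As a hedged illustration of the distance itself (not the paper's full pipeline), the sketch below implements the classic dynamic-programming recursion with an absolute-difference local cost on synthetic monthly curves.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic DTW: D[i, j] = cost(i, j) + min(D[i-1, j], D[i, j-1],
    D[i-1, j-1]), i.e. the cheapest monotone alignment of the two series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two stations with the same seasonal shape but a one-month phase shift
# stay close under DTW, while a featureless series does not.
t = np.arange(24)
a = np.sin(2 * np.pi * t / 12)
b = np.sin(2 * np.pi * (t - 1) / 12)   # same pattern, shifted one month
c = np.zeros(24)                        # flat reference series
```

This phase tolerance is precisely why DTW is preferred over pointwise Euclidean distance when grouping stations whose seasonal cycles are similar but slightly offset.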
Table 6. Summary of sub-clusters for clusters 1 and 2.

| Cluster | Sub-Cluster | Mean Annual Precipitation | Mean Annual Temperature |
|---|---|---|---|
| 1 | 1 | 570.33 mm | 18.58 °C |
| | 2 | 582.3 mm | 15.27 °C |
| | 3 | 640.12 mm | 14.13 °C |
| | 4 | 463.72 mm | 16.82 °C |
| | 5 | 774.27 mm | 6.38 °C |
| | 6 | 491.71 mm | 14.03 °C |
| | 7 | 695.42 mm | 7.97 °C |
| | 8 | 678.22 mm | 14.85 °C |
| | 9 | 530.77 mm | 18.28 °C |
| 2 | 10 | 1337.99 mm | 15.3 °C |
| | 11 | 1251.72 mm | 17.15 °C |
| | 12 | 1576.51 mm | 15.26 °C |
| | 13 | 1758.19 mm | 14.79 °C |
| | 14 | 1404.18 mm | 15.89 °C |
| | 15 | 1234.38 mm | 12.29 °C |
| | 16 | 914.65 mm | 15.95 °C |
| | 17 | 1509.55 mm | 15.82 °C |
| | 18 | 1027.64 mm | 17.46 °C |
| | 19 | 1186.15 mm | 11.02 °C |
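The 19 groups of Table 6 arise from the multivariate variant of the second level, where precipitation and temperature are compared jointly with multivariate DTW. A common "dependent" formulation shares one warping path across variables, with the per-step cost taken as the Euclidean distance between the multivariate observations; the sketch below is that formulation on hypothetical two-variable series, not the paper's exact configuration.

```python
import numpy as np

def mdtw_distance(X, Y):
    """Dependent multivariate DTW: a single warping path for all variables,
    with Euclidean distance between multivariate samples as local cost.
    X and Y are (time steps, variables) arrays."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape[0], Y.shape[0]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(X[i - 1] - Y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical 2-variable series (precipitation, temperature) per station.
t = np.arange(24)
s1 = np.column_stack([np.cos(2 * np.pi * t / 12), np.sin(2 * np.pi * t / 12)])
s2 = np.roll(s1, 1, axis=0)   # same joint seasonal pattern, shifted one month
s3 = np.zeros_like(s1)        # featureless reference station
```

An "independent" alternative would instead sum univariate DTW distances per variable; either way, the pairwise distance matrix feeds the hierarchical clustering that produces the station groups mapped in Figures 11 and 12.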
Skamnia, E.; Bekri, E.S.; Economou, P. Unraveling Meteorological Dynamics: A Two-Level Clustering Algorithm for Time Series Pattern Recognition with Missing Data Handling. Stats 2025, 8, 36. https://doi.org/10.3390/stats8020036