
Just a Single-Layer CNN for Stochastic Modeling: A Discriminator-Free Approach

Institute for Environmental Research & Sustainable Development, National Observatory of Athens, 15236 Athens, Greece
Hydrology 2025, 12(7), 170; https://doi.org/10.3390/hydrology12070170
Submission received: 17 June 2025 / Accepted: 24 June 2025 / Published: 29 June 2025
(This article belongs to the Section Statistical Hydrology)

Abstract

The advent of machine learning (ML) has significantly transformed hydrology, particularly in the simulation of hydrological flows. However, ML techniques have not been employed to the same extent in stochastic hydrology. In applied sciences, the most common ML-based approach for developing stochastic simulation schemes is the use of generative adversarial networks (GANs), which consist of two sub-models, that is, a generator and a discriminator. Despite their potential, GANs have notable limitations, including high architectural complexity and the requirement to divide observed time series into shorter segments to generate sufficient training examples. This segmentation reduces the effective length of the series, limiting the model’s ability to capture and reproduce long-term dependencies. In this study, we propose a simpler stochastic scheme based on a single convolutional neural network (CNN) used as a generator, replacing the discriminator component of the GAN with a specifically designed cost function. The model is applied to a case study involving measured flow velocity time series and evaluated against traditional stochastic schemes designed for both Markovian and Hurst–Kolmogorov processes. Results show that the CNN-based approach not only offers computational simplicity but also outperforms conventional methods in preserving key statistical characteristics of the observed data.

1. Introduction

In recent years, machine learning (ML) has achieved remarkable milestones across a range of disciplines, often demonstrating capabilities that surpass human performance. Notable examples include AlphaGo, which defeated world champions in the complex game of Go; AlphaFold, which revolutionized protein structure prediction; and ResNet, which significantly advanced the field of computer vision [1]. Despite these groundbreaking achievements, the mathematical foundations of such models remain relatively simple. For instance, large language models (LLMs), like GPT-4, exhibit advanced reasoning and nuanced language understanding, yet they operate through a simple probabilistic token sampling process based on the distribution P(w_t | w_1, w_2, …, w_{t−1}), where w_t is the token at position t in the sequence [2]. At their core, machine learning models are fundamentally statistical in nature.
ML techniques began gaining significant traction in hydrology almost three decades ago, owing to their ability to model complex, nonlinear processes without requiring explicit physical formulations. For instance, artificial neural networks (ANNs) have long been employed for runoff discharge prediction (e.g., Ref. [3]). More recently, long short-term memory (LSTM) networks, a form of recurrent neural networks, have demonstrated excellent performance in capturing temporal dependencies in streamflow time series (e.g., Ref. [4]). In groundwater hydrology, support vector regression (SVR) has been applied for modeling groundwater levels [5], while random forest (RF) algorithms have shown promise in regional hydrological frequency analysis [6]. Additionally, convolutional neural networks (CNNs) have been used to estimate discharge in ungauged watersheds, utilizing satellite imagery as input features [7]. These examples highlight the growing role of ML in addressing the limitations of traditional, physically based hydrological models. However, one subfield where ML applications remain limited is stochastic hydrology.
Stochastic approaches play a pivotal role in applied sciences such as hydrology and economics, where uncertainty, variability, and incomplete information are inherent features of the systems studied. Both natural and anthropogenic processes exhibit intrinsic randomness that deterministic models alone cannot fully capture. In hydrology, stochastic models are essential for simulating rainfall, streamflow, and groundwater fluctuations—in the sense of generating equiprobable alternative realizations—enabling more realistic assessments of risk and variability under changing climate conditions [8]. Likewise, in economics, stochastic modeling forms the backbone of financial forecasting, market dynamics, and decision-making under uncertainty [9]. A fundamental element in both fields involves the analysis of time series, which provides valuable insight into temporal patterns, trends, cycles, and memory effects, including long-range dependence. Understanding and modeling these temporal structures is essential for developing robust predictions and management strategies.
Among machine learning algorithms, generative adversarial networks (GANs) are particularly well-suited to stochastic analysis due to their design for generative modeling. The generator aims to produce synthetic data that mimics real observations, while the discriminator learns to distinguish between real and synthetic data. Through this adversarial process, the generator progressively improves its ability to capture the underlying stochastic structure of the training distribution [10].
GANs are especially valuable in stochastic modeling because they do not rely on predefined assumptions about the form of the probability distribution. Instead, they learn complex, high-dimensional distributions directly from data [10]. This makes them well-suited for generating equiprobable synthetic time series, spatial fields, or future scenarios. Although initially applied mainly to image generation, the potential of GANs for hydrological applications was quickly recognized [11], whereas in the field of economics, a particularly influential study by Takahashi et al. [12] demonstrated that GANs could replicate the statistical properties of historical financial time series. Takahashi et al. assessed GANs’ ability to reproduce specific “stylized facts” of financial time series, including properties of temporal correlation (such as linear unpredictability and volatility clustering) and distributional characteristics (such as fat tails). GANs showed impressive performance in capturing and reproducing these features. Subsequently, GANs have been applied to hydrology for similar tasks, including weather generation [13], synthetic runoff series for multi-reservoir management [14], and stochastic simulation of delta formation [15].
GANs are particularly effective in time series applications because of their capacity to learn and replicate abstract features of stochastic processes, provided the generator and discriminator are properly balanced and trained on a sufficient number of examples [16,17]. In time series modeling, this typically involves segmenting historical records into multiple shorter samples. However, this segmentation can undermine the inference of long-term statistics, which are crucial in many hydrological and economic contexts. To address this limitation, Shaham et al. [18] proposed SinGAN, a scheme that can be trained on a single example. SinGAN employs a pyramid of GANs, where lower layers operate on small subregions of the training example, and higher layers progressively cover larger areas. This hierarchical structure allows the model to extract fine-scale statistics from localized subregions and coarse-scale statistics from the entire dataset.
Figure 1 presents a schematic of a typical GAN, illustrating its two key components, that is, the generator and the discriminator. Although the architecture is simpler than that used in other advanced ML applications, such as LLMs, it still entails a certain level of complexity. A standard GAN generator typically includes four transposed convolutional layers, while the discriminator consists of a similar number of convolutional layers [19]. When employing SinGAN, which is essential for preserving long-term statistics as previously discussed, the architectural complexity increases proportionally with the number of GAN layers used.
The complexity of GANs introduces certain requirements regarding the computational resources needed for network training, both in terms of capacity and time. Moreover, in applied hydrology, practitioners often face challenges in adopting sophisticated tools due to limited technical familiarity. To address this issue, Rozos et al. [20] proposed a simplified approach based on a multilayer perceptron (MLP) with fourteen input nodes, two hidden layers (each with two nodes), and a single output node. The simplicity of the network takes advantage of the annual periodicity encoded in the input data, while the cost function computes the deviation between the statistical characteristics of the synthetic and observed time series. It evaluates key properties across multiple scales (daily, annual, and higher), as well as wet/dry state frequencies and transition probabilities. Since the cost function lacks an analytical expression for its derivative with respect to the network’s output, a genetic algorithm was used for training, resulting in relatively long training times. Once trained, the MLP can be implemented in a standard spreadsheet environment. This approach is suitable for simulating a single intermittent stochastic process with periodicity, such as daily rainfall.
This study focuses on the stochastic modeling of multivariate continuous and regular (non-intermittent, non-periodic) processes. Such processes are prevalent in hydrological applications, where the generative adversarial network (GAN) framework represents a recent and significant advancement in stochastic modeling due to its ability to capture complex data distributions. Consider soil moisture, a crucial hydrological variable that regulates vegetation growth and environmental health. GANs have been successfully employed for generating spatiotemporal time series of soil moisture indices in the context of drought forecasting [21]. Similarly, the flow discharge of non-ephemeral rivers exemplifies another hydrological process within this category. For instance, a GAN incorporating mass conservation constraints has been utilized for forecasting extreme flood events [22], thereby supporting early warning systems. Furthermore, a variation of this approach has been applied to flood risk mapping for identifying vulnerable regions [23]. From a broader perspective, continuous and regular processes are ubiquitous in geophysical sciences. Characteristic examples related to natural hazards include seismic waves generated by earthquakes, sea surface temperature (linked to localized atmospheric hazards like hurricanes), and groundwater levels (a key controlling factor in landslide occurrences). Generative deep learning has demonstrated high efficiency and broad applicability across these diverse domains [24].
In this study, a novel ML-based approach suitable for multivariate non-intermittent, non-periodic processes is proposed. The conceptual design draws inspiration from the GAN framework but omits the discriminator component to both simplify the architecture and avoid the dilemma of either segmenting the data or employing more complex models like SinGAN. In the proposed model, a single convolutional layer replaces the GAN generator, operating on white noise inputs to produce synthetic data. Unlike the work of Rozos et al. [20], the cost function is designed to have analytical derivatives, allowing for efficient training via backpropagation. The simplicity of the architecture—centered on a single convolutional layer—makes it feasible to implement the trained model in practitioner-friendly platforms such as spreadsheets.
The proposed method was tested using flow velocity measurements from the Laboratory of Hydromechanics at the University of Siegen [25]. The magnitudes of the velocity vectors along the three spatial axes (x, y, z) were treated as a trivariate stochastic process. For benchmarking purposes, conventional stochastic models—a Markovian model and a Hurst–Kolmogorov model—were also applied to the same dataset.

2. Materials and Methods

2.1. Measurements

The dataset was obtained from flow measurements conducted in a hydraulic channel with a width of 8.5 cm and a length of 5 m. The flow depth during the experiment was maintained at 15.2 cm, and the discharge was set to 3.2 L/s. Flow velocity measurements were acquired using an acoustic Doppler velocimeter (ADV) equipped with a side-looking probe. The experimental setup is illustrated in Figure 2. Further details are provided in Ref. [25].

2.2. Second-Order Characteristics of Stochastic Process—The Climacogram

As previously mentioned, evaluating the deviation of statistical characteristics between synthetic and observed time series is central to the proposed machine learning scheme. These characteristics are described using statistical moments, which, for stationary stochastic processes, remain invariant over time. The most commonly used metrics are the mean, variance, and skewness, which correspond to the first three moments of the first-order distribution function. Higher-order classical moments, however, are considered unknowable in practice due to limitations in their estimation, as discussed in [26].
Solely preserving these first-order metrics is insufficient for most real-world processes. In such cases, second-order characteristics, such as the autocovariance function, must also be accurately represented. However, the use of the autocovariance function has recently been criticized due to its mathematical formulation—it is the second derivative of the climacogram, normalized by the variance—which can lead to misinterpretations of a process’s behavior [27].
Alternative second-order statistical metrics include the variogram, the power spectrum, and the climacogram, all of which are mathematically equivalent transformations. Despite this equivalence, they differ significantly in terms of practical application. Among these, the climacogram is gaining popularity due to its lower statistical bias and reduced uncertainty, especially when working with small datasets [27].
The climacogram, introduced by Koutsoyiannis [28], is conceptually straightforward. It shows how the variance of a dataset evolves as the data are aggregated over increasing time scales (e.g., using moving averages). By plotting the variance against the time scale on a log–log graph, the climacogram reveals the underlying statistical structure of the process. One of its key insights is persistence, which is quantified by the Hurst parameter H. The parameter H ranges from 0 to 1, with values greater than 0.5 indicating a persistent process. The slope of the climacogram at larger scales is equal to 2H − 2.
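To make the estimation concrete, the following minimal Python sketch (NumPy only; all names are illustrative and not from the paper) computes an empirical climacogram and recovers H from the log–log slope at large scales:

```python
import numpy as np

def climacogram(x, scales):
    """Variance of the time-averaged process at each aggregation scale."""
    x = np.asarray(x, dtype=float)
    gamma = []
    for k in scales:
        m = len(x) // k                        # number of complete windows
        block_means = x[:m * k].reshape(m, k).mean(axis=1)
        gamma.append(block_means.var())
    return np.array(gamma)

rng = np.random.default_rng(1)
x = rng.standard_normal(54_000)                # white noise: H should be ~0.5
scales = np.unique(np.logspace(0, 3, 30).astype(int))
g = climacogram(x, scales)

# Slope of the climacogram at large scales equals 2H - 2
slope, _ = np.polyfit(np.log(scales[-10:]), np.log(g[-10:]), 1)
H = 1 + slope / 2
```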
Figure 3 illustrates three examples of climacograms:
  • Figure 3a shows the climacogram of a random signal, exhibiting the typical slope of −1 and a Hurst parameter of 0.5. This slope is also characteristic of all Markovian processes, although their climacograms tend to flatten at small scales.
  • Figure 3b presents a random signal combined with a linear trend. At small scales, the slope is −1, as in a pure random signal, but it increases (tends toward zero) at larger scales as the deterministic trend dominates.
  • Figure 3c shows a random process combined with a sinusoidal signal with a period of 100 time units. Initially, the climacogram has a mild negative slope, which steepens as the time scale approaches the period of the sinusoidal signal, where variance drops sharply. Beyond that, the slope gradually returns to −1, with some oscillations due to the harmonics.
These examples demonstrate the diagnostic power of the climacogram in analyzing the structure of stochastic processes. For this reason, it is adopted here as the primary metric for assessing model performance with respect to second-order statistics. First-order characteristics will be evaluated using conventional metrics such as mean, variance, skewness, covariance, and histograms.

2.3. Markovian Model (AR1)

The first-order autoregressive scheme (AR1) is among the most widely used stochastic models for capturing the essential structure of temporal dependence in time-series data. Its simplicity and analytical tractability make it particularly attractive for modeling processes that exhibit short-term memory. In hydrology, multivariate extensions of AR1 have been employed to simulate and forecast multiple interrelated hydrological variables, with varying degrees of success, depending on system complexity and data quality [29]. The multivariate AR1 can be expressed as follows:
\[ \mathbf{V}_t = \mathbf{A}\,\mathbf{V}_{t-1} + \mathbf{B}\,\boldsymbol{\epsilon}_t \tag{1} \]
where t denotes the time index. In the case of the three variables, V_t is a 3 × 1 vector containing the values of the variables at time step t, and ε_t is a 3 × 1 vector of independent and identically distributed (i.i.d.) random variables. The matrix A is the 3 × 3 coefficient matrix representing the linear dependence between time steps, and B is the 3 × 3 innovation matrix that scales the noise components.
Note: For simplicity, all matrix dimensions hereinafter are presented assuming the specific case of the three variables used in this study. However, generalization to an arbitrary number of variables is straightforward.
The matrices A and B can be estimated directly from the observational data using the following formulas:
\[ \mathbf{A} = \mathrm{cov}(\mathbf{V}, \mathbf{V}_1)\,\mathrm{cov}(\mathbf{V})^{-1} \tag{2} \]
\[ \mathbf{B}\,\mathbf{B}^{\mathsf{T}} = \mathrm{cov}(\mathbf{V}) - \mathbf{A}\,\mathrm{cov}(\mathbf{V})\,\mathbf{A}^{\mathsf{T}} \tag{3} \]
where cov(·,·) denotes the sample cross-covariance matrix between its arguments, and cov(·) denotes the sample covariance matrix. The matrix V is constructed such that each column t contains the values of the three variables at time step t, while V_1 is obtained by a horizontal circular shift of V, meaning that column t of V corresponds to column t + 1 of V_1.
Note that in this case study, the matrix V has three rows, corresponding to each component of the flow velocity vector in the x-, y-, and z-directions. The covariance matrix cov(V) is, therefore, a 3 × 3 matrix, with the covariances between V_x, V_y, and V_z as off-diagonal elements, and the corresponding variances on the diagonal.
The right-hand sides of Equations (2) and (3) can be directly computed from the observed data. If the resulting matrix on the right-hand side of Equation (3) is positive definite, the innovation matrix B can be estimated by a straightforward matrix decomposition of B Bᵀ. In this study, eigendecomposition was used for this purpose [30]. In cases where the matrix is not positive definite, more advanced decomposition techniques are required [31].
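To illustrate Equations (1)–(3), here is a minimal Python sketch (NumPy; illustrative names; a plain lagged product is used in place of the circular shift described above, which is a negligible difference for long series):

```python
import numpy as np

def fit_ar1(V):
    """Estimate A and B of Eq. (1) from a (3 x N) matrix of observations."""
    V = V - V.mean(axis=1, keepdims=True)      # work with centered data
    N = V.shape[1]
    C0 = V @ V.T / N                           # cov(V)
    C1 = V[:, 1:] @ V[:, :-1].T / (N - 1)      # cov(V, V_1)
    A = C1 @ np.linalg.inv(C0)                 # Eq. (2)
    BBt = C0 - A @ C0 @ A.T                    # Eq. (3)
    w, P = np.linalg.eigh(BBt)                 # eigendecomposition of B B^T
    B = P @ np.diag(np.sqrt(np.clip(w, 0.0, None)))
    return A, B

def generate_ar1(A, B, eps):
    """Generate a (zero-mean) synthetic series from (3 x N) innovations."""
    V = np.zeros_like(eps)
    for t in range(1, eps.shape[1]):
        V[:, t] = A @ V[:, t - 1] + B @ eps[:, t]
    return V
```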
To ensure that the skewness of the original process is preserved, the statistical estimator μ₃(ε)—a 3 × 1 vector representing the third central moments of the i.i.d. terms—should satisfy the following relation [30]:
\[ \boldsymbol{\mu}_3(\boldsymbol{\epsilon}) = \left(\mathbf{B}^{\circ 3}\right)^{-1} \left( \boldsymbol{\mu}_3(\mathbf{V}) - \boldsymbol{\mu}_3(\mathbf{A}\,\mathbf{V}) \right) \tag{4} \]
Note that the superscript "∘3" (Hadamard notation) explicitly states that the exponentiation is element-wise.
For the generation of ε, a random number generator based on the Generalized Pareto distribution was used to produce three time series (one for each row of ε) with zero mean, unit standard deviation, and a third central moment equal to μ₃(ε).
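The Generalized Pareto innovations can be obtained by moment matching, as in the sketch below (SciPy's genpareto; solving for the shape parameter with a root finder is my assumption, not necessarily the paper's exact procedure). Samples are standardized to zero mean and unit standard deviation, so the third central moment coincides with the skewness:

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

def gp_shape_for_skew(skew_target):
    """Shape parameter c (scipy convention) matching a target |skewness|."""
    # Skewness of the Generalized Pareto distribution, valid for c < 1/3
    skew = lambda c: 2.0 * (1.0 + c) * np.sqrt(1.0 - 2.0 * c) / (1.0 - 3.0 * c)
    return brentq(lambda c: skew(c) - abs(skew_target), -0.999, 1/3 - 1e-6)

def gp_noise(n, skew_target, seed=None):
    """i.i.d. terms with zero mean, unit std, and given third moment."""
    rng = np.random.default_rng(seed)
    c = gp_shape_for_skew(skew_target)
    x = stats.genpareto.rvs(c, size=n, random_state=rng)
    x = (x - stats.genpareto.mean(c)) / stats.genpareto.std(c)
    return -x if skew_target < 0 else x        # mirror for negative skewness
```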
A schematic representation of the AR1 algorithm is shown in Figure 4.
AR1 is most appropriate for modeling Markovian processes, where the future state depends only on the present state (although other unknown influences may exist). A characteristic feature of such processes is that their autocovariance function decays exponentially with increasing lag.

2.4. Hurst–Kolmogorov Model (HK)

Physical processes that exhibit substantial persistence—manifested as prolonged periods of similar values—are more accurately described by Hurst–Kolmogorov (HK) stochastic processes. These processes differ from Markovian ones in that they feature long-range dependence, which significantly affects their statistical behavior and modeling approach. For a single process, such as V_x, a moving average (MA) scheme can be employed to generate synthetic time series:
\[ V_{x,t} = \sum_{j=-J}^{J} a_j\,\epsilon_{t-j} \tag{5} \]
where a_j are coefficients calculated by the following formula [27]:
\[ a_j = \sqrt{\frac{2(1-H)\,\mathrm{var}(V_x)}{(1.5-H)^2}} \left( 0.5\,|j+1|^{H+0.5} + 0.5\,|j-1|^{H+0.5} - |j|^{H+0.5} \right) \tag{6} \]
The previous moving average scheme is a univariate stochastic model. For multivariate processes, Equation (5) needs to be applied to each component individually, using modified i.i.d. terms ε that incorporate the covariance structure of the multivariate system. These terms are derived from the following transformation [32]:
\[ \boldsymbol{\epsilon}_t = \mathbf{b}\,\boldsymbol{\epsilon}'_t \tag{7} \]
where the 3 × 3 matrix b is given by the following formula:
\[ \mathbf{b}\,\mathbf{b}^{\mathsf{T}} = \mathbf{c} \tag{8} \]
where c is a 3 × 3 matrix whose elements are derived from the normalized covariance values of the original variables. Specifically, each element of c corresponds to the covariance between a pair of variables, divided by the sum of the products of their respective MA weights a_j. For instance, if c₂₁ represents the covariance between V_x and V_y, it is calculated using the following formula [32]:
\[ c_{21} = \mathrm{cov}(V_x, V_y) \Big/ \sum_{j=-J}^{J} a_{x,j}\,a_{y,j} \tag{9} \]
Then, the matrix b is computed from b bᵀ using the eigendecomposition method.
If the skewness of the process is significant, then this characteristic should be preserved by the stochastic scheme. For the trivariate case study, the skewnesses of the three-row matrix ε are represented by the vector ξ_ε. The first element of this vector, corresponding to the x-direction, is related to the estimated skewness of V_x according to the following formula (the formulas for the other two dimensions are similar):
\[ \xi_{\epsilon,x} = \xi_{V_x}\,\mathrm{var}(V_x)^{3/2} \Big/ \sum_{j=-J}^{J} a_{x,j}^{3} \tag{10} \]
The skewness of ε′ can be calculated from the skewness of ε with the following formula [32]:
\[ \boldsymbol{\xi}_{\epsilon'} = \left(\mathbf{b}^{\circ 3}\right)^{-1} \boldsymbol{\xi}_{\epsilon} \tag{11} \]
Three time series of i.i.d. terms are produced with a random number generator that follows the Generalized Pareto distribution, with a mean equal to zero, a standard deviation equal to 1, and skewness ξ_ε′. These random numbers are organized into the three-row matrix ε′. Then, by applying Equation (7), these time series are transformed into time series of i.i.d. terms organized into the three-row matrix ε. The time series of ε have zero self-correlation in time but are correlated with each other. The moving average scheme of Equation (5) is applied independently to each of the time series of ε to produce synthetic values of the simulated variables.
A schematic representation of the algorithm of the HK scheme is displayed in Figure 5.
This scheme can preserve the autocovariance function up to a lag equal to J (see Figure 3 in Ref. [32]).
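For the univariate case, Equations (5) and (6) translate almost line by line into Python (NumPy; illustrative names; edge handling via a 'valid' convolution is an implementation choice):

```python
import numpy as np

def sma_weights(H, var_x, J):
    """Moving-average weights a_j of Eq. (6) for an HK process."""
    j = np.arange(-J, J + 1)
    scale = np.sqrt(2.0 * (1.0 - H) * var_x / (1.5 - H) ** 2)
    return scale * (0.5 * np.abs(j + 1) ** (H + 0.5)
                    + 0.5 * np.abs(j - 1) ** (H + 0.5)
                    - np.abs(j) ** (H + 0.5))

def generate_hk(H, var_x, J, eps):
    """Apply the moving average of Eq. (5); pass len(eps) = n + 2J
    i.i.d. terms to obtain n synthetic values."""
    a = sma_weights(H, var_x, J)
    return np.convolve(eps, a, mode="valid")
```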

2.5. Autoregressive Integrated Moving Average

The combination of autoregressive (AR) and moving average (MA) schemes yields the versatile autoregressive moving average (ARMA) stochastic model. The autoregressive integrated moving average (ARIMA) model extends ARMA by incorporating a differencing operator, enabling it to accommodate non-stationary data characterized by trends or varying means. Further generalizing ARIMA, autoregressive fractional integrated moving average (ARFIMA) models introduce fractional differencing (non-integer orders of integration). This feature provides a natural framework for modeling long-memory processes, in which the autocorrelation function exhibits a slow, hyperbolic decay.
ARFIMA models are defined by the triplet of parameters (p, d, q), where p and q denote the orders of the AR and MA components, respectively, and d represents the degree of differencing. The value of d is contingent upon the nature of the stochastic process being modeled. Specifically, if d ∈ (−0.5, 0.5), the ARFIMA model can effectively simulate a stationary process [33].
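For intuition, an ARFIMA(0, d, 0) series can be generated by expanding the fractional integration operator (1 − B)^(−d) into a truncated moving average, whose persistence corresponds to H = d + 0.5. A sketch follows (NumPy; the truncation lag K is an arbitrary illustrative choice, and this is only an approximation of exact simulation):

```python
import numpy as np

def frac_ma_weights(d, K):
    """MA weights of (1 - B)^(-d), truncated at lag K (binomial expansion)."""
    psi = np.ones(K + 1)
    for k in range(1, K + 1):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    return psi

def arfima_0d0(d, n, K=2000, seed=0):
    """Approximate ARFIMA(0, d, 0) series of length n."""
    eps = np.random.default_rng(seed).standard_normal(n + K)
    return np.convolve(eps, frac_ma_weights(d, K), mode="valid")[:n]
```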
In the present study, the ARFIMA model, implemented in MATLAB R2021b, was employed. This implementation was developed by building upon the contributions of Fatichi, Caballero, and Inzelt. A comparative analysis by Liu et al. [34] evaluated this MATLAB-based tool against an ARFIMA implementation in R, demonstrating virtually identical simulation results across all test case studies. Given its univariate nature, this specific ARFIMA model cannot be comprehensively compared with the other models examined in this study. Instead, it serves as an initial exploration to estimate the maximum efficiencies attainable with conventional time series models. For this investigation, the orders p and q were set to 2, while the degree of differencing d was estimated by the model itself.

2.6. Machine Learning (CNN)

The topology of the CNN model is shown in Figure 6. Typically, the input of a CNN is a structured data grid, most often images represented as multi-dimensional arrays (e.g., RGB images as height × width × 3 tensors). In this study, each input instance is a one-dimensional array of size 1 × 54,000, where each of the 54,000 values is drawn from a Generalized Pareto distribution. The instance size matches the length of the observed time series (54,000 values). Consequently, the total length of synthetic values generated by the CNN is 54,000n, where n is the number of instances. The hidden layer is a convolutional layer with a kernel of 3 channels (dimensions: 1 × 1350 × 3). Each channel produces the synthetic values for one of the three stochastic variables simulated (i.e., the three components of flow velocity along the primary axis). No activation function is applied.
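Since no activation function is applied, the generator's forward pass reduces to a convolution of each noise instance with the three kernel channels plus a bias. The NumPy sketch below illustrates this (padding behavior and all names are my assumptions; the Cortexsys implementation may differ):

```python
import numpy as np

def cnn_generate(kernel, bias, eps):
    """Forward pass of the single convolutional layer.

    kernel : (3, 1350) array, one row per output channel/variable
    bias   : (3,) array
    eps    : (n, 54_000) array of i.i.d. inputs, one instance per row
    Returns a (3, n * 54_000) array of synthetic values.
    """
    out = []
    for e in eps:                               # process each instance
        channels = [np.convolve(e, k, mode="same") + b
                    for k, b in zip(kernel, bias)]
        out.append(np.stack(channels))          # (3, 54_000)
    return np.concatenate(out, axis=1)
```

Because the layer is linear, the scheme is formally a moving average driven by heavy-tailed noise, which is the equivalence discussed in Section 4.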
The input features (i.i.d. values) have a mean of 0 and a standard deviation of 1. This does not affect the CNN’s ability to reproduce the mean and standard deviation of the observed time series; the mean of the output is primarily influenced by the bias terms, while the standard deviation is determined by the scale of the kernel weights. In contrast, skewness is more implicitly influenced by the CNN weights, as inferred from Equation (10).
To ensure the skewness of the output matches that of the observed time series, the skewness of the features imposes a constraint on the kernel weights via Equation (10). In practice, setting the feature skewness to 4–5 times the largest observed absolute skewness minimizes the restrictive influence of this constraint.
The training methodology follows the approach used by Rozos et al. [20]. The cost function compares outputs to observed data not by direct distance, but by differences in statistical metrics. It is defined as follows:
\[ C(\mathbf{y}) = W_1\,D\!\left[(\mathbf{C}^{S} - \mathbf{C}^{O})^{\circ 2}\right] + W_2\,D\!\left[(\mathbf{C}_1^{S} - \mathbf{C}_1^{O})^{\circ 2}\right] + W_3\,D\!\left[(\boldsymbol{\gamma}_T^{S} - \boldsymbol{\gamma}_T^{O})^{\circ 2}\right] + W_4\,D\!\left[(\boldsymbol{\xi}^{S} - \boldsymbol{\xi}^{O})^{\circ 2}\right] + W_5\,D\!\left[(\boldsymbol{\mu}^{S} - \boldsymbol{\mu}^{O})^{\circ 2}\right] \tag{12} \]
where y is the CNN output; W₁ to W₅ are cost function weights; C^S and C^O (3 × 3 matrices) are the covariance matrices of the simulated and observed time series; C₁^S and C₁^O are the corresponding lag-1 cross-covariance matrices; γ_T^S and γ_T^O (3 × 1 vectors) are the variances at scale T; ξ^S and ξ^O are the skewness vectors; and μ^S and μ^O are the mean value vectors. The function D returns the average of all elements in its argument (whether a vector or matrix), and the squares are applied element-wise.
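A direct NumPy/SciPy transcription of Equation (12) is given below (D is the element average; the aggregation scale T, the sample statistics used, and all names are illustrative):

```python
import numpy as np
from scipy.stats import skew

def D(x):
    """Average of all elements of a vector or matrix."""
    return float(np.mean(x))

def cost(y, obs, W, T=100):
    """Eq. (12): weighted squared deviations of statistics; y, obs: (3, N)."""
    def lag1(v):                                # lag-1 cross-covariance matrix
        a = v - v.mean(axis=1, keepdims=True)
        return a[:, 1:] @ a[:, :-1].T / (v.shape[1] - 1)

    def gamma_T(v):                             # variance at aggregation scale T
        m = v.shape[1] // T
        return v[:, :m * T].reshape(3, m, T).mean(axis=2).var(axis=1)

    terms = [D((np.cov(y) - np.cov(obs)) ** 2),
             D((lag1(y) - lag1(obs)) ** 2),
             D((gamma_T(y) - gamma_T(obs)) ** 2),
             D((skew(y, axis=1) - skew(obs, axis=1)) ** 2),
             D((y.mean(axis=1) - obs.mean(axis=1)) ** 2)]
    return float(np.dot(W, terms))
```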
In Rozos et al. [20], the derivative of the cost function with respect to the network output was unavailable analytically, which prevented the use of backpropagation. As a result, an evolutionary algorithm (a genetic algorithm) was employed, although with slow convergence. In this study, the cost function is differentiable (see Appendix A), allowing for efficient training via the backpropagation method [35].
The weights W 1 to W 5 serve dual purposes, that is, they balance the relative importance of each statistical metric in the cost function and also modulate the learning rate. To prevent exploding gradients, gradient values are rescaled when they exceed a predefined threshold [36]. The training uses the ADADELTA optimization algorithm [37], which dynamically adjusts the learning rate over time and is robust to noisy data.
Each instance is treated as a separate batch during training. No dropout is used, enabling the network to directly learn from the data without introducing stochastic regularization that could hinder the model’s ability to capture dependencies.
As noted earlier, the number of instances, n, defines the length of the synthetic time series. This number can be kept low during training to reduce computational demand and then increased afterward for generation. In this study, n was set to 20 during training and raised to 100 for application.
The CNN scheme was implemented using the Cortexsys framework [38], which is compatible with both MATLAB® and GNU Octave.

3. Results

This section presents a comparative analysis of the synthetic time series generated by the stochastic schemes—AR1, HK, ARFIMA, and CNN—against the observed data. For the CNN scheme, the results are based on synthetic values produced using a new set of input features (i.e., independently and identically distributed values), distinct from those used during training. This ensures that the evaluation reflects the model’s generalization ability rather than overfitting.
The covariance matrices of the observed and synthetic time series generated by the stochastic schemes AR1, HK, and CNN are shown in Table 1. These matrices are symmetric and positive definite, with their diagonals representing the variances of each variable. The covariances of the synthetic time series produced by all three schemes were very close to the observed values.
The lag-1 cross-covariance matrices are presented in Table 2. These matrices are positive definite but not necessarily symmetric; the elements on their diagonals represent the lag-1 autocovariances of each variable. The matrix generated by the HK scheme was symmetric and deviated from the observed, indicating limitations in capturing directionality in lagged dependencies. In contrast, the matrices from the AR1 and CNN schemes closely matched the observed matrix.
Table 3 presents the skewness of the observed and synthetic time series. All three schemes demonstrated satisfactory performance in replicating skewness.
Table 4 presents the Kullback–Leibler (KL) divergence of the synthetic time series and the fitting error (FE) between the climacograms of the synthetic and observed time series. The KL divergence measures the similarity between the probability distributions of the synthetic and observed time series, a concept rooted in information theory [39]. The fitting error measures the distance between the climacograms employing a weighted sum of the squared differences in log–log space (see Equation (15) in Ref. [40]). The CNN scheme presented markedly lower deviations in both metrics.
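For reference, the KL divergence can be estimated from empirical histograms, as in the sketch below (the bin count and the exclusion of empty bins are my assumptions; the paper does not specify its exact computation):

```python
import numpy as np

def kl_from_histograms(obs, syn, bins=100):
    """Estimate D_KL(P_obs || P_syn) from histogram densities."""
    lo = min(obs.min(), syn.min())
    hi = max(obs.max(), syn.max())
    p, edges = np.histogram(obs, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(syn, bins=edges, density=True)
    w = np.diff(edges)                          # bin widths
    m = (p > 0) & (q > 0)                       # skip empty bins
    return float(np.sum(w[m] * p[m] * np.log(p[m] / q[m])))
```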
Figure 7, Figure 8 and Figure 9 compare histograms of the synthetic time series to those of the observed data. The y-axis is logarithmic; hence, longer downward bars correspond to lower frequencies. According to these figures, although all schemes reproduce the empirical distributions reasonably well, CNN presents a clear advantage regarding the extreme values.
Figure 10 shows the climacogram of V x . The reference climacogram initially exhibits a mild slope, which becomes steeper at larger scales. The AR1 scheme matches the reference at smaller scales but diverges at larger scales. The HK scheme shows a consistent deviation across all scales. The CNN scheme fits the reference climacogram closely across scales up to approximately 1000 time units.
Figure 11 and Figure 12 show the climacograms of V y and V z , respectively. In both cases, the reference climacograms follow a mild–steep–mild pattern across increasing scales. The AR1 scheme matches well at small scales but diverges significantly at larger ones. The HK scheme exhibits a mismatch at intermediate scales. In contrast, the CNN scheme replicates the reference climacograms well across all scales up to approximately 1000 time units.

4. Discussion

The results of the simulations suggest that all stochastic schemes performed generally well with respect to first-order statistics (mean, variance, and skewness). However, each scheme exhibited at least one shortcoming regarding second-order statistics (covariance, cross-covariance, and climacogram), which can be attributed to the theoretical foundations of the respective methods.
  • Autoregressive order 1 (AR1): This scheme is well-suited for modeling Markovian processes, which exhibit scale-invariant variance at small scales, a feature manifested as a horizontal trend on the left side of their climacogram. This behavior aligns with the physical constraint of finite energy in natural systems, explaining AR1's good performance at smaller scales. At larger scales, however, the variance of Markovian processes decays similarly to white noise, producing a climacogram slope of approximately −1. This fixed slope limits AR1's capacity to capture long-range dependence or persistence in the data. Nevertheless, AR1 performs particularly well in preserving lag-1 cross-covariances. In general, autoregressive models preserve cross-covariances up to a lag equal to their order, as dictated by their mathematical formulation (see Equation (1)).
  • Hurst–Kolmogorov (HK): This scheme models processes where the variance scales as a power law, resulting in a linear climacogram. However, such scaling implies an unrealistic increase in variance toward smaller scales, which would necessitate infinite energy (physically implausible for natural systems). As a result, the HK scheme may only align with the reference climacogram at specific ranges, that is, at lower scales (e.g., Figure 11b and Figure 12b), at higher scales, or it may exhibit a constant offset across all scales (e.g., Figure 10b). Additionally, the moving average weights in the HK scheme (derived from Equation (6)) are symmetric about the central weight, leading to symmetric cross-covariances—an assumption not generally met in empirical data.
  • Convolutional neural network (CNN): Unlike the other two schemes, the CNN scheme is not based on explicit theoretical assumptions. Although it is mathematically equivalent to a moving average (MA) scheme, its weights are estimated heuristically rather than derived analytically, offering greater flexibility. A single convolutional layer is functionally equivalent to an MA scheme and can preserve persistence up to a time scale equal to the product of the time step dt and the number of weights used (i.e., 2J dt in Equation (5)). In the present study, 1350 weights were used, implying that the CNN can preserve persistence up to 1350 time units. This is corroborated by Figure 10c, Figure 11c, and Figure 12c, which show excellent agreement between the CNN climacogram and the observed reference up to this scale.
Evidently, the CNN scheme outperformed the other two approaches in representing second-order statistics. In this case study, the matrices on the right-hand side of Equations (3) and (8) were positive definite, which made it possible to apply Equations (4) and (11) to compute the third moment of the white noise (i.i.d. terms) used in the AR1 and HK schemes. However, this favorable condition is not guaranteed in all applications. If the matrices are not positive definite, more advanced methods, such as solving an optimization problem, are required for the AR1 and HK schemes to preserve the observed skewness [31].
The synthetic time series generated by the AR1 scheme produced a realistic climacogram at smaller scales, but its fixed slope of −1 at larger scales limits its suitability for processes with long-range dependence. The HK scheme used here is capable of modeling climacograms with arbitrary, but constant, slopes across all scales. More sophisticated variants, such as the filtered Hurst–Kolmogorov model, can yield climacograms with more realistic behavior at both low and high scales, although deriving the moving average weights in such models is considerably more complex than directly applying Equation (6). The CNN-based scheme, by contrast, can reproduce the shape of any climacogram up to a scale corresponding to the kernel size (expressed in time units, where each unit equals the time step of the series). In the present case study, it is noteworthy that CNN was the only scheme that successfully replicated the climacogram of the observed V_y component. The form of this climacogram suggests the presence of a trend in the original data, the origin of which remains unclear; it may result from the pump motor in the hydraulic experiment setup or from artifacts introduced by the ADV's electronics. Regardless of the source, CNN managed to reproduce this feature accurately.
A disadvantage of CNN-based schemes is the computational cost associated with training. In this case, training required approximately 30 min on a 2.5 GHz dual-core CPU. This duration can be significantly reduced by leveraging multi-core architectures, and even more so by using a GPU instead of a CPU.

5. Conclusions

This study introduced a novel machine learning-based stochastic simulation scheme utilizing a single convolutional neural network (CNN) layer as a generator. In contrast to the commonly used generative adversarial networks (GANs), which require a more complex generator–discriminator framework, the proposed approach simplifies the architecture by employing a custom-designed cost function. This function directly compares essential statistical properties between observed and synthetic time series, thereby removing the need for a discriminator.
The effectiveness of the CNN-based scheme was evaluated against conventional stochastic methods designed for Markovian and Hurst–Kolmogorov processes. The results demonstrated that the CNN-based approach outperforms these traditional methods in reproducing second-order statistics. Notably, it preserved variance across temporal scales and successfully captured the effects of persistence, both critical for producing realistic long-term simulations.
Furthermore, by establishing the formal equivalence between the CNN-based generator and classical moving average schemes, this study highlighted both the theoretical implications and certain limitations of the method. A key advantage of the proposed scheme lies in its simplicity and practical applicability—it can be implemented in widely accessible tools such as standard spreadsheet software. This makes it a valuable option for engineers and practitioners involved in design, planning, and decision-making processes.

Funding

This research received no external funding.

Data Availability Statement

The original data presented in this study are openly available at https://hydronoa.gr/hydronoa/DAAD-Scholarship.html (accessed on 25 June 2025).

Acknowledgments

The author wishes to thank the Deutscher Akademischer Austauschdienst (DAAD) for covering the travel costs associated with the completion of this study.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ADV      Acoustic Doppler velocimetry
ANN      Artificial neural network
AR1      First-order autoregressive
ARFIMA   Autoregressive fractional integrated moving average
ARIMA    Autoregressive integrated moving average
ARMA     Autoregressive moving average
CNN      Convolutional neural network
GAN      Generative adversarial network
HK       Hurst–Kolmogorov
i.i.d.   Independent and identically distributed
LLM      Large language model
LSTM     Long short-term memory
MA       Moving average
ML       Machine learning
MLP      Multi-layer perceptron
PDF      Probability density function
RF       Random forest
SVR      Support vector regression

Appendix A

The derivative of the cost function (see Equation (12)) with respect to the network output y_{i,j}, where i is the output node of the network and j is the time index of the output time series, is given by the following equation:
\[ \frac{\partial C}{\partial y_{i,j}} = \frac{\partial}{\partial y_{i,j}} \left( W_1\,D\!\left[(\mathbf{C}^{S} - \mathbf{C}^{O})^{\circ 2}\right] + W_2\,D\!\left[(\mathbf{C}_1^{S} - \mathbf{C}_1^{O})^{\circ 2}\right] + W_3\,D\!\left[(\boldsymbol{\gamma}_T^{S} - \boldsymbol{\gamma}_T^{O})^{\circ 2}\right] + W_4\,D\!\left[(\boldsymbol{\xi}^{S} - \boldsymbol{\xi}^{O})^{\circ 2}\right] + W_5\,D\!\left[(\boldsymbol{\mu}^{S} - \boldsymbol{\mu}^{O})^{\circ 2}\right] \right) \tag{A1} \]
The previous algebraic formula can be easily calculated given the analytic formula of the partial derivative ∂/∂y_{i,j} of each statistical metric included in the cost function.
The partial derivative with respect to y_{1,j} of the deviation of the covariance of the output of nodes 1 and 2 from the covariance of the corresponding observations is provided by the following equation:
\[ \frac{\partial \left( \mathrm{cov}(y_1, y_2) - \mathrm{cov}^{\mathrm{Obs}} \right)^2}{\partial y_{1,j}} = 2 \left( \mathrm{cov}(y_1, y_2) - \mathrm{cov}^{\mathrm{Obs}} \right) \frac{\partial}{\partial y_{1,j}} \left[ \frac{1}{N} \sum_{k=1}^{N} \left( y_{1,k} - \mu(y_1) \right) \left( y_{2,k} - \mu(y_2) \right) \right] = \frac{2}{N} \left( \mathrm{cov}(y_1, y_2) - \mathrm{cov}^{\mathrm{Obs}} \right) \left( y_{2,j} - \mu(y_2) \right) \tag{A2} \]
where N is the total number of synthetic values.
Based on the previous, the derivatives of the deviation of the other covariance values, the variance, and the lag-1 cross-covariance can be easily obtained.
The derivative with respect to y_{1,j} of the deviation between the climacogram of the synthetic data at scale T and that of the observed data is given for the output of node 1:
\[ \frac{\partial \left( \gamma_T(y_1) - \gamma_T^{\mathrm{Obs}} \right)^2}{\partial y_{1,j}} = 2 \left( \gamma_T(y_1) - \gamma_T^{\mathrm{Obs}} \right) \frac{\partial}{\partial y_{1,j}} \left[ \frac{T}{N} \sum_{k=1}^{N/T} \left( \frac{1}{T} \sum_{t=1}^{T} y_{1,(k-1)T+t} - \mu(y_1) \right)^{2} \right] = 2 \left( \gamma_T(y_1) - \gamma_T^{\mathrm{Obs}} \right) \frac{2}{N} \left( \frac{1}{T} \sum_{t=1}^{T} y_{1,(k_j-1)T+t} - \mu(y_1) \right) \tag{A3} \]
where k_j is the window of T values that contains y_{1,j}.
The derivative of the deviation of the third moment of the synthetic data from the observed data is given for the output of node 1 with respect to y_{1,j}:
\[ \frac{\partial \left( \mu_3(y_1) - \mu_3^{\mathrm{Obs}} \right)^2}{\partial y_{1,j}} = 2 \left( \mu_3(y_1) - \mu_3^{\mathrm{Obs}} \right) \frac{\partial}{\partial y_{1,j}} \left[ \frac{1}{N} \sum_{k=1}^{N} \left( y_{1,k} - \mu(y_1) \right)^{3} \right] = \frac{6}{N} \left( \mu_3(y_1) - \mu_3^{\mathrm{Obs}} \right) \left[ \left( y_{1,j} - \mu(y_1) \right)^{2} - \mathrm{var}(y_1) \right] \tag{A4} \]
And finally, the derivative of the deviation of the mean of the synthetic data from the observed data is given for the output of node 1 with respect to y_{1,j}:
\[ \frac{\partial \left( \mu(y_1) - \mu^{\mathrm{Obs}} \right)^2}{\partial y_{1,j}} = 2 \left( \mu(y_1) - \mu^{\mathrm{Obs}} \right) \frac{\partial \mu(y_1)}{\partial y_{1,j}} = \frac{2}{N} \left( \mu(y_1) - \mu^{\mathrm{Obs}} \right) \tag{A5} \]
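The analytic expressions above can be checked numerically; the sketch below verifies Equation (A2) against a central finite difference (NumPy; illustrative names):

```python
import numpy as np

def cov_term_grad(y1, y2, cov_obs):
    """Analytic gradient of (cov(y1, y2) - cov_obs)^2 w.r.t. y1, Eq. (A2)."""
    N = len(y1)
    c = np.mean((y1 - y1.mean()) * (y2 - y2.mean()))
    return 2.0 / N * (c - cov_obs) * (y2 - y2.mean())

rng = np.random.default_rng(0)
y1, y2, cov_obs = rng.normal(size=200), rng.normal(size=200), 0.3
f = lambda y: (np.mean((y - y.mean()) * (y2 - y2.mean())) - cov_obs) ** 2

g = cov_term_grad(y1, y2, cov_obs)
j, h = 7, 1e-6
yp, ym = y1.copy(), y1.copy()
yp[j] += h; ym[j] -= h
assert np.isclose(g[j], (f(yp) - f(ym)) / (2 * h), rtol=1e-4)
```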

Appendix B

This section provides the results of a second case study as an additional test of the suggested stochastic scheme. The data were obtained from the same experimental apparatus displayed in Figure 2, but with a different setup. A sluice was employed to generate a hydraulic jump with upstream and downstream depths of 21 and 9.5 cm, respectively. Despite the discharge being similar to that of the first experiment, the hydraulic jump introduced a large variation in the velocity magnitudes along all directions, resulting in considerably different statistical characteristics compared to the dataset of the first case study. The results displayed in the following tables suggest that the stochastic model based on the CNN approach outperformed the classical stochastic models.
Table A1 and Table A2 display the covariance matrices and the lag-1 cross-covariance matrices of the observed and synthetic time series generated by the three stochastic schemes (AR1, HK, and CNN).
Table A1. Covariance matrix of the observed and synthetic time series.

            Obs.                        AR1                         HK                          CNN
        V_x    V_y     V_z        V_x    V_y     V_z        V_x    V_y     V_z        V_x    V_y     V_z
V_x    432.5  −2.52  −149.4      435.8  −3.04  −149.4      433.7  −2.62  −149.8      429.6  −2.50  −144.8
V_y    −2.52  182.0   −7.88      −3.04  182.2   −7.70      −2.20  181.7   −8.06      −2.50  182.1   −7.88
V_z   −149.4  −7.88   270.1     −149.4  −7.69   270.5     −149.6  −7.79   270.7     −144.8  −7.88   269.7
Table A2. Lag-1 cross-covariance matrix of the observed and synthetic time series.

            Obs.                        AR1                         HK                          CNN
        V_x    V_y     V_z        V_x    V_y     V_z        V_x    V_y     V_z         V_x     V_y     V_z
V_x    376.8  15.91  −132.4      379.7  15.51  −132.3      83.55  −1.12  −17.14      346.98  14.50  −127.8
V_y    −18.6  92.4    −4.55      −19.0  92.7    −4.31      −0.33  −3.21   −0.39      −17.91  92.32   −4.47
V_z   −135.0  −9.43   206.9     −134.9  −9.1    207.3      −17.3   0.65   12.6      −131.1   −9.4    204.3
Table A3 displays the skewness of the observed and synthetic time series generated by the three stochastic schemes (AR1, HK, and CNN). Table A4 presents the Kullback–Leibler (KL) divergence between the PDF of the synthetic and observed time series, and the fitting error (FE) between climacograms of the synthetic and observed time series.
Table A3. Skewness of the observed and synthetic time series.

          Obs.                      AR1                       HK                        CNN
    V_x    V_y     V_z       V_x    V_y     V_z       V_x    V_y     V_z       V_x    V_y     V_z
   0.499  0.214  −0.323     0.624  0.225  −0.339     0.501  0.212  −0.321     0.466  0.214  −0.320
Table A4. Kullback–Leibler (KL) divergence between the PDFs of the synthetic and observed time series, and fitting error (FE) between the climacograms of the synthetic and observed time series.

             AR1                                ARFIMA                                HK                                   CNN
        V_x      V_y       V_z          V_x        V_y        V_z          V_x        V_y        V_z          V_x         V_y        V_z
KL     0.125    6.060     0.732        2.582      0.676      0.895        3.269      45.78      6.537        1.013       0.257      0.148
FE   8.74×10⁻⁴ 12.67×10⁻⁴ 11.67×10⁻⁴  28.987×10⁻⁴ 4.634×10⁻⁴ 2.220×10⁻⁴  14.86×10⁻²  7.84×10⁻²  17.49×10⁻²  25.877×10⁻⁴  0.866×10⁻⁴  1.848×10⁻⁴

References

  1. Miao, Q.; Zheng, W.; Lv, Y.; Huang, M.; Ding, W.; Wang, F.Y. DAO to HANOI via DeSci: AI paradigm shifts from AlphaGo to ChatGPT. IEEE/CAA J. Autom. Sin. 2023, 10, 877–897.
  2. Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I. Language models are unsupervised multitask learners. OpenAI Blog 2019, 1, 9.
  3. Minns, A.; Hall, M. Artificial neural networks as rainfall-runoff models. Hydrol. Sci. J. 1996, 41, 399–417.
  4. Ayzel, G.; Kurochkina, L.; Abramov, D.; Zhuravlev, S. Development of a regional gridded runoff dataset using long short-term memory (LSTM) networks. Hydrology 2021, 8, 6.
  5. Gaur, S.; Johannet, A.; Graillot, D.; Omar, P.J. Modeling of groundwater level using artificial neural network algorithm and WA-SVR model. In Groundwater Resources Development and Planning in the Semi-Arid Region; Springer: Berlin/Heidelberg, Germany, 2021; pp. 129–150.
  6. Desai, S.; Ouarda, T.B. Regional hydrological frequency analysis at ungauged sites with random forest regression. J. Hydrol. 2021, 594, 125861.
  7. Kim, D.Y.; Song, C.M. Developing a discharge estimation model for ungauged watershed using CNN and hydrological image. Water 2020, 12, 3534.
  8. Reddy, P.J. Stochastic Hydrology (HB); Laxmi Publications, Ltd.: New Delhi, India, 1997.
  9. Dupacova, J.; Hurt, J.; Stepan, J. Stochastic Modeling in Economics and Finance; Springer Science & Business Media: New York, NY, USA, 2002; Volume 75.
  10. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. arXiv 2014, arXiv:1406.2661.
  11. Shen, C. A transdisciplinary review of deep learning research and its relevance for water resources scientists. Water Resour. Res. 2018, 54, 8558–8593.
  12. Takahashi, S.; Chen, Y.; Tanaka-Ishii, K. Modeling financial time-series with generative adversarial networks. Phys. A Stat. Mech. Its Appl. 2019, 527, 121261.
  13. Ji, H.K.; Mirzaei, M.; Lai, S.H.; Dehghani, A.; Dehghani, A. Implementing generative adversarial network (GAN) as a data-driven multi-site stochastic weather generator for flood frequency estimation. Environ. Model. Softw. 2024, 172, 105896.
  14. Ma, Y.; Zhong, P.-a.; Xu, B.; Zhu, F.; Yang, L.; Wang, H.; Lu, Q. Stochastic generation of runoff series for multiple reservoirs based on generative adversarial networks. J. Hydrol. 2022, 605, 127326.
  15. Zhang, T.; Yang, Z.; Li, D. Stochastic simulation of deltas based on a concurrent multi-stage VAE-GAN model. J. Hydrol. 2022, 607, 127493.
  16. Berthelot, D.; Schumm, T.; Metz, L. BEGAN: Boundary Equilibrium Generative Adversarial Networks. arXiv 2017, arXiv:1703.10717.
  17. Öcal, A.; Özbakır, L. Supervised deep convolutional generative adversarial networks. Neurocomputing 2021, 449, 389–398.
  18. Shaham, T.R.; Dekel, T.; Michaeli, T. SinGAN: Learning a generative model from a single natural image. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 4570–4580.
  19. MathWorks. Train Generative Adversarial Network. 2025. Available online: https://uk.mathworks.com/help/deeplearning/ug/train-generative-adversarial-network.html (accessed on 6 April 2025).
  20. Rozos, E.; Dimitriadis, P.; Mazi, K.; Koussis, A.D. A multilayer perceptron model for stochastic synthesis. Hydrology 2021, 8, 67.
  21. Ferchichi, A.; Chihaoui, M.; Ferchichi, A. Spatio-temporal modeling of climate change impacts on drought forecast using Generative Adversarial Network: A case study in Africa. Expert Syst. Appl. 2024, 238, 122211.
  22. Karimanzira, D. Mass Conservative Time-Series GAN for Synthetic Extreme Flood-Event Generation: Impact on Probabilistic Forecasting Models. Stats 2024, 7, 808–826.
  23. Belhajjam, R.; Chaqdid, A.; Yebari, N.; Seaid, M.; El Moçayd, N. Climate-informed flood risk mapping using a GAN-based approach (ExGAN). J. Hydrol. 2024, 638, 131487.
  24. Ma, Z.; Mei, G.; Xu, N. Generative deep learning for data generation in natural hazard analysis: Motivations, advances, challenges, and opportunities. Artif. Intell. Rev. 2024, 57, 160.
  25. Rozos, E.; Wieland, J.; Leandro, J. Measuring Turbulent Flows: Analyzing a Stochastic Process with Stochastic Tools. Fluids 2024, 9, 128.
  26. Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Papalexiou, S. Just two moments! A cautionary note against use of high-order moments in multifractal models in hydrology. Hydrol. Earth Syst. Sci. 2014, 18, 243–255.
  27. Koutsoyiannis, D. Stochastics of Hydroclimatic Extremes—A Cool Look at Risk, 4th ed.; KALLIPOS: Athens, Greece, 2023.
  28. Koutsoyiannis, D. HESS Opinions "A random walk on water". Hydrol. Earth Syst. Sci. 2010, 14, 585–601.
  29. Salas, J.D. Analysis and modelling of hydrological time series. In Handbook of Hydrology; Maidment, D., Ed.; McGraw-Hill: New York, NY, USA, 1993; Chapter 19.
  30. Rozos, E.; Leandro, J.; Koutsoyiannis, D. Stochastic Analysis and Modeling of Velocity Observations in Turbulent Flows. J. Environ. Earth Sci. 2024, 6, 45–56.
  31. Koutsoyiannis, D. Optimal decomposition of covariance matrices for multivariate stochastic models in hydrology. Water Resour. Res. 1999, 35, 1219–1229.
  32. Koutsoyiannis, D. A generalized mathematical framework for stochastic simulation and forecast of hydrologic time series. Water Resour. Res. 2000, 36, 1519–1533.
  33. Tanaka, J.C.G. AutoRegressive Fractionally Integrated Moving Average (ARFIMA) Model. 2025. Available online: https://blog.quantinsti.com/arfima-model/ (accessed on 6 June 2025).
  34. Liu, K.; Chen, Y.; Zhang, X. An Evaluation of ARFIMA (Autoregressive Fractional Integral Moving Average) Programs. Axioms 2017, 6, 16.
  35. Demuth, H.B.; Beale, M.H.; De Jess, O.; Hagan, M.T. Neural Network Design; Martin Hagan: Cambridge, MA, USA, 2014.
  36. Pascanu, R.; Mikolov, T.; Bengio, Y. On the difficulty of training recurrent neural networks. In Proceedings of the International Conference on Machine Learning, PMLR, Atlanta, GA, USA, 16–21 June 2013; pp. 1310–1318.
  37. Zeiler, M.D. ADADELTA: An Adaptive Learning Rate Method. arXiv 2012, arXiv:1212.5701.
  38. Cox, J. Cortexsys 3.1 User Guide. 2016. Available online: https://github.com/rozos/Cortexsys (accessed on 6 April 2025).
  39. Csiszar, I. I-Divergence Geometry of Probability Distributions and Minimization Problems. Ann. Probab. 1975, 3, 146–158.
  40. Koutsoyiannis, D. Climate change, the Hurst phenomenon, and hydrological statistics. Hydrol. Sci. J. 2003, 48, 3–24.
Figure 1. Schematic representation of GAN components and connections between them.
Figure 2. Hydraulic experiment used to obtain velocity measurements.
Figure 3. Example climacograms: (a) random signal, (b) random signal plus trend, and (c) a random signal plus a periodic signal.
Figure 4. Multivariate AR1 for producing synthetic V_x, V_y and V_z values.
Figure 5. Schematic representation of the HK scheme.
Figure 6. Topology of the CNN scheme.
Figure 7. Histogram of observed vs. synthetic V_x time series generated with (a) AR1, (b) HK, (c) CNN.
Figure 8. Histogram of observed vs. synthetic V_y time series generated with (a) AR1, (b) HK, (c) CNN.
Figure 9. Histogram of observed vs. synthetic V_z time series generated with (a) AR1, (b) HK, (c) CNN.
Figure 10. Climacogram of observed vs. synthetic V_x time series generated with (a) AR1, (b) HK, (c) CNN.
Figure 11. Climacogram of observed vs. synthetic V_y time series generated with (a) AR1, (b) HK, (c) CNN.
Figure 12. Climacogram of observed vs. synthetic V_z time series generated with (a) AR1, (b) HK, (c) CNN.
Table 1. Covariance matrix of the observed and synthetic time series.

           Obs.                       AR1                        HK                         CNN
        V_x    V_y     V_z       V_x     V_y     V_z       V_x     V_y     V_z       V_x     V_y     V_z
V_x    1.625  −0.205  −0.023    1.626   −0.206  −0.023    1.619   −0.204  −0.021    1.572   −0.202  −0.025
V_y   −0.205   0.395   0.130   −0.206    0.395   0.138   −0.204    0.395   0.138   −0.202    0.392   0.135
V_z   −0.023   0.130   1.418   −0.023    0.138   1.417   −0.021    0.138   1.420   −0.025    0.135   1.408
Table 2. Lag-1 cross-covariance matrix of the observed and synthetic time series.

           Obs.                       AR1                        HK                         CNN
        V_x    V_y     V_z       V_x     V_y     V_z       V_x     V_y     V_z       V_x     V_y     V_z
V_x    0.745   0.007  −0.089    0.746    0.005  −0.089    0.201   −0.041  −0.005    0.717    0.006  −0.091
V_y   −0.185   0.094   0.035   −0.186    0.094   0.036   −0.041    0.110   0.040   −0.182    0.093   0.034
V_z    0.009   0.158   0.393    0.009    0.159   0.392   −0.005    0.039   0.429    0.009    0.156   0.385
Table 3. Skewness of the observed and synthetic time series.

          Obs.                      AR1                       HK                        CNN
    V_x    V_y     V_z       V_x    V_y     V_z       V_x    V_y     V_z       V_x    V_y     V_z
   0.586  0.114  −0.029     0.604  0.124  −0.029     0.586  0.117  −0.028     0.588  0.110  −0.029
Table 4. Kullback–Leibler (KL) divergence between the PDFs of the synthetic and observed time series, and fitting error (FE) between the climacograms of the synthetic and observed time series.

            AR1                             ARFIMA                             HK                              CNN
        V_x      V_y       V_z        V_x       V_y       V_z        V_x        V_y       V_z        V_x       V_y       V_z
KL     0.279    2.894     2.423      2.372     1.157     0.742      3.318      3.663     2.731      0.295     0.881     0.828
FE   1.75×10⁻³ 6.15×10⁻³ 12.97×10⁻³ 4.81×10⁻³ 3.73×10⁻³ 3.27×10⁻³ 51.34×10⁻³ 9.75×10⁻³ 3.68×10⁻³ 1.43×10⁻⁴ 4.30×10⁻⁴ 2.51×10⁻⁴
