Search Results (461)

Search Parameters:
Keywords = copula distribution

47 pages, 3035 KB  
Review
A Review of Photovoltaic Uncertainty Modeling Based on Statistical Relational AI
by Linfeng Yang and Xueqian Fu
Energies 2026, 19(6), 1509; https://doi.org/10.3390/en19061509 - 18 Mar 2026
Viewed by 174
Abstract
With the growing penetration of photovoltaic (PV) generation, robust uncertainty characterization is essential for secure operation, economic dispatch, and flexibility planning. This review surveys PV scenario generation from three perspectives: (i) explicit probabilistic approaches (distribution fitting, Copula-based dependence modeling, autoregressive moving average (ARMA)-type time-series methods, and clustering/dimensionality reduction), (ii) deep generative models (GANs, VAEs, and diffusion models), and (iii) hybrid Statistical Relational AI (SRAI) frameworks. We discuss the strengths of explicit models in interpretability and tractability, and their limitations in representing high-dimensional nonlinear, multimodal, and multiscale spatiotemporal dependencies. We also examine the ability of deep generative methods to synthesize diverse scenarios across meteorological regimes and multiple sites, while noting persistent challenges in interpretability, physical consistency, and deployment. To bridge these gaps, we outline an SRAI-oriented integration pathway that embeds statistical structure, meteorology–power relations, spatiotemporal coupling, and operational constraints into generative architectures. Finally, we highlight directions for future research, including unified evaluation protocols, cross-regional data collaboration, controllable extreme-scenario generation, and computationally efficient generative designs. Full article
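Copula-based dependence modeling, one of the explicit probabilistic approaches named above, can be sketched with a minimal Gaussian-copula sampler for two correlated PV sites; everything below (function names, rho = 0.8, the two-dimensional setup) is an illustrative assumption, not code from the review:

```python
import math
import random

def std_normal_cdf(x):
    # Phi(x) via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_copula_pair(rho, n, seed=0):
    """Draw n pseudo-observations (u, v) on [0, 1]^2 whose dependence
    follows a bivariate Gaussian copula with correlation rho."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        out.append((std_normal_cdf(z1), std_normal_cdf(z2)))
    return out

samples = gaussian_copula_pair(rho=0.8, n=5000)
# Marginals are uniform; positive rho shows up as positive covariance of the uniforms.
mean_u = sum(u for u, _ in samples) / len(samples)
cov_uv = sum((u - 0.5) * (v - 0.5) for u, v in samples) / len(samples)
```

Feeding these uniforms through site-specific inverse CDFs (Sklar's theorem) would then yield correlated power scenarios of the kind the review surveys.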

18 pages, 516 KB  
Article
On Return Probabilities of Adverse Events Under Dependence and Lessons to Learn for Decision-Making
by Marius Hofert
Risks 2026, 14(3), 58; https://doi.org/10.3390/risks14030058 - 5 Mar 2026
Viewed by 233
Abstract
Achieving a goal in each of several time intervals, when in every time interval an adverse event may lead to a failure, raises the question of the return probability of adverse events, that is, the probability of at least one failure occurring during the time period of interest. Through basic mathematical arguments in tractable cases, we investigate the behavior of the return probability of adverse events in various setups. In the univariate case, we consider the independent and identically distributed setup, the independent setup, the dependent but not necessarily identically distributed setup, and the dependent and identically distributed setup. In the multivariate case, we consider several goals to be achieved in each time period. Besides different setups for the marginal failure probabilities, we study dependence in terms of comonotone blocks and independent blocks and via nested copulas. In cases where closed-form expressions are not available, we derive bounds on the return probability of at least one failure. Our results are interpretable in terms of decision-making, provide insight into what affects such return probabilities, and may thus help to develop strategies to lower them. Full article
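In the simplest tractable cases mentioned above, the return probability has a closed form, and arbitrary dependence still yields elementary bounds. A minimal sketch (function names and the p = 0.01, n = 50 example are illustrative, not taken from the paper):

```python
def return_prob_iid(p, n):
    """P(at least one failure in n periods) when failures are independent,
    each occurring with probability p: 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p) ** n

def return_prob_bounds(ps):
    """Bounds under arbitrary dependence for per-period failure
    probabilities ps: comonotone failures give the lower bound max(ps),
    and a union of events can never exceed min(1, sum(ps))."""
    return max(ps), min(1.0, sum(ps))

r = return_prob_iid(0.01, 50)             # about 0.395
lo, hi = return_prob_bounds([0.01] * 50)  # (0.01, about 0.5)
```

Even a small per-period failure probability compounds quickly under independence, while fully comonotone failures keep the return probability at the single-period level; this is the kind of contrast the abstract's setups make precise.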

30 pages, 618 KB  
Article
Learning Continuous Decomposable Models Using Mutual Information and Statistical Copulas
by Luiz Desuó Neto, Henrique de Oliveira Caetano, Matheus de Souza Sant’Anna Fogliatto and Carlos Dias Maciel
Entropy 2026, 28(3), 293; https://doi.org/10.3390/e28030293 - 4 Mar 2026
Viewed by 237
Abstract
Learning dependence graphs from multivariate continuous data is challenging when marginal distributions are heterogeneous, since likelihood-based nonparametric scores can be sensitive to smoothing choices and can confound marginal irregularities, including non-identifiability, with dependence. This work studies structure learning in the class of decomposable (chordal) Markov random fields, where junction tree factorizations enable tractable inference and local score updates. Our first contribution is a theoretical result showing that, under decomposability, mutual information can be expressed as a difference of clique/separator copula entropies, yielding a dependence-only decomposition aligned with the clique/separator structure. Building on this identity, we define an information-theoretic objective for decomposable graphs with a complexity penalty that preserves clique/separator additivity, and we derive closed-form local score differences for chordality-preserving single-edge insertions and deletions. To make the score computable from data, we instantiate clique/separator copula entropies using pseudo-observations and a probit-transformed kernel density estimator with predictive log score evaluation to mitigate boundary effects on the unit hypercube. The resulting nonparametric greedy procedure improves edge recovery accuracy on synthetic chordal benchmarks compared with a likelihood-driven nonparametric baseline, and it produces interpretable dependence summaries on an airway epithelial gene expression dataset. Concretely, this paper contributes (1) a decomposable mutual information identity via clique/separator copula entropies, (2) a copula information score with an additive complexity penalty for decomposable graphs, (3) a closed-form local score, enabling greedy chordal add or delete search, (4) a practical nonparametric copula entropy estimation pipeline, and (5) empirical gains on synthetic and real data. Full article
(This article belongs to the Special Issue Bayesian Network and Signal Processing)
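Two primitives behind the pipeline above are easy to sketch: rank-based pseudo-observations, and the fact that for a Gaussian copula the mutual information equals the negative copula entropy in closed form. This is an illustrative sketch, not the paper's kernel-based estimator:

```python
import math

def pseudo_observations(xs):
    """Rank-based pseudo-observations u_i = rank(x_i) / (n + 1),
    mapping a sample onto (0, 1) independently of its marginal
    distribution (no ties assumed in this sketch)."""
    n = len(xs)
    order = sorted(range(n), key=lambda i: xs[i])
    u = [0.0] * n
    for rank, i in enumerate(order, start=1):
        u[i] = rank / (n + 1)
    return u

def gaussian_mutual_information(rho):
    """For a bivariate Gaussian copula, the mutual information equals the
    negative copula entropy: I = -0.5 * ln(1 - rho^2)."""
    return -0.5 * math.log(1.0 - rho * rho)

u = pseudo_observations([3.1, -0.2, 5.7, 1.4])
# ranks: -0.2 -> 1, 1.4 -> 2, 3.1 -> 3, 5.7 -> 4, so u = [0.6, 0.2, 0.8, 0.4]
```

The paper replaces the Gaussian closed form with a probit-transformed kernel density estimate of the copula entropy computed on such pseudo-observations.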

20 pages, 1623 KB  
Article
Deep Contextual Bandits with Multivariate Outcomes: Empirical Copula Normalization, Temporal Feature Learning, and Doubly Robust Policy Evaluation
by Jong-Min Kim
Mathematics 2026, 14(5), 846; https://doi.org/10.3390/math14050846 - 2 Mar 2026
Viewed by 315
Abstract
We develop and evaluate a deep contextual bandit framework for multivariate off-policy evaluation within a controlled simulation-based validation setting. Using real covariate distributions from the Adult, Boston Housing, and Wine Quality datasets, we construct synthetic treatment assignments and multivariate potential outcomes to enable rigorous benchmarking under known data-generating processes. We compare CNN-LSTM, LSTM, and Feed-forward Neural Network (FNN) architectures as nonlinear action-value estimators. To examine representation learning under structured dependence, an AR(1) feature augmentation scheme is employed, while multivariate outcomes are standardized using empirical copula transformations to preserve cross-dimensional dependence. Policy values are estimated using Stabilized Importance Sampling (SIPS) and doubly robust (DR) estimators with bootstrap inference. Although the decision problem is strictly one-step, empirical results indicate that CNN-LSTM architectures provide competitive action-value calibration under temporal augmentation. Across all datasets, the DR estimator demonstrates substantially lower variance and greater stability than SIPS, consistent with its theoretical variance-reduction properties. Diagnostic analyses—including propensity overlap assessment, cumulative oracle regret (with oracle values known by construction), calibration evaluation, and sensitivity analysis—support the reliability of the proposed evaluation framework. Overall, the results demonstrate that combining copula-normalized multivariate outcomes with doubly robust off-policy evaluation yields a statistically principled and variance-efficient approach for offline policy learning in high-dimensional simulated environments. Full article
(This article belongs to the Special Issue Advances in Statistical AI and Causal Inference)
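The doubly robust (DR) estimator referenced above combines a reward model with an importance-weighted correction. The sketch below uses hypothetical names (`q_hat`, `p_hat`, `target_policy`) and a one-step, discrete-action setup; it illustrates the standard DR estimator, not the paper's implementation:

```python
def doubly_robust_value(logs, target_policy, q_hat, p_hat):
    """Doubly robust off-policy value estimate.
    logs: list of (context, action, reward) tuples from the logging policy.
    target_policy(x): action chosen by the policy being evaluated.
    q_hat(x, a): estimated reward model; p_hat(x, a): logging propensity."""
    total = 0.0
    for x, a, r in logs:
        pi_a = target_policy(x)
        dm = q_hat(x, pi_a)                        # direct-method term
        if a == pi_a:
            dm += (r - q_hat(x, a)) / p_hat(x, a)  # importance-weighted correction
        total += dm
    return total / len(logs)

# Toy check with deterministic rewards r = q(x, a) and an exact reward model.
q = lambda x, a: x + a
logs = [(0, 0, 0.0), (0, 1, 1.0), (1, 0, 1.0), (1, 1, 2.0)]
v = doubly_robust_value(logs, target_policy=lambda x: 1,
                        q_hat=q, p_hat=lambda x, a: 0.5)
```

In this deterministic toy the correction vanishes exactly and v equals the mean of q(x, 1); in general the correction has mean zero whenever either `q_hat` or `p_hat` is correct, which is the doubly robust property behind the variance advantage over pure importance sampling.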

16 pages, 4922 KB  
Article
Study on the Joint Probability Distribution of Hydrodynamic Conditions in Xiamen Bay Based on Copula Functions
by Xuechun Lin, Zheng Wang, Yuwen Shen, Chunyan Zhou and Changcun Zhou
J. Mar. Sci. Eng. 2026, 14(4), 404; https://doi.org/10.3390/jmse14040404 - 23 Feb 2026
Viewed by 274
Abstract
The Xiamen Bay area is frequently impacted by typhoons and is characterized by a complex hydrodynamic environment. The combined action of waves, currents, and storm surges threatens the construction of the Third Eastern Link. Traditional design methods often overlook the correlations among hydrological variables, potentially leading to overestimated design standards. To address this issue, we developed a high-accuracy multi-driver hydrodynamic numerical model for Xiamen Bay. A high-resolution dataset of waves, currents, and storm surges spanning nearly 20 years was established. Based on the Copula function, a trivariate joint probability distribution of wave–current–storm surge was constructed. The results indicate that the Gamma distribution is the most suitable marginal distribution for the individual variables, and the Clayton Copula function best captures the dependence structure among the three variables. For the same return period, the design values of wave height, current velocity, and water level obtained using the Copula method are lower than those derived using traditional standard methods. The research findings can provide a more scientific and economical design basis for the Third Eastern Link project and serve as a reference for multivariate joint probability modeling in similar sea areas. Full article
(This article belongs to the Section Ocean Engineering)
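The Clayton dependence structure selected above can be simulated directly. Below is a minimal trivariate sampler using the Gamma-frailty (Marshall-Olkin) construction, with illustrative theta and sample size rather than the paper's fitted values:

```python
import random

def clayton_trivariate_sample(theta, n, seed=1):
    """Sample n triples from a trivariate Clayton copula (theta > 0) via the
    Gamma-frailty construction: with V ~ Gamma(1/theta, 1) and E_i ~ Exp(1),
    set U_i = (1 + E_i / V) ** (-1 / theta)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        v = rng.gammavariate(1.0 / theta, 1.0)
        triple = tuple((1.0 + rng.expovariate(1.0) / v) ** (-1.0 / theta)
                       for _ in range(3))
        out.append(triple)
    return out

triples = clayton_trivariate_sample(theta=2.0, n=2000)
# Positive dependence shows up as positive covariance of any uniform pair.
cov_12 = sum((u - 0.5) * (v - 0.5) for u, v, _ in triples) / len(triples)
```

Since the abstract fits Gamma marginal distributions, applying inverse Gamma CDFs to these uniform triples would yield joint wave, current, and storm-surge draws via Sklar's theorem.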

26 pages, 3523 KB  
Article
A Copula-Based Joint Modeling Framework for Hospitalization Costs and Length of Stay in Massive Healthcare Data
by Xuan Xu and Yijun Wang
Systems 2026, 14(2), 226; https://doi.org/10.3390/systems14020226 - 23 Feb 2026
Viewed by 232
Abstract
In large-scale medical data, the connection between hospital length of stay and medical expenses shows a complex and nonlinear relationship instead of a straightforward positive link. This study proposes a Cox–Log-Logistic–Copula joint modeling framework to describe the marginal distributions and latent dependence between the two variables. Specifically, a semi-parametric Cox proportional hazards model is used for hospitalization duration, while a Log-Logistic model handles medical costs. The two margins are flexibly coupled through a Copula function to capture dynamic variations in cost levels during different hospitalization stages. To address computational challenges in large datasets, this study also includes subsample correction and one-step adjustment algorithms, combined with parallel computing strategies, to enhance estimation efficiency and accuracy. Empirical results show that the length of hospital stays and medical costs are not always positively related. In some cases, higher medical expenses occur during shorter stays, suggesting possible over-treatment or uneven resource distribution. The proposed framework proves to have strong explanatory power in identifying nonlinear patterns in healthcare behavior and offers a new quantitative tool for optimizing medical resource allocation and controlling costs. Full article

27 pages, 1628 KB  
Article
Synthetic Data Augmentation for Imbalanced Tabular Data: A Comparative Study of Generation Methods
by Dong-Hyun Won, Kwang-Seong Shin and Sungkwan Youm
Electronics 2026, 15(4), 883; https://doi.org/10.3390/electronics15040883 - 20 Feb 2026
Viewed by 544
Abstract
Class imbalance in tabular datasets poses a challenge for machine learning classification tasks, often leading to biased models that underperform in predicting minority class instances. This study presents a comparative analysis of synthetic data generation methods for addressing class imbalance in tabular data. We evaluate four augmentation approaches—Synthetic Minority Over-sampling Technique (SMOTE), Gaussian Copula, Tabular Variational Autoencoder (TVAE), and Conditional Tabular Generative Adversarial Network (CTGAN)—using the University of California Irvine (UCI) Bank Marketing dataset, which exhibits a class imbalance ratio of approximately 7.88:1. Our experimental framework assesses each method across three dimensions: statistical fidelity to the original data distribution evaluated through four complementary metrics (marginal numerical similarity, categorical distribution similarity, correlation structure preservation, and Kolmogorov–Smirnov test), machine learning utility measured through classification performance, and minority class detection capability. Results indicate that all augmentation methods achieved statistically significant improvements over the baseline (p < 0.05). SMOTE achieved the highest recall (54.2%, a 117.6% relative improvement over the baseline) and F1-Score (0.437, +22.4% over the baseline) for minority class detection, while Gaussian Copula provided the highest composite fidelity score (0.930) with competitive predictive performance. A weak negative correlation (ρ = −0.30) between composite fidelity and classification performance was observed, suggesting that higher statistical fidelity does not necessarily translate to better downstream task performance.
Deep learning-based methods (TVAE, CTGAN) showed statistically significant improvements over the baseline (recall: +58% to +63%) but underperformed compared to simpler methods under default configurations, suggesting the need for larger training samples or more extensive hyperparameter tuning. These findings offer reference points for practitioners working with moderately imbalanced tabular data with limited minority class samples, supporting the selection of generation strategies based on specific requirements regarding data fidelity and classification objectives. Full article
(This article belongs to the Special Issue Data-Related Challenges in Machine Learning: Theory and Application)
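Of the four generators compared above, SMOTE is the simplest to sketch: each synthetic point is a random convex combination of a minority sample and one of its k nearest minority neighbours. A minimal numeric-only sketch (names and toy data are illustrative; real SMOTE variants also handle categorical columns):

```python
import random

def smote_like_oversample(minority, n_new, k=3, seed=42):
    """SMOTE-style synthesis: each new point lies on the segment between a
    minority sample x and one of its k nearest minority neighbours nn:
    x_new = x + lam * (nn - x), lam ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # brute-force nearest neighbours by squared Euclidean distance
        by_dist = sorted(minority, key=lambda y: sum((a - b) ** 2 for a, b in zip(x, y)))
        nn = rng.choice(by_dist[1:k + 1])   # skip x itself at distance 0
        lam = rng.random()
        synthetic.append(tuple(a + lam * (b - a) for a, b in zip(x, nn)))
    return synthetic

new_pts = smote_like_oversample([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)],
                                n_new=10)
```

By construction the synthetic points stay inside the convex hull of the minority class, which is why SMOTE boosts recall without modelling the full joint distribution the way the copula- and deep-generative baselines do.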

42 pages, 10041 KB  
Article
Probabilistic Prediction of Concrete Compressive Strength Using Copula Functions: A Novel Framework for Uncertainty Quantification
by Cheng Zhang, Senhao Cheng, Shanshan Tao, Shuai Du and Zhengjun Wang
Buildings 2026, 16(4), 754; https://doi.org/10.3390/buildings16040754 - 12 Feb 2026
Viewed by 272
Abstract
Traditional machine learning models for concrete compressive strength prediction provide only single-value estimates without quantifying the probability of meeting design requirements, leaving engineers unable to make risk-informed decisions. This study addresses this critical limitation by developing a novel probabilistic prediction framework that integrates explainable machine learning with Copula-based joint distribution modeling. Using a dataset of 1030 concrete samples with curing ages ranging from 1 to 365 days, we first established an XGBoost 2.1.4 prediction model achieving R2 = 0.9211 (RMSE = 4.51 MPa) on the test set. SHAP 0.49.1 (SHapley Additive exPlanations) analysis identified curing age (33.3%) and water–cement ratio (28.8%) as the dominant features, together accounting for 62.1% of predictive importance. These two controllable engineering parameters were then selected as core variables for probabilistic modeling. The key innovation lies in integrating Copula-based dependence modeling with explainable machine learning (XGBoost–SHAP) to quantify the compliance probability of concrete strength under specific mix designs and curing conditions, thereby supporting risk-informed quality control decisions. Through systematic comparison of five Copula families (Gaussian, Student t, Clayton, Gumbel, and Frank), we identified optimal dependence structures: Gaussian Copula (ρ = −0.54) for the water–cement ratio–strength relationship and Clayton Copula for the age–strength relationship, revealing asymmetric tail dependence patterns invisible to conventional correlation analysis. The three-dimensional Copula model enables engineers to estimate compliance probability—the likelihood of concrete achieving target strength under specific mix designs and curing conditions. 
We propose an illustrative three-tier decision rule for construction quality management based on the compliance probability P: P ≥ 0.95 (high-confidence approval), 0.80 ≤ P < 0.95 (warning zone requiring enhanced monitoring), and P < 0.80 (high risk suggesting corrective actions such as mix adjustment or extended curing), noting that these thresholds can be recalibrated to project-specific risk tolerance and local specifications. This framework supports a paradigm shift from reactive “mix-then-test” quality control to proactive “predict-then-decide” construction management, providing quantitative risk assessment tools previously unavailable in deterministic prediction approaches. Full article
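The illustrative three-tier rule can be stated directly in code; the thresholds are the example values from the abstract and, as the authors note, should be recalibrated to project-specific risk tolerance:

```python
def compliance_tier(p):
    """Map a compliance probability P to the abstract's illustrative tiers:
    P >= 0.95 -> high-confidence approval,
    0.80 <= P < 0.95 -> warning zone (enhanced monitoring),
    P < 0.80 -> high risk (corrective actions suggested)."""
    if p >= 0.95:
        return "approve"
    if p >= 0.80:
        return "warn"
    return "high-risk"
```

In the proposed workflow, P itself would come from the fitted three-dimensional Copula model of curing age, water-cement ratio, and strength.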

24 pages, 3767 KB  
Article
A Typical Scenario Generation Method Based on KDE-Copula for PV Hosting Capacity Analysis in Distribution Networks
by Bo Zhao, Minglei Jiang, Xuyang Wang, Ruizhang Wang, Jingyao Xiong, Nan Yang and Zhenhua Li
Processes 2026, 14(4), 617; https://doi.org/10.3390/pr14040617 - 10 Feb 2026
Viewed by 277
Abstract
Wind-solar power generation is inherently uncertain. These uncertainties bring considerable difficulties to the assessment of hosting capacity. To tackle these difficulties, it is essential to create typical scenarios that can precisely capture the statistical traits and interrelationships of wind-solar power. In this research, we systematically integrate various scenario generation techniques, resulting in the creation of a holistic framework grounded in kernel density estimation (KDE) and Copula functions. Our proposed approach represents the stochastic nature of wind-solar power output by constructing their respective probability density functions (PDFs). It comprehensively depicts the potential spatiotemporal complementarity between wind-solar power by utilizing Copula functions and establishing a joint probability distribution model. Through Monte Carlo simulation, we generated a large number of wind-solar output scenarios. Subsequently, we employed the K-means clustering algorithm to reduce the number of scenarios. The findings reveal that the integrated framework, which combines KDE and Copula theory, achieves higher fitting accuracy for the marginal distributions and correlation structures of wind-solar power generation. As a result, the generated scenarios are more representative and reliable, offering strong support for photovoltaic (PV) hosting capacity analysis (HCA) and the formulation of typical plans. We validate the proposed method using historical wind-solar data from several representative regions in China, such as Inner Mongolia, northern Hebei, the Beijing–Tianjin–Hebei region, and Hubei Province. This validation demonstrates the method’s applicability under various geographical and climatic conditions. Full article
(This article belongs to the Special Issue Applications of Smart Microgrids in Renewable Energy Development)

25 pages, 1321 KB  
Article
The Stingray Copula for Negative Dependence
by Alecos Papadopoulos
Stats 2026, 9(1), 13; https://doi.org/10.3390/stats9010013 - 4 Feb 2026
Viewed by 423
Abstract
We present a new single-parameter bivariate copula, called the Stingray, that is dedicated to representing negative dependence, and it nests the Independence copula. The Stingray copula is generated in a relatively novel way; it has a simple form and is always defined over the full support, unlike many copulas that model negative dependence. We provide visualizations of the copula, derive several dependence properties, and compute basic concordance measures. We compare it with other copulas and joint distributions with respect to the extent of dependence it can capture, and we find that the Stingray copula outperforms most of them while remaining competitive with well-known, widely used copulas such as the Gaussian and Frank copulas. Moreover, we show, through simulation, that the dependence structure it represents cannot be fully captured by these copulas, as it is asymmetric. We also show how the non-parametric Spearman’s rho measure of concordance can be used to formally test the hypothesis of statistical independence. As an illustration, we apply it to a financial data sample from the building construction sector in order to model the negative relationship between the level of capital employed and its gross rate of return. Full article
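The rank-based independence test mentioned at the end of the abstract can be sketched as follows: under independence, Spearman's rho times sqrt(n - 1) is approximately standard normal for large n. Function names are illustrative and ties are ignored:

```python
import math

def spearman_rho(xs, ys):
    """Spearman's rank correlation via 1 - 6*sum(d^2)/(n(n^2-1)),
    assuming no ties in this sketch."""
    n = len(xs)
    rx = {i: r for r, i in enumerate(sorted(range(n), key=lambda i: xs[i]), 1)}
    ry = {i: r for r, i in enumerate(sorted(range(n), key=lambda i: ys[i]), 1)}
    d2 = sum((rx[i] - ry[i]) ** 2 for i in range(n))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

def independence_z(xs, ys):
    """Large-sample test statistic: under H0 (independence),
    rho_S * sqrt(n - 1) is approximately N(0, 1)."""
    return spearman_rho(xs, ys) * math.sqrt(len(xs) - 1)

rho = spearman_rho([1, 2, 3, 4, 5], [10, 8, 6, 4, 2])  # perfectly discordant
```

A strongly negative rho, as in the capital-employed versus rate-of-return application, would reject independence and motivate fitting a negative-dependence copula such as the Stingray.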

28 pages, 7463 KB  
Article
Efficient Modeling of the Energy Sector Using a New Bivariate Copula
by Jumanah Ahmed Darwish and Muhammad Qaiser Shahbaz
Mathematics 2026, 14(3), 540; https://doi.org/10.3390/math14030540 - 2 Feb 2026
Viewed by 330
Abstract
Copulas are a useful tool for generating bivariate distributions from univariate marginals, and the same approach can generate entire bivariate families of distributions. In this paper, a new copula is proposed, and some of its useful properties are studied, including the conditional copula. Various dependence measures for the proposed copula are obtained, and a multivariate extension is also suggested. The proposed copula is then used to obtain a new bivariate family of distributions. Useful properties of this family are studied, including conditional distributions, joint and conditional moments, joint reliability and hazard rate functions, and parameter estimation. A specific member of the proposed family is also discussed. The proposed bivariate distribution is used to model energy sector data of the Kingdom of Saudi Arabia. Full article
(This article belongs to the Special Issue Advances in Statistical Methods with Applications)
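The construction described above, turning a copula plus univariate marginals into a bivariate family, is Sklar's theorem in action. Since the paper's new copula is not reproduced in the abstract, the sketch below substitutes the classical Farlie-Gumbel-Morgenstern (FGM) copula purely for illustration:

```python
import math

def fgm_copula(u, v, theta):
    """Farlie-Gumbel-Morgenstern copula, valid for -1 <= theta <= 1:
    C(u, v) = u*v*(1 + theta*(1 - u)*(1 - v)).
    Used here only to illustrate the construction; it is NOT the
    paper's proposed copula."""
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

def bivariate_exponential_cdf(x, y, lam1, lam2, theta):
    """Sklar construction: joint CDF H(x, y) = C(F(x), G(y))
    with exponential marginals F and G."""
    fx = 1.0 - math.exp(-lam1 * x)
    gy = 1.0 - math.exp(-lam2 * y)
    return fgm_copula(fx, gy, theta)
```

Any pair of marginals can be plugged in the same way, which is how the paper derives its new bivariate family and then fits a specific member to the Saudi energy sector data.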

20 pages, 5161 KB  
Article
Copula-Based Bayesian Inference Approaches for Uncertainty Quantification for Hydrological Simulation
by Feng Wang, Ruixin Duan, Jiannan Zhang, Mengyu Zhai, Yanfeng Li, Yurui Fan and Yulei Xie
Hydrology 2026, 13(2), 50; https://doi.org/10.3390/hydrology13020050 - 29 Jan 2026
Viewed by 493
Abstract
In this study, an advanced copula-based Bayesian inference framework is proposed to characterize probabilistic features in hydrological simulations. Specifically, a Copula–Metropolis–Hastings (CopMH) algorithm is developed through integrating copula functions into the conventional Metropolis–Hastings (MH) algorithm within an interdependence-sampling framework. In CopMH, the interdependence structure among model parameters is quantified using copula functions, which are subsequently employed to generate proposal candidates. The proposed approach is then applied to uncertainty analysis in hydrological simulations of the Ruihe River watershed in Northwest China. The results indicate that, compared with the traditional MH, incorporating copula-based proposal distributions significantly improves convergence efficiency and simulation accuracy, as inter-parameter dependence is more effectively captured. All algorithms are independently repeated 15 times, and CopMH exhibits more robust and stable performance than MH. Furthermore, the intercorrelation analysis of hydrological model parameters reveals that interactive effects among parameters are ubiquitous. These findings highlight that consideration of the interrelationship among the parameters in hydrologic models is meaningful and necessary for uncertainty quantification of hydrological simulation. This study demonstrates the strong potential of the proposed CopMH approach for effectively quantifying and reducing parameter uncertainty in hydrological simulations. Full article
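The core idea of CopMH, building inter-parameter dependence into the proposal distribution, can be approximated in a toy form with correlated Gaussian proposal increments. This is a simplified stand-in (a true copula proposal would fit and sample a copula of the parameters), and all names and constants are illustrative:

```python
import math
import random

def mh_correlated_proposal(log_target, x0, rho, steps, seed=7):
    """Random-walk Metropolis-Hastings whose 2-D proposal increments are
    correlated with coefficient rho, so candidates respect the dependence
    between the two parameters. The proposal is symmetric, so the plain
    MH acceptance ratio applies."""
    rng = random.Random(seed)
    x = list(x0)
    chain = [tuple(x)]
    for _ in range(steps):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        cand = [x[0] + 0.5 * z1, x[1] + 0.5 * z2]
        if math.log(rng.random() + 1e-300) < log_target(cand) - log_target(x):
            x = cand
        chain.append(tuple(x))
    return chain

# Toy target: standard bivariate normal with correlation 0.8 (up to a constant).
def log_target(p):
    x, y = p
    return -(x * x - 2 * 0.8 * x * y + y * y) / (2 * (1 - 0.8 ** 2))

chain = mh_correlated_proposal(log_target, (0.0, 0.0), rho=0.8, steps=4000)
xs = [p[0] for p in chain]
ys = [p[1] for p in chain]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov_xy = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)
```

Matching the proposal's dependence to the target's, as CopMH does with fitted copulas, is what raises acceptance rates and accelerates convergence relative to independent-increment MH.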

22 pages, 3994 KB  
Article
Study on Temporal Convolutional Network Rainfall Prediction Model and Its Interpretability Guided by Physical Mechanisms
by Dongfang Ma, Yunliang Wen, Chongxu Zhao and Chunjin Zhang
Hydrology 2026, 13(1), 38; https://doi.org/10.3390/hydrology13010038 - 19 Jan 2026
Viewed by 320
Abstract
Rainfall, as the main driving force of natural disasters such as floods and droughts, has strongly non-linear and abrupt characteristics, which make it difficult to predict. As extreme weather events occur frequently in the Yellow River Basin (YRB), it is especially critical to reveal the physical mechanism of rainfall in the basin and to integrate monthly-scale meteorological data to achieve monthly rainfall prediction. In this paper, we propose a rainfall prediction model that couples a physical mechanism with a temporal convolutional network (TCN) to predict monthly rainfall in the basin. The physical mechanism linking rainfall factors is revealed using transfer entropy and a multidimensional Copula function, and this mechanism is embedded into the TCN to construct a dual-driven prediction model combining physical knowledge and data, while SHAP is used to analyze the interpretability of the prediction model. The results are as follows: (1) Temperature, relative humidity, and evaporation are the key characteristic factors driving rainfall. (2) The physical relationships among temperature, relative humidity, and evaporation can be described by the three-dimensional Gumbel–Hougaard Copula function, whose joint distribution probability shows a more concentrated data distribution. (3) The PHY-TCN model can accurately fit the extremes of the rainfall series, improving model accuracy on the training set by 3.82%, 1.39%, and 9.82% compared with TCN, CNN, and LSTM, respectively, and on the test set by 6.04%, 2.55%, and 8.91%, respectively. (4) Embedding physical mechanisms enhances the contribution of individual feature variables in the PHY-TCN model and increases the persuasiveness of the model. This study provides a new research framework for rainfall prediction in the YRB and analyzes the physical relationship between the input data and output results of the deep learning model.
It has important practical significance and strategic value for guiding the optimal scheduling of water resources, improving the risk management level of the basin, and promoting the ecological protection and high-quality development of the YRB. Full article
(This article belongs to the Special Issue Global Rainfall-Runoff Modelling)
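The three-dimensional Gumbel-Hougaard copula used in finding (2) has a simple closed form; a minimal evaluation sketch (parameter values are illustrative, not the paper's fitted theta):

```python
import math

def gumbel_hougaard(us, theta):
    """Gumbel-Hougaard copula for any dimension, theta >= 1:
    C(u_1, ..., u_d) = exp(-(sum_i (-ln u_i)^theta)^(1/theta)).
    theta = 1 recovers the independence copula."""
    s = sum((-math.log(u)) ** theta for u in us)
    return math.exp(-s ** (1.0 / theta))

c_indep = gumbel_hougaard([0.5, 0.5, 0.5], theta=1.0)   # = 0.5**3
c_strong = gumbel_hougaard([0.5, 0.5, 0.5], theta=3.0)  # stronger upper-tail dependence
```

Larger theta concentrates the joint probability mass, which matches the abstract's observation that the fitted joint distribution of temperature, relative humidity, and evaporation is more concentrated.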

29 pages, 4312 KB  
Article
Distributionally Robust Optimization-Based Planning of an AC-Integrated Wind–Photovoltaic–Hydro–Storage Bundled Transmission System Considering Wind–Photovoltaic Uncertainty and Correlation
by Tu Feng, Xin Liao and Lili Mo
Energies 2026, 19(2), 389; https://doi.org/10.3390/en19020389 - 13 Jan 2026
Viewed by 310
Abstract
This paper investigates the planning problem of AC-integrated wind–photovoltaic–hydro–storage (WPHS) bundled transmission systems. To effectively capture the uncertainty and interdependence in renewable power outputs, a Copula-enhanced distributionally robust optimization (DRO) framework is developed, enabling a unified treatment of stochastic and correlated renewable generation within the system planning process. First, a location and capacity planning model based on DRO for WPHS generation bases is formulated, in which a composite-norm ambiguity set is constructed to describe the uncertainty of renewable resources. Second, the Copula function is employed to characterize the nonlinear dependence structure between wind and photovoltaic (PV) power outputs, providing representative scenarios and initial probability distribution (PD) support for the construction of a bivariate ambiguity set that embeds coupling information. The resulting optimization problem is solved using the column and constraint generation (C&CG) algorithm. In addition, an evaluation metric termed the transmission corridor utilization rate (TCUR) is proposed to quantitatively assess the efficiency of external AC transmission planning schemes, offering a new perspective for the evaluation of regional power transmission strategies. Finally, simulation results validate that the proposed model achieves superior performance in terms of system economic efficiency and TCUR. Full article

17 pages, 3476 KB  
Article
Integer-Valued Time Series Model via Copula-Based Bivariate Skellam Distribution
by Mohammed Alqawba, Norou Diawara and Mame Mor Sene
J. Risk Financial Manag. 2026, 19(1), 27; https://doi.org/10.3390/jrfm19010027 - 2 Jan 2026
Viewed by 588
Abstract
Time series analysis is crucial for modeling and forecasting diverse real-world phenomena. Traditional models typically assume continuous-valued data; however, many applications involve integer-valued series, often including negative integers. This paper introduces an approach that combines copula theory with the bivariate Skellam distribution to handle such integer-valued data effectively. Copulas are widely recognized for capturing complex dependencies among variables. By integrating copulas, our proposed method respects integer constraints while modeling positive, negative, and temporal dependencies accurately. Through simulation and an empirical study on a real-life example, we demonstrate that our class of models performs well. This approach has broad applicability in areas such as finance, epidemiology, and environmental science, where modeling series with integer values, both positive and negative, is essential. Full article
(This article belongs to the Special Issue Mathematical Modelling in Economics and Finance)
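The Skellam distribution underlying the model is the law of a difference of two independent Poisson counts, so it naturally covers negative integers. A minimal pmf sketch by truncated convolution (function names and the truncation length are illustrative; this shows the marginal only, not the paper's copula-based time-series model):

```python
import math

def skellam_pmf(k, mu1, mu2, terms=200):
    """P(N1 - N2 = k) for independent N1 ~ Poisson(mu1), N2 ~ Poisson(mu2),
    summed by direct convolution truncated at `terms`. Poisson terms are
    computed in log space (via lgamma) so large j stays numerically stable."""
    def log_pois(j, mu):
        return -mu + j * math.log(mu) - math.lgamma(j + 1)
    return sum(math.exp(log_pois(j + k, mu1) + log_pois(j, mu2))
               for j in range(max(0, -k), terms))

p0 = skellam_pmf(0, 1.0, 1.0)  # = exp(-2) * I_0(2), about 0.3085
```

Copula-based temporal dependence, as in the paper, would then link consecutive Skellam-distributed observations while leaving this integer-valued marginal intact.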
