Search Results (21)

Search Parameters:
Keywords = Gaussian copula regression

22 pages, 6277 KB  
Article
CoLIME with 2D Copulas for Reliable Local Explanations on Imbalanced Network Data
by Mantas Bacevicius, Kristina Sutiene, Lukas Malakauskas and Agne Paulauskaite-Taraseviciene
Appl. Sci. 2026, 16(1), 119; https://doi.org/10.3390/app16010119 - 22 Dec 2025
Viewed by 429
Abstract
Local Interpretable Model-agnostic Explanations (LIME) is a widely used technique for interpreting individual predictions of complex “black-box” models by fitting a simple surrogate model to synthetic perturbations of the input. However, its standard perturbation strategy of sampling features independently from a Gaussian distribution often generates unrealistic samples and neglects inter-feature dependencies. This can lead to low local fidelity (poor approximation of the model’s behavior) and unstable explanations across different runs. This paper presents CoLIME, which is a copula-based perturbation generation framework for LIME, designed to capture the underlying data distribution and inter-feature dependencies more accurately. The framework employs bivariate (2D) copula models to jointly sample correlated features while fitting suitable marginal distributions for individual features. Furthermore, perturbation localization strategies were implemented, restricting perturbations to a defined local radius and maintaining specific property values to ensure that the synthesized samples remain representative of the actual local environment. The proposed approach was evaluated on a network intrusion detection dataset, comparing the fidelity and stability of LIME under Gaussian versus copula-based perturbations, using Ridge regression as the surrogate explainer. Empirically, for the most dependent feature pairs, CoLIME increases mean surrogate fidelity by 21.84–50.31% on the merged CIC-IDS2017/2018 dataset and by 29.28–60.24% on the UNSW-NB15 dataset. Stability is similarly improved, with mean Jaccard similarity gains of 3.78–5.45% and 1.95–2.12%, respectively. These improvements demonstrate that dependency-preserving perturbations provide a significantly more reliable foundation for explaining complex network intrusion detection models. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence Technology and Its Applications)
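The perturbation mechanism this abstract describes can be sketched with a bivariate Gaussian copula (an illustrative reconstruction, not the authors' CoLIME code: the two-feature synthetic data, empirical-quantile marginals, and sample sizes are all assumptions):

```python
# Sketch: sample two dependent features through a 2D Gaussian copula instead of
# independent Gaussians, so LIME-style perturbations keep inter-feature dependence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in "training data" with two strongly dependent features.
z = rng.normal(size=1000)
X = np.column_stack([z + 0.1 * rng.normal(size=1000),
                     np.exp(z) + 0.1 * rng.normal(size=1000)])

# 1. Gaussian-copula correlation from Spearman's rank correlation.
rho_s, _ = stats.spearmanr(X[:, 0], X[:, 1])
rho = 2.0 * np.sin(np.pi * rho_s / 6.0)      # latent Gaussian correlation

# 2. Sample correlated uniforms from the 2D Gaussian copula.
g = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=500)
u = stats.norm.cdf(g)                        # uniform marginals, dependence kept

# 3. Map uniforms through the empirical marginal quantiles.
perturbed = np.column_stack([np.quantile(X[:, j], u[:, j]) for j in range(2)])

# Dependence survives the perturbation step; independent Gaussian noise would lose it.
r2, _ = stats.spearmanr(perturbed[:, 0], perturbed[:, 1])
print(r2 > 0.8)
```

A full CoLIME-style pipeline would additionally restrict the samples to a local radius around the explained instance and fit the surrogate on them.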

21 pages, 1006 KB  
Article
Wind Reference Year: A New Approach
by Roberto Lázaro, Julio J. Melero and Sergio Arregui
Appl. Sci. 2025, 15(24), 13147; https://doi.org/10.3390/app152413147 - 14 Dec 2025
Viewed by 450
Abstract
The representativeness of long-term wind data at a site remains a challenge, as it is essential for resource analysis, production adjustment in operating plants, and the simulation of hybridised plants. A representative one-year hourly time series, known as a Wind Reference Year (WRY), is required, yet the availability of long-term real data is rare, making the estimation of WRY from reanalysis data and shorter measurement campaigns a common approach. In this study, Gaussian Mixture Copula Models (GMCM) and five regression models were applied and compared. The GMCM was trained using 15 years of reanalysis data to generate simulations, and subsequently, regression-based Measure–Correlate–Predict (MCP) methods were applied to adapt the simulated reference year to site-specific conditions. Finally, the Hungarian algorithm was used to reorder the simulated data series, aligning it with a typical wind pattern and producing the WRY dataset. The results were validated against 15 years of real measurements and benchmarked against a heuristic method based on long-term similarity of main wind parameters and the commercial tool Windographer. The findings demonstrate the potential of the proposed method, showing improvements over existing techniques and providing a robust approach to constructing representative WRY datasets. Full article
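The final reordering step mentioned above, matching a simulated series to a typical wind pattern with the Hungarian algorithm, can be sketched as follows (an assumed reading of the abstract, with made-up Weibull wind speeds; not the authors' implementation):

```python
# Reorder simulated wind speeds so they best align, one-to-one, with a
# "typical" reference pattern, minimising the total absolute difference.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
typical = np.sort(rng.weibull(2.0, size=24) * 8.0)    # stand-in typical daily pattern
simulated = rng.weibull(2.0, size=24) * 8.0           # unordered simulated values

cost = np.abs(typical[:, None] - simulated[None, :])  # cost[i, j] = |typical_i - sim_j|
rows, cols = linear_sum_assignment(cost)              # Hungarian algorithm
reordered = simulated[cols]                           # simulated values in pattern order

# The reordering is a pure permutation: the simulated distribution is unchanged,
# while the total mismatch to the typical pattern can only decrease.
print(np.abs(typical - reordered).sum() <= np.abs(typical - simulated).sum())
```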

25 pages, 40577 KB  
Article
Analysis of Microbiome for AP and CRC Discrimination
by Alessio Rotelli, Ali Salman, Leandro Di Gloria, Giulia Nannini, Elena Niccolai, Alessio Luschi, Amedeo Amedei and Ernesto Iadanza
Bioengineering 2025, 12(7), 713; https://doi.org/10.3390/bioengineering12070713 - 29 Jun 2025
Cited by 1 | Viewed by 1004
Abstract
Microbiome data analysis is essential for understanding the role of microbial communities in human health. However, limited data availability often hinders research progress, and synthetic data generation could offer a promising solution to this problem. This study aims to explore the use of machine learning (ML) to enrich an unbalanced dataset consisting of microbial operational taxonomic unit (OTU) counts of 148 samples, belonging to 61 patients. In detail, 34 samples are from 16 adenomatous polyps (AP) patients, while 114 samples are from 46 colorectal cancer (CRC) patients. Synthesis of AP and CRC samples was conducted using the Synthetic Data Vault Python library, employing a Gaussian Copula synthesiser. Subsequently, the synthesised data quality was evaluated using a logistic regression model in parallel with an optimised support vector machine algorithm (polynomial kernel). The data quality is considered good when neither of the two algorithms can discriminate between real and synthetic data, showing low accuracy, F1 score, and precision values. Furthermore, additional statistical tests were employed to confirm the similarity between real and synthetic data. After data validation, layer-wise relevance propagation (LRP) was performed on a deep learning classifier to extract important OTU features from the generated dataset, to discriminate between CRC patients and those affected by AP. Exploiting the acquired features, which correspond to unique bacterial taxa, ML classifiers were trained and tested to estimate the validity of such microorganisms in recognising AP and CRC samples. The simplified version of the original OTU table opens up opportunities for further investigations, especially in the realm of extensive data synthesis. This involves a deeper exploration and augmentation of the condensed data to uncover new insights and patterns that might not be readily apparent in the original, more complex form. 
Digging deeper into the simplified data may help us better grasp the biological or ecological processes reflected in the OTU data. Transitioning from this exploration, the synergy of ML and synthetic data enrichment holds promise for advancing microbiome research. This approach enhances classification accuracy and reveals hidden microbial markers that could prove valuable in clinical practice as a diagnostic and prognostic tool. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence for Medical Diagnosis)
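The paper uses the Synthetic Data Vault's Gaussian copula synthesiser; the mechanism underneath can be illustrated with a minimal hand-rolled version (the stand-in "OTU counts" and all parameters below are assumptions, not the study's data):

```python
# Gaussian-copula tabular synthesis in three steps: normal scores, latent
# correlation, and inversion through the empirical marginal quantiles.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Stand-in "OTU counts": skewed, mutually dependent columns.
lam = rng.gamma(2.0, 2.0, size=(200, 1))
real = rng.poisson(lam * np.array([1.0, 3.0, 0.5]))   # 200 samples x 3 taxa

# 1. Map each column to normal scores via its empirical CDF.
n, d = real.shape
u = stats.rankdata(real, axis=0) / (n + 1)            # uniforms in (0, 1)
z = stats.norm.ppf(u)

# 2. Estimate the latent Gaussian correlation.
corr = np.corrcoef(z, rowvar=False)

# 3. Sample new latent Gaussians, push back through empirical quantiles.
z_new = rng.multivariate_normal(np.zeros(d), corr, size=500)
u_new = stats.norm.cdf(z_new)
synthetic = np.column_stack([np.quantile(real[:, j], u_new[:, j]) for j in range(d)])

print(synthetic.shape)   # (500, 3)
```

The synthetic rows reproduce the marginal shapes and the correlation pattern of `real`, which is exactly the property the paper's real-vs-synthetic classifiers are used to check.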

18 pages, 803 KB  
Article
Gaussian Process with Vine Copula-Based Context Modeling for Contextual Multi-Armed Bandits
by Jong-Min Kim
Mathematics 2025, 13(13), 2058; https://doi.org/10.3390/math13132058 - 21 Jun 2025
Cited by 1 | Viewed by 1415
Abstract
We propose a novel contextual multi-armed bandit (CMAB) framework that integrates copula-based context generation with Gaussian Process (GP) regression for reward modeling, addressing complex dependency structures and uncertainty in sequential decision-making. Context vectors are generated using Gaussian and vine copulas to capture nonlinear dependencies, while arm-specific reward functions are modeled via GP regression with Beta-distributed targets. We evaluate three widely used bandit policies—Thompson Sampling (TS), ε-Greedy, and Upper Confidence Bound (UCB)—on simulated environments informed by real-world datasets, including Boston Housing and Wine Quality. The Boston Housing dataset exemplifies heterogeneous decision boundaries relevant to housing-related marketing, while the Wine Quality dataset introduces sensory feature-based arm differentiation. Our empirical results indicate that the ε-Greedy policy consistently achieves the highest cumulative reward and lowest regret across multiple runs, outperforming both GP-based TS and UCB in high-dimensional, copula-structured contexts. These findings suggest that combining copula theory with GP modeling provides a robust and flexible foundation for data-driven sequential experimentation in domains characterized by complex contextual dependencies. Full article
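The ε-Greedy policy that the abstract reports as the strongest performer can be sketched in isolation (a minimal illustration with hypothetical Beta reward means; the paper's GP reward models and copula-generated contexts are omitted):

```python
# Minimal epsilon-greedy multi-armed bandit: explore with probability eps,
# otherwise exploit the arm with the best running mean reward.
import numpy as np

rng = np.random.default_rng(3)
true_means = np.array([0.2, 0.5, 0.8])     # hypothetical per-arm Beta reward means
eps, n_arms, T = 0.1, 3, 5000

counts = np.zeros(n_arms)
values = np.zeros(n_arms)                  # running mean reward per arm

for t in range(T):
    if rng.random() < eps:
        arm = int(rng.integers(n_arms))    # explore
    else:
        arm = int(np.argmax(values))       # exploit
    reward = rng.beta(true_means[arm] * 10, (1 - true_means[arm]) * 10)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

print(int(np.argmax(counts)))              # the highest-mean arm dominates play
```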

15 pages, 4627 KB  
Article
Forecasting COVID-19 Cases, Hospital Admissions, and Deaths Based on Wastewater SARS-CoV-2 Surveillance Using Gaussian Copula Time Series Marginal Regression Model
by Hueiwang Anna Jeng, Norou Diawara, Nancy Welch, Cynthia Jackson, Rekha Singh, Kyle Curtis, Raul Gonzalez, David Jurgens and Sasanka Adikari
COVID 2025, 5(2), 25; https://doi.org/10.3390/covid5020025 - 18 Feb 2025
Viewed by 1797
Abstract
Modeling efforts are needed to predict trends in COVID-19 cases and related health outcomes, aiding in the development of management strategies and adaptation measures. This study was conducted to assess whether the SARS-CoV-2 viral load in wastewater could serve as a predictor for forecasting COVID-19 cases, hospitalizations, and deaths using copula-based time series modeling. SARS-CoV-2 RNA load in wastewater in Chesapeake, VA, was measured using the RT-qPCR method. A Gaussian copula time series (CTS) marginal regression model, incorporating an autoregressive moving average model and Gaussian copula function, was used as a forecasting model. Wastewater SARS-CoV-2 viral loads were correlated with COVID-19 cases. The forecasted model with both Poisson and negative binomial marginal distributions yielded trends in COVID-19 cases that closely paralleled the reported cases, with 90% of the forecasted COVID-19 cases falling within the 99% confidence interval of the reported data. However, the model did not effectively forecast the trends and the rising cases of hospital admissions and deaths. The forecasting model was validated for predicting clinical cases and trends with a non-normal distribution in a time series manner. Additionally, the model showed potential for using wastewater SARS-CoV-2 viral load as a predictor for forecasting COVID-19 cases. Full article
(This article belongs to the Section COVID Clinical Manifestations and Management)
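The Gaussian copula time-series idea, count marginals with serial dependence supplied by a latent Gaussian process, can be sketched as follows (an assumed simplification of the paper's CTS marginal regression model: the latent AR(1), Poisson mean, and parameters are all illustrative):

```python
# A latent stationary Gaussian AR(1) provides the serial dependence; counts are
# obtained by pushing its N(0,1) marginals through the Poisson quantile function.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
T, phi, mu = 300, 0.8, 20.0                 # length, AR(1) coefficient, Poisson mean

z = np.empty(T)
z[0] = rng.normal()
for t in range(1, T):
    z[t] = phi * z[t - 1] + np.sqrt(1 - phi**2) * rng.normal()

# Copula step: N(0,1) -> U(0,1) -> Poisson(mu), dependence preserved.
u = stats.norm.cdf(z)
counts = stats.poisson.ppf(u, mu).astype(int)

# Serial dependence carries over to the counts.
lag1 = np.corrcoef(counts[:-1], counts[1:])[0, 1]
print(lag1 > 0.5)
```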

15 pages, 305 KB  
Article
Copula-Based Regression with Mixed Covariates
by Saeed Aldahmani, Othmane Kortbi and Mhamed Mesfioui
Mathematics 2024, 12(22), 3525; https://doi.org/10.3390/math12223525 - 12 Nov 2024
Cited by 2 | Viewed by 2722
Abstract
In this paper, we focused on developing copula-based modeling procedures that effectively capture the dependence between response and explanatory variables. Building upon the work of Noh et al. (J. Am. Stat. Assoc. 2013, 108, 676–688) we extended copula-based regression to accommodate both continuous and discrete covariates. Specifically, we explored the construction of copulas to estimate the conditional mean of the response variable given the covariates, elucidating the relationship between copula structures and marginal distributions. We considered various estimation methods for copulas and distribution functions, presenting a diverse array of estimators for the conditional mean function. These estimators range from non-parametric to semi-parametric and fully parametric, offering flexibility in modeling regression relationships. An adapted algorithm is applied to construct copulas and simulations are carried out to replicate datasets, estimate prediction model parameters, and compare with the OLS method. The practicality and efficacy of our proposed methodologies, grounded in the principles of copula-based regression, are substantiated through methodical simulation studies. Full article
(This article belongs to the Section D1: Probability and Statistics)
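The conditional-mean construction at the heart of copula-based regression can be illustrated for a Gaussian copula (a Monte Carlo sketch of the general idea, not the paper's estimators; the lognormal marginal and correlation value are assumptions):

```python
# Under a Gaussian copula, E[Y | X = x] is obtained by conditioning the latent
# Gaussian on X's normal score and mapping draws back through Y's quantiles.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
rho = 0.7                                    # latent Gaussian copula correlation

def cond_mean_y(x, y_sample, x_sample, n_mc=20_000):
    # Normal score of x under the empirical marginal of X.
    u_x = (np.searchsorted(np.sort(x_sample), x) + 0.5) / (len(x_sample) + 1)
    z_x = stats.norm.ppf(u_x)
    # Conditional latent draws, mapped through Y's empirical quantiles.
    z_y = rho * z_x + np.sqrt(1 - rho**2) * rng.normal(size=n_mc)
    return np.quantile(y_sample, stats.norm.cdf(z_y)).mean()

# Data actually generated from that copula, with a lognormal Y marginal.
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=5000)
x_sample, y_sample = z[:, 0], np.exp(z[:, 1])

m_low = cond_mean_y(-1.0, y_sample, x_sample)
m_high = cond_mean_y(1.0, y_sample, x_sample)
print(m_high > m_low)                        # positive dependence raises E[Y | X = x]
```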
16 pages, 1956 KB  
Article
The GARCH-EVT-Copula Approach to Investigating Dependence and Quantifying Risk in a Portfolio of Bitcoin and the South African Rand
by Thabani Ndlovu and Delson Chikobvu
J. Risk Financial Manag. 2024, 17(11), 504; https://doi.org/10.3390/jrfm17110504 - 8 Nov 2024
Cited by 3 | Viewed by 2990
Abstract
This study uses a hybrid model of the exponential generalised auto-regressive conditional heteroscedasticity (eGARCH)-extreme value theory (EVT)-Gumbel copula model to investigate the dependence structure between Bitcoin and the South African Rand, and quantify the portfolio risk of an equally weighted portfolio. The Gumbel copula, an extreme value copula, is preferred due to its versatile ability to capture various tail dependence structures. To model marginals, firstly, the eGARCH(1, 1) model is fitted to the growth rate data. Secondly, a mixture model featuring the generalised Pareto distribution (GPD) and the Gaussian kernel is fitted to the standardised residuals from an eGARCH(1, 1) model. The GPD is fitted to the tails while the Gaussian kernel is used in the central parts of the data set. The Gumbel copula parameter is estimated to be α=1.007, implying that the two currencies are independent. At 90%, 95%, and 99% levels of confidence, the portfolio’s diversification effects (DE) quantities using value at risk (VaR) and expected shortfall (ES) show that there is evidence of a reduction in losses (diversification benefits) in the portfolio compared to the risk of the simple sum of single assets. These results can be used by fund managers, risk practitioners, and investors to decide on diversification strategies that reduce their risk exposure. Full article
(This article belongs to the Special Issue Digital Economy and the Role of Accounting and Finance)
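The VaR/ES diversification-effect comparison described above can be sketched with simulated losses (illustrative heavy-tailed numbers, not the paper's fitted eGARCH-EVT-Gumbel model; independence between the two series is an assumption consistent with the reported copula parameter):

```python
# Compare the risk of an equally weighted portfolio with the weighted sum of the
# stand-alone risks, using value at risk (VaR) and expected shortfall (ES).
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
loss_btc = rng.standard_t(4, n) * 0.04      # heavy-tailed stand-in for Bitcoin losses
loss_zar = rng.standard_t(6, n) * 0.01      # stand-in for Rand losses

def var_es(losses, alpha=0.95):
    var = np.quantile(losses, alpha)        # loss exceeded with prob. 1 - alpha
    es = losses[losses >= var].mean()       # mean loss beyond VaR
    return var, es

port = 0.5 * loss_btc + 0.5 * loss_zar      # equally weighted portfolio
var_p, es_p = var_es(port)
var_b, es_b = var_es(loss_btc)
var_z, es_z = var_es(loss_zar)

# Diversification effect: portfolio risk below the weighted sum of single risks.
de_var = 0.5 * var_b + 0.5 * var_z - var_p
print(de_var > 0)
```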

13 pages, 630 KB  
Article
A Copula Discretization of Time Series-Type Model for Examining Climate Data
by Dimuthu Fernando, Olivia Atutey and Norou Diawara
Mathematics 2024, 12(15), 2419; https://doi.org/10.3390/math12152419 - 3 Aug 2024
Viewed by 2430
Abstract
The study presents a comparative analysis of climate data under two scenarios: a Gaussian copula marginal regression model for count time series data and a copula-based bivariate count time series model. These models, built after comprehensive simulations, offer adaptable autocorrelation structures considering the daily average temperature and humidity data observed at a regional airport in Mobile, AL. Full article
(This article belongs to the Special Issue Statistics and Data Science)

23 pages, 1267 KB  
Article
Comparative Analysis of Local Differential Privacy Schemes in Healthcare Datasets
by Andres Hernandez-Matamoros and Hiroaki Kikuchi
Appl. Sci. 2024, 14(7), 2864; https://doi.org/10.3390/app14072864 - 28 Mar 2024
Cited by 11 | Viewed by 4142
Abstract
In the rapidly evolving landscape of healthcare technology, the critical need for robust privacy safeguards is undeniable. Local Differential Privacy (LDP) offers a potential solution to address privacy concerns in data-rich industries. However, challenges such as the curse of dimensionality arise when dealing with multidimensional data. This is particularly pronounced in k-way joint probability estimation, where higher values of k lead to decreased accuracy. To overcome these challenges, we propose the integration of Bayesian Ridge Regression (BRR), known for its effectiveness in handling multicollinearity. Our approach demonstrates robustness, manifesting a noteworthy reduction in average variant distance when compared to baseline algorithms such as LOPUB and LOCOP. Additionally, we leverage the R-squared metric to highlight BRR’s advantages, illustrating its performance relative to LASSO, as LOPUB and LOCOP are based on it. This paper addresses a relevant concern related to datasets exhibiting high correlation between attributes, potentially allowing the extraction of information from one attribute to another. We convincingly show the superior performance of BRR over LOPUB and LOCOP across 15 datasets with varying average correlation attributes. Healthcare takes center stage in this collection of datasets. Moreover, the datasets explore diverse fields such as finance, travel, and social science. In summary, our proposed approach consistently outperforms the LOPUB and LOCOP algorithms, particularly when operating under smaller privacy budgets and with datasets characterized by lower average correlation attributes. This signifies the efficacy of Bayesian Ridge Regression in enhancing privacy safeguards in healthcare technology. Full article
(This article belongs to the Special Issue Data Privacy and Security for Information Engineering)
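Why ridge-style regularisation helps under the multicollinearity this abstract targets can be shown in a few lines (a plain-numpy ridge sketch of the core idea; the paper uses Bayesian Ridge Regression, whose priors effectively impose a similar penalty):

```python
# With nearly collinear columns, OLS leaves one coefficient direction almost
# unidentified; an L2 penalty pins it down.
import numpy as np

rng = np.random.default_rng(7)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)         # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.normal(size=n)      # true coefficients: [1, 1]

ols = np.linalg.lstsq(X, y, rcond=None)[0]  # can be wildly unstable here
lam = 10.0
ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

# The penalty shrinks the poorly identified (b1 - b2) direction toward zero,
# so the two coefficients end up close to each other (and their sum near 2).
print(abs(ridge[0] - ridge[1]) < 0.2)
```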

19 pages, 1907 KB  
Article
Distribution Prediction of Decomposed Relative EVA Measure with Levy-Driven Mean-Reversion Processes: The Case of an Automotive Sector of a Small Open Economy
by Zdeněk Zmeškal, Dana Dluhošová, Karolina Lisztwanová, Antonín Pončík and Iveta Ratmanová
Forecasting 2023, 5(2), 453-471; https://doi.org/10.3390/forecast5020025 - 29 May 2023
Cited by 3 | Viewed by 2723
Abstract
The paper is focused on predicting the financial performance of a small open economy with an automotive industry with an above-standard share. The paper aims to predict the probability distribution of the decomposed relative economic value-added measure of the automotive production sector NACE 29 in the Czech economy. An advanced Monte Carlo simulation prediction model is applied using the exact pyramid decomposition function. The problem is modelled using advanced stochastic process instruments such as Levy-driven mean-reversion, skew t-regression, normal inverse Gaussian distribution, and t-copula interdependencies. The proposed method procedure was found to fit the investigated financial ratios sufficiently, and the estimation was valid. The decomposed approach allows the reflection of the ratios’ complex relationships and improves the prediction results. The decomposed results are compared with the direct prediction. Precision distribution tests confirmed the superiority of the decomposed approach for particular data. Moreover, the Czech automotive sector tends to decrease the mean value and median of financial performance in the future with negative asymmetry and high volatility hidden in financial ratios decomposition. Scholars can generally use forecasting methods to investigate economic system development, and practitioners can obtain quality and valuable information for decision making. Full article
(This article belongs to the Section Forecasting in Economics and Management)
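The mean-reversion Monte Carlo step can be sketched with a Gaussian-driven Ornstein-Uhlenbeck process (a deliberate simplification: the paper's processes are Levy-driven with normal inverse Gaussian innovations, and all parameter values below are illustrative):

```python
# Euler simulation of dX = kappa * (theta - X) dt + sigma dW across many paths:
# starting above the long-run mean, the simulated distribution pulls back to it.
import numpy as np

rng = np.random.default_rng(6)
kappa, theta, sigma = 2.0, 0.05, 0.02      # reversion speed, long-run mean, volatility
dt, steps, n_paths = 1 / 12, 24, 10_000    # monthly steps, 2 years, Monte Carlo paths

x = np.full(n_paths, 0.10)                 # all paths start above theta
for _ in range(steps):
    x += kappa * (theta - x) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

print(abs(x.mean() - theta) < 0.01)        # mean has reverted to near theta
```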

17 pages, 786 KB  
Article
Copula Dynamic Conditional Correlation and Functional Principal Component Analysis of COVID-19 Mortality in the United States
by Jong-Min Kim
Axioms 2022, 11(11), 619; https://doi.org/10.3390/axioms11110619 - 7 Nov 2022
Cited by 7 | Viewed by 3143
Abstract
This paper shows a visual analysis and the dependence relationships of COVID-19 mortality data in 50 states plus Washington, D.C., from January 2020 to 1 September 2022. Since the mortality data are severely skewed and highly dispersed, a traditional linear model is not suitable for the data. As such, we use a Gaussian copula marginal regression (GCMR) model and vine copula-based quantile regression to analyze the COVID-19 mortality data. For a visual analysis of the COVID-19 mortality data, a functional principal component analysis (FPCA), graphical model, and copula dynamic conditional correlation (copula-DCC) are applied. The visual from the graphical model shows five COVID-19 mortality equivalence groups in the US, and the results of the FPCA visualize the COVID-19 daily mortality time trends for 50 states plus Washington, D.C. The GCMR model investigates the COVID-19 daily mortality relationship between four major states and the rest of the states in the US. The copula-DCC models investigate the time-trend dependence relationship between the COVID-19 daily mortality data of four major states. Full article
(This article belongs to the Special Issue Statistical Methods and Applications)

28 pages, 7190 KB  
Article
Modeling the Interdependence Structure between Rain and Radar Variables Using Copulas: Applications to Heavy Rainfall Estimation by Weather Radar
by Eric-Pascal Zahiri, Modeste Kacou, Marielle Gosset and Sahouarizié Adama Ouattara
Atmosphere 2022, 13(8), 1298; https://doi.org/10.3390/atmos13081298 - 15 Aug 2022
Cited by 3 | Viewed by 3173
Abstract
In radar quantitative precipitation estimates (QPE), the progressive evolution of rainfall algorithms has been guided by attempts to reduce the uncertainties in rainfall retrieval. However, because most of the algorithms are based on the linear dependence between radar and rain variables and designed for rain rates ranging from light to moderate rainfall, they result in misleading estimations of intense or strong rainfall rates. In this paper, based on extensive data gathered during the AMMA and Megha-Tropiques data campaigns, we provided a way to improve the estimation of intense rainfall rates from radar measurements. To this end, we designed a formulation of the QPE algorithm that accounts for the co-dependency between radar observables and rainfall rate using copula simulation synthetic datasets and using the quantile regression features for a more complete picture of covariate effects. The results show a clear improvement in heavy rainfall retrieval from radar data using copula-based R(KDP) algorithms derived from a realistic simulated dataset. For a better performance, Gaussian copula-derived algorithms require a 0.8 percentile distribution to be considered. Conversely, lower percentiles are better for Student’s, Gumbel and HRT copula estimators when retrieving heavy rainfall rates (R > 30). This highlights the need to investigate the entire conditional distribution to determine the performance of radar rainfall estimators. Full article

15 pages, 1179 KB  
Article
Forecasting Crude Oil Prices with Major S&P 500 Stock Prices: Deep Learning, Gaussian Process, and Vine Copula
by Jong-Min Kim, Hope H. Han and Sangjin Kim
Axioms 2022, 11(8), 375; https://doi.org/10.3390/axioms11080375 - 29 Jul 2022
Cited by 17 | Viewed by 5955
Abstract
This paper introduces methodologies in forecasting oil prices (Brent and WTI) with multivariate time series of major S&P 500 stock prices using Gaussian process modeling, deep learning, and vine copula regression. We also apply Bayesian variable selection and nonlinear principal component analysis (NLPCA) for data dimension reduction. With a reduced number of important covariates, we also forecast oil prices (Brent and WTI) with multivariate time series of major S&P 500 stock prices using Gaussian process modeling, deep learning, and vine copula regression. To apply real data to the proposed methods, we select monthly log returns of 2 oil prices and 74 large-cap, major S&P 500 stock prices across the period of February 2001–October 2019. We conclude that vine copula regression with NLPCA is superior overall to other proposed methods in terms of the measures of prediction errors. Full article
(This article belongs to the Special Issue Statistical Methods and Applications)

14 pages, 3696 KB  
Article
Examining Factors That Affect Movie Gross Using Gaussian Copula Marginal Regression
by Joshua Eklund and Jong-Min Kim
Forecasting 2022, 4(3), 685-698; https://doi.org/10.3390/forecast4030037 - 21 Jul 2022
Cited by 1 | Viewed by 3957
Abstract
In this research, we investigate the relationship between a movie’s gross and its budget, year of release, season of release, genre, and rating. The movie data used in this research are severely skewed to the right, resulting in the problems of nonlinearity, non-normal distribution, and non-constant variance of the error terms. To overcome these difficulties, we employ a Gaussian copula marginal regression (GCMR) model after adjusting the gross and budget variables for inflation using a consumer price index. An analysis of the data found that year of release, budget, season of release, genre, and rating were all statistically significant predictors of movie gross. Specifically, one unit increases in budget and year were associated with an increase in movie gross. G movies were found to gross more than all other kinds of movies (PG, PG-13, R, and Other). Movies released in the fall were found to gross the least compared to the other three seasons. Finally, action movies were found to gross more than biography, comedy, crime, and other movie genres, but gross less than adventure, animation, drama, fantasy, horror, and mystery movies. Full article
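The preprocessing step of deflating gross and budget by a consumer price index can be sketched as follows (illustrative CPI-U annual averages and base year, not the paper's data):

```python
# Deflate a nominal dollar amount to constant (base-year) dollars before modeling.
cpi = {2000: 172.2, 2010: 218.1, 2020: 258.8}   # illustrative CPI-U annual averages
base_year = 2020

def to_real(nominal, year):
    """Convert a nominal amount from `year` into base-year dollars."""
    return nominal * cpi[base_year] / cpi[year]

print(round(to_real(100.0, 2000), 1))  # $100 in 2000 is about $150.3 in 2020 dollars
```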

15 pages, 965 KB  
Article
On the Relationship of Cryptocurrency Price with US Stock and Gold Price Using Copula Models
by Jong-Min Kim, Seong-Tae Kim and Sangjin Kim
Mathematics 2020, 8(11), 1859; https://doi.org/10.3390/math8111859 - 23 Oct 2020
Cited by 36 | Viewed by 10378
Abstract
This paper examines the relationship of the leading financial assets Bitcoin, Gold, and the S&P 500 with GARCH-Dynamic Conditional Correlation (DCC), Nonlinear Asymmetric GARCH DCC (NA-DCC), Gaussian copula-based GARCH-DCC (GC-DCC), and Gaussian copula-based Nonlinear Asymmetric-DCC (GCNA-DCC). Under high-volatility financial conditions such as the COVID-19 pandemic, there is a computational difficulty in applying the traditional DCC method to the selected cryptocurrencies. To overcome this limitation, GC-DCC and GCNA-DCC are applied to investigate the time-varying relationship among Bitcoin, Gold, and the S&P 500. In terms of log-likelihood, we show that GC-DCC and GCNA-DCC model the relationship of Bitcoin with Gold and the S&P 500 better than DCC and NA-DCC. We also examine the relationships between the time-varying conditional correlation and Bitcoin and S&P 500 volatility using a Gaussian Copula Marginal Regression (GCMR) model. The empirical findings show that the S&P 500 and Gold prices are statistically significant for Bitcoin in terms of both log-return and volatility. Full article
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)
