Search Results (16)

Search Parameters:
Keywords = risk bounds and equalities

21 pages, 373 KB  
Article
Robust Learning of Tail Dependence
by Omid M. Ardakani
Econometrics 2025, 13(4), 47; https://doi.org/10.3390/econometrics13040047 - 20 Nov 2025
Viewed by 383
Abstract
Accurate estimation of tail dependence is difficult due to model misspecification and data contamination. This paper introduces a class of minimum f-divergence estimators for the tail dependence coefficient that unifies robust estimation with extreme value theory. I establish strong consistency and derive the semiparametric efficiency bound for estimating extremal dependence, the extremal Cramér–Rao bound. I show that the estimator achieves this bound if and only if the second derivative of its generating function at unity equals one, formally characterizing the trade-off between robustness and asymptotic efficiency. An empirical application to systemic risk in the US banking sector shows that the robust Hellinger estimator provides stability during crises, while the efficient maximum likelihood estimator offers precision during normal periods. Full article
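
The abstract does not reproduce the estimator itself; for orientation, below is a minimal sketch of the standard empirical (nonparametric) tail dependence coefficient that robust estimators of this kind refine. The threshold q and the simulated heavy-tailed data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def empirical_upper_tail_dependence(x, y, q=0.95):
    """Plug-in estimate of the upper tail dependence coefficient:
    the fraction of observations where y exceeds its q-quantile,
    among those where x exceeds its q-quantile."""
    n = len(x)
    u = np.argsort(np.argsort(x)) / (n + 1)  # ranks -> pseudo-observations
    v = np.argsort(np.argsort(y)) / (n + 1)
    above_x = u > q
    return np.sum(above_x & (v > q)) / max(np.sum(above_x), 1)

rng = np.random.default_rng(0)
z = rng.standard_t(df=3, size=5000)            # shared heavy-tailed factor
x = z + 0.5 * rng.standard_t(df=3, size=5000)
y = z + 0.5 * rng.standard_t(df=3, size=5000)
print(empirical_upper_tail_dependence(x, y))
```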

21 pages, 517 KB  
Article
Finite-Horizon Optimal Consumption and Investment with Upper and Lower Constraints on Consumption
by Geonwoo Kim and Junkee Jeon
Mathematics 2025, 13(22), 3598; https://doi.org/10.3390/math13223598 - 10 Nov 2025
Viewed by 508
Abstract
We study a finite-horizon optimal consumption and investment problem in a complete continuous-time market where consumption is restricted to lie between a fixed lower bound L and upper bound H. Assuming constant relative risk aversion (CRRA) preferences, we employ the dual-martingale approach to reformulate the problem and derive closed-form integral representations for the dual value function and its derivatives. These results yield explicit feedback formulas for the optimal consumption, portfolio allocation, and wealth processes. We establish the duality theorem linking the primal and dual value functions and verify the regularity and convexity properties of the dual solution. Our results show that the upper and lower consumption bounds transform the linear Merton rule into a piecewise policy: consumption equals the lower bound L when wealth is low, follows the unconstrained Merton ratio in the interior region, and is capped at the upper bound H when wealth is high. Full article
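
As a rough illustration of the piecewise policy described above, the limiting shape of the consumption rule can be sketched as the Merton proportion clipped to the band [L, H]. This is only the qualitative intuition under assumed parameter values; the paper derives the exact closed-form feedback formulas, which generally deviate from naive clipping away from the bounds.

```python
import numpy as np

def constrained_consumption(wealth, merton_ratio, c_low, c_high):
    """Qualitative shape of the constrained policy: the linear Merton
    rule c = merton_ratio * wealth, clipped to the band [c_low, c_high]."""
    return float(np.clip(merton_ratio * wealth, c_low, c_high))

# Illustrative numbers only: a 4% consumption-to-wealth ratio, L=1, H=5.
for w in (10.0, 75.0, 200.0):
    print(w, constrained_consumption(w, 0.04, 1.0, 5.0))
```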

40 pages, 794 KB  
Article
An Automated Decision Support System for Portfolio Allocation Based on Mutual Information and Financial Criteria
by Massimiliano Kaucic, Renato Pelessoni and Filippo Piccotto
Entropy 2025, 27(5), 480; https://doi.org/10.3390/e27050480 - 29 Apr 2025
Viewed by 1772
Abstract
This paper introduces a two-phase decision support system based on information theory and financial practices to assist investors in solving cardinality-constrained portfolio optimization problems. Firstly, the approach employs a stock-picking procedure based on an interactive multi-criteria decision-making method (the so-called TODIM method). More precisely, the best-performing assets from the investable universe are identified using three financial criteria. The first criterion is based on mutual information, and it is employed to capture the microstructure of the stock market. The second one is the momentum, and the third is the upside-to-downside beta ratio. To calculate the preference weights used in the chosen multi-criteria decision-making procedure, two methods are compared, namely equal and entropy weighting. In the second stage, this work considers a portfolio optimization model where the objective function is a modified version of the Sharpe ratio, consistent with the choices of a rational agent even when faced with negative risk premiums. Additionally, the portfolio design incorporates a set of bound, budget, and cardinality constraints, together with a set of risk budgeting restrictions. To solve the resulting non-smooth programming problem with non-convex constraints, this paper proposes a variant of the distance-based parameter adaptation for success-history-based differential evolution with double crossover (DISH-XX) algorithm equipped with a hybrid constraint-handling approach. Numerical experiments on the US and European stock markets over the past ten years are conducted, and the results show that the flexibility of the proposed portfolio model allows better control of losses, particularly during market downturns, thereby providing superior or at least comparable ex post performance with respect to several benchmark investment strategies. Full article
(This article belongs to the Special Issue Entropy, Econophysics, and Complexity)
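
The abstract does not specify how mutual information is estimated; a common plug-in choice is a histogram-based estimate between pairs of return series, sketched below with synthetic data (the binning scheme and the factor model are assumptions for illustration, not the paper's procedure).

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mi_between_returns(r1, r2, bins=10):
    """Histogram-based mutual information between two return series."""
    c1 = np.digitize(r1, np.histogram_bin_edges(r1, bins=bins))
    c2 = np.digitize(r2, np.histogram_bin_edges(r2, bins=bins))
    return mutual_info_score(c1, c2)

rng = np.random.default_rng(1)
market = rng.normal(size=1000)                  # common market factor
a = 0.8 * market + 0.2 * rng.normal(size=1000)  # two correlated assets
b = 0.7 * market + 0.3 * rng.normal(size=1000)
print(mi_between_returns(a, b))
```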

22 pages, 912 KB  
Article
The Effects of Fossil Fuel Consumption-Related CO2 on Health Outcomes in South Africa
by Akinola Gbenga Wilfred and Abieyuwa Ohonba
Sustainability 2024, 16(11), 4751; https://doi.org/10.3390/su16114751 - 3 Jun 2024
Cited by 6 | Viewed by 2513
Abstract
The consumption of fossil fuels significantly contributes to the growth of South Africa’s economy but produces carbon dioxide (CO2), which is detrimental to environmental sustainability, with overall effects on health outcomes. This study sought to (i) examine the impacts of fossil energy consumption-related CO2 emissions on the under-five mortality and infant mortality rates in South Africa and (ii) analyse the causal relationship between fossil energy consumption, CO2 emissions, and mortality rates in South Africa. Linear and nonlinear ARDL bounds tests and the Toda–Yamamoto causality test were used to establish the long-run equilibrium properties and the causal effects of the models’ variables. Health outcome data include the under-five mortality rate (MTR1) and infant mortality rate (MTR2). Other explanatory variables include fossil energy consumption (FEC), inflation (Inf), carbon dioxide emissions (CO2), and government expenditure (GEH). It is evident from the results of the linear ARDL that the first lag of the under-five mortality rate in the short run has a positive and significant impact on the under-five mortality rate in South Africa. Holding the other variables constant, the under-five mortality rate in South Africa would increase by 0.630% for every 1% increase in its lagged values. Fossil energy consumption has a positive and significant effect on the under-five mortality rate in South Africa. This significant relationship implies that a 1% increase in fossil energy consumption increases the under-five mortality rate per 1000 persons per year in South Africa by 0.418% in the short run, all things being equal. The results from the Toda–Yamamoto causality test revealed that there is no causality between the under-five mortality rate and either the consumption of fossil fuels or CO2 emissions in South Africa. The results from the nonlinear ARDL presented four separate scenarios. In the short run, during increasing levels of CO2 in the initial period (lag of CO2), a 1% increase in CO2 would decrease the under-five mortality rate by 1.15%. During periods of decreasing levels of CO2 in the short run, a 1% increase in CO2 would increase the infant mortality rate by 0.66%. Again, during previous and current periods of decreasing levels of FEC, a 1% increase in FEC would increase the infant mortality rate by 0.45% and 0.32%, respectively. In the long run, during periods of increasing levels of CO2, a 1% increase in CO2 would decrease the infant mortality rate by 4.62%, whereas during decreasing levels of CO2, a 1% increase in CO2 would increase the infant mortality rate by 2.3%. The risk posed by CO2 emissions and their effects on humans can then be minimised through government expansionary policy within health programmes. Full article
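
For readers unfamiliar with the ARDL setup, a minimal short-run regression of the kind reported above (the dependent variable on its own lag and on current and lagged FEC) can be sketched as follows with synthetic data; the bounds-testing and Toda–Yamamoto steps of the paper are omitted, and all names and numbers are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "MTR1": rng.normal(size=60).cumsum(),  # stand-in for under-five mortality
    "FEC":  rng.normal(size=60).cumsum(),  # stand-in for fossil energy use
})
X = pd.DataFrame({
    "MTR1_lag1": df["MTR1"].shift(1),      # lagged dependent variable
    "FEC":       df["FEC"],
    "FEC_lag1":  df["FEC"].shift(1),
}).dropna()
X = sm.add_constant(X)
fit = sm.OLS(df["MTR1"].iloc[1:], X).fit()
print(fit.params)                           # short-run ARDL(1,1) coefficients
```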

19 pages, 1134 KB  
Review
Phthalates: The Main Issue in Quality Control in the Beverage Industry
by Alessia Iannone, Cristina Di Fiore, Fabiana Carriera, Pasquale Avino and Virgilio Stillittano
Separations 2024, 11(5), 133; https://doi.org/10.3390/separations11050133 - 26 Apr 2024
Cited by 6 | Viewed by 5395
Abstract
Phthalate esters (PAEs) are a group of chemicals used to improve the flexibility and durability of plastics. Their chemical properties and limited resistance to high temperatures promote their degradation and release into the environment. Food and beverages can be contaminated by PAEs through migration from packaging material, because PAEs are not covalently bound to the plastic, and also via different kinds of environmental sources or during processing. For instance, alcoholic drinks in plastic containers pose a particular risk, since the ethanol they contain is a good solvent for PAEs. Owing to its role as an endocrine-disrupting compound and its adverse effects on the liver, kidney, and reproductive and respiratory systems, the International Agency for Research on Cancer (IARC) classified di-(2-ethylhexyl) phthalate (DEHP) as a possible human carcinogen. For this reason, to control human exposure to PAEs, many countries have prohibited their use in food and non-food substances. For example, in Europe, Commission Regulation (EU) 2018/2005 restricts the use of DEHP, dibutyl phthalate (DBP), benzyl butyl phthalate (BBP), and diisobutyl phthalate (DiBP) as plasticizers to a concentration below 0.1% by weight in articles used by consumers or in indoor areas. There are reports from the US Food and Drug Administration (FDA) that some beverages (and food as well), particularly fruit juices, contain high levels of phthalates. In some cases, the deliberate adulteration of soft drinks with phthalate esters has been reported. This paper discusses the difficulties of performing PAE analysis in beverage matrices, in particular alcoholic beverages, as well as the main solutions adopted for quality control in the industry. Full article
(This article belongs to the Section Analysis of Food and Beverages)

21 pages, 557 KB  
Article
Bidual Representation of Expectiles
by Alejandro Balbás, Beatriz Balbás, Raquel Balbás and Jean-Philippe Charron
Risks 2023, 11(12), 220; https://doi.org/10.3390/risks11120220 - 15 Dec 2023
Cited by 4 | Viewed by 2254
Abstract
Downside risk measures play an important role in risk management problems. In particular, the value at risk (VaR) and the conditional value at risk (CVaR) have become very important instruments to address problems such as risk optimization, capital requirements, portfolio selection, pricing and hedging issues, risk transference, risk sharing, etc. In contrast, expectile risk measures are not as widely used, even though they are both coherent and elicitable. This paper addresses the bidual representation of expectiles in order to prove further important properties of these risk measures. Indeed, the bidual representation of expectiles enables us to estimate and optimize them by linear programming methods, deal with optimization problems involving expectile-linked constraints, relate expectiles to VaR and CVaR by means of both equalities and inequalities, give VaR and CVaR hyperbolic upper bounds beyond the level of confidence, and analyze whether co-monotonic additivity holds for expectiles. Illustrative applications are presented. Full article
(This article belongs to the Special Issue Optimal Investment and Risk Management)
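
For reference, the standard definition of the expectile (a general fact about expectiles, not a result specific to this paper) is the asymmetric least-squares analogue of a quantile:

```latex
e_\tau(X) \;=\; \operatorname*{arg\,min}_{x \in \mathbb{R}}\;
\mathbb{E}\!\left[\, \tau\,\big((X - x)^{+}\big)^{2}
  + (1-\tau)\,\big((x - X)^{+}\big)^{2} \right],
\qquad \tau \in (0,1),
```

with first-order condition $\tau\,\mathbb{E}[(X - e_\tau)^{+}] = (1-\tau)\,\mathbb{E}[(e_\tau - X)^{+}]$; for $\tau \ge 1/2$ the induced risk measure is coherent, which is the setting in which the bidual representation is of interest.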
15 pages, 3387 KB  
Article
The Sequential Extraction of Municipal Solid Waste Incineration Bottom Ash: Heavy Metals Mobility and Sustainable Application of Ashes
by Yingzun He and Monika Kasina
Sustainability 2023, 15(19), 14638; https://doi.org/10.3390/su151914638 - 9 Oct 2023
Cited by 9 | Viewed by 3440
Abstract
This manuscript focuses on the sustainable utilization of municipal waste incineration ashes in construction, taking into account their substantial concentration of beneficial elements and the potential environmental pollution caused by the leaching of toxic elements due to naturally occurring processes. To assess heavy metal mobility in the ashes, a sequential extraction method based on the European Community Bureau of Reference (BCR) protocol was applied. It enables the determination of heavy metal fractions and provides valuable insights into their potential environmental impact and bioavailability. More than 80% of Cd and Zn, and over 75% of Cu, exhibited strong associations with the most mobile, exchangeable fraction, while over 60% of Al and Fe were predominantly bound to the reducible fraction. The distribution of As and Cr was relatively balanced between the exchangeable and oxidizable fractions, whereas 100% of Pb was exclusively associated with the oxidizable fraction, indicating immobilization of this element in the ash. The calculated Risk Assessment Codes and Individual Contamination Factors indicated a quite high to very high risk level for the elements’ mobility and environmental contamination. For elements like Pb, Cd, Cu, and Zn, higher concentrations in the samples are associated with higher overall environmental risk. For elements like As and Cr, higher concentrations in the samples are associated with lower overall environmental risk. The studied ash exhibits potential as a resource, but it equally demands rigorous environmental management to ensure responsible utilization. The observed metal mobilization underscores the necessity for stringent containment and treatment measures to mitigate the risk of environmental contamination. Full article
(This article belongs to the Special Issue Sustainable Waste Management and Utilization)
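
The Risk Assessment Code mentioned above is conventionally computed from the share of a metal found in the exchangeable/acid-soluble fraction of the sequential extraction. The thresholds below follow the classification commonly used in the literature and are not quoted from this paper.

```python
def risk_assessment_code(exchangeable_pct):
    """Conventional RAC classes from the percentage of a metal in the
    exchangeable/acid-soluble fraction (general literature convention,
    not taken from this paper)."""
    if exchangeable_pct < 1:
        return "no risk"
    if exchangeable_pct <= 10:
        return "low risk"
    if exchangeable_pct <= 30:
        return "medium risk"
    if exchangeable_pct <= 50:
        return "high risk"
    return "very high risk"

print(risk_assessment_code(80))  # e.g. Cd or Zn in the studied ash
```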

11 pages, 1421 KB  
Communication
HyperCys: A Structure- and Sequence-Based Predictor of Hyper-Reactive Druggable Cysteines
by Mingjie Gao and Stefan Günther
Int. J. Mol. Sci. 2023, 24(6), 5960; https://doi.org/10.3390/ijms24065960 - 22 Mar 2023
Cited by 7 | Viewed by 3225
Abstract
The cysteine side chain has a free thiol group, making it the amino acid residue most often covalently modified by small molecules possessing weakly electrophilic warheads, thereby prolonging on-target residence time and reducing the risk of idiosyncratic drug toxicity. However, not all cysteines are equally reactive or accessible. Hence, to identify targetable cysteines, we propose a novel ensemble stacked machine learning (ML) model to predict hyper-reactive druggable cysteines, named HyperCys. First, the pocket, conservation, structural and energy profiles, and physicochemical properties of (non)covalently bound cysteines were collected from both protein sequences and 3D structures of protein–ligand complexes. Then, we established the HyperCys ensemble stacked model by integrating six different ML models, including K-nearest neighbors, support vector machine, light gradient boost machine, multi-layer perceptron classifier, random forest, and the meta-classifier model logistic regression. Finally, based on the hyper-reactive cysteines’ classification accuracy and other metrics, the results for different feature group combinations were compared. The results show that the accuracy, F1 score, recall score, and ROC AUC values of HyperCys are 0.784, 0.754, 0.742, and 0.824, respectively, after performing 10-fold CV with the best window size. Compared to traditional ML models with only sequence-based features or only 3D structural features, HyperCys is more accurate at predicting hyper-reactive druggable cysteines. It is anticipated that HyperCys will be an effective tool for discovering new potential reactive cysteines in a wide range of nucleophilic proteins and will provide an important contribution to the design of targeted covalent inhibitors with high potency and selectivity. Full article
(This article belongs to the Special Issue Early-Stage Drug Discovery: Advances and Challenges)
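
The stacking architecture described above maps directly onto scikit-learn's StackingClassifier. The sketch below uses synthetic features and omits the LightGBM base learner to stay within scikit-learn; it illustrates the ensemble structure, not the trained HyperCys model.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-ins for the pocket/conservation/structural/energy
# and physicochemical descriptors used by HyperCys.
X, y = make_classification(n_samples=500, n_features=30, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier()),
        ("svm", SVC(probability=True)),
        ("mlp", MLPClassifier(max_iter=500)),
        ("rf",  RandomForestClassifier()),
    ],
    final_estimator=LogisticRegression(),  # meta-classifier, as in the paper
)
stack.fit(X, y)
print(stack.score(X, y))
```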

18 pages, 1270 KB  
Article
Influence of the Internal Structure on the Integral Risk of a Complex System on the Example of the Risk Minimization Problem in a “Star” Type Structure
by Alexander Shiroky and Andrey Kalashnikov
Mathematics 2023, 11(4), 998; https://doi.org/10.3390/math11040998 - 15 Feb 2023
Cited by 2 | Viewed by 1653
Abstract
This paper is devoted to studying the influence of the structure of a complex system on its integral risk. When solving risk management problems, it often becomes necessary to take into account structural effects, which most often include risk transfer and failure propagation. This study discusses the influence of the position of the elements of a protected system inside a fixed “star”-type structure on its integral risk. The authors demonstrate that the problem of the optimal placement of elements in such a structure from the point of view of minimizing the risk cannot be solved exactly by analytical methods and propose an algorithm for solving it with bounded errors. For the case of equal expected damages from a successful attack on a system element, the authors calculate upper estimates for the relative error of solving the placement problem using the proposed algorithm and also propose a methodology for rapid risk assessment for systems with a “star”-type structure. Finally, for the particular case when the risks of elements are in a certain ratio, the authors find an exact solution to the problem of the optimal placement of elements. Full article

15 pages, 4298 KB  
Article
Computational Analysis of Concrete Flow in a Reinforced Bored Pile Using the Porous Medium Approach
by Thomas Kränkel, Daniel Weger, Karsten Beckhaus, Fabian Geppert, Christoph Gehlen and Jithender J. Timothy
Appl. Mech. 2022, 3(2), 481-495; https://doi.org/10.3390/applmech3020028 - 22 Apr 2022
Cited by 1 | Viewed by 2782
Abstract
In this paper, the flow of concrete in a reinforced bored pile is analyzed using computational simulations. In order to reduce the computational time, a porous medium that mimics the presence of the reinforcement is used. Experimental measurements are used as bounds on the material parameters describing the flow of fresh concrete. The influence of the rheological properties of fresh concrete and the thickness of the porous medium that represents the reinforcement is analyzed with a classical U-box simulation. Finally, the casting of a bored pile is analyzed using a computational simulation implementing a porous medium representing the reinforcement cage. The concrete flow behavior, and especially the filling of the concrete cover zone, is analyzed for casting scenarios with different concretes varying in their rheological behavior. Simulations using the porous medium approach are about ten times faster than simulations that explicitly model the reinforcement. Simulation results show that a good workability (low viscosity and low yield stress) of the initial batches of concrete must be maintained throughout pouring to avoid the risk of defect formation in the cover zone. Full article

20 pages, 1531 KB  
Article
Portfolio Constraints: An Empirical Analysis
by Guido Abate, Tommaso Bonafini and Pierpaolo Ferrari
Int. J. Financial Stud. 2022, 10(1), 9; https://doi.org/10.3390/ijfs10010009 - 20 Jan 2022
Cited by 6 | Viewed by 9429
Abstract
Mean-variance optimization often leads to unreasonable asset allocations. This problem has forced scholars and practitioners alike to introduce portfolio constraints. The aim of our study is to verify which type of constraint is more suitable for achieving efficient performance. We have applied the main techniques developed by the financial community, including classical weight, flexible, norm-based, variance-based, tracking error volatility (TEV), and beta constraints. We employed panel data on the monthly returns of the sector indices forming the MSCI All Country World Index from January 1995 to December 2020. The assessment of each strategy was based on out-of-sample performance, measured using a rolling window method with annual rebalancing. We observed that the best strategies are those subject to constraints derived from the equal-weighted model. If the goal is the best compromise between absolute return, efficiency, total risk, economic sustainability, diversification, and ease of implementation, the best solution is a portfolio subject to no short selling and bound either to equal weighting or to TEV limits. Overall, we found that constrained optimization models represent an efficient alternative to classic investment strategies and provide substantial advantages to investors. Full article
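
As a minimal illustration of the kind of constrained optimization compared in the study, the sketch below maximizes the Sharpe ratio under a budget constraint and the classical no-short-selling weight constraint, using synthetic returns; the TEV, norm- and variance-based constraints examined in the paper would enter as additional restrictions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
R = rng.normal(0.005, 0.04, size=(120, 5))     # 10 years of monthly returns
mu, Sigma = R.mean(axis=0), np.cov(R, rowvar=False)

def neg_sharpe(w):
    return -(w @ mu) / np.sqrt(w @ Sigma @ w)

n = len(mu)
res = minimize(
    neg_sharpe,
    x0=np.full(n, 1 / n),                      # start from equal weights
    bounds=[(0.0, 1.0)] * n,                   # no short selling
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # budget
)
print(res.x.round(3))
```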

18 pages, 1574 KB  
Article
A Smoothed Version of the Lassosum Penalty for Fitting Integrated Risk Models Using Summary Statistics or Individual-Level Data
by Georg Hahn, Dmitry Prokopenko, Sharon M. Lutz, Kristina Mullin, Rudolph E. Tanzi, Michael H. Cho, Edwin K. Silverman, Christoph Lange and on the behalf of the NHLBI Trans-Omics for Precision Medicine (TOPMed) Consortium
Genes 2022, 13(1), 112; https://doi.org/10.3390/genes13010112 - 6 Jan 2022
Cited by 3 | Viewed by 3598
Abstract
Polygenic risk scores are a popular means to predict the disease risk or disease susceptibility of an individual based on their genotype information. When other important epidemiological covariates such as age or sex are added, we speak of an integrated risk model. Methodological advances for fitting more accurate integrated risk models are of immediate importance to improve the precision of risk prediction, thereby potentially identifying patients at high risk early on, when they are still able to benefit from preventive steps/interventions targeted at increasing their odds of survival, or at reducing their chance of getting a disease in the first place. This article proposes a smoothed version of the “Lassosum” penalty used to fit polygenic risk scores and integrated risk models using either summary statistics or raw data. The smoothing allows one to obtain explicit gradients everywhere for efficient minimization of the Lassosum objective function while guaranteeing bounds on the accuracy of the fit. An experimental section on both Alzheimer’s disease and COPD (chronic obstructive pulmonary disease) demonstrates the increased accuracy of the proposed smoothed Lassosum penalty compared to the original Lassosum algorithm (for the datasets under consideration), allowing it to match state-of-the-art methodology such as LDpred2 when evaluated via the AUC (area under the ROC curve) metric. Full article
(This article belongs to the Special Issue Statistical Genetics in Human Diseases)
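
The abstract does not state the smoothing formula. One standard way to make the L1 term of a Lassosum-type objective differentiable everywhere, sketched below, is to replace |b| with sqrt(b^2 + mu^2); this is an illustrative surrogate and not necessarily the specific smoothing used in the paper.

```python
import numpy as np

def smoothed_l1(beta, mu=1e-3):
    """Smooth surrogate for the L1 penalty: |b| -> sqrt(b^2 + mu^2),
    differentiable everywhere and converging to |b| as mu -> 0.
    (Illustrative choice, not necessarily the paper's smoothing.)"""
    return np.sum(np.sqrt(beta**2 + mu**2))

def smoothed_l1_grad(beta, mu=1e-3):
    """Explicit gradient, defined even at beta = 0, which is what
    enables plain gradient-based minimization of the objective."""
    return beta / np.sqrt(beta**2 + mu**2)

beta = np.array([-0.2, 0.0, 0.7])
print(smoothed_l1(beta), smoothed_l1_grad(beta))
```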

38 pages, 763 KB  
Article
Distribution-Dependent Weighted Union Bound
by Luca Oneto and Sandro Ridella
Entropy 2021, 23(1), 101; https://doi.org/10.3390/e23010101 - 12 Jan 2021
Viewed by 3130
Abstract
In this paper, we deal with the classical Statistical Learning Theory problem of bounding, with high probability, the true risk $R(h)$ of a hypothesis $h$ chosen from a set $\mathcal{H}$ of $m$ hypotheses. The Union Bound (UB) allows one to state that $\mathbb{P}\{L(\hat{R}(h), \delta q_h) \le R(h) \le U(\hat{R}(h), \delta p_h)\} \ge 1 - \delta$, where $\hat{R}(h)$ is the empirical error, if it is possible to prove that $\mathbb{P}\{R(h) \ge L(\hat{R}(h), \delta)\} \ge 1 - \delta$ and $\mathbb{P}\{R(h) \le U(\hat{R}(h), \delta)\} \ge 1 - \delta$, when $h$, $q_h$, and $p_h$ are chosen before seeing the data such that $q_h, p_h \in [0, 1]$ and $\sum_{h \in \mathcal{H}} (q_h + p_h) = 1$. If no a priori information is available, $q_h$ and $p_h$ are set to $\frac{1}{2m}$, namely equally distributed. This approach gives poor results since, as a matter of fact, a learning procedure targets just particular hypotheses, namely the ones with small empirical error, disregarding the others. In this work we set $q_h$ and $p_h$ in a distribution-dependent way, increasing the weight of hypotheses with small true risk. We will call this proposal the Distribution-Dependent Weighted UB (DDWUB) and we will retrieve the sufficient conditions on the choice of $q_h$ and $p_h$ that ensure that DDWUB outperforms or, in the worst case, degenerates into UB. Furthermore, theoretical and numerical results will show the applicability, the validity, and the potential of DDWUB. Full article
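
A minimal numerical illustration of the weighting idea, using Hoeffding-type one-sided bounds (an assumed bound form, chosen here for concreteness): splitting the failure probability as delta * p_h makes all bounds hold simultaneously, and shifting mass p_h toward promising hypotheses tightens their bounds at the expense of the others.

```python
import numpy as np

def hoeffding_upper_bounds(emp_risks, n, p, delta=0.05):
    """Simultaneous upper confidence bounds on the true risks: each
    hypothesis h gets failure budget delta * p[h], with sum(p) <= 1."""
    p = np.asarray(p, dtype=float)
    return np.asarray(emp_risks) + np.sqrt(np.log(1.0 / (delta * p)) / (2 * n))

emp = np.array([0.10, 0.12, 0.30, 0.45])   # empirical risks of 4 hypotheses
uniform = np.full(4, 0.25)                 # classical, equally weighted UB
weighted = np.array([0.5, 0.3, 0.1, 0.1])  # more mass on low-risk hypotheses
print(hoeffding_upper_bounds(emp, n=1000, p=uniform))
print(hoeffding_upper_bounds(emp, n=1000, p=weighted))
```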

22 pages, 3796 KB  
Article
Relaxation of the Radio-Frequency Linewidth for Coherent-Optical Orthogonal Frequency-Division Multiplexing Schemes by Employing the Improved Extreme Learning Machine
by David Zabala-Blanco, Marco Mora, Cesar A. Azurdia-Meza, Ali Dehghan Firoozabadi, Pablo Palacios Játiva and Ismael Soto
Symmetry 2020, 12(4), 632; https://doi.org/10.3390/sym12040632 - 16 Apr 2020
Cited by 22 | Viewed by 3834
Abstract
A coherent optical (CO) orthogonal frequency division multiplexing (OFDM) scheme gives a scalable and flexible solution for increasing the transmission rate, being extremely robust to chromatic dispersion as well as polarization mode dispersion. Nevertheless, as in any coherent-detection OFDM system, the overall system performance is limited by laser phase noise. On the other hand, extreme learning machines (ELMs) have gained a lot of attention from the machine learning community owing to their good generalization performance, negligible training time, and minimal human intervention. In this manuscript, a phase-error mitigation method for CO-OFDM systems, based on a single-hidden-layer feedforward network trained with the improved ELM algorithm, is introduced for the first time. The training procedure consists of two steps. Firstly, pilots, which are very common in OFDM-based systems, are used to diminish laser phase noise as well as to correct frequency-selective impairments, so that the bandwidth efficiency can be maximized. Secondly, a regularization parameter is included in the ELM to balance the empirical and structural risks, namely to minimize the root mean square error in the test stage and, consequently, the bit error rate (BER). The operational principle of the real-complex (RC) ELM is analytically explained, and its sub-parameters (number of hidden neurons, regularization parameter, and activation function) are then numerically tuned to enhance the system performance. For binary and quadrature phase-shift keying modulations, the RC-ELM outperforms the benchmark pilot-assisted equalizer (PAE) as well as the fully real ELM, and almost matches the common phase error (CPE) compensation and the ELM defined in the complex domain (C-ELM) in terms of the BER over an additive white Gaussian noise channel and different laser oscillators. However, those two techniques have the following disadvantages: the CPE compensator reduces the transmission rate, since an additional preamble is mandatory for channel estimation purposes, while the C-ELM requires a bounded and differentiable activation function in the complex domain and cannot follow semi-supervised training. In the same context, the novel ELM algorithm cannot compete with the CPE compensator and the C-ELM for the 16-ary quadrature amplitude modulation. On the other hand, the novel ELM incurs a negligible computational cost with respect to the C-ELM and PAE methods. Full article
(This article belongs to the Special Issue Information Technologies and Electronics)
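
For orientation, the core of any regularized ELM is a random, untrained hidden layer followed by a ridge least-squares solve for the output weights. The sketch below is a generic real-valued ELM on toy data, not the paper's RC-ELM (which additionally splits complex OFDM symbols into real and imaginary parts).

```python
import numpy as np

def elm_fit(X, T, n_hidden=64, reg=1e-2, seed=5):
    """Regularized ELM: random hidden layer, then ridge solve
    beta = (H'H + reg*I)^{-1} H'T for the output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                     # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = np.random.default_rng(6).normal(size=(200, 8))
T = np.sin(X.sum(axis=1, keepdims=True))       # toy regression target
W, b, beta = elm_fit(X, T)
print(np.mean((elm_predict(X, W, b, beta) - T) ** 2))
```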

11 pages, 313 KB  
Article
Elementary Bounds on the Ruin Capital in a Diffusion Model of Risk
by Vsevolod K. Malinovskii
Risks 2014, 2(3), 249-259; https://doi.org/10.3390/risks2030249 - 8 Jul 2014
Cited by 1 | Viewed by 4427
Abstract
In a diffusion model of risk, we focus on the initial capital needed to make the probability of ruin within finite time equal to a prescribed value. It is defined as a solution of a nonlinear equation. Writing down and analytically investigating this solution as a function of the premium rate does not appear technically feasible. Instead, we obtain informative bounds for this capital in terms of elementary functions. Full article
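
For context, in the standard diffusion model with surplus $X_t = u + ct + \sigma W_t$ (initial capital $u$, premium rate $c$), the finite-time ruin probability has the classical closed form (a textbook identity, not a result of this paper):

```latex
\psi(u, T) \;=\; \mathbb{P}\Big(\inf_{0 \le t \le T} X_t < 0\Big)
\;=\; \Phi\!\left(\frac{-u - cT}{\sigma\sqrt{T}}\right)
\;+\; e^{-2cu/\sigma^{2}}\,
\Phi\!\left(\frac{-u + cT}{\sigma\sqrt{T}}\right),
```

and the ruin capital studied in the paper is the root $u_\delta(c)$ of $\psi(u, T) = \delta$, whose nonlinearity in $u$ is what motivates bounding it by elementary functions.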
