Search Results (80)

Search Parameters:
Keywords = zero lower bound

27 pages, 1815 KB  
Article
A Stability-Aware Adaptive Fractional-Order Speed Control Framework for IPMSM Electric Vehicles in Field-Weakening Operation
by Chih-Chung Chiu, Wei-Lung Mao and Feng-Chun Tai
Energies 2026, 19(5), 1326; https://doi.org/10.3390/en19051326 - 5 Mar 2026
Viewed by 168
Abstract
High-performance speed regulation of interior permanent magnet synchronous motor (IPMSM) drives in electric vehicle (EV) applications becomes particularly challenging in the field-weakening region, where voltage constraints, parameter variations, and nonlinear aerodynamic loads significantly affect the closed-loop stability. To address these challenges, this paper proposes a stability-aware adaptive fractional-order speed control framework for EV traction systems. The framework integrates a fractional-order PI (FOPI) core to provide iso-damping robustness, a bounded fuzzy gain-scheduling mechanism for real-time adaptation, and an offline multi-objective optimization layer for systematic parameter tuning. A Lyapunov-based qualitative analysis is provided to justify closed-loop ultimate boundedness under adaptive gain modulation and field-weakening constraints. The fuzzy scheduler is explicitly structured to regulate the error energy dissipation rate by modulating the proportional and integral gains while preserving the gain boundedness. The controller parameters are optimized using a diversity-driven fractional-order multi-objective PSO algorithm to balance the tracking accuracy and control effort. The proposed framework was validated using a high-fidelity MATLAB/Simulink–CarSim 2023 co-simulation platform under the aggressive US06 driving cycle. The results demonstrated a zero-overshoot transient response, robustness against a 2.5× inertia mismatch, and sustained performance under flux-linkage and inductance variations in deep field-weakening operation. Compared with conventional PI-based strategies, the proposed approach reduced the speed RMSE by 82%, lowered the current THD from 18.5% to 3.2%, and reduced the cumulative DC-link current-squared index by 6.7%. These results validate the practical robustness and computational feasibility of the proposed stability-aware framework for EV traction control.
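The FOPI core named in the abstract above can be sketched with a Grünwald–Letnikov (GL) discretization of the fractional integral. This is a minimal illustrative sketch only: the paper's fuzzy gain scheduling and PSO tuning layers are not reproduced, and the gains `kp`, `ki`, the fractional order `lam`, and the step `h` are placeholder assumptions.

```python
import numpy as np

def gl_weights(order, n):
    """Grunwald-Letnikov binomial weights c_j for fractional order `order`,
    via the recursion c_0 = 1, c_j = c_{j-1} * (1 - (order + 1) / j)."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (order + 1.0) / j)
    return w

def fopi_output(e, h, kp, ki, lam):
    """FOPI control law u_k = Kp*e_k + Ki * I^lam e at the latest sample.
    The fractional integral I^lam is approximated by the GL sum
    h^lam * sum_j c_j * e_{k-j}, i.e. GL differentiation of order -lam."""
    w = gl_weights(-lam, len(e))
    frac_int = h**lam * np.dot(w, e[::-1])  # newest sample pairs with c_0
    return kp * e[-1] + ki * frac_int
```

For `lam = 1` the GL weights all equal 1 and the scheme reduces to ordinary rectangular-rule integration, which is a convenient sanity check on the discretization.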

22 pages, 2982 KB  
Article
Adaptive Asymptotic Tracking Control of MIMO Nonlinear Systems Subject to Asymmetric Full-State Constraints: A Removing Feasibility Condition Approach
by Min Zhang, Kun Jiang, Baiyu Li, Muyu Li and Zhannan Guo
Mathematics 2026, 14(5), 806; https://doi.org/10.3390/math14050806 - 27 Feb 2026
Viewed by 161
Abstract
This work develops an adaptive control scheme for MIMO nonlinear non-lower-triangular systems with asymmetric full-state constraints and unknown gain functions. First, to maintain the state constraints, a function that depends solely on the system states is proposed to replace the traditional barrier Lyapunov function that relies on the error signal. By means of an affine transformation, the original system is reconstructed as an equivalent system that requires no prior knowledge of the gain functions and removes the state constraints. Second, a coordinate transformation is introduced and integrated into each step of the adaptive control design procedure, which circumvents the feasibility condition on intermediate input signals. Under the developed control strategy, all system states are bounded and remain within the constraint sets at all times, while the tracking errors between the output signals and the reference trajectories converge asymptotically to zero. Finally, the feasibility of the presented strategy is demonstrated through simulation examples.

27 pages, 3600 KB  
Article
From Conventional to Modernised ERTMS Level 2: Steps Towards Rail Interoperability and Automation in Belgium
by Pavlo Holoborodko, Darius Bazaras and Nijolė Batarlienė
Sustainability 2026, 18(3), 1535; https://doi.org/10.3390/su18031535 - 3 Feb 2026
Viewed by 357
Abstract
This article quantitatively assesses the influence of ERTMS modernisation on the operational efficiency and resilience of the Belgian railway lines 50A/51A, applying methodological triangulation in the MATLAB R2025a Update 1 (25.1.0.2973910) software environment (discrete-event modelling, Petri nets, Markov reliability modelling, and correlation analysis). The modelling reveals that the scenario with an expanded level of automation increases capacity from 18.3 to 26.0 trains over 2 h (+42.1%) and reduces the average waiting time from 1.53 min (baseline) to 0.21 min, virtually the theoretical lower bound of zero under favourable conditions. The block-occupancy analysis by means of Petri nets shows that a more dynamic distribution of blocks provides higher capacity, and the Markov chains reflect a reduced impact of control-centre unavailability as communications and virtualisation develop. Spearman correlation analysis additionally shows coordinated improvement of the safety, digital-protection, resilience, and performance metrics. Based on the modelling results, a phased roadmap is proposed that combines technical improvements (development of communication systems, readiness for automation, comparable management of rolling stock movement) with compliance with regulatory requirements and the goals of sustainable development related to SDGs 9, 11, and 13.

30 pages, 2256 KB  
Review
Brazil’s Biogas–Biomethane Production Potential: A Techno-Economic Inventory and Strategic Decarbonization Outlook
by Daniel Ignacio Travieso Fernández, Christian Jeremi Coronado Rodriguez, Einara Blanco Machín, Daniel Travieso Pedroso and João Andrade de Carvalho Júnior
Biomass 2026, 6(1), 4; https://doi.org/10.3390/biomass6010004 - 7 Jan 2026
Viewed by 1587
Abstract
Brazil possesses a large bioenergy resource embedded in agro-industrial, livestock, and urban residues; this study quantifies its technical magnitude and associated energy value. An assessment was conducted by substrate, combining official statistics with literature-based yields and recovery factors. Biogas volumes were converted into biomethane using representative upgrading efficiencies, and thermal and electrical equivalents were derived from standard lower heating values and conversion efficiencies. Uncertainty bounds reflect the variability of feedstock yields and process performance. The national technical potential is estimated at roughly 80–85 billion Nm³/year of biogas, corresponding to ~43–45 billion Nm³/year of biomethane and around 168–174 TWh/year of electricity. Contributions are led by the sugar–energy complex (~one-third), followed by livestock and other agro-industrial residues (~one-third), while urban sanitation supplies ~8–10%. Potentials are concentrated in the Southeast, Center-West, and South, and current production represents only ~2–3% of the assessed potential. The findings indicate that realizing this potential requires targeted measures: standardization for grid injection, support for pretreatment and co-digestion, access to credit, and alignment with instruments such as RenovaBio and "Metano Zero" to unlock significant methane-mitigation, air-quality, and decentralized energy-security benefits.
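The biomethane-to-electricity conversion in the abstract above follows directly from a lower heating value and a conversion efficiency. The sketch below uses assumed representative figures (LHV ≈ 9.97 kWh/Nm³ for biomethane and ~40% electrical efficiency, neither taken from the article) and lands close to the reported 168–174 TWh/year range.

```python
# Assumed figures, not from the article: biomethane LHV and electrical efficiency.
LHV_KWH_PER_NM3 = 9.97  # lower heating value of biomethane, kWh per Nm^3
ETA_EL = 0.40           # assumed electrical conversion efficiency

def electricity_twh(biomethane_nm3_per_year):
    """Electrical equivalent (TWh/year) of a biomethane volume (Nm^3/year)."""
    kwh = biomethane_nm3_per_year * LHV_KWH_PER_NM3 * ETA_EL
    return kwh / 1e9  # kWh -> TWh

low_estimate = electricity_twh(43e9)   # ~171 TWh/year
```

Per-substrate efficiency differences would shift the endpoints, which plausibly explains the article's slightly tighter 168–174 TWh/year band.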

31 pages, 825 KB  
Article
Simulation-Based Evaluation of Savings Potential for Hybrid Trolleybus Fleets
by Hermann von Kleist and Thomas Lehmann
World Electr. Veh. J. 2026, 17(1), 27; https://doi.org/10.3390/wevj17010027 - 6 Jan 2026
Viewed by 383
Abstract
Hybrid trolleybuses (HTBs) with in-motion charging (IMC) can extend zero-emission service using existing catenary, but high on-wire charging powers may concentrate loads and accelerate battery aging. We present a data-driven simulation that replays recorded high-resolution Controller Area Network (CAN) logs through a per-vehicle electrical model with constant-current/constant-voltage (CC/CV) charging and a stress-map aging estimator, a configurable partial catenary overlay, and fleet aggregation by simple summation and by an iterative node-voltage analysis of a resistor-network catenary model. A parameter sweep across battery sizes, upper state-of-charge (SoC) bounds, and charging power caps compares a minimal "charge-whenever-possible" policy with a per-vehicle lookahead ("oracle") policy that spreads charging over the available catenary time. Results show that lowering the maximum charging power and/or the upper SoC bound reduces capacity fade, while energy-demand differences are small. Fleet load profiles, built by overlaying 40 recorded days into one synthetic day, are dominated by timetable-driven concurrency: varying per-vehicle power or target SoC has little effect on peak demand, and per-vehicle lookahead does not flatten the peak. The node-voltage analysis indicates catenary efficiency around 97% and fewer undervoltage events at lower charging powers. We conclude that per-vehicle policies can reduce battery stress, whereas peak shaving requires cooperative, fleet-level scheduling.
(This article belongs to the Special Issue Zero Emission Buses for Public Transport)
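The CC/CV charging stage mentioned in the abstract above can be illustrated with a toy battery model. Everything here is an assumption for illustration (linear open-circuit-voltage curve, internal resistance, current and voltage limits); it is not the paper's per-vehicle electrical model.

```python
def simulate_cccv(capacity_ah=100.0, r=0.005, i_cc=50.0, v_max=4.1,
                  soc0=0.2, soc_target=0.8, dt_s=1.0):
    """Toy CC/CV charge. Terminal voltage v = ocv(soc) + i*r with a linear
    OCV curve (an assumption). Constant current i_cc until v would exceed
    v_max, then constant voltage with i = (v_max - ocv) / r, stopping at
    soc_target. Returns a list of (soc, current) samples."""
    ocv = lambda soc: 3.0 + 1.2 * soc   # hypothetical linear OCV [V]
    soc, log = soc0, []
    while soc < soc_target:
        i = min(i_cc, (v_max - ocv(soc)) / r)  # CV limit caps the current
        if i <= 0:
            break
        soc = min(soc_target, soc + i * dt_s / (capacity_ah * 3600.0))
        log.append((soc, i))
    return log
```

With these numbers the charge is current-limited up to roughly 70% SoC and voltage-limited afterwards, which is the load-concentration effect the abstract associates with high on-wire charging powers.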

15 pages, 1238 KB  
Article
Traffic-Driven Scaling of Digital Twin Proxy Pool in Vehicular Edge Computing
by Hao Zhu, Shuaili Bao, Li Jin and Guoan Zhang
Electronics 2025, 14(24), 4898; https://doi.org/10.3390/electronics14244898 - 12 Dec 2025
Viewed by 411
Abstract
This paper presents a traffic-driven scaling framework for a digital twin proxy pool (DTPP) in vehicular edge computing (VEC), designed to eliminate the latency and synchronization issues inherent in conventional digital twin (DT) migration approaches. The core innovation lies in replacing the migration of vehicle DTs between edge servers (ESs) with instantaneous switching within a pre-allocated pool of DT proxies, thereby achieving zero migration latency and continuous synchronization. The proposed architecture differentiates between short-term DTs (SDTs) hosted in edge-side in-memory databases for real-time, low-latency services, and long-term DTs (LDTs) in the cloud for historical data aggregation. A queuing-theoretic model formulates the DTPP as an M/M/c system, deriving a closed-form lower bound for the minimum number of proxies required to satisfy a predefined queuing-delay constraint, thus transforming quality-of-service targets into analytically computable resource allocations. The scaling mechanism operates on a cloud–edge collaborative principle: a cloud-based predictor, employing a TCN-Transformer fusion model, forecasts hourly traffic arrival rates to set a baseline proxy count, while edge-side managers perform monotonic, 5 min scale-ups based on real-time monitoring to absorb sudden traffic bursts without causing service jitter. Extensive evaluations were conducted using the PeMS dataset. The TCN-Transformer predictor significantly outperforms single-model baselines, achieving a mean absolute percentage error (MAPE) of 17.83%. More importantly, dynamic scaling at the ES reduces delay violation rates substantially—for instance, from 13.57% under static provisioning to just 1.35% when the minimum proxy count is 2—confirming the system's ability to maintain service quality under highly dynamic conditions. These findings show that the DTPP framework provides a robust solution for resource-efficient and latency-guaranteed DT services in VEC.
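The M/M/c sizing idea in the abstract above can be checked numerically. The sketch below is a standard Erlang-C search, not the paper's closed-form bound; the arrival rate, service rate, and delay target are illustrative placeholders.

```python
from math import factorial

def erlang_c(c, a):
    """Erlang-C probability of queueing in an M/M/c system with
    offered load a = lam / mu (requires a < c for stability)."""
    rho = a / c
    num = a**c / (factorial(c) * (1.0 - rho))
    den = sum(a**k / factorial(k) for k in range(c)) + num
    return num / den

def min_proxies(lam, mu, d_max):
    """Smallest server (proxy) count c with c > lam/mu and mean queueing
    delay Wq = C(c, a) / (c*mu - lam) no greater than d_max."""
    a = lam / mu
    c = int(a) + 1
    while erlang_c(c, a) / (c * mu - lam) > d_max:
        c += 1
    return c
```

For c = 1 the Erlang-C formula collapses to the familiar M/M/1 result (queueing probability equal to the utilization), which makes a convenient correctness check.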

10 pages, 302 KB  
Communication
Fractional Probit with Cross-Sectional Volatility: Bridging Heteroskedastic Probit and Fractional Response Models
by Songsak Sriboonchitta, Aree Wiboonpongse, Jittaporn Sriboonjit and Woraphon Yamaka
Econometrics 2025, 13(4), 43; https://doi.org/10.3390/econometrics13040043 - 3 Nov 2025
Cited by 1 | Viewed by 902
Abstract
This paper introduces a new econometric framework for modeling fractional outcomes bounded between zero and one. We propose the Fractional Probit with Cross-Sectional Volatility (FPCV), which specifies the conditional mean through a probit link and allows the conditional variance to depend on observable heterogeneity. The model extends heteroskedastic probit methods to fractional responses and unifies them with existing approaches for proportions. Monte Carlo simulations demonstrate that the FPCV estimator achieves lower bias, more reliable inference, and superior predictive accuracy compared with standard alternatives. The framework is particularly suited to empirical settings where fractional outcomes display systematic variability across units, such as participation rates, market shares, health indices, financial ratios, and vote shares. By modeling both mean and variance, FPCV provides interpretable measures of volatility and offers a robust tool for empirical analysis and policy evaluation.
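A heteroskedastic probit mean of the kind described above is easy to write down. The specification below is inferred from the abstract (mean index scaled by an observable volatility term), not taken from the paper, and the estimation step (quasi-maximum likelihood) is not shown.

```python
import numpy as np
from math import erf, sqrt

def Phi(t):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def fpcv_mean(x, z, beta, gamma):
    """Conditional mean of a fractional outcome under a heteroskedastic
    probit link: E[y | x, z] = Phi(x.beta / exp(z.gamma)).
    exp(z.gamma) plays the role of the cross-sectional volatility scale
    (an assumed functional form for illustration)."""
    return Phi(np.dot(x, beta) / np.exp(np.dot(z, gamma)))
```

Raising the volatility index shrinks the fitted mean toward 0.5, which is the qualitative behavior one expects from scaling down the probit index.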

20 pages, 917 KB  
Article
Numerical Investigation of Buckling Behavior of MWCNT-Reinforced Composite Plates
by Jitendra Singh, Ajay Kumar, Barbara Sadowska-Buraczewska, Wojciech Andrzejuk and Danuta Barnat-Hunek
Materials 2025, 18(14), 3304; https://doi.org/10.3390/ma18143304 - 14 Jul 2025
Viewed by 632
Abstract
The current study investigates the buckling behavior of composite laminates reinforced with MWCNT fillers using a novel higher-order shear and normal deformation theory (HSNDT), which accounts for the thickness effect in its mathematical formulation. The hybrid HSNDT combines polynomial and hyperbolic functions that ensure a parabolic shear stress profile and the zero-shear-stress boundary condition at the upper and lower surfaces of the plate, removing the need for a shear correction factor. The plate consists of carbon fibers bonded together by a polymer resin matrix reinforced with MWCNT fillers. The mechanical properties are homogenized by a Halpin–Tsai scheme. An in-house MATLAB R2019a code was developed for a finite element model using C0-continuity nine-node Lagrangian isoparametric shape functions. The geometric nonlinear and linear stiffness matrices are derived using the principle of virtual work, and the solution of the resulting eigenvalue problem yields the critical buckling loads. A convergence study was carried out, and the model's accuracy was corroborated against the existing literature. The model contains only seven degrees of freedom, which significantly reduces computation time and facilitates comprehensive parametric studies of the buckling stability of the plate.
(This article belongs to the Special Issue Mechanical Behavior of Advanced Composite Materials and Structures)
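The eigenvalue step in the abstract above is a generalized eigenproblem between the linear and geometric stiffness matrices. The sketch below shows only that final step on toy matrices; the HSNDT element formulation that produces K and Kg is not reproduced.

```python
import numpy as np

def critical_buckling_load(K, Kg):
    """Smallest positive eigenvalue lambda of K x = lambda Kg x, i.e. the
    critical load factor in linearized buckling, with K the linear stiffness
    and Kg the geometric stiffness from a unit reference load (assumed
    invertible here for simplicity)."""
    vals = np.linalg.eigvals(np.linalg.solve(Kg, K))
    vals = np.real(vals[np.abs(vals.imag) < 1e-10]) if np.iscomplexobj(vals) else vals
    return vals[vals > 1e-12].min()
```

In a real finite element model K and Kg would be the assembled global matrices after applying boundary conditions; the smallest positive load factor then scales the reference load to the buckling load.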

29 pages, 351 KB  
Article
The Computability of the Channel Reliability Function and Related Bounds
by Holger Boche and Christian Deppe
Algorithms 2025, 18(6), 361; https://doi.org/10.3390/a18060361 - 11 Jun 2025
Viewed by 1466
Abstract
The channel reliability function is a crucial tool for characterizing the dependable transmission of messages across communication channels. In many cases, only upper and lower bounds of this function are known. We investigate the computability of the reliability function and its associated functions, demonstrating that the reliability function is not Turing computable. The same holds for the functions corresponding to the sphere-packing bound and the expurgation bound. Additionally, we examine the R function and the zero-error feedback capacity, as they are vital in the context of the reliability function. Neither the R function nor the zero-error feedback capacity is Banach–Mazur computable.
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 3rd Edition)
17 pages, 3455 KB  
Article
Segment Anything Model (SAM) and Medical SAM (MedSAM) for Lumbar Spine MRI
by Christian Chang, Hudson Law, Connor Poon, Sydney Yen, Kaustubh Lall, Armin Jamshidi, Vadim Malis, Dosik Hwang and Won C. Bae
Sensors 2025, 25(12), 3596; https://doi.org/10.3390/s25123596 - 7 Jun 2025
Cited by 3 | Viewed by 5854
Abstract
Lumbar spine Magnetic Resonance Imaging (MRI) is commonly used to evaluate the intervertebral discs (IVD) and vertebral bodies (VB) in low back pain. Segmentation of these tissues can provide useful quantitative information such as shape and volume. The objective of the study was to determine the performance of the Segment Anything Model (SAM) and medical SAM (MedSAM), two "zero-shot" deep learning models, in segmenting lumbar IVDs and VBs from MRI images, compared against the nnU-Net model. This cadaveric study used 82 donor spines. Manual segmentation served as the ground truth. Two readers processed the spine MRI using SAM and MedSAM by placing points or drawing bounding boxes around regions of interest (ROI). The outputs were compared against the ground truths to determine the Dice score, sensitivity, and specificity. Qualitatively, results varied, but overall MedSAM produced more consistent results than SAM; neither matched the performance of nnU-Net. Mean Dice scores for MedSAM were 0.79 for IVDs and 0.88 for VBs, significantly higher (each p < 0.001) than those for SAM (0.64 for IVDs, 0.83 for VBs). Both were lower than those for nnU-Net (0.99 for both IVDs and VBs). Sensitivity values also favored MedSAM. These results demonstrate the feasibility of "zero-shot" DL models for segmenting lumbar spine MRI. While their performance falls short of recent models, zero-shot models offer key advantages: they need no training data and adapt faster to other anatomies and tasks. Validation of a generalizable segmentation model for lumbar spine MRI can lead to more precise diagnostics and follow-up and enhanced back-pain research, with potential cost savings from automated analyses, while supporting the broader use of AI and machine learning in healthcare.
(This article belongs to the Section Sensing and Imaging)
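The Dice score used to compare the models above is the standard overlap metric 2|A∩B|/(|A|+|B|) on binary masks; a minimal implementation:

```python
import numpy as np

def dice(a, b):
    """Dice similarity of two boolean segmentation masks:
    2*|A intersect B| / (|A| + |B|). Two empty masks are treated as a
    perfect match (a common convention, chosen here for illustration)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```

In the study's setting `a` would be a model output (SAM, MedSAM, or nnU-Net) and `b` the manual ground-truth mask, computed per IVD or VB and averaged.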

37 pages, 498 KB  
Article
A General Model of Bertrand–Edgeworth Duopoly
by Blake A. Allison and Jason J. Lepore
Games 2025, 16(3), 26; https://doi.org/10.3390/g16030026 - 19 May 2025
Viewed by 2665
Abstract
This paper studies a class of two-player all-pay contests with externalities that encompass a general version of duopoly price competition. This all-pay contest formulation puts little restriction on production technologies, demand, and demand rationing. There are two types of possible equilibria: In the first type of equilibrium, the lower bound to pricing is the same for each firm, and the probability of any pricing tie above this price is zero. Each firm's equilibrium expected profit is its monopoly profit at the lower bound price. In the second type of equilibrium, one firm prices at the lower bound of the other firm's average cost, and the other firm prices according to a non-degenerate mixed strategy. This type of equilibrium can only occur if production technologies are sufficiently different across firms. We derive necessary and sufficient conditions for the existence of pure strategy equilibrium and use these conditions to demonstrate the fragility of deterministic outcomes in pricing games.
14 pages, 10819 KB  
Article
Formation and Dynamics of Night-Time Cold Air Pools in Peri-Urban Topographic Basins: A Case Study of Coimbra, Portugal
by António Manuel Rochette Cordeiro
Meteorology 2025, 4(1), 4; https://doi.org/10.3390/meteorology4010004 - 11 Feb 2025
Cited by 1 | Viewed by 1657
Abstract
This study investigates the formation of cold air pools during calm, anticyclonic winter nights in a topographic basin bounded by a medium-sized mountain to the east and near-flat terrain elsewhere. The main objective is to understand how local topography drives unique topoclimatic conditions—specifically cold air lakes and an inversion layer at approximately 100–120 m altitude—in a peri-urban depression where a major cement factory and several residential areas are located. To achieve this, the research design combined surface measurements (collected at 10:00 p.m., 3:00 a.m., 7:00 a.m., and 3:00 p.m.) using a motorized vehicle with vertical measurements (at 7:00 a.m.) collected via two unmanned aerial vehicles (UAVs), all three vehicles being equipped with Tinytag data loggers. The Empirical Bayesian Kriging tool in ArcGIS Pro was employed to generate the surface temperature cartograms. The results show that shortly after sunset, a cold air layer of approximately 100–120 m thickness forms, with nocturnal air temperature variations of up to 8 °C across the night measurements. An inversion layer was detected at around 120–130 m, while near-zero wind speeds in the basin's core facilitate the retention of cold air. Surface spatialization confirms earlier findings of a cold air lake and thermal belts on the basin's perimeter, forming in the early evening and dissipating by late morning. A 3D visualization underscores the influence of the mountain in directing cold air downslope, leading to stabilization and stratification within the lower atmospheric layers. These findings carry significant health implications: air pollutants released by the cement plant tend to accumulate within the cold air pool and beneath the inversion layer, posing potential risks to nearby populations.

37 pages, 979 KB  
Article
Variable-Length Coding with Zero and Non-Zero Privacy Leakage
by Amirreza Zamani and Mikael Skoglund
Entropy 2025, 27(2), 124; https://doi.org/10.3390/e27020124 - 24 Jan 2025
Cited by 3 | Viewed by 1807
Abstract
A private compression design problem is studied, where an encoder observes useful data Y, wishes to compress them using variable-length code, and communicates them through an unsecured channel. Since Y are correlated with the private attribute X, the encoder uses a private compression mechanism to design an encoded message C and sends it over the channel. An adversary is assumed to have access to the output of the encoder, i.e., C, and tries to estimate X. Furthermore, it is assumed that both encoder and decoder have access to a shared secret key W. In this work, the design goal is to encode message C with the minimum possible average length that satisfies certain privacy constraints. We consider two scenarios: 1. zero privacy leakage, i.e., perfect privacy (secrecy); 2. non-zero privacy leakage, i.e., non-perfect privacy constraint. Considering the perfect privacy scenario, we first study two different privacy mechanism design problems and find upper bounds on the entropy of the optimizers by solving a linear program. We use the obtained optimizers to design C. In the two cases, we strengthen the existing bounds: 1. |X||Y|; 2. The realization of (X,Y) follows a specific joint distribution. In particular, considering the second case, we use two-part construction coding to achieve the upper bounds. Furthermore, in a numerical example, we study the obtained bounds and show that they can improve existing results. Finally, we strengthen the obtained bounds using the minimum entropy coupling concept and a greedy entropy-based algorithm. Considering the non-perfect privacy scenario, we find upper and lower bounds on the average length of the encoded message using different privacy metrics and study them in special cases. For achievability, we use two-part construction coding and extended versions of the functional representation lemma. Lastly, in an example, we show that the bounds can be asymptotically tight.
(This article belongs to the Special Issue Information-Theoretic Security and Privacy)

17 pages, 313 KB  
Article
On the Extended Adjacency Eigenvalues of a Graph
by Alaa Altassan, Hilal A. Ganie and Yilun Shang
Information 2024, 15(10), 586; https://doi.org/10.3390/info15100586 - 26 Sep 2024
Cited by 2 | Viewed by 1933
Abstract
Let H be a graph of order n with m edges. Let di=d(vi) be the degree of the vertex vi. The extended adjacency matrix Aex(H) of H is an [...] Read more.
Let H be a graph of order n with m edges. Let di = d(vi) be the degree of the vertex vi. The extended adjacency matrix Aex(H) of H is an n×n matrix defined as Aex(H) = (bij), where bij = (1/2)(di/dj + dj/di) whenever vi and vj are adjacent, and zero otherwise. The largest eigenvalue of Aex(H) is called the extended adjacency spectral radius of H, and the sum of the absolute values of its eigenvalues is called the extended adjacency energy of H. In this paper, we obtain some sharp upper and lower bounds for the extended adjacency spectral radius in terms of different graph parameters and characterize the extremal graphs attaining these bounds. We also obtain some new bounds for the extended adjacency energy of a graph and characterize the extremal graphs attaining them. In both cases, we show that our bounds improve on some bounds already known in the literature.
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
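The definition above translates directly into code. The sketch below computes Aex(H), its spectral radius, and its energy from a 0/1 adjacency matrix (assuming no isolated vertices, so the degree ratios are defined); for a regular graph Aex equals the ordinary adjacency matrix, which gives an easy check.

```python
import numpy as np

def extended_adjacency(A):
    """Extended adjacency matrix: b_ij = 0.5*(d_i/d_j + d_j/d_i) for
    adjacent vertices, 0 otherwise. Assumes a simple graph with no
    isolated vertices (every degree positive)."""
    A = np.asarray(A, float)
    d = A.sum(axis=1)
    ratio = 0.5 * (d[:, None] / d[None, :] + d[None, :] / d[:, None])
    return A * ratio  # zero entries of A zero out the non-edges

def spectral_radius_and_energy(A):
    """Largest eigenvalue and sum of absolute eigenvalues of Aex(H)."""
    vals = np.linalg.eigvalsh(extended_adjacency(A))
    return vals.max(), np.abs(vals).sum()
```

For the complete graph K3 the eigenvalues are 2, -1, -1, so the extended adjacency spectral radius is 2 and the energy is 4.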
33 pages, 1650 KB  
Article
Approximate Closed-Form Solutions for Pricing Zero-Coupon Bonds in the Zero Lower Bound Framework
by Jae-Yun Jun and Yves Rakotondratsimba
Mathematics 2024, 12(17), 2690; https://doi.org/10.3390/math12172690 - 29 Aug 2024
Viewed by 1577
Abstract
After the 2007 financial crisis, many central banks adopted policies to lower their interest rates, whose dynamics cannot be captured using classical models. Recently, Meucci and Loregian proposed an approach to estimate nonnegative interest rates using the inverse-call transformation. Although their work is distinguished from others in the literature by its consideration of practical aspects, some technical difficulties remain, such as the lack of an analytic expression for the zero-coupon bond (ZCB) price. In this work, we propose novel approximate closed-form solutions for the ZCB price in the zero lower bound (ZLB) framework, when the underlying shadow rate is assumed to follow the classical one-factor Vasicek model. A filtering procedure is then performed using the Unscented Kalman Filter (UKF) to estimate the unobservable state variable (the shadow rate), and the model is calibrated by estimating its parameters with the Particle Swarm Optimization (PSO) algorithm. Further, empirical illustrations are given and discussed using (as input data) the interest rates of the AAA-rated bonds compiled by the European Central Bank from 6 September 2004 to 21 June 2012 (a period within the ZLB framework). Our approximate closed-form solution shows a good match between the actual and estimated yield values for short and medium times to maturity, whereas for long times to maturity it captures the trend of the yield rates.
(This article belongs to the Special Issue Optimization Methods in Engineering Mathematics)
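The classical one-factor Vasicek model underlying the shadow rate above does admit a well-known closed-form ZCB price; the sketch below implements that textbook formula only. The ZLB/inverse-call layer and the paper's approximate solutions are not reproduced, and the parameter values in the check are illustrative.

```python
from math import exp

def vasicek_zcb_price(r0, tau, a, b, sigma):
    """Closed-form ZCB price under dr = a*(b - r) dt + sigma dW (a > 0):
    P(tau) = A(tau) * exp(-B(tau) * r0), with
    B = (1 - exp(-a*tau)) / a and
    A = exp((B - tau)*(a^2*b - sigma^2/2)/a^2 - sigma^2*B^2/(4*a))."""
    B = (1.0 - exp(-a * tau)) / a
    A = exp((B - tau) * (a * a * b - 0.5 * sigma * sigma) / (a * a)
            - sigma * sigma * B * B / (4.0 * a))
    return A * exp(-B * r0)
```

A useful sanity check: with sigma = 0 and r0 = b the short rate stays constant at b, so the price must reduce to exp(-b*tau).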
