Search Results (246)

Search Parameters:
Keywords = claim frequency

14 pages, 1317 KB  
Article
Cost-Engineering Analysis of Radio Frequency Plus Heat for In-Shell Egg Pasteurization
by Daniela Bermudez-Aguirre, Joseph Sites, Sudarsan Mukhopadhyay and Brendan A. Niemira
Processes 2026, 14(2), 379; https://doi.org/10.3390/pr14020379 - 22 Jan 2026
Viewed by 24
Abstract
Salmonella spp. is a pathogenic microorganism linked to eggs and egg products. In-shell eggs are not required to be pasteurized in any country before they reach the consumer. An emerging technology known as radio frequency has been used successfully to inactivate this pathogen inside in-shell eggs and meet pasteurization standards (5-log reduction). The objective of this manuscript was to conduct an engineering cost analysis of egg processing using a radio frequency pasteurizer and compare the processing cost with conventional thermal pasteurization for in-shell eggs. The ARS-patented radio frequency pasteurizer (40.68 MHz, 35 W) was used to pasteurize eggs in 24.5 min. Conventional thermal pasteurization (56.7 °C) required 60 min for the same level of inactivation. The techno-economic analysis (TEA) drew on information from stakeholders, egg processors, and equipment manufacturers, combined with energy balances and some key assumptions. Engineering cost calculations based on the required energy for each system showed that radio frequency required a third of the annual electricity cost of thermal pasteurization, based on utility costs in Pennsylvania (PA). Other utilities, such as water and steam, were also minor for radio frequency pasteurization. After two years of operation, the projected additional processing cost is ~USD 0.19 per egg for the radio frequency system, compared with USD 0.22 per egg for conventional thermal treatment, largely due to volume-based amortization of capital costs and lower annual operating costs for the RF process. Radio frequency could thus be an option to pasteurize eggs on farms in PA and potentially in other states, using the system developed by our research team, while reducing energy consumption and increasing return on investment.
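As a quick sanity check on the figures quoted above, a minimal Python sketch; only the RF power, the processing time, and the 5-log criterion come from the abstract, and the thermal route's power draw is not given there, so no kWh comparison is attempted:

```python
# Figures from the abstract: 35 W RF applicator, 24.5 min per cycle, 5-log target.
RF_POWER_W = 35.0
RF_TIME_MIN = 24.5

rf_energy_kwh = RF_POWER_W * (RF_TIME_MIN / 60.0) / 1000.0
print(f"RF electrical energy per cycle: {rf_energy_kwh:.4f} kWh")  # ~0.0143 kWh

# A 5-log reduction means a 1e-5 survival fraction, i.e., 99.999% inactivation.
survivors_per_million_cfu = 1e6 * 10 ** -5
print(f"Survivors per 1e6 CFU: {survivors_per_million_cfu:.0f}")   # 10
```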

26 pages, 766 KB  
Article
Regression Extensions of the New Polynomial Exponential Distribution: NPED-GLM and Poisson–NPED Count Models with Applications in Engineering and Insurance
by Halim Zeghdoudi, Sandra S. Ferreira, Vinoth Raman and Dário Ferreira
Computation 2026, 14(1), 26; https://doi.org/10.3390/computation14010026 - 21 Jan 2026
Viewed by 168
Abstract
The New Polynomial Exponential Distribution (NPED), introduced by Beghriche et al. (2022), provides a flexible one-parameter family capable of representing diverse hazard shapes and heavy-tailed behavior. Regression frameworks based on the NPED, however, have not yet been established. This paper introduces two methodological extensions: (i) a generalized linear model (NPED-GLM) in which the distribution parameter depends on covariates, and (ii) a Poisson–NPED count regression model suitable for overdispersed and heavy-tailed count data. Likelihood-based inference, asymptotic properties, and simulation studies are developed to investigate the performance of the estimators. Applications to engineering failure-count data and insurance claim frequencies illustrate the advantages of the proposed models relative to classical Poisson, negative binomial, and Poisson–Lindley regressions. These developments substantially broaden the applicability of the NPED in actuarial science, reliability engineering, and applied statistics.
(This article belongs to the Section Computational Engineering)
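For orientation, a minimal sketch of the classical benchmarks named in the abstract above: Poisson and negative binomial count regressions fit to simulated overdispersed data. The NPED-GLM and Poisson–NPED likelihoods themselves are specified in the paper and are not reproduced here; all data and coefficients below are illustrative:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
X = sm.add_constant(x)

# Gamma-mixed Poisson rates give overdispersed (negative-binomial-type) counts
lam = np.exp(0.3 + 0.5 * x) * rng.gamma(shape=2.0, scale=0.5, size=n)
y = rng.poisson(lam)

poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
negbin_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(f"Poisson AIC: {poisson_fit.aic:.1f}  NegBin AIC: {negbin_fit.aic:.1f}")
```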

27 pages, 2838 KB  
Article
An Empirical Analysis of Running-Behavior Influencing Factors for Crashes with Different Economic Losses
by Peng Song, Yiping Wu, Hongpeng Zhang, Jian Rong, Ning Zhang, Jun Ma and Xiaoheng Sun
Urban Sci. 2026, 10(1), 45; https://doi.org/10.3390/urbansci10010045 - 12 Jan 2026
Viewed by 165
Abstract
Miniature commercial trucks constitute a critical component of urban freight systems but face elevated crash risk due to distinctive driving patterns, frequent operation, and variable loads. This study quantifies how long-term and short-term driving behaviors jointly shape crash economic loss levels and identifies factors most strongly associated with severe claims. A driver-level dataset linking multi-source running-behavior indicators, vehicle attributes, and insurance claims is constructed, and an enhanced Wasserstein generative adversarial network with Euclidean distance is employed to synthesize minority crash samples and alleviate class imbalance. Crash economic loss levels are modeled using a random-effects generalized ordinal logit specification, and model performance is compared with a generalized ordered logit benchmark. Marginal effects analysis is used to evaluate the influence of pre-collision driving states (straight, turning, reversing, rolling, following closely) and key behavioral indicators. Results indicate significant effects of inter-provincial duration and count ratios, morning and empty-trip frequencies, no-claim discount coefficients, and vehicle age on crash economic loss, with prolonged speeding duration and fatigued-driving mileage associated with major losses, whereas frequent speeding and fatigue episodes are primarily linked to minor claims. These findings clarify causal patterns for miniature commercial truck crashes with different economic losses and provide an empirical basis for targeted safety interventions and refined insurance pricing.
(This article belongs to the Special Issue Urban Traffic Control and Innovative Planning)
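A hedged sketch of the ordered-logit benchmark mentioned in the abstract above, fit on simulated data with statsmodels; the paper's random-effects generalized specification and WGAN-based resampling are not reproduced, and all variable names and effect sizes are invented for illustration:

```python
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 1500
speeding_dur = rng.exponential(1.0, size=n)   # hypothetical behavior indicator
vehicle_age = rng.integers(0, 10, size=n).astype(float)

# Latent crash-loss propensity with logistic noise, cut into three loss levels
latent = 0.8 * speeding_dur + 0.2 * vehicle_age + rng.logistic(size=n)
loss_level = np.digitize(latent, [1.5, 3.5])  # 0 = minor, 1 = moderate, 2 = major

X = np.column_stack([speeding_dur, vehicle_age])  # no constant for OrderedModel
fit = OrderedModel(loss_level, X, distr="logit").fit(method="bfgs", disp=False)
print(fit.params.round(3))
```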

24 pages, 1781 KB  
Article
Embedding Time-Frequency Transform Neural Networks for Efficient Fault Diagnosis
by Yanfeng Chai, Lihong Zhang, Rui Zhang, Qiang Zhang and Yang Zhang
Appl. Sci. 2025, 15(24), 13082; https://doi.org/10.3390/app152413082 - 12 Dec 2025
Viewed by 458
Abstract
To address the challenges of non-stationary signals, noise corruption, and limited interpretability in intelligent fault diagnosis, we propose a Time-Frequency Dual Transformation (TFDT) framework. By embedding a trainable Time-Frequency Convolution (TFConv) layer into a convolutional neural network, TFDT integrates physics-informed time-frequency transforms—STFT, Morlet wavelet, and Chirplet—into the learning process. This allows the model to adaptively extract fault-sensitive features while maintaining physical interpretability. Experimental results on the CWRU dataset show that TFDT outperforms standard CNNs and fixed-transform pipelines in accuracy, convergence speed, and robustness under noisy conditions. Ablation studies confirm the critical role of the TFConv layer, and t-SNE visualizations reveal discriminative and compact feature clusters, supporting the model’s interpretability claims.
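To make the TFConv idea concrete, a minimal sketch of seeding 1-D convolution kernels with Morlet wavelets at several center frequencies. This illustrates the concept of a physics-informed filterbank, not the authors' implementation; the kernel length and the chosen center frequencies are assumptions (12 kHz matches the common CWRU sampling rate):

```python
import numpy as np

def morlet_kernel(fc: float, fs: float, length: int = 256) -> np.ndarray:
    """Real Morlet wavelet at center frequency fc (Hz), sampled at fs (Hz)."""
    t = (np.arange(length) - length // 2) / fs
    sigma = 1.0 / fc                      # envelope width tied to center frequency
    return np.exp(-0.5 * (t / sigma) ** 2) * np.cos(2.0 * np.pi * fc * t)

fs = 12_000.0                             # assumed sampling rate (CWRU uses 12 kHz)
bank = np.stack([morlet_kernel(fc, fs) for fc in (500.0, 1000.0, 2000.0)])

# Convolving a vibration signal with each kernel yields one band-focused channel;
# in a TFDT-like model these kernels would be trainable parameters, not fixed.
signal = np.random.default_rng(0).normal(size=4096)
channels = np.stack([np.convolve(signal, k, mode="same") for k in bank])
print(channels.shape)                     # (3, 4096)
```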

19 pages, 280 KB  
Article
Determinants and Transmission Channels of Financial Cycle Synchronization in EU Member States
by Matei-Nicolae Kubinschi, Robert-Adrian Grecu and Nicoleta Sîrbu
J. Risk Financial Manag. 2025, 18(12), 690; https://doi.org/10.3390/jrfm18120690 - 3 Dec 2025
Viewed by 426
Abstract
This paper investigates the determinants and transmission channels underlying the synchronization between financial and business cycles across European Union (EU) member states. For the empirical approach, we combine frequency-domain filtering techniques with spillover index analysis to track cross-country macro-financial interlinkages. We measure financial cycle correlations and spillovers in terms of common exposures to trade linkages, overlapping systemic risk episodes, and bilateral financial claims. An important finding is that financial and business cycles tend to move together, largely due to shared macro-financial conditions and systemic stress episodes. While the data reveal strong co-movement between these cycles, the analysis does not imply a specific direction of causality. In particular, it remains possible that shifts in financial conditions can amplify or even precede business-cycle fluctuations, as seen during major crises. The focus of this study is, therefore, on the interdependence and synchronization of these cycles rather than on causal sequencing. The analysis combines complementary filtering and variance-decomposition methods to quantify the interdependencies shaping EU financial stability, providing a basis for enhanced macroprudential policy coordination. The policy implications for macroprudential authorities entail taking into account cross-border effects and spillovers when implementing instruments for taming the financial cycle.
(This article belongs to the Special Issue Business, Finance, and Economic Development)
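A minimal sketch of a spillover index in the Diebold–Yilmaz spirit, the style of measure that underlies spillover-index analyses of this kind: fit a VAR, take the forecast-error variance decomposition, and report the off-diagonal (cross-variable) share. The paper's filtering and identification choices are not reproduced; the three simulated series below stand in for countries:

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
T, k = 400, 3                                  # periods, "countries"
A = np.array([[0.5, 0.2, 0.0],
              [0.1, 0.4, 0.2],
              [0.0, 0.2, 0.5]])                # cross-country dependence
data = np.zeros((T, k))
for t in range(1, T):
    data[t] = data[t - 1] @ A.T + rng.normal(scale=0.5, size=k)

res = VAR(data).fit(maxlags=2)
shares = res.fevd(10).decomp[:, -1, :]         # (k, k) variance shares, horizon 10
spillover = 100.0 * (1.0 - np.trace(shares) / k)
print(f"Total spillover index: {spillover:.1f}%")
```
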
24 pages, 1845 KB  
Review
Conundrum of Hydrologic Research: Insights from the Evolution of Flood Frequency Analysis
by Fahmidah Ummul Ashraf, William H. Pennock and Ashish D. Borgaonkar
CivilEng 2025, 6(4), 66; https://doi.org/10.3390/civileng6040066 - 2 Dec 2025
Viewed by 759
Abstract
Given the apparent gap between scientific research and engineering practice, this paper tracks the dominant perspectives that have shaped the growth of hydrological research. Based on five eras, each dominated by specific paradigms and/or ideologies, this paper highlights the punctuated growth of flood frequency analysis compared with the enormous progress claimed for hydrological modeling over the 20th century. The historical narrative underpinning this inquiry indicates that progress in hydrological understanding can be characterized by two contrasting claims: modeling breakthroughs and inconclusive results. Contradictory statistical assumptions, complex modeling structures, the standardization of specific techniques, and the absence of any unified physical meaning of the research results brought an apparent conflict between the scope of hydrologic research and the scope of end users, i.e., civil engineers. Some hydrologists argue that the debates associated with hydrologic progress, i.e., the evolution of statistical methods, dating back to the 1960s remain unaddressed, with each era introducing additional uncertainty, questions, and concerns. For progress to happen, synthesis is needed among scientists, engineers, and stakeholders. This paper concludes that, much as physicists acknowledge the conflicts between quantum and Newtonian physics, hydrology too can benefit from acknowledging divergent principles emerging from engineering practice. While many advanced analytical tools—though varied in form—are grounded in the assumption that past data can predict future conditions, the contrasting view that past data cannot always do so represents a key philosophical foundation for resilience-based civil engineering design. Acknowledging contrasting philosophies describing the nature of reality can help illuminate the conundrum in the scope of hydrological research and can enable synthesis activities aimed at ‘putting the puzzle together’.
(This article belongs to the Section Water Resources and Coastal Engineering)

19 pages, 6576 KB  
Article
Adaptive Fuzzy Fixed-Time Trajectory Tracking Control for a Piezoelectric-Driven Microinjector
by Rungeng Zhang, Zehao Wu, Weijian Zhang and Qingsong Xu
Micromachines 2025, 16(12), 1332; https://doi.org/10.3390/mi16121332 - 26 Nov 2025
Viewed by 331
Abstract
This paper proposes an adaptive fuzzy fixed-time control (AF-FxT-C) scheme for a piezoelectric-driven microinjector. The inherent hysteresis of the piezoelectric actuator is treated as an unknown nonlinearity. A fuzzy logic system is employed to approximate this hysteresis, along with other lumped disturbances, while an adaptive law is designed to improve approximation accuracy. To address the challenge of inconsistent initial states caused by frequent start-stop operations, a fixed-time control law is developed via a second-order backstepping approach. This guarantees that the upper bound of the system’s settling time is independent of the initial conditions, which is a claim rigorously substantiated by a theoretical stability analysis. The simulation and experimental results validate the effectiveness of the proposed method. The method also maintains robust tracking performance across reference signals of varying frequencies and amplitudes, demonstrating its potential for industrial microinjection applications.

17 pages, 3563 KB  
Article
Using Sphere Symmetry Breaking to Calculate SCHENBERG’s Antenna Quadrupolar Frequencies
by Natan Vanelli Garcia, Fabio da Silva Bortoli, Nadja Simao Magalhaes, Sergio Turano de Souza and Carlos Frajuca
Symmetry 2025, 17(11), 1871; https://doi.org/10.3390/sym17111871 - 5 Nov 2025
Viewed by 293
Abstract
Gravitational waves (GW) play an important role in the understanding of several astrophysical objects, like neutron stars and black holes. One technology used to detect them involves massive objects that vibrate as GW cross them, and the detectors built are, accordingly, of the resonant-mass type. SCHENBERG is a resonant-mass GW detector, built in Brazil, whose antenna is a spherical mass, 65 cm in diameter, made of a CuAl alloy; its quadrupole vibrational modes would be excited by GW, as predicted by general relativity. The chosen alloy can be cooled down to mK temperatures while retaining a good mechanical quality factor. The quadrupole mode frequencies were measured at 4 K, and a frequency band of about 67.5 Hz was found, but when the antenna was simulated in SolidWorks FEM software version 2010–2011 (as well as in Ansys SpaceClaim™), the band obtained for a free sphere was different—around 30 Hz. When the holes for the suspension were included in the simulation, the same discrepancy persisted. In this work, gravity was included in the FEM simulation, and we show that the bandwidth results are even smaller. We were then able to obtain a bandwidth close to the measured one by including a small deviation from the vertical axle, as well as variations in the sphere microstructure, assumptions that break the symmetry of a perfect, homogeneous free sphere. We believe that the microstructure variations are due to differences in the cooling time during the sphere casting; to preserve a good mechanical quality factor, the sphere was not subjected to homogenization. With these additions to the FEM simulation, a reasonable frequency distribution was found, consistent with the one measured for SCHENBERG’s antenna.
(This article belongs to the Section Physics)

20 pages, 526 KB  
Article
Chain Ladder Under Aggregation of Calendar Periods
by Greg Taylor
Risks 2025, 13(11), 215; https://doi.org/10.3390/risks13110215 - 3 Nov 2025
Viewed by 407
Abstract
The chain ladder model is defined by a set of assumptions about the claim array to which it is applied. It is, in practice, applied to claim arrays whose data relate to different frequencies, e.g., yearly, quarterly, monthly, weekly, etc. There is sometimes a tacit assumption that one can shift between these frequencies at will, and that the model will remain applicable. It is not obvious that this is the case. One needs to check whether a model whose assumptions hold for annual data will continue to hold for a quarterly (for example) representation of the same data. The present paper studies this question in the case of preservation of calendar periods, i.e., (in the example) annual calendar periods are dissected into quarters. The study covers the two most common forms of chain ladder model, namely the Tweedie chain ladder and Mack chain ladder. The conclusion is broadly, if not absolutely, negative. Certain parameter sets can indeed be found for which the chain ladder structure is maintained under a change in data frequency. However, while it may be technically possible to maintain the chain ladder model under such a change to the data, it is not possible in any reasonable, practical sense.
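For readers unfamiliar with the model under discussion, a minimal sketch of the mechanical core of the chain ladder: volume-weighted development factors applied to a cumulative claims triangle. The Tweedie and Mack variants studied in the paper add distributional assumptions on top of this core, and the triangle below is illustrative:

```python
import numpy as np

# Cumulative claims triangle: rows = accident years, columns = development periods.
tri = np.array([
    [100.0, 150.0, 170.0],
    [110.0, 160.0, np.nan],
    [120.0, np.nan, np.nan],
])

for j in range(tri.shape[1] - 1):
    known = ~np.isnan(tri[:, j + 1])
    f = tri[known, j + 1].sum() / tri[known, j].sum()    # development factor j -> j+1
    todo = np.isnan(tri[:, j + 1]) & ~np.isnan(tri[:, j])
    tri[todo, j + 1] = tri[todo, j] * f                  # project the missing cells

print(tri[:, -1])   # ultimate claims per accident year
```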

30 pages, 1354 KB  
Article
Driving Behavior and Insurance Pricing: A Framework for Analysis and Some Evidence from Italian Data Using Zero-Inflated Poisson (ZIP) Models
by Paola Fersini, Michele Longo and Giuseppe Melisi
Risks 2025, 13(11), 214; https://doi.org/10.3390/risks13110214 - 3 Nov 2025
Viewed by 2408
Abstract
Usage-Based Insurance (UBI), also referred to as telematics-based insurance, has been experiencing a growing global diffusion. In addition to being well established in countries such as Italy, the United States, and the United Kingdom, UBI adoption is also accelerating in emerging markets such as Japan, South Africa, and Brazil. In Japan, telematics insurance has shown significant growth in recent years, with a steadily increasing subscription rate. In South Africa, UBI adoption ranks among the highest worldwide, with market penetration placing the country among the top three globally, just after the United States and Italy. In Brazil, UBI adoption is expanding, supported by government initiatives promoting road safety and innovation in the insurance sector. According to a MarketsandMarkets report of February 2025, the global UBI market is expected to grow from USD 43.38 billion in 2023 to USD 70.46 billion by 2030, with a compound annual growth rate (CAGR) of 7.2% over the forecast period. This growth is driven by the increasing adoption of both electric and internal combustion vehicles equipped with integrated telematics systems, which enable insurers to collect data on driving behavior and to tailor insurance premiums accordingly. In this paper, we analyze a large dataset consisting of trips recorded over five years from 100,000 policyholders across the Italian territory through the installation of black-box devices. Using univariate and multivariate statistical analyses, as well as Generalized Linear Models (GLMs) with Zero-Inflated Poisson distribution, we examine claims frequency and assess the relevance of various synthetic indicators of driving behavior, with the aim of identifying those that are most significant for insurance pricing.
(This article belongs to the Special Issue Innovations in Non-Life Insurance Pricing and Reserving)
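A hedged sketch of the modeling step described above: a zero-inflated Poisson regression of claim counts on a single synthetic telematics indicator, using statsmodels. The variable names, the zero-inflation share, and all coefficients are illustrative, not the paper's:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(3)
n = 5000
risk_score = rng.normal(size=n)               # synthetic telematics indicator
X = sm.add_constant(risk_score)

# Structural zeros (a "never-claims" pool) plus Poisson counts for the rest
never_claims = rng.random(n) < 0.6
lam = np.exp(-1.0 + 0.4 * risk_score)
claims = np.where(never_claims, 0, rng.poisson(lam))

fit = ZeroInflatedPoisson(claims, X, exog_infl=np.ones((n, 1))).fit(disp=False)
print(fit.params.round(3))    # [inflation intercept, count intercept, risk effect]
```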

21 pages, 2898 KB  
Article
Natural Language Processing-Based Model for Litigation Outcome Prediction: Decision-Making Support for Residential Building Defect Alternative Dispute Resolution
by Chang-won Jung, Jae-jun Kim and Joo-sung Lee
Appl. Sci. 2025, 15(21), 11565; https://doi.org/10.3390/app152111565 - 29 Oct 2025
Viewed by 641
Abstract
Defects occurring during the maintenance phase of residential buildings not only undermine the quality of life of residents but also lead to disputes with contractors, which often escalate into litigation rather than being resolved through alternative dispute resolution (ADR), thereby increasing social and economic burdens. While previous studies have mainly focused on identifying the causes of defects, developing classification systems, and improving institutional frameworks, few have sought to predict litigation outcomes from precedent data to support decision-making during pre-litigation dispute resolution. This paper proposes a natural language processing-based multimodal and multitask prediction model that learns from precedent data using information available prior to litigation, such as the claims and evidence of plaintiffs and defendants and the claimed amounts. The proposed model simultaneously predicts judgment outcomes and grant ratios in defect-related disputes and can help to enhance the persuasiveness and voluntariness of ADR by informing parties about the likelihood of settlement and the potential risks of litigation. Furthermore, this paper proposes a decision-support framework for rational and evidence-based dispute resolution which can reduce stakeholder uncertainty and ultimately lower the frequency of litigation related to residential building defects.
(This article belongs to the Special Issue Applied Computer Methods in Building Engineering)
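As a toy illustration of the two-target idea described above, predicting both the judgment outcome and the grant ratio from claim text: a minimal scikit-learn sketch with TF-IDF features and two separate heads. The paper's multimodal, multitask architecture is not reproduced, and all example claims and labels are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, Ridge

# Hypothetical pre-litigation claim texts with outcome labels and grant ratios
claims = [
    "water leakage in bathroom ceiling, repair cost claimed",
    "cracks in load-bearing wall, safety inspection demanded",
    "defective window sealing, heating cost compensation",
]
outcome = [1, 1, 0]            # 1 = claim (partially) granted, invented labels
grant_ratio = [0.6, 0.8, 0.0]  # granted amount / claimed amount, invented

X = TfidfVectorizer().fit_transform(claims)
clf = LogisticRegression().fit(X, outcome)       # head 1: judgment outcome
reg = Ridge().fit(X, grant_ratio)                # head 2: grant ratio
print(clf.predict(X), reg.predict(X).round(2))
```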

27 pages, 596 KB  
Article
Inherent Addiction Mechanisms in Video Games’ Gacha
by Sagguneswaraan Thavamuni, Mohd Nor Akmal Khalid and Hiroyuki Iida
Information 2025, 16(10), 890; https://doi.org/10.3390/info16100890 - 13 Oct 2025
Viewed by 5800
Abstract
Gacha games, particularly those using Free-to-Play (F2P) models, have become increasingly popular yet controversial due to their addictive mechanics, often likened to gambling. This study investigates the inherent addictive mechanisms of Gacha games, focusing on Genshin Impact, a leading title in the genre. We analyze the interplay between reward frequency, game attractiveness, and player addiction using the Game Refinement theory and the Motion in Mind framework. Our analysis identifies a critical threshold at approximately 55 pulls per rare item (N ≈ 55), with a corresponding gravity-in-mind value of 7.4. Beyond this point, the system exhibits gambling-like dynamics, as indicated by Game Refinement and Motion in Mind metrics. This threshold was measured using empirical gacha data collected from Genshin Impact players and analyzed through theoretical models. While not claiming direct causal evidence of player behavior change, the results highlight a measurable boundary where structural design risks fostering addiction-like compulsion. The study contributes theoretical insights with ethical implications for game design, by identifying critical thresholds in reward frequency and game dynamics that mark the shift toward gambling-like reinforcement. The methodologies, including quantitative analysis and empirical data, ensure robust results contributing to responsible digital entertainment discourse.
(This article belongs to the Special Issue Artificial Intelligence Methods for Human-Computer Interaction)

16 pages, 313 KB  
Article
The Virgin Mary’s Image Usage in Albigensian Crusade Primary Sources
by Eray Özer and Meryem Gürbüz
Histories 2025, 5(4), 49; https://doi.org/10.3390/histories5040049 - 10 Oct 2025
Viewed by 1378
Abstract
The image of the Virgin Mary appears with increasing frequency in written sources from the 12th and 13th centuries compared to earlier periods. Three major works produced by four eyewitness authors of the Albigensian Crusade (Historia Albigensis, Chronica, and Canso de la Crozada) reflect on and respond to this popular theme. These sources focus on the Albigensian Crusade against heretical groups, particularly the Cathars, and employ the Virgin Mary motif for various purposes. The Virgin Mary is presented as a Catholic model for women drawn to Catharism (a movement in which female spiritual leadership was also present), as a divine protector of the just side in war, and as a means of legitimizing the authors’ claims. While Mary appears sporadically in Peter of Vaux-de-Cernay’s Historia Albigensis, she is extensively invoked in the Canso by both William and his anonymous successor. In contrast, the image of the Virgin Mary is scarcely mentioned in Chronica, likely due to the narrative’s intended audience and objectives. This article aims to provide a comparative analysis of how the image of the Virgin Mary is utilized in these primary sources from the Albigensian Crusade and to offer a new perspective on the relationship between historical events and authors’ intentions, laying the groundwork for further research.
(This article belongs to the Section Cultural History)
17 pages, 803 KB  
Article
Bootstrap Initialization of MLE for Infinite Mixture Distributions with Applications in Insurance Data
by Aceng Komarudin Mutaqin
Risks 2025, 13(10), 196; https://doi.org/10.3390/risks13100196 - 4 Oct 2025
Viewed by 747
Abstract
Maximum likelihood estimation (MLE) in infinite mixture distributions often lacks closed-form solutions, requiring numerical methods such as the Newton–Raphson algorithm. Selecting appropriate initial values is a critical challenge in these procedures. This study introduces a bootstrap-based approach to determine initial parameter values for MLE, employing both nonparametric and parametric bootstrap methods to generate the mixing distribution. Monte Carlo simulations across multiple cases demonstrate that the bootstrap-based approaches, especially the nonparametric bootstrap, provide reliable and efficient initialization and yield consistent maximum likelihood estimates even when raw moments are undefined. The practical applicability of the method is illustrated using three empirical datasets: third-party liability claims in Indonesia, automobile insurance claim frequency in Australia, and total car accident costs in Spain. The results indicate stable convergence, accurate parameter estimation, and improved reliability for actuarial applications, including premium calculation and risk assessment. The proposed approach offers a robust and versatile tool both for research and in practice in complex or nonstandard mixture distributions.
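A minimal sketch of the initialization idea described above, using a negative binomial (a Poisson–gamma infinite mixture) as a stand-in target: bootstrap the sample moments to form starting values, then run numerical MLE. The paper treats more general mixtures and also a parametric bootstrap; everything below is illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

rng = np.random.default_rng(4)
counts = rng.negative_binomial(n=2.0, p=0.4, size=800)   # synthetic claim counts

# Nonparametric bootstrap of mean/variance -> method-of-moments starting values
boot = rng.choice(counts, size=(200, counts.size), replace=True)
m, v = boot.mean(axis=1).mean(), boot.var(axis=1).mean()
r0 = m**2 / max(v - m, 1e-6)     # MoM for NB: r = m^2 / (v - m)
p0 = m / v                       # MoM for NB: p = m / v

def nll(theta):
    r, p = theta
    if r <= 0 or not 0.0 < p < 1.0:
        return np.inf            # keep the search inside the parameter space
    return -nbinom.logpmf(counts, r, p).sum()

fit = minimize(nll, x0=[r0, p0], method="Nelder-Mead")
print("start:", (round(r0, 3), round(p0, 3)), "MLE:", fit.x.round(3))
```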

27 pages, 1024 KB  
Review
Audio-Visual Entrainment Neuromodulation: A Review of Technical and Functional Aspects
by Masoud Rahmani, Leonor Josefina Romero Lauro and Alberto Pisoni
Brain Sci. 2025, 15(10), 1070; https://doi.org/10.3390/brainsci15101070 - 30 Sep 2025
Viewed by 2225
Abstract
Audiovisual Entrainment (AVE) is a non-invasive, non-pharmacological neuromodulation approach that aims to align brain activity with externally delivered auditory and visual rhythms. This review surveys AVE’s historical development, technical parameters (e.g., frequency, phase, waveform, color, intensity, presentation mode), components and delivery methods, reported clinical applications, and safety considerations. Given the heterogeneity of AVE protocols and terminology, we conducted a structured narrative review (PubMed, Scopus, Google Scholar; earliest records to July 2025), including human and animal studies that met an operational definition of regulated AVE: consistent administration of specified auditory and visual frequencies, with critical methodological details reported. We highlight AVE’s accessibility and versatility, outline a stepwise parameter reporting framework to support standardization, and discuss putative mechanisms via sensory and oscillatory pathways. However, current findings are heterogeneous and include null or limited effects. Mechanistic understanding and parameter optimization remain insufficiently developed, and premature claims of efficacy are not warranted. Rigorous, standardized, and adequately controlled studies are needed before AVE can be considered a reliable therapeutic tool.
(This article belongs to the Section Neuropsychology)