Search Results (1,608)

Search Parameters:
Keywords = Monte Carlo algorithm

20 pages, 2072 KB  
Article
Carbon Balance Analysis of Agricultural Production System in Lanzhou City (2000–2023)
by Jinxiang Wang, Xu Cui, Panliang Liu, Yaling Zhao, Guohua Chang, Chao Wang, Liyang Xue, Yabian Wang and Tianpeng Gao
Agriculture 2026, 16(10), 1080; https://doi.org/10.3390/agriculture16101080 - 15 May 2026
Abstract
Strengthening the carbon sequestration function of agriculture and reducing carbon emissions during production are critical for enhancing the carbon neutrality capacity of agricultural systems. This study focuses on Lanzhou City in the arid northwest region of China and uses the emission factor method to analyze carbon emissions and crop carbon sequestration within the local agricultural production system (2000–2023). The results indicate that plastic film and fertilizers, as agricultural production inputs, contribute substantially to the total carbon emissions of the planting industry, while the annual average carbon emissions from sheep account for approximately half of the total annual carbon emissions from animal husbandry. The annual average carbon sequestration of crops is 366,057 tons, with an average annual growth rate of 1.1%. The ratio of crop carbon sequestration to the total carbon emissions from planting and animal husbandry is approximately 2.1:1. Although the carbon sequestration of crops has increased over time, its average annual growth rate remains lower than that of carbon emissions from planting and animal husbandry, resulting in an Agricultural Sustainable Development Index of 54%. Therefore, further efforts are needed to control carbon emissions and increase the carbon sequestration capacity of crops to improve the sustainability of agricultural development in the region. Finally, a Monte Carlo algorithm is used to simulate and predict future carbon emissions from animal husbandry within the agricultural production system, thereby obtaining the relative trends in total carbon emissions from pigs, cows, and sheep over a given period. Limiting the scale and growth rate of major livestock populations can help curb the increase in carbon emissions from animal husbandry.
(This article belongs to the Section Agricultural Economics, Policies and Rural Management)
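
The Monte Carlo projection step described in this abstract can be sketched as follows; the per-head emission factors, herd sizes, and growth-rate distribution below are illustrative assumptions, not the paper's data:

```python
import random

# Assumed per-head emission factors (t CO2-eq per head per year) and 2023 herds.
EMISSION_FACTOR = {"pig": 0.40, "cow": 1.80, "sheep": 0.35}
POPULATION = {"pig": 120_000, "cow": 45_000, "sheep": 300_000}

def project_emissions(years=5, n_runs=1000, seed=42):
    """Mean projected annual emissions per species (t CO2-eq) over n_runs draws."""
    rng = random.Random(seed)
    totals = {sp: 0.0 for sp in POPULATION}
    for _ in range(n_runs):
        for sp, pop in POPULATION.items():
            g = rng.gauss(0.01, 0.02)  # annual herd growth rate, assumed N(1%, 2%)
            totals[sp] += pop * (1 + g) ** years * EMISSION_FACTOR[sp]
    return {sp: t / n_runs for sp, t in totals.items()}

means = project_emissions()
```

Each run samples a growth rate per species, compounds it over the horizon, and multiplies by the emission factor; averaging over runs gives the relative trends the abstract refers to.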

39 pages, 990 KB  
Article
Spontaneous Volunteer Task Assignment in the Acute Phase of Disaster Response: A Rolling-Horizon MIP Approach
by Berk Özel, Bülent Sezen and Yavuz Selim Balcıoğlu
Sustainability 2026, 18(10), 4915; https://doi.org/10.3390/su18104915 - 14 May 2026
Abstract
This paper presents a dynamic multi-period mixed-integer programming model for the Disaster Volunteer Task Assignment Problem (DVTAP) that advances the humanitarian logistics literature through an integrated treatment of features that have previously appeared only in isolation. Unlike prior formulations that assume volunteer surplus or steady-state conditions, our model reflects the acute-phase reality where tasks far exceed available volunteers and new task arrivals diminish over time as the disaster stabilizes. We incorporate makespan as an optimization objective alongside deprivation-weighted response time, skill matching, workload balance, and volunteer reliability. Ideal-nadir normalization ensures that all objective components contribute meaningfully regardless of their native units. The approach proceeds in two stages. First, we formulate and solve a single-period baseline MIP under volunteer surplus using the CBC solver at four scales (10 to 500 tasks). All four instances are solved to proven optimality, achieving 80 to 100% task coverage with skill-matching rates of 76.9 to 99.6%. Second, we develop a rolling-horizon algorithm that decomposes the multi-period problem into sequential epoch-level MIPs with state transitions, non-homogeneous Poisson task arrivals, fatigue accumulation, and task surplus conditions where the initial task-to-volunteer ratio exceeds 3:1. Computational experiments on three dynamic scenarios (up to 559 mean cumulative tasks) demonstrate that the algorithm achieves mean task completion rates of 84.21 ± 1.92% (Large-Dynamic), 93.74 ± 2.07% (Small-Dynamic), and 94.59 ± 2.03% (Medium-Dynamic) (mean ± standard deviation across 30 Monte Carlo replications) within a 15 h planning horizon, with per-epoch skill-matching rates of 11 to 20% (substantially lower than the static baseline due to triage-mode epochs that force all-volunteer assignment regardless of skill fit). 
The results reveal a clear regime transition: early epochs operate under severe task surplus where triage dominates, while later epochs transition to volunteer surplus where optimization of secondary objectives becomes feasible. Comparison against a skill-aware greedy heuristic confirms that the MIP’s advantage lies in global multi-objective coordination. This research contributes both a validated mathematical framework and a practical algorithmic approach for multi-period volunteer assignment under demand decay, extending prior work by Sperling and Schryen through explicit Poisson dynamics, fatigue state modeling, and makespan optimization.
(This article belongs to the Special Issue Sustainable Disaster Management and Community Resilience)
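
The ideal-nadir normalization the abstract mentions can be sketched as follows; the objective names, bounds, and weights are illustrative assumptions, not values from the paper:

```python
def ideal_nadir(value, ideal, nadir):
    """Map an objective value into [0, 1]: 0 at the ideal (best), 1 at the nadir (worst)."""
    return (value - ideal) / (nadir - ideal)

def scalarized_cost(objectives, bounds, weights):
    """Weighted sum of ideal-nadir normalized objectives, so components in
    different native units (e.g., minutes vs. hours) contribute comparably."""
    return sum(w * ideal_nadir(objectives[k], *bounds[k]) for k, w in weights.items())

# Illustrative epoch snapshot: response time in minutes, makespan in hours.
obj = {"response_time": 45.0, "makespan": 10.0}
bnd = {"response_time": (0.0, 120.0), "makespan": (4.0, 15.0)}  # (ideal, nadir)
wts = {"response_time": 0.6, "makespan": 0.4}
cost = scalarized_cost(obj, bnd, wts)
```

Because every component is rescaled to [0, 1] before weighting, no single objective dominates merely by having larger native units.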

30 pages, 2617 KB  
Article
Time-Efficient Multi-Region SAR Imaging with Heterogeneous UAVs: Joint Task Assignment and Path Planning
by Deyu Song, Xiangyin Zhang, Baichuan Wang, Yalin Zhong, Yuan Yao and Kaiyu Qin
Remote Sens. 2026, 18(10), 1558; https://doi.org/10.3390/rs18101558 - 13 May 2026
Abstract
Unmanned aerial vehicles (UAVs) provide a highly flexible platform for synthetic aperture radar (SAR), enabling efficient, high-quality imaging in remote sensing applications. In realistic imaging missions, regions of interest (ROIs) usually have different sizes and spatial distributions. While deploying SAR-UAVs with heterogeneous flight and imaging capabilities can improve mission time efficiency, realizing this improvement depends critically on task assignment and path planning. In this paper, the joint task assignment and path planning problem for heterogeneous SAR-UAVs in multi-region imaging missions is addressed. First, flight and imaging models of SAR-UAVs are established, and a constrained optimization problem is formulated to minimize the mission completion time. Then, an improved clustering strategy based on area-density and cost prediction (ADCP) is proposed to align ROI-dependent imaging workloads with heterogeneous SAR-UAV capabilities, thereby leveraging capability advantages and reducing the mission completion time. Finally, a discrete secretary bird optimization algorithm (DSBOA) is developed to generate feasible, high-quality paths. To accelerate convergence, UAV paths are encoded as waypoint sequences, and a mutation-based operator is introduced to update the population. Extensive Monte Carlo simulations show that the proposed approach consistently outperforms the baselines in mission completion time, demonstrating its effectiveness in improving time efficiency for multi-region SAR imaging missions. Ablation experiments further confirm the independent contributions of the proposed ADCP method and DSBOA algorithm. Full article
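
The waypoint-sequence encoding and mutation-based update described for the DSBOA can be sketched with a minimal swap-mutation search; the operator and the greedy acceptance rule here are illustrative simplifications, not the paper's algorithm:

```python
import random

def path_length(path, coords):
    """Total Euclidean length of a waypoint visiting order."""
    return sum(
        ((coords[a][0] - coords[b][0]) ** 2 + (coords[a][1] - coords[b][1]) ** 2) ** 0.5
        for a, b in zip(path, path[1:])
    )

def mutate(path, rng):
    """Swap two waypoints: a minimal mutation operator on the sequence encoding."""
    i, j = rng.sample(range(len(path)), 2)
    child = path[:]
    child[i], child[j] = child[j], child[i]
    return child

rng = random.Random(0)
coords = {i: (rng.random(), rng.random()) for i in range(8)}  # 8 imaging waypoints
best = list(range(8))
best_len = path_length(best, coords)
for _ in range(500):  # accept a mutation only when it shortens the path
    cand = mutate(best, rng)
    cand_len = path_length(cand, coords)
    if cand_len < best_len:
        best, best_len = cand, cand_len
```

Encoding paths as permutations keeps every candidate feasible (each waypoint visited exactly once), which is why mutation operators on the sequence are attractive for this problem class.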

30 pages, 1792 KB  
Article
Integrating ENSO Climate Risk into Flood Catastrophe Bonds for Disaster Risk Financing: An Asset-Pricing Framework
by Riza Andrian Ibrahim, Heru Santoso and Sukono
J. Risk Financial Manag. 2026, 19(5), 357; https://doi.org/10.3390/jrfm19050357 - 13 May 2026
Abstract
Empirical evidence shows that the El Niño-Southern Oscillation (ENSO) influences the frequency–damage relationship for floods. However, ENSO is generally not incorporated into indemnity-trigger modeling of Flood Catastrophe Bonds (FCBs), resulting in an incomplete representation of claim events. Therefore, this study aims to develop an FCB pricing model that incorporates ENSO as an external systematic risk factor affecting the indemnity trigger. The trigger is formulated as a doubly stochastic compound Poisson process, with its intensity modeled as an autoregressive integrated moving-average with exogenous variables. Bond prices are then derived by integrating the trigger process with the Cox-Ingersoll-Ross model under an arbitrage-free risk-neutral framework. To obtain stable numerical solutions, a Monte Carlo-based algorithm is also developed. Numerical simulations using data from Bandung Regency, Indonesia, show stable estimates under the relative Monte Carlo standard error measure. Then, incorporating ENSO empirically improves flood-intensity forecasting accuracy, as indicated by lower MAPE, MAE, RMSE, and Theil’s U. It also produces statistically significant price differences across all common maturities. This study advances the theoretical and practical pricing of FCBs by directly linking climate-driven flood intensity to indemnity triggers, equipping practitioners to quantify risk better and to set sustainable disaster risk financing, particularly in ENSO-affected regions. Full article
(This article belongs to the Special Issue Sustainable Finance and Climate Transition)
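
The Cox-Ingersoll-Ross discounting step inside a Monte Carlo bond-pricing loop can be sketched as follows; the Euler full-truncation scheme and all parameter values are illustrative assumptions, not the paper's calibration:

```python
import math, random

def simulate_cir(r0, kappa, theta, sigma, T, steps, rng):
    """One CIR short-rate path; returns the discount factor exp(-integral of r dt)."""
    dt = T / steps
    r, integral = r0, 0.0
    for _ in range(steps):
        r_pos = max(r, 0.0)  # full truncation keeps the sqrt diffusion term real
        r += kappa * (theta - r_pos) * dt + sigma * math.sqrt(r_pos * dt) * rng.gauss(0.0, 1.0)
        integral += max(r, 0.0) * dt
    return math.exp(-integral)

rng = random.Random(7)
discounts = [simulate_cir(0.05, 0.5, 0.04, 0.1, 1.0, 100, rng) for _ in range(2000)]
price = 100.0 * sum(discounts) / len(discounts)  # zero-coupon price, face value 100
```

In the full model the payoff would also depend on the indemnity-trigger process; here only the risk-free discounting leg is shown.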

56 pages, 2973 KB  
Article
On Least Squares Approximations of Shapley Values and Applications to Interpretable Machine Learning
by Tim Pollmann and Jochen Staudacher
Foundations 2026, 6(2), 18; https://doi.org/10.3390/foundations6020018 - 11 May 2026
Abstract
The Shapley value is the predominant point-valued solution concept in cooperative game theory and has recently become a foundational method in interpretable machine learning. In this domain, a prevailing strategy for circumventing the computational intractability of exact Shapley values is to approximate them via a weighted least squares optimization framework. In this paper, we investigate an existing algorithmic framework for weighted least squares Shapley approximation, assessing its feasibility for feature attribution. Methodologically, we conduct a theoretical variance analysis within a Monte Carlo sampling framework, investigate an approach for sample reuse across strata, and establish a relation to Unbiased KernelSHAP. Our analysis reveals three main findings: (i) a structural equivalence between least squares sampling and Unbiased KernelSHAP; (ii) the non-zero covariance between sampled coalitions introduced by reusing samples across strata in one of the existing least squares-based approaches; and (iii) the absence of a universally optimal sampling strategy across tasks. We validate these results empirically on several cooperative games and practical machine learning problems. Full article
(This article belongs to the Section Mathematical Sciences)
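
The quantity these least squares schemes approximate is the exact Shapley value, which for small games can be computed directly from the permutation definition; the characteristic function below is an illustrative example, not one of the paper's games:

```python
from itertools import permutations

def shapley_values(players, v):
    """Average each player's marginal contribution over all orderings."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: s / len(perms) for p, s in phi.items()}

# Example game: the coalition is worth 1 only if it contains both players 0 and 1.
v = lambda S: 1.0 if {0, 1} <= S else 0.0
phi = shapley_values([0, 1, 2], v)
```

The factorial cost of this enumeration (n! orderings) is exactly the intractability that KernelSHAP-style weighted least squares sampling is designed to avoid.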
26 pages, 6079 KB  
Article
Stochastic Optimal Scheduling Method for Vehicle–Grid Collaborative Interaction Considering Source-Load Uncertainties
by Yongbiao Yang and Haichuan Zhang
World Electr. Veh. J. 2026, 17(5), 255; https://doi.org/10.3390/wevj17050255 - 9 May 2026
Abstract
During the process of vehicle–grid interaction, the charging load of electric vehicles shows significant uncertainty, driven by multiple user behavior variables, including the differentiated characteristics of users’ daily travel needs, personalized charging habits, random charging periods, and dynamic changes in charging power demands. To address the scheduling challenges arising from this uncertainty, this paper proposes a multi-objective optimization scheduling method for vehicle–grid interaction that takes into account the uncertainty of power sources and loads. This method can enhance the economic operation level of the power grid, increase the acceptance capacity of renewable energy, and improve the stability of the system. Firstly, this paper proposes an improved K-means clustering algorithm, combined with Monte Carlo sampling, to generate and reduce scenarios for electric vehicle load and photovoltaic output. Secondly, a scheduling framework based on the vehicle–grid collaborative interaction mode is constructed, and a stochastic optimization scheduling model for the photovoltaic-storage-EV system is established. Finally, an example of a photovoltaic storage charging station in an industrial park is used for verification. The simulation results demonstrate the economic feasibility and effectiveness of this scheduling strategy.
(This article belongs to the Section Automated and Connected Vehicles)
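
The Monte Carlo scenario generation followed by clustering-based reduction can be sketched as below; the scalar demand model, plain Lloyd's algorithm, and all parameters are illustrative stand-ins for the paper's improved K-means scheme:

```python
import random

def generate_scenarios(n, rng):
    """Daily EV charging demand scenarios (kWh): assumed truncated-normal draws."""
    return [max(0.1, rng.gauss(30.0, 8.0)) for _ in range(n)]

def kmeans_1d(data, k, iters, rng):
    """Plain Lloyd's algorithm on scalars; the k centers are the reduced scenarios."""
    centers = rng.sample(data, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in data:
            groups[min(range(k), key=lambda j: abs(x - centers[j]))].append(x)
        centers = [sum(g) / len(g) if g else centers[j] for j, g in enumerate(groups)]
    return sorted(centers)

rng = random.Random(1)
scenarios = generate_scenarios(500, rng)       # Monte Carlo generation
reduced = kmeans_1d(scenarios, k=4, iters=20, rng=rng)  # reduction to 4 representatives
```

The reduced centers (plus each cluster's empirical probability) are what a stochastic scheduling model would actually optimize over, keeping the scenario tree tractable.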

24 pages, 2908 KB  
Article
Transformer-Augmented MCTS for Aircraft Landing Problem
by Jie Hu, Shuai Zhang, Xiaorong Feng and Xinglong Wang
Aerospace 2026, 13(5), 438; https://doi.org/10.3390/aerospace13050438 - 8 May 2026
Abstract
The aircraft landing problem (ALP) poses significant challenges for traditional Monte Carlo Tree Search (MCTS) due to its vast search space and reliance on inefficient random simulations. To overcome these limitations, this paper proposes a novel Transformer-Augmented Monte Carlo Tree Search (TMCTS) algorithm. Our approach integrates a reinforcement learning framework that incorporates key operational constraints, including wake turbulence separation and time windows, and employs a cost function aimed at minimizing both delay time and fuel consumption. A core innovation is the replacement of the conventional random simulation phase in MCTS with a Transformer-based value predictor. This leverages the Transformer’s superior ability to model sequences and capture global dependencies among flights, thereby dramatically accelerating search convergence. Specifically, we designed a two-head Transformer network (comprising policy and value heads) to provide informed prior knowledge, which effectively guides the selection and expansion steps of the MCTS tree. The model is trained within an Actor–Critic framework, utilizing behavior cloning for pre-training followed by reinforcement learning for fine-tuning. Experimental evaluations on the standard OR-Library benchmark demonstrate that our TMCTS method significantly reduces scheduling deviation compared to state-of-the-art baselines (including FCFS, DPALO+GA, DPALO+PSO, and CPLEX). Moreover, it achieves a 93.7% reduction in computation time relative to the CPLEX method, highlighting its superior efficiency and practical applicability for real-time scheduling. Full article
(This article belongs to the Special Issue AI, Machine Learning and Automation for Air Traffic Control (ATC))
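
The way a policy head's priors guide MCTS selection can be sketched with a PUCT-style rule; the child statistics, priors, and exploration constant below are illustrative values, not the paper's trained network outputs:

```python
import math

def puct_score(q, prior, parent_visits, visits, c_puct=1.5):
    """Mean value plus an exploration bonus scaled by the network prior."""
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + visits)

def select_child(children, parent_visits):
    """Selection step: pick the child maximizing the PUCT score."""
    return max(children, key=lambda ch: puct_score(
        ch["q"], ch["prior"], parent_visits, ch["visits"]))

# Hypothetical landing-order candidates at one tree node.
children = [
    {"name": "land_A", "q": 0.60, "prior": 0.20, "visits": 10},
    {"name": "land_B", "q": 0.55, "prior": 0.70, "visits": 2},
    {"name": "land_C", "q": 0.40, "prior": 0.10, "visits": 1},
]
best = select_child(children, parent_visits=13)
```

A high prior can outweigh a slightly lower mean value for an under-visited child, which is how the policy head steers expansion toward promising landing sequences; the value head similarly replaces random rollouts in the evaluation step.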

30 pages, 2747 KB  
Article
LF-TF-CPO: A Survivability-Oriented Min–Max Optimization Algorithm for Multi-UAV Coverage Planning in Mountainous Terrains
by Jiayong Li and Yifan Xia
Drones 2026, 10(5), 356; https://doi.org/10.3390/drones10050356 - 7 May 2026
Abstract
Multi-UAV coverage planning in complex mountainous environments is often constrained by idealized energy modeling, the “wood barrel effect” of traditional global energy minimization paradigms, and a lack of dynamic fault tolerance. To address these limitations, this study proposes a survivability-oriented Min–Max optimization architecture driven by the novel Lévy–Flight Terrain-Following Constrained Planning Optimization (LF-TF-CPO) algorithm. Coupling a high-fidelity 3D topographical matrix with a nonlinear aerodynamic energy model, the framework prioritizes individual UAV safety. Monte Carlo simulations demonstrate that LF-TF-CPO compresses the average maximum individual energy consumption to 665.64 kJ, preserving an adequate operational margin below the 950 kJ physical redline to absorb unmodeled aerodynamic perturbations while ensuring a 31.30 min mission duration. Ablation studies verify that the Min–Max objective mitigates localized overloads with a marginal 0.4% energy trade-off. Furthermore, an emergency recovery protocol validates dynamic resilience across simultaneous and cascading failures by consistently stabilizing post-failure peak loads within safe margins. Notably, statistical evaluations establish a robust empirical sweet spot [inline formula lost in source extraction].
(This article belongs to the Special Issue UAV Swarm Intelligent Control and Decision-Making)
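
The Lévy-flight perturbation in the algorithm's name can be sketched with Mantegna's generator; beta and the sample size are illustrative choices, not the paper's settings:

```python
import math, random

def levy_step(rng, beta=1.5):
    """One heavy-tailed Lévy step via Mantegna's u / |v|^(1/beta) construction."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

rng = random.Random(3)
steps = [levy_step(rng) for _ in range(5000)]
# Heavy tails: rare steps far exceed the typical (median) magnitude,
# which is what lets the search escape local optima in rugged terrain.
typical = sorted(abs(s) for s in steps)[len(steps) // 2]
extreme = max(abs(s) for s in steps)
```

Most steps stay small (local refinement) while occasional long jumps relocate the search, the standard rationale for Lévy flights in metaheuristics.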
20 pages, 2495 KB  
Article
Adaptive UAV Visual Localisation Based on Improved Gradient-Damping Newton Method
by Xunli Zhou, Ancheng Fang, Song Fu, Jiaming Liu, Xiaoge Zhang, Xiong Liao and Jianwei Zhang
Electronics 2026, 15(10), 1974; https://doi.org/10.3390/electronics15101974 - 7 May 2026
Abstract
The role of unmanned aerial vehicles (UAVs) in time-sensitive missions such as low-altitude reconnaissance and disaster rescue has gained increasing significance. To address the challenge of visual localisation for UAVs operating in complex terrains under Global Navigation Satellite System (GNSS)-denied environments, this paper proposes an improved adaptive gradient-damped Newton approach to mitigate the trade-off between terrain non-convexity and computational real-time performance. The proposed approach incorporates a terrain-gradient-based dynamic step-size adjustment mechanism that adaptively captures non-linear terrain characteristics in real time and effectively reduces the numerical oscillations typically observed in steep regions when using the standard Newton method. In addition, a tightly coupled vision–geometry framework was developed to constrain cumulative drift during long-range flight. Monte Carlo simulation results demonstrate that the proposed algorithm maintains submeter localisation accuracy while achieving approximately a three-fold improvement in computational efficiency compared with traditional grid-based methods, and a 27.4% increase in convergence speed relative to the standard Newton method. Experiments conducted under high-noise conditions and highly undulating terrains indicate that the approach exhibits strong convergence stability, offering a computationally efficient and robust solution for UAV navigation. Full article
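
A damped Newton iteration of the kind described can be sketched in one dimension; the backtracking damping schedule and the synthetic non-convex "terrain" cost are illustrative assumptions, not the paper's adaptive mechanism:

```python
import math

def damped_newton(f, df, ddf, x0, tol=1e-8, max_iter=100):
    """Newton iteration with damping: halve the step until f actually decreases."""
    x = x0
    for _ in range(max_iter):
        g, h = df(x), ddf(x)
        if abs(g) < tol:
            break
        step = g / h if h > 1e-12 else g  # fall back to a gradient step if curvature is unusable
        t = 1.0
        while f(x - t * step) > f(x) and t > 1e-6:  # damping suppresses oscillation
            t *= 0.5
        x -= t * step
    return x

# Synthetic non-convex 1-D terrain cost with a minimum near x = 1.76.
f = lambda x: (x - 2.0) ** 2 + 0.3 * math.sin(3.0 * x)
df = lambda x: 2.0 * (x - 2.0) + 0.9 * math.cos(3.0 * x)
ddf = lambda x: 2.0 - 2.7 * math.sin(3.0 * x)
x_min = damped_newton(f, df, ddf, x0=0.0)
```

The damping loop is what prevents the numerical oscillation the abstract attributes to the standard Newton method in steep regions: a full Newton step is only taken when it actually reduces the cost.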

64 pages, 788 KB  
Article
From Biased to Unbiased: Theory and Benchmarks for a New Monte Carlo Solver of Fredholm Integral Equations
by Venelin Todorov and Ivan Dimov
Axioms 2026, 15(5), 338; https://doi.org/10.3390/axioms15050338 - 4 May 2026
Abstract
We investigate biased and unbiased Monte Carlo algorithms for solving Fredholm integral equations of the second kind and for estimating linear functionals of their solutions. Fredholm integral equations provide a common mathematical framework in uncertainty quantification, Bayesian inference, physics, finance, engineering modeling, telecommunication systems, signal processing, and other applied problems where system responses depend on distributed, uncertain, or noise-affected inputs. The comparison covers Crude Monte Carlo and Markov Chain Monte Carlo baselines, modified Sobol quasi–Monte Carlo schemes (MSS variants), the classical Unbiased Stochastic Algorithm (USA), and a new variance-controlled unbiased estimator, the Novel Unbiased Stochastic Algorithm (NUSA). NUSA preserves unbiasedness via a randomized-trajectory representation while improving stability through two mechanisms: adaptive absorption control, governed by a parameter Pd that regulates the effective trajectory length, and kernel-weight normalization based on an auxiliary proposal density to curb heavy-tailed weight products. Extensive experiments in one- and multi-dimensional settings (including regular and discontinuous kernels and weak/strong coupling regimes) show that NUSA consistently reduces dispersion and achieves smaller errors than USA under identical sampling budgets. In representative tests, NUSA attains relative errors below 10⁻³ and improves average accuracy by approximately 30–50% compared with USA, while maintaining near-linear runtime scaling in N and competitive scaling with dimension. Although NUSA is moderately more expensive per run than USA, the variance reduction yields a superior accuracy–cost trade-off, especially near strong-coupling regimes and in higher dimensions where standard unbiased estimators become variance-limited.
(This article belongs to the Special Issue Numerical Analysis and Applied Mathematics, 2nd Edition)
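
The randomized-trajectory construction underlying such unbiased estimators can be sketched on the simplest case; the kernel k = 1, source f = 1, and absorption probability below are illustrative choices with a known exact answer, not the paper's test problems:

```python
import random

def trajectory_estimate(lam, p_absorb, rng):
    """One unbiased sample of phi(x) via a random trajectory with absorption.

    Collision estimator for phi(x) = f(x) + lam * integral over [0,1] of
    k(x, y) * phi(y) dy with k = 1 and f = 1, whose exact solution is the
    constant 1 / (1 - lam).
    """
    weight, total = 1.0, 0.0
    while True:
        total += weight          # score f = 1 at every collision
        if rng.random() < p_absorb:
            return total         # absorbed: the trajectory ends
        # Survive and jump to y ~ U(0, 1); for k = 1 the importance-weight
        # update is lam * k / ((1 - p_absorb) * density) = lam / (1 - p_absorb).
        weight *= lam / (1.0 - p_absorb)

rng = random.Random(11)
samples = [trajectory_estimate(lam=0.5, p_absorb=0.6, rng=rng) for _ in range(20000)]
estimate = sum(samples) / len(samples)  # exact value is 1 / (1 - 0.5) = 2
```

Each trajectory term has expectation lam^n, so the sum is an unbiased estimate of the Neumann series; the absorption probability plays the role the abstract assigns to the trajectory-length control parameter.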

27 pages, 3290 KB  
Article
Neural Network Copulas for Generating Synthetic Test Data Preserving Psychometric Properties
by Juyoung Jung, Minho Lee and Won-Chan Lee
J. Intell. 2026, 14(5), 77; https://doi.org/10.3390/jintelligence14050077 - 2 May 2026
Abstract
In intelligence research, the sharing of item response data from cognitive ability assessments is often restricted by privacy concerns, while traditional parametric simulation methods frequently fail to capture complex response dependencies. This study proposes a neural network copula (NNC) framework for generating synthetic dichotomous item response data that preserves essential psychometric properties without revealing sensitive examinee information. By decoupling the modeling of marginal item probabilities from the dependence structure using a deep autoencoder and kernel density estimation, the framework accommodates the discrete nature of binary item response data while minimizing distributional assumptions. Validation against large-scale empirical data demonstrated high correspondence across multiple facets. At the data consistency level, the NNC-based synthetic data reproduced total score distributions and inter-item correlations. Psychometrically, the method yielded consistent item characteristic curve parameter estimates, item fit statistics, and test information functions. Furthermore, Monte Carlo replications demonstrated algorithmic stability and inferential precision. Full article
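
The copula idea of decoupling marginals from dependence can be sketched with a Gaussian one-factor copula for dichotomous responses; note the paper uses a neural network and kernel density estimation, not the parametric copula shown here, and the item probabilities and latent correlation are illustrative assumptions:

```python
import math, random

def sample_binary_copula(p_items, rho, n, rng):
    """Correlated 0/1 responses: dependence lives on a latent normal scale,
    while each item's marginal probability of a correct response is preserved."""
    data = []
    for _ in range(n):
        z_common = rng.gauss(0.0, 1.0)
        row = []
        for p in p_items:
            z = math.sqrt(rho) * z_common + math.sqrt(1.0 - rho) * rng.gauss(0.0, 1.0)
            u = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # Phi(z) is uniform on [0, 1]
            row.append(1 if u < p else 0)
        data.append(row)
    return data

rng = random.Random(5)
p_items = [0.8, 0.5, 0.3]  # assumed marginal P(correct) for three items
data = sample_binary_copula(p_items, rho=0.4, n=20000, rng=rng)
observed = [sum(col) / len(data) for col in zip(*data)]
joint_01 = sum(r[0] * r[1] for r in data) / len(data)  # exceeds p0 * p1 under positive dependence
```

Because the marginal transform is exact, item difficulties are reproduced by construction while the shared latent factor induces the inter-item correlation, which is the separation the NNC framework generalizes with learned components.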

25 pages, 2126 KB  
Article
Crying Wolf in Cyberspace: A Cybersecurity Dynamics Study of Alarm Fatigue Attacks
by Enrico Barbierato
Information 2026, 17(5), 434; https://doi.org/10.3390/info17050434 - 1 May 2026
Abstract
Modern cyber-physical infrastructures rely heavily on alarm and notification systems to direct human attention when abnormal conditions occur. These mechanisms support timely and safe responses by informing operators and occupants about potential hazards. At the same time, research in human factors has shown that repeated or excessive alerts can weaken vigilance, slow reactions, and reduce confidence in warning systems. This behavioral pattern is commonly described as alarm fatigue. This paper examines how that vulnerability can be exploited intentionally. We refer to this adversarial strategy as alarm poisoning: the deliberate injection of false or misleading alerts in order to increase alarm pressure, erode trust in the monitoring infrastructure, and degrade organizational responsiveness over time. To study this process, we develop a stochastic Cybersecurity Dynamics model representing the interaction among attackers, defenders, alarm infrastructure, and a population of employees. Employee behavior is modeled through evolving trust and fatigue levels, while the overall system is formulated as a continuous-time Markov chain and simulated using the Gillespie Stochastic Simulation Algorithm. A Monte Carlo campaign is used to analyze the resulting socio-technical dynamics under alternative attacker strategies. The study evaluates time-dependent trust, fatigue, and alarm-pressure trajectories, the distribution of times to behavioral collapse, and defender timing through Trust-Resilience-Agility-Mitigation (TRAM) metrics. The revised analysis also includes replication-sufficiency diagnostics, one-at-a-time sensitivity analysis, and threshold-robustness checks for the collapse criterion. The results show that false alarms with high perceived severity drive alarm pressure upward and degrade trust faster than nuisance-dominated campaigns, even when the total fake-alarm intensity is held constant across strategies.
Collapse timing remains highly variable across stochastic realizations, and a non-negligible fraction of runs do not reach the collapse threshold within the simulation horizon. Sensitivity analysis indicates that the main qualitative ranking of attacker strategies is robust across most tested perturbations, with fatigue recovery and defender escalation emerging as particularly influential mechanisms. Overall, the findings support the view that alarm poisoning is a credible socio-technical attack vector and highlight the importance of rapid mitigation, robust alarm management, and human-centered defensive design in cyber-physical security systems.
(This article belongs to the Special Issue Generative AI for Data Privacy and Anomaly Detection)
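
The Gillespie Stochastic Simulation Algorithm used above can be sketched on a toy two-reaction chain; the "trust loss" and "recovery" reactions and their rates are illustrative stand-ins for the paper's full attacker-defender-employee model:

```python
import random

def gillespie(n_total, horizon, alarm_rate, recovery_rate, rng):
    """Gillespie SSA for a two-reaction CTMC over a population of employees.

    n counts trusting employees: alarm pressure converts a trusting employee
    to fatigued at rate alarm_rate * n, and fatigued employees recover trust
    at rate recovery_rate * (n_total - n).
    """
    t, n = 0.0, n_total
    history = [(t, n)]
    while t < horizon:
        a_lose = alarm_rate * n                   # propensity: trust loss
        a_gain = recovery_rate * (n_total - n)    # propensity: recovery
        a_sum = a_lose + a_gain
        if a_sum == 0.0:
            break
        t += rng.expovariate(a_sum)               # exponential waiting time to next event
        if rng.random() < a_lose / a_sum:         # choose which reaction fires
            n -= 1
        else:
            n += 1
        history.append((t, n))
    return history

rng = random.Random(9)
hist = gillespie(n_total=50, horizon=20.0, alarm_rate=0.8, recovery_rate=0.2, rng=rng)
final_n = hist[-1][1]  # drifts toward n_total * recovery / (alarm + recovery) = 10
```

Repeating such runs with different seeds gives the Monte Carlo campaign over trajectories that the abstract describes; collapse analysis amounts to recording when n first crosses a threshold.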

37 pages, 11499 KB  
Article
Automated Mid-Surface Mesh Generation Method for Automotive Plastic Parts Based on Deep Learning
by Hongbin Tang, Zehui Huang, Jingchun Wang, Jianjiao Deng, Shibin Wang, Zhiguo Zhang and Zhenjiang Wu
Vehicles 2026, 8(5), 96; https://doi.org/10.3390/vehicles8050096 - 1 May 2026
Abstract
Automotive plastic parts present multiple challenges for Computer-Aided Engineering (CAE) simulation modeling, including complex thin-walled geometries, difficulties in meshing fine features (e.g., clips and snap-fits), and time-consuming manual processing with inconsistent quality. To address these issues, this paper proposes an automated method for generating mid-surface meshes. The proposed approach integrates AI-based feature recognition, point cloud registration, and geometric fitting. First, a specialized point cloud dataset consisting of 132,000 samples of plastic part features was constructed. Using a PointNet++ model, precise semantic segmentation of typical features, such as clips and backing plates, was achieved. Subsequently, a library of typical features was established, and an FPFH-ICP point cloud registration strategy was implemented. Based on the matching rate, an adaptive selection between two processing paths (direct standard mesh replacement and segmentation-fitting generation) was performed. For features with low matching rates, a suite of segmentation-fitting algorithms was proposed. These algorithms incorporate incomplete cylinder parameter extraction, Monte Carlo boundary identification, and internal point cloud reordering, thereby facilitating high-quality mid-surface mesh generation for complex topological structures. Finally, experimental validation was conducted on typical automotive interior plastic parts as well as on new cross-platform vehicle models. The results demonstrate that the proposed method reduces mesh modeling time by 67% while preserving the accuracy of geometric feature restoration. The mesh quality compliance rate increases from 52.27% to 90.9% with the proposed method, reaching a level comparable to that of professional manual meshing. In cross-platform validation, the proposed method maintained high accuracy.
Consequently, this approach significantly enhances the intelligence and engineering reliability of CAE pre-processing, providing effective technical support for the automated simulation modeling of complex thin-walled components. Full article
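The refinement step of an FPFH-ICP registration strategy can be sketched in pure NumPy: alternate nearest-neighbour matching with an SVD-based rigid fit (Kabsch). This is a minimal point-to-point ICP toy, not the paper's implementation; the FPFH coarse alignment is omitted, and the cloud, rotation, and helper names are illustrative.

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto B (Kabsch/SVD)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - cA).T @ (B - cB))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # correct an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cB - R @ cA

def icp(src, dst, iters=30):
    """Point-to-point ICP: alternate nearest-neighbour matching and rigid fitting."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)  # brute-force NN
        R, t = best_fit_transform(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
    return best_fit_transform(src, cur)   # composite transform src -> dst

# Toy usage: recover a known small rotation + translation of a random cloud.
rng = np.random.default_rng(0)
pts = rng.normal(size=(60, 3))
theta = 0.15
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
moved = pts @ Rz.T + np.array([0.10, -0.05, 0.08])
R, t = icp(pts, moved)
```

In a production pipeline the brute-force nearest-neighbour search would be replaced by a k-d tree, and the FPFH stage would supply the coarse initial pose that keeps ICP out of poor local minima.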
20 pages, 9567 KB  
Article
Enhancing Battery Consistency Through Physics-Machine Learning Integration: A Calendering Process-Oriented Optimization Strategy
by Wenhao Zhu, Yankun Liao, Gang Wu and Fei Lei
Energies 2026, 19(9), 2186; https://doi.org/10.3390/en19092186 - 30 Apr 2026
Viewed by 192
Abstract
Manufacturing tolerances inevitably induce cell-to-cell inconsistencies. When these inconsistent cells are connected in series and parallel to form battery packs, the safety and reliability of the battery system are degraded. This study presents a novel optimization framework that integrates a multi-level physical model with machine learning to improve battery consistency from the manufacturing perspective. The multi-level physical modeling approach is applied to establish the link between parameter deviations of the calendering process and battery inconsistency. Based on the multi-level physical model, the Monte Carlo method is used to describe parameter deviations and generate datasets of electrochemical properties. The coefficients of variation in battery capacity and resistance are calculated from these datasets as the consistency evaluation index. The proposed approach applies machine learning to reduce the computational cost incurred by the large number of multi-level physical simulations that Monte Carlo sampling requires. Combining the multi-level physical model with the neural network model, the multi-objective particle swarm optimization algorithm is adopted to provide the optimal calendering process parameter deviations by trading off battery consistency against manufacturing cost. Results indicate that battery consistency is improved by jointly controlling the precision of the calendering process and the manufacturing cost. This approach can effectively provide feedback and guidance for the inverse design of the manufacturing process. Full article
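The Monte Carlo step of sampling process deviations and scoring consistency by coefficients of variation can be sketched as follows. The `surrogate` mapping and the nominal calendering values (30% porosity, 60 µm coating thickness) are illustrative stand-ins, not the paper's multi-level physical model:

```python
import numpy as np

# Illustrative surrogate for the multi-level physical model: maps calendering
# outcomes (electrode porosity, coating thickness) to cell capacity/resistance.
def surrogate(porosity, thickness_um):
    capacity_ah = 5.0 * (thickness_um / 60.0) * (1.0 - porosity)
    resistance_mohm = 20.0 + 45.0 * porosity * (60.0 / thickness_um)
    return capacity_ah, resistance_mohm

def consistency_cov(porosity_std, thickness_std, n_cells=20_000, seed=0):
    """Monte Carlo sample of process deviations -> coefficients of variation."""
    rng = np.random.default_rng(seed)
    porosity = rng.normal(0.30, porosity_std, n_cells)    # nominal 30% porosity
    thickness = rng.normal(60.0, thickness_std, n_cells)  # nominal 60 um coating
    cap, res = surrogate(porosity, thickness)
    return cap.std() / cap.mean(), res.std() / res.mean()

# Tightening process tolerances should shrink both coefficients of variation.
loose = consistency_cov(porosity_std=0.010, thickness_std=1.0)
tight = consistency_cov(porosity_std=0.005, thickness_std=0.5)
```

In the paper's framework, the expensive physical model behind `surrogate` is replaced by a trained neural network so that many such Monte Carlo evaluations become cheap enough for multi-objective particle swarm optimization.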
18 pages, 3351 KB  
Article
Monte Carlo Simulations of Thermal Behavior in Two-Block Spin-Crossover Structures
by Jorge Linares, Catherine Cazelles, Pierre Richard Dahoo and Kamel Boukheddaden
Symmetry 2026, 18(5), 757; https://doi.org/10.3390/sym18050757 - 28 Apr 2026
Viewed by 351
Abstract
Molecular spin-crossover (SCO) compounds constitute prototypical systems exhibiting first-order phase transitions. These transitions involve an abrupt switch between two well-defined states with distinctly different magnetic, optical, and vibrational properties. One state is diamagnetic (low-spin), while the other is paramagnetic (high-spin). Upon heating, the transition occurs at a characteristic temperature, T_up. Upon cooling, it takes place at a lower temperature, T_down < T_up, thereby giving rise to thermal hysteresis. Accordingly, each SCO compound is defined by a distinct pair of transition temperatures, T_up and T_down. The investigation of these molecular solids is of great importance, both for elucidating first-order phase transitions, including the potential emergence of re-entrant phases, and for their broad range of prospective applications. The critical temperatures T_up and T_down are pivotal in defining their practical utility. We present a strategy to modify and tune the transition temperatures of SCO compounds to suit different applications. The approach combines a given SCO material with layers of a second SCO system, enabling precise control of the characteristic temperatures of the resulting heterostructure. We illustrate this method with three case studies that span the 100 K–400 K temperature range. All simulations were performed using Monte Carlo methods within the Metropolis algorithm framework. Full article
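A minimal Metropolis Monte Carlo sketch of the standard fictitious-spin (Ising-like) SCO model conveys the idea: spins s = +1 (high-spin) / -1 (low-spin), a temperature-dependent effective field (Delta - k_B T ln g)/2 arising from the high-spin degeneracy g, and an elastic-like coupling J. All parameter values below are illustrative, not those of the article:

```python
import numpy as np

def hs_fraction(T, L=12, J=80.0, Delta=900.0, ln_g=5.0, sweeps=300, seed=1):
    """Metropolis simulation of the fictitious-spin SCO Hamiltonian
    H = -J * sum_<ij> s_i s_j + h(T) * sum_i s_i,  h(T) = (Delta - T*ln_g)/2,
    with k_B = 1. Returns the high-spin fraction n_HS = (1 + <s>) / 2."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))       # random initial spin lattice
    h = 0.5 * (Delta - T * ln_g)               # > 0 favours LS; sign flips at T = Delta/ln_g
    for _ in range(sweeps * L * L):
        i, j = rng.integers(L, size=2)         # pick a random site
        nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
              + s[i, (j + 1) % L] + s[i, (j - 1) % L])   # periodic neighbours
        dE = 2.0 * s[i, j] * (J * nb - h)      # energy cost of flipping s[i, j]
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]                 # Metropolis acceptance
    return 0.5 * (1.0 + s.mean())

# Below the crossover temperature Delta/ln_g = 180 the lattice is mostly LS;
# above it, mostly HS.
n_low, n_high = hs_fraction(T=100.0), hs_fraction(T=300.0)
```

Sweeping T upward and downward with such a model, and stacking blocks with different (Delta, ln_g, J), is the kind of two-block construction the article uses to shift T_up and T_down.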
(This article belongs to the Section Physics)