Search Results (21,894)

Search Parameters:
Keywords = trade

26 pages, 2247 KB  
Article
Sustainability-Oriented Planning of Capacitor Banks for Loss Reduction and Voltage Improvement in Radial Distribution Feeders
by Edwin Albuja-Calo and Jorge Muñoz-Pilco
Sustainability 2026, 18(8), 4025; https://doi.org/10.3390/su18084025 - 17 Apr 2026
Abstract
Radial distribution feeders are especially sensitive to reactive-power deficits, which increase technical losses, deteriorate voltage profiles, reduce energy efficiency, and indirectly raise the emissions associated with the energy required to supply those losses. In this context, this paper proposes a sustainability-oriented planning methodology for the location and sizing of capacitor banks in radial distribution feeders, aimed at jointly improving technical performance, economic viability, and sustainability-related energy benefits. The problem is formulated as a discrete multi-objective model and solved through a constructive Greedy heuristic combined with backward/forward sweep load-flow evaluation, considering commercially available capacitor sizes. The methodology is validated on the IEEE 34-bus feeder, a demanding benchmark that remains less frequently used than the IEEE 33- and 69-bus systems in recent capacitor-planning studies. Seven scenarios are analyzed, from the uncompensated base case to configurations with up to six capacitor banks. The results show that all compensated scenarios improve feeder performance, reducing active losses from 25.3327 kW to a minimum of 20.1468 kW, equivalent to a maximum reduction of 20.47%, and increasing the minimum nodal voltage from 0.95528 p.u. to 0.97038 p.u. From a purely financial perspective, the one-bank scenario yields the highest net present value (USD 16,358.86), whereas the two-bank scenario emerges as the most balanced solution within the evaluated set, with annual savings of USD 5432.29 and a net present value of USD 11,497.58. Overall, the results confirm that capacitor-bank planning should be addressed as a trade-off among electrical efficiency, voltage support, profitability, and sustainability-oriented benefits. The proposed framework provides a simple, reproducible, and interpretable planning tool for radial distribution feeders. Full article
(This article belongs to the Special Issue Smart Grid and Sustainable Energy Systems)
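The backward/forward sweep load-flow evaluation that the greedy heuristic relies on can be sketched for a simple chain feeder. Everything below is illustrative (the per-unit values, the two-branch feeder, and the function name `backward_forward_sweep` are assumptions), not the paper's IEEE 34-bus model:

```python
import numpy as np

def backward_forward_sweep(r, x, p_load, q_load, v_source=1.0, tol=1e-8, max_iter=50):
    """Load flow for a chain feeder: bus k is fed through branch k (r[k] + j*x[k]).

    Backward step: accumulate bus load currents into branch currents.
    Forward step: propagate voltage drops from the source.
    """
    n = len(p_load)
    z = np.asarray(r) + 1j * np.asarray(x)
    s_load = np.asarray(p_load) + 1j * np.asarray(q_load)
    v = np.full(n, complex(v_source))
    for _ in range(max_iter):
        i_bus = np.conj(s_load / v)                  # backward: bus injection currents
        i_branch = np.cumsum(i_bus[::-1])[::-1]      # branch k carries all downstream current
        v_new = np.empty(n, dtype=complex)
        v_up = complex(v_source)
        for k in range(n):                           # forward: subtract series voltage drops
            v_up = v_up - z[k] * i_branch[k]
            v_new[k] = v_up
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    return v
```

A shunt capacitor at bus k would simply subtract its reactive injection from `q_load[k]`, which is how a greedy placement loop can re-evaluate losses and voltages for each candidate size and location.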
42 pages, 3651 KB  
Review
Recent Progress of Structural Design, Fabrication Processes, and Applications of Flexible Acceleration Sensors
by Yuting Wang, Zhidi Chen, Peng Chen, Jie Mei, Jiayue Kuang, Chang Li, Zhijun Zhou and Xiaobo Long
Sensors 2026, 26(8), 2499; https://doi.org/10.3390/s26082499 - 17 Apr 2026
Abstract
Flexible acceleration sensors demonstrate revolutionary potential in healthcare, structural vibration monitoring, and consumer electronics owing to their unique conformal adhesion capability and mechanical adaptability. However, current academic research presents two distinct paradigms for realizing flexibility: one is the hybridly flexible sensor, which incorporates traditional micro-electro-mechanical System (MEMS) acceleration sensor chips with flexible packaging/substrates; the other is the intrinsically flexible sensor, whose sensing unit and substrate are entirely composed of flexible materials enabled by microstructural design. This review first analyzes the fundamental differences and design challenges between these two flexible architectures. It then systematically elucidates five core sensing mechanisms—capacitive, piezoresistive, triboelectric, piezoelectric, and electromagnetic—comparing their working principles, material systems, structural designs, and performance metrics. Among these, piezoelectric and triboelectric types exhibit distinctive advantages in self-powering capability, whereas resistive and capacitive approaches offer greater ease of integration. Furthermore, the applications of intrinsically flexible acceleration sensors in structural health monitoring, wearable devices, automotive safety, and other fields are discussed, with particular emphasis on their unique strengths in real-time vibration monitoring. Finally, the review summarizes existing challenges, such as the trade-off between sensitivity and flexibility, and provides theoretical insights to guide future innovations in intrinsically flexible acceleration sensor technology. Full article
(This article belongs to the Special Issue 2D Materials for Advanced Sensing Technology)
29 pages, 13022 KB  
Article
A 2-GS/s 35.9-fJ/conv.-step Voltage–Time Hybrid Pipelined ADC with Digital Background Calibration in 28-nm CMOS
by Yuan Chang, Chenghao Zhang, Yihang Yang, Chaoyang Zhang, Maliang Liu, Dongdong Chen and Yintang Yang
Micromachines 2026, 17(4), 495; https://doi.org/10.3390/mi17040495 - 17 Apr 2026
Abstract
This paper presents a 2-GS/s voltage–time hybrid pipelined analog-to-digital converter (ADC) with a 14-bit digital output, implemented in a 28-nm CMOS process. To alleviate the gain–bandwidth–power trade-off in deeply scaled technologies, the proposed architecture employs a SHA-less front-end and a low-gain inverter-based push–pull RA for energy-efficient coarse quantization. The residue is then transferred to the time domain via a highly linear constant-current voltage-to-time converter (CC-VTC) and digitized by a four-channel time-interleaved gated-ring-oscillator (GRO) TDC. To recover dynamic linearity degraded by low-gain amplification and interleaving mismatches, a multiplier-less digital background calibration engine is implemented. Leveraging mean absolute value (MAV) statistics and dither-injected least-mean-squares (LMS) algorithms, it effectively compensates for inter-channel and interstage errors with minimal hardware overhead. The prototype occupies an active area of 0.16 mm2. At 2 GS/s, the ADC achieves a Nyquist SNDR of 63.42 dB and an SFDR of 73.71 dB, corresponding to an ENOB of 10.24 bits. Consuming 86.9 mW from a 1-V supply, it achieves a Walden FoM of 35.9 fJ/conv.-step. Measurement results from multiple chips under a wide range of operating conditions verify the robustness of the proposed ADC. Full article
(This article belongs to the Section D1: Semiconductor Devices)
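The dither-correlating LMS idea behind such background calibration can be illustrated with a toy loop that estimates an unknown stage gain from the output alone. The dither amplitude, step size, and averaging below are assumptions for the sketch, not the paper's multiplier-less MAV/LMS engine:

```python
import numpy as np

def lms_gain_cal(true_gain, n_samples=20000, mu=0.1, a_dither=0.25, seed=0):
    """Background-estimate an unknown gain by correlating the output with
    a known injected pseudo-random dither.

    The input signal is unknown to the calibrator but uncorrelated with the
    dither, so it averages out of the LMS update; the dither term steers
    g_hat toward the true gain while normal conversion continues.
    """
    rng = np.random.default_rng(seed)
    g_hat = 1.0
    est = []
    for i in range(n_samples):
        d = a_dither * rng.choice([-1.0, 1.0])   # known pseudo-random dither
        s = rng.uniform(-0.25, 0.25)             # unknown signal
        y = true_gain * (s + d)                  # observed digital output
        err = y - g_hat * d                      # residual after removing estimated dither
        g_hat += mu * d * err                    # LMS update driven by the dither
        if i >= n_samples // 2:                  # average out gradient noise
            est.append(g_hat)
    return float(np.mean(est))
```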
17 pages, 6497 KB  
Article
Optimization Trade-Offs in Memristor-Based Crossbar Arrays for MAC Acceleration
by Hassen Aziza, Hanzhi Xun, Moritz Fieback, Mottaqiallah Taouil and Said Hamdioui
Electronics 2026, 15(8), 1710; https://doi.org/10.3390/electronics15081710 - 17 Apr 2026
Abstract
Vector–matrix multiplication (VMM), implemented through multiply–accumulate (MAC) operations, represents the dominant computational primitive in many artificial intelligence (AI) workloads. When executed on conventional von Neumann architectures, VMM operations suffer from substantial energy consumption and latency due to the separation between memory and processing units. To overcome these limitations, crossbar arrays built from Resistive Random Access Memory (RRAM) cells have been proposed for accelerating VMM computations. In this work, we investigate the key optimization trade-offs associated with implementing RRAM-based neural networks for classification applications. A simple two-layer neural network is first defined and trained in software to generate the weight matrices and bias parameters. Next, three hardware implementation scenarios are evaluated depending on whether negative floating-point numbers are used: Positive Weights Only (PWO), Positive and Negative Weights Only (PNWO), and Positive and Negative Weights with Biases (PNWB). The different implementations are analyzed at the hardware level by examining classification accuracy, energy efficiency, latency, and area overhead. The study further incorporates key RRAM limitations, including restricted conductance range and device variability. Hardware results show that the PWO scenario offers the lowest energy consumption (189 fJ/MAC) and area overhead but results in the lowest accuracy. PNWO and PNWB significantly improve accuracy (+177% and +180%) but increase energy consumption (+63% and +87%) and area (×2 and ×2.1). Under variability effects, PWO achieves better accuracy (94.65%), followed by PNWO (93.11%) and PNWB (92.11%). Full article
(This article belongs to the Special Issue Prospective of Semiconductor Memory Devices)
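The differential weight-to-conductance mapping behind the signed-weight scenarios can be sketched in a few lines. The conductance range, the scaling rule, and the function names are assumptions for illustration, not the paper's actual device parameters:

```python
import numpy as np

def to_conductance(w, g_min=1e-6, g_max=1e-4):
    """Map a signed weight matrix onto two positive conductance arrays
    (differential pair: w is proportional to g_pos - g_neg), clipped to
    the device's conductance range [g_min, g_max]."""
    scale = (g_max - g_min) / np.max(np.abs(w))
    g_pos = np.where(w > 0, g_min + w * scale, g_min)
    g_neg = np.where(w < 0, g_min - w * scale, g_min)
    return g_pos, g_neg, scale

def crossbar_mac(v_in, g_pos, g_neg, scale):
    """Analog MAC: each column sums currents I = G^T v (Kirchhoff's current
    law); differential readout of the paired columns recovers W^T v."""
    return (g_pos.T @ v_in - g_neg.T @ v_in) / scale
```

One way to reproduce the spirit of the variability study is to perturb `g_pos` and `g_neg` with device-level noise before the readout and measure the resulting accuracy loss.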

26 pages, 2277 KB  
Review
EV-Centric Technical Virtual Power Plants in Active Distribution Networks: An Integrative Review of Physical Constraints, Bidding, and Control
by Youzhuo Zheng, Hengrong Zhang, Anjiang Liu, Yue Li, Shuqing Hao, Yu Miao, Chong Han and Siyang Liao
Energies 2026, 19(8), 1945; https://doi.org/10.3390/en19081945 - 17 Apr 2026
Abstract
The accelerated low-carbon transition of power systems and the widespread integration of Electric Vehicles (EVs) present both severe operational challenges and substantial flexible regulation potential for Active Distribution Networks (ADNs). This paper provides an integrative review of the coordinated control and multi-market bidding mechanisms for EV-centric Technical Virtual Power Plants (TVPPs). Moving beyond descriptive surveys, this review systematically synthesizes the fragmented literature across three critical dimensions: (1) the physical-economic bidirectional mapping, which considers nonlinear power flow constraints and node voltage limits within the TVPP framework; (2) multi-market coupling mechanisms, evolving from unilateral energy bidding to coordinated participation in carbon trading and ancillary services; and (3) real-time control strategies, critically evaluating the trade-offs between optimization techniques (e.g., Model Predictive Control) and cutting-edge artificial intelligence approaches (e.g., Deep Reinforcement Learning) in mitigating battery degradation. Furthermore, a transparent review methodology is adopted to ensure literature rigor. By explicitly outlining the boundaries between TVPPs, Commercial VPPs (CVPPs), and EV aggregators, this paper identifies core unresolved trade-offs among aggregation fidelity, market complexity, and communication latency, providing evidence-backed pathways for future engineering demonstrations and V2G applications. Full article
(This article belongs to the Collection "Electric Vehicles" Section: Review Papers)

34 pages, 1954 KB  
Article
Parameter-Coupled Offset Min-Sum Decoding with Edge-Type Differentiation for MET-LDPC Codes
by Ying You, Guodong Su and Weiwei Lin
Mathematics 2026, 14(8), 1352; https://doi.org/10.3390/math14081352 - 17 Apr 2026
Abstract
To improve the decoding performance of multi-edge type low-density parity-check (MET-LDPC) codes, this paper proposes an edge-type differentiated parameter coupling offset min-sum (EDPC-OMS) decoding algorithm. The contributions are threefold. First, we replace the traditional uniform compensation with edge-type differentiated compensation, resolving the mismatch between the decoding model and code structure. Second, we introduce a parameter coupling mechanism that enables joint optimization of multiple edge types while maintaining differentiated configurations. Third, a practically feasible design combining precomputation and look-up tables enables dynamic parameter adjustment with moderate additional overhead, achieving a favorable performance–complexity trade-off. Simulation results over additive white Gaussian noise (AWGN) channels and Rayleigh fading channels demonstrate that the proposed algorithm adaptively selects offset factors according to channel conditions and edge types without introducing significant computational complexity, effectively lowering the bit error rate and enhancing decoding capability. Full article
(This article belongs to the Section E: Applied Mathematics)
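The offset min-sum check-node update at the core of this family of decoders is compact enough to sketch; edge-type differentiation amounts to calling it with a different `offset` per edge type rather than one uniform value. The function below is an illustrative scalar-offset version, not the paper's coupled-parameter scheme:

```python
import numpy as np

def oms_check_update(msgs, offset):
    """Offset min-sum check-node update.

    For each outgoing edge i: magnitude = min over the *other* incoming
    |messages| minus the offset (floored at 0); sign = product of the
    other incoming signs.
    """
    msgs = np.asarray(msgs, dtype=float)
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        others = np.delete(msgs, i)
        mag = max(np.min(np.abs(others)) - offset, 0.0)
        out[i] = np.prod(np.sign(others)) * mag
    return out
```

The offset corrects min-sum's systematic overestimation of the true sum-product magnitude; tuning it per edge type is what lets the decoder match the structured degree profile of an MET-LDPC code.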
19 pages, 1775 KB  
Article
A Reproducible Monte Carlo Framework for Evaluating Cost–Latency Trade-Offs in Cloud Continuum
by Enrico Barbierato, Emanuele Goldoni and Daniele Tessera
Electronics 2026, 15(8), 1708; https://doi.org/10.3390/electronics15081708 - 17 Apr 2026
Abstract
Parallel, data-intensive applications are now commonly executed on infrastructures that combine Cloud, Fog, and Edge resources. In these environments, execution takes place on devices with markedly different computational power and over networks whose latency and bandwidth can fluctuate over time. Under these conditions, overall performance is influenced not only by processing speed but also by communication delays arising from data dependencies between tasks. This leads to a basic issue: whether scheduling strategies developed under computation-focused assumptions continue to perform well once communication costs are made explicit. This work examines the behavior of simple and widely adopted scheduling heuristics when network effects are modeled directly within the system. No new scheduling algorithms are introduced. Instead, the analysis focuses on how execution time and monetary cost change for deterministic parallel workloads deployed on hierarchical Cloud–Edge infrastructures exposed to stochastic latency and bandwidth variations. For this purpose, we introduce CLOWNSim, a lightweight discrete-event simulation framework that supports large-scale Monte Carlo experiments on fixed task graphs, allowing infrastructural and scheduling effects to be examined independently of workload variability. The experimental analysis covers fully centralized Cloud deployments, intermediate Fog configurations, and resource-constrained IoT scenarios. Scheduling policies based on computational speed, execution cost, or random device selection are evaluated across these settings. In Cloud and Fog environments, communication latency and data transfers represent a substantial portion of the overall makespan, weakening the impact of scheduling decisions driven primarily by computation. In IoT scenarios, limited processing capacity becomes the main limiting factor, while communication overhead remains present but less influential in comparison. 
The results indicate that performance trends across the Cloud–Edge continuum cannot be attributed to scheduler choice alone. Execution behavior arises from the combined effects of workload structure, placement decisions, and network properties, with different elements becoming dominant depending on the deployment context. The proposed simulation framework offers a practical way to study these interactions and to assess cost–performance trade-offs under communication conditions that reflect realistic operating environments. Full article
(This article belongs to the Special Issue Advances in Mobile Networked Systems)
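CLOWNSim itself is not reproduced here, but the qualitative point, that communication can dominate the makespan on fast Cloud nodes while compute dominates on constrained devices, can be shown with a toy Monte Carlo over a fixed task chain. The device parameters, data sizes, and latency distribution below are invented for illustration:

```python
import random

def makespan(task_ops, speed, latency_mean, data_bytes, bandwidth, rng):
    """Task-chain makespan on one device class: compute time plus a
    stochastically sampled network delay for each inter-task transfer."""
    total = 0.0
    for i, ops in enumerate(task_ops):
        total += ops / speed
        if i < len(task_ops) - 1:  # ship the result to the next task
            total += rng.expovariate(1.0 / latency_mean) + data_bytes / bandwidth
    return total

def monte_carlo(n_runs=2000, seed=1):
    rng = random.Random(seed)
    tasks = [1e9, 2e9, 1e9]                           # operations per task
    devices = {
        "cloud": dict(speed=1e10, lat=0.05, bw=1e8),  # fast CPU, slow WAN
        "edge":  dict(speed=2e9,  lat=0.005, bw=1e9), # slow CPU, fast LAN
    }
    return {name: sum(makespan(tasks, d["speed"], d["lat"], 5e7, d["bw"], rng)
                      for _ in range(n_runs)) / n_runs
            for name, d in devices.items()}
```

In this toy setup the Cloud still wins on average, but most of its makespan is communication rather than compute, so a scheduler ranking devices purely by processing speed overstates its advantage — the kind of effect the paper quantifies at scale.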

22 pages, 7320 KB  
Article
Impacts of Vertical Variation in Canopy Structures on Shelterbelt Windbreak Effectiveness: A Large-Eddy Simulation Study
by Yanqun Liu, Jingxue Wang, Wenchao Chen, Mao Xu, Yu Zhang, Luca Patruno and Weilin Li
Forests 2026, 17(4), 498; https://doi.org/10.3390/f17040498 - 17 Apr 2026
Abstract
Shelterbelts are increasingly used to mitigate strong wind damage, but the complex canopy structures create challenges for numerical studies of windbreak effectiveness, such as the trade-off between computational cost and accuracy of results. To address these challenges and accurately investigate the downstream wind fields, most conventional studies represent shelterbelts as rectangular porous media with a uniformly distributed aerodynamic resistance coefficient. However, due to the vertical variation in canopy diameter and the irregular distribution of leaf density, the aerodynamic resistance of natural shelterbelts becomes nonuniform accordingly. To quantify the discrepancies arising from this simplification, this study first proposes a non-destructive approach to calculate canopy porosity profiles, which are further used to derive aerodynamic resistance at different heights. Then, by comparing the results obtained from the conventional and proposed approaches in Large-Eddy Simulations, the discrepancies caused by ignoring the vertical variation in canopy structures are analyzed. Finally, these discrepancies are further investigated for double-row shelterbelts. The results show that ignoring the vertical variation in canopy diameter leads to significant differences in windbreak effectiveness, especially for the downstream velocity and pressure fields at the top and middle heights of the canopy. The proposed approach provides a computationally efficient and more accurate representation of near-surface wind fields downstream of shelterbelts, thereby contributing to the accurate prediction of local wind fields for meteorological services. Full article
(This article belongs to the Section Forest Ecology and Management)

23 pages, 4209 KB  
Article
Analysis of Spatiotemporal Variations and Driving Factors of Carbon Storage Based on the PLUS-InVEST-OPGD Model: A Case Study of Tai’an City
by Haoyu Tang, Bohan Zhao, Miao Wang, Fuming Cui, Kaixuan Wang and Yue Pan
Sustainability 2026, 18(8), 4017; https://doi.org/10.3390/su18084017 - 17 Apr 2026
Abstract
Urban sprawl constantly reconfigures the land use pattern, and such transformations may significantly modify regional carbon stocks. Utilizing Tai’an City as the study site, this research established a comprehensive integrated Patch-generating Land Use Simulation (PLUS), Integrated Valuation of Ecosystem Services and Trade-offs (InVEST), and Optimal Parameters-based Geographical Detector (OPGD) system to reconstruct carbon storage shifts from 2000 to 2020, project its reaction to four diverse development trajectories in 2030, and investigate the drivers underlying spatial disparities. The results indicate a persistent decline in carbon storage throughout the past two decades, with peak concentrations primarily gathered in mountain regions dominated by forest and grassland, whereas lesser amounts were grouped in urban and suburban areas defined by built-up land. Compared to 2020, the projected carbon stock in 2030 drops by 1,803,966 t under the natural growth trajectory and by 2,417,778 t under the high-quality economic growth pathway, whereas it rises by 47,326 t under cultivated land conservation and by 7679 t under ecological conservation. Elevation represents the most crucial driver among the selected variables in clarifying the spatial fluctuation of carbon storage (q = 0.3985), followed by slope (0.3323), mean annual temperature (0.2382), and the Normalized Difference Vegetation Index (NDVI) (0.1219). The synergy between elevation and NDVI produces the highest integrated explanatory power (q = 0.4906). These outcomes imply that constraining construction land growth while protecting agricultural and ecological land is vital for preserving and enhancing regional carbon sink potential. Full article
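The geographical-detector q-statistic used by OPGD has a simple closed form: the share of total variance explained by a stratification, q = 1 − SSW/SST. A minimal sketch (the stratum labels and data in the test are illustrative):

```python
import numpy as np

def q_statistic(y, strata):
    """Geographical-detector q = 1 - SSW/SST, where SSW sums the population
    variance within each stratum weighted by the stratum size, and SST is
    the total variance times the sample size."""
    y, strata = np.asarray(y, dtype=float), np.asarray(strata)
    sst = len(y) * y.var()
    ssw = sum(len(y[strata == s]) * y[strata == s].var()
              for s in np.unique(strata))
    return 1.0 - ssw / sst
```

Read this way, the reported q = 0.3985 for elevation means elevation classes alone account for roughly 40% of the spatial variance in carbon storage.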
11 pages, 1112 KB  
Article
Predicting Stock Market Risk Using Machine Learning Classification Models
by Seol-Hyun Noh
Risks 2026, 14(4), 92; https://doi.org/10.3390/risks14040092 - 17 Apr 2026
Abstract
This study aims to predict stock market risk and improve preparedness for potential economic crises by identifying sharp declines in stock returns using classification-based machine learning models. Using ten years of KOSPI 200 index data (2015 to 2024), a daily return series was constructed. A day was labeled a risk event (1) if its return fell below the 5th percentile of the returns observed over the preceding 100 trading days, indicating a sharp decline. Nine classification models—Logistic Regression, k-nearest Neighbor, Decision Tree, Random Forest, Linear Discriminant Analysis, Naive Bayes, Quadratic Discriminant Analysis, AdaBoost, and Gradient Boosting—were trained and validated. Among these, Logistic Regression demonstrated the strongest overall performance across multiple evaluation metrics, including accuracy, non-risk F1 score, risk F1 score, and AUC. Full article
(This article belongs to the Special Issue AI for Financial Risk Perception)
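The rolling-percentile labeling rule described in the abstract is straightforward to implement; the sketch below uses the stated 100-day window and 5th percentile, with synthetic returns standing in for the KOSPI 200 data:

```python
import numpy as np

def label_risk_events(returns, window=100, pct=5):
    """Label day t as a risk event (1) if its return falls below the
    pct-th percentile of the preceding `window` trading days."""
    returns = np.asarray(returns, dtype=float)
    labels = np.zeros(len(returns), dtype=int)
    for t in range(window, len(returns)):
        threshold = np.percentile(returns[t - window:t], pct)
        labels[t] = int(returns[t] < threshold)
    return labels
```

These labels then become the target for the nine classifiers; with roughly 5% positives the classes are heavily imbalanced, which is why the paper reports per-class F1 scores alongside accuracy.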
22 pages, 6370 KB  
Article
Interpretable Data-Driven Prediction, Optimization, and Decision-Making for Coking Coal Flotation
by Ying Wang and Deqian Cui
Processes 2026, 14(8), 1289; https://doi.org/10.3390/pr14081289 - 17 Apr 2026
Abstract
Coking coal flotation is a typical nonlinear, multi-variable, and multi-objective process in which concentrate quality and combustible matter recovery must be balanced under fluctuating feed and operating conditions. To improve both predictive reliability and decision support, this study proposes an integrated data-driven framework that combines particle swarm optimization-back propagation (PSO-BP) prediction, SHapley Additive exPlanations (SHAP) based interpretation, Non-dominated Sorting Genetic Algorithm II (NSGA-II) optimization, and entropy-weighted Technique for Order Preference by Similarity to Ideal Solution (Entropy-TOPSIS) decision-making. After three-sigma outlier screening, 2000 valid distributed control system (DCS) samples were retained for model development and temporal holdout evaluation, and an additional 200 later-period industrial samples were used for independent validation. The data were partitioned chronologically, with months 1–4, month 5, and month 6 used for training, validation, and temporal holdout testing, respectively, while the months 7–8 dataset was reserved for later-period validation. The results show that PSO-BP consistently outperformed conventional BP under both temporal holdout and later-period validation. SHAP analysis identified raw coal ash and collector dosage as the dominant factors for product-quality prediction, while collector dosage and frother dosage contributed most strongly to tailing heat of combustion. NSGA-II further revealed the trade-off among clean coal ash, clean coal sulfur, and tailing heat of combustion, and Entropy-TOPSIS converted the Pareto-optimal candidate set into a practically balanced operating recommendation. Sensitivity and robustness analyses indicated acceptable stability of both the optimization process and the final decision result. 
Overall, the proposed framework provides an interpretable prediction–optimization–decision workflow for coking coal flotation and offers a practical basis for future DCS-assisted intelligent regulation. Full article
(This article belongs to the Special Issue Mineral Processing Equipments and Cross-Disciplinary Approaches)

28 pages, 2566 KB  
Article
Optimal Hydraulic Design of Flexible-Lined Channels Using the VegyRap QGIS Tool with Cost and Reliability Analysis
by Ahmed M. Tawfik and Mohamed H. Elgamal
Water 2026, 18(8), 957; https://doi.org/10.3390/w18080957 - 17 Apr 2026
Abstract
Previous approaches to flexible-lined channel design typically isolate least-cost cross-section optimization from parameter uncertainty, or restrict reliability analysis to specific cases, limited failure modes, and proprietary codes. This paper presents VegyRap, an open-source QGIS-based plugin with an intuitive graphical user interface that unites these traditionally disjointed, sequential tasks into a single computational framework. The tool guides designers sequentially through: (i) terrain-driven longitudinal profile optimization using dynamic programming; (ii) least-cost cross-sectional optimization for riprap and vegetated linings; and (iii) multi-mode probabilistic reliability analysis coupled with dual risk–cost Pareto optimization. To seamlessly handle the stochastic behavior of uncertain variables, the framework features built-in statistical distributions and allows users to flexibly evaluate up to four distinct failure modes: overtopping, erosion, sedimentation, and near-critical flow oscillation. The framework’s capabilities are demonstrated through nine diverse design examples, incorporating benchmark validations against published studies and a comprehensive real-world case study in Wadi Al-Arja, Saudi Arabia. Results highlight that for vegetated channels, a hierarchical two-phase design logic is essential to satisfy both establishment-phase stability (Class E) and long-term conveyance (Class B). While benchmark comparisons show VegyRap achieves consistent cost reductions of 10–15% over traditional methods, the case study demonstrates that deterministic least-cost solutions can carry non-negligible failure probabilities. By utilizing marginal efficiency analysis to identify cost-effective enhancements, the integrated Pareto-based dual optimization produces transparent trade-off surfaces, empowering practitioners to transition from a single least-cost solution to a defensible, risk-calibrated preferred alternative. Full article
(This article belongs to the Section Hydraulics and Hydrodynamics)
22 pages, 1032 KB  
Article
Sustainable Bridge Construction Decisions Using Fuzzy MCDM: A Comprehensive Comparison of AHP–VIKOR, BWM–VIKOR, and TOPSIS
by Alaa ElMarkaby and Ahmed Elyamany
Sustainability 2026, 18(8), 4013; https://doi.org/10.3390/su18084013 - 17 Apr 2026
Abstract
The selection of bridge construction systems significantly influences the sustainability of infrastructure projects, encompassing both economic and environmental dimensions. This study presents a comparative assessment of three hybrid fuzzy Multi-Criteria Decision-Making (MCDM) techniques, Fuzzy AHP–VIKOR, Fuzzy TOPSIS, and Fuzzy BWM–VIKOR, for choosing the optimum bridge construction system during the preliminary design phases. Each method was applied consistently, integrating project-specific criteria and construction alternatives. The comparison extended beyond the final rankings to assess computational efficiency, sensitivity to input variations, ease of implementation, and stability. Expert opinions were gathered using semi-structured interviews and questionnaires to reflect the practical circumstances of bridge engineering in the field. The results show distinct strengths and trade-offs among the techniques, offering valuable insights for researchers and industry professionals alike. This study contributes to the knowledge base by explaining how different fuzzy MCDM methods are used in real-world bridge construction projects. These outcomes improve the methodological rigor of decision science and support more robust decision-making frameworks in bridge engineering. Full article
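The TOPSIS step shared by these MCDM pipelines (and the Entropy-TOPSIS variant in the flotation paper above) can be sketched in crisp, non-fuzzy form, assuming benefit criteria and a strictly positive decision matrix:

```python
import numpy as np

def entropy_weights(X):
    """Shannon-entropy weights: criteria whose values differ more across
    alternatives carry more information and receive larger weights.
    X: rows = alternatives, columns = criteria, all entries positive."""
    P = X / X.sum(axis=0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
    d = 1.0 - E                      # degree of diversification per criterion
    return d / d.sum()

def topsis(X, w):
    """TOPSIS closeness coefficient: vector-normalize, weight, then compare
    distances to the ideal and anti-ideal alternatives (benefit criteria)."""
    V = (X / np.sqrt((X ** 2).sum(axis=0))) * w
    best, worst = V.max(axis=0), V.min(axis=0)
    d_best = np.sqrt(((V - best) ** 2).sum(axis=1))
    d_worst = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)
```

The fuzzy variants compared in the study replace the crisp matrix entries with triangular fuzzy numbers before defuzzification, but the ranking logic is the same.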
24 pages, 912 KB  
Article
Advanced Insurance Risk Modeling for Pseudo-New Customers Using Balanced Ensembles and Transformer Architectures
by Finn L. Solly, Raquel Soriano-Gonzalez, Angel A. Juan and Antoni Guerrero
Risks 2026, 14(4), 91; https://doi.org/10.3390/risks14040091 - 17 Apr 2026
Abstract
In insurance portfolios, classifying customers without a prior history at a given company is particularly challenging due to the absence of historical behavior, extreme class imbalance, heavy-tailed loss distributions, and strict operational constraints. Traditional machine learning approaches, including the baseline methodology proposed in previous studies, typically optimize global predictive accuracy and therefore fail to capture business-critical outcomes, especially the identification of high-risk clients. This study extends the existing approach by evaluating two complementary business-aware classification strategies: (i) a balanced bagging ensemble specifically designed to handle class imbalance and maximize expected profit under explicit customer-omission constraints, and (ii) a lightweight Transformer-based architecture capable of learning richer feature representations. Both approaches incorporate the asymmetric financial cost structure of insurance and operate under operational selection limits. The empirical analysis is conducted on a proprietary large-scale auto insurance dataset comprising 51,618 customers and is complemented by validation on nine synthetic datasets to assess robustness. Model performance is evaluated using statistical tests (ANOVA, Friedman, and pair-wise comparisons) together with business-oriented metrics. The results show that both proposed approaches consistently outperform the baseline methodology (p < 0.001) in terms of profit. The balanced ensemble provides the most favourable trade-off between predictive performance, robustness, interpretability, and computational efficiency, making it suitable for deployment in regulated insurance environments, while the Transformer achieves competitive results and exhibits stronger generalization under data perturbations. The proposed approach aligns machine learning with actuarial portfolio optimization by explicitly integrating profit-driven objectives and operational constraints, offering two practical and scalable solutions for risk-based decision-making in real-world insurance settings. Full article
(This article belongs to the Special Issue Artificial Intelligence Risk Management)
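The balanced-bagging idea with a profit-driven omission constraint can be sketched as below. Everything here is a stand-in: the paper's actual base learners, cost structure, and data are proprietary, so a toy centroid-distance scorer and invented premium/loss figures are used purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid_score(X_tr, y_tr, X_te):
    """Toy base learner: risk score from distance to class centroids
    (closer to the positive, high-risk centroid -> score nearer 1)."""
    c0 = X_tr[y_tr == 0].mean(axis=0)
    c1 = X_tr[y_tr == 1].mean(axis=0)
    d0 = np.linalg.norm(X_te - c0, axis=1)
    d1 = np.linalg.norm(X_te - c1, axis=1)
    return d0 / (d0 + d1 + 1e-12)

def balanced_bagging_scores(X, y, X_new, n_estimators=25):
    """Average the scores of base learners, each trained on a balanced
    bootstrap: all minority (high-risk) cases plus an equally sized
    random draw from the majority class."""
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    scores = np.zeros(len(X_new))
    for _ in range(n_estimators):
        idx = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=True)])
        scores += centroid_score(X[idx], y[idx], X_new)
    return scores / n_estimators

def select_customers(scores, premium, expected_loss, max_omit_frac=0.2):
    """Accept customers by expected profit (premium - risk-weighted loss),
    omitting at most max_omit_frac of the portfolio."""
    profit = premium - scores * expected_loss
    order = np.argsort(profit)                       # worst profit first
    n_omit = min(int((profit < 0).sum()), int(max_omit_frac * len(scores)))
    accept = np.ones(len(scores), dtype=bool)
    accept[order[:n_omit]] = False
    return accept
```

The omission cap mirrors the abstract's "customer-omission constraints": even if many customers have negative expected profit, at most a fixed fraction may be declined.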
34 pages, 1312 KB  
Article
Geometry-Aware Conformal Calibration of Entropic Soft-Min Operators for Machine Learning and Reinforcement Learning
by J. Ernesto Solanes and Aitana Francés-Falip
Electronics 2026, 15(8), 1704; https://doi.org/10.3390/electronics15081704 - 17 Apr 2026
Abstract
Entropic soft-min operators are widely used to obtain smooth approximations of minimum and argmin mechanisms in optimization, machine learning, and reinforcement learning. The quality of this approximation is controlled by an inverse temperature parameter that governs the trade-off between smoothness and fidelity, yet its selection is usually based on global heuristics or worst-case bounds that do not account for the geometry of the candidate cost vector. This study investigates the calibration of the inverse temperature parameter from a geometry-aware perspective, with explicit guarantees on the approximation error between the entropic soft-min and the exact minimum value. After establishing the structural properties of the relaxation error, including monotonicity with respect to the inverse temperature and its dependence on the geometry of the near-optimal set, we introduce a conformal calibration rule that selects the smallest inverse temperature, ensuring that a prescribed upper quantile of the approximation error remains below a target tolerance with distribution-free finite-sample validity. The resulting selector adapts to the geometry distribution represented in the calibration population and provides a principled alternative to mean-based and worst-case tuning rules. Numerical experiments, including geometry-controlled benchmarks and a contextual bandit setting illustrating the impact of geometry-aware calibration on decision-making under estimated action values, show that the proposed method accurately tracks oracle calibration temperatures, preserves the desired operator-level coverage, and makes explicit how geometric heterogeneity governs the effective sharpness required by the soft-min approximation. Additional shifted evaluations illustrate the role of exchangeability in the validity guarantee and the consequences of transferring temperatures across populations with different near-optimal geometries. Full article
(This article belongs to the Special Issue Feature Papers in Artificial Intelligence)
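A minimal sketch of the two objects the abstract describes: the entropic soft-min (a smooth lower bound on the minimum whose error shrinks monotonically in the inverse temperature) and a conformal rule that picks the smallest inverse temperature keeping an upper quantile of the error below a tolerance. The sum-form log-sum-exp soft-min, the bisection search, and the tolerance values are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def soft_min(c, beta):
    """Entropic soft-min: -(1/beta) * log(sum_i exp(-beta * c_i)).
    Always a lower bound on min(c); the gap min(c) - soft_min(c, beta)
    decreases monotonically in beta and is at most log(n)/beta."""
    c = np.asarray(c, dtype=float)
    m = c.min()
    # log-sum-exp trick for numerical stability
    return m - np.log(np.exp(-beta * (c - m)).sum()) / beta

def relax_error(c, beta):
    """Relaxation error between the exact minimum and its soft-min (>= 0)."""
    return np.min(c) - soft_min(c, beta)

def conformal_beta(calib, alpha=0.1, tol=1e-2, beta_hi=1e6):
    """Smallest inverse temperature (found by geometric bisection) such
    that the conformal (1 - alpha) upper quantile of the relaxation
    error over the calibration cost vectors stays below tol."""
    n = len(calib)
    k = int(np.ceil((1 - alpha) * (n + 1)))      # conformal rank
    def quantile_err(beta):
        errs = np.sort([relax_error(c, beta) for c in calib])
        return errs[min(k, n) - 1]
    lo, hi = 1e-6, beta_hi
    for _ in range(80):
        mid = np.sqrt(lo * hi)                   # geometric midpoint
        if quantile_err(mid) <= tol:
            hi = mid
        else:
            lo = mid
    return hi
```

Because the error is monotone in the inverse temperature, the feasible set of temperatures is an interval and bisection suffices; the conformal rank supplies the distribution-free finite-sample coverage the abstract refers to.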