Search Results (192)

Search Parameters:
Keywords = probability distribution gap

40 pages, 33004 KB  
Article
Sampling-Based Path Planning and Semantic Navigation for Complex Large-Scale Environments
by Shakeeb Ahmad and J. Sean Humbert
Robotics 2025, 14(11), 149; https://doi.org/10.3390/robotics14110149 (registering DOI) - 24 Oct 2025
Abstract
This article proposes a multi-agent path planning and decision-making solution for high-tempo field robotic operations, such as search-and-rescue, in large-scale unstructured environments. As a representative example, subterranean environments can span many kilometers and present challenges such as limited to no communication, hazardous terrain, blocked passages due to collapses, and vertical structures. The time-sensitive nature of these operations inherently requires solutions that are reliably deployable in practice. Moreover, a human-supervised multi-robot team is required to ensure that the mobility and cognitive capabilities of the various agents are leveraged for mission efficiency. Therefore, this article proposes a solution that is suited to both air and ground vehicles and is well adapted for information sharing between different agents. The article first details a sampling-based autonomous exploration solution that brings significant improvements with respect to the current state of the art. These improvements include relying on an occupancy grid-based sample-and-project approach to terrain assessment and formulating the solution-search problem as a constraint-satisfaction problem to further enhance the computational efficiency of the planner. In addition, the demonstration of the exploration planner by team MARBLE at the DARPA Subterranean Challenge finals is presented. The inevitable interaction of heterogeneous autonomous robots with human operators demands common semantics for reasoning across robot and human teams that make use of different geometric map capabilities suited to their mobility and computational resources. To this end, the path planner is further extended to incorporate semantic mapping and decision-making into the framework. First, the proposed solution generates a semantic map of the exploration environment by labeling a robot's position history in the form of probability distributions of observations. The semantic reasoning solution then uses higher-level cues from a semantic map to bias exploration behaviors toward a semantic of interest. This objective is achieved by using a particle filter to localize a robot on a given semantic map, followed by a Partially Observable Markov Decision Process (POMDP)-based controller that guides the exploration direction of the sampling-based exploration planner. Hence, this article aims to bridge an understanding gap between humans and a heterogeneous robotic team, not just through the transfer of a common semantic map among the agents but also by enabling a robot to use such information to guide its lower-level reasoning when such abstract information is transferred to it. Full article
(This article belongs to the Special Issue Autonomous Robotics for Exploration)
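
To make the localization step concrete, the sketch below is a minimal 1-D particle filter on a labeled corridor, in the spirit of the semantic-map localization the abstract describes; the map, motion noise, and observation model are invented for illustration and are not the authors' implementation (the POMDP controller layered on top is omitted).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D semantic map: label observed at each corridor cell.
semantic_map = np.array([0, 0, 1, 1, 2, 0, 0, 1, 2, 2])  # 0=tunnel, 1=junction, 2=shaft
n_cells = len(semantic_map)
n_particles = 500

# Particles: candidate robot positions (cell indices), initially uniform.
particles = rng.integers(0, n_cells, size=n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

def step(particles, weights, control, observation, hit_prob=0.8):
    # Motion update: shift each particle by `control` cells plus +/-1 cell of noise.
    particles = np.clip(particles + control + rng.integers(-1, 2, size=len(particles)),
                        0, n_cells - 1)
    # Measurement update: weight particles by agreement between the mapped label
    # at the particle's cell and the semantic label the robot reports.
    likelihood = np.where(semantic_map[particles] == observation,
                          hit_prob, (1 - hit_prob) / 2)
    weights = weights * likelihood
    weights /= weights.sum()
    # Resample to avoid weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Simulated run: robot moves one cell per step and reports the label it observes.
for control, obs in [(1, 0), (1, 1), (1, 1), (1, 2)]:
    particles, weights = step(particles, weights, control, obs)

belief = np.bincount(particles, minlength=n_cells) / n_particles
print("posterior over cells:", np.round(belief, 2))
```

Each measurement update reweights particles by how well the mapped label matches the observation, so the belief concentrates on cells consistent with the observed label sequence.
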
14 pages, 2426 KB  
Article
Assessing Fault Slip Probability and Controlling Factors in Shale Gas Hydraulic Fracturing
by Kailong Wang, Wei Lian, Jun Li and Yanxian Wu
Eng 2025, 6(10), 272; https://doi.org/10.3390/eng6100272 - 11 Oct 2025
Viewed by 237
Abstract
Fault slips induced by hydraulic fracturing are the primary mechanism of casing deformation during deep shale gas development in Sichuan’s Luzhou Block, where deformation rates reach 51% and severely compromise productivity. To address a critical gap in existing research on quantitative risk assessment systems, we developed a probabilistic model integrating pore pressure evolution dynamics with Monte Carlo simulations to quantify slip risks. The model incorporates key operational parameters (pumping pressure, rate, and duration) and geological factors (fault friction coefficient, strike/dip angles, and horizontal stress difference) validated through field data, showing >90% slip probability in 60% of deformed well intervals. The results demonstrate that prolonged high-intensity fracturing increases slip probability by 32% under 80–100 MPa pressure surges. Meanwhile, an increase in the friction coefficient from 0.40 to 0.80 reduces slip probability by 6.4% through elevated critical pore pressure. Fault geometry exhibits coupling effects: the risk of low-dip faults reaches its peak when strike parallels the maximum horizontal stress, whereas high-dip faults show a bimodal high-risk distribution at strike angles of 60–120°; the horizontal stress difference is directly proportional to the slip probability. We propose optimizing fracturing parameters, controlling operation duration, and avoiding high-risk fault geometries as mitigation strategies, providing a scientific foundation for enhancing the safety and efficiency of shale gas development. Full article
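
As a generic illustration of how Monte Carlo simulation turns parameter uncertainty into a slip probability, the sketch below applies a Mohr–Coulomb slip criterion to sampled friction coefficients, stresses, and pore-pressure surges; the distributions and numbers are placeholders, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo samples

# Assumed input distributions (illustrative values, not the paper's field-calibrated ones).
mu = rng.uniform(0.40, 0.80, n)          # fault friction coefficient
sigma_n = rng.normal(95.0, 5.0, n)       # effective normal stress on the fault, MPa
tau = rng.normal(40.0, 4.0, n)           # resolved shear stress on the fault, MPa
dp = rng.uniform(20.0, 40.0, n)          # pore-pressure increase from fracturing, MPa

# Mohr-Coulomb criterion: slip occurs when shear stress exceeds frictional resistance
# after the pore-pressure surge reduces the effective normal stress.
slips = tau > mu * (sigma_n - dp)
print(f"estimated slip probability: {slips.mean():.3f}")

# Sensitivity check: higher friction raises the critical pore pressure and lowers risk.
for lo, hi in [(0.40, 0.55), (0.55, 0.70), (0.70, 0.80)]:
    mask = (mu >= lo) & (mu < hi)
    print(f"  friction {lo:.2f}-{hi:.2f}: slip probability {slips[mask].mean():.3f}")
```
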
21 pages, 1271 KB  
Article
Feasibility and Limitations of Generalized Grover Search Algorithm-Based Quantum Asymmetric Cryptography: An Implementation Study on Quantum Hardware
by Tzung-Her Chen and Wei-Hsiang Hung
Electronics 2025, 14(19), 3821; https://doi.org/10.3390/electronics14193821 - 26 Sep 2025
Viewed by 362
Abstract
The emergence of quantum computing poses significant threats to conventional public-key cryptography, driving the urgent need for quantum-resistant cryptographic solutions. While quantum key distribution addresses secure key exchange, its dependency on symmetric keys and point-to-point limitations present scalability constraints. Quantum Asymmetric Encryption (QAE) offers a promising alternative by leveraging quantum mechanical principles for security. This paper presents the first practical implementation of a QAE protocol on IBM Quantum devices, building upon the theoretical framework originally proposed by Yoon et al. We develop a generalized Grover Search Algorithm (GSA) framework that supports non-standard initial quantum states through novel diffusion operator designs, extending its applicability beyond idealized conditions. The complete QAE protocol, including key generation, encryption, and decryption stages, is translated into executable quantum circuits and evaluated on both IBM Quantum simulators and real quantum hardware. Experimental results demonstrate significant scalability challenges, with success probabilities deteriorating considerably for larger systems. The 2-qubit implementation achieves near-perfect accuracy (100% on the simulator, and 93.88% on the hardware), while performance degrades to 78.15% (simulator) and 45.84% (hardware) for 3 qubits, and declines critically to 48.08% (simulator) and 7.63% (hardware) for 4 qubits. This degradation is primarily attributed to noise and decoherence effects in current Noisy Intermediate-Scale Quantum (NISQ) devices, highlighting the limitations of single-iteration GSA approaches. Our findings underscore the critical need for enhanced hardware fidelity and algorithmic optimization to advance the practical viability of quantum cryptographic systems, providing valuable insights for bridging the gap between theoretical quantum cryptography and real-world implementations. Full article
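
The generalized diffusion operator discussed here reflects about whatever initial state is prepared rather than the uniform superposition. A small NumPy state-vector sketch of one such iteration (noise-free, so it does not reproduce the hardware results) might look as follows; the perturbed initial state and the marked index are arbitrary choices, not values from the paper.

```python
import numpy as np

def grover_iteration(state, marked_index):
    """One generalized Grover iteration: oracle phase flip followed by
    reflection about the (possibly non-uniform) initial state."""
    s0 = state.copy()                 # initial state used by the diffusion reflection
    psi = state.copy()
    psi[marked_index] *= -1           # oracle: flip the phase of the marked basis state
    # Generalized diffusion operator D = 2|s0><s0| - I applied to psi.
    return 2 * s0 * np.vdot(s0, psi) - psi

n_qubits = 3
dim = 2 ** n_qubits
marked = 5

# Non-standard initial state: uniform amplitudes perturbed and renormalized
# (a stand-in for the imperfect state preparation the abstract discusses).
rng = np.random.default_rng(1)
amps = np.ones(dim) + 0.2 * rng.standard_normal(dim)
state = amps / np.linalg.norm(amps)

out = grover_iteration(state, marked)
print(f"success probability after one iteration: {abs(out[marked])**2:.3f}")
print(f"baseline (initial state)               : {abs(state[marked])**2:.3f}")
```
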
28 pages, 6622 KB  
Article
Bayesian Spatio-Temporal Trajectory Prediction and Conflict Alerting in Terminal Area
by Yangyang Li, Yong Tian, Xiaoxuan Xie, Bo Zhi and Lili Wan
Aerospace 2025, 12(9), 855; https://doi.org/10.3390/aerospace12090855 - 22 Sep 2025
Viewed by 499
Abstract
Precise trajectory prediction in the airspace of a high-density terminal area (TMA) is crucial for Trajectory Based Operations (TBO), but frequent aircraft interactions and maneuvering behaviors can introduce significant uncertainties. Most existing approaches use deterministic deep learning models that lack uncertainty quantification and explicit spatial awareness. To address this gap, we propose the BST-Transformer, a Bayesian spatio-temporal deep learning framework that produces probabilistic multi-step trajectory forecasts and supports probabilistic conflict alerting. The framework first extracts temporal and spatial interaction features via spatio-temporal attention encoders and then uses a Bayesian decoder with variational inference to yield trajectory distributions. Potential conflicts are evaluated by Monte Carlo sampling of the predictive distributions to produce conflict probabilities and alarm decisions. Experiments based on real SSR data from the Guangzhou TMA show that the model improves prediction accuracy, reducing MADE by 60.3% relative to a deterministic ST-Transformer with analogous reductions in horizontal and vertical errors (MADHE and MADVE), quantifies uncertainty, significantly enhances the system’s ability to identify safety risks, and provides strong support for intelligent air traffic management with uncertainty-perception capabilities. Full article
(This article belongs to the Section Air Traffic and Transportation)
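
The conflict-alerting step reduces to estimating, from trajectory samples, the probability that two aircraft violate separation minima. A minimal Monte Carlo sketch under assumed predictive distributions and illustrative separation thresholds (not the paper's trained decoder or operational minima):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical predictive samples for two aircraft at one future time step:
# 1000 draws of (x, y, z) in (km, km, km), standing in for a Bayesian decoder's output.
samples_a = rng.normal(loc=[10.0, 5.0, 3.00], scale=[0.8, 0.8, 0.06], size=(1000, 3))
samples_b = rng.normal(loc=[11.5, 5.5, 3.05], scale=[0.8, 0.8, 0.06], size=(1000, 3))

# Illustrative conflict definition: horizontal separation < 5 NM (about 9.26 km)
# and vertical separation < 300 m at the same time.
d_h = np.linalg.norm(samples_a[:, :2] - samples_b[:, :2], axis=1)   # km
d_v = np.abs(samples_a[:, 2] - samples_b[:, 2]) * 1000.0            # m

conflict_prob = np.mean((d_h < 9.26) & (d_v < 300.0))
print(f"Monte Carlo conflict probability: {conflict_prob:.2f}")
print("alert" if conflict_prob > 0.05 else "no alert")  # example alarm threshold
```
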
24 pages, 4793 KB  
Article
Developing Rainfall Spatial Distribution for Using Geostatistical Gap-Filled Terrestrial Gauge Records in the Mountainous Region of Oman
by Mahmoud A. Abd El-Basir, Yasser Hamed, Tarek Selim, Ronny Berndtsson and Ahmed M. Helmi
Water 2025, 17(18), 2695; https://doi.org/10.3390/w17182695 - 12 Sep 2025
Viewed by 539
Abstract
Arid mountainous regions are vulnerable to extreme hydrological events such as floods and droughts. Providing accurate and continuous rainfall records with no gaps is crucial for effective flood mitigation and water resource management in these and downstream areas. Satellite data and geospatial interpolation can be employed for this purpose and to provide continuous data series. However, it is essential to thoroughly assess these methods to avoid an increase in errors and uncertainties in the design of flood protection and water resource management systems. The current study focuses on the mountainous region in northern Oman, which covers approximately 50,000 square kilometers, accounting for 16% of Oman’s total area. The study utilizes data from 279 rain gauges spanning from 1975 to 2009, with varying annual data gaps. Due to the limited accuracy of satellite data in arid and mountainous regions, 51 geospatial interpolation techniques were used to fill data gaps and yield annual maximum and total yearly precipitation records. The root mean square error (RMSE) and correlation coefficient (R) were used to select the most suitable geospatial interpolation technique. The selected technique was utilized to generate the spatial distribution of annual maxima and total yearly precipitation over the study area for the period from 1975 to 2009. Furthermore, gamma, normal, and extreme value families of probability density functions (PDFs) were evaluated to fit the gap-filled rain gauge datasets. Finally, maximum annual precipitation values for return periods of 2, 5, 10, 25, 50, and 100 years were generated for each rain gauge. The results show that the geostatistical interpolation techniques outperformed the deterministic interpolation techniques in generating the spatial distribution of maximum and total yearly records over the study area. Full article
(This article belongs to the Section Hydrology)
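
The distribution-fitting and return-period step can be sketched with SciPy: fit candidate gamma, normal, and extreme-value distributions to a gauge's annual maxima and read off quantiles for the design return periods. The synthetic record and parameter values below are stand-ins, not the Oman gauge data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Stand-in gap-filled record of annual maximum rainfall (mm) at one gauge.
annual_maxima = stats.gumbel_r.rvs(loc=35.0, scale=15.0, size=35, random_state=rng)

# Fit candidates from the gamma, normal, and extreme-value families.
candidates = {"gumbel": stats.gumbel_r, "gamma": stats.gamma, "normal": stats.norm}
fits = {name: dist.fit(annual_maxima) for name, dist in candidates.items()}

# Compare fits with the Kolmogorov-Smirnov statistic (lower is better).
for name, params in fits.items():
    ks = stats.kstest(annual_maxima, candidates[name].name, args=params).statistic
    print(f"{name:7s} KS statistic: {ks:.3f}")

# Return-period rainfall from the fitted Gumbel: the T-year event is the
# (1 - 1/T) quantile of the annual-maximum distribution.
loc, scale = fits["gumbel"]
for T in (2, 5, 10, 25, 50, 100):
    print(f"{T:3d}-year rainfall: {stats.gumbel_r.ppf(1 - 1 / T, loc, scale):.1f} mm")
```
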
23 pages, 355 KB  
Article
Two Types of Geometric Jensen–Shannon Divergences
by Frank Nielsen
Entropy 2025, 27(9), 947; https://doi.org/10.3390/e27090947 - 11 Sep 2025
Viewed by 778
Abstract
The geometric Jensen–Shannon divergence (G-JSD) has gained popularity in machine learning and information sciences thanks to its closed-form expression between Gaussian distributions. In this work, we introduce an alternative definition of the geometric Jensen–Shannon divergence tailored to positive densities which does not normalize geometric mixtures. This novel divergence is termed the extended G-JSD, as it applies to the more general case of positive measures. We explicitly report the gap between the extended G-JSD and the G-JSD when considering probability densities, and show how to express the G-JSD and extended G-JSD using the Jeffreys divergence and the Bhattacharyya distance or Bhattacharyya coefficient. The extended G-JSD is proven to be an f-divergence, which is a separable divergence satisfying information monotonicity and invariance in information geometry. We derive a corresponding closed-form formula for the two types of G-JSDs when considering the case of multivariate Gaussian distributions that is often met in applications. We consider Monte Carlo stochastic estimations and approximations of the two types of G-JSD using the projective γ-divergences. Although the square root of the JSD yields a metric distance, we show that this is no longer the case for the two types of G-JSD. Finally, we explain how these two types of geometric JSDs can be interpreted as regularizations of the ordinary JSD. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
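
A quick numerical check of the identity quoted in the abstract, for the skew parameter α = 1/2 and the convention in which the geometric mixture is normalized (both assumptions on our part): the G-JSD computed from its definition should match one quarter of the Jeffreys divergence minus the Bhattacharyya distance.

```python
import numpy as np

# Grid and two 1-D Gaussian densities (an example pair; any densities would do).
x = np.linspace(-12.0, 12.0, 20001)
dx = x[1] - x[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p = gauss(x, 0.0, 1.0)
q = gauss(x, 2.0, 1.5)

def kl(a, b):
    # Kullback-Leibler divergence by numerical integration on the grid.
    return np.sum(a * np.log(a / b)) * dx

# Geometric mixture for alpha = 1/2: unnormalized sqrt(p*q), then normalized.
g_tilde = np.sqrt(p * q)
Z = np.sum(g_tilde) * dx            # Bhattacharyya coefficient
g = g_tilde / Z

# G-JSD (normalized geometric mixture), computed directly from its definition.
gjsd = 0.5 * (kl(p, g) + kl(q, g))

# Identity referenced in the abstract (under the alpha = 1/2 convention assumed here):
# G-JSD = (1/4) * Jeffreys(p, q) - Bhattacharyya distance(p, q).
jeffreys = kl(p, q) + kl(q, p)
bhattacharyya_dist = -np.log(Z)
print(f"G-JSD from definition : {gjsd:.6f}")
print(f"J/4 - D_B             : {jeffreys / 4 - bhattacharyya_dist:.6f}")
```
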
24 pages, 2422 KB  
Article
Autonomous Coverage Path Planning Model for Maritime Search and Rescue with UAV Application
by Chuxiong Zhang, Ning Huang and Chaoxian Wu
J. Mar. Sci. Eng. 2025, 13(9), 1735; https://doi.org/10.3390/jmse13091735 - 9 Sep 2025
Viewed by 579
Abstract
Maritime transport is vital to the global economy, yet the frequency of natural disasters at sea continues to rise, resulting in more persons falling overboard. Effective maritime search and rescue (SAR) therefore hinges on accurately predicting the probable distribution of drifting victims and on rapidly devising an optimal search plan. Conventional SAR operations either rely on rigid, pre-defined patterns or employ reinforcement-learning techniques that yield non-unique solutions and incur excessive computational time. To overcome these shortcomings, we propose an adaptive SAR framework that integrates three modules: (i) the AP98 maritime-drift model, (ii) Monte Carlo particle simulation, and (iii) a mixed-integer linear programming (MILP) model. First, Monte Carlo particles are propagated through the AP98 model to generate a probability density map of the victim’s location. Subsequently, the MILP model maximizes the cumulative probability of rescue success while minimizing a composite cost index, producing optimal UAV search trajectories solved via Gurobi. Experimental results on a 10 km × 10 km scenario with five UAVs show that, compared with traditional parallel-line search, the proposed MILP approach increases cumulative success probability by 12.4% within the first twelve search steps, eliminates path overlap entirely, and converges in 9.5 s with an optimality gap of 0.79%, thereby demonstrating both efficiency and real-time viability. The best result is achieved when MIPFocus (a Gurobi solver setting that controls the emphasis of the mixed-integer programming search) is set to focus on the optimal solution and the parallel solution method is used at the same time. Full article
(This article belongs to the Section Ocean Engineering)
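
The first two modules amount to pushing a cloud of particles through a leeway-drift model and histogramming the result into a probability density map. The sketch below uses a simple wind-fraction leeway term with made-up coefficients (not the calibrated AP98 values) and omits the MILP path-planning stage.

```python
import numpy as np

rng = np.random.default_rng(11)
n_particles = 5000
dt_h, n_steps = 0.5, 24          # half-hour steps over a 12 h drift window

# Last known position of the person in the water (km, local grid).
pos = np.zeros((n_particles, 2))

# Environmental forcing with uncertainty: surface current (m/s) and 10 m wind (m/s).
current = rng.normal([0.30, 0.10], 0.05, size=(n_particles, 2))
wind = rng.normal([8.0, 2.0], 1.5, size=(n_particles, 2))

# Leeway drift: a fraction of the wind plus the current, with per-particle spread.
# The 3% downwind coefficient is illustrative, not the calibrated AP98 value.
leeway_coeff = rng.normal(0.03, 0.005, size=(n_particles, 1))

for _ in range(n_steps):
    velocity = current + leeway_coeff * wind                 # m/s
    pos += velocity * dt_h * 3600.0 / 1000.0                 # advance in km per step
    pos += rng.normal(0.0, 0.2, size=pos.shape)              # unresolved diffusion

# Probability density map on a 1 km grid: particle counts normalized to sum to 1.
grid, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=40, range=[[0, 40], [0, 40]])
prob_map = grid / grid.sum()
ix, iy = np.unravel_index(np.argmax(prob_map), prob_map.shape)
print(f"highest-probability 1 km cell: ({ix}, {iy}) with P = {prob_map[ix, iy]:.3f}")
```
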
29 pages, 1132 KB  
Article
Generating Realistic Synthetic Patient Cohorts: Enforcing Statistical Distributions, Correlations, and Logical Constraints
by Ahmad Nader Fasseeh, Rasha Ashmawy, Rok Hren, Kareem ElFass, Attila Imre, Bertalan Németh, Dávid Nagy, Balázs Nagy and Zoltán Vokó
Algorithms 2025, 18(8), 475; https://doi.org/10.3390/a18080475 - 1 Aug 2025
Viewed by 909
Abstract
Large, high-quality patient datasets are essential for applications like economic modeling and patient simulation. However, real-world data is often inaccessible or incomplete. Synthetic patient data offers an alternative, but current methods often fail to preserve clinical plausibility, real-world correlations, and logical consistency. This study presents a patient cohort generator designed to produce realistic, statistically valid synthetic datasets. The generator uses predefined probability distributions and Cholesky decomposition to reflect real-world correlations. A dependency matrix handles variable relationships in the correct order. Hard limits block unrealistic values, and binary variables are set using percentiles to match expected rates. Validation used two datasets, NHANES (2021–2023) and the Framingham Heart Study, evaluating cohort diversity (general, cardiac, low-dimensional), data sparsity (five correlation scenarios), and model performance (MSE, RMSE, R2, SSE, correlation plots). Results demonstrated strong alignment with real-world data in central tendency, dispersion, and correlation structures. Scenario A (empirical correlations) performed best (R2 = 86.8–99.6%, lowest SSE and MAE). Scenario B (physician-estimated correlations) also performed well, especially in the low-dimensional population (R2 = 80.7%). Scenario E (no correlation) performed worst. Overall, the proposed model provides a scalable, customizable solution for generating synthetic patient cohorts, supporting reliable simulations and research when real-world data is limited. While deep learning approaches have been proposed for this task, they require access to large-scale real datasets and offer limited control over statistical dependencies or clinical logic. Our approach addresses this gap. Full article
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)
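
The correlation-enforcing step described here is essentially a Gaussian-copula construction: draw standard normals, correlate them with the Cholesky factor of a target correlation matrix, map to uniforms, and push them through each variable's marginal distribution, with hard limits applied afterwards. The variables, marginals, and correlation values below are illustrative, not taken from NHANES or Framingham.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000

# Target correlation structure between age, BMI, and systolic blood pressure
# (illustrative values only).
corr = np.array([[1.0, 0.2, 0.5],
                 [0.2, 1.0, 0.4],
                 [0.5, 0.4, 1.0]])

# Correlated standard normals via the Cholesky factor of the correlation matrix.
L = np.linalg.cholesky(corr)
z = rng.standard_normal((n, 3)) @ L.T
u = stats.norm.cdf(z)                       # correlated uniforms (Gaussian copula)

# Predefined marginal distributions for each variable.
age = stats.truncnorm.ppf(u[:, 0], a=(18 - 55) / 15, b=(90 - 55) / 15, loc=55, scale=15)
bmi = stats.gamma.ppf(u[:, 1], a=20, scale=1.35)          # right-skewed, mean ~27
sbp = stats.norm.ppf(u[:, 2], loc=125, scale=15)

cohort = np.column_stack([age, bmi, sbp])

# Hard limit: block physiologically unrealistic systolic blood pressure values.
cohort[:, 2] = np.clip(cohort[:, 2], 80, 220)

# Non-normal marginals distort the target correlations slightly; check the result.
print("empirical correlations:\n", np.round(np.corrcoef(cohort, rowvar=False), 2))
```
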
26 pages, 6348 KB  
Article
Building Envelope Thermal Anomaly Detection Using an Integrated Vision-Based Technique and Semantic Segmentation
by Shayan Mirzabeigi, Ryan Razkenari and Paul Crovella
Buildings 2025, 15(15), 2672; https://doi.org/10.3390/buildings15152672 - 29 Jul 2025
Viewed by 1484
Abstract
Infrared thermography is a common approach used in building inspection for identifying building envelope thermal anomalies that cause energy loss and occupant thermal discomfort. Detecting these anomalies is essential to improve the thermal performance of energy-inefficient buildings through energy retrofit design and correspondingly reduce operational energy costs and environmental impacts. A thermal bridge is an unwanted conductive heat transfer. On the other hand, an infiltration/exfiltration anomaly is an uncontrollable convective heat transfer, typically occurring around windows and doors, although it can also be due to a defect that compromises a building envelope’s integrity. While the existing literature underscores the significance of automatic thermal anomaly identification and offers insights into automated methodologies, there is a notable gap in addressing an automated workflow that leverages building envelope component segmentation for enhanced detection accuracy. Consequently, an automatic thermal anomaly identification workflow operating on visible and thermal images was developed and tested, comparing a variant that utilizes segmented building envelope information against a workflow without any semantic segmentation. Building envelope images (e.g., walls and windows) were segmented with a U-Net architecture and compared against a more conventional semantic segmentation approach. The results were discussed to better understand the importance of training data availability and of scaling the workflow. Then, thermal anomaly thresholds for different target domains were detected using probability distributions. Finally, thermal anomaly masks of those domains were computed. This study conducted a comprehensive examination of a campus building in Syracuse, New York, utilizing a drone-based data collection approach. The case study successfully detected diverse thermal anomalies associated with various envelope components. The proposed approach offers the potential for immediate and accurate in situ thermal anomaly detection in building inspections. Full article
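
The value of segmentation for thresholding can be shown in a few lines: fit a per-component temperature distribution and flag upper-tail pixels within each component, instead of thresholding the whole image at once. The synthetic thermal image, mask, and 2-sigma cut-off below are illustrative assumptions, not the study's trained U-Net or calibrated thresholds.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in thermal image (temperatures in deg C) and a semantic mask from a
# segmentation model: 0 = background, 1 = wall, 2 = window.
thermal = rng.normal(5.0, 0.8, size=(240, 320))
mask = np.zeros((240, 320), dtype=int)
mask[40:200, 20:300] = 1                      # wall
mask[90:150, 120:200] = 2                     # window (typically warmer than the wall)
thermal[mask == 2] += 4.0
thermal[60:80, 40:60] += 6.0                  # injected wall anomaly (e.g., thermal bridge)

anomaly_mask = np.zeros_like(mask, dtype=bool)
for label in (1, 2):                          # threshold each envelope component separately
    temps = thermal[mask == label]
    # Fit a normal distribution to the component's temperatures and flag pixels
    # in the upper tail; the 2-sigma cut-off is an illustrative choice.
    mu, sigma = temps.mean(), temps.std()
    threshold = mu + 2.0 * sigma
    anomaly_mask[(mask == label) & (thermal > threshold)] = True

print(f"flagged {anomaly_mask.sum()} anomalous pixels "
      f"({100 * anomaly_mask.mean():.2f}% of the image)")
```

Without the mask, the warmer window pixels would inflate a single global threshold and hide the wall anomaly, which is why component segmentation helps detection accuracy.
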
19 pages, 1167 KB  
Article
A Reservoir Group Flood Control Operation Decision-Making Risk Analysis Model Considering Indicator and Weight Uncertainties
by Tangsong Luo, Xiaofeng Sun, Hailong Zhou, Yueping Xu and Yu Zhang
Water 2025, 17(14), 2145; https://doi.org/10.3390/w17142145 - 18 Jul 2025
Cited by 1 | Viewed by 579
Abstract
Reservoir group flood control scheduling decision-making faces multiple uncertainties, such as dynamic fluctuations of evaluation indicators and conflicts in weight assignment. This study proposes a risk analysis model for the decision-making process: capturing the temporal uncertainties of flood control indicators (such as reservoir maximum water level and downstream control section flow) through the Long Short-Term Memory (LSTM) network, constructing a feasible weight space including four scenarios (unique fixed value, uniform distribution, etc.), resolving conflicts among the weight results from four methods (Analytic Hierarchy Process (AHP), Entropy Weight, Criteria Importance Through Intercriteria Correlation (CRITIC), Principal Component Analysis (PCA)) using game theory, defining decision-making risk as the probability that the actual safety level fails to reach the evaluation threshold, and quantifying risks based on the First-Order Second-Moment (FOSM) method. Case verification in the cascade reservoirs of the Qiantang River Basin of China shows that the model provides a risk assessment framework integrating multi-source uncertainties for flood control scheduling decisions through probabilistic description of indicator uncertainties (e.g., Zmax1 with μ = 65.3 and σ = 8.5) and definition of weight feasible regions (99% weight distribution covered by the 3σ criterion), filling the methodological gap in risk quantification during the decision-making process in existing research. Full article
(This article belongs to the Special Issue Flood Risk Identification and Management, 2nd Edition)
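
The FOSM risk calculation itself is compact: treat the indicator as approximately normal with the stated moments and read the exceedance probability off the standard normal CDF. The sketch below uses the μ and σ quoted for Zmax1 and a hypothetical 70.0 m evaluation threshold (the paper's actual threshold is not given in the abstract).

```python
from scipy.stats import norm

# First-Order Second-Moment (FOSM) risk estimate: the indicator is treated as
# approximately normal with the moments quoted in the abstract (Zmax1: mu = 65.3,
# sigma = 8.5); the 70.0 m safety threshold is a hypothetical illustration.
mu, sigma = 65.3, 8.5
z_limit = 70.0

beta = (z_limit - mu) / sigma          # reliability index
risk = norm.cdf(-beta)                 # P(indicator exceeds the evaluation threshold)

print(f"reliability index beta = {beta:.2f}")
print(f"decision-making risk   = {risk:.3f}")
```
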
42 pages, 2145 KB  
Article
Uncertainty-Aware Predictive Process Monitoring in Healthcare: Explainable Insights into Probability Calibration for Conformal Prediction
by Maxim Majlatow, Fahim Ahmed Shakil, Andreas Emrich and Nijat Mehdiyev
Appl. Sci. 2025, 15(14), 7925; https://doi.org/10.3390/app15147925 - 16 Jul 2025
Cited by 1 | Viewed by 2517
Abstract
In high-stakes decision-making environments, predictive models must deliver not only high accuracy but also reliable uncertainty estimations and transparent explanations. This study explores the integration of probability calibration techniques with Conformal Prediction (CP) within a predictive process monitoring (PPM) framework tailored to healthcare analytics. CP is renowned for its distribution-free prediction regions and formal coverage guarantees under minimal assumptions; however, its practical utility critically depends on well-calibrated probability estimates. We compare a range of post-hoc calibration methods—including parametric approaches like Platt scaling and Beta calibration, as well as non-parametric techniques such as Isotonic Regression and Spline calibration—to assess their impact on aligning raw model outputs with observed outcomes. By incorporating these calibrated probabilities into the CP framework, our multilayer analysis evaluates improvements in prediction region validity, including tighter coverage gaps and reduced minority error contributions. Furthermore, we employ SHAP-based explainability to explain how calibration influences feature attribution for both high-confidence and ambiguous predictions. Experimental results on process-driven healthcare data indicate that the integration of calibration with CP not only enhances the statistical robustness of uncertainty estimates but also improves the interpretability of predictions, thereby supporting safer and robust clinical decision-making. Full article
(This article belongs to the Special Issue Digital Innovations in Healthcare)
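
A minimal sketch of the pipeline's two ingredients, post-hoc calibration followed by split conformal prediction, on synthetic binary data: scikit-learn with isotonic calibration only (the healthcare event log, the other calibration methods, and the SHAP analysis are not reproduced, and the calibration split is reused for the conformal quantile purely for brevity).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.isotonic import IsotonicRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a process-outcome prediction task (binary label).
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post-hoc calibration: isotonic regression maps raw scores to calibrated probabilities.
raw_cal = model.predict_proba(X_cal)[:, 1]
iso = IsotonicRegression(out_of_bounds="clip").fit(raw_cal, y_cal)
cal_test = iso.predict(model.predict_proba(X_test)[:, 1])
prob_test = np.column_stack([1 - cal_test, cal_test])

# Split conformal prediction: nonconformity = 1 - calibrated probability of the true class.
cal_probs = np.column_stack([1 - iso.predict(raw_cal), iso.predict(raw_cal)])
scores = 1 - cal_probs[np.arange(len(y_cal)), y_cal]
alpha = 0.1
q = np.quantile(scores, np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores),
                method="higher")

# Prediction set: all classes whose calibrated probability is at least 1 - q.
pred_sets = prob_test >= 1 - q
covered = pred_sets[np.arange(len(y_test)), y_test].mean()
print(f"empirical coverage at nominal {1 - alpha:.0%}: {covered:.3f}")
print(f"average prediction-set size: {pred_sets.sum(axis=1).mean():.2f}")
```
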
20 pages, 853 KB  
Review
Dengue and Flavivirus Co-Infections: Challenges in Diagnosis, Treatment, and Disease Management
by Rosmen Sufi Aiman Sabrina, Nor Azila Muhammad Azami and Wei Boon Yap
Int. J. Mol. Sci. 2025, 26(14), 6609; https://doi.org/10.3390/ijms26146609 - 10 Jul 2025
Cited by 2 | Viewed by 1743
Abstract
Co-infections of dengue serotypes and dengue with other flaviviruses pose substantial hurdles in disease diagnosis, treatment options, and disease management. The overlapping geographic distributions and mosquito vectors significantly enhance the probability of co-infections. Co-infections may result in more severe disease outcomes due to elevated viral loads, modulation of the immune response, and antibody enhancement. Cross-reactivity in serological assays and the likeness of clinical presentations add to the ongoing challenges in disease diagnosis. Molecular diagnostics such as reverse transcription polymerase chain reaction (RT-PCR) and next-generation sequencing (NGS) are, therefore, employed for more specific disease diagnosis although requiring substantial resources. Despite the advancements, specific anti-flaviviral therapy is still limited, hence the urgency for further investigative research into various therapeutic approaches, including peptide inhibitors, host-targeted therapies, and RNA-based interventions. This review discusses the epidemiology, clinical ramifications, and diagnostic obstacles associated with flavivirus co-infections whilst assessing prospective strategies for better disease prevention, treatment, and management. Addressing these critical gaps is essential for disease mitigation whilst improving patient management especially in regions where co-circulation of flaviviruses is common and their diseases are highly endemic. Full article
(This article belongs to the Section Molecular Microbiology)
38 pages, 12308 KB  
Article
Taxonomic Revision of the Catostemma Clade (Malvaceae/Bombacoideae/Adansonieae)
by Carlos Daniel Miranda Ferreira, William Surprison Alverson, José Fernando A. Baumgratz and Massimo G. Bovini
Plants 2025, 14(14), 2085; https://doi.org/10.3390/plants14142085 - 8 Jul 2025
Viewed by 804
Abstract
The Catostemma clade comprises three genera: Aguiaria, Catostemma, and Scleronema. These genera are representatives of the tribe Adansonieae, and are part of the subfamily Bombacoideae of the Malvaceae family. Taxonomic studies of these genera are scarce and limited to isolated publications of new species or regional floras. We reviewed their taxonomy, morphology, and geography, and assessed gaps in our knowledge of this group. We carried out a bibliographic survey, an analysis of herbarium collections, and collected new material in Brazilian forests. Here, we provide an identification key, nomenclatural revisions, morphological descriptions, taxonomic comments, geographic distribution maps, illustrations, and analyses of the conservation status for all species. We also discuss probable synapomorphies of the clade, to advance our understanding of phylogenetic relationships within the Adansonieae tribe of Bombacoideae. In total, we recognize 16 species: 1 Aguiaria, 12 Catostemma, and 3 Scleronema, of which 7 are endemic to Brazil, 1 to Colombia, and 1 to Venezuela. Two species are ranked as Critically Endangered (CR), and four as Data Deficient (DD). Full article
(This article belongs to the Special Issue Plant Diversity and Classification)
18 pages, 1130 KB  
Article
Robust Optimization of Active Distribution Networks Considering Source-Side Uncertainty and Load-Side Demand Response
by Renbo Wu and Shuqin Liu
Energies 2025, 18(13), 3531; https://doi.org/10.3390/en18133531 - 4 Jul 2025
Cited by 1 | Viewed by 515
Abstract
Aiming to solve the optimization scheduling difficulties caused by the dual uncertainty of source-side photovoltaic (PV) output and load-side demand response in active distribution networks, this paper proposes a two-stage distributionally robust optimization method. First, the first-stage model with the objective of minimizing power purchase cost and the second-stage model with the co-optimization of active loss, distributed power generation cost, PV abandonment penalty, and load compensation cost under the worst probability distribution are constructed, and multiple constraints such as distribution network currents, node voltages, equipment outputs, and demand responses are comprehensively considered. Second, the second-order cone relaxation and linearization technique is adopted to deal with the nonlinear constraints, and the inexact column-and-constraint generation (iCCG) algorithm is designed to accelerate the solution process. The solution efficiency and accuracy are balanced by dynamically adjusting the convergence gap of the main problem. The simulation results based on the improved IEEE33 bus system show that the proposed method reduces the operation cost by 5.7% compared with traditional robust optimization, and the cut-load capacity is significantly reduced at a confidence level of 0.95. The iCCG algorithm improves the computational efficiency by 35.2% compared with the traditional CCG algorithm, which verifies the effectiveness of the model in coping with the uncertainties and improving the economy and robustness. Full article
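
For orientation, a generic two-stage distributionally robust formulation (the paper's specific ambiguity set and cost terms are not spelled out in the abstract, so this is only the standard template) reads:

```latex
\min_{x \in X}\; c^{\top}x \;+\; \max_{\mathbb{P} \in \mathcal{P}}\; \mathbb{E}_{\xi \sim \mathbb{P}}\!\left[ Q(x,\xi) \right],
\qquad
Q(x,\xi) \;=\; \min_{y \in Y(x,\xi)} q(\xi)^{\top} y
```

Here x collects the first-stage decisions (power purchases), ξ the uncertain PV output and responsive load, the inner maximization ranges over an ambiguity set of candidate probability distributions, and Q(x, ξ) is the second-stage recourse cost (active loss, distributed generation cost, PV curtailment penalty, and load compensation). Column-and-constraint generation alternates between a master problem built on the worst-case scenarios found so far and a subproblem that searches for a new worst case; the inexact variant (iCCG) solves the master only to a dynamically adjusted optimality gap, which is the source of the reported speed-up.
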
31 pages, 1107 KB  
Article
Length–Weight Distribution of Non-Zero Elements in Randomized Bit Sequences
by Christoph Lange, Andreas Ahrens, Yadu Krishnan Krishnakumar and Olaf Grote
Sensors 2025, 25(12), 3825; https://doi.org/10.3390/s25123825 - 19 Jun 2025
Viewed by 687
Abstract
Randomness plays an important role in data communication as well as in cybersecurity. In the simulation of communication systems, randomized bit sequences are often used to model a digital source information stream. Cryptographic outputs should look more random than deterministic in order to provide an attacker with as little information as possible. Therefore, the investigation of randomness, especially in cybersecurity, has attracted a lot of attention and research activities. Common tests regarding randomness are hypothesis-based and focus on analyzing the distribution and independence of zero and non-zero elements in a given random sequence. In this work, a novel approach grounded in a gap-based burst analysis is presented and analyzed. Such approaches have been successfully implemented, e.g., in data communication systems and data networks. The focus of the current work is on detecting deviations from the ideal gap-density function describing randomized bit sequences. For testing and verification purposes, the well-researched post-quantum cryptographic CRYSTALS suite, including its Kyber and Dilithium schemes, is utilized. The proposed technique allows for quickly verifying the level of randomness in given cryptographic outputs. The results for different sequence-generation techniques are presented, thus validating the approach. The results show that key-encapsulation and key-exchange algorithms, such as CRYSTALS-Kyber, achieve a lower level of randomness compared to digital signature algorithms, such as CRYSTALS-Dilithium. Full article
(This article belongs to the Section Communications)
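
The gap-based view can be reproduced in a few lines: for an ideal i.i.d. fair-bit source, the number of zeros between successive ones follows P(gap = k) = (1/2)^(k+1), and a candidate bit stream is screened by comparing its empirical gap density against this curve. The uniform random source and the simple chi-squared screen below are illustrative; the CRYSTALS outputs themselves are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(9)

def gap_lengths(bits):
    """Lengths of the zero-runs ('gaps') between consecutive non-zero elements."""
    ones = np.flatnonzero(bits)
    return np.diff(ones) - 1          # number of zeros between successive ones

# Ideal randomized bit sequence: i.i.d. fair bits, so gap lengths are geometric,
# P(gap = k) = (1/2)^(k+1).
bits = rng.integers(0, 2, size=1_000_000)
gaps = gap_lengths(bits)

max_k = 10
empirical = np.bincount(gaps, minlength=max_k + 1)[: max_k + 1] / len(gaps)
ideal = 0.5 ** (np.arange(max_k + 1) + 1)

print(" k  empirical   ideal")
for k in range(max_k + 1):
    print(f"{k:2d}  {empirical[k]:.5f}  {ideal[k]:.5f}")

# A candidate cryptographic output (e.g., a key-encapsulation transcript) can be
# screened by comparing its empirical gap density against the ideal curve, for
# instance with a rough chi-squared statistic over the first few gap lengths.
chi2 = len(gaps) * np.sum((empirical - ideal) ** 2 / ideal)
print(f"chi-squared statistic over gaps 0..{max_k}: {chi2:.1f}")
```
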