Search Results (177)

Search Parameters:
Keywords = hitting probabilities

27 pages, 1346 KB  
Article
Cache-Based Resource Allocation and Auxiliary Beamforming Optimization Method for Marine Non-Terrestrial Networks
by Kang Du and Ying Zhang
Information 2026, 17(3), 277; https://doi.org/10.3390/info17030277 - 10 Mar 2026
Abstract
In maritime non-terrestrial networks, long-distance links and resource heterogeneity give rise to significant system transmission delays. As an effective mechanism for alleviating link pressure and enhancing efficiency, caching has been widely employed in this scenario. However, existing studies have predominantly focused on the separate optimization of caching or beamforming, often overlooking the joint potential of the physical and network layers as well as the impact of dynamic channel variations. This paper develops a system model for cache-assisted maritime non-terrestrial networks that incorporates hit status, two-hop link characteristics, and a dynamic channel switching mechanism to characterize link instability. Accordingly, an optimization framework for the joint design of caching and beamforming is presented with the aim of minimizing average delay. The proposed algorithm decomposes the mixed non-convex problem into two sub-problems through a hierarchical alternating strategy, and integrates semidefinite relaxation, successive convex approximation, and a greedy mechanism to attain an efficient solution. Numerical results verify the rapid convergence of the joint caching and beamforming design algorithm and demonstrate that the semidefinite relaxation achieves a rank-one recovery probability of over 92.8%, ensuring near-optimal beamforming design. Simulation results demonstrate that the joint design robustly outperforms various state-of-the-art benchmarks in delay performance and effectively circumvents physical-layer bottlenecks as network scales expand or satellite antenna resources become constrained. More importantly, the proactive caching design maintains a significant “robustness gap” over isolated optimization schemes, thereby offering solid theoretical support and a practical pathway for cross-layer collaborative optimization in integrated air-ground-space communication systems. Full article
(This article belongs to the Special Issue 2nd Edition of 5G Networks and Wireless Communication Systems)
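The greedy caching mechanism mentioned in the abstract above can be illustrated with a minimal sketch: under an assumed popularity profile, repeatedly cache the file whose expected delay saving per unit of cache size is largest. The popularity values, sizes, and ranking criterion here are illustrative assumptions, not the paper's actual model.

```python
def greedy_cache(popularity, sizes, delay_saving, capacity):
    """Greedily pick files to cache under a size budget, ranked by
    expected delay saving per unit size (an illustrative criterion)."""
    chosen, used = set(), 0.0
    order = sorted(range(len(sizes)),
                   key=lambda f: popularity[f] * delay_saving[f] / sizes[f],
                   reverse=True)
    for f in order:
        if used + sizes[f] <= capacity:
            chosen.add(f)
            used += sizes[f]
    return chosen

# Zipf-like popularity over 5 unit-size files with equal delay savings.
pop = [0.44, 0.22, 0.15, 0.11, 0.08]
print(greedy_cache(pop, [1] * 5, [1.0] * 5, capacity=2))  # the two most popular files
```

With equal sizes and savings the density ranking reduces to pure popularity ordering, which is why the two most popular files win here.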

22 pages, 3110 KB  
Article
Data-Driven Predictive Maintenance for Aircraft Components Through Sparse Event Logs
by Fulin Sezenoğlu Çetin, Ufuk Üngör, Emre Koyuncu and İbrahim Özkol
Aerospace 2026, 13(1), 110; https://doi.org/10.3390/aerospace13010110 - 22 Jan 2026
Viewed by 498
Abstract
Effective predictive maintenance is crucial for ensuring aircraft reliability, reducing operational disruptions, and supporting spare part inventory management in airline operations. However, maintenance data is often sparse, with irregular observations, missing records, and imbalanced failure distributions, making accurate forecasting a significant challenge. This study proposes a data-driven framework for maintenance prediction under sparse observational data. We implement and compare two distinct methodologies: survival analysis via DeepHit for time-to-event prediction, and a latent-space classifier with an autoencoder backbone. Each method is evaluated on historical aircraft maintenance logs and component installation records, addressing challenges posed by limited and imbalanced datasets. Both models are trained and tested on ten years of maintenance logs and component installation records sourced from an airline MRO (Maintenance, Repair and Overhaul) company that services a fleet of more than 500 aircraft, offering a realistic and scalable setting for fleet-wide maintenance analysis. The latent-space classifier demonstrates superior overall performance and consistency across diverse components and prediction horizons compared to DeepHit, which is constrained by its sensitivity to probability thresholds. The encoder-based method effectively transfers knowledge from high-data components to those with sparse maintenance histories, enabling reliable maintenance forecasting and enhanced inventory planning for large-scale airline operations. Full article
(This article belongs to the Collection Air Transportation—Operations and Management)

34 pages, 2281 KB  
Article
Spatiotemporal Lattice-Constrained Event Linking and Automatic Labeling for Cross-Document Accident Reports
by Wenhua Zeng, Wenhu Tang, Diping Yuan, Bo Zhang and Yuhui Zeng
Appl. Sci. 2026, 16(2), 595; https://doi.org/10.3390/app16020595 - 6 Jan 2026
Viewed by 287
Abstract
Constructing reusable accident-text corpora is hindered by anonymization, heterogeneous sources, and sparse labels, which complicate cross-document event linking. We propose a spatiotemporal lattice-constrained approach that encodes administrative hierarchies and temporal granularity, defines domain-informed consistency criteria, instantiates spatial/temporal relations via a subset of RCC-8 and Allen’s interval algebra, estimates anchor weights via smoothing with monotonic projection, and fuses signals using a constrained monotonic network with explicit probability calibration. An active-learning decision rule—combining maximum probability with a probability-gap criterion—supports scalable automatic labeling, and controlled augmentation leverages instruction-tuned LLMs under lattice constraints. Experiments show competitive ranking (Hit@1 = 41.51%, Hit@5 = 77.33%) and discrimination (ROC-AUC = 87.34%), with the best F1 (62.46%). The method yields the lowest calibration errors (Brier = 0.14; ECE = 1.97%), maintains performance across sources, and exhibits the smallest F1 fluctuation across thresholds (Δ = 1.7%). In deployment-oriented analyses, it auto-labels 77.7% of cases with 97.51% accuracy among high-confidence outputs while routing 22.3% to review, where the true-positive rate is 81.46%. These findings indicate that integrating structured constraints with calibrated probabilistic fusion enables accurate, auditable, and scalable event linking for accident-corpus construction. Full article
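The Hit@k ranking metric reported in the abstract above counts how often the true linked event appears in a system's top-k candidates. A minimal sketch with made-up event identifiers:

```python
def hit_at_k(ranked_lists, gold, k):
    """Fraction of queries whose gold item appears in the top-k of its ranking."""
    hits = sum(1 for ranking, g in zip(ranked_lists, gold) if g in ranking[:k])
    return hits / len(gold)

# Three queries, each with a ranked candidate list and a gold answer.
rankings = [["e2", "e1", "e3"], ["e5", "e4", "e6"], ["e7", "e9", "e8"]]
gold = ["e1", "e4", "e8"]
print(hit_at_k(rankings, gold, 1))  # 0.0
print(hit_at_k(rankings, gold, 3))  # 1.0
```

Hit@1 is strict top-rank accuracy, while larger k credits near-misses, which is why Hit@5 in the abstract is much higher than Hit@1.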

16 pages, 979 KB  
Article
Performance Analysis of Cache-Enabled Millimeter-Wave Downlink Time Division Duplexing Networks with Cooperative Base Stations
by P. V. Muralikrishna, Kadiyam Sridevi and T. Venkata Ramana
Electronics 2025, 14(23), 4765; https://doi.org/10.3390/electronics14234765 - 4 Dec 2025
Viewed by 367
Abstract
The highly directional narrow-beam operation in mmWave networks, while effective at suppressing interference, lacks adaptability to dynamic traffic variations and blockages compared to D-TDD and JT schemes. D-TDD efficiently mitigates DL–UL cross-interference during asymmetric traffic. At the same time, joint transmission coordinates multiple base stations to deliver phase-aligned signals, converting interference into useful combined power and ensuring stable links under dynamic slot changes. However, these adaptive regimes are often overlooked in recent mmWave designs, leading to degraded communication performance. This work proposes D-TDD-based cooperative caching (DTCC) mmWave networks, where randomly distributed base stations with local caches enhance reliability and reduce backhaul load. Closed-form expressions for the cache hit probability and the average content success probability (ASP) are derived under the proposed DTCC framework. Popularity-based caching strategies with both equal and variable file sizes are analysed to maximise network-level performance. The simulation results validate that the proposed DTCC framework consistently enhances ASP in dense small-cell deployments, offering notable reliability gains over conventional single-BS (SBS) and static TDD (S-TDD)-based cooperative caching approaches. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Wireless Communications)
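The cache hit probability analysed above can be illustrated for the simplest popularity-based policy: cache the most popular files under a Zipf popularity profile. This is only a toy model; the paper's closed-form expressions account for cooperation, file sizes, and mmWave channel effects.

```python
def zipf_popularity(n, alpha):
    """Zipf request probabilities over n files with skew parameter alpha."""
    weights = [1.0 / (rank ** alpha) for rank in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def hit_probability(pop, cache_size):
    """Hit probability when the cache stores the most popular files."""
    return sum(sorted(pop, reverse=True)[:cache_size])

pop = zipf_popularity(100, alpha=0.8)
print(round(hit_probability(pop, 10), 3))
```

Because Zipf mass concentrates on the head of the distribution, caching just 10% of the catalogue captures far more than 10% of requests, and the gain grows with the skew parameter.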

25 pages, 11153 KB  
Article
Structure-Guided Identification of JAK2 Inhibitors: From Similarity to Stability and Specificity
by Muhammad Yasir, Jinyoung Park, Jongseon Choe, Jin-Hee Han, Eun-Taek Han, Won Sun Park and Wanjoo Chun
Future Pharmacol. 2025, 5(4), 66; https://doi.org/10.3390/futurepharmacol5040066 - 5 Nov 2025
Cited by 1 | Viewed by 1680
Abstract
Background/Objectives: Janus kinase 2 (JAK2) is a pivotal signaling protein implicated in various hematological malignancies and inflammatory disorders, making it a compelling target for therapeutic intervention. Methods: In this study, we employed an integrative computational approach combining ligand-based screening, pharmacophore modeling, molecular docking, molecular dynamics (MD) simulations, and MM/PBSA free energy calculations to identify JAK2 inhibitors from the ChEMBL database. A comprehensive virtual screening of over 1,900,000 compounds was conducted using Tanimoto similarity and a validated pharmacophore model, resulting in the identification of 39 structurally promising candidates. Docking analyses prioritized compounds with favorable interaction energies, while MD simulations over 100 ns assessed the dynamic behavior and binding stability of top hits. Results: Four compounds, CHEMBL4169802, CHEMBL4162254, CHEMBL4286867, and CHEMBL2208033, exhibited consistently superior performance, forming stable hydrogen bonds, favorable RMSD profiles (≤0.5 nm), and strong binding interactions, including salt bridges. Notably, the binding free energies revealed ΔG values as low as −29.91 kcal/mol, surpassing that of the reference inhibitor, momelotinib (−24.17 kcal/mol). Conclusions: Among these, CHEMBL4169802 emerged as the most promising candidate due to its synergistic electrostatic and hydrophobic interactions. Collectively, our results highlight these compounds as probable JAK2-selective inhibitors with strong potential for further biological validation and optimization. Full article
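The Tanimoto similarity used in the screening step above is the intersection-over-union of two binary molecular fingerprints. A minimal sketch with made-up bit sets (real pipelines would compute fingerprints with a cheminformatics toolkit such as RDKit):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two binary fingerprints,
    represented here as sets of on-bit indices."""
    union = len(fp_a | fp_b)
    if union == 0:
        return 0.0
    return len(fp_a & fp_b) / union

# Hypothetical on-bit sets for a query compound and a library candidate.
query = {1, 4, 9, 16, 25}
candidate = {1, 4, 9, 36}
print(tanimoto(query, candidate))  # 0.5
```

In virtual screening, candidates are typically kept only if their coefficient against a known active exceeds a chosen threshold (often around 0.7 for Morgan-type fingerprints, though the cutoff is fingerprint-dependent).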

37 pages, 3305 KB  
Article
An Exploratory Eye-Tracking Study of Breast-Cancer Screening Ads: A Visual Analytics Framework and Descriptive Atlas
by Ioanna Yfantidou, Stefanos Balaskas and Dimitra Skandali
J. Eye Mov. Res. 2025, 18(6), 64; https://doi.org/10.3390/jemr18060064 - 4 Nov 2025
Viewed by 969
Abstract
Successful health promotion involves messages that are quickly captured and held long enough to permit eligibility, credibility, and calls to action to be coded. This research develops an exploratory eye-tracking atlas of breast cancer screening ads viewed by midlife women and a replicable pipeline that distinguishes early capture from long-term processing. Areas of Interest are divided into design-influential categories and graphed with two complementary measures: first hit and time to first fixation for entry and a tie-aware pairwise dominance model for dwell that produces rankings and an “early-vs.-sticky” quadrant visualization. Across creatives, pictorial and symbolic features were more likely to capture the first glance when they were perceptually dominant, while layouts containing centralized headlines or institutional cues deflected entry to the message and source. Prolonged attention was consistently focused on blocks of text, locations, and badges of authoring over ornamental pictures, demarcating the functional difference between capture and processing. Subgroup differences indicated audience-sensitive shifts: Older and household families shifted earlier toward source cues, more educated audiences shifted toward copy and locations, and younger or single viewers shifted toward symbols and images. Internal diagnostics verified that pairwise matrices were consistent with standard dwell summaries, verifying the comparative approach. The atlas converts the patterns into design-ready heuristics: defend sticky and early pieces, encourage sticky but late pieces by pushing them toward probable entry channels, de-clutter early but not sticky pieces to convert to processing, and re-think pieces that are neither. In practice, the diagnostics can be incorporated into procurement, pretesting, and briefs by agencies, educators, and campaign managers in order to enhance actionability without sacrificing segmentation of audiences. As an exploratory investigation, this study invites replication with larger and more diverse samples, generalizations to dynamic media, and associations with downstream measures such as recall and uptake of services. Full article

19 pages, 7595 KB  
Article
Probabilistic Forecasting Model for Tropical Cyclone Intensity Based on Diffusion Model
by Jingjia Luo, Peng Yang and Fan Meng
Remote Sens. 2025, 17(21), 3600; https://doi.org/10.3390/rs17213600 - 31 Oct 2025
Viewed by 1542
Abstract
Reliable forecasting of tropical cyclone (TC) intensity—particularly rapid intensification (RI) events—remains a major challenge in meteorology, largely due to the inherent difficulty of accurately quantifying predictive uncertainty. Traditional numerical approaches are computationally expensive, while statistical models often fail to capture the highly nonlinear relationships involved. Mainstream machine learning models typically provide only deterministic point forecasts and lack the ability to represent uncertainty. To address this limitation, we propose the Tropical Cyclone Diffusion Model (TCDM), the first conditional diffusion-based probabilistic forecasting framework for TC intensity. TCDM integrates multimodal meteorological data, including satellite imagery, re-analysis fields, and environmental predictors, to directly generate the full probability distribution of future intensities. Experimental results show that TCDM not only achieves highly competitive deterministic accuracy (low MAE and RMSE; high R²), but also delivers high-quality probabilistic forecasts (low CRPS; high PICP). Moreover, it substantially improves RI detection by achieving higher hit rates with fewer false alarms. Compared with traditional ensemble-based methods, TCDM provides a more efficient and flexible approach to probabilistic forecasting, offering valuable support for TC risk assessment and disaster preparedness. Full article
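The probabilistic-forecast scores cited above can be sketched for a sample-based forecast; the intensity numbers below are made up. For CRPS this uses the standard ensemble estimator E|X − y| − ½E|X − X′|, which may differ from the paper's exact computation:

```python
def picp(lowers, uppers, observations):
    """Prediction Interval Coverage Probability: fraction of observations
    that fall inside their predicted interval."""
    inside = sum(1 for lo, hi, y in zip(lowers, uppers, observations) if lo <= y <= hi)
    return inside / len(observations)

def crps_ensemble(samples, y):
    """CRPS of an ensemble forecast via E|X - y| - 0.5 * E|X - X'| (lower is better)."""
    n = len(samples)
    term1 = sum(abs(x - y) for x in samples) / n
    term2 = sum(abs(a - b) for a in samples for b in samples) / (n * n)
    return term1 - 0.5 * term2

# Two hypothetical intensity forecasts (kt): one interval covers, one misses.
print(picp([40, 50], [60, 70], [55, 80]))  # 0.5
# A well-centred ensemble scores lower CRPS than a biased one.
print(crps_ensemble([50, 55, 60], 55) < crps_ensemble([70, 75, 80], 55))  # True
```

CRPS rewards both calibration and sharpness of the full predictive distribution, while PICP checks only whether the nominal interval covers often enough; the two together are why the abstract reports them side by side.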

14 pages, 2443 KB  
Article
Numerical Study on Infrared Radiation Signatures of Debris During Projectile Impact Damage Process
by Wenqiang Gao, Teng Zhang and Qinglin Niu
Computation 2025, 13(10), 244; https://doi.org/10.3390/computation13100244 - 19 Oct 2025
Viewed by 602
Abstract
High-speed impact is a critical mechanism for structural damage. The infrared signatures generated during fragment formation provide essential data for damage assessment, protective system design, and target identification. This study investigated an aluminum alloy blunt projectile penetrating a target plate by employing smoothed particle hydrodynamics to simulate the thermal and infrared properties of the ejected debris. The infrared signatures of the debris clouds were calculated using Mie scattering theory under a spherical particle approximation. The reverse Monte Carlo methodology was applied to solve the radiative transfer equations and quantify the infrared emission characteristics. The infrared radiation characteristics of the debris cloud were investigated for projectile impact velocities of 800, 1000, and 1200 m/s. The results showed that the anterior debris regions reached peak temperatures of approximately 1200 K, with a temperature rise of 150–200 K per 200 m/s velocity increase behind the target. The medium-wave (3–5 μm) infrared intensity of the debris cloud was higher than the long-wave (8–12 μm) infrared intensity. The development of debris clouds enhanced the dispersion effect and slowed the increase in infrared radiation intensity in the same time interval. This study provides theoretical foundations for the dynamic infrared radiation characteristics of fragments generated by high-velocity projectile impacts. The infrared radiation characteristics within typical spectral bands can be utilized to assess hit probability and kill effectiveness. Full article
(This article belongs to the Section Computational Engineering)

36 pages, 4432 KB  
Article
Statistics of Global Stochastic Optimisation: How Many Steps to Hit the Target?
by Godehard Sutmann
Mathematics 2025, 13(20), 3269; https://doi.org/10.3390/math13203269 - 13 Oct 2025
Viewed by 529
Abstract
Random walks are considered in a one-dimensional monotonously decreasing energy landscape. To reach the minimum within a region Ωϵ, a number of downhill steps have to be performed. A stochastic model is proposed which captures this random downhill walk and predicts the average number of steps needed to hit the target. Explicit expressions in terms of a recurrence relation are derived for the density distribution of a downhill random walk as well as probability distribution functions to hit a target region Ωϵ within a given number of steps. For the case of stochastic optimisation, the number of rejected steps between two successive downhill steps is also derived, providing a measure for the average total number of trial steps. Analytical results are obtained for generalised random processes with underlying polynomial distribution functions. The more general case of non-monotonously decreasing energy landscapes is then considered, for which results of the monotonous case are transferred by applying the technique of decreasing rearrangement. It is shown that the global stochastic optimisation can be fully described analytically, which is verified by numerical experiments for a number of different distribution and objective functions. Finally, we consider the transition to higher-dimensional objective functions and discuss the change in computational complexity of the stochastic process. Full article
(This article belongs to the Special Issue Statistics for Stochastic Processes)
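A toy one-dimensional instance of the downhill walk described above can be simulated directly: each accepted step lands uniformly below the current position, and we count steps until the walk enters the target region (0, ε). This is only an illustration of the setting, not the paper's general polynomial-distribution model; for this uniform case the mean step count is known to be ln(1/ε) + 1.

```python
import random

def downhill_steps(eps, rng):
    """Steps of x -> Uniform(0, x), starting from x = 1,
    until the walk first enters the target region (0, eps)."""
    x, steps = 1.0, 0
    while x >= eps:
        x = rng.uniform(0.0, x)
        steps += 1
    return steps

rng = random.Random(0)
trials, eps = 20000, 1e-3
mean_steps = sum(downhill_steps(eps, rng) for _ in range(trials)) / trials
# For the uniform walk, the theoretical mean is ln(1/eps) + 1 ≈ 7.91.
print(round(mean_steps, 2))
```

The simulated mean should land close to 7.91, matching the analytic prediction and illustrating why the hitting-step count grows only logarithmically as the target region shrinks.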

29 pages, 14740 KB  
Article
Cloud Mask Detection by Combining Active and Passive Remote Sensing Data
by Chenxi He, Zhitong Wang, Qin Lang, Lan Feng, Ming Zhang, Wenmin Qin, Minghui Tao, Yi Wang and Lunche Wang
Remote Sens. 2025, 17(19), 3315; https://doi.org/10.3390/rs17193315 - 27 Sep 2025
Cited by 1 | Viewed by 1073
Abstract
Clouds cover nearly two-thirds of Earth’s surface, making reliable cloud mask data essential for remote sensing applications and atmospheric research. This study develops a TrAdaBoost transfer learning framework that integrates active CALIOP and passive MODIS observations to enable unified, high-accuracy cloud detection across FY-4A/AGRI, FY-4B/AGRI, and Himawari-8/9 AHI sensors. The proposed TrAdaBoost Cloud Mask algorithm (TCM) achieves robust performance in dual validations with CALIPSO VFM and MOD35/MYD35, attaining a hit rate (HR) above 0.85 and a cloudy probability of detection (PODcld) exceeding 0.89. Relative to official products, TCM consistently delivers higher accuracy, with the most pronounced gains on FY-4A/AGRI. SHAP interpretability analysis highlights that 0.47 μm albedo, 10.8/10.4 μm and 12.0/12.4 μm brightness temperatures, and geometric factors such as solar zenith angles (SZA) and satellite zenith angles (VZA) are key contributors influencing cloud detection. Multidimensional consistency assessments further indicate strong inter-sensor agreement under diverse SZA and land cover conditions, underscoring the stability and generalizability of TCM. These results provide a robust foundation for the advancement of multi-source satellite cloud mask algorithms and the development of integrated cloud data products. Full article
(This article belongs to the Special Issue Remote Sensing in Clouds and Precipitation Physics)
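The validation scores above, hit rate (HR) and cloudy probability of detection (PODcld), come from a 2×2 cloud/clear contingency table. A minimal sketch with made-up counts, taking HR as overall accuracy; the paper's exact definition may differ:

```python
def contingency_scores(hits, misses, false_alarms, correct_negatives):
    """HR (here: overall accuracy) and cloudy POD from a 2x2 contingency table.
    hits = cloudy predicted cloudy; misses = cloudy predicted clear;
    false_alarms = clear predicted cloudy; correct_negatives = clear predicted clear."""
    total = hits + misses + false_alarms + correct_negatives
    hr = (hits + correct_negatives) / total
    pod_cld = hits / (hits + misses)
    return hr, pod_cld

# Hypothetical pixel counts from matching a cloud mask against CALIPSO labels.
hr, pod = contingency_scores(hits=890, misses=110, false_alarms=60, correct_negatives=440)
print(round(hr, 3), round(pod, 3))  # 0.887 0.89
```

Reporting both matters: a mask that labels everything cloudy would score PODcld = 1 while HR collapses, so the pair guards against that degenerate strategy.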

18 pages, 1496 KB  
Article
Constructing Real-Time Meteorological Forecast Method of Short-Term Cyanobacteria Bloom Area Index Changes in the Lake Taihu
by Jikang Wang, Junying Zhao, Cong Hua and Jianzhong Zhang
Sustainability 2025, 17(18), 8376; https://doi.org/10.3390/su17188376 - 18 Sep 2025
Cited by 1 | Viewed by 890
Abstract
The dynamics of cyanobacteria bloom in Lake Taihu, China, are subject to rapid fluctuations under the influence of various factors, with meteorological conditions being particularly influential. In this study, monitoring data on the surface area of cyanobacteria bloom in Lake Taihu and observational data from automatic meteorological stations around Lake Taihu from 2016 to 2022 were utilized. Meteorological sub-indices were constructed based on the probability density distributions of meteorological factors in different areas of cyanobacterial bloom. A stacked ensemble model utilizing various machine learning algorithms was developed. This model was designed to forecast the cyanobacterial bloom area index in Lake Taihu based on meteorological data. This model has been deployed with real-time gridded forecasts from the China Meteorological Administration (CMA) to predict changes in the cyanobacteria bloom area index in Lake Taihu over the next 7 days. The results demonstrate that utilizing meteorological sub-indices, rather than traditional meteorological elements, provides a more effective reflection of changes in cyanobacteria bloom area. Key meteorological sub-indices were identified through recursive feature elimination, with wind speed variance and wind direction variance highlighted as especially important factors. The real-time forecasting system operated over a 2.5-year period (2023 to July 2025). Results demonstrate that for cyanobacteria bloom areas exceeding 100 km², the 1-day lead-time forecast hit rate exceeded 72%, and the 3-day forecast hit rate remained above 65%. These findings significantly enhance forecasting capability for cyanobacterial blooms in Lake Taihu, offering critical support for sustainable water management practices in one of China’s most important freshwater systems. Full article
(This article belongs to the Section Sustainable Water Management)

13 pages, 836 KB  
Article
Importance Assessment of Distribution Network Nodes Based on an Improved MBCC-HITS Algorithm
by Jie Wu, Zhengwei Chang, Wei Zhang, Haina Rong and Tingting Zeng
Algorithms 2025, 18(9), 589; https://doi.org/10.3390/a18090589 - 17 Sep 2025
Viewed by 634
Abstract
By accurately identifying important nodes in the distribution network and implementing priority protection measures or optimizing the network layout, a system’s anti-interference capability can be effectively enhanced while reducing the probability of failures. Inspired by the MBCC-HITS algorithm, this paper proposes an improved MBCC-HITS algorithm based on node degree centrality (DCHITS) for evaluating important nodes in distribution networks. Building upon the MBCC-HITS algorithm, the DCHITS algorithm incorporates node degree centrality to enhance the evaluation framework, supplementing the influence of topological structure factors on node importance assessment, thereby more accurately reflecting the actual conditions of the distribution network. In the IEEE 33 system, the DCHITS algorithm was compared with the node degree and node betweenness algorithms, as well as the MBCC-HITS algorithm, using two indicators: the scale of load loss and the maximum subgroup size. The results demonstrate that the DCHITS algorithm outperforms the others in both indicators. Specifically, compared to the MBCC-HITS algorithm, the scale of load loss increased by 0.55%, and the maximum subgroup size decreased by 8.21%. Compared to the node degree and node betweenness algorithms, the scale of load loss increased by 6.63%, and the maximum subgroup size decreased by 5.38%. These findings indicate that the DCHITS algorithm is more rational and effective in identifying the important nodes in distribution networks. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
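The HITS-style iteration underlying the algorithms above can be sketched as a power iteration over hub and authority scores, here blended with normalized degree centrality. The blending rule (a simple β mix) is a guess for illustration, not the paper's actual DCHITS update:

```python
def dc_hits_sketch(adj, iters=50, beta=0.5):
    """HITS power iteration blended with normalized degree centrality.
    adj is an n x n 0/1 adjacency matrix (symmetric here for an undirected grid)."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    dc = [d / max(deg) for d in deg]  # normalized degree centrality
    auth, hub = [1.0] * n, [1.0] * n
    for _ in range(iters):
        auth = [sum(hub[j] * adj[j][i] for j in range(n)) for i in range(n)]
        hub = [sum(auth[j] * adj[i][j] for j in range(n)) for i in range(n)]
        auth = [a / max(auth) for a in auth]  # normalize to keep values bounded
        hub = [h / max(hub) for h in hub]
    # Illustrative blend of authority score with degree centrality.
    return [beta * a + (1 - beta) * c for a, c in zip(auth, dc)]

# Toy 4-node undirected network; node 0 is the hub of a small star-plus-triangle.
adj = [[0, 1, 1, 1],
       [1, 0, 1, 0],
       [1, 1, 0, 0],
       [1, 0, 0, 0]]
scores = dc_hits_sketch(adj)
print(max(range(4), key=lambda i: scores[i]))  # node 0 ranks as most important
```

The degree-centrality term keeps topologically well-connected nodes from being undervalued when the pure eigenvector iteration concentrates mass elsewhere, which is the motivation the abstract gives for the DCHITS modification.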

16 pages, 3124 KB  
Article
The Prevalence of Diamine Oxidase Polymorphisms and Their Association with Histamine Intolerance Symptomatology in the Mexican Population
by Pamela Aguilar-Rodea, Viviana Mejía-Ramírez, Raúl Hernández-Munguía, Saúl Ramírez-Vargas, Diana Tovar-Vivar, Jaquelin Leyva-Hernández, Juan Carlos Nacar-Gutiérrez, Miriam Morales-Martínez and Aracely Palafox-Zaldivar
Biomedicines 2025, 13(9), 2280; https://doi.org/10.3390/biomedicines13092280 - 17 Sep 2025
Viewed by 2575
Abstract
Exogenous histamine obtained from the intake of histamine-rich food is mainly metabolized by the diamine oxidase enzyme (DAO). Histamine intolerance (HIT) is an alteration mainly caused by DAO deficiency, which is commonly associated with gastrointestinal, respiratory, cardiovascular, central nervous system, muscular, skeletal, and skin symptoms. Despite four single-nucleotide polymorphisms (SNPs) being mainly associated with DAO deficiency, the probability of inheriting these variants and their relationship with HIT in the Mexican population remain unknown. Objective: The aim of this study was to evaluate the prevalence of these SNPs and their relationships with HIT in the Mexican population, including both individual volunteers and family groups. Methods: Four SNPs related to DAO deficiency were detected in 112 volunteers; medical questionnaires were answered. Results: The prevalence of genetic DAO deficiency attributed to at least one risk allele was 78.57% (rs1049793 was the main SNP). Fifteen DAO SNP combinations were detected (the main: rs2052129, rs10156191, rs1049742 (wild-type homozygotes), and rs1049793 (heterozygote), 31.25%). A total of 41.07% of the volunteers presented at least three symptoms in different systems related to HIT, of whom 84.78% presented at least one SNP. The DAO deficiency genetic risk score varied among individual volunteers and families. The highest probability of having a mutated homozygote was 11.8% (rs1049793). HIT symptoms varied among relatives sharing identical genotypes. Conclusions: The prevalence of SNPs related to DAO deficiency in the Mexican population correlates with globally reported data; however, further analysis with volunteers distributed throughout the country would be desirable. Although genetic predisposition was common, the presence of SNPs alone did not predict specific HIT symptoms. Multiple SNPs may increase the presence of HIT symptoms, regardless of the type of allele. These findings highlight the multifactorial nature of HIT and underscore the need for standardized diagnostic criteria. Full article
(This article belongs to the Section Molecular and Translational Medicine)
17 pages, 1327 KB  
Article
MA-HRL: Multi-Agent Hierarchical Reinforcement Learning for Medical Diagnostic Dialogue Systems
by Xingchuang Liao, Yuchen Qin, Zhimin Fan, Xiaoming Yu, Jingbo Yang, Rongye Shi and Wenjun Wu
Electronics 2025, 14(15), 3001; https://doi.org/10.3390/electronics14153001 - 28 Jul 2025
Abstract
Task-oriented medical dialogue systems face two fundamental challenges: the explosion of state-action space caused by numerous diseases and symptoms and the sparsity of informative signals during interactive diagnosis. These issues significantly hinder the accuracy and efficiency of automated clinical reasoning. To address these problems, we propose MA-HRL, a multi-agent hierarchical reinforcement learning framework that decomposes the diagnostic task into specialized agents. A high-level controller coordinates symptom inquiry via multiple worker agents, each targeting a specific disease group, while a two-tier disease classifier refines diagnostic decisions through hierarchical probability reasoning. To combat sparse rewards, we design an information entropy-based reward function that encourages agents to acquire maximally informative symptoms. Additionally, medical knowledge graphs are integrated to guide decision-making and improve dialogue coherence. Experiments on the SymCat-derived SD dataset demonstrate that MA-HRL achieves substantial improvements over state-of-the-art baselines, including +7.2% diagnosis accuracy, +0.91% symptom hit rate, and +15.94% symptom recognition rate. Ablation studies further verify the effectiveness of each module. This work highlights the potential of hierarchical, knowledge-aware multi-agent systems for interpretable and scalable medical diagnosis. Full article
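The entropy-based reward described above can be sketched as the reduction in entropy of the agent's disease belief after a symptom query, so that maximally informative questions earn the largest reward. The belief distributions below are toy values, and the paper's exact reward definition may differ:

```python
import math

# Sketch of an information-entropy reward for symptom inquiry: the agent is
# rewarded by how much a symptom answer sharpens its disease belief.
# Distributions are toy values; MA-HRL's exact formulation may differ.

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def info_gain_reward(prior, posterior):
    """Reward = entropy reduction of the disease belief after one inquiry."""
    return entropy(prior) - entropy(posterior)

prior = [0.25, 0.25, 0.25, 0.25]      # uninformed belief over 4 diseases
posterior = [0.70, 0.20, 0.05, 0.05]  # belief after an informative answer
print(round(info_gain_reward(prior, posterior), 3))
```

An uninformative answer that leaves the belief unchanged yields zero reward, which is exactly the incentive the paper uses against sparse signals.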
(This article belongs to the Special Issue Advanced Techniques for Multi-Agent Systems)
21 pages, 571 KB  
Article
Joint Optimization of Caching and Recommendation with Performance Guarantee for Effective Content Delivery in IoT
by Zhiyong Liu, Hong Shen and Hui Tian
Appl. Sci. 2025, 15(14), 7986; https://doi.org/10.3390/app15147986 - 17 Jul 2025
Abstract
Content caching and recommendation are two key techniques for improving the effectiveness of content delivery over the Internet, which is determined by both delivery efficiency and user satisfaction and is increasingly important in the booming Internet of Things (IoT). While content caching seeks the “greatest common denominator” among users to reduce end-to-end delay in content delivery, personalized recommendation, by contrast, emphasizes user differentiation to enhance satisfaction. Existing studies typically address them separately rather than jointly because of their contradictory objectives, and they focus mainly on heuristics and deep reinforcement learning methods that provide no performance guarantees, which are required in many real-world applications. In this paper, we study the joint optimization of caching and recommendation, in which recommendation is performed over the cached contents rather than purely according to user preferences, as in existing work. We show the NP-hardness of this problem and present a greedy solution with a performance guarantee: we first perform content caching according to user request probability, without considering recommendations, to maximize the aggregated request probability on the cached contents, and then make recommendations from the cached contents to incorporate user preferences and maximize the cache hit rate. We prove that this problem has a monotonically increasing and submodular objective function and develop an efficient algorithm that achieves a (1 − 1/e) ≈ 0.63 approximation ratio to the optimal solution. Experimental results demonstrate that our algorithm dramatically improves on the popular least-recently-used (LRU) algorithm.
We also present experimental evaluations of hit-rate variation, measured by Jensen–Shannon divergence, under different settings of cache capacity and user-preference distortion limit, which can serve as a reference for choosing parameters that balance user preferences and cache hit rate in Internet content delivery. Full article
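The greedy caching step above relies on the classic result that greedy selection for a monotone submodular objective under a cardinality constraint attains a (1 − 1/e) approximation (Nemhauser et al.). A minimal sketch of that step, with an illustrative hit-rate objective and toy user-preference data (the paper's exact objective and its recommendation stage are not reproduced here):

```python
# Greedy cache selection sketch: repeatedly add the content with the largest
# marginal gain in an aggregate hit-rate objective. For monotone submodular
# objectives this greedy rule carries a (1 - 1/e) guarantee; the objective
# and data below are illustrative stand-ins for the paper's formulation.

def hit_objective(cache, user_prefs):
    """Expected fraction of user requests served from the cache.
    user_prefs: list of dicts mapping content id -> request probability."""
    total = 0.0
    for prefs in user_prefs:
        total += sum(p for c, p in prefs.items() if c in cache)
    return total / len(user_prefs)

def greedy_cache(contents, user_prefs, capacity):
    """Greedily fill the cache up to `capacity` items by marginal gain."""
    cache = set()
    for _ in range(capacity):
        best = max((c for c in contents if c not in cache),
                   key=lambda c: hit_objective(cache | {c}, user_prefs))
        cache.add(best)
    return cache

users = [{"a": 0.6, "b": 0.3, "c": 0.1},
         {"b": 0.5, "c": 0.4, "d": 0.1}]
print(sorted(greedy_cache({"a", "b", "c", "d"}, users, capacity=2)))
```

With these toy preferences the greedy rule first picks "b" (marginal gain 0.4) and then "a" (gain 0.3); note this additive objective is modular, a special case of submodular, so here greedy is in fact optimal.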
