Search Results (104)

Search Parameters:
Keywords = aggregate claims

19 pages, 966 KiB  
Article
Agricultural and Food Product Assessment—Methodological Choices in Sustainability Reporting Using the LCA Method
by Tinkara Ošlovnik and Matjaž Denac
Sustainability 2025, 17(15), 6837; https://doi.org/10.3390/su17156837 - 28 Jul 2025
Viewed by 303
Abstract
Consumers are increasingly exposed to environmental claims on food products. These claims often lack scientific validation and can be grounded in different methodologies, which can lead to misleading results. The European Union’s (EU) Environmental Footprint methodology excludes the aggregation of environmental impacts, including damage to human health. This reduces transparency and limits consumers’ ability to make informed, sustainable choices. This study addresses the issue by calculating aggregated impacts on human health via life cycle assessment (LCA) in the agriculture and food-production sectors. The IMPACT World+ method was used, together with trustworthy databases and a proper functional unit definition. The assessment encompassed three types of vegetables, four types of fruit, and four types of ready meals. The study also attempts to assess the impact of different farming systems (organic and conventional) on human health. Two standardised functional units, i.e., one based on product weight and one on product energy value, were considered for each group of products. Our findings showed significant differences in results when different functional units were used. Additionally, no conclusion could be drawn regarding which farming system is more sustainable. It is therefore essential that the regulator clearly defines the criteria for selecting the appropriate functional unit in LCA within the agriculture and food-production sectors. In the absence of such criteria, results should be presented for all alternatives. Although not required by EU regulation, the authors suggest that companies should nevertheless disclose information on the environmental impact of agriculture and food production on human health, as this is important for consumers. Full article
(This article belongs to the Section Economic and Business Aspects of Sustainability)
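To make the functional-unit sensitivity described above concrete, here is a minimal sketch (with placeholder impact scores and energy densities, not the study's data) comparing the same products per kilogram and per 1000 kcal; the ranking can change with the unit.

```python
# Hypothetical illustration (not the study's data) of why the functional unit
# matters: the same products, compared per kilogram and per 1000 kcal.
products = {
    # name: (impact score per kg, energy density in kcal per kg) -- placeholders
    "ready meal": (2.0, 1800),
    "vegetable":  (0.6, 250),
    "fruit":      (0.5, 500),
}

per_kg = {name: impact for name, (impact, _) in products.items()}
per_1000kcal = {name: impact * 1000 / kcal for name, (impact, kcal) in products.items()}

print("ranking per kg (best first):       ", sorted(per_kg, key=per_kg.get))
print("ranking per 1000 kcal (best first):", sorted(per_1000kcal, key=per_1000kcal.get))
```

With these placeholder numbers, the energy-dense ready meal looks worst per kilogram but beats the vegetable per 1000 kcal, which is exactly why the choice of functional unit should be fixed by clear criteria or reported for all alternatives.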

17 pages, 7162 KiB  
Article
Microbeam X-Ray Investigation of the Structural Transition from Circularly Banded to Ringless Dendritic Assemblies in Poly(Butylene Adipate) Through Dilution with Poly(Ethylene Oxide)
by Selvaraj Nagarajan, Chia-I Chang, I-Chuan Lin, Yu-Syuan Chen, Chean-Cheng Su, Li-Ting Lee and Eamor M. Woo
Polymers 2025, 17(15), 2040; https://doi.org/10.3390/polym17152040 - 26 Jul 2025
Viewed by 291
Abstract
In this study, growth mechanisms are proposed to understand how banded dendritic crystal aggregates in poly(1,4-butylene adipate) (PBA) transform into straight dendrites upon dilution with a large quantity of poly(ethylene oxide) (PEO) (25–90 wt.%). In growth packing, crystal plates are deformed in numerous ways, such as bending, scrolling, and twisting in self-assembly, into final aggregated morphologies of periodic bands or straight dendrites. Diluting PBA with a significant amount of PEO uncovers intricate periodic banded assemblies, facilitating better structural analysis. Both circularly banded and straight dendritic PBA aggregates have similar basic lamellar patterns. In straight dendritic PBA spherulites, crystal plates can twist from edge-on to flat-on, similar to those in ring-banded spherulites. Therefore, twists—whether continuous or discontinuous—are not limited to the conventional models proposed for classical periodic-banded spherulites. Thus, it would not be universally accurate to claim that the periodic circular bands observed in polymers or small-molecule compounds are caused by continuous lamellar helix twists. Straight dendrites, which do not exhibit optical bands, may also involve alternate crystal twists or scrolls during growth. Iridescence tests are used to compare the differences in crystal assemblies of straight dendrites vs. circularly banded PBA crystals. Full article
(This article belongs to the Section Polymer Physics and Theory)

20 pages, 1240 KiB  
Article
Modelling Insurance Claims During Financial Crises: A Systemic Approach
by Francis Agana and Eben Maré
J. Risk Financial Manag. 2025, 18(6), 307; https://doi.org/10.3390/jrfm18060307 - 5 Jun 2025
Viewed by 560
Abstract
In this paper, we introduce a generalised mutually exciting Hawkes process with random and independent jump intensities. This model provides a robust theoretical framework for modelling complex point processes and appropriately characterises the financial system, especially during periods of crisis. Based on this extended Hawkes process, we propose an insurance claim process and demonstrate that claim processes modelled as an aggregated process enable early detection of crises and inform optimal investment strategies in a financial system. Full article
(This article belongs to the Section Mathematics and Finance)
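As a rough, self-contained illustration of this kind of model, the sketch below simulates a two-line mutually exciting Hawkes process with random jump sizes via Ogata's thinning and sums the jump sizes as an aggregate claim amount. The exponential kernel, the Exp(1) mark law, and all parameter values are illustrative assumptions, not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def intensity(i, t, events, mu, alpha, beta):
    """Intensity of component i at time t, given all events up to (and at) t."""
    lam = mu[i]
    for j, ev_j in enumerate(events):
        for s, y in ev_j:
            if s <= t:
                lam += alpha[i][j] * y * np.exp(-beta * (t - s))
    return lam

def simulate_hawkes(mu, alpha, beta, T):
    """Ogata-thinning sketch of a mutually exciting Hawkes process whose jumps
    carry random sizes (marks); the marks double as claim amounts here."""
    d = len(mu)
    events = [[] for _ in range(d)]
    t = 0.0
    while True:
        lam_bar = sum(intensity(i, t, events, mu, alpha, beta) for i in range(d))
        t += rng.exponential(1.0 / lam_bar)   # candidate event time
        if t >= T:
            break
        lam = np.array([intensity(i, t, events, mu, alpha, beta) for i in range(d)])
        u = rng.uniform(0.0, lam_bar)
        if u < lam.sum():                     # accept and attribute the event
            i = int(np.searchsorted(np.cumsum(lam), u))
            events[i].append((t, rng.exponential(1.0)))   # random jump size
    return events

mu = [0.5, 0.3]                      # baseline claim arrival rates (assumed)
alpha = [[0.3, 0.2], [0.2, 0.3]]     # mutual excitation; spectral radius 0.5 < 1, so stable
events = simulate_hawkes(mu, alpha, beta=1.0, T=100.0)
aggregate = sum(y for ev in events for _, y in ev)
print("claims per line:", [len(ev) for ev in events], "| aggregate claim amount:", round(aggregate, 2))
```

A burst of mutually excited claims in a short window is the kind of clustering in the aggregated process that the authors use as an early signal of crisis.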

21 pages, 1276 KiB  
Article
Quantifying Truthfulness: A Probabilistic Framework for Atomic Claim-Based Misinformation Detection
by Fahim Sufi and Musleh Alsulami
Mathematics 2025, 13(11), 1778; https://doi.org/10.3390/math13111778 - 27 May 2025
Viewed by 830
Abstract
The increasing sophistication and volume of misinformation on digital platforms necessitate scalable, explainable, and semantically granular fact-checking systems. Existing approaches typically treat claims as indivisible units, overlooking internal contradictions and partial truths, thereby limiting their interpretability and trustworthiness. This paper addresses this gap by proposing a novel probabilistic framework that decomposes complex assertions into semantically atomic claims and computes their veracity through a structured evaluation of source credibility and evidence frequency. Each atomic unit is matched against a curated corpus of 11,928 cyber-related news entries using a binary alignment function, and its truthfulness is quantified via a composite score integrating both source reliability and support density. The framework introduces multiple aggregation strategies—arithmetic and geometric means—to construct claim-level veracity indices, offering both sensitivity and robustness. Empirical evaluation across eight cyber misinformation scenarios—encompassing over 40 atomic claims—demonstrates the system’s effectiveness. The model achieves a Mean Squared Error (MSE) of 0.037, Brier Score of 0.042, and a Spearman rank correlation of 0.88 against expert annotations. When thresholded for binary classification, the system records a Precision of 0.82, Recall of 0.79, and an F1-score of 0.805. The Expected Calibration Error (ECE) of 0.068 further validates the trustworthiness of the score distributions. These results affirm the framework’s ability to deliver interpretable, statistically reliable, and operationally scalable misinformation detection, with implications for automated journalism, governmental monitoring, and AI-based verification platforms. Full article
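As a sketch of how atomic decomposition and aggregation could work numerically, the snippet below assigns each atomic claim a composite score from source reliability and support frequency, then forms claim-level indices with arithmetic and geometric means. The scoring formula, the frequency cap, and all numbers are illustrative assumptions, not the paper's exact definitions.

```python
import math

def atomic_score(source_reliabilities, matches, corpus_size):
    """One plausible composite truthfulness score for an atomic claim:
    mean reliability of supporting sources, scaled by support frequency.
    This formula is an assumption for illustration only."""
    if not source_reliabilities:
        return 0.0
    credibility = sum(source_reliabilities) / len(source_reliabilities)
    support = matches / corpus_size                # evidence frequency in the corpus
    return credibility * min(1.0, 50 * support)    # cap the frequency factor at 1

# Hypothetical atomic claims extracted from one composite assertion.
atoms = [
    atomic_score([0.9, 0.8], matches=240, corpus_size=11_928),
    atomic_score([0.7], matches=35, corpus_size=11_928),
    atomic_score([0.95, 0.9, 0.85], matches=600, corpus_size=11_928),
]

arithmetic = sum(atoms) / len(atoms)
geometric = math.prod(atoms) ** (1.0 / len(atoms))   # harsher on weak atoms
print(f"claim-level veracity: arithmetic={arithmetic:.3f}, geometric={geometric:.3f}")
```

The geometric mean drops sharply when a single atomic claim is poorly supported, which is the sensitivity/robustness trade-off between the two aggregation strategies mentioned above.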

18 pages, 25518 KiB  
Article
Evaluating Agreement Between Global Satellite Data Products for Forest Monitoring in Madagascar
by Oladimeji Mudele, Marissa L. Childs, Jayden Personnat and Christopher D. Golden
Remote Sens. 2025, 17(9), 1482; https://doi.org/10.3390/rs17091482 - 22 Apr 2025
Viewed by 805
Abstract
Producing high-quality local land cover data can be cost-prohibitive, leaving gaps in reliable estimates of forest cover and loss for environmental policy and planning. Remote sensing data (RSD) offer accessible, globally consistent layers for forest mapping. However, being able to produce reliable RSD-based land cover products with high local fidelity requires ground truth data, which are scarce and cost-intensive to obtain in settings like Madagascar. Global land cover datasets that rely on models trained mostly in well-studied regions claim to alleviate the problem of label scarcity. However, studies have shown that these products often fail to fulfill this promise. Given downstream studies focused on Madagascar still rely on these global land cover products, in this study we compared seven global RSD products measuring forest extent and change in Madagascar to explore levels of similarity across different forest ecoregions over multiple years. We also conducted temporal correlation analysis by checking the correlation between forest area from the different products. We found that agreement levels among the different data products varied by forest type and region, with higher disagreement levels in drier forest ecosystems (dry and spiny forests) than in more humid ones (moist forests and mangroves). For instance, if high agreement is defined as a pixel being classified as a forest by all or all but one product in a year, the average percentage of high-agreement pixels between 2016 and 2020 is just about 8% in the spiny forest and 16% in the dry forest region. These findings underscore the limitations of global RSD products and the importance of localized data for accurate forest monitoring, building justification for efforts to develop a local forest cover product for Madagascar. Our temporal similarity analysis indicates that, although pixel-level maps may show low agreement, temporal aggregates tend to be highly correlated in most cases. We synthesized these results with existing applications of global RSDs in Madagascar to propose practical recommendations for end-users of these products in Madagascar. Full article
(This article belongs to the Special Issue Biomass Remote Sensing in Forest Landscapes II)
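To show how the quoted agreement metric can be computed, the sketch below counts "high-agreement" forest pixels (classified as forest by all, or all but one, of seven products) on a synthetic stack of binary maps; real inputs would be seven co-registered rasters for a given year and ecoregion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for seven binary forest/non-forest products over one region.
n_products, n_pixels = 7, 100_000
maps = rng.random((n_products, n_pixels)) < 0.3   # True = forest (placeholder maps)

forest_votes = maps.sum(axis=0)                   # number of products calling a pixel forest
high_agreement = forest_votes >= n_products - 1   # forest in all, or all but one, products
print("high-agreement forest pixels: {:.1%}".format(high_agreement.mean()))
```

Run per year and per ecoregion on real products, the same few lines yield the kind of per-ecoregion percentages (8% spiny forest vs. 16% dry forest) compared above.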

38 pages, 7211 KiB  
Article
Cross-Context Stress Detection: Evaluating Machine Learning Models on Heterogeneous Stress Scenarios Using EEG Signals
by Omneya Attallah, Mona Mamdouh and Ahmad Al-Kabbany
AI 2025, 6(4), 79; https://doi.org/10.3390/ai6040079 - 14 Apr 2025
Cited by 1 | Viewed by 1405
Abstract
Background/Objectives: This article addresses the challenge of stress detection across diverse contexts. Mental stress is a worldwide concern that substantially affects human health and productivity, rendering it a critical research challenge. Although numerous studies have investigated stress detection through machine learning (ML) techniques, there has been limited research on assessing ML models trained in one context and utilized in another. The objective of ML-based stress detection systems is to create models that generalize across various contexts. Methods: This study examines the generalizability of ML models employing EEG recordings from two stress-inducing contexts: mental arithmetic evaluation (MAE) and virtual reality (VR) gaming. We present a data collection workflow and publicly release a portion of the dataset. Furthermore, we evaluate classical ML models and their generalizability, offering insights into the influence of training data on model performance, data efficiency, and related expenses. EEG data were acquired leveraging MUSE-STM hardware during stressful MAE and VR gaming scenarios. The methodology entailed preprocessing EEG signals using wavelet denoising mother wavelets, assessing individual and aggregated sensor data, and employing three ML models—linear discriminant analysis (LDA), support vector machine (SVM), and K-nearest neighbors (KNN)—for classification purposes. Results: In Scenario 1, where MAE was employed for training and VR for testing, the TP10 electrode attained an average accuracy of 91.42% across all classifiers and participants, whereas the SVM classifier achieved the highest average accuracy of 95.76% across all participants. In Scenario 2, adopting VR data as the training data and MAE data as the testing data, the maximum average accuracy achieved was 88.05% with the combination of TP10, AF8, and TP9 electrodes across all classifiers and participants, whereas the LDA model attained the peak average accuracy of 90.27% among all participants. The optimal performance was achieved with Symlets 4 and Daubechies-2 for Scenarios 1 and 2, respectively. Conclusions: The results demonstrate that although ML models exhibit generalization capabilities across stressors, their performance is significantly influenced by the alignment between training and testing contexts, as evidenced by systematic cross-context evaluations using an 80/20 train–test split per participant and quantitative metrics (accuracy, precision, recall, and F1-score) averaged across participants. The observed variations in performance across stress scenarios, classifiers, and EEG sensors provide empirical support for this claim. Full article
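The sketch below mirrors the cross-context protocol described above (train on one stressor, test on the other) with scikit-learn's LDA, SVM, and KNN; the synthetic features and the amount of domain shift are stand-ins, not the MUSE-STM recordings.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def fake_session(n, shift):
    """Placeholder band-power features for 4 electrodes plus stress labels;
    'shift' imitates the change of context between MAE and VR gaming."""
    X = rng.normal(shift, 1.0, size=(n, 4))
    y = (X.mean(axis=1) + rng.normal(0, 0.5, n) > shift).astype(int)
    return X, y

X_mae, y_mae = fake_session(400, 0.0)   # training context: mental arithmetic
X_vr, y_vr = fake_session(400, 0.3)     # testing context: VR gaming (shifted)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC()),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_mae, y_mae)                              # train in one context...
    acc = accuracy_score(y_vr, clf.predict(X_vr))      # ...evaluate in the other
    print(f"{name}: cross-context accuracy = {acc:.2%}")
```

Any gap between this cross-context accuracy and a within-context baseline is the training/testing alignment effect the abstract highlights.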

19 pages, 306 KiB  
Article
Asymptotic Tail Moments of the Time Dependent Aggregate Risk Model
by Dechen Gao and Jiandong Ren
Mathematics 2025, 13(7), 1153; https://doi.org/10.3390/math13071153 - 31 Mar 2025
Viewed by 159
Abstract
In this paper, we study an extension of the classical compound Poisson risk model with a dependence structure between the inter-claim time and the subsequent claim size. Under a flexible dependence structure, and assuming that the claim amounts are heavy-tailed, we derive asymptotic tail moments for the aggregate claims. Numerical examples and simulation studies are provided to validate the results. Full article
(This article belongs to the Section D1: Probability and Statistics)
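A Monte Carlo sketch of such a model is given below: claim sizes are heavy-tailed (Pareto) and depend on the preceding inter-claim time, and conditional tail moments of the aggregate claims are estimated empirically. The particular dependence (a longer waiting time means a larger claim scale) and all parameters are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(1)

def aggregate_claims(n_paths=50_000, T=1.0, lam=5.0, tail_index=2.5):
    """Simulate aggregate claims S(T) when each claim size depends on the
    preceding inter-claim time (illustrative dependence, Pareto severities)."""
    totals = np.zeros(n_paths)
    for p in range(n_paths):
        t = s = 0.0
        while True:
            w = rng.exponential(1.0 / lam)                    # inter-claim time
            t += w
            if t > T:
                break
            s += (1.0 + w) * (rng.pareto(tail_index) + 1.0)   # heavy-tailed claim
        totals[p] = s
    return totals

S = aggregate_claims()
x = np.quantile(S, 0.99)                   # tail threshold
tail = S[S > x]
print("E[S | S > x] ~", round(tail.mean(), 2), "  E[S^2 | S > x] ~", round((tail**2).mean(), 2))
```

Comparing such empirical conditional tail moments against asymptotic formulas is how numerical examples of this kind validate the theory.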
31 pages, 4000 KiB  
Article
Assessment of Recombinant β-Propeller Phytase of the Bacillus Species Expressed Intracellularly in Yarrowia lipolytica
by Liliya G. Maloshenok, Yulia S. Panina, Sergey A. Bruskin, Victoria V. Zherdeva, Natalya N. Gessler, Alena V. Rozumiy, Egor V. Antonov, Yulia I. Deryabina and Elena P. Isakova
J. Fungi 2025, 11(3), 186; https://doi.org/10.3390/jof11030186 - 26 Feb 2025
Viewed by 666
Abstract
Phytases of the PhyD class, given their pH optimum (7.0–7.8) and high thermal stability, are candidates for use in the production of feed supplements. However, they currently have no practical application in feed production because, in contrast to the PhyA and PhyC classes, no suitable producers are available for their biotechnological production. Moreover, in most cases, technologies in which the enzyme is produced in secretory form are preferred for phytase production, although upon microencapsulation in yeast producer cells the thermal stability of phytase increases significantly compared with the extracellular form, which improves its compatibility with spray-drying technology. In this study, we assayed the intracellular heterologous expression of PhyD phytase from Bacillus species in Yarrowia lipolytica yeast cells. While this technology has been used successfully to synthesize PhyC phytase from Obesumbacterium proteus, PhyD phytase tends to aggregate upon intracellular accumulation. We therefore evaluated the prospects for producing encapsulated PhyD phytase with high enzymatic activity when it accumulates in the cell cytoplasm of the extremophile yeast Y. lipolytica, a highly effective platform for the production of recombinant proteins. Full article
(This article belongs to the Special Issue New Trends in Yeast Metabolic Engineering)

23 pages, 515 KiB  
Article
Copula-Based Risk Aggregation and the Significance of Reinsurance
by Alexandra Dias, Isaudin Ismail and Aihua Zhang
Risks 2025, 13(3), 44; https://doi.org/10.3390/risks13030044 - 26 Feb 2025
Viewed by 1269
Abstract
Insurance companies need to calculate solvency capital requirements in order to ensure that they can meet their future obligations to policyholders and beneficiaries. The solvency capital requirement is a risk management tool essential for addressing extreme catastrophic events that result in a high number of possibly interdependent claims. This paper studies the problem of aggregating the risks coming from several insurance business lines and analyses the effect of reinsurance on the level of risk. Our starting point is to use a hierarchical risk aggregation method which was initially based on two-dimensional elliptical copulas. We then propose the use of copulas from the Archimedean family and a mixture of different copulas. Our results show that a mixture of copulas can provide a better fit to the data than an individual copula and consequently avoid over- or underestimation of the capital requirement of an insurance company. We also investigate the significance of reinsurance in reducing the insurance company’s business risk and its effect on diversification. The results show that reinsurance does not always reduce the level of risk, but can also reduce the effect of diversification for insurance companies with multiple business lines. Full article
(This article belongs to the Special Issue Risk Analysis in Insurance and Pensions)
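The sketch below illustrates the aggregation idea on two business lines: losses are joined through a mixture of a survival Clayton copula (upper-tail dependence) and a Gaussian copula, and the 99.5% VaR of the aggregate is compared gross and net of a simple per-line excess-of-loss cover. The copula parameters, mixture weight, lognormal margins, and retentions are illustrative assumptions, not values calibrated in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000

def survival_clayton(theta, n):
    """Sample (u1, u2) from a survival Clayton copula via the gamma-frailty
    construction; the rotation puts the dependence in the joint upper tail."""
    v = rng.gamma(1.0 / theta, 1.0, size=n)
    e = rng.exponential(1.0, size=(n, 2))
    return 1.0 - (1.0 + e / v[:, None]) ** (-1.0 / theta)

def gaussian_copula(rho, n):
    """Sample (u1, u2) from a bivariate Gaussian copula with correlation rho."""
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
    return np.column_stack([norm.cdf(z1), norm.cdf(z2)])

# Mixture of copulas: with weight w use the survival Clayton, otherwise Gaussian.
w = 0.4
pick = rng.random(n) < w
u = np.where(pick[:, None], survival_clayton(2.0, n), gaussian_copula(0.5, n))

# Assumed lognormal margins for the two business lines.
losses = np.exp(norm.ppf(u) * np.array([0.8, 0.6]) + np.array([1.0, 1.5]))

gross = losses.sum(axis=1)
retention = np.array([15.0, 20.0])                    # per-line excess-of-loss cover
net = np.minimum(losses, retention).sum(axis=1)

print("gross 99.5% VaR:", round(np.quantile(gross, 0.995), 2),
      "| net 99.5% VaR:", round(np.quantile(net, 0.995), 2))
```

Swapping the mixture weight or replacing the Clayton component with another Archimedean family is the kind of comparison that supports the paper's finding that a copula mixture can fit better than a single copula.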

35 pages, 3825 KiB  
Article
An Intelligent Model for Parametric Cognitive Assessment of E-Learning-Based Students
by Muhammad Saqib Javed, Muhammad Aslam and Syed Khaldoon Khurshid
Information 2025, 16(2), 93; https://doi.org/10.3390/info16020093 - 26 Jan 2025
Cited by 1 | Viewed by 1465
Abstract
In an e-learning environment, question levels are based on Bloom’s Taxonomy (BT), which normally classifies a course’s learning objectives into distinct levels. According to the previous literature, keyword-based approaches for automatically assigning Bloom’s taxonomic categories lack accuracy and produce redundant keywords. Such assessments are particularly challenging for e-learning students, for whom text input is the only practical testing channel. Student assessments are limited to multiple-choice questions and lack an evaluation of students’ text-based input. This paper proposes a natural-language-processing-based intelligent deep-learning model that relies on parametric cognitive assessments. By applying class labels to students’ descriptive responses, the proposed approach helps classify a variety of questions mapped to BT levels. The first contribution of this work is a compiled dataset of assessment items from 300 students, who were tested on 20 questions at each level. Each level is calculated by combining the responses from all students, resulting in 6000 questions per cognitive level for a total of 36,000 records. The second contribution is the development of an intelligent model based on a recurrent neural network (RNN), which not only predicts Bloom’s question level but also learns it over further iterations. The students’ text-based answers are assessed to gauge performance using a refined question pool gathered through the RNN model. The student dataset is mapped and tested using the NLP model for further classification of the students’ cognitive levels. This assessment relates to the formulation of questions and the compilation of Episode 2 for assessment. The third contribution is the comparison and demonstration of improvements in learning using a parametric cognitive-based assessment in an episodic manner. Improved classification accuracy was attained by adding more processing layers to the iterative, RNN-based learning model to achieve the vital threshold difference. The RNN-based classification of the cognitive question pool achieved 98% accuracy. The resulting student assessments, based on performance, reached an accuracy of 92.16% and a precision of 92.36% at the aggregate level using the Random Forest classifier. We claim that our work serves as a starting point for effective student evaluation in interactive, e-learning-based environments and for handling other types of input, such as mathematical, graphical, and multimodal inputs. Full article
(This article belongs to the Special Issue Intelligent Agent and Multi-Agent System)

13 pages, 1953 KiB  
Article
Quantifying Uncertainty of Insurance Claims Based on Expert Judgments
by Budhi Handoko, Yeny Krista Franty and Fajar Indrayatna
Mathematics 2025, 13(2), 245; https://doi.org/10.3390/math13020245 - 13 Jan 2025
Cited by 2 | Viewed by 821
Abstract
In Bayesian statistics, prior specification has an important role in determining the quality of posterior estimates. We use expert judgments to quantify uncertain quantities and produce an appropriate prior distribution. The aim of this study was to quantify the uncertainty of life insurance claims, particularly the policy owner’s age, as it is the main factor determining the insurance premium. A one-day workshop was conducted to elicit expert judgments from those who have experience in accepting claims. Four experts from different insurance companies were involved in the workshop. The elicitation protocol used in this study was The Sheffield Elicitation Framework (SHELF), which produced four statistical distributions, one for each expert. A linear pooling method was used to aggregate the distributions into a consensus distribution among the experts. The consensus distribution suggested that the majority of policy owners will make a claim at the age of 54. Full article
(This article belongs to the Special Issue Bayesian Learning and Its Advanced Applications)
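A minimal sketch of the linear pooling step is shown below: each expert's elicited judgment is represented by a fitted distribution, the densities are averaged with equal weights, and the mode of the pooled density is read off. The expert parameters are placeholders, not the values elicited in the workshop.

```python
import numpy as np
from scipy.stats import norm

# Placeholder expert fits (mean, sd of the age at which a claim is made);
# SHELF would typically yield one fitted distribution per expert.
experts = [(52, 4), (55, 3), (53, 5), (56, 4)]
weights = np.full(len(experts), 1.0 / len(experts))   # equal-weight linear pool

ages = np.linspace(30, 80, 1001)
pooled = sum(w * norm.pdf(ages, m, s) for w, (m, s) in zip(weights, experts))
print("modal claim age under the pooled distribution:", ages[np.argmax(pooled)])
```

The pooled density is simply the weighted average of the expert densities, so its mode summarises the consensus in the same way the abstract's 54-year figure does.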

38 pages, 9348 KiB  
Article
Bayesian Hierarchical Risk Premium Modeling with Model Risk: Addressing Non-Differential Berkson Error
by Minkun Kim, Marija Bezbradica and Martin Crane
Appl. Sci. 2025, 15(1), 210; https://doi.org/10.3390/app15010210 - 29 Dec 2024
Viewed by 1390
Abstract
For general insurance pricing, aligning losses with accurate premiums is crucial for insurance companies’ competitiveness. Traditional actuarial models often face challenges like data heterogeneity and mismeasured covariates, leading to misspecification bias. This paper addresses these issues from a Bayesian perspective, exploring connections between Bayesian hierarchical modeling, partial pooling techniques, and the Gustafson correction method for mismeasured covariates. We focus on Non-Differential Berkson (NDB) mismeasurement and propose an approach that corrects such errors without relying on gold standard data. We discover the unique prior knowledge regarding the variance of the NDB errors, and utilize it to adjust the biased parameter estimates built upon the NDB covariate. Using simulated datasets developed with varying error rate scenarios, we demonstrate the superiority of Bayesian methods in correcting parameter estimates. However, our modeling process highlights the challenge in accurately identifying the variance of NDB errors. This emphasizes the need for a thorough sensitivity analysis of the relationship between our prior knowledge of NDB error variance and varying error rate scenarios. Full article
(This article belongs to the Special Issue Novel Applications of Machine Learning and Bayesian Optimization)

19 pages, 5781 KiB  
Article
UAV-Multispectral Based Maize Lodging Stress Assessment with Machine and Deep Learning Methods
by Minghu Zhao, Dashuai Wang, Qing Yan, Zhuolin Li and Xiaoguang Liu
Agriculture 2025, 15(1), 36; https://doi.org/10.3390/agriculture15010036 - 26 Dec 2024
Viewed by 1264
Abstract
Maize lodging is a prevalent stress that can significantly diminish corn yield and quality. Unmanned aerial vehicles (UAVs) remote sensing is a practical means to quickly obtain lodging information at field scale, such as area, severity, and distribution. However, existing studies primarily use machine learning (ML) methods to qualitatively analyze maize lodging (lodging and non-lodging) or estimate the maize lodging percentage, while there is less research using deep learning (DL) to quantitatively estimate maize lodging parameters (type, severity, and direction). This study aims to introduce advanced DL algorithms into the maize lodging classification task using UAV-multispectral images and investigate the advantages of DL compared with traditional ML methods. This study collected a UAV-multispectral dataset containing non-lodging maize and lodging maize with different lodging types, severities, and directions. Additionally, 22 vegetation indices (VIs) were extracted from multispectral data, followed by spatial aggregation and image cropping. Five ML classifiers and three DL models were trained to classify the maize lodging parameters. Finally, we compared the performance of ML and DL models in evaluating maize lodging parameters. The results indicate that the Random Forest (RF) model outperforms the other four ML algorithms, achieving an overall accuracy (OA) of 89.29% and a Kappa coefficient of 0.8852. However, the maize lodging classification performance of DL models is significantly better than that of ML methods. Specifically, Swin-T performs better than ResNet-50 and ConvNeXt-T, with an OA reaching 96.02% and a Kappa coefficient of 0.9574. This can be attributed to the fact that Swin-T can more effectively extract detailed information that accurately characterizes maize lodging traits from UAV-multispectral data. This study demonstrates that combining DL with UAV-multispectral data enables a more comprehensive understanding of maize lodging type, severity, and direction, which is essential for post-disaster rescue operations and agricultural insurance claims. Full article

31 pages, 2407 KiB  
Review
Role of Podoplanin (PDPN) in Advancing the Progression and Metastasis of Glioblastoma Multiforme (GBM)
by Bharti Sharma, George Agriantonis, Zahra Shafaee, Kate Twelker, Navin D. Bhatia, Zachary Kuschner, Monique Arnold, Aubrey Agcon, Jasmine Dave, Juan Mestre, Shalini Arora, Hima Ghanta and Jennifer Whittington
Cancers 2024, 16(23), 4051; https://doi.org/10.3390/cancers16234051 - 3 Dec 2024
Cited by 2 | Viewed by 2599
Abstract
Glioblastoma multiforme (GBM) is a malignant primary brain tumor categorized as a Grade 4 astrocytic glioma by the World Health Organization (WHO). Some of the established risk factors of GBM include inherited genetic syndromes, body mass index, alcohol consumption, use of non-steroidal anti-inflammatory drugs (NSAIDs), and therapeutic ionizing radiation. Vascular anomalies, including local and peripheral thrombosis, are common features of GBM. Podoplanin (PDPN), a ligand of the C-type lectin receptor (CLEC-2), promotes platelet activation, aggregation, venous thromboembolism (VTE), lymphatic vessel formation, and tumor metastasis in GBM patients. It is regulated by Prox1 and is expressed in developing and adult mammalian brains. It was initially identified on lymphatic endothelial cells (LECs) as the E11 antigen and on fibroblastic reticular cells (FRCs) of lymphoid organs and thymic epithelial cells as gp38. In recent research studies, its expression has been linked with prognosis in GBM. PDPN-expressing cancer cells are highly pernicious, with a mutant aptitude to form stem cells. Such cells, on colocalization to the surrounding tissues, transition from epithelial to mesenchymal cells, contributing to the malignant carcinogenesis of GBM. PDPN can be used as an independent prognostic factor in GBM, and this review provides strong preclinical and clinical evidence supporting these claims. Full article
(This article belongs to the Section Cancer Metastasis)

28 pages, 13144 KiB  
Article
Complexity and Variation in Infectious Disease Birth Cohorts: Findings from HIV+ Medicare and Medicaid Beneficiaries, 1999–2020
by Nick Williams
Entropy 2024, 26(11), 970; https://doi.org/10.3390/e26110970 - 12 Nov 2024
Viewed by 936
Abstract
The impact of uncertainty in information systems is difficult to assess, especially when drawing conclusions from human observation records. In this study, we investigate survival variation in a population experiencing infectious disease as a proxy to investigate uncertainty problems. Using Centers for Medicare and Medicaid Services claims, we discovered 1,543,041 HIV+ persons, 363,425 of whom were observed dying from all-cause mortality. Once aggregated by HIV status, year of birth and year of death, Age-Period-Cohort disambiguation and regression models were constructed to produce explanations of variance in survival. We used Age-Period-Cohort as an alternative method to work around under-observed features of uncertainty like infection transmission, receiver host dynamics or comorbidity noise impacting survival variation. We detected ages that have a consistent, disproportionate share of deaths independent of study year or year of birth. Variation in seasonality of mortality appeared stable in regression models; in turn, HIV cases in the United States do not have a survival gain when uncertainty is uncontrolled for. Given the information complexity issues under observed exposure and transmission, studies of infectious diseases should either include robust decedent cases, observe transmission physics or avoid drawing conclusions about survival from human observation records. Full article
(This article belongs to the Special Issue Stability and Flexibility in Dynamic Systems: Novel Research Pathways)
