Recent advances in large language models (LLMs) have enabled the effective representation of customer behaviors, including purchases, repairs, and consultations. These LLM-based customer representation models can be used to predict a customer's future behavior or to cluster customers whose latent-vector representations are similar. Since these representation technologies depend on data, this paper examines whether training a recommendation model (BERT4Rec) from scratch or fine-tuning a pre-trained LLM (ELECTRA) is more effective for our customer data. To address this, a three-step approach is conducted: (1) encoding a sequence of customer behaviors as textual inputs for LLM-based representation learning, (2) extracting customer representations as latent vectors by training or fine-tuning representation models on a dataset of 14 million customers, and (3) training classifiers to predict purchase outcomes for eight products. Our focus is on comparing two primary approaches in step (2): training BERT4Rec from scratch versus fine-tuning pre-trained ELECTRA. The average AUC and F1-scores of the classifiers across the eight products reveal gaps between the two methods of only 0.012 in AUC and 0.007 in F1-score. On the other hand, the fine-tuned ELECTRA achieves a 0.27 improvement in the top 10% lift for targeted marketing strategies. This result is particularly meaningful given that buyers of the products constitute only about 0.5% of the entire dataset. Beyond the three-step approach, we interpret the latent space in two dimensions and examine attention shifts in the fine-tuned ELECTRA. Furthermore, we compare its efficiency advantages against fine-tuned LLaMA2. These findings provide practical insights for optimizing LLM-based representation models in industrial applications.
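The top 10% lift metric used in this abstract can be made concrete with a short sketch. As an illustrative example (the function name and decile handling are our own, not from the article), lift is the purchase rate among the top-scored 10% of customers divided by the overall purchase rate:

```python
def top_decile_lift(scores, labels, frac=0.10):
    """Lift of the top `frac` of customers ranked by predicted score.

    scores: predicted purchase probabilities; labels: 1 = bought, 0 = not.
    """
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    k = max(1, int(len(ranked) * frac))        # size of the top decile
    top_rate = sum(label for _, label in ranked[:k]) / k
    base_rate = sum(labels) / len(labels)      # ~0.5% in the paper's data
    return top_rate / base_rate
```

With a base rate of about 0.5%, a lift of 10 would mean the top-scored decile purchases at roughly 5%; the reported 0.27 improvement is a difference between two such lift values.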
Full article
In this study, first-principles calculations were employed to analyze the effect of Si and O doping on the electronic structure of the Fe/Zn interface, aiming to reveal the mechanism underlying the degradation of its interfacial stability. Through detailed analysis of bond population, charge density, differential charge density, as well as total density of states (TDOS) and partial density of states (PDOS), the following findings were obtained: After Si and O doping, the charge distribution at the Fe/Zn interface exhibits local aggregation or sparsity. The differential charge density shows a redistribution of charges, and the charge density in the Fe-Zn bonding region changes. In terms of density of states, the contribution of orbitals related to Fe and Zn atoms to the density of states near the Fermi level is altered. The hybridization between the orbitals of Si/O atoms and those of Fe/Zn atoms affects the electronic interaction. Comprehensive analysis indicates that the degradation of Fe/Zn interfacial stability caused by Si and O doping is mainly attributed to the following factors: it modifies the chemical bonding, induces lattice distortion which generates internal stress, enhances the inhomogeneity of charge distribution, and weakens the bonding force between Fe and Zn atoms. This research provides a theoretical basis for the performance regulation of related material systems.
Full article
by Jose Manuel Sanchez-Manas, Sonia Perales, Gonzalo Martinez-Navajas, Jorge Ceron-Hernandez, Cristina M. Lopez, Angela Peralbo-Molina, Juan R. Delgado, Joaquina Martinez-Galan, Veronica Ramos-Mejia, Eduardo Chicano-Galvez, Maria Hernandez-Valladares, Francisco M. Ortuno, Carolina Torres and Pedro J. Real
Biomolecules 2025, 15(12), 1698; https://doi.org/10.3390/biom15121698 - 5 Dec 2025
Platelets and their extracellular vesicles (EVs) have emerged as promising liquid biopsy biosources for cancer detection and monitoring. The megakaryoblastic MEG-01 cell line offers a controlled system for generating platelet-like particles (PLPs) and EVs through valproic-acid-induced differentiation. Here, we performed comprehensive characterization and proteomic validation of MEG-01-derived populations, native human platelets, and their EVs using nanoparticle tracking analysis, transmission electron microscopy, imaging flow cytometry, and quantitative proteomics. MEG-01 megakaryocytic differentiation is characterized by polylobulated nuclei, proplatelet formation, and elevated CD41/CD42a expression. PLPs predominantly exhibited an activated-like phenotype (CD62P+, degranulated morphology), while microvesicles (100–500 nm) and exosomes (50–250 nm) displayed size distributions and phenotypic markers consistent with native platelet-derived EVs. Proteomics identified substantial core proteomes shared across fractions and fraction-specific patterns consistent with selective cargo partitioning during EV biogenesis. Functional enrichment indicated that MEG-01-derived vesicles preserve key hemostatic, cytoskeletal, and immune pathways commonly associated with platelet EV biology. Ingenuity Pathway Analysis showed that PLPs exhibit proliferative transcriptional programs (elevated MYC/RB1/TEAD1, reduced GATA1), while plasma exosomes display minimal differential pathway activation compared to MEG-01 exosomes. Overall, these findings suggest that MEG-01-derived EVs approximate certain aspects of megakaryocyte-lineage exosomes and activated platelet-like states, although they do not fully replicate native platelet biology. Notably, plasma exosomes show strong proteomic convergence with MEG-01 exosomes, whereas platelet exosomes retain distinct activation-related features.
Full article
Background/Objectives: Current genomics research equates the genome with DNA sequence and treats the epigenome as a regulatory layer. This DNA-centric view obscures the fact that genomic identity arises through epigenomic processes. The objective of this article is to reinterpret published findings into a new theoretical framework: the EpG2 (Epigenome–Genome) system. Methods: This work develops a new conceptual framework by integrating published evidence from diverse domains—including enhancer biology, overlapping genomic functions, alternative coding frames, zygotic genome activation, and disease-associated loci—and reinterpreting these findings through the lens of epigenomic processes. Results: Evidence shows that enhancers emerge only through the interplay of sequence, transcription factors, and chromatin environment. At fertilization, paternal and maternal genomes remain separate, and a new genome emerges through coordinated epigenomic reprogramming or zygote genome emergence (ZGE). DNA sequence risk variants illustrate the concept of contextual risk alleles, whose effects shift across tissues and developmental stages as epigenomic contexts change. Conclusions: The EpG2 system reframes the genome as a processual, emergent entity generated and regulated by epigenomic processes, offering a paradigm for understanding genomic variation beyond DNA sequence.
Full article
The fast evolution of machine learning methods in both electronic and biomedical engineering continues to transform how data is interpreted, validated, and translated into actionable support systems [...]
Full article
The photocatalytic efficiency of graphitic carbon nitride (g-C3N4) for the decomposition of aqueous rhodamine B (RhB) was investigated. To examine the combined effects of sonication and electron beam (EB) irradiation on the photocatalytic efficiency, g-C3N4 was sonicated in 1,3-butanediol and subsequently irradiated with EB. The photocatalytic efficiency was improved by the low-dose EB irradiation due to the generation of structural defects that acted as active reaction sites. Sonication before EB irradiation induced mild exfoliation and further improved photocatalytic efficiency. Prolonged sonication enhanced this improvement, primarily by increasing the specific surface area of g-C3N4. The positive effect of sonication was more remarkable for g-C3N4 irradiated with low-dose EB than for g-C3N4 irradiated with higher-dose EB. The photocatalytic RhB decomposition rate measured for g-C3N4 sonicated for 480 min and irradiated at 200 kGy was approximately 6.8 times higher than that measured for the untreated g-C3N4. The difference between the sonication effects can be ascribed to the electrostatic interactions and the resultant agglomeration of the g-C3N4 particles after EB irradiation. High-dose EB irradiation caused electrification followed by coarsening of the particles, whereas low-dose EB irradiation did not produce these results and led to positive effects due to the EB-induced g-C3N4 structural alteration.
Full article
by Ștefan Moroșanu, Maria Cristina Man, Nicola Mancini, Carlos Hervás-Gómez, Emilia Florina Grosu, Mihai Moroșanu, Horațiu Ghejan, Mircea Boncuț, Dana Ioana Cristea and Vlad Teodor Grosu
Background: The consequences of video games have been a hotly debated topic in recent decades. While the media tend to focus on and publicize the alleged negative effects of video games, the empirical literature continues to document the benefits of playing certain types of video games. Objective: This paper highlights the utility of virtual reality technology for improving reaction time. Methods: A total of 32 Romanian students, aged 17 to 19, were recruited from a high school in Cluj-Napoca. The experimental group took part in a virtual reality-based intervention, while the control group only attended the standard physical education classes included in the school curriculum. To assess simple and complex reaction time, we used the Deary–Liewald reaction time test. Descriptive statistics and t-tests were used to compare participant characteristics between the two groups. The significance level for all statistical analyses was set at p < 0.05. Results: Subjects in the experimental group (M = 382.75, SD = 21.30) showed statistically significant improvements at final testing compared to the control group (M = 396.88, SD = 25.37) in the complex reaction time Deary–Liewald test (t = −1.70, p = 0.04, d = −0.60). Conclusions: As technology continues to advance, new possibilities have emerged for reducing reaction time through cutting-edge tools like virtual reality. Our study shows that a well-structured 6-month virtual reality program can improve simple and complex reaction time in high school students.
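The effect size d = −0.60 reported here is consistent with Cohen's d computed from the two group means and standard deviations, assuming the 32 students were split evenly (16 per group) — an assumption on our part, since the abstract does not state group sizes. A minimal sketch:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d for two independent groups using the pooled SD."""
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled
```

Plugging in the reported reaction times (experimental M = 382.75, SD = 21.30 vs. control M = 396.88, SD = 25.37) with the assumed 16/16 split reproduces d ≈ −0.60.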
Full article
With advancements in solar-induced fluorescence (SIF) observation technology and the evolution of vegetation radiative transfer models, SIF signals can now be more effectively interpreted and leveraged from a mechanistic perspective. This, in turn, facilitates a deeper understanding of the mechanistic link between SIF and photosynthesis. Considering the impact of water stress on terrestrial ecosystems, this paper simulated SIF and gross primary productivity (GPP) values using the STEMMUS-SCOPE model at half-hour scales from 2017 to 2023 at the Daman site. The simulation results were compared and validated against flux tower observations and SCOPE model outputs. Taking advantage of irrigation events in the semi-arid irrigated farmland, we assessed the accuracy of STEMMUS-SCOPE in simulating SIF and GPP under drought stress, as well as its capability to quantitatively analyze the impacts of water stress on SIF and GPP. The results show that the accuracy of the SIF and GPP values simulated by the STEMMUS-SCOPE model is higher than that of the SCOPE model. The averaged R2 and RMSE between the SIF simulated by the STEMMUS-SCOPE model and the observed SIF values are 0.66 and 0.29 mW m−2 nm−1, and the averaged R2 and RMSE between the GPP simulated by the STEMMUS-SCOPE model and the observed GPP values from 2017 to 2023 are 0.88 and 4.93 µmol CO2 m−2 s−1, respectively. In particular, under drought conditions, the R2 between the simulated and observed SIF values is 0.84, and the R2 between the simulated and observed GPP values is 0.96. By further combining soil moisture content (SMC) and canopy conductance (Gs) analyses, we found that the response of the STEMMUS-SCOPE simulations under water stress was consistent with previous findings on the impacts of water deficits, thereby confirming the model’s reliability for drought conditions.
Under drought stress, the decline in fluorescence emission efficiency (ΦF) with decreasing Gs and SMC was smaller than that of the light use efficiency (LUE). Therefore, the STEMMUS-SCOPE model is promising for investigating the SIF–GPP relationship under drought stress.
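The R2 and RMSE agreement metrics quoted above are standard. A minimal sketch (the function name is ours; here R2 is the coefficient of determination, though some remote-sensing studies report the squared Pearson correlation instead):

```python
import math

def r2_rmse(observed, simulated):
    """Coefficient of determination (R^2) and root-mean-square error."""
    n = len(observed)
    mean_obs = sum(observed) / n
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))  # residual sum of squares
    sst = sum((o - mean_obs) ** 2 for o in observed)              # total sum of squares
    return 1.0 - sse / sst, math.sqrt(sse / n)
```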
Full article
Fine-bubble aeration is a core process in wastewater treatment plants (WWTPs). However, the physical mechanisms linking bubble plume hydrodynamics to oxygen transfer performance remain insufficiently quantified under configurations representative of full-scale installations. This study presents a local multi-sensor experimental characterization of a multiple bubble plume system using a 4 × 4 array of commercial membrane diffusers in a pilot-scale aeration tank (2 m3), emulating WWTP diffuser density and geometry. Airflow rate was varied to analyze its effects on mixing and oxygen transfer efficiency. The experimental methodology combines three complementary measurement approaches. Oxygen transfer performance is quantified using a dissolved oxygen probe. Liquid-phase velocity fields are then mapped using Acoustic Doppler Velocimetry (ADV). Finally, local two-phase measurements are obtained using dual-tip Conductivity Probe (CP) arrays, which provide bubble size, bubble velocity, void fraction, and Interfacial Area Concentration (IAC). Based on these observations, a zonal hydrodynamic model is proposed to describe plume interaction, wall-driven recirculation, and the formation of a collective plume core at higher airflows. Quantitatively, the results reveal a 29% reduction in Standard Oxygen Transfer Efficiency (SOTE) between 10 and 40 m3/h, driven by a 41% increase in bubble size and an 18% rise in bubble velocity. Bubble chord length also increased with height, by 33%, 19%, and 15% over 0.8 m for 10, 20, and 40 m3/h, respectively. These trends indicate that increasing airflow enhances turbulent mixing but simultaneously enlarges bubbles and accelerates their ascent, thereby reducing residence time and negatively affecting oxygen transfer. 
Overall, the validated multiphase datasets and mechanistic insights demonstrate the dominant role of diffuser interaction in dense layouts, supporting improved parameterization and experimental benchmarking of fine-bubble aeration systems in WWTPs.
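Oxygen transfer performance from a dissolved oxygen probe is conventionally quantified by fitting the clean-water reaeration model C(t) = C* − (C* − C0)·exp(−kLa·t). A hedged sketch of the log-linear fit (the function name is ours, and we assume the saturation concentration C* is known; the study's exact dosimetry of SOTE from kLa is not reproduced here):

```python
import math

def fit_kla(times, do_conc, c_star):
    """Estimate kLa from a DO reaeration curve by least squares on the
    linearized model: ln(C* - C(t)) = ln(C* - C0) - kLa * t (slope = -kLa)."""
    ys = [math.log(c_star - c) for c in do_conc]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(ys) / n
    slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ys))
    slope /= sum((t - t_mean) ** 2 for t in times)
    return -slope
```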
Full article
by Charlotte C. I. Schneider, Belinda J. de Wit-van der Veen, Sanne M. A. Jansen, Kenneth F. M. Hergaarden, Margot E. T. Tesselaar, Niels F. M. Kok, Larissa W. van Golen, Arthur J. A. T. Braat, Regina G. H. Beets-Tan, Tarik R. Baetens and Elisabeth G. Klompenhouwer
Cancers 2025, 17(24), 3889; https://doi.org/10.3390/cancers17243889 - 5 Dec 2025
Background/Objectives: Over the past few years, high-dose radioembolization (≥150 Gy) has become widely adopted for the treatment of primary liver cancer, while evidence for its application in hepatic metastases is still limited. The aim of this study was to evaluate the safety and efficacy of high-dose transarterial radioembolization (TARE) in patients with hepatic metastases using resin Yttrium-90 (90Y) microspheres. Methods: In this retrospective analysis, patients who were treated with high-dose TARE for hepatic metastases with 90Y resin microspheres between May 2019 and April 2025 were included. The primary outcomes were treatment efficacy and toxicity assessed according to the National Cancer Institute Common Terminology Criteria for Adverse Events v5.0. Treatment efficacy was evaluated based on radiological response according to Response Evaluation Criteria in Solid Tumors version 1.1, time to progression and overall survival (OS). Secondary outcomes included 90Y PET/CT post-treatment voxel-based local deposition model dosimetry and its relation to response. Results: A total of 15 patients were included, with hepatic metastases originating from colorectal cancer (n = 11, 73.3%), neuroendocrine tumor (n = 3, 20%) and breast cancer (n = 1, 6.7%). Seven patients (46.7%) had undergone one or more prior locoregional liver treatments and 13 (86.7%) patients had prior systemic therapy. The median mean tumor absorbed dose was 160.7 Gy (IQR 127.6–245.0 Gy), and the median normal liver parenchyma dose was 40.3 Gy (IQR 21.7–52.3 Gy). Disease control was achieved in all patients, with partial response in 10 patients (66.7%) and stable disease in 5 patients (33.3%) after 3 months. The median OS was 26.5 months (95% CI 24.5 months to no estimate). Two patients (13.3%) experienced grade 3 laboratory toxicity. No grade 4 or 5 toxicities were observed.
Conclusions: High-dose TARE with 90Y resin microspheres resulted in a high disease control rate and demonstrated a favorable safety profile, even in this heavily pretreated cohort.
Full article
Generative adversarial networks (GANs) typically require large datasets for effective training, which poses challenges for volumetric medical imaging tasks where data are scarce. This study addresses this limitation by extending adaptive discriminator augmentation (ADA) for three-dimensional (3D) StyleGAN2 to improve generative performance on limited volumetric data. The proposed 3D StyleGAN2-ADA redefines all 2D operations for volumetric processing and incorporates the full set of original augmentation techniques. Experiments are conducted on the NoduleMNIST3D dataset of lung CT scans containing 590 voxel-based samples across two classes. Two augmentation pipelines are evaluated: one using color-based transformations and another employing a comprehensive set of 3D augmentations including geometric, filtering, and corruption augmentations. Performance is compared against the same network trained on the same dataset without augmentation, assessing generation quality with Kernel Inception Distance (KID) and 3D Structural Similarity Index Measure (SSIM). Results show that volumetric ADA substantially improves training stability and reduces the risk of mode collapse, even under severe data constraints. A strong augmentation strategy improves the realism of generated 3D samples and better preserves anatomical structures relative to samples generated without data augmentation. These findings demonstrate that adaptive 3D augmentations effectively enable high-quality synthetic medical image generation from extremely limited volumetric datasets. The source code and the weights of the networks are available in the GitHub repository.
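The ADA mechanism adapts the augmentation probability p from a discriminator-overfitting signal: in the original StyleGAN2-ADA formulation, the heuristic r_t = E[sign(D(real))] is driven toward a target by nudging p after each interval. A schematic sketch of that feedback rule (the target value, step size, and function name here are illustrative assumptions, not this study's exact settings):

```python
def ada_step(p, d_real_outputs, target=0.6, adjust=0.01):
    """One ADA update: raise the augmentation probability p when the
    discriminator is confidently correct on real images (an overfitting
    signal), lower it otherwise; p is clamped to [0, 1]."""
    r_t = sum(1.0 if d > 0 else -1.0 for d in d_real_outputs) / len(d_real_outputs)
    p += adjust if r_t > target else -adjust
    return min(max(p, 0.0), 1.0)
```

In training, p would then modulate how often each augmentation (geometric, filtering, corruption) is applied to discriminator inputs.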
Full article
This study presents a systematic literature review of critical risk factors affecting the time and cost performance of highway construction projects. Drawing from 83 peer-reviewed studies across multiple geographic regions, the paper identifies recurrent drivers of project delay and cost overrun in highway construction. The most frequently reported risks include (1) financial constraints; (2) political and regulatory issues; (3) land-acquisition and right-of-way delays; (4) design and scope changes; (5) utilities relocation/conflicts; (6) materials and equipment shortages; (7) contractor-related issues; (8) planning and scheduling weaknesses; (9) labour and personnel issues; and (10) weather conditions. These risk factors collectively highlight the importance of robust planning, proactive stakeholder coordination, and the integration of risk-informed decision-making tools. The findings emphasize the need for targeted risk mitigation during early project stages and provide a foundation for refining risk assessment frameworks and future research directions in transport infrastructure development.
Full article
Increasing atmospheric carbon dioxide (CO2) emissions are widely recognized as the primary driving force behind global warming. Considering environmental concerns and the depletion of fossil fuel reserves, the use of electric vehicles (EVs) in transportation has emerged as one of the most promising technological alternatives to conventional gasoline-powered cars. Compared to their gasoline counterparts, EVs significantly reduce the costs associated with air pollution and mitigate adverse effects on human health. Owing to these characteristics, EVs have become one of the key components of the transition toward a sustainable future, while also steering the transformation of the global automotive industry. Many countries aim to achieve their greenhouse gas reduction targets by promoting the adoption of EVs. This study empirically examines the effects of electric vehicles on CO2 emissions in 15 high-income countries during the period 2010–2023, highlighting both short- and long-term environmental impacts. The analysis also considers economic and socio-demographic variables such as gross domestic product (GDP), urbanization, and fossil fuel consumption. The findings indicate that the share of EVs significantly reduces CO2 emissions, whereas EV sales have a short-term increasing effect.
Full article
Problematic smartphone use (PSU) is increasingly recognized as a behavioral concern among university students, with consequences for well-being, risky behaviors, and academic outcomes. However, evidence from Greece remains limited. This study assessed the prevalence and correlates of PSU among students at the University of Piraeus and interpreted findings through Griffiths’ components model of addiction. A cross-sectional survey was conducted between March and June 2023 with 1743 participants, who provided socio-demographic, lifestyle, and health information and completed the Smartphone Addiction Scale–Short Version (SAS-SV). Nearly half of the students (49.2%) exceeded the proposed SAS-SV thresholds for PSU (50.5% men; 48% women). Regression analysis showed that alcohol consumption (p < 0.001), weekly screen time (p < 0.001), younger age (p < 0.001), female sex (p < 0.001), household size (p = 0.033), and anxiety/depression (p = 0.019) were significant predictors of higher SAS-SV scores, while smoking, BMI, exercise, and academic performance were not associated. For the independent statistical tests, the Benjamini–Hochberg correction was applied to control the false discovery rate. Group comparisons confirmed greater alcohol use (p < 0.001), screen exposure (p < 0.001), and anxiety/depression (p = 0.004) among PSU students. Item-level responses reflected components of tolerance, salience, withdrawal, and conflict. These findings place Greek students at the higher end of international prevalence estimates and highlight the importance of integrating digital well-being initiatives within university student health services.
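The Benjamini–Hochberg correction applied above is a step-up procedure: sort the m p-values, find the largest rank k with p_(k) ≤ (k/m)·α, and reject all hypotheses up to that rank. A minimal sketch (the function name is ours):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return a reject/keep flag per p-value at false discovery rate alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, ascending p
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank                                  # largest passing rank
    rejected = set(order[:k])
    return [i in rejected for i in range(m)]
```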
Full article
Sweet potato (Ipomoea batatas) is a globally important crop and one of a growing number of plants recognized as naturally transgenic, harboring Agrobacterium-derived T-DNA genes whose functions remain largely uncharacterized. In this proof-of-concept study, we applied CRISPR/Cas9 technology to generate targeted knockouts of the Ib-rolB/C and Ib-rolD-like genes located within the sweet potato cellular T-DNA2 (IbT-DNA2) region. Mutations were introduced into sweet potato callus cultures using an optimized genome editing protocol, with most edits consisting of single-nucleotide insertions. Knockout of Ib-rolB/C did not affect callus growth but significantly reduced levels of chlorogenic acid derivatives. Validation in planta using transient expression in I. batatas leaves confirmed the suppressive effect of Ib-rolB/C disruption on polyphenol content. In contrast, Ib-rolD-like knockout lines showed reduced biomass accumulation and downregulation of cell cycle–related genes, but did not display significant changes in metabolite content in either callus cultures or leaf tissues. These findings suggest that Ib-rolB/C and Ib-rolD-like may differentially contribute to growth and secondary metabolism in sweet potato.
Full article
Background/Objectives: Genetic abnormalities are critical for the diagnosis, prognosis, and therapeutic management of myelodysplastic syndromes (MDS). This study aims to evaluate the clinical utility of Multiplex Ligation-dependent Probe Amplification (MLPA) as a rapid and cost-effective method, determining its place alongside Next-Generation Sequencing (NGS) for the initial genetic assessment of patients with MDS. Methods: Bone marrow samples from 68 patients newly diagnosed with MDS were analyzed. Genomic DNA was investigated using the SALSA MLPA P414-C1 MDS probe mix to detect common copy number variations (CNVs). Results: MLPA detected genetic variants in 25 patients (36.8%). The most common finding was a single chromosomal abnormality (26.5%). Multiple pathological findings were observed in only 1.5% of patients, and a JAK2 mutation was observed in 8.8% of the cohort. However, the presence of these aberrations did not show a statistically significant association with overall survival (OS) in the cohort. Patient sex was identified as the only variable that was associated with a marginal level of statistical significance regarding OS, indicating a worse prognosis for males. Conclusions: MLPA is a valuable, rapid, and cost-effective tool for initial genetic screening in low-resource settings. This was highlighted by our finding that sex was the sole significant prognostic factor, while the MLPA-detected variants were not found to be significant. The findings suggest that comprehensive risk stratification aligned with international standards requires more advanced molecular technologies.
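MLPA copy-number calls are conventionally made from probe signals normalized against reference samples; widely used rule-of-thumb cutoffs (assumptions here, not values stated in this study) flag normalized ratios below about 0.7 as deletions and above about 1.3 as duplications:

```python
def mlpa_call(ratio, del_cut=0.7, dup_cut=1.3):
    """Classify a normalized MLPA probe ratio as a copy-number call.

    Cutoffs are common conventions (hypothetical here, not from the study).
    """
    if ratio < del_cut:
        return "deletion"
    if ratio > dup_cut:
        return "duplication"
    return "normal"
```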
Full article
Introduction: Snakebite envenomation is recognized by the World Health Organization (WHO) as a neglected tropical disease. In Colombia, snakebites are frequent due to the diversity of ecosystems and snake species, and children represent a particularly vulnerable population. Objective: This study aimed to characterize the epidemiological and clinical behavior of snakebite envenomation in the pediatric population and to identify factors associated with its severity through the application of a multinomial logistic regression model. Methods: An exploratory analysis was conducted on 170 pediatric patients reported to the Public Health Surveillance System (SIVIGILA) and treated at San Jerónimo Hospital in Montería (HSJ). Sociodemographic and clinical data were collected, and a multinomial logistic regression model was applied to identify risk factors associated with the severity of envenomation. Results: Most cases occurred in children over 12 years of age (51.8%), and males were the most affected. The lower limbs were the most common site of the bite (87.6%). Bothrops was the main genus responsible. Non-medical practices, such as herbal poultices and potions, were reported in 28.2% of cases. Clinically, moderate envenomation was the most frequent (48.2%), with edema (88%) and pain (92%) as the main local manifestations, and nausea (36%) and vomiting (32%) as systemic manifestations. Cellulitis was the most common complication (24%). Student’s t-test showed a significant difference between complications and hospital stays lasting 3 to 7 days. The multinomial logistic regression explained 75% of the severity variability and showed that prior non-medical practices increased the risk of severe cases. Conclusions: Snakebite envenomation in children remains an important public health problem. The statistical model showed that non-medical practices are associated with a higher degree of severity.
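To make the multinomial logistic regression concrete, the sketch below computes class probabilities as a softmax over linear scores, one score per severity level against a "mild" reference class. The features and coefficients are invented for illustration only (they are not the fitted model from this study); the point is that a large positive coefficient on the non-medical-practice feature raises the predicted probability of the severe class.

```python
import math

def multinomial_probs(x, coefs, intercepts):
    """P(class k | x) = softmax_k(b_k + w_k . x)."""
    scores = [b + sum(w * xi for w, xi in zip(ws, x))
              for ws, b in zip(coefs, intercepts)]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Features: [age_over_12, lower_limb_bite, non_medical_practice]
# Coefficients are hypothetical; class order: mild (reference), moderate, severe.
coefs = [
    [0.0, 0.0, 0.0],
    [0.3, 0.5, 0.4],
    [0.2, 0.4, 1.5],   # non-medical practices weigh heavily toward "severe"
]
intercepts = [0.0, 0.2, -1.0]

with_practice = multinomial_probs([1, 1, 1], coefs, intercepts)
without_practice = multinomial_probs([1, 1, 0], coefs, intercepts)
```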
Full article
Among the 15 beamlines in the first phase of the High-Energy Photon Source (HEPS) in China, the maximum peak data generation volume can reach 1 PB per day, with the maximum peak data generation rate reaching 3.2 Tb/s. This poses significant challenges to the underlying network system. To meet the storage, computing, and analysis needs of HEPS scientific data, this paper presents the design of a high-performance, scalable network architecture based on RoCE (RDMA over Converged Ethernet). Test results demonstrate that the RoCE-based HEPS data center network system achieves high bandwidth and ultra-low latency, stably maintains reliable transmission performance during the interaction of scientific data storage, computing, and analysis, and exhibits excellent scalability to adapt to the future expansion of HEPS beamlines.
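The headline figures can be sanity-checked: 1 PB per day corresponds to roughly 93 Gb/s sustained, so the quoted 3.2 Tb/s peak is about 35 times the daily average, which is the kind of burstiness that motivates a lossless RDMA fabric. A quick back-of-the-envelope check (assuming decimal petabytes, i.e. 10^15 bytes):

```python
PB_BITS = 8e15            # 1 PB = 10^15 bytes = 8e15 bits (decimal convention)
SECONDS_PER_DAY = 86_400

avg_rate_gbps = PB_BITS / SECONDS_PER_DAY / 1e9   # sustained Gb/s over one day
peak_rate_gbps = 3.2e12 / 1e9                     # 3.2 Tb/s peak, in Gb/s
peak_to_avg = peak_rate_gbps / avg_rate_gbps      # burstiness ratio
```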
Full article
This study confronts the significant challenges inherent in Traffic State Estimation (TSE) for rural arterial networks, where sparse sensor coverage and complex, dynamic traffic flows complicate effective management and safety assurance. Traditional TSE methodologies, often dependent on single-source data streams, fail to accurately model the intricate spatiotemporal dependencies present in such environments. This fundamental limitation precipitates critical safety hazards, including pervasive overspeeding and dangerous queue spillback phenomena at intersections. To address these deficiencies, we introduce a novel hybrid intelligence framework that synergistically combines a Graph Attention Temporal Convolutional Network (GAT-TCN) with advanced Kalman Filter variants, specifically the Extended, Unscented, and Sliding Window Kalman Filters. The GAT-TCN component is engineered to excel at learning complex, non-linear correlations across both space and time through multi-source data fusion. Empirical validation conducted on a real-world rural toll corridor demonstrates that our proposed model achieves a statistically significant superiority over conventional benchmarks, as rigorously quantified by substantial reductions in both Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). Beyond mere predictive accuracy, the framework delivers transformative safety enhancements by facilitating the proactive identification of hazardous events, enabling earlier detection of overspeeding and queue spillback compared to existing methods. Consequently, this research provides a scalable and robust framework for proactive rural traffic management, fundamentally shifting the paradigm from achieving incremental predictive improvements to generating decisive, safety-actionable insights for infrastructure operators.
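One way to picture the Kalman-filter correction stage is the scalar update below, which fuses a model-predicted speed with a noisy detector reading, weighting each by its uncertainty. The numbers are hypothetical, and the actual framework applies Extended, Unscented, and Sliding Window variants over vector traffic states; this is only the core gain-and-correct step in its simplest form.

```python
def kalman_update(x_pred, p_pred, z, r):
    """One scalar Kalman update: blend a prediction (variance p_pred) with a
    measurement z (noise variance r); the gain k splits trust between them."""
    k = p_pred / (p_pred + r)           # Kalman gain in [0, 1]
    x_new = x_pred + k * (z - x_pred)   # corrected estimate
    p_new = (1 - k) * p_pred            # posterior variance shrinks
    return x_new, p_new

# Hypothetical values: model-predicted mean speed vs. a loop-detector reading.
x, p = 62.0, 4.0    # predicted speed (km/h) and its variance
z, r = 70.0, 4.0    # sensor measurement and its noise variance
x, p = kalman_update(x, p, z, r)
```

With equal variances the gain is 0.5, so the fused estimate lands midway between prediction and measurement, and the posterior variance halves.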
Full article
Mining is associated with specific heavy metals (HMs), including cadmium (Cd), lead (Pb), copper (Cu), iron (Fe), and other toxic metals. These metals contaminate water and accumulate in both children and adults at varying concentrations, resulting in severe health implications. This paper examines the impact of barite mining on water quality, human well-being, and the environment. It evaluates the health implications of natural and anthropogenic activities on the selective liberation of heavy metals at mining sites. The potential environmental impact on mining communities in the extreme dry (April), early or mid-rainy (July), and optimum rainy (October) seasons of the year is also elucidated. Ponds within six barite mining sites were analysed using an Atomic Absorption Spectrometer (AAS) to identify these metals in water samples. The implications of HM concentrations for the well-being of children and adults were assessed using relevant mathematical expressions, and the outcome was compared with national and international environmental standards. Results show that the ponds within the barite mining sites are contaminated with copper (Cu), barium (Ba), cadmium (Cd), lead (Pb), and iron (Fe). The HM concentration exceeds the reference dose (RfD) or tolerable daily intake (TDI) stated by global and national standards for water quality and healthy living. Statistical assessments indicated that the non-carcinogenic risks of Pb and Cd are higher in children than in adults. In addition to mining, farming activities may increase HM contamination within the areas. It is anticipated that existing policy frameworks and water laws will be reviewed to support efforts for the early detection of HMs in water through medical examinations, water quality assessments, and non-carcinogenic risk (NCR) assessments.
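The non-carcinogenic risk comparison rests on the hazard quotient, HQ = CDI / RfD, where the chronic daily intake (CDI) scales inversely with body weight, so children's lower body weight typically inflates their per-kilogram intake. A simplified sketch follows; the concentration, ingestion-rate, body-weight, and RfD values are illustrative only, and exposure frequency and duration terms are deliberately omitted.

```python
def hazard_quotient(conc_mg_l, intake_l_day, body_weight_kg, rfd_mg_kg_day):
    """HQ = chronic daily intake / reference dose; HQ > 1 flags potential
    non-carcinogenic risk. Simplified: EF/ED/AT factors omitted."""
    cdi = conc_mg_l * intake_l_day / body_weight_kg   # mg per kg per day
    return cdi / rfd_mg_kg_day

# Illustrative inputs for Pb: 0.05 mg/L in water, assumed RfD 0.0035 mg/kg/day.
hq_child = hazard_quotient(0.05, intake_l_day=1.0, body_weight_kg=15.0,
                           rfd_mg_kg_day=0.0035)
hq_adult = hazard_quotient(0.05, intake_l_day=2.0, body_weight_kg=70.0,
                           rfd_mg_kg_day=0.0035)
```

Even though the adult drinks twice the volume, the child's far lower body weight yields the larger hazard quotient, consistent with the abstract's finding that children face higher Pb and Cd risks.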
Full article
by Monica Eljaiek-Urzola, Lino Augusto Sander de Carvalho, Edgar Quiñones-Bolaños, Stella Patricia Betancur-Turizo and Luiz Felipe Machado Faria de Sousa
Water 2025, 17(24), 3447; https://doi.org/10.3390/w17243447 - 5 Dec 2025
Cartagena Bay, a coastal estuary in northern Colombia, receives significant sediment inputs from the Canal del Dique, an artificial channel with average discharge rates of 55 m³/s during the dry season and 250 m³/s during the rainy season. This study presents the variability of turbidity in Cartagena Bay over a 21-year period (2002–2022) using MODIS satellite imagery. Turbidity series were derived using an empirical remote-sensing algorithm developed for Cartagena Bay in 2024. In the present study, this algorithm was validated using MODIS data, demonstrating adequate performance (R² = 0.88, RMSE = 3.1, MAPE = 29.5%). Spatial and temporal turbidity patterns were analyzed for three representative months: February (dry season), July (low precipitation), and November (high rainfall). The role of the El Niño–Southern Oscillation (ENSO) on the dynamics of the Canal del Dique discharge and turbidity levels was studied through anomaly analysis and Fourier Transform Analysis (FTA). Results highlight a marked spatial variability in turbidity, with the highest turbidity levels observed near the canal mouth from April to September. FTA revealed a dominant annual cycle in turbidity and discharge, with additional semi-annual and multi-year periodicities linked to the rainfall periods and ENSO. Turbidity variability appears primarily driven by seasonal and local hydrodynamic processes, with a long-term increasing trend in turbidity. This approach can be applied to other tropical estuaries under strong fluvial influence.
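The three validation statistics quoted for the turbidity algorithm (R², RMSE, MAPE) have standard definitions, sketched below on a small set of hypothetical in situ vs. satellite-derived matchups; the numbers are invented and not the study's validation data.

```python
import math

def validation_metrics(obs, pred):
    """Coefficient of determination, root-mean-square error, and mean
    absolute percentage error between observed and predicted series."""
    n = len(obs)
    mean_obs = sum(obs) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    r2 = 1 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / n)
    mape = 100 / n * sum(abs((o - p) / o) for o, p in zip(obs, pred))
    return r2, rmse, mape

# Hypothetical turbidity matchups (NTU): field measurements vs. MODIS retrievals.
obs = [5.0, 12.0, 20.0, 35.0, 50.0]
pred = [6.0, 10.0, 22.0, 33.0, 52.0]
r2, rmse, mape = validation_metrics(obs, pred)
```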
Full article
Explanations for static-analysis warnings assist developers in understanding potential code issues. An end-to-end pipeline was implemented to generate natural-language explanations, evaluated on 5183 warning–explanation pairs from Java repositories, including a manually validated gold subset of 1176 examples for faithfulness assessment. Explanations were produced by a transformer-based encoder–decoder model (CodeT5) conditioned on warning types, contextual code snippets, and static-analysis evidence. Initial experiments employed single-objective optimization for hyperparameters (using a genetic algorithm with dynamic search-space correction, which adaptively adjusted search bounds based on the evolving distribution of candidate solutions, clustering promising regions, and pruning unproductive ones), but this approach enforced a fixed faithfulness–fluency trade-off; therefore, a multi-objective evolutionary algorithm (NSGA-II) was adopted to jointly optimize both criteria. Pareto-optimal configurations improved normalized faithfulness by up to 12% and textual quality by 5–8% compared to baseline CodeT5 settings, with batch sizes of 10–21, learning rates to , maximum token lengths of 36–65, beam width 5, length penalty 1.15, and nucleus sampling . Candidate explanations were reranked using a composite score of likelihood, faithfulness, and code-usefulness, producing final outputs in under 0.001 s per example. The results indicate that structured conditioning, evolutionary hyperparameter search, and reranking yield explanations that are both aligned with static-analysis evidence and linguistically coherent.
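The final reranking step described above can be sketched as a weighted composite over the three candidate-level signals. The weights, field names, and candidate scores below are assumptions for illustration; the abstract does not specify the exact combination rule.

```python
def rerank(candidates, weights=(0.4, 0.4, 0.2)):
    """Sort candidate explanations by a weighted composite of model
    likelihood, faithfulness to static-analysis evidence, and
    code-usefulness (weights are hypothetical)."""
    w_lik, w_faith, w_use = weights

    def composite(c):
        return (w_lik * c["likelihood"]
                + w_faith * c["faithfulness"]
                + w_use * c["usefulness"])

    return sorted(candidates, key=composite, reverse=True)

# Hypothetical beam-search candidates: B is less likely under the LM but far
# more faithful to the analysis evidence, so the composite promotes it.
candidates = [
    {"text": "A", "likelihood": 0.9, "faithfulness": 0.4, "usefulness": 0.5},
    {"text": "B", "likelihood": 0.7, "faithfulness": 0.9, "usefulness": 0.8},
    {"text": "C", "likelihood": 0.8, "faithfulness": 0.6, "usefulness": 0.6},
]
best = rerank(candidates)[0]
```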
Full article
This study investigated a typical mining area with overlapping uranium and coal resources within the northern Ordos Basin. Based on the hydrogeologic conditions and spatial overlapping relationship of uranium and coal resources, we analyzed critical constraints on coordinated mining of uranium and coal. Using the Groundwater Modeling System, we established a numerical model of the groundwater flow field for coordinated mining of uranium and coal. Accordingly, we characterized the impacts of coal mining on the groundwater level in the uranium area, followed by quantitative prediction of the relationship between the coal mining avoidance distance and the groundwater level in the uranium mining area. Regarding the impacts on the groundwater level, this study proposed priority zones and their time sequence for coal mining. Additionally, based on the time when coal mining avoidance scenarios would influence the groundwater level in the uranium mining area, this study proposed priority zones and their time sequence for uranium mining. By developing an avoidance scheme for coordinated mining of uranium and coal from temporal and spatial aspects, this study provides a theoretical basis for the scientific, coordinated mining of uranium and coal resources.
Full article
Efficient berth allocation remains a critical challenge in bulk port operations due to the stochastic nature of vessel arrivals and the complex interaction among loading resources. This study proposes an integrated optimisation–simulation framework to minimise total demurrage costs under uncertainty. The mathematical model was formulated as a mixed-integer linear program (MILP) to determine the optimal assignment and sequencing of vessels to berths and shiploaders, subject to time-window and capacity constraints. The MILP was solved using a Simulated Annealing (SA) metaheuristic to improve computational efficiency for large-scale instances. Subsequently, the optimised berth plans were evaluated in FlexSim, a discrete-event simulation environment, to assess the operational variability arising from stochastic factors, including vessel arrival times, service durations, and loader availability. System performance was measured through vessel waiting time, berth utilisation rate, and demurrage cost variability across multiple replications. Results indicate that the proposed SA–FlexSim framework reduced average demurrage costs by 28.7% compared to the deterministic MILP and by 21.3% relative to standalone SA, confirming its effectiveness and robustness under uncertain operating conditions. The hybrid methodology provides a practical decision-support tool for terminal operators seeking to enhance scheduling reliability and cost efficiency in bulk port environments.
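To illustrate the metaheuristic side of the framework, the sketch below applies Simulated Annealing to a toy single-berth sequencing problem, using total completion delay as a stand-in for demurrage cost. This is a deliberately reduced model: the study's actual MILP handles multiple berths, shiploaders, time windows, and capacity constraints, and all arrival and service figures here are invented.

```python
import math
import random

def total_delay(order, arrivals, service):
    """Single-berth proxy objective: sum over vessels of (completion - arrival)."""
    t, delay = 0.0, 0.0
    for v in order:
        start = max(t, arrivals[v])
        t = start + service[v]
        delay += t - arrivals[v]
    return delay

def anneal(arrivals, service, t0=10.0, cooling=0.995, steps=2000, seed=7):
    """Swap-neighbourhood SA: accept worse sequences with probability
    exp(-delta / temperature), cooling geometrically each step."""
    rng = random.Random(seed)
    order = list(range(len(arrivals)))
    cur = best = total_delay(order, arrivals, service)
    best_order = order[:]
    temp = t0
    for _ in range(steps):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
        cand = total_delay(order, arrivals, service)
        if cand <= cur or rng.random() < math.exp((cur - cand) / temp):
            cur = cand
            if cur < best:
                best, best_order = cur, order[:]
        else:
            order[i], order[j] = order[j], order[i]   # reject: undo the swap
        temp *= cooling
    return best_order, best

# Hypothetical instance: arrival times and service durations (hours) per vessel.
arrivals = [0.0, 1.0, 2.0, 3.0]
service = [5.0, 1.0, 4.0, 2.0]
order, cost = anneal(arrivals, service)
```

Because the incumbent is only ever replaced by a strictly better sequence, the returned cost can never exceed the first-come-first-served baseline that seeds the search.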
Full article
The present systematic review aims to provide a comprehensive synthesis of evidence and practices regarding sustainable career transitions in elite sport. Following PRISMA guidelines, an extensive literature search was conducted in SPORTDiscus (EBSCOhost), PsycINFO, Scopus, Web of Science, and Google Scholar databases, resulting in 117 manuscripts, published from January 2015 to May 2025, and meeting the defined inclusion criteria. The review focused on mental health, dual-career pathways, transition readiness, and identity-related issues among elite athletes, Olympians, and Paralympians. Methodologies included qualitative, quantitative, and mixed-methods designs, with multisport and mixed-gender samples prevailing. The most commonly used instruments were semi-structured interviews and surveys. The main findings highlighted the centrality of mental health support, the role of dual-career planning, and the importance of proactive identity negotiation. Despite growing research interest, significant gaps persist in access to psychological support, structured transition planning, and dual-career strategies, with notable inconsistencies across countries and sports. The review emphasizes the necessity for integrated, multidimensional guidance, culturally sensitive psychological services, and flexible educational pathways to promote athlete well-being and sustainable post-sport careers. These insights are intended to inform the implementation of the ERASMUS+ funded PORTAL project, supporting evidence-based interventions and the development of resources such as an online platform and Real-Life Transition Officers to enhance the transition experiences of elite athletes.
Full article
The M16 protease family comprises metalloendopeptidases, characterized by a unique molecular architecture. The active enzyme molecule is composed of two halves, which together form a structure resembling a clam shell. Although the active site residues are typically located in only one half, both parts are essential for proper enzyme function. The M16 family includes many proteins that are crucial for the physiology of the organism and, therefore, are the subject of intensive research. The flagship examples are insulin-degrading enzyme (IDE), mitochondrial processing peptidases (MPPs), and mitochondrial and chloroplast presequence peptidases (PrePs). The substrates of these enzymes include many biologically important peptides, such as insulin and amyloid β. Therefore, M16 peptidases are considered attractive therapeutic targets, and understanding their structure and mechanism of action is essential for the development of specific and selective modulatory compounds.
Full article