Search Results (448)

Search Parameters:
Keywords = statistical optimisation

21 pages, 3825 KB  
Article
Surface Characteristics and Hydrolytic Stability in Milled and 3D-Printed PMMA Dental Materials
by Liliana Porojan, Flavia Roxana Bejan, Roxana Diana Vasiliu, Mihaela Ionela Gherban, Lavinia Cristina Moleriu and Anamaria Matichescu
Polymers 2026, 18(5), 597; https://doi.org/10.3390/polym18050597 (registering DOI) - 28 Feb 2026
Abstract
This study investigated how fabrication method (milling versus 3D printing) affects the water sorption and solubility of PMMA dental materials, and how surface characteristics affect hydrolytic stability. Fifty-six PMMA samples were divided into three groups fabricated from CAD/CAM milled discs (Group A: I–III) and four groups from 3D-printed resin (Group B: IV–VII), each subjected to distinct postprocessing protocols. Water sorption (wsp) and solubility (wsl) were measured after immersion in distilled water at 37 °C for 24, 48, and 72 h, and 7 and 14 days. Surface topography and nanoroughness were assessed using atomic force microscopy (AFM). Statistical descriptive analyses were followed by correlation analyses. Milled PMMA demonstrated significantly lower water sorption and negative solubility (mass loss), indicating material dissolution. In contrast, 3D-printed PMMA showed higher water sorption and positive solubility (mass gain), reflecting water incorporation and polymer swelling. The kinetic profiles differed: milled PMMA displayed a monophasic absorption curve, while 3D-printed PMMA exhibited a biphasic pattern with accelerated water uptake after 72 h. AFM analysis revealed that 3D-printed surfaces had significantly greater nanoroughness than milled surfaces. Strong positive correlations were observed between surface roughness parameters (Sa, Sy) and water sorption capacity. The fabrication method was found to influence the hydrolytic stability of PMMA dental materials. Milled PMMA demonstrated superior stability, with lower water uptake, smoother surfaces, and lower leaching solubility. In contrast, 3D-printed PMMA exhibited increased surface roughness and water sorption, attributed to its layered microstructure and nanoporosity. Surface topography emerged as a strong predictor of wsl, related to hydrolytic degradation. 
For clinical applications, milled PMMA is recommended for long-term use requiring durability, whereas 3D-printed PMMA may be appropriate for short-term applications with optimised postprocessing. Full article
(This article belongs to the Special Issue Advances in Polymeric Dental Materials (2nd Edition))
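The water sorption and solubility values reported above can be computed from three specimen masses. A minimal sketch, assuming the ISO 20795-1-style definitions commonly used for denture-base polymers (the abstract does not state the authors' exact formulas):

```python
def sorption_solubility(m1_ug, m2_ug, m3_ug, volume_mm3):
    """Water sorption (wsp) and solubility (wsl) in ug/mm^3.

    m1: conditioned (initial dried) mass, m2: mass after immersion,
    m3: reconditioned (final dried) mass -- ISO 20795-1-style definitions;
    the paper's own protocol may differ.
    """
    wsp = (m2_ug - m3_ug) / volume_mm3  # water taken up, then lost on drying
    wsl = (m1_ug - m3_ug) / volume_mm3  # soluble material leached out
    return wsp, wsl
```

A negative wsl simply means the final dried mass exceeds the initial mass, which is the sign behaviour the abstract discusses.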

18 pages, 2969 KB  
Article
Comminution Fault Detection and Diagnosis via Autoencoders and the Sobol Method
by Freddy A. Lucay
Minerals 2026, 16(3), 244; https://doi.org/10.3390/min16030244 - 27 Feb 2026
Abstract
Fault detection and diagnosis (FDD) are critical for maintaining efficiency and operational stability of comminution systems. However, conventional methods struggle to capture their complex dynamic behaviour, while data-driven approaches are constrained by limited labelled fault data and the need for interpretable diagnostic models. Progress is further hindered by the scarcity of publicly available industrial datasets. This study presents an explainable FDD framework that integrates unsupervised autoencoder (AE)-based anomaly detection with variance-based global sensitivity analysis (GSA) for quantitative fault diagnosis. A simulated comminution control system was developed to enable controlled validation under realistic operating variability. Multiple AE architectures were trained with hyperparameters optimised using chaotic particle swarm optimisation and evaluated using statistical and reconstruction-based metrics combined with multi-criteria decision analysis. The sparse AE achieved the best performance, with an MSE of 5.6 × 10−5, F1-score of 0.9930, and accuracy of 0.986 in detecting faults in P80 and P20. To diagnose detected faults, Sobol’s variance-based GSA was applied to quantify both the main and interaction effects of operational variables on particle size distribution. The results identify circuit feed rate, ball mill critical speed, and the pulp solids fraction supplied to the hydrocyclones as dominant drivers of faults associated with product coarsening, whereas circuit feed rate and ball mill critical speed primarily govern ultrafine particle generation. By integrating deep learning with explainable sensitivity analysis, this study advances transparent and quantitative diagnosis of complex mineral processing systems. Full article
(This article belongs to the Section Mineral Processing and Extractive Metallurgy)
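The variance-based GSA step above can be illustrated with a Monte Carlo (Saltelli pick-and-freeze) estimator of first-order Sobol indices. This is a generic sketch, not the paper's implementation; production work would typically use a library such as SALib:

```python
import numpy as np

def sobol_first_order(f, d, n, rng):
    """First-order Sobol indices for f: [0,1]^d -> R via the
    pick-and-freeze Monte Carlo scheme. Illustrative sketch only."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    total_var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze variable i from B
        S[i] = np.mean(fB * (f(ABi) - fA)) / total_var
    return S
```

For a linear test function 4*x1 + x2 on the unit square, the exact indices are 16/17 and 1/17, which the estimator recovers to within Monte Carlo error.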

26 pages, 9992 KB  
Article
Suitability Maps of Bactrocera oleae Presence by SDM Based on Pedo-Climatic and Topographic Predictor Data in Sicily
by Giuseppe Antonio Catalano, Giovanni Pirrello, Provvidenza Rita D’Urso and Claudia Arcidiacono
Agronomy 2026, 16(5), 501; https://doi.org/10.3390/agronomy16050501 - 24 Feb 2026
Abstract
Climate change and increasingly restrictive pesticide regulations have created a growing need for new tools to support the integrated pest management (IPM) of the olive fruit fly, Bactrocera oleae, in cultivated areas of the Mediterranean. In this study, the environmental suitability for this phytophagous insect in eastern Sicily was mapped using geographic information system (GIS) tools and species distribution models (i.e., Random Forest and MaxEnt). The models were trained on presence data of the fly, obtained from a network of pheromone traps and locations where olive trees were present, combined with climatic, topographic and soil predictors for both current conditions and the future climate scenario (2021–2040). Correlation analysis was utilised to select ten predictors from an initial set of 33 soil and climate variables. Model performance was evaluated using 10-fold cross-validation based on the accuracy measures Area Under the Curve (AUC), True Skill Statistic (TSS), and the difference between the training and testing AUC, to minimise overfitting. Both algorithms demonstrated excellent predictive performance, producing convergent suitability maps, with high values concentrated in the foothills and hills of the Iblean–Calatino area and low values along the coastal plains and at higher altitudes, where extreme temperatures and unfavourable soil textures reduce habitat suitability. Response curves highlighted the combined influence of moderate temperature and precipitation seasonality, balanced topsoil texture, and moderate slopes in defining the species’ ecological niche. The proposed framework provides an operational basis for optimising monitoring networks and targeting IPM measures under current and near-future climate conditions. Full article
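The True Skill Statistic used to score the species distribution models is simply sensitivity plus specificity minus one, computed from a presence/absence confusion matrix. A minimal sketch (not the authors' code):

```python
def true_skill_statistic(y_true, y_pred):
    """TSS = sensitivity + specificity - 1 for binary presence/absence
    predictions (1 = presence). Ranges from -1 to +1; 0 means no skill."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1
```

Unlike raw accuracy, TSS is insensitive to the prevalence of presences in the evaluation set, which is why it is a standard SDM metric.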
26 pages, 4461 KB  
Article
A Spatiotemporal Feature-Driven Deep Learning Framework for Fine-Grained Tugboat Operation Recognition
by Xiang Jia, Hongxiang Feng, Manel Grifoll and Qin Lin
Systems 2026, 14(2), 225; https://doi.org/10.3390/systems14020225 - 23 Feb 2026
Abstract
Accurate perception of tugboat operational status is essential for optimising port scheduling efficiency and ensuring operational safety. However, existing AIS-based methods often struggle to capture the fine-grained and asymmetric manoeuvring characteristics of tugboats, particularly in distinguishing assisted berthing from unberthing operations. To address these limitations, this study proposes a hybrid recognition framework integrating multidimensional feature engineering with spatiotemporal dynamics. First, a speed-threshold-based sliding window algorithm segments trajectories into sailing and berthing states. Second, a 15-dimensional feature vector—comprising statistical and descriptive features from speed, heading, and trajectory morphology—is constructed to characterise tugboat behaviour. Notably, morphological descriptors such as the ‘Overlap Ratio’ serve as implicit spatial proxies, capturing geographical constraints without reliance on Electronic Navigational Charts. A three-layer fully connected neural network (FCNN) is then developed to classify segments into “Cruising” and “Assisting in Berthing/Unberthing.” Finally, a speed-dynamics rule further distinguishes berthing from unberthing based on opposing temporal evolution patterns. Experiments on real AIS data from Ningbo–Zhoushan Port demonstrate that the model achieves an F1-score of 0.90 and a recall of 0.93 for assistance-related operations. Permutation importance analysis confirms that integrating kinematic and morphological features enables interpretable and precise intent inference. This study offers a high-precision, low-dependency solution for tugboat operation identification, supporting intelligent port surveillance and sustainable maritime management. Full article
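The first stage, speed-threshold sliding-window segmentation, can be sketched as a moving-average filter over the AIS speed series. The window length and threshold below are illustrative placeholders, not the paper's values:

```python
import numpy as np

def segment_states(speeds, window=5, threshold=1.0):
    """Label each AIS point 'berthing' when the centred moving-average
    speed (knots) falls below `threshold`, else 'sailing'. Window and
    threshold are illustrative, not the study's calibrated values."""
    speeds = np.asarray(speeds, dtype=float)
    kernel = np.ones(window) / window          # moving-average filter
    smoothed = np.convolve(speeds, kernel, mode="same")
    return ["berthing" if s < threshold else "sailing" for s in smoothed]
```

Smoothing before thresholding avoids flickering labels when a tug momentarily slows during a cruise leg.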

15 pages, 1018 KB  
Article
Does Vitamin D Concentration Matter? The Consequential Effects of Serum Vitamin D Concentration and Supplementation on Paediatric Fracture Risk
by Tan Si Heng Sharon, Eunice Anastasia Wilianto, Andrew Kean Seng Lim and James Hoipo Hui
Nutrients 2026, 18(4), 705; https://doi.org/10.3390/nu18040705 - 22 Feb 2026
Abstract
Objective: The association between vitamin D status and paediatric fracture risk remains controversial, with inconsistent findings across existing studies. This study aimed to evaluate the relationship between serum 25(OH)D concentrations, vitamin D sufficiency, insufficiency and deficiency, vitamin D supplementation and fracture risk in a large Southeast Asian paediatric cohort. Methods: This retrospective cross-sectional study included children under 18 years whose serum 25(OH)D concentrations were measured between 2014 and 2022. One-way ANOVA determined statistical significance between 25(OH)D concentrations in fracture and non-fracture groups. Prevalence of vitamin D insufficiency, deficiency and supplementation was compared between the two groups. Chi-square tests evaluated the association between 25(OH)D concentrations and supplementation against fracture risk. Results: A total of 4530 children were included (157 fracture cases, 4373 controls). Mean serum 25(OH)D concentration was lower in the fracture group than in the controls (27.44 ± 12.26 vs. 30.75 ± 15.21 ng/mL; p = 0.007). Sub-sufficient vitamin D status (<30 ng/mL) was more prevalent among fracture patients (p = 0.001), and suboptimal (p = 0.001), insufficient (p = 0.001), and deficient (p = 0.014) categories were each significantly associated with fractures. An association between vitamin D supplementation and fracture risk was observed. However, the dataset did not permit the determination of causality and a protective effect cannot be inferred. Conclusions: Higher serum 25(OH)D concentrations were associated with lower fracture risk, suggesting that optimisation of vitamin D status may represent a modifiable factor in paediatric bone health. Healthcare institutions should aim to maintain adequate 25(OH)D concentrations (>30 ng/mL). An association between vitamin D supplementation and fracture risk was observed; however, causality cannot be inferred from this retrospective dataset. Full article
(This article belongs to the Section Pediatric Nutrition)
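The chi-square tests on vitamin D status versus fracture occurrence reduce to the Pearson statistic for a 2x2 contingency table, which has a compact closed form. A sketch with illustrative counts (not the study's data); scipy.stats.chi2_contingency would be the usual library route:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for a
    2x2 table [[a, b], [c, d]], e.g. rows = fracture/no fracture,
    columns = vitamin D sub-sufficient/sufficient."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

The statistic is compared against the chi-square distribution with one degree of freedom (critical value 3.84 at p = 0.05).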

37 pages, 1489 KB  
Article
Data-Driven Optimisation of Endoscopy Department Resources Through Statistical Analysis and Mixed-Integer Linear Programming
by Laia Llunas-Mestres, Francesca L. Aguilar Paredes, Luis Barranco-Priego, Miguel Pantaleón Sánchez, Pere Marti-Puig and Jordi Cusido
Appl. Sci. 2026, 16(4), 1864; https://doi.org/10.3390/app16041864 - 13 Feb 2026
Abstract
The efficient use of resources represents a critical challenge for public healthcare systems facing increasing demand. In this study, an operational analysis was conducted at Hospital del Mar (Barcelona) to demonstrate that persistent bottlenecks and capacity deficits are primarily organizational and not only quantitative. Through a prospective observational study and exploratory data analysis (EDA), it was identified that high apparent workloads often coexist with structural inefficiencies, particularly regarding the unpredictable demand of urgent and inpatient procedures. To address these gaps, a Mixed-Integer Linear Programming (MILP) model was implemented to optimize spatial and temporal resource allocation. Unlike reactive scheduling, this data-driven approach explicitly incorporates capacity reserves for non-programmable activities and ensures realistic time slots without increasing physical or human resources. It is shown that MILP-optimized scheduling significantly balances workload, eliminates artificial overlaps, and improves room utilization—reaching rates of 99.5%. The findings highlight that temporal agenda design constitutes a critical, yet underutilized, lever for hospital management. A scalable tool for evidence-based decision-making is provided by this framework, allowing for a clear distinction between apparent productivity and real efficiency. The proposed model is considered highly transferable to other clinical settings facing similar operational constraints. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
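The core allocation idea, scheduling elective procedures into room-slots while holding back reserved capacity for urgent and inpatient arrivals, can be shown with a deliberately tiny brute-force toy. This is not the paper's MILP model (which would use a solver such as PuLP or Gurobi); it only demonstrates the constraint structure:

```python
from itertools import product

def best_schedule(procedures, slots, reserve):
    """Toy allocator: assign elective procedure durations (minutes) to
    room-slots of given capacity, reserving `reserve` minutes per slot
    for non-programmable activity. Maximises scheduled minutes by
    exhaustive search; feasible only for very small instances."""
    n_slots = len(slots)
    best, best_used = None, -1
    for assign in product(range(n_slots + 1), repeat=len(procedures)):
        load = [0] * n_slots
        ok = True
        for dur, s in zip(procedures, assign):
            if s < n_slots:            # index n_slots means "unscheduled"
                load[s] += dur
                if load[s] > slots[s] - reserve:
                    ok = False
                    break
        if ok and sum(load) > best_used:
            best, best_used = assign, sum(load)
    return best, best_used
```

A real MILP formulation expresses the same capacity-minus-reserve constraints linearly and scales to full departmental agendas.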

16 pages, 2393 KB  
Article
Parameter Optimisation in 3D Extrusion Printing of Polyhydroxybutyrate Using Design of Experiment Methodology
by Mingzu Du, Giuseppe Tronci, Xuebin B. Yang and David J. Wood
J. Funct. Biomater. 2026, 17(2), 90; https://doi.org/10.3390/jfb17020090 - 12 Feb 2026
Abstract
This study systematically optimised extrusion-printing parameters for polyhydroxybutyrate (PHB) using a Design of Experiment (DoE) approach to improve printability and construct fidelity. A five-factor DoE was conducted to evaluate the individual and interactive effects of printhead temperature, printing pressure, printing speed, bed temperature, and cartridge heating time on the dimensional accuracy of printed constructs. The resulting regression model enabled the identification of statistically significant main and interaction effects among processing variables. An optimised parameter set (printhead temperature 145 °C, pressure 150 kPa, speed 15 mm s−1, bed temperature 25 °C, and cartridge heating time 120 s) enabled the fabrication of PHB scaffolds with substantially improved shape fidelity, which was experimentally validated using verification prints. These results demonstrate that a DoE-based optimisation strategy provides a robust and efficient route for rationally tuning PHB extrusion-printing conditions, thereby enhancing process reliability for scaffold fabrication in regenerative medicine applications. Full article
(This article belongs to the Special Issue 3D Printing Biomaterials and Technologies in Biomedical Applications)
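The regression model behind a two-level DoE like this estimates an intercept and main effects by ordinary least squares on a coded (+1/-1) design matrix. A minimal sketch with synthetic numbers, not the paper's five-factor data:

```python
import numpy as np

def main_effects(design, response):
    """Fit intercept + main effects for a coded (+/-1) two-level
    factorial design by ordinary least squares. `design` is
    (runs x factors); returns [b0, b1, ..., bk]."""
    X = np.column_stack([np.ones(len(design)), design])
    beta, *_ = np.linalg.lstsq(X, np.asarray(response, float), rcond=None)
    return beta
```

Interaction effects are obtained the same way by appending product columns (e.g. x1*x2) to the design matrix before the fit.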

23 pages, 2298 KB  
Article
Optimal Market Share in Automobile Insurance Auction Markets
by Manuel Rodriguez, Rolando Rubilar-Torrealba, Cristóbal Fernandez-Robin, Diego Yáñez and Bernardo Pincheira
Mathematics 2026, 14(4), 628; https://doi.org/10.3390/math14040628 - 11 Feb 2026
Abstract
The automobile insurance industry plays a pivotal role in the financial system, fostering economic stability through effective risk management and consumer confidence. Continuous enhancement in price optimisation not only ensures the sustainability of insurers but also fosters a more competitive, fair, and balanced market, which is vital for a country’s economic development. The objective of this research is to develop a methodology for determining the optimal price offered by insurance firms for automobile policies in an industry where a First Price Sealed Bid auction system operates. A statistical methodology is employed to ascertain the expected value and standard deviation of the policies on offer in the public domain, whereby these values are calculated using a heteroskedastic linear regression estimation methodology. Furthermore, the aforementioned expected values and standard deviation enable the calculation of the value of the cumulative distribution for an optimal price set within the public offer. This study demonstrates that identifying the optimal price that maximizes profits is analogous to establishing an expected market share for each niche automobile policy market. Moreover, the market share can be calculated through a straightforward heteroskedastic linear regression estimation for instances where market shares are below 50%. Full article
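The pricing logic described, using an estimated mean and standard deviation to evaluate a cumulative distribution at a candidate price, can be sketched as follows. This is a stylised toy, not the authors' estimator: it assumes the best competing premium is normal with parameters mu and sigma (which their heteroskedastic regression would supply per policy) and that the lowest premium wins:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def optimal_price(cost, mu, sigma, grid):
    """Pick the grid price maximising expected profit
    (p - cost) * P(best competing offer exceeds p)."""
    def expected_profit(p):
        return (p - cost) * (1.0 - phi((p - mu) / sigma))
    return max(grid, key=expected_profit)
```

The win probability 1 - phi((p - mu) / sigma) is exactly the expected market share the abstract equates with the optimal price.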

37 pages, 4614 KB  
Article
The Role of AI in Revolutionising Cryptocurrency Trading
by Georgiana-Iulia Lazea, Cristian Lungu and Ovidiu-Constantin Bunget
Electronics 2026, 15(4), 742; https://doi.org/10.3390/electronics15040742 - 10 Feb 2026
Abstract
This article examines the revolutionary impact of Artificial Intelligence (AI) on transforming cryptocurrency trading, a sector characterised by extreme volatility, dynamism, and nonlinear data. Through a rigorous bibliometric analysis based on the Web of Science database, this study examines a sample of 555 scientific papers published between 2016 and 2025, utilising the PRISMA protocol for systematic selection, and tools such as VOSviewer and MS Excel. The analysis identifies five major thematic clusters: (1) blockchain infrastructure and AI integration in decentralised ecosystems, (2) data analysis and practical applicability in crypto markets, (3) financial and social data analysis—machine learning algorithms, (4) algorithmic trading and automation, and (5) prediction and modelling of crypto market developments. The originality of this study lies in providing an overview of the implementation stage of these technologies by integrating the results into a map of Technology Readiness Levels (TRLs). The findings highlight a clear transition from traditional statistical methods to autonomous decision-making systems capable of processing massive volumes of data for portfolio optimisation. This study’s limitation is that it may require periodic updates, as the AI and cryptocurrency landscape are constantly evolving. Full article

16 pages, 1100 KB  
Article
Balance Assessments Using Smartphone Sensor Systems and a Clinician-Led Modified BESS Test in Soccer Athletes with Hip-Related Pain: An Exploratory Cross-Sectional Study
by Alexander Puyol, Matthew King, Charlotte Ganderton, Shuwen Hu and Oren Tirosh
Sensors 2026, 26(3), 1061; https://doi.org/10.3390/s26031061 - 6 Feb 2026
Abstract
Background: The Balance Error Scoring System (BESS) is the most widely practised static postural balance assessment tool, which relies on visual observation, and has been adopted as the gold standard in the clinic and field. However, its low inter-rater reliability and limited sensitivity can lead to missed or inaccurate diagnoses, as subtle balance deficits go undetected, particularly in athletic populations. Smartphone technology using motion sensors may offer an alternative for providing quantitative feedback to healthcare clinicians when performing balance assessments. The primary aim of this study was to explore the discriminative validity of a novel smartphone-based cloud system to measure balance remotely in soccer athletes with and without hip pain. Methods: This is an exploratory cross-sectional study. A total of 64 Australian soccer athletes (128 hips, 28% females) between 18 and 40 years completed single and tandem stance balance tests that were scored using the modified BESS test and quantified using the smartphone device attached to their lower back. An Exploratory Factor Analysis (EFA) and a Clustered Receiver Operating Characteristic (ROC) using an Area Under the Curve (AUC) were used to explore the discriminative validity between the smartphone sensor system and the modified BESS test. A Linear Mixed-Effects Analysis of Covariance (ANCOVA) was used to determine any statistical differences in static balance measures between individuals with and without hip-related pain. Results: EFA revealed that the first factor primarily captured variance related to smartphone measurements, while the second factor was associated with modified BESS test scores. 
The ROC and the AUC showed that the smartphone sway measurements in the anterior–posterior and mediolateral directions during single-leg stance had an acceptable to excellent level of accuracy in distinguishing between individuals with and without hip-related pain (AUC = 0.72–0.80). Linear Mixed-Effects ANCOVA analysis found that individuals with hip-related pain had significantly less single-leg balance variability and magnitude in the anteroposterior and mediolateral directions compared to individuals without hip-related pain (p < 0.05). Conclusion: Due to the ability of smartphone technology to discriminate between individuals with and without hip-related pain during single-leg static balance tasks, it is recommended to use the technology in addition to the modified BESS test to optimise a clinician-led assessment and to further guide clinical balance decision-making. While the study supports smartphone technology as a method to assess static balance, its use in measuring balance during dynamic movements needs further research. Full article
(This article belongs to the Special Issue Innovative Sensing Methods for Motion and Behavior Analysis)
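The AUC values reported (0.72-0.80) have a direct probabilistic reading: the chance that a randomly chosen case with hip-related pain gets a higher sway score than a randomly chosen pain-free case. A minimal rank-based sketch (Mann-Whitney formulation), not the study's analysis code:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as P(random positive case scores above random negative case);
    ties count one half. O(n*m) pairwise version for clarity."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.5 means the scores carry no discriminative information; 1.0 means perfect separation.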

23 pages, 1750 KB  
Article
Numerical Modelling of Pulsed Laser Surface Processing of Polymer Composites
by Krzysztof Szabliński and Krzysztof Moraczewski
Materials 2026, 19(3), 607; https://doi.org/10.3390/ma19030607 - 4 Feb 2026
Abstract
Filled-polymer coatings enable functional surfaces for selective metallisation, wetting control and local conductivity, but pulsed-laser texturing is often limited by process non-uniformity caused by scan kinematics and plume shielding. Here, we develop a three-tier numerical workflow for nanosecond pulsed-laser surface treatment of a thermoplastic coating containing glass microspheres (baseline case: PLA matrix with Vf = 0.20; spheres represented via an effective optical transport model). Tier 1 predicts spatially resolved ablation depth under raster scanning, using an incubation law and regime switching (no-removal/melt-limited/logarithmic ablation/blow-off) coupled to a dynamic shielding factor. Tier 2 computes the 1D transient (pulse-averaged) temperature field and the thickness of the thermally softened layer. Tier 3 models post-pulse capillary redistribution of the softened layer to estimate groove reshaping. The simulations show that scan overlap and shielding dynamics dominate groove homogeneity more strongly than average power alone: under identical average power, variations in local pulse count and shielding lead to significant changes in depth statistics and regime fractions. The workflow produces quantitative maps and summary metrics (mean depth, P5–P95 range, uniformity index and regime fractions) and demonstrates how controlled reflow can smooth peaks while preserving groove depth. These results provide a predictive tool for laser parameter selection and process optimisation prior to experimental trials. Full article

11 pages, 1739 KB  
Article
Galectin-3 (Gal-3) Inhibitors as Radiosensitizers for Prostate Cancer
by Renato M. Rodrigues, Bárbara Matos, Vera Miranda-Gonçalves, Carmen Jerónimo and Margarida Fardilha
Therapeutics 2026, 3(1), 7; https://doi.org/10.3390/therapeutics3010007 - 3 Feb 2026
Abstract
Introduction: Radioresistance in prostate cancer (PCa) poses a major therapeutic challenge. Galectin-3 (Gal-3) is overexpressed in aggressive PCa and may contribute to resistance mechanisms. This study evaluated the role of Gal-3 in radioresistance and assessed the effect of its pharmacological inhibition using GB1107. Methods: Parental (22RV1-P) and radioresistant (22RV1-RR) PCa cell lines were treated with GB1107. Western blotting assessed Gal-3 and Protein Phosphatase 1 alpha (PP1α) expression. Cell viability (PrestoBlue™), migration (wound assay), and clonogenic survival post-irradiation were evaluated. Statistical significance was set at p < 0.05. Results: Gal-3 was significantly upregulated in 22RV1-RR cells (p = 0.0237). GB1107 reduced viability and impaired migration in both cell lines. Radiosensitisation was observed in 22RV1-P cells (p < 0.0001) but was not significant in 22RV1-RR cells (p = 0.1258). A non-significant increase in PP1α expression was detected in RR cells. Conclusion: Gal-3 contributes to radioresistance. Further studies are needed to clarify the role of PP1α and optimise Gal-3-targeted strategies. Full article

19 pages, 2696 KB  
Article
Quantification of Microplastics in Treated Drinking Water Using µ-FT-IR Spectroscopy: A Case Study from Northeast Italy
by Giulia Dalla Fontana, Davide Lamprillo, Francesca Dotti, Ada Ferri, Tommaso Foccardi and Raffaella Mossotti
Microplastics 2026, 5(1), 23; https://doi.org/10.3390/microplastics5010023 - 2 Feb 2026
Abstract
Microplastics spread through the environment in various ways. Inland waters are an ideal medium for their dispersal, as they collect pollutants from various sources and transport them over long distances. From there, microplastics can enter the marine environment, break down into smaller particles or end up in drinking water treatment plants. However, the fate, transport and potential health effects of microplastics after ingestion of drinking water and water in food are not yet fully understood. It is therefore necessary to evaluate the quantification and identification of microplastics in drinking water by analysing real samples in order to assess the potential impact on human health. To this end, microplastic contamination in 32 treated drinking water samples from a surface water treatment plant in north-eastern Italy was analysed using micro-Fourier transform infrared spectroscopy (µ-FT-IR). The results indicated low levels of contamination, with all the samples containing fewer than 170 microplastics per litre, which is in line with European drinking water levels. Polyolefins with size 50–500 µm, such as polypropylene and polyethylene, were the predominant polymers detected (50.2%), while, surprisingly, polyethylene terephthalate was scarcely present (0.1%, size range 10–50 µm). Statistical analysis revealed a significant negative correlation between microplastic concentration and sampling volume, with larger volumes yielding fewer particles. This inconsistency likely results from the lack of bottle rinsing when only a fraction of the sampling volume is filtered. It was also found that rinsing the sampling bottles with ethanol alone prior to analysis was sufficient to ensure accurate quantification. These results highlight the challenges in standardising the detection of microplastics in drinking water and underline the need for optimised sampling protocols. Full article
(This article belongs to the Collection Feature Papers in Microplastics)
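The negative rank correlation reported between sampling volume and particle concentration can be checked with a Spearman test. A minimal pure-Python sketch follows; the volume/count pairs are invented for illustration and do not come from the study:

```python
# Hypothetical sketch: does microplastic concentration fall as sampling
# volume rises? Spearman's rho computed from scratch (stdlib only).
# The data below are invented for illustration, not the paper's results.

def ranks(xs):
    # Assign 1-based ranks, averaging ranks across ties.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank vectors.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

volumes_l = [0.5, 1.0, 1.5, 2.0, 3.0, 5.0]   # sampled volume (L), invented
conc_mp_per_l = [160, 120, 95, 70, 40, 25]   # MP per litre, invented
print(round(spearman(volumes_l, conc_mp_per_l), 3))  # strongly negative here
```

For perfectly anti-monotone data like this, rho is -1; real samples would give a weaker but still negative value.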
16 pages, 2645 KB  
Article
Point-of-Care Bilirubin Testing in Neonates: Comparative Performance of Blood Gas Analysis and Transcutaneous Bilirubinometry
by Andrew Xu, Bincy Francis, Kay Weng Choy, George Francis Dargaville, Amy Surkitt, David Tran, Rami Subhi and Wei Qi Fan
Healthcare 2026, 14(3), 370; https://doi.org/10.3390/healthcare14030370 - 1 Feb 2026
Abstract
Background: Neonatal jaundice is a common condition with potentially severe complications such as bilirubin-induced neurological dysfunction and kernicterus. While serum bilirubin (SBR) remains the standard laboratory measurement, point-of-care methods, such as transcutaneous bilirubinometry (TcB) and blood gas analysers (BGAs), offer rapid, less invasive alternatives. Direct comparisons of their diagnostic accuracy remain limited. Objective: The aim of this study was to assess and compare the diagnostic accuracy and clinical utility of TcB and BGA against SBR in neonatal hyperbilirubinaemia screening. Methods: This retrospective study included neonates (n = 221) with concurrent SBR, BGA, and TcB measurements (n = 333). Agreement was assessed via Passing–Bablok regression, Bland–Altman analysis, and Spearman correlation. Diagnostic performance was evaluated against jaundice thresholds in phototherapy charts (≥95th percentile threshold). Subgroup analyses considered phototherapy status, haemoglobin concentration, and Fitzpatrick skin type. Results: BGA showed stronger agreement with SBR (R2 = 0.88) than TcB (R2 = 0.43). BGA remained accurate regardless of phototherapy or haemoglobin levels. TcB accuracy declined post-phototherapy, with reduced predictive value in darker-skinned neonates (Fitzpatrick III–VI) and increased false discovery rates. Both methods demonstrated low sensitivity (45.8%) but high specificity (>95%) and negative predictive value (~91%) for clinically significant hyperbilirubinaemia. BGA had a higher diagnostic odds ratio (47.5) than TcB (19.3). When individual patient sequential SBR and BGA measurements were compared for jaundice tracking (n = 175), there was high correlation (r = 0.971) with no statistically significant differences, and 50% of measurements agreed within 10 μmol/L. Conclusions: BGA is a more reliable alternative to SBR than TcB, particularly in time-critical or resource-limited settings. While TcB remains a non-invasive screening tool, its limited accuracy post-phototherapy and in darker-skinned neonates indicates the need for confirmatory SBR testing. These findings support the selective and context-aware use of BGA and TcB to optimise neonatal hyperbilirubinaemia management and reduce interventions. Full article
(This article belongs to the Section Clinical Care)
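The Bland–Altman analysis used in this study summarises method agreement as a mean difference (bias) and 95% limits of agreement. A minimal sketch follows; the paired bilirubin values are invented for illustration, not data from the paper:

```python
# Hypothetical sketch of a Bland-Altman agreement summary for paired
# bilirubin readings. Values (umol/L) are invented for illustration.

from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Return (bias, lower LoA, upper LoA) for paired measurements.

    bias  = mean of the pairwise differences (a - b)
    LoA   = bias +/- 1.96 * SD of the differences (95% limits of agreement)
    """
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

sbr = [150, 180, 210, 240, 260, 300]   # laboratory serum bilirubin (invented)
bga = [148, 185, 205, 238, 265, 295]   # blood gas analyser reading (invented)

bias, lo, hi = bland_altman(bga, sbr)
print(f"bias {bias:.1f} umol/L, 95% LoA [{lo:.1f}, {hi:.1f}]")
```

A small bias with narrow limits of agreement is what "BGA is a reliable alternative to SBR" would look like in this plot; wide limits would argue against interchangeable use even when the bias is near zero.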
53 pages, 7826 KB  
Article
Neural Network Method for Detecting Low-Intensity DDoS Attacks with Stochastic Fragmentation and Its Adaptation to Law Enforcement Activities in the Cyber Protection of Critical Infrastructure Facilities
by Serhii Vladov, Victoria Vysotska, Łukasz Ścisło, Rafał Dymczyk, Oleksandr Posashkov, Mariia Nazarkevych, Oleksandr Yunin, Liliia Bobrishova and Yevheniia Pylypenko
Computers 2026, 15(2), 84; https://doi.org/10.3390/computers15020084 - 1 Feb 2026
Abstract
This article develops a method for the early detection of low-intensity DDoS attacks based on a three-factor vector metric and implements an applied hybrid neural network traffic analysis system that combines preprocessing stages, competitive pretraining (SOM), a radial basis layer, and an associative Grossberg output, followed by gradient optimisation. The initial tools used are statistical online estimates (moving-average or EWMA estimates), CUSUM-like statistics for identifying small stable shifts, and deterministic signature filters. An algorithm has been developed that aggregates the components of fragmentation, reception intensity, and service availability into a single index. Key features include physically interpretable features, a hybrid neural network architecture with associative stability and low computational complexity, and built-in mechanisms for adaptive threshold calibration and online training. An experimental evaluation of the developed method using real telemetry data demonstrated high recognition performance (accuracy 0.945, AUC 0.965, F1 score 0.945, localisation accuracy 0.895, average detection latency 55 ms), outperforming the compared CNN-LSTM and Transformer solutions. The scientific contribution of this study lies in the development of a robust, computationally efficient, and application-oriented solution for detecting low-intensity attacks with the ability to integrate into edge and SOC systems. Practical recommendations for reducing false positives and further improvements through low-training methods and hardware acceleration are also proposed. Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (3rd Edition))
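The CUSUM-like statistics mentioned in the abstract are designed to catch small but persistent shifts that a fixed threshold would miss. A minimal one-sided CUSUM sketch follows; this is a generic textbook form, not the paper's actual detector, and the traffic trace and the parameters `k` and `h` are invented for illustration:

```python
# Minimal one-sided CUSUM sketch for small persistent mean shifts,
# as might be applied to a per-interval packet-rate series. This is a
# generic illustration, not the paper's detector; k (reference value)
# and h (decision threshold) are invented parameters.

def cusum_alarm(samples, target, k=0.5, h=5.0):
    """Return the index of the first alarm, or -1 if none fires.

    The statistic accumulates excursions above target + k and resets
    at zero, so small one-off spikes decay while a sustained shift
    grows linearly until it crosses h.
    """
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - target - k))
        if s > h:
            return i
    return -1

# Stable rate around 10, then a small persistent +1.5 shift at index 30:
trace = [10.0] * 30 + [11.5] * 30
print(cusum_alarm(trace, target=10.0))
```

With these values the statistic grows by 1.0 per post-shift sample and crosses h = 5.0 on the sixth shifted sample, a delay a single-sample threshold at, say, 12.0 would never incur because it would never fire at all on this trace.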