Search Results (551)

Search Parameters:
Keywords = statistical analysis of experimental evaluation data

20 pages, 573 KB  
Article
Application and Evaluation of a Bipolar Improvement-Based Metaheuristic Algorithm for Photovoltaic Parameter Estimation
by Mashar Cenk Gençal
Mathematics 2026, 14(3), 548; https://doi.org/10.3390/math14030548 - 3 Feb 2026
Abstract
Photovoltaic (PV) systems play a significant role in renewable energy production. Due to the nonlinear and multi-modal nature of PV models, using accurate model parameters is crucial. In recent years, metaheuristic algorithms have been utilized to estimate these parameter values. While established metaheuristics like Genetic Algorithms (GAs) incorporate mechanisms such as mutation and selection to maintain diversity, they may still encounter challenges related to premature convergence when navigating the complex, multi-modal landscapes of PV parameter estimation. In this study, the performance of the previously proposed Bipolar Improved Roosters Algorithm (BIRA), which enhances search efficiency through a bipolar movement strategy to balance exploration and exploitation phases, is evaluated. BIRA is compared with the Simple GA (SGA), Particle Swarm Optimization (PSO), and Grey Wolf Optimizer (GWO) in estimating the electrical parameters of a single-diode PV model using experimental current–voltage data. The experimental results demonstrate that BIRA outperforms its competitors, achieving the lowest Root Mean Squared Error (RMSE) of 1.0504 × 10⁻³ for the Siemens SM55 and 4.8698 × 10⁻⁴ for the Kyocera KC200GT modules. Furthermore, statistical analysis using the Friedman test confirms BIRA’s superiority, ranking it first among all tested algorithms across both datasets. These findings indicate that BIRA is an effective and reliable tool for accurate PV parameter estimation. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
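
The Friedman test mentioned in the abstract above is a standard non-parametric way to rank competing optimizers over multiple problem instances. The sketch below is a minimal illustration with scipy; the algorithm names come from the abstract, but the RMSE values are placeholders, not the paper's data.

```python
# Minimal sketch: ranking four optimizers across benchmark runs with the
# Friedman test (scipy). RMSE values below are illustrative placeholders,
# not results from the paper.
import numpy as np
from scipy.stats import friedmanchisquare

# rows = independent runs/datasets, columns = algorithms (BIRA, SGA, PSO, GWO)
rmse = np.array([
    [1.05e-3, 1.31e-3, 1.22e-3, 1.18e-3],
    [4.87e-4, 7.90e-4, 6.10e-4, 5.40e-4],
    [9.80e-4, 1.40e-3, 1.25e-3, 1.10e-3],
    [5.10e-4, 8.30e-4, 6.60e-4, 5.90e-4],
])

stat, p_value = friedmanchisquare(*rmse.T)          # one sample per algorithm
mean_ranks = np.mean(np.argsort(np.argsort(rmse, axis=1), axis=1) + 1, axis=0)

print(f"Friedman chi-square = {stat:.3f}, p = {p_value:.4f}")
for name, rank in zip(["BIRA", "SGA", "PSO", "GWO"], mean_ranks):
    print(f"{name}: mean rank = {rank:.2f}")
```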

31 pages, 1633 KB  
Article
Foundation-Model-Driven Skin Lesion Segmentation and Classification Using SAM-Adapters and Vision Transformers
by Faisal Binzagr and Majed Hariri
Diagnostics 2026, 16(3), 468; https://doi.org/10.3390/diagnostics16030468 - 3 Feb 2026
Abstract
Background: The precise segmentation and classification of dermoscopic images remain prominent obstacles in automated skin cancer evaluation due, in part, to variability in lesions, low-contrast borders, and additional artifacts in the background. There have been recent developments in foundation models, with a particular emphasis on the Segment Anything Model (SAM)—these models exhibit strong generalization potential but require domain-specific adaptation to function effectively in medical imaging. The advent of new architectures, particularly Vision Transformers (ViTs), expands the means of implementing robust lesion identification; however, their strengths are limited without spatial priors. Methods: The proposed study lays out an integrated foundation-model-based framework that uses SAM-Adapter fine-tuning for lesion segmentation and a ViT-based classifier that incorporates lesion-specific cropping derived from segmentation and cross-attention fusion. The SAM encoder is kept frozen while only the lightweight adapters are fine-tuned, introducing skin-surface-specific capacity. Segmentation priors are incorporated during the classification stage through fusion with image patch embeddings, creating lesion-centric reasoning. The entire pipeline is trained with a joint multi-task approach on data from the ISIC 2018, HAM10000, and PH2 datasets. Results: In extensive experimentation, the proposed method outperforms state-of-the-art segmentation and classification approaches across the datasets. On the ISIC 2018 dataset, it achieves a Dice score of 94.27% for segmentation and an accuracy of 95.88% for classification. On PH2, a Dice score of 95.62% is achieved, and on HAM10000, an accuracy of 96.37% is achieved. Several ablation analyses confirm that the SAM-Adapters, lesion-specific cropping, and cross-attention fusion each contribute substantially to performance. Paired t-tests confirm statistical significance for all the stated measures, with improvements over strong baselines reaching p < 0.01 for most comparisons and showing large effect sizes. Conclusions: The results indicate that combining segmentation priors from foundation models with transformer-based classification consistently and reliably improves lesion boundary quality and diagnostic accuracy. Thus, the proposed SAM-ViT framework demonstrates robust, generalizable, and lesion-centric automated dermoscopic analysis and represents a promising initial step towards a clinically deployable skin cancer decision-support system. Next steps will include model compression, improved pseudo-mask refinement, and evaluation on real-world multi-center clinical cohorts. Full article
(This article belongs to the Special Issue Medical Image Analysis and Machine Learning)
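
The paired t-tests reported above compare per-image metrics of two models evaluated on the same test images. A minimal sketch with scipy, using synthetic per-image Dice scores (the arrays are placeholders, not the study's data):

```python
# Minimal sketch: paired t-test on per-image Dice scores of two segmentation
# models evaluated on the same images. Scores are synthetic placeholders.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
dice_baseline = rng.normal(loc=0.90, scale=0.03, size=200)
dice_proposed = dice_baseline + rng.normal(loc=0.02, scale=0.01, size=200)

t_stat, p_value = ttest_rel(dice_proposed, dice_baseline)
diff = dice_proposed - dice_baseline
cohens_d = diff.mean() / diff.std(ddof=1)            # paired effect size

print(f"t = {t_stat:.2f}, p = {p_value:.2e}, Cohen's d = {cohens_d:.2f}")
```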

30 pages, 616 KB  
Article
Structural Preservation in Time Series Through Multiscale Topological Features Derived from Persistent Homology
by Luiz Carlos de Jesus, Francisco Fernández-Navarro and Mariano Carbonero-Ruz
Mathematics 2026, 14(3), 538; https://doi.org/10.3390/math14030538 - 2 Feb 2026
Abstract
A principled, model-agnostic framework for structural feature extraction in time series is presented, grounded in topological data analysis (TDA). The motivation stems from two gaps identified in the literature: First, compact and interpretable representations that summarise the global geometric organisation of trajectories across scales remain scarce. Second, a unified, task-agnostic protocol for evaluating structure preservation against established non-topological families is still missing. To address these gaps, time-delay embeddings are employed to reconstruct phase space, sliding windows are used to generate local point clouds, and Vietoris–Rips persistent homology (up to dimension two) is computed. The resulting persistence diagrams are summarised with three transparent descriptors—persistence entropy, maximum persistence amplitude, and feature counts—and concatenated across delays and window sizes to yield a multiscale representation designed to complement temporal and spectral features while remaining computationally tractable. A unified experimental design is specified in which heterogeneous, regularly sampled financial series are preprocessed on native calendars and contrasted with competitive baselines spanning lagged, calendar-driven, difference/change, STL-based, delay-embedding PCA, price-based statistical, signature (FRUITS), and network-derived (NetF) features. Structure preservation is assessed through complementary criteria that probe spectral similarity, variance-scaled reconstruction fidelity, and the conservation of distributional shape (location, scale, asymmetry, tails). The study is positioned as an evaluation of representations, rather than a forecasting benchmark, emphasising interpretability, comparability, and methodological transparency while outlining avenues for adaptive hyperparameter selection and alternative filtrations. Full article
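
The three topological descriptors named above (persistence entropy, maximum persistence amplitude, feature count) are simple functions of a persistence diagram, and the time-delay embedding is a few lines of array slicing. The sketch below assumes the diagram itself has already been computed elsewhere (e.g., by a Vietoris–Rips library); it follows the standard definitions, not code from the paper.

```python
# Minimal sketch: time-delay embedding plus the three persistence-diagram
# summaries described above. The diagram is assumed to be an (n, 2) array of
# (birth, death) pairs produced elsewhere (e.g., by a Vietoris-Rips library).
import numpy as np

def delay_embedding(x: np.ndarray, dim: int, tau: int) -> np.ndarray:
    """Map a 1-D series to points [x_t, x_{t+tau}, ..., x_{t+(dim-1)tau}]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def diagram_summaries(diagram: np.ndarray) -> dict:
    """Persistence entropy, maximum persistence, and feature count."""
    lifetimes = diagram[:, 1] - diagram[:, 0]
    lifetimes = lifetimes[np.isfinite(lifetimes) & (lifetimes > 0)]
    p = lifetimes / lifetimes.sum()
    return {
        "persistence_entropy": float(-(p * np.log(p)).sum()),
        "max_persistence": float(lifetimes.max()),
        "feature_count": int(len(lifetimes)),
    }

# toy usage with a synthetic diagram (placeholder values)
cloud = delay_embedding(np.sin(np.linspace(0, 20, 500)), dim=3, tau=5)
toy_diagram = np.array([[0.0, 0.8], [0.1, 0.3], [0.2, 0.25]])
print(cloud.shape, diagram_summaries(toy_diagram))
```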

20 pages, 2657 KB  
Article
A Multicomponent Communication Intervention to Reduce the Psycho-Emotional Effects of Critical Illness in ICU Patients Related to Their Level of Consciousness: CONECTEM
by Marta Prats-Arimon, Montserrat Puig-Llobet, Mar Eseverri-Rovira, Elisabet Gallart, David Téllez-Velasco, Sara Shanchez-Balcells, Zaida Agüera, Khadija El Abidi-El Ghazouani, Teresa Lluch-Canut, Miguel Angel Hidalgo-Blanco and Mª Carmen Moreno-Arroyo
J. Clin. Med. 2026, 15(3), 1154; https://doi.org/10.3390/jcm15031154 - 2 Feb 2026
Abstract
Background/Objectives: Patients admitted to intensive care units (ICUs) are confronted with complex clinical situations that impact their physical condition and psychological well-being. Psycho-emotional disorders such as pain, anxiety, and post-traumatic stress are highly prevalent in this context, significantly affecting both the patient’s experience and the quality of care provided. Effective communication can help manage patients’ psycho-emotional states and prevent post-ICU disorders. The aim was to evaluate the effectiveness of the CONECTEM communicative intervention in improving the psycho-emotional well-being of critically ill patients admitted to the intensive care unit with respect to pain, anxiety, and post-traumatic stress symptoms. Methods: A quasi-experimental study employed a pre–post-test design with both a control group and an intervention group. The study was conducted in two ICUs in a tertiary hospital in Spain. A total of 111 critically ill patients and 180 nurse–patient interactions were included according to the inclusion/exclusion criteria. Interactions were classified according to the patient’s level of consciousness into three groups: G1 (Glasgow 15), G2 (Glasgow 14–9), and G3 (Glasgow < 9). Depending on the patient’s communication difficulties, nurses selected one of three communication strategies of the CONECTEM intervention (low-tech AAC, pictograms, magnetic board, and music therapy). Pain was assessed using the VAS or BPS scale, anxiety using the STAI, and symptoms of PTSD using the IES-R. The RASS scale was used to evaluate the degree of sedation and agitation in critically ill patients receiving mechanical ventilation. Data analysis was performed using repeated-measures ANOVA for the pre–post-test, as well as Pearson’s correlation test and Mann–Whitney U or Kruskal–Wallis tests. Results: The results showed pre–post differences in pain after the intervention in patients with Glasgow scores of 15 (p < 0.001) and 14–9 (p < 0.001), as well as in anxiety (p = 0.010), the prevalence of which fell from 50% pre-test to 26.7% post-test. Patients in the intervention group with levels of consciousness of Glasgow 15–9 tended to show reduced post-traumatic stress symptoms, with lower mean IES-R scores for patients with a Glasgow score of 15 [24.7 (±15.20) vs. 22.5 (±14.11)] and for patients with a Glasgow score of 14–9 [30.2 (±13.56) vs. 27.9 (±11.14)], though this was not significant. Given that patients with a Glasgow score below 9 were deeply sedated (RASS −4), no pre–post-test differences were observed in relation to agitation levels. Conclusions: The CONECTEM communication intervention outcomes differed between pre- and post-intervention assessments regarding pain in patients with a Glasgow Coma Scale score of 15–9. These findings are consistent with a potential benefit of the CONECTEM communication intervention, although further studies using designs that allow for stronger causal inference are needed to assess its impact on the psycho-emotional well-being of critically ill patients. Full article
(This article belongs to the Special Issue Clinical Management and Long-Term Prognosis in Intensive Care)
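
The Mann–Whitney U and Kruskal–Wallis tests named in the methods above are available directly in scipy. A minimal sketch with synthetic scores (group labels and values are placeholders, not study data):

```python
# Minimal sketch: non-parametric comparison of post-test anxiety scores between
# control and intervention groups, and of stress scores across three
# consciousness groups. All values are synthetic placeholders.
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

rng = np.random.default_rng(1)
control_post = rng.normal(45, 10, 40)        # STAI-like scores, control group
intervention_post = rng.normal(38, 10, 40)   # intervention group

u_stat, p_u = mannwhitneyu(control_post, intervention_post, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_u:.3f}")

g1 = rng.normal(22, 5, 30)   # IES-R-like scores, Glasgow 15
g2 = rng.normal(28, 5, 30)   # Glasgow 14-9
g3 = rng.normal(30, 5, 30)   # Glasgow < 9
h_stat, p_h = kruskal(g1, g2, g3)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_h:.3f}")
```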

24 pages, 346 KB  
Article
The Role of Legal and Regulatory Frameworks in Driving Digital Transformation for the Banking Sector in Qatar with Global Benchmarks
by Bothaina Alsobai and Dalal Aassouli
J. Risk Financial Manag. 2026, 19(2), 99; https://doi.org/10.3390/jrfm19020099 - 2 Feb 2026
Abstract
This study evaluates how legal and regulatory architectures shape banks’ digital transformation in Qatar relative to peer jurisdictions and isolates the regulatory components that most strongly predict observed differences in digital maturity. Employing a comparative mixed-methods design, the study links a structured legal-regulatory assessment to quantitative benchmarking of fifteen banks (five Qatar, ten international) using a Digital Maturity Index and inferential tests (descriptive statistics, independent-samples t-tests, and OLS regressions). International banks exhibit higher average digital maturity than Qatar banks, and across the sample, regulatory clarity and coherence are positively and significantly associated with digital maturity, whereas supervisory intensity alone shows no comparable effect; implementation frictions in open banking/interoperability, unified data protection, and approval timelines constrain collaboration and product rollout in Qatar. Moreover, the cross-sectional design, modest sample size, and index weighting choices limit causal inference and external validity, indicating the need for longitudinal and quasi-experimental designs to corroborate mechanisms and generalize findings. Policymakers should adopt risk-proportionate, outcomes-based rules, codify interoperable API standards, strengthen data rights and cloud/third-party governance, and establish sector-level KPIs to match supervisory expectations with bank execution and accelerate safe digitalization. Enhancements to privacy, data portability, and inclusive digital onboarding are likely to improve consumer trust, competition, and access, thereby advancing broad-based participation in digital financial services. The study integrates legal analysis with bank-level operational metrics through an analytically tractable index and a Qatar–international comparison, demonstrating the outsized role of regulatory clarity in advancing digital maturity. Full article
(This article belongs to the Section Banking and Finance)
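
The inferential setup described above (independent-samples t-tests and OLS regressions on a Digital Maturity Index) maps onto statsmodels and scipy in a few lines. Variable names and values below are illustrative placeholders, not the study's data.

```python
# Minimal sketch: regressing a maturity index on regulatory features and
# comparing two bank groups, mirroring the t-test/OLS setup described above.
import numpy as np
import statsmodels.api as sm
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
n = 15
regulatory_clarity = rng.uniform(0, 1, n)
supervisory_intensity = rng.uniform(0, 1, n)
maturity = 0.3 + 0.5 * regulatory_clarity + rng.normal(0, 0.05, n)

X = sm.add_constant(np.column_stack([regulatory_clarity, supervisory_intensity]))
model = sm.OLS(maturity, X).fit()
print(model.summary())

# group comparison (first 5 = Qatar banks, rest = international, as placeholders)
qatar, international = maturity[:5], maturity[5:]
t_stat, p_val = ttest_ind(international, qatar, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
```
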
53 pages, 7826 KB  
Article
Neural Network Method for Detecting Low-Intensity DDoS Attacks with Stochastic Fragmentation and Its Adaptation to Law Enforcement Activities in the Cyber Protection of Critical Infrastructure Facilities
by Serhii Vladov, Victoria Vysotska, Łukasz Ścisło, Rafał Dymczyk, Oleksandr Posashkov, Mariia Nazarkevych, Oleksandr Yunin, Liliia Bobrishova and Yevheniia Pylypenko
Computers 2026, 15(2), 84; https://doi.org/10.3390/computers15020084 - 1 Feb 2026
Abstract
This article develops a method for the early detection of low-intensity DDoS attacks based on a three-factor vector metric and implements an applied hybrid neural network traffic analysis system that combines preprocessing stages, competitive pretraining (SOM), a radial basis layer, and an associative Grossberg output, followed by gradient optimisation. The initial tools used are statistical online estimates (moving or EWMA estimates), CUSUM-like statistics for identifying small stable shifts, and deterministic signature filters. An algorithm has been developed that aggregates the components of fragmentation, reception intensity, and service availability into a single index. Key features include the physically interpretable features, a hybrid neural network architecture with associative stability and low computational complexity, and built-in mechanisms for adaptive threshold calibration and online training. An experimental evaluation of the developed method using real telemetry data demonstrated high recognition performance of the proposed approach (accuracy is 0.945, AUC is 0.965, F1 is 0.945, localisation accuracy is 0.895, with an average detection latency of 55 ms), with these results outperforming the compared CNN-LSTM and Transformer solutions. The scientific contribution of this study lies in the development of a robust, computationally efficient, and application-oriented solution for detecting low-intensity attacks with the ability to integrate into edge and SOC systems. Practical recommendations for reducing false positives and further improvements through low-training methods and hardware acceleration are also proposed. Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (3rd Edition))
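
The EWMA smoothing and CUSUM-like statistics mentioned above, used to flag small but persistent shifts, can be written in a few lines. A minimal sketch with a synthetic request-rate series; the reference value, threshold, and parameters are illustrative, not the paper's settings.

```python
# Minimal sketch: EWMA smoothing and a one-sided CUSUM statistic for detecting
# a small, persistent upward shift in request intensity. Series, reference
# value k, and threshold h are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(3)
rate = np.concatenate([rng.normal(100, 5, 300),      # normal traffic
                       rng.normal(106, 5, 200)])     # low-intensity attack

def ewma(x, alpha=0.1):
    out = np.empty_like(x, dtype=float)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def cusum_upper(x, target, k=2.0):
    s = np.zeros(len(x))
    for i in range(1, len(x)):
        s[i] = max(0.0, s[i - 1] + x[i] - target - k)
    return s

smoothed = ewma(rate)
s = cusum_upper(rate, target=100.0)
h = 25.0                                              # decision threshold
alarms = np.nonzero(s > h)[0]
print("first alarm at index:", alarms[0] if alarms.size else None)
```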

22 pages, 3300 KB  
Article
Normalization Challenges Across Adipocyte Differentiation and Lipid-Modulating Treatments: Identifying Reliable Housekeeping Genes
by Zhenya Ivanova, Valeria Petrova, Toncho Penev and Natalia Grigorova
Int. J. Mol. Sci. 2026, 27(3), 1369; https://doi.org/10.3390/ijms27031369 - 29 Jan 2026
Abstract
Accurate normalization of RT-qPCR data requires selecting stable internal control genes, particularly in models characterized by dynamic metabolic transitions, such as 3T3-L1 adipocytes. The current study compares the expression stability of nine widely used housekeeping genes (HKGs) (peptidylprolyl isomerase A (Ppia), glyceraldehyde-3-phosphate dehydrogenase (Gapdh), beta-2 microglobulin (B2M), ribosomal protein, large, P0 (36b4), hydroxymethylbilane synthase (Hmbs), hypoxanthine guanine phosphoribosyl transferase (Hprt), tyrosine 3-monooxygenase/tryptophan 5-monooxygenase activation protein, zeta polypeptide (Ywhaz), 18S ribosomal RNA (18S), and β-actin (Actb)) across key stages of differentiation (days 0, 9, and 18) and under treatments with palmitic acid and docosahexaenoic acid. Stability was assessed using four classical algorithms—geNorm, NormFinder, BestKeeper, and RefFinder—supplemented by the ΔCt method, conventional statistical testing, correlation, and regression analysis relative to two target genes, fatty acid-binding protein 4 (Fabp4) and sterol regulatory element binding transcription factor 1 (Srebf1). The obtained data indicate that no single HKG remains universally stable across these experimental conditions, and the expression of traditionally used reference genes (Gapdh, Actb, Hprt, 18S) is highly influenced by both the stage of adipogenesis and exposure to lipid-modulating factors. In contrast, Ppia, 36b4, and B2M—despite some of them being underestimated in use as references—consistently display the lowest variability across most analytical tools, forming a reliable and functionally diverse normalization panel. It should be noted that our initial stability assessment revealed apparent discrepancies among mathematical evaluation methods, emphasizing the need for a holistic, multiple-level approach strategy. The applied combination of algorithmic and statistical methods provides a more rigorous and objective framework for assessing the stability of reference genes, which is highly recommended in such a complex adipocyte-based model. Full article
(This article belongs to the Special Issue Fat and Obesity: Molecular Mechanisms and Pathogenesis)
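
The ΔCt approach mentioned above scores a candidate reference gene by how variable its pairwise Ct differences are against the other candidates across samples. A minimal sketch of that idea; the gene names are reused from the abstract, but the Ct values are invented placeholders.

```python
# Minimal sketch of the comparative delta-Ct idea: a candidate reference gene
# is scored by the mean SD of its Ct differences against every other candidate
# across samples. Ct values below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(4)
genes = ["Ppia", "Gapdh", "B2M", "36b4", "Actb"]
ct = {g: rng.normal(loc=20 + i, scale=0.3 + 0.1 * i, size=12)
      for i, g in enumerate(genes)}                    # 12 samples per gene

def delta_ct_stability(gene: str) -> float:
    sds = [np.std(ct[gene] - ct[other], ddof=1)
           for other in genes if other != gene]
    return float(np.mean(sds))

ranking = sorted(genes, key=delta_ct_stability)        # lower = more stable
for g in ranking:
    print(f"{g}: mean SD of delta-Ct = {delta_ct_stability(g):.3f}")
```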

27 pages, 1310 KB  
Article
Adversarial Attack Resilient ML-Assisted Golden Free Approach for Hardware Trojan Detection
by Ashutosh Ghimire, Mohammed Alkurdi, Ghazal Ghajari, Mohammad Arif Hossain and Fathi Amsaad
Microelectronics 2026, 2(1), 2; https://doi.org/10.3390/microelectronics2010002 - 29 Jan 2026
Abstract
The growing dependence on third-party foundries for integrated circuit (IC) fabrication has created major security concerns because of hardware Trojan (HT) insertion risks. Traditional detection methods, including side-channel analysis and golden reference models, face limitations such as sensitivity to noise, high cost, and impracticality for large-scale deployment. This work introduces a machine learning framework for HT detection that eliminates the need for golden references. The framework automatically extracts statistical features from chip data, groups chips into clusters, and uses an internal filtering process to identify the most reliable patterns. These patterns are then used to guide a learning model that can accurately separate Trojan-infected chips from clean ones. Experimental evaluation demonstrates that the proposed method achieves high detection accuracy with zero false negatives, while remaining resilient against adversarial perturbations. These findings indicate that cluster-filtered pseudo-labeling provides a practical and scalable solution for enhancing hardware security in modern IC supply chains. Full article
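
Cluster-filtered pseudo-labeling, as described in general terms above, can be illustrated by clustering unlabeled feature vectors, keeping only points close to their centroid as "reliable", and training a classifier on those pseudo-labels. The sketch below is a generic illustration with synthetic data and scikit-learn, not the authors' pipeline.

```python
# Generic sketch of cluster-filtered pseudo-labeling: cluster unlabeled chip
# features, keep only samples near their centroid, then train a classifier on
# the resulting pseudo-labels. Synthetic data; not the authors' pipeline.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
clean = rng.normal(0.0, 1.0, size=(200, 8))
trojan = rng.normal(1.5, 1.0, size=(40, 8))
X = np.vstack([clean, trojan])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
reliable = dist < np.percentile(dist, 60)             # keep the closest 60%

clf = RandomForestClassifier(random_state=0).fit(X[reliable], km.labels_[reliable])
print("pseudo-label distribution:", np.bincount(km.labels_[reliable]))
print("predicted cluster of a new sample:", clf.predict(rng.normal(1.5, 1.0, (1, 8))))
```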

26 pages, 6713 KB  
Article
Deep Learning-Based Damage Detection on Composite Bridge Using Vibration Signals Under Varying Temperature Conditions
by Arjun Poudel, Jae Yeol Song, Byoung Hooi Cho and Janghwan Kim
Appl. Sci. 2026, 16(3), 1263; https://doi.org/10.3390/app16031263 - 26 Jan 2026
Abstract
The dynamic characteristics of bridges are not only influenced by structural damage but also by ambient environmental variations. If environmental factors are not incorporated into the detection algorithm, they may lead to false positives or false negatives. In recent years, vibration-based damage detection methods have gained significant attention in structural health monitoring (SHM), particularly for assessing structural integrity under varying temperature conditions. This study introduces a deep-learning framework for identifying damage in composite bridges by utilizing both time-domain and frequency-domain vibration signals while explicitly accounting for temperature effects. Two deep learning models—Convolutional Neural Network (CNN) and Artificial Neural Network (ANN)—were implemented and compared. The effectiveness of the proposed damage identification approach was evaluated using an experimental dataset obtained from a composite bridge structure. Furthermore, statistical evaluation metrics—including accuracy, precision, recall, F1 score, and the ROC curve—were used to compare the damage detection performance of the two deep learning models. The results reveal that the CNN model consistently outperforms the ANN in terms of classification accuracy. Moreover, frequency-domain analysis was shown to be more effective than time-domain analysis for damage classification, and integrating temperature data with vibration signals improved the performance of all model architectures. Full article
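
The evaluation metrics listed above (accuracy, precision, recall, F1, ROC) are each one call in scikit-learn. A minimal sketch for a binary damage/no-damage classifier; labels and scores are synthetic placeholders.

```python
# Minimal sketch: computing the metrics used above for a damage classifier.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

rng = np.random.default_rng(6)
y_true = rng.integers(0, 2, 500)
scores = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, 500), 0, 1)  # model scores
y_pred = (scores > 0.5).astype(int)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
print("ROC AUC  :", roc_auc_score(y_true, scores))
```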

51 pages, 12791 KB  
Article
Generative Adversarial Networks for Energy-Aware IoT Intrusion Detection: Comprehensive Benchmark Analysis of GAN Architectures with Accuracy-per-Joule Evaluation
by Iacovos Ioannou and Vasos Vassiliou
Sensors 2026, 26(3), 757; https://doi.org/10.3390/s26030757 - 23 Jan 2026
Abstract
The proliferation of Internet of Things (IoT) devices has created unprecedented security challenges characterized by resource constraints, heterogeneous network architectures, and severe class imbalance in attack detection datasets. This paper presents a comprehensive benchmark evaluation of five Generative Adversarial Network (GAN) architectures for energy-aware intrusion detection: Standard GAN, Progressive GAN (PGAN), Conditional GAN (cGAN), Graph-based GAN (GraphGAN), and Wasserstein GAN with Gradient Penalty (WGAN-GP). Our evaluation framework introduces novel energy-normalized performance metrics, including Accuracy-per-Joule (APJ) and F1-per-Joule (F1PJ), that enable principled architecture selection for energy-constrained deployments. We propose an optimized WGAN-GP architecture incorporating diversity loss, feature matching, and noise injection mechanisms specifically designed for classification-oriented data augmentation. Experimental results on a stratified subset of the BoT-IoT dataset (approximately 1.83 million records) demonstrate that our optimized WGAN-GP achieves state-of-the-art performance, with 99.99% classification accuracy, a 0.99 macro-F1 score, and superior generation quality (MSE 0.01). While traditional classifiers augmented with SMOTE (i.e., Logistic Regression and CNN1D-TCN) also achieve 99.99% accuracy, they suffer from poor minority class detection (77.78–80.00%); our WGAN-GP improves minority class detection to 100.00% on the reported test split (45 of 45 attack instances correctly identified). Furthermore, WGAN-GP provides substantial efficiency advantages under our energy-normalized metrics, achieving superior accuracy-per-joule performance compared to Standard GAN. Also, a cross-dataset validation across five benchmarks (BoT-IoT, CICIoT2023, ToN-IoT, UNSW-NB15, CIC-IDS2017) was implemented using 250 pooled test attacks to confirm generalizability, with WGAN-GP achieving 98.40% minority class accuracy (246/250 attacks detected) compared to 76.80% for Classical + SMOTE methods, a statistically significant 21.60 percentage point improvement (p<0.0001). Finally, our analysis reveals that incorporating diversity-promoting mechanisms in GAN training simultaneously achieves best generation quality AND best classification performance, demonstrating that these objectives are complementary rather than competing. Full article
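
The energy-normalized metrics named above can be read as accuracy (or F1) divided by the energy consumed for the evaluated workload; that exact definition is an assumed reading, since the abstract does not spell out the formula. A minimal sketch with placeholder numbers:

```python
# Minimal sketch of energy-normalized metrics. The definitions APJ = accuracy /
# energy and F1PJ = F1 / energy are an assumed reading of the abstract, and the
# numbers below are placeholders, not the paper's measurements.
from dataclasses import dataclass

@dataclass
class ModelRun:
    name: str
    accuracy: float
    f1: float
    energy_joules: float   # measured energy for the evaluated workload

    @property
    def accuracy_per_joule(self) -> float:
        return self.accuracy / self.energy_joules

    @property
    def f1_per_joule(self) -> float:
        return self.f1 / self.energy_joules

runs = [ModelRun("WGAN-GP", 0.9999, 0.99, 120.0),
        ModelRun("Standard GAN", 0.9950, 0.95, 180.0)]
for r in sorted(runs, key=lambda r: r.accuracy_per_joule, reverse=True):
    print(f"{r.name}: APJ={r.accuracy_per_joule:.5f}, F1PJ={r.f1_per_joule:.5f}")
```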

31 pages, 4237 KB  
Article
Cutting Force Mechanisms in Drilling 90MnCrV8 Tool Steel: ANOVA and Theoretical Insights
by Jaroslava Fulemová, Josef Sklenička, Jan Hnátík, Miroslav Gombár, Jindřich Sýkora, Michal Povolný and Adam Lukáš
J. Manuf. Mater. Process. 2026, 10(1), 38; https://doi.org/10.3390/jmmp10010038 - 20 Jan 2026
Abstract
This study investigates the influence of tool geometry and cutting parameters on thrust forces and process stability during the drilling of 90MnCrV8, a hard and wear-resistant tool steel. The objective was to identify the dominant and interactive effects of feed per revolution (f_rev), nominal tool diameter (D), cutting speed (v_c), and geometry angles (ε_r, α_o, ω_r) on the thrust force (F_f). Experimental data were evaluated using analysis of variance (ANOVA) to determine statistical significance and effect size (η²), supported by theoretical models by Kienzle, Merchant, Oxley, and Zorev to explain observed physical trends. Feed per revolution had the most decisive influence on thrust force (η² = 0.690; p < 0.001), followed by tool diameter (D; η² = 0.188). Geometric parameters showed secondary yet significant effects, mainly on stress distribution and chip evacuation. The interaction between D and f_rev produced a multiplicative force increase, while the combination of f_rev and the helix angle (ω_r) reduced friction at higher feeds. Cutting speed had a minor effect (η² = 0.007), suggesting limited thermal softening. The findings confirm that drilling hard steels is primarily governed by the energy of plastic deformation. Full article
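
Eta-squared, the effect-size measure quoted above, is the ratio of a factor's sum of squares to the total sum of squares in the ANOVA. A minimal one-way sketch with scipy and synthetic thrust-force data (the values and factor levels are placeholders):

```python
# Minimal sketch: one-way ANOVA on thrust force grouped by feed per revolution,
# with eta-squared computed as SS_between / SS_total. Forces are synthetic.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(7)
groups = [rng.normal(mu, 30, 10) for mu in (400, 650, 900)]  # three feed levels

f_stat, p_value = f_oneway(*groups)

all_vals = np.concatenate(groups)
grand_mean = all_vals.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = ((all_vals - grand_mean) ** 2).sum()
eta_squared = ss_between / ss_total

print(f"F = {f_stat:.2f}, p = {p_value:.2e}, eta^2 = {eta_squared:.3f}")
```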

15 pages, 1238 KB  
Article
Use and Safety of Tyrphostin AG17 as a Stabilizer in Foods and Dietary Supplements Based on Toxicological Studies and QSAR Analysis
by Osvaldo Garrido-Acosta, Ramón Soto-Vázquez, Gabriel Marcelín-Jiménez and Luis Jesús García-Aguirre
Foods 2026, 15(2), 350; https://doi.org/10.3390/foods15020350 - 18 Jan 2026
Abstract
This study evaluated two formulations of L-carnitine, which were developed and impregnated in an oil-based self-emulsifying system (SEDDS), the first with tyrphostin AG17 and the second without the addition of tyrphostin AG17. The formulation with tyrphostin AG17 showed the presence of stable microvesicles up to 498 h after its preparation. To establish a robust safety profile in compliance with modern regulatory frameworks and the 3Rs principle (replacement, reduction, and refinement), a toxicological evaluation was conducted integrating an in silico quantitative structure–activity relationship (QSAR) analysis with confirmatory in vivo subchronic toxicity studies. The QSAR analysis, performed using the OECD QSAR Toolbox and strictly adhering to Organization for Economic Co-operation and Development (OECD) validation principles, predicted an acute oral LD50 of 91.5 mg/kg in rats, a value showing high concordance with the historical experimental data (87 mg/kg). Furthermore, computational modeling for repeated-dose toxicity yielded a no-observed-adverse-effect level (NOAEL) of 80.0 mg/kg bw/day, a no-observed-effect level (NOEL) of 60.4 mg/kg bw/day, and an acceptable daily intake (ADI) of 56 mg/day. These computational findings were substantiated by a 90-day subchronic toxicity study in male Wistar rats, in which daily intragastric administration of tyrphostin AG17 at doses up to 1.75 mg/kg resulted in no statistically significant hematotoxic activity (p < 0.05), with a maximum cumulative dose over 90 days of 157.5 mg/kg. Collectively, these data indicate that tyrphostin AG17 combines high stabilizing efficacy with a manageable safety profile, supporting its proposed regulatory status as a functional food additive. Based on these results, it is concluded that tyrphostin AG17 shows promising characteristics for use as a stabilizer in food and other substances. Full article
(This article belongs to the Section Food Toxicology)

20 pages, 8641 KB  
Article
A Novel Stochastic Finite Element Model Updating Method Based on Multi-Point Sensitivities
by Zheng Yang, Zhiyu Shi and Jinyan Li
Appl. Sci. 2026, 16(2), 867; https://doi.org/10.3390/app16020867 - 14 Jan 2026
Abstract
A novel stochastic finite element model updating method based on multi-point sensitivities is proposed to improve the reproduction and prediction ability of finite element models for experimental data. Drawing upon the theory of small perturbations, this approach employs the sensitivity matrix in conjunction with the probability distribution of responses evaluated at multiple parameter points to determine the probability density associated with each parameter point and to estimate the statistical properties of the parameters. To achieve this objective, principal component analysis is employed to unify the dimensionality of the parameters and the responses; the least squares method was used to estimate the characteristics of the parameters. The reliability and validity of this method were confirmed through experimentation with a 3-degree-of-freedom spring-mass system and an aerospace thermal insulation structure. A comparison of this method with classical methods reveals significant advantages in terms of robustness across varying computational scales. Notably, it attains superior accuracy with smaller sample sizes while maintaining precision comparable to conventional methods with large samples. Consequently, this characteristic confers upon the method a distinct advantage in scenarios where the costs of finite element computation are prohibitively high. Full article
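
A single-point sensitivity update is the simplest relative of the multi-point scheme described above: linearize the response around the current parameters and solve a least-squares step. The sketch below shows only that deterministic baseline with a toy model, not the authors' stochastic multi-point method; model and numbers are illustrative.

```python
# Minimal sketch of a classical sensitivity-based updating step (a deliberately
# simplified, single-point relative of the multi-point stochastic method above):
# linearize r(theta) ~ r(theta0) + S (theta - theta0) and solve by least squares.
import numpy as np

def responses(theta: np.ndarray) -> np.ndarray:
    # toy "model": two natural frequencies depending on stiffness parameters
    k1, k2 = theta
    return np.array([np.sqrt(k1), np.sqrt(k1 + k2)])

def sensitivity(theta: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    base = responses(theta)
    cols = []
    for j in range(len(theta)):
        step = theta.copy()
        step[j] += eps
        cols.append((responses(step) - base) / eps)   # finite-difference column
    return np.column_stack(cols)

theta = np.array([4.0, 5.0])                          # initial parameter guess
measured = responses(np.array([4.5, 5.8]))            # "experimental" responses

for _ in range(5):                                    # Gauss-Newton-style loop
    S = sensitivity(theta)
    delta, *_ = np.linalg.lstsq(S, measured - responses(theta), rcond=None)
    theta = theta + delta

print("updated parameters:", theta)
```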

36 pages, 6828 KB  
Article
Discriminating Music Sequences Method for Music Therapy—DiMuSe
by Emil A. Canciu, Florin Munteanu, Valentin Muntean and Dorin-Mircea Popovici
Appl. Sci. 2026, 16(2), 851; https://doi.org/10.3390/app16020851 - 14 Jan 2026
Abstract
The purpose of this research was to investigate whether music empirically associated with therapeutic effects contains intrinsic informational structures that differentiate it from other sound sequences. Drawing on ontology, phenomenology, nonlinear dynamics, and complex systems theory, we hypothesize that therapeutic relevance may be linked to persistent structural patterns embedded in musical signals rather than to stylistic or genre-related attributes. This paper introduces the Discriminating Music Sequences (DiMuSe) method, an unsupervised, structure-oriented analytical framework designed to detect such patterns. The method applies 24 scalar evaluators derived from statistics, fractal geometry, nonlinear physics, and complex systems, transforming sound sequences into multidimensional vectors that characterize their global temporal organization. Principal Component Analysis (PCA) reduces this feature space to three dominant components (PC1–PC3), enabling visualization and comparison in a reduced informational space. Unsupervised k-Means clustering is subsequently applied in the PCA space to identify groups of structurally similar sound sequences, with cluster quality evaluated using Silhouette and Davies–Bouldin indices. Beyond clustering, DiMuSe implements ranking procedures based on relative positions in the PCA space, including distance to cluster centroids, inter-item proximity, and stability across clustering configurations, allowing melodies to be ordered according to their structural proximity to the therapeutic cluster. The method was first validated using synthetically generated nonlinear signals with known properties, confirming its capacity to discriminate structured time series. It was then applied to a dataset of 39 music and sound sequences spanning therapeutic, classical, folk, religious, vocal, natural, and noise categories. The results show that therapeutic music consistently forms a compact and well-separated cluster and ranks highly in structural proximity measures, suggesting shared informational characteristics. Notably, pink noise and ocean sounds also cluster near therapeutic music, aligning with independent evidence of their regulatory and relaxation effects. DiMuSe-derived rankings were consistent with two independent studies that identified the same musical pieces as highly therapeutic. The present research remains at a theoretical stage. Our method has not yet been tested in clinical or experimental therapeutic settings and does not account for individual preference, cultural background, or personal music history, all of which strongly influence therapeutic outcomes. Consequently, DiMuSe does not claim to predict individual efficacy but rather to identify structural potential at the signal level. Future work will focus on clinical validation, integration of biometric feedback, and the development of personalized extensions that combine intrinsic informational structure with listener-specific response data. Full article
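
The unsupervised pipeline described above (scalar features, PCA to three components, k-means, then Silhouette and Davies–Bouldin scores and centroid-distance ranking) maps directly onto scikit-learn. A minimal sketch with a random feature matrix standing in for the 24 evaluators:

```python
# Minimal sketch of the clustering pipeline described above: feature matrix ->
# PCA (3 components) -> k-means -> Silhouette and Davies-Bouldin indices, plus
# distance-to-centroid ranking. The feature matrix is a random placeholder.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, davies_bouldin_score

rng = np.random.default_rng(8)
features = rng.normal(size=(39, 24))                  # 39 sequences x 24 evaluators

X = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(features))
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

print("silhouette     :", silhouette_score(X, km.labels_))
print("davies-bouldin :", davies_bouldin_score(X, km.labels_))

# rank items by distance to the centroid of a chosen cluster (e.g., cluster 0)
dist = np.linalg.norm(X - km.cluster_centers_[0], axis=1)
print("closest items to cluster 0:", np.argsort(dist)[:5])
```
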
24 pages, 654 KB  
Article
Examination of the Effects of a Play-Based Mindfulness Training Program on Resilience, Emotion Regulation Skills, and Executive Functions of Preschool Children
by Betül Kapkın İçen and Osman Tayyar Çelik
Children 2026, 13(1), 110; https://doi.org/10.3390/children13010110 - 12 Jan 2026
Abstract
Background/Objectives: The cognitive processes underlying learning are critical for educational practices. While mindfulness-based approaches to strengthening these cognitive processes have become widespread, studies focusing on game-based development of executive functions, particularly in preschool settings, are limited. The primary objective of this study is to develop a play-based mindfulness intervention program for preschool children and to examine the effects of this program on preschool children’s resilience, emotion regulation skills, and executive functions. Methods: The study employed a pretest–post-test control-group experimental design. The study group consisted of 40 children (20 experimental and 20 control) aged 5–6 years, attending a kindergarten in Malatya province, Türkiye. The Devereux Early Childhood Assessment Scale (DECA-P2), Emotion Regulation Scale (ERS), and Childhood Executive Functions Inventory (CHEXI) were used as data collection tools. Independent-samples t-tests were used for baseline analysis, and a two-way repeated-measures ANOVA was used to evaluate the program’s effects. Results: Findings showed that there was a statistically significant difference between the pre-test and post-test mean scores of the children in the experimental group compared with those in the control group for resilience, emotion regulation, and executive function (p < 0.05). Conclusions: Strong evidence was obtained that play-based mindfulness training has positive effects on the cognitive and emotional development of preschool children. Full article
(This article belongs to the Section Pediatric Mental Health)
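
A gain-score comparison is one simple way to look at a pretest–post-test control-group design like the one above; the study itself uses independent-samples t-tests at baseline and a two-way repeated-measures ANOVA. The sketch below is a simplified stand-in with synthetic scores, not the study's analysis or data.

```python
# Minimal sketch: comparing pre-to-post gain scores between experimental and
# control groups with an independent-samples t-test. Simplified stand-in for
# the study's two-way repeated-measures ANOVA; scores are synthetic.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(9)
pre_exp = rng.normal(50, 8, 20)
post_exp = pre_exp + rng.normal(6, 4, 20)             # experimental group improves
pre_ctrl = rng.normal(50, 8, 20)
post_ctrl = pre_ctrl + rng.normal(1, 4, 20)           # control changes little

gain_exp, gain_ctrl = post_exp - pre_exp, post_ctrl - pre_ctrl
t_stat, p_value = ttest_ind(gain_exp, gain_ctrl)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```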
