Appl. Sci., Volume 15, Issue 21 (November-1 2025) – 550 articles

Cover Story: Enhancing fuel efficiency in public transportation requires the integration of complex multimodal data into interpretable, decision-relevant insights. However, traditional analytics and visualization methods often yield fragmented outputs that demand extensive human interpretation, limiting scalability and consistency. This study presents a multi-agent framework that leverages multimodal large language models (LLMs) to automate data narration and energy insight generation. The framework coordinates three specialized agents, including a data narration agent, an LLM-as-a-judge agent, and an optional human-in-the-loop evaluator, to iteratively transform analytical artifacts into coherent, stakeholder-oriented reports.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for published papers, which are available in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
35 pages, 1511 KB  
Article
Curriculum Learning and Pattern-Aware Highly Efficient Privacy-Preserving Scheme for Mixed Data Outsourcing with Minimal Utility Loss
by Abdul Majeed, Kyunghyun Lee and Seong Oun Hwang
Appl. Sci. 2025, 15(21), 11849; https://doi.org/10.3390/app152111849 - 6 Nov 2025
Viewed by 449
Abstract
A complex problem when outsourcing personal data for public use is balancing privacy protection with utility, and anonymization is a viable solution to address this issue. However, conventional anonymization methods often overlook global information regarding the composition of attributes in data, leading to unnecessary computations and high utility loss. To address these problems, we propose a curriculum learning (CL)-based, pattern-aware privacy-preserving scheme that exploits information about attribute composition in the data to enhance utility and privacy without performing unnecessary computations. The CL approach significantly reduces time overheads by sorting data by complexity, so that only the most complex (e.g., privacy-sensitive) parts of the data are processed. Our scheme considers both diversity and similarity when forming clusters to effectively address the privacy–utility trade-off. Our scheme prevents substantial changes in data during generalization by protecting generic portions of the data from futile anonymization; only a limited amount of data is anonymized, through a joint application of differential privacy and k-anonymity. We attain promising results by rigorously testing the proposed scheme on three benchmark datasets. Compared to recent anonymization methods, our scheme reduces time complexity by 74.33%, improves data utility by 19.67% and 68.33% across two evaluation metrics, and enhances privacy protection by 29.19%. Our scheme performs 82.66% fewer lookups in generalization hierarchies than existing anonymization methods. In addition, our scheme is very lightweight and is 1.95× faster than parallel implementation architectures. Our scheme resolves the privacy–utility trade-off more effectively than prior works when outsourcing personal data in tabular form. Full article
(This article belongs to the Special Issue Progress in Information Security and Privacy)
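The k-anonymity component mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the records, the quasi-identifiers, and the age-bucket width are all hypothetical, and the generalization step is the simplest possible one:

```python
from collections import defaultdict

def is_k_anonymous(records, quasi_ids, k):
    """Check whether every combination of quasi-identifier values
    appears at least k times in the dataset."""
    groups = defaultdict(int)
    for rec in records:
        groups[tuple(rec[q] for q in quasi_ids)] += 1
    return all(count >= k for count in groups.values())

def generalize_age(rec, width=10):
    """Coarsen the 'age' attribute into a bucket of the given width."""
    out = dict(rec)
    lo = (rec["age"] // width) * width
    out["age"] = f"{lo}-{lo + width - 1}"
    return out

records = [
    {"age": 23, "zip": "12345", "disease": "flu"},
    {"age": 27, "zip": "12345", "disease": "cold"},
    {"age": 31, "zip": "54321", "disease": "flu"},
    {"age": 35, "zip": "54321", "disease": "flu"},
]
# Raw ages are unique, so the table is not 2-anonymous...
assert not is_k_anonymous(records, ["age", "zip"], 2)
# ...but after generalizing age into 10-year buckets it is.
generalized = [generalize_age(r) for r in records]
assert is_k_anonymous(generalized, ["age", "zip"], 2)
```

The paper's scheme additionally decides, via curriculum learning, which records need this generalization at all; the sketch shows only the anonymity check itself.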

14 pages, 1401 KB  
Article
Optical and Thermal Characterization of Locust Bean Gum Using Photopyroelectric Techniques
by José Abraham Balderas López, Erich von Borries Medrano, Maria Fernanda Vargas Torrico and Mónica Rosalía Jaime Fonseca
Appl. Sci. 2025, 15(21), 11848; https://doi.org/10.3390/app152111848 - 6 Nov 2025
Viewed by 296
Abstract
Galactomannans, like locust bean gum, are polysaccharides widely used in the food and pharmaceutical industries because of their rheological and functional properties. However, their optical and thermal characterization is challenging due to their viscous and highly dispersive nature, which hinders the applicability of conventional spectroscopic and calorimetric techniques. In this study, photopyroelectric techniques were used to simultaneously determine, for the first time, the optical absorption coefficients and thermal diffusivity of locust bean gum in aqueous suspension at various concentrations. Optical characterization was performed at 660 nm (visible) and 1550 nm (near-infrared), revealing strong absorption at 1550 nm associated with hydroxyl group overtones and allowing reliable quantification at concentrations as low as 0.5 g/100 mL. Thermal characterization yielded diffusivity values ranging from 1.50 × 10⁻³ to 1.47 × 10⁻³ cm²/s, with a slight decreasing trend as concentration increased. These results confirm the applicability of photopyroelectric methods for the dual optical and thermal characterization of galactomannans and highlight their potential for analyzing complex biopolymer suspensions where traditional methods fall short. Full article

13 pages, 3312 KB  
Article
Effect of Manganese Concentration and Calcination Temperature on Photochemical Properties of TiOF2/MnO(OH)
by Dmytro Sofronov, Liliya Frolova, Miroslaw Rucki, Pavel Mateychenko and Vyacheslav Baranov
Appl. Sci. 2025, 15(21), 11847; https://doi.org/10.3390/app152111847 - 6 Nov 2025
Viewed by 324
Abstract
The heterostructures TiOF2/(0.5–5 wt.%)MnO(OH) attract attention as potential catalysts for pollutant removal from water. In this paper, a novel synthesis route was proposed through the precipitation of MnO(OH) particles out of an alkaline solution onto the TiOF2 particles. The formation of manganese oxyhydroxide was confirmed by X-ray diffraction analysis. The presence of manganese in proportions up to 1 wt.%, recalculated to MnO(OH), did not affect the morphology of TiOF2/MnO(OH) particles. Higher concentrations of Mn caused the appearance of mostly spherical particles with dimensions of ca. 100 nm. The effect of calcination temperatures of 300–600 °C on the structure and photocatalytic activity of the particles was analyzed. It was found that calcination of the powder formed a TiO2 phase with a mainly anatase structure, as well as Mn3O4. After calcination at 600 °C, the presence of fluorine was detected, indicating the formation of fluorinated titanium dioxide. For higher manganese concentrations, the fluorine proportion in F-TiO2 samples decreased. Increased Mn content in TiOF2/MnO(OH) significantly improved its photocatalytic activity, shortening the degradation time and increasing the degradation degree of methylene blue (MB). However, an increase in the calcination temperature decreased the degradation degree of MB. It was found that the optimal concentration of MnO(OH) was 5 wt.%. Full article

17 pages, 3712 KB  
Article
SP-Transformer: A Medium- and Long-Term Photovoltaic Power Forecasting Model Integrating Multi-Source Spatiotemporal Features
by Bin Wang, Julong Chen, Yongqing Zhu, Junqiu Fan, Jiang Hu and Ling Tan
Appl. Sci. 2025, 15(21), 11846; https://doi.org/10.3390/app152111846 - 6 Nov 2025
Viewed by 439
Abstract
Aiming to solve the challenges of the weak spatial and temporal correlation of medium- and long-term photovoltaic (PV) power data, as well as data redundancy and low forecasting efficiency brought about by long-time forecasting, this paper proposes a medium- and long-term PV power forecasting method based on the Transformer, SP-Transformer (spatiotemporal probsparse transformer), which aims to effectively capture the spatiotemporal correlation between meteorological and geographical elements and PV power. The method embeds the geographic location information of PV sites into the model through spatiotemporal positional encoding and designs a spatiotemporal probsparse self-attention mechanism, which reduces model complexity while allowing the model to better capture the spatiotemporal correlation between input data. To further enhance the model’s ability to capture and generalize potential patterns in complex PV power data, this paper proposes a feature pyramid self-attention distillation module to ensure the accuracy and robustness of the model in long-term forecasting tasks. The SP-Transformer model performs well in the PV power forecasting task, with a medium-term (48 h) forecasting accuracy of 93.8% and a long-term (336 h) forecasting accuracy of 90.4%, both of which are better than all the comparative algorithms involved in the experiment. Full article

21 pages, 10119 KB  
Article
Detecting Audio Copy-Move Forgeries on Mel Spectrograms via Hybrid Keypoint Features
by Ezgi Ozgen and Seyma Yucel Altay
Appl. Sci. 2025, 15(21), 11845; https://doi.org/10.3390/app152111845 - 6 Nov 2025
Viewed by 408
Abstract
With the widespread use of audio editing software and artificial intelligence, it has become very easy to forge audio files. One type of these forgeries is copy-move forgery, which is achieved by copying a segment from an audio file and placing it in a different place in the same file, where the aim is to take the speech content out of its context and alter its meaning. In practice, forged recordings are often disguised through post-processing steps such as lossy compression, additive noise, or median filtering. This distorts acoustic features and makes forgery detection more difficult. This study introduces a robust keypoint-based approach that analyzes Mel-spectrograms, which are visual time-frequency representations of audio. Instead of processing the raw waveform for forgery detection, the proposed method focuses on identifying duplicate regions by extracting distinctive visual patterns from the spectrogram image. We tested this approach on two speech datasets (Arabic and Turkish) under various real-world attack conditions. Experimental results show that the method outperforms existing techniques and achieves high accuracy, precision, recall, and F1-scores. These findings highlight the potential of visual-domain analysis to increase the reliability of audio forgery detection in forensic and communication contexts. Full article
(This article belongs to the Special Issue Multimedia Smart Security)
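The duplicate-region search at the heart of copy-move detection can be sketched on a toy time-frequency array. Exact block matching here is a simplified stand-in for the paper's keypoint-based approach (which is robust to post-processing), and the array values are hypothetical:

```python
def find_duplicate_blocks(spec, block=4):
    """Slide a block x block window over a 2D time-frequency array and
    record each window's content; windows whose content reappears at a
    sufficiently distant position are candidate copy-move regions."""
    seen = {}
    duplicates = []
    rows, cols = len(spec), len(spec[0])
    for i in range(rows - block + 1):
        for j in range(cols - block + 1):
            key = tuple(tuple(spec[i + r][j + c] for c in range(block))
                        for r in range(block))
            if key in seen and abs(seen[key][1] - j) >= block:
                duplicates.append((seen[key], (i, j)))
            else:
                seen.setdefault(key, (i, j))
    return duplicates

# Toy "spectrogram" with all-distinct cells, then a copied 4x4 segment.
spec = [[i * 16 + j for j in range(16)] for i in range(8)]
for r in range(4):
    for c in range(4):
        spec[r][c + 10] = spec[r][c]

dups = find_duplicate_blocks(spec)
```

A real detector would match local descriptors extracted around spectrogram keypoints rather than raw pixel blocks, precisely so that compression or noise does not break the match.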

12 pages, 3026 KB  
Article
An In Vitro Study Comparing Debonding of Orthodontic Ceramic and Metal Brackets Using Er:YAG Laser and Conventional Pliers
by Aous Abdulmajeed, Tiannie Phan, Kinga Grzech-Leśniak and Janina Golob Deeb
Appl. Sci. 2025, 15(21), 11844; https://doi.org/10.3390/app152111844 - 6 Nov 2025
Viewed by 520
Abstract
Removing orthodontic brackets often presents clinical challenges, as it may cause patient discomfort, bracket fracture, or enamel damage resulting from strong adhesive bonds. Various techniques have been proposed to facilitate safer and more efficient debonding. Among them, laser-assisted methods have gained attention for their potential to minimize mechanical stress and improve patient comfort. The main objective of this study was to evaluate the effect of an erbium-doped yttrium–aluminum–garnet (Er:YAG) laser as an alternative to traditional mechanical methods for removing metal and ceramic orthodontic brackets. Materials and Methods: Thirty-six extracted premolars were prepared for bonding metal or ceramic brackets using a light-cure adhesive system. The control group consisted of six ceramic and six metal brackets removed with conventional orthodontic pliers. In the experimental groups, brackets were debonded using the Er:YAG laser (2940 nm, 0.6 mm spot size, 150 mJ, 15 Hz, 2.25 W) with an H14 handpiece. Irradiation time was recorded for each method, and teeth were rescanned to measure the surface area and volume of the crowns before and after bracket removal. Data were analyzed using one-way ANOVA and Tukey’s HSD test (p < 0.05). Scanning electron microscopy (SEM) was used for surface analysis. Results: A significant difference in debonding time (p = 0.001) was observed between the laser and traditional methods. The laser group took 52.5 s for metal and 56.25 s for ceramic brackets, compared to 1.05 s (metal) and 0.64 s (ceramic) in the traditional group. A significant difference in remaining cement volume was noted (p = 0.0002), but no differences were found between metal and ceramic brackets with laser removal. Conclusions: Er:YAG laser-assisted debonding is safe and minimally invasive but more time-consuming and costly than conventional methods, showing no improvement in clinical efficiency under current parameters. Full article

21 pages, 2248 KB  
Article
A Metagenomic and Colorimetric Analysis of the Biological Recolonization Occurring at the “Largo da Porta Férrea” Statues (Coimbra UNESCO World Heritage Site), After Cleaning Interventions
by Fabiana Soares, Lídia Catarino, Conceição Egas and João Trovão
Appl. Sci. 2025, 15(21), 11843; https://doi.org/10.3390/app152111843 - 6 Nov 2025
Viewed by 519
Abstract
Biological recolonization after cleaning remains a major challenge for the conservation of stone cultural heritage. As recolonization can start within months to a few years following intervention, developing rapid, field-deployable diagnostic approaches is crucial to better monitor microbial reappearance and to assess treatment performance in real time. Traditional evaluation methods lack the capacity to take into consideration non-cultivable microorganisms or to assess functional traits relevant to recolonization. To bridge this gap, we applied on-site direct Whole-Genome Sequencing (Oxford Nanopore® MinION™ sequencer) coupled with colorimetric analysis to characterize the microbiome, resistome, and metabolic traits of subaerial biofilms present in untreated and treated (recolonized) areas of stone statues at the “Largo da Porta Férrea” (Coimbra’s UNESCO World Heritage site). Colorimetric analysis (ΔE of 32–40 in recolonized vs. 19–43 in untreated areas) and genomic data indicated that the applied treatment provided only a short-term effect (roughly 4–5 years), with a marked decline in fungi (1–2% vs. 7–18%), coupled with increased recolonization mainly by Cyanobacteriota (circa 35–45%) and several stress-resistant Bacteria (globally ~95% of reads vs. 73–79% in controls). Antimicrobial resistance profiles differed significantly between sites, with treated areas showing distinct and unique resistance genes and plasmids containing the blaTEM-116 gene, which can indicate potential adaptive shifts in the resistome profiles after intervention. Metabolic pathway analysis revealed that untreated areas retained more complete nitrogen and sulfur cycling gene sets, whereas treated areas showed reduced biogeochemical gene contents, consistent with earlier-stage recolonization. Given the current recolonization detection and the ongoing biofilm formation, routine monitoring efforts (e.g., every 6 months) are recommended. Overall, this study presents the first on-site genomic characterization of recolonization events on heritage stone, providing a practical early-warning tool for conservation monitoring and future biofilm management strategies. Full article
(This article belongs to the Special Issue Application of Biology to Cultural Heritage III)
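The ΔE values quoted in the abstract are CIELAB color differences. A minimal CIE76 computation (Euclidean distance in L*a*b* space; the two color readings below are hypothetical, not data from the study) looks like:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two
    CIELAB triplets (L*, a*, b*)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

clean_stone = (70.0, 5.0, 15.0)   # hypothetical reading on cleaned stone
recolonized = (45.0, -8.0, 22.0)  # hypothetical reading on a greenish biofilm
de = delta_e_cie76(clean_stone, recolonized)
```

ΔE values above roughly 3–5 are generally taken as visible to the naked eye, so the 32–40 range reported for recolonized areas corresponds to a strongly perceptible color change.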

19 pages, 3290 KB  
Article
Multi-Granularity Content-Aware Network with Semantic Integration for Unsupervised Anomaly Detection
by Xinyu Guo, Shihui Zhao, Jianbin Xue, Dongdong Liu, Xinyang Han, Shuai Zhang and Yufeng Zhang
Appl. Sci. 2025, 15(21), 11842; https://doi.org/10.3390/app152111842 - 6 Nov 2025
Viewed by 471
Abstract
Unsupervised anomaly detection has been widely applied to industrial scenarios. Recently, transformer-based methods have also been developed and have produced good performance. Although the global dependencies in anomaly images are considered, the typical patch partition strategy in the vanilla self-attention mechanism ignores the content consistencies in anomaly defects or normal regions. To sufficiently exploit the content consistency in images, we propose the multi-granularity content-aware network with semantic integration (MGCA-Net), in which superpixel segmentation is introduced into feature space to divide images according to their spatial structures. Specifically, we adopt a pre-trained ResNet as the encoder to extract features. Then, we design content-aware attention blocks (CAABs) to capture the global information in features at different granularities. In this block, we impose superpixel segmentation on the features from the encoder and employ the superpixels as tokens for the learning of global relationships. Because the superpixels are divided according to their content consistencies, the spatial structures of objects in anomaly or normal regions are preserved. Meanwhile, the multi-granularity semantic integration block is devised to further integrate the global information of all granularities. Next, we use semantic-guided fusion blocks (SGFBs) to progressively upsample the features with the help of CAABs. Finally, the differences between the outputs of CAABs and SGFBs are calculated and merged to predict the anomaly defects. Thanks to the preservation of content consistency of objects, experimental results on two benchmark datasets demonstrate that our proposed MGCA-Net achieves superior anomaly detection performance over state-of-the-art methods. Full article
(This article belongs to the Topic Intelligent Image Processing Technology)
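The superpixels-as-tokens idea can be sketched by average-pooling feature vectors over superpixel regions. This toy (hypothetical feature map and segmentation labels, not MGCA-Net itself) shows only the token construction that would feed a self-attention layer:

```python
def superpixel_tokens(feat, labels):
    """Average the feature vectors belonging to each superpixel label,
    producing one token per superpixel instead of one per fixed patch."""
    sums, counts = {}, {}
    for row_f, row_l in zip(feat, labels):
        for vec, lab in zip(row_f, row_l):
            if lab not in sums:
                sums[lab] = [0.0] * len(vec)
                counts[lab] = 0
            sums[lab] = [s + v for s, v in zip(sums[lab], vec)]
            counts[lab] += 1
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

# 2x2 spatial grid of 2-dim features; the top row is superpixel 0,
# the bottom row superpixel 1 (labels are hypothetical).
feat = [
    [[1.0, 2.0], [3.0, 4.0]],
    [[5.0, 6.0], [7.0, 8.0]],
]
labels = [
    [0, 0],
    [1, 1],
]
tokens = superpixel_tokens(feat, labels)
```

Because each token aggregates a content-consistent region rather than a square patch, the spatial structure of defects and normal regions is preserved when the tokens attend to one another.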

14 pages, 39102 KB  
Article
Schrödinger Cat States in Giant Negative Magnetoresistance of 2D Electron Systems
by Jesús Iñarrea
Appl. Sci. 2025, 15(21), 11841; https://doi.org/10.3390/app152111841 - 6 Nov 2025
Viewed by 296
Abstract
We investigate the effect of giant negative magnetoresistance in ultrahigh-mobility (μ ∼ 10^7 cm^2 V^−1 s^−1) two-dimensional electron systems. These systems present a dramatic drop in the magnetoresistance at low magnetic fields (B ∼ 0.1 T) and temperatures (T ∼ 0.1 K). This effect is reversed by increasing the temperature or by applying an in-plane magnetic field. The motivation for the present work is to develop a microscopic model that explains the experimental evidence, based on coherent states and Schrödinger cat states of the quantum harmonic oscillator. Thus, we approach the giant negative magnetoresistance effect through a description of ultrahigh-mobility two-dimensional electron systems in terms of Schrödinger cat states (superpositions of coherent states of the quantum harmonic oscillator). We explain the experimental results in terms of increasing disorder in the sample due to the rising temperature or the in-plane magnetic field, which breaks up the Schrödinger cat states and gives rise to mere coherent states, which sustain the magnetoresistance of lower-mobility samples. This, jointly with the description of ultrahigh-mobility samples with Schrödinger cat states, accounts for the main contribution. The most interesting application of this novel description of such systems would be in the implementation of qubits for quantum computing based on bosonic models. Full article
(This article belongs to the Section Applied Physics General)

19 pages, 13708 KB  
Article
A-BiYOLOv9: An Attention-Guided YOLOv9 Model for Infrared-Based Wind Turbine Inspection
by Sami Ekici, Murat Uyar and Tugce Nur Karadeniz
Appl. Sci. 2025, 15(21), 11840; https://doi.org/10.3390/app152111840 - 6 Nov 2025
Viewed by 503
Abstract
This work examines how thermal turbulence patterns can be identified on the blades of operating wind turbines—an issue that plays a key role in preventive maintenance and overall safety assurance. Using the publicly available KI-VISIR dataset, containing annotated infrared images collected under real-world operating conditions, four object detection architectures were evaluated: YOLOv8, the baseline YOLOv9, the transformer-based RT-DETR, and an enhanced variant introduced as A-BiYOLOv9. The proposed approach extends the YOLOv9 backbone with convolutional block attention modules (CBAM) and integrates a bidirectional feature pyramid network (BiFPN) in the neck to improve feature fusion. All models were trained for thirty epochs on single-class turbulence annotations. The experiments confirm that YOLOv8 provides fast and efficient detection, YOLOv9 delivers higher accuracy and more stable convergence, and RT-DETR exhibits strong precision and consistent localization performance. A-BiYOLOv9 maintains stable and reliable accuracy even when the thermal patterns vary significantly between scenes. These results confirm that attention-augmented and feature-fusion-centric architectures improve detection sensitivity and reliability in the thermal domain. Consequently, the proposed A-BiYOLOv9 represents a promising candidate for real-time, contactless thermographic monitoring of wind turbines, with the potential to extend turbine lifespan through predictive maintenance strategies. Full article

25 pages, 452 KB  
Review
Polysaccharide-Enriched Bakery and Pasta Products: Advances, Functional Benefits, and Challenges in Modern Food Innovation
by Jovana Petrović, Jana Zahorec, Dragana Šoronja-Simović, Ivana Lončarević, Ivana Nikolić, Biljana Pajin, Milica Stožinić, Drago Šubarić, Đurđica Ačkar and Antun Jozinović
Appl. Sci. 2025, 15(21), 11839; https://doi.org/10.3390/app152111839 - 6 Nov 2025
Viewed by 826
Abstract
The increasing consumer demand for healthier food choices has stimulated research into functional bakery products enriched with bioactive ingredients. This review summarizes recent developments in the application of key polysaccharides—such as inulin and fructooligosaccharides (FOS), β-glucan, arabinoxylan, pectin, cellulose derivatives, resistant starch, maltodextrins, and dextrins—in bread, pasta, and fine bakery systems. Their incorporation affects dough rheology, fermentation behavior, and gas retention, leading to modifications in texture, volume, and shelf-life stability. Technologically, polysaccharides function as hydrocolloids, fat and sugar replacers, or water-binding agents, influencing gluten network formation and starch gelatinization. Nutritionally, they contribute to higher dietary fiber intake, improved postprandial glycemic response, enhanced satiety, and favorable modulation of gut microbiota. From a sensory perspective, optimized formulations can maintain or even improve product acceptability despite structural changes. However, challenges remain related to dosage optimization, interactions with the gluten–starch matrix, and gastrointestinal tolerance (particularly in FODMAP-sensitive individuals). This review summarizes current knowledge and future opportunities for creating innovative bakery products that unite technological functionality with nutritional and sensory excellence. Full article
(This article belongs to the Section Food Science and Technology)
19 pages, 2925 KB  
Article
Research on Target Detection and Counting Algorithms for Swarming Termites in Agricultural and Forestry Disaster Early Warning
by Hechuang Wang, Yifan Wang and Tong Chen
Appl. Sci. 2025, 15(21), 11838; https://doi.org/10.3390/app152111838 - 6 Nov 2025
Viewed by 362
Abstract
The accurate monitoring of termite swarming—a key indicator of dispersal and population growth—is essential for early warning systems that mitigate infestation risks in agricultural and forestry environments. Automated detection and counting systems have become a viable alternative to labor-intensive and time-consuming manual inspection methods. However, detecting and counting such small and fast-moving targets as swarming termites poses a significant challenge. This study proposes the YOLOv11-ST algorithm and a novel counting algorithm to address this challenge. By incorporating the Fourier-domain parameter decomposition and dynamic modulation mechanism of the FDConv module, along with the LRSA attention mechanism that enhances local feature interaction, the feature extraction capability for swarming termites is improved, enabling more accurate detection. The SPPF-DW module was designed to replace the original network’s SPPF module, enhancing the feature capture capability for small targets. In comparative evaluations with other baseline models, YOLOv11-ST demonstrated superior performance, achieving a Recall of 87.32% and a mAP50 of 93.21%. This represents an improvement of 2.1% and 2.02%, respectively, over the original YOLOv11. The proposed counting algorithm achieved an average counting accuracy of 91.2%. These research findings offer both theoretical and technical support for the development of a detection and counting system for swarming termites. Full article
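The counting stage that follows per-frame detection can be illustrated by counting connected components on a binary detection mask. This toy flood fill is a simplified stand-in for the paper's counting algorithm (which must also handle motion across frames); the mask values are hypothetical:

```python
def count_blobs(grid):
    """Count 4-connected components of 1s in a binary grid via an
    iterative flood fill -- each component stands for one detection."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] and not seen[i][j]:
                count += 1
                stack = [(i, j)]
                while stack:
                    r, c = stack.pop()
                    if (0 <= r < rows and 0 <= c < cols
                            and grid[r][c] and not seen[r][c]):
                        seen[r][c] = True
                        stack.extend([(r + 1, c), (r - 1, c),
                                      (r, c + 1), (r, c - 1)])
    return count

# Hypothetical detection mask with four separate "termites".
mask = [
    [1, 1, 0, 0, 0],
    [0, 1, 0, 1, 1],
    [0, 0, 0, 0, 0],
    [1, 0, 0, 0, 1],
]
```

In practice the detector's bounding boxes, not raw pixels, would be clustered and tracked, but the grouping principle is the same.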

16 pages, 6836 KB  
Article
Enhancing Crash Safety Analysis Through Female-Specific Head Modeling: Application of FeFEHM in Traffic Accident Reconstructions
by Carlos G. S. Cardoso, Andre Eggers, Marcus Wisch, Fábio A. O. Fernandes and Ricardo J. Alves de Sousa
Appl. Sci. 2025, 15(21), 11837; https://doi.org/10.3390/app152111837 - 6 Nov 2025
Viewed by 370
Abstract
Traumatic brain injury (TBI) is a significant public health concern and its rising prevalence in road traffic accidents underscores the need for deeper understanding and tailored investigation. This study explores the feasibility of employing the female finite element head model (FeFEHM) to analyse biomechanical responses in two distinct road traffic accident scenarios, focusing on strain and stress distribution in critical brain structures. Two collision scenarios from the German In-Depth Accident Study (GIDAS) were reconstructed using validated Total Human Model for Safety (THUMS) simulations. The extracted skull kinematics were applied to the FeFEHM in ABAQUS to compute maximum principal strain, von Mises stress, and intracranial pressure across key brain regions, including the corpus callosum and pituitary gland. Simulations revealed strain concentrations in the parietal and temporal lobes, while the mid-body region was the most affected in the corpus callosum. Pituitary gland deformation was minimal under both loading conditions. Our findings align qualitatively with reported injury sites and injury risk was consistent with those observed in the real-world crashes. The findings highlight the potential of integrating sex-specific biomechanical models into crash biomechanics workflows. Future work should extend this approach across larger datasets and impact scenarios to support its implementation in regulatory and engineering contexts, since the actual sample size prevents conclusions regarding sex-specific biomechanics. Full article
(This article belongs to the Section Mechanical Engineering)

18 pages, 3304 KB  
Article
Dynamic Load of the Tank Container with Sandwich Components
by Juraj Gerlici and Alyona Lovska
Appl. Sci. 2025, 15(21), 11836; https://doi.org/10.3390/app152111836 - 6 Nov 2025
Viewed by 345
Abstract
The presented research is focused on a proposal to improve a tank container structure by introducing sandwich components in the bottom to reduce its load while transported by rail. This solution will help to reduce loads by means of the energy-absorbing material included in the sandwich components. The proposed improvement is substantiated with mathematical modeling of the dynamic load of the tank container. It is found that the use of sandwich components in its structure can reduce the dynamic load by 12 to 18%, depending on the characteristics of the energy-absorbing material. This study also includes computer modeling of the dynamic load on the tank container, which makes it possible to identify the acceleration distribution fields acting on the tank container, and to determine their numerical values. The mathematical model of the dynamic load of the tank container is verified. Fisher’s exact test was used. The coefficient of determination was 0.81. The calculation was performed within the confidence interval from −0.95 to +0.95. It is found that the hypothesis of the adequacy for the model is not rejected. The study also includes a modal analysis of the tank container. The results of the modal analysis revealed that the safe operation of the tank container during transportation is ensured, as the first eigenfrequency is 13.8 Hz. The achieved results can be used for developing modern tank container designs. Full article
(This article belongs to the Special Issue Railway Vehicle Dynamics)
12 pages, 2217 KB  
Article
Development and Verification of an Online Monitoring Ionization Chamber for Dose Measurement in a Small-Sized Betatron
by Bin Zhang, Wenlong Zheng, Ting Yan, Haitao Wang, Yan Zhang, Shumin Zhou and Qi Liu
Appl. Sci. 2025, 15(21), 11835; https://doi.org/10.3390/app152111835 - 6 Nov 2025
Viewed by 397
Abstract
Online radiation dose monitoring is critical for the safe operation of accelerators. Although commercial dose monitors are well-developed, integrating an ionization chamber directly within a small-sized Betatron magnet remains challenging. In this study, we designed an air ionization chamber tailored for real-time dose [...] Read more.
Online radiation dose monitoring is critical for the safe operation of accelerators. Although commercial dose monitors are well-developed, integrating an ionization chamber directly within a small-sized Betatron magnet remains challenging. In this study, we designed an air ionization chamber tailored for real-time dose monitoring in a small-sized Betatron. We selected aluminum for the chamber wall based on structural and integration requirements, designed the cavity geometry, and developed the associated charge collection and sampling circuits. Using a standard reference PTW ionization chamber, we calibrated the output voltage of the chamber against X-ray dose rates and conducted stability tests. The results show that there is a very good linear relationship between the output voltage of the ionization chamber and the X-ray dose rate. The relative standard deviation of the dose rate data within a 10 min working cycle is 3.25%, and the dose rate data shows good consistency with the standard reference ionization chamber. The ionization chamber can ensure operational safety for a small-sized Betatron and offer guidance for similar applications. Full article
21 pages, 2321 KB  
Article
Can LLMs Generate Coherent Summaries? Leveraging LLM Summarization for Spanish-Language News Articles
by Ronghao Pan, Tomás Bernal-Beltrán, María del Pilar Salas-Zárate, Mario Andrés Paredes-Valverde, José Antonio García-Díaz and Rafael Valencia-García
Appl. Sci. 2025, 15(21), 11834; https://doi.org/10.3390/app152111834 - 6 Nov 2025
Viewed by 705
Abstract
Automatic summarization is essential for processing the vast quantity of news articles. However, existing methods struggle with factual consistency, hallucinations, and English-centric evaluations. This paper investigates whether Large Language Models can generate coherent and factually grounded summaries of Spanish-language news articles, using the [...] Read more.
Automatic summarization is essential for processing the vast quantity of news articles. However, existing methods struggle with factual consistency, hallucinations, and English-centric evaluations. This paper investigates whether Large Language Models can generate coherent and factually grounded summaries of Spanish-language news articles, using the DACSA dataset as a benchmark. Several strategies are evaluated, including zero-shot prompting, one-shot prompting, fine-tuning of seq2seq models mBART and mT5, and a novel bottleneck prompting method that integrates attention-based salience scoring with Named Entity Recognition. Our results show that modern instruction-tuned language models can achieve competitive performance in zero- and one-shot settings, often approaching the performance of fine-tuned baselines. Our proposed bottleneck method enhances factual accuracy and content selection, leading to measurable improvements in ROUGE and BERTScore, especially for larger models such as LLaMA-3.1-70B and Gemma-2-9B. These results suggest that structured prompting can complement conventional approaches, offering an effective and cost-efficient alternative to full supervision. The results indicate that LLMs guided by entity-anchored bottlenecks provide a promising approach to multilingual summarization in domains with limited resources. Full article
(This article belongs to the Special Issue Techniques and Applications of Natural Language Processing)
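ROUGE scores like those reported above reduce to n-gram overlap between a candidate summary and a reference. A minimal, dependency-free sketch of ROUGE-1 F1 (whitespace tokens, single reference; a real evaluation would use the official ROUGE tooling with stemming and multiple references, so treat this only as an illustration):

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between a candidate and one reference.

    Bare-bones sketch: lowercase whitespace tokenization, no stemming,
    single reference -- not a substitute for the official ROUGE tooling.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Invented Spanish-news-style example (4 of 5 unigrams match):
score = rouge1_f("el gobierno anuncia nuevas medidas",
                 "el gobierno anuncio medidas nuevas")  # -> 0.8
```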
14 pages, 2722 KB  
Article
Electric Field and Charge Characteristics at the Gas–Solid Interface of a Scaled HVDC Wall Bushing Model
by Wenhao Lu, Xiaodi Ouyang, Jinyin Zhang, Xiang Xie, Xiaoxing Wei, Feng Wang, Mingchun Hou and She Chen
Appl. Sci. 2025, 15(21), 11833; https://doi.org/10.3390/app152111833 - 6 Nov 2025
Viewed by 311
Abstract
Ultra-high-voltage direct current (UHVDC) wall bushings are critical components in DC transmission systems, ensuring insulation integrity and operational reliability. In recent years, surface discharge incidents induced by charge accumulation at the gas–solid interface have become increasingly prominent. A comprehensive understanding of the electric [...] Read more.
Ultra-high-voltage direct current (UHVDC) wall bushings are critical components in DC transmission systems, ensuring insulation integrity and operational reliability. In recent years, surface discharge incidents induced by charge accumulation at the gas–solid interface have become increasingly prominent. A comprehensive understanding of the electric field distribution and charge accumulation behavior of wall bushings under UHVDC is therefore essential for improving their safety and stability. In this work, an electrostatic field model of a ±800 kV UHVDC wall bushing core was developed using COMSOL Multiphysics 6.3. Based on this, a geometrically scaled model of the bushing core was further established to investigate charge distribution characteristics along the gas–solid interface under varying voltage amplitudes, application durations, and practical operating conditions. The results reveal that the maximum surface charge density occurs near the geometric corner of the core, with charge accumulation increasing as the applied voltage amplitude rises. Over time, the accumulation exhibits a saturation trend, approaching a steady state after approximately 480 min. Moreover, under actual operating conditions, the charge accumulation at the gas–solid interface increases by approximately 40%. These findings provide valuable insights for the design optimization of UHVDC wall bushings, thereby contributing to improved insulation performance and enhanced long-term operational reliability of DC transmission systems. Full article
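The saturation trend described above — charge accumulation approaching a steady state after roughly 480 min — is the typical shape of a first-order relaxation. Purely as an illustration (the paper's own results come from a COMSOL electrostatic model; the amplitude and time constant below are hypothetical placeholders):

```python
import math

def surface_charge(t_min, sigma_inf=1.0, tau_min=120.0):
    """First-order saturation curve: sigma(t) = sigma_inf * (1 - exp(-t/tau)).

    sigma_inf and tau_min are illustrative placeholders, not values from
    the paper; tau_min = 120 min gives ~98% saturation at 480 min.
    """
    return sigma_inf * (1.0 - math.exp(-t_min / tau_min))

# Fraction of the steady-state charge reached at 480 min:
fraction_at_480 = surface_charge(480.0)  # -> ~0.9817
```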
20 pages, 1349 KB  
Article
DATTAMM: Domain-Aware Test-Time Adaptation for Multimodal Misinformation Detection
by Kaicheng Xu, Shasha Wang and Zipeng Diao
Appl. Sci. 2025, 15(21), 11832; https://doi.org/10.3390/app152111832 - 6 Nov 2025
Viewed by 829
Abstract
The rapid proliferation of multimodal misinformation across diverse news categories poses unprecedented challenges to digital ecosystems, where existing detection systems exhibit critical limitations in domain adaptation and fairness. Current methods suffer from two fundamental flaws: (1) severe performance variance (>35% accuracy drop in [...] Read more.
The rapid proliferation of multimodal misinformation across diverse news categories poses unprecedented challenges to digital ecosystems, where existing detection systems exhibit critical limitations in domain adaptation and fairness. Current methods suffer from two fundamental flaws: (1) severe performance variance (>35% accuracy drop in education/science categories) due to category-specific semantic shifts; (2) systemic real/fake detection bias causing up to 68.3% false positives in legitimate content—risking suppression of factual reporting especially in high-stakes domains like public health discourse. To address these dual challenges, this paper proposes the DATTAMM (Domain-Adaptive Tensorized Multimodal Model), a novel framework integrating category-aware attention mechanisms and adversarial debiasing modules. Our approach dynamically aligns textual–visual features while suppressing domain-irrelevant noise through the following: (a) semantic disentanglement layers extracting category-invariant patterns; (b) cross-modal verification units resolving inter-modal conflicts; (c) real/fake gradient alignment regularizers. Extensive experiments on nine news categories demonstrate that the DATTAMM achieves an average F1-score of 0.854, outperforming state-of-the-art baselines by 32.7%. The model maintains consistent performance with less than 5.4% variance across categories, significantly reducing accuracy drops in education and science content where baselines degrade by over 35%. Crucially, the DATTAMM narrows the real/fake F1 gap to merely 0.017, compared to 0.243–0.547 in baseline models, while cutting false positives in high-stakes domains like health news to 5.8% versus the 38.2% baseline average. These advances lower societal costs of misclassification by 79.7%, establishing a new paradigm for robust and equitable misinformation detection in evolving information ecosystems. Full article
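The real/fake F1 gap cited above is simply the difference between per-class F1 scores. A minimal sketch of how such a gap is computed — the confusion counts below are invented, not the DATTAMM results:

```python
def f1(tp, fp, fn):
    """Per-class F1 from true positives, false positives, false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion counts for the "real" and "fake" classes:
f1_real = f1(tp=850, fp=120, fn=100)
f1_fake = f1(tp=780, fp=100, fn=120)
f1_gap = abs(f1_real - f1_fake)  # the real/fake fairness gap
```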
16 pages, 2638 KB  
Article
Non-Steady-State Coupled Model of Viscosity–Temperature–Pressure in Polymer Flooding Injection Wellbores
by Yutian Huang, Jiawei Fan, Ming Hao, Xinlei Zhang, Fuzhen Liu and Xuesong Zhang
Appl. Sci. 2025, 15(21), 11831; https://doi.org/10.3390/app152111831 - 6 Nov 2025
Viewed by 318
Abstract
Polymer solutions play a crucial role in the polymer flooding process by influencing the flow characteristics of formation fluids and enhancing recovery efficiency. Their properties are influenced by the transient coupling of temperature, pressure, and viscosity, yet the underlying patterns remain unclear. This [...] Read more.
Polymer solutions play a crucial role in the polymer flooding process by influencing the flow characteristics of formation fluids and enhancing recovery efficiency. Their properties are influenced by the transient coupling of temperature, pressure, and viscosity, yet the underlying patterns remain unclear. This study establishes a non-steady-state coupling model of polymer temperature–pressure–viscosity in wellbores, solved numerically using a staggered-grid fully implicit scheme in MATLAB. At a depth of 1000 m, the polymer viscosity measured in the field is 102.12 mPa·s, while the simulated value is 107.46 mPa·s (4.97% error), indicating good agreement with the wellbore viscosity distribution. Wellbore temperature is the dominant factor, whereas injection pressure has minor effects. Injection flow rate governs heat exchange with the formation; low flow causes larger temperature and viscosity fluctuations, while high flow leads to insufficient heat transfer. With prolonged injection, wellbore temperature approaches dynamic equilibrium, viscosity decreases, and sand-carrying capacity weakens. These findings provide theoretical guidance for optimizing polymer flooding. Full article
(This article belongs to the Section Applied Thermal Engineering)
22 pages, 7129 KB  
Article
Hybrid Coatings of Chitosan-Tetracycline-Oxide Layer on Anodized Ti-13Zr-13Nb Alloy as New Drug Delivery System
by Aizada Utenaliyeva, Patrycja Osak, Karolina Dudek, Delfina Nowińska, Jan Rak, Joanna Maszybrocka and Bożena Łosiewicz
Appl. Sci. 2025, 15(21), 11830; https://doi.org/10.3390/app152111830 - 6 Nov 2025
Viewed by 450
Abstract
Titanium alloys are widely used in orthopedic and dental implants, yet their limited bioactivity and bacterial resistance remain critical challenges. This study aimed to enhance the surface performance of a Ti-13Zr-13Nb alloy through the formation of a porous oxide layer and the application [...] Read more.
Titanium alloys are widely used in orthopedic and dental implants, yet their limited bioactivity and bacterial resistance remain critical challenges. This study aimed to enhance the surface performance of a Ti-13Zr-13Nb alloy through the formation of a porous oxide layer and the application of a bioactive, drug-loaded coating. Porous oxide layers composed of Ti, Zr, and Nb oxides with fluoride incorporation were fabricated using a novel anodizing process. The fluoride-assisted electrochemical mechanism controlling oxide growth was elucidated through SEM and EDS analyses. The anodized surface exhibited reduced microhardness, beneficial for minimizing stress-shielding effects. Subsequently, chitosan–tetracycline composite coatings were produced via electrophoretic deposition (EPD) and compared with the dip-coating method. Characterization by ATR-FTIR, optical microscopy, SEM, and UV-VIS spectroscopy confirmed the formation of uniform, adherent, and moderately porous coatings with sustained drug release when produced by EPD, while dip-coated layers were less homogeneous and released the drug faster. Microhardness testing revealed improved mechanical integrity of the EPD coatings. The developed chitosan–tetracycline–oxide layer system provides tunable nano/microgram-scale drug release and enhanced surface functionality, offering promising perspectives for acute and medium-term regenerative and antibacterial biomedical applications. Full article
(This article belongs to the Section Applied Biosciences and Bioengineering)
23 pages, 2283 KB  
Article
Cuff-Less Estimation of Blood Pressure and Detection of Hypertension/Arteriosclerosis from Fingertip PPG Using Machine Learning: An Experimental Study
by Marco Antonio Arroyo-Ramírez, Isaac Machorro-Cano, Augusto Javier Reyes-Delgado, Jorge Ernesto González-Díaz and José Luis Sánchez-Cervantes
Appl. Sci. 2025, 15(21), 11829; https://doi.org/10.3390/app152111829 - 6 Nov 2025
Viewed by 596
Abstract
Worldwide, fewer than half of adults with hypertension are diagnosed and treated (only 42%), and only one in five adults with hypertension (21%) has the condition under control. In the American continent, cardiovascular diseases (CVD) are the leading cause of death and high [...] Read more.
Worldwide, fewer than half of adults with hypertension are diagnosed and treated (only 42%), and only one in five adults with hypertension (21%) has the condition under control. In the American continent, cardiovascular diseases (CVD) are the leading cause of death, and high blood pressure (hypertension) is responsible for 50% of CVD deaths. Only a few countries show a population hypertension control rate of more than 50%. In this experimental study, we trained 15 regression-type machine learning algorithms, including traditional and ensemble methods, to assess their effectiveness in estimating arterial pressure from noninvasive photoplethysmographic (PPG) signals collected from 110 study subjects, in order to identify the risk of hypertension and its correlation with arteriosclerosis. We analyzed the performance of each algorithm using the metrics MSE, MAE, RMSE, and R². A 10-fold cross-validation showed that the best algorithms for hypertension risk identification were LR, KNN, SVR, RF, LR Bagging, KNN Bagging, SVR Bagging, and DT Bagging, while the best algorithms for arteriosclerosis risk identification were LR, KNN, SVR, RF, LR Bagging, and DT Bagging. These results are promising and offer valuable information on the acquisition and processing of PPG signals; however, as this is an experimental study, the effectiveness of our model needs to be validated with a larger database. The model represents a support tool for healthcare specialists in the early detection of cardiovascular conditions, allowing people to self-manage their health and seek medical attention at an early stage. Full article
(This article belongs to the Special Issue Data Science for Human Health Monitoring with Smart Sensors)
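The four reported metrics are standard regression diagnostics. A stdlib-only sketch of computing them on one validation fold — the pressure values are synthetic, and a real pipeline would delegate folding and scoring to scikit-learn:

```python
import math

def regression_metrics(y_true, y_pred):
    """MSE, MAE, RMSE and R^2 for one validation fold (pure Python)."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(mse)
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - (mse * n) / ss_tot  # 1 - SS_res / SS_tot
    return {"MSE": mse, "MAE": mae, "RMSE": rmse, "R2": r2}

# Synthetic systolic-pressure targets vs. predictions (mmHg):
m = regression_metrics([120, 135, 110, 150], [118, 138, 112, 146])
```

In a 10-fold protocol these numbers would be computed per fold and averaged per algorithm.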
26 pages, 898 KB  
Article
Super-Resolution Task Inference Acceleration for In-Vehicle Real-Time Video via Edge–End Collaboration
by Liming Zhou, Yafei Li, Yulong Feng, Dian Shen, Hui Wang and Fang Dong
Appl. Sci. 2025, 15(21), 11828; https://doi.org/10.3390/app152111828 - 6 Nov 2025
Viewed by 460
Abstract
As intelligent transportation systems continue to advance, on-board surveillance video has become essential for train safety and intelligent scheduling. However, high-resolution video transmission faces bandwidth limitations, and existing deep learning-based super-resolution models find it difficult to meet real-time requirements due to high computational [...] Read more.
As intelligent transportation systems continue to advance, on-board surveillance video has become essential for train safety and intelligent scheduling. However, high-resolution video transmission faces bandwidth limitations, and existing deep learning-based super-resolution models find it difficult to meet real-time requirements due to high computational complexity. To address this problem, this paper proposes an “edge–end” collaborative multi-terminal task inference framework, which improves inference speed by integrating resources of in-vehicle end devices and edge servers. The framework establishes a real-time-priority mathematical model, uses game theory to solve the problem of minimizing multi-terminal task inference latency, and proposes a multi-terminal task model partitioning strategy and an adaptive adjustment mechanism. It can dynamically partition the model according to device performance and network status, prioritizing real-time performance and minimizing the maximum inference delay. Experimental results show that the dynamic model partitioning mechanism can adaptively determine the optimal partition point, effectively reducing the inference delay of each end device in high-speed mobile and bandwidth-constrained scenarios and providing high-quality video data support for safety monitoring and intelligent analysis. Full article
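The core of such a partitioning strategy is choosing, per device, the layer index that minimizes on-device compute plus activation transfer plus edge compute; across devices the framework then minimizes the maximum of these latencies. A toy sketch of the per-device choice (all costs invented, not measurements from the paper):

```python
def best_partition(layer_costs_end, layer_costs_edge, transfer_cost):
    """Pick the split index minimizing end-device + transfer + edge latency.

    layer_costs_end/edge: per-layer runtimes on the device and the server;
    transfer_cost[k]: time to ship the activation produced after layer k.
    All numbers here are illustrative, not measurements from the paper.
    """
    n = len(layer_costs_end)
    best_k, best_latency = 0, float("inf")
    for k in range(n + 1):  # first k layers on the device, the rest at the edge
        latency = (sum(layer_costs_end[:k])
                   + transfer_cost[k]
                   + sum(layer_costs_edge[k:]))
        if latency < best_latency:
            best_k, best_latency = k, latency
    return best_k, best_latency

# 4-layer model; the device is ~5x slower than the edge server:
split, latency = best_partition(
    layer_costs_end=[5, 5, 10, 10],
    layer_costs_edge=[1, 1, 2, 2],
    transfer_cost=[8, 3, 3, 12, 1],  # early activations are large
)  # -> split after layer 1, total latency 13
```

For multiple devices, the min-max objective would take the maximum of these per-device latencies and adjust the split points until that maximum cannot be reduced.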
12 pages, 890 KB  
Article
Control Modality and Accuracy on the Trust and Acceptance of Construction Robots
by Daeguk Lee, Donghun Lee, Jae Hyun Jung and Taezoon Park
Appl. Sci. 2025, 15(21), 11827; https://doi.org/10.3390/app152111827 - 6 Nov 2025
Viewed by 345
Abstract
This study investigates how control modalities and recognition accuracy influence construction workers’ trust and acceptance of collaborative robots. Sixty participants evaluated voice and gesture control under varying levels of recognition accuracy while performing tiling together with collaborative robots. Experimental results indicated that recognition [...] Read more.
This study investigates how control modalities and recognition accuracy influence construction workers’ trust and acceptance of collaborative robots. Sixty participants evaluated voice and gesture control under varying levels of recognition accuracy while performing tiling together with collaborative robots. Experimental results indicated that recognition accuracy significantly affected perceived enjoyment (PE, p = 0.010), ease of use (PEOU, p = 0.030), and intention to use (ITU, p = 0.022), but not trust, usefulness (PU), or attitude (ATT). Furthermore, the interaction between control modality and accuracy shaped most acceptance factors (PE, p = 0.049; PEOU, p = 0.006; PU, p = 0.006; ATT, p = 0.003, and ITU, p < 0.001) except trust. In general, high recognition accuracy enhanced user experience and adoption intentions. Voice interfaces were favored when recognition accuracy was high, whereas gesture interfaces were more acceptable under low-accuracy conditions. These findings highlight the importance of designing high-accuracy, task-appropriate interfaces to support technology acceptance in construction. The preference for voice interfaces under accurate conditions aligns with the noisy, fast-paced nature of construction sites, where efficiency is paramount. By contrast, gesture interfaces offer resilience when recognition errors occur. The study provides practical guidance for robot developers, interface designers, and construction managers, emphasizing that carefully matching interaction modalities and accuracy levels to on-site demands can improve acceptance and long-term adoption in this traditionally conservative sector. Full article
(This article belongs to the Special Issue Robot Control in Human–Computer Interaction)
15 pages, 659 KB  
Article
A Principal Component Estimation Algorithm with Adaptive Learning Rate and Its Convergence Analysis
by Yingbin Gao, Haidi Dong, Yuan Zhou, Zhongying Xu, Jing Li and Kai Jin
Appl. Sci. 2025, 15(21), 11826; https://doi.org/10.3390/app152111826 - 6 Nov 2025
Viewed by 235
Abstract
The deterministic discrete-time method is a dominant approach for analyzing neural network algorithms. To address the issue where conventional convergence conditions impose stringent restrictions on the range of learning factors, this paper proposes a principal component estimation algorithm with an adaptive learning factor, [...] Read more.
The deterministic discrete-time method is a dominant approach for analyzing neural network algorithms. To address the issue where conventional convergence conditions impose stringent restrictions on the range of learning factors, this paper proposes a principal component estimation algorithm with an adaptive learning factor, which guarantees global convergence. The convergence of the algorithm is analyzed using the deterministic discrete-time method, and conditions ensuring convergence are established. Unlike the convergence conditions for other algorithms, those of the proposed algorithm eliminate restrictions on the learning factor, thereby extending its feasible range. Simulation results demonstrate that the proposed algorithm effectively resolves ill-conditioned matrix problems and converges significantly faster than several existing methods. Full article
(This article belongs to the Special Issue Research and Application of Neural Networks)
19 pages, 3621 KB  
Article
CFD Analysis of Natural Convection Performance of a MMRTG Model Under Martian Atmospheric Conditions
by Rafael Bardera-Mora, Ángel Rodríguez-Sevillano, Juan Carlos Matías-García, Estela Barroso-Barderas and Jaime Fernández-Antón
Appl. Sci. 2025, 15(21), 11825; https://doi.org/10.3390/app152111825 - 6 Nov 2025
Viewed by 376
Abstract
Understanding the thermal behaviour of radioisotope generators under Martian conditions is essential for the safe and efficient operation of planetary exploration rovers. This study investigates the heat transfer and flow mechanisms around a simplified full-scale model of the Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) [...] Read more.
Understanding the thermal behaviour of radioisotope generators under Martian conditions is essential for the safe and efficient operation of planetary exploration rovers. This study investigates the heat transfer and flow mechanisms around a simplified full-scale model of the Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) by means of Computational Fluid Dynamics (CFD) simulations performed with ANSYS Fluent 2023 R1. The model consists of a central cylindrical core and eight radial fins, operating under pure CO₂ at a pressure of approximately 600 Pa, representative of the Martian atmosphere. Four cases were simulated, varying both the reactor surface temperature (373–453 K) and the ambient temperature (248 to 173 K) to reproduce typical diurnal and seasonal scenarios on Mars. The results show the formation of a buoyancy-driven plume rising above the generator, with peak velocities between 1 and 3.5 m/s depending on the thermal load. Temperature fields reveal that the fins generate multiple localized hot spots that merge into a single vertical plume at higher elevations. The calculated dimensionless numbers (Grashof ≈ 10⁵, Rayleigh ≈ 10⁵, Reynolds ≈ 10², Prandtl ≈ 0.7, Nusselt ≈ 4) satisfy the expected range for natural convection in low-density CO₂ atmospheres, confirming the laminar regime. These results contribute to a better understanding of heat dissipation processes in Martian environments and may guide future design improvements of thermoelectric generators and passive thermal management systems for space missions. Full article
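The order-of-magnitude checks quoted above follow from the standard definitions Gr = gβΔT·L³/ν², Ra = Gr·Pr, and Pr = μc_p/k. A sketch with rough, illustrative CO₂ properties at ~600 Pa (placeholders, not the paper's simulation inputs) lands in the same regime:

```python
def grashof(g, beta, dT, L, nu):
    """Gr = g * beta * dT * L^3 / nu^2 (buoyancy vs. viscous forces)."""
    return g * beta * dT * L**3 / nu**2

# Illustrative placeholders for low-pressure CO2 near a warm MMRTG fin
# (rounded textbook-style values, NOT the paper's inputs):
g = 3.71             # Mars surface gravity, m/s^2
T_film = 300.0       # film temperature, K
beta = 1.0 / T_film  # ideal-gas thermal expansion coefficient, 1/K
dT = 150.0           # surface-to-ambient temperature difference, K
L = 0.6              # characteristic length (fin height), m
mu = 1.3e-5          # dynamic viscosity, Pa*s
rho = 600 * 0.044 / (8.314 * T_film)  # ideal-gas density at 600 Pa, kg/m^3
nu = mu / rho        # kinematic viscosity (large at low density), m^2/s

Gr = grashof(g, beta, dT, L, nu)
Pr = mu * 840.0 / 0.0166  # cp and k of CO2 are rough estimates
Ra = Gr * Pr              # Rayleigh number
```

With these placeholders Gr and Ra both land in the 10⁵ range and Pr ≈ 0.7, consistent with the laminar natural-convection regime the abstract reports.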
23 pages, 16159 KB  
Article
Adaptive Multi-Scale Feature Learning Module for Pediatric Pneumonia Recognition in Chest X-Rays
by Petra Radočaj, Goran Martinović and Dorijan Radočaj
Appl. Sci. 2025, 15(21), 11824; https://doi.org/10.3390/app152111824 - 6 Nov 2025
Viewed by 407
Abstract
Pneumonia remains a major global health concern, particularly among pediatric populations in low-resource settings where radiological expertise is limited. This study investigates the enhancement of deep convolutional neural networks (CNNs) for automated pneumonia diagnosis from chest X-ray images through the integration of a [...] Read more.
Pneumonia remains a major global health concern, particularly among pediatric populations in low-resource settings where radiological expertise is limited. This study investigates the enhancement of deep convolutional neural networks (CNNs) for automated pneumonia diagnosis from chest X-ray images through the integration of a novel module combining Inception blocks, Mish activation, and Batch Normalization (IncMB). Four state-of-the-art transfer learning models—InceptionV3, InceptionResNetV2, MobileNetV2, and DenseNet201—were evaluated in their base form and with the proposed IncMB extension. Comparative analysis based on standardized classification metrics reveals consistent performance improvements across all models with the addition of the IncMB module. The most notable improvement was observed in InceptionResNetV2, where the IncMB-enhanced model achieved the highest accuracy of 0.9812, F1-score of 0.9761, precision of 0.9781, recall of 0.9742, and strong specificity of 0.9590. Other models also demonstrated similar trends, confirming that the IncMB module contributes to better generalization and discriminative capability. These enhancements were achieved while reducing the total number of parameters, indicating improved computational efficiency. In conclusion, the integration of IncMB significantly boosts the performance of CNN-based pneumonia classifiers, offering a promising direction for the development of lightweight, high-performing diagnostic tools suitable for real-world clinical application, particularly in underserved healthcare environments. Full article
(This article belongs to the Special Issue Engineering Applications of Hybrid Artificial Intelligence Tools)
16 pages, 1193 KB  
Article
Enhancing Biscuit Nutritional Value Through Apple and Sour Cherry Pomace Fortification
by Maria Bianca Mandache, Carmen Mihaela Topală, Loredana Elena Vijan and Sina Cosmulescu
Appl. Sci. 2025, 15(21), 11823; https://doi.org/10.3390/app152111823 - 6 Nov 2025
Viewed by 434
Abstract
This research investigates the use of apple and sour cherry pomace to fortify biscuits, aiming both to improve their nutritional profile and to support the sustainable reuse of fruit processing by-products. Apple and sour cherry pomace, known for their high content of bioactive [...] Read more.
This research investigates the use of apple and sour cherry pomace to fortify biscuits, aiming both to improve their nutritional profile and to support the sustainable reuse of fruit processing by-products. Apple and sour cherry pomace, known for their high content of bioactive compounds, were added to biscuit formulations at inclusion levels of 5%, 10%, and 15%. Enrichment notably boosted the concentration of health-promoting constituents. Biscuits containing 15% sour cherry pomace recorded the highest amounts of polyphenols (475.16 mg gallic acid equivalents/100 g), flavonoids (204.10 mg catechin equivalents/100 g), and anthocyanins (28.58 mg cyanidin-3-glucoside equivalents/100 g). In contrast, biscuits fortified with 15% apple pomace displayed stronger antiradical activity (30.80%) and higher sugar content (46.31 g glucose equivalents/100 g) than their sour cherry counterparts. FTIR spectroscopy confirmed the presence of characteristic vibrations associated with these bioactive compounds in both the pomace and the enriched biscuits. Overall, the results show that incorporating apple and sour cherry pomace is a practical way to create functional biscuits with enhanced nutritional qualities while promoting the sustainable use of fruit industry residues. Full article
19 pages, 1275 KB  
Article
The Possibilities for Using Ash and Slag Waste in Civil Engineering
by Natalia Stankiewicz and Wioleta Rutkowska
Appl. Sci. 2025, 15(21), 11822; https://doi.org/10.3390/app152111822 - 6 Nov 2025
Viewed by 311
Abstract
This research aimed to improve our understanding of how ash and slag waste (ASW) could be used in civil engineering. The present study concentrated on the utilisation of bottom sand (ASW) in cement composites as a replacement for a part of aggregate and [...] Read more.
This research aimed to improve our understanding of how ash and slag waste (ASW) could be used in civil engineering. The present study concentrated on the utilisation of bottom sand (ASW) in cement composites as a replacement for a part of aggregate and the evaluation of the pozzolanic properties of the material. This would enable its use as a binder in non-cementitious or cementitious composites. The basic properties of the modified mortars were investigated. The pozzolanic activity index (PAI) of the bottom sand was also tested using two methods. Analysis of the test results shows that we can replace natural aggregate with 25% bottom sand without significantly impairing the properties of the modified composites. However, the tested ASW does not exhibit pozzolanic activity. Consequently, it should not be used as a binder substitute in cementitious or non-cementitious composites. Full article
19 pages, 5015 KB  
Article
An ANN–Driven Excavatability Chart Integrating GSI and Rock Mass Strength
by Gulseren Dagdelenler
Appl. Sci. 2025, 15(21), 11821; https://doi.org/10.3390/app152111821 - 6 Nov 2025
Viewed by 395
Abstract
Excavation is a common requirement in engineering construction within rock masses. While excavation volumes are generally limited in road slope projects, they may become substantial in large-scale operations such as deep open pit mines. The interaction between time and cost in excavation processes [...] Read more.
Excavation is a common requirement in engineering construction within rock masses. While excavation volumes are generally limited in road slope projects, they may become substantial in large-scale operations such as deep open pit mines. The interaction between time and cost in excavation processes is strongly controlled by rock mass excavatability, which has been recognized as a key factor in project budgets. Since the 1970s, excavatability assessment has therefore attracted considerable research interest in rock mechanics. In this study, the excavatability cases previously plotted on the Geological Strength Index (GSI) versus Uniaxial Compressive Strength of the Rock Mass (σc_rm) diagram in the literature were improved by employing an Artificial Neural Network (ANN). The ANN approach was used to investigate the boundaries between digger, ripper, and hammer+blasting excavation classes within the available case zones defined by GSI–σc_rm data pairs. The prediction performance of the developed rock mass excavatability chart is highly acceptable, with correct classification rates of 91.1% for blasting+hammer and ripper classes, and 87.2% for the ripper class. Considering GSI and σc_rm as the main input parameters, the proposed ANN-oriented excavatability chart is highly acceptable for preliminary equipment selection during the design stage of surface rock mass excavations, including slope cases. Full article
11 pages, 1565 KB  
Article
Internal and External Loads in U16 Women’s Basketball Players Participating in U18 Training Sessions: A Case Study
by Álvaro Bustamante-Sánchez, Enrique Alonso-Perez-Chao, Rubén Portes and Nuno Leite
Appl. Sci. 2025, 15(21), 11820; https://doi.org/10.3390/app152111820 - 6 Nov 2025
Viewed by 389
Abstract
Background: This study aimed to analyze and compare the internal and external training load responses in U16 female basketball players participating in a micro-cycle with the U18 team from the same club. Methods: Twelve U16 and six U18 female basketball players completed two U18-team training sessions (MD-3 and MD-1; 90 min each). The internal load (heart rate metrics) and external load (accelerations, decelerations, speed, and distance) were measured using Polar Team Pro sensors. Differences between groups were analyzed using t-tests and Cohen’s d effect sizes. Results: No significant differences (p > 0.05) were found between age categories for either the internal or external load variables. U16 players showed slightly higher maximum heart rate percentages (96.5% vs. 94.7%, ES = 0.29) but similar average heart rate and time in heart rate zones. For the external load, both groups exhibited comparable values in total distance, average speed, and movement across speed and acceleration/deceleration zones. Effect sizes were mostly small, with moderate differences found in specific acceleration and deceleration zones. Conclusions: U16 players training with the U18 team experienced similar internal and external loads, suggesting that they can cope with the physical and physiological demands of older-age-group training. These findings support the inclusion of younger players in higher-age-group training environments as part of their long-term athletic development. Full article
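The group comparisons above rely on t-tests and Cohen's d effect sizes. A minimal sketch of the Cohen's d computation with a pooled standard deviation, in plain Python (the sample values below are invented for illustration, not data from the study):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d between two independent samples, using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    # Unbiased sample variances (divide by n - 1)
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical %HRmax samples for the two age groups
u16 = [96.0, 97.0, 96.5, 96.8, 96.2]
u18 = [94.5, 95.0, 94.7, 94.9, 94.4]
```

By the usual conventions, |d| around 0.2 is a small effect and around 0.5 a moderate one, which is how values such as ES = 0.29 in the abstract are read.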
(This article belongs to the Section Applied Biosciences and Bioengineering)
