
Search Results (5,647)

Search Parameters:
Keywords = information metrics

25 pages, 5388 KiB  
Article
Numerical and Experimental Evaluation of Axial Load Transfer in Deep Foundations Within Stratified Cohesive Soils
by Şahin Çaglar Tuna
Buildings 2025, 15(15), 2723; https://doi.org/10.3390/buildings15152723 - 1 Aug 2025
Abstract
This study presents a numerical and experimental evaluation of axial load transfer mechanisms in deep foundations constructed in stratified cohesive soils in İzmir, Türkiye. A full-scale bi-directional static load test equipped with strain gauges was conducted on a barrette pile to investigate depth-dependent mobilization of shaft resistance. A finite element model was developed and calibrated using field-observed load–settlement and strain data to replicate the pile–soil interaction and deformation behavior. The analysis revealed a shaft-dominated load transfer behavior, with progressive mobilization concentrated in intermediate-depth cohesive layers. Sensitivity analysis identified the undrained stiffness (Eu) as the most influential parameter governing pile settlement. A strong polynomial correlation was established between calibrated Eu values and SPT N60, offering a practical tool for preliminary design. Additionally, strain energy distribution was evaluated as a supplementary metric, enhancing the interpretation of mobilization zones beyond conventional stress-based methods. The integrated approach provides valuable insights for performance-based foundation design in layered cohesive ground, supporting the development of site-calibrated numerical models informed by full-scale testing data. Full article
(This article belongs to the Section Building Structures)

27 pages, 9675 KiB  
Article
Assessing Model Trade-Offs in Agricultural Remote Sensing: A Review of Machine Learning and Deep Learning Approaches Using Almond Crop Mapping
by Mashoukur Rahaman, Jane Southworth, Yixin Wen and David Keellings
Remote Sens. 2025, 17(15), 2670; https://doi.org/10.3390/rs17152670 - 1 Aug 2025
Abstract
This study presents a comprehensive review and comparative analysis of traditional machine learning (ML) and deep learning (DL) models for land cover classification in agricultural remote sensing. We evaluate the reported successes, trade-offs, and performance metrics of ML and DL models across diverse agricultural contexts. Building on this foundation, we apply both model types to the specific case of almond crop field identification in California’s Central Valley using Landsat data. DL models, including U-Net, MANet, and DeepLabv3+, achieve high accuracy rates of 97.3% to 97.5%, yet our findings demonstrate that conventional ML models—such as Decision Tree, K-Nearest Neighbor, and Random Forest—can reach comparable accuracies of 96.6% to 96.8%. Importantly, the ML models were developed using data from a single year, while DL models required extensive training data spanning 2008 to 2022. Our results highlight that traditional ML models offer robust classification performance with substantially lower computational demands, making them especially valuable in resource-constrained settings. This paper underscores the need for a balanced approach in model selection—one that weighs accuracy alongside efficiency. The findings contribute actionable insights for agricultural land cover mapping and inform ongoing model development in the geospatial sciences. Full article

14 pages, 483 KiB  
Review
Artificial Intelligence and Its Impact on the Management of Lumbar Degenerative Pathology: A Narrative Review
by Alessandro Trento, Salvatore Rapisarda, Nicola Bresolin, Andrea Valenti and Enrico Giordan
Medicina 2025, 61(8), 1400; https://doi.org/10.3390/medicina61081400 - 1 Aug 2025
Abstract
In this narrative review, we explore the role of artificial intelligence (AI) in managing lumbar degenerative conditions, a topic that has recently garnered significant interest. The use of AI-based solutions in spine surgery is particularly appealing due to its potential applications in preoperative planning and outcome prediction. This study aims to clarify the impact of artificial intelligence models on the diagnosis and prognosis of common types of degenerative conditions: lumbar disc herniation, spinal stenosis, and eventually spinal fusion. Additionally, the study seeks to identify predictive factors for lumbar fusion surgery based on a review of the literature from the past 10 years. From the literature search, 96 articles were examined. The literature on this topic appears to be consistent, describing various models that show promising results, particularly in predicting outcomes. However, most studies adopt a retrospective approach and often lack detailed information about imaging features, intraoperative findings, and postoperative functional metrics. Additionally, the predictive performance of these models varies significantly, and few studies include external validation. The application of artificial intelligence in treating degenerative spine conditions, while valid and promising, is still in a developmental phase. However, over the last decade, there has been an exponential growth in studies related to this subject, which is beginning to pave the way for its systematic use in clinical practice. Full article

8 pages, 347 KiB  
Article
Localizing Synergies of Hidden Factors in Complex Systems: Resting Brain Networks and HeLa Gene Expression Profile as Case Studies
by Marlis Ontivero-Ortega, Gorana Mijatovic, Luca Faes, Fernando E. Rosas, Daniele Marinazzo and Sebastiano Stramaglia
Entropy 2025, 27(8), 820; https://doi.org/10.3390/e27080820 - 1 Aug 2025
Abstract
Factor analysis is a well-known statistical method to describe the variability of observed variables in terms of a smaller number of unobserved latent variables called factors. Even though latent factors are conceptually independent of each other, their influence on the observed variables is often joint and synergistic. We propose to quantify the synergy of the joint influence of factors on the observed variables using O-information, a recently introduced metric to assess high-order dependencies in complex systems; in the proposed framework, latent factors and observed variables are jointly analyzed in terms of their joint informational character. Two case studies are reported: analyzing resting fMRI data, we find that DMN and FP networks show the highest synergy, consistent with their crucial role in higher cognitive functions; concerning HeLa cells, we find that the most synergistic gene is STK-12 (AURKB), suggesting that this gene is involved in controlling the HeLa cell cycle. We believe that our approach, representing a bridge between factor analysis and the field of high-order interactions, will find wide application across several domains. Full article
(This article belongs to the Special Issue Entropy in Biomedical Engineering, 3rd Edition)
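For intuition about the O-information metric this entry relies on, the quantity has a closed form for Gaussian variables. The sketch below is a generic illustration under a Gaussian assumption, not the authors' pipeline; the entropy formula is the standard multivariate-Gaussian one, and the synthetic data are invented for the demo:

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (nats) of a Gaussian with covariance matrix cov."""
    cov = np.atleast_2d(np.asarray(cov, dtype=float))
    n = cov.shape[0]
    return 0.5 * np.log(((2 * np.pi * np.e) ** n) * np.linalg.det(cov))

def o_information(data):
    """O-information of the columns of data (samples x n), Gaussian assumption.

    Omega = (n - 2) * H(X) + sum_j [H(X_j) - H(X without X_j)].
    Positive values indicate redundancy-dominated and negative values
    synergy-dominated high-order dependencies.
    """
    n = data.shape[1]
    cov = np.cov(data, rowvar=False)
    omega = (n - 2) * gaussian_entropy(cov)
    for j in range(n):
        rest = [i for i in range(n) if i != j]
        omega += gaussian_entropy(cov[j, j]) - gaussian_entropy(cov[np.ix_(rest, rest)])
    return omega

# Three noisy copies of one latent signal are redundancy-dominated:
rng = np.random.default_rng(0)
shared = rng.standard_normal((5000, 1))
copies = np.hstack([shared + 0.1 * rng.standard_normal((5000, 1)) for _ in range(3)])
print(o_information(copies) > 0)
```

For independent columns the two terms cancel and the O-information is approximately zero, which makes a convenient sanity check for any implementation.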

21 pages, 97817 KiB  
Article
Compression of 3D Optical Encryption Using Singular Value Decomposition
by Kyungtae Park, Min-Chul Lee and Myungjin Cho
Sensors 2025, 25(15), 4742; https://doi.org/10.3390/s25154742 - 1 Aug 2025
Abstract
In this paper, we propose a compression method for optical encryption using singular value decomposition (SVD). Double random phase encryption (DRPE), which employs two distinct random phase masks, is adopted as the optical encryption technique. Since the encrypted data in DRPE have the same size as the input data and consist of complex values, a compression technique is required to improve data efficiency. To address this issue, we introduce SVD as a compression method. SVD decomposes any matrix into simpler components, such as a unitary matrix, a rectangular diagonal matrix, and a complex unitary matrix. By leveraging this property, the encrypted data generated by DRPE can be effectively compressed. However, this compression may lead to some loss of information in the decrypted data. To mitigate this loss, we employ volumetric computational reconstruction based on integral imaging. As a result, the proposed method enhances the visual quality, compression ratio, and security of DRPE simultaneously. To validate the effectiveness of the proposed method, we conduct both computer simulations and optical experiments. The performance is evaluated quantitatively using peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and peak sidelobe ratio (PSR) as evaluation metrics. Full article
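As a rough illustration of the truncated-SVD compression idea this abstract describes (a sketch only, not the authors' DRPE implementation: the test image, the rank, and the PSNR helper are assumptions, and real DRPE data would be complex-valued):

```python
import numpy as np

def svd_compress(matrix, k):
    """Rank-k approximation: keep the k largest singular values and vectors."""
    U, s, Vt = np.linalg.svd(matrix, full_matrices=False)
    # Storing U[:, :k], s[:k], Vt[:k] takes k*(rows + cols + 1) numbers
    # instead of rows*cols for the original matrix.
    return (U[:, :k] * s[:k]) @ Vt[:k]

def psnr(reference, approximation, peak=1.0):
    """Peak signal-to-noise ratio in dB, one of the metrics cited above."""
    mse = np.mean((reference - approximation) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
image = rng.random((64, 64))          # stand-in for (real-valued) encrypted data
low_rank = svd_compress(image, k=8)   # ~4x fewer stored numbers at rank 8
print(psnr(image, svd_compress(image, 32)) > psnr(image, low_rank))
```

Keeping more singular values always reduces reconstruction error, so the rank `k` directly trades compression ratio against decrypted-image quality.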

34 pages, 1543 KiB  
Article
Smart Money, Greener Future: AI-Enhanced English Financial Text Processing for ESG Investment Decisions
by Junying Fan, Daojuan Wang and Yuhua Zheng
Sustainability 2025, 17(15), 6971; https://doi.org/10.3390/su17156971 - 31 Jul 2025
Abstract
Emerging markets face growing pressures to integrate sustainable business practices while maintaining economic growth, particularly in addressing environmental challenges and achieving carbon neutrality goals. Financial information extraction from English texts becomes crucial for supporting green finance initiatives, Environmental, Social, and Governance (ESG) compliance, and sustainable investment decisions in these markets. This paper presents FinATG, an AI-driven autoregressive framework for extracting sustainability-related financial information from English texts, specifically designed to support emerging markets in their transition toward sustainable development. The framework addresses the complex challenges of processing ESG reports, green bond disclosures, carbon footprint assessments, and sustainable investment documentation prevalent in emerging economies. FinATG introduces a domain-adaptive span representation method fine-tuned on sustainability-focused English financial corpora, implements constrained decoding mechanisms based on green finance regulations, and integrates FinBERT with autoregressive generation for end-to-end extraction of environmental and governance information. While achieving competitive performance on standard benchmarks, FinATG’s primary contribution lies in its architecture, which prioritizes correctness and compliance for the high-stakes financial domain. Experimental validation demonstrates FinATG’s effectiveness with entity F1 scores of 88.5 and REL F1 scores of 80.2 on standard English datasets, while achieving superior performance (85.7–86.0 entity F1, 73.1–74.0 REL+ F1) on sustainability-focused financial datasets. The framework particularly excels in extracting carbon emission data, green investment relationships, and ESG compliance indicators, achieving average AUC and RGR scores of 0.93 and 0.89, respectively. By automating the extraction of sustainability metrics from complex English financial documents, FinATG supports emerging markets in meeting international ESG standards, facilitating green finance flows, and enhancing transparency in sustainable business practices, ultimately contributing to their sustainable development goals and climate action commitments. Full article
15 pages, 2158 KiB  
Article
A Data-Driven Approach for Internal Crack Prediction in Continuous Casting of HSLA Steels Using CTGAN and CatBoost
by Mengying Geng, Haonan Ma, Shuangli Liu, Zhuosuo Zhou, Lei Xing, Yibo Ai and Weidong Zhang
Materials 2025, 18(15), 3599; https://doi.org/10.3390/ma18153599 - 31 Jul 2025
Abstract
Internal crack defects in high-strength low-alloy (HSLA) steels during continuous casting pose significant challenges to downstream processing and product reliability. However, due to the inherent class imbalance in industrial defect datasets, conventional machine learning models often suffer from poor sensitivity to minority class instances. This study proposes a predictive framework that integrates conditional tabular generative adversarial network (CTGAN) for synthetic minority sample generation and CatBoost for classification. A dataset of 733 process records was collected from a continuous caster, and 25 informative features were selected using mutual information. CTGAN was employed to augment the minority class (crack) samples, achieving a balanced training set. Feature distribution analysis and principal component visualization indicated that the synthetic data effectively preserved the statistical structure of the original minority class. Compared with the other machine learning methods, including KNN, SVM, and MLP, CatBoost achieved the highest metrics, with an accuracy of 0.9239, precision of 0.9041, recall of 0.9018, and F1-score of 0.9022. Results show that CTGAN-based augmentation improves classification performance across all models. These findings highlight the effectiveness of GAN-based augmentation for imbalanced industrial data and validate the CTGAN–CatBoost model as a robust solution for online defect prediction in steel manufacturing. Full article
(This article belongs to the Special Issue Latest Developments in Advanced Machining Technologies for Materials)
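The accuracy, precision, recall, and F1 figures quoted in this abstract all derive from a binary confusion matrix; a minimal, self-contained reminder (the counts below are invented for illustration, not the paper's data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # of predicted cracks, how many are real
    recall = tp / (tp + fn)      # of real cracks, how many were caught
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for a crack/no-crack classifier:
acc, prec, rec, f1 = classification_metrics(tp=45, fp=5, fn=10, tn=140)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))  # → 0.925 0.9 0.818 0.857
```

This also illustrates why imbalanced data needs care: with few positive (crack) samples, accuracy can stay high while recall collapses, which is the motivation for the CTGAN oversampling step described above.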

15 pages, 2428 KiB  
Article
Using Large Language Models to Simulate History Taking: Implications for Symptom-Based Medical Education
by Cheong Yoon Huh, Jongwon Lee, Gibaeg Kim, Yerin Jang, Hye-seung Ko, Min Jung Suh, Sumin Hwang, Ho Jin Son, Junha Song, Soo-Jeong Kim, Kwang Joon Kim, Sung Il Kim, Chang Oh Kim and Yeo Gyeong Ko
Information 2025, 16(8), 653; https://doi.org/10.3390/info16080653 - 31 Jul 2025
Abstract
Medical education often emphasizes theoretical knowledge, limiting students’ opportunities to practice history taking, a structured interview that elicits relevant patient information before clinical decision making. Large language models (LLMs) offer novel solutions by generating simulated patient interviews. This study evaluated the educational potential of LLM-generated history-taking dialogues, focusing on clinical validity and diagnostic diversity. Chest pain was chosen as a representative case given its frequent presentation and importance for differential diagnosis. A fine-tuned Gemma-3-27B, specialized for medical interviews, was compared with GPT-4o-mini, a freely accessible LLM, in generating multi-branching history-taking dialogues, with Claude-3.5 Sonnet inferring diagnoses from these dialogues. The dialogues were assessed using a Chest Pain Checklist (CPC) and entropy-based metrics. Gemma-3-27B outperformed GPT-4o-mini, generating significantly more high-quality dialogues (90.7% vs. 76.5%). Gemma-3-27B produced diverse and focused diagnoses, whereas GPT-4o-mini generated broader but less specific patterns. For demographic information, such as age and sex, Gemma-3-27B showed significant shifts in dialogue patterns and diagnoses aligned with real-world epidemiological trends. These findings suggest that LLMs, particularly those fine-tuned for medical tasks, are promising educational tools for generating diverse, clinically valid interview scenarios that enhance clinical reasoning in history taking. Full article

25 pages, 573 KiB  
Review
Challenges and Opportunities in Using Fish Metrics for Reservoir Water Quality Evaluation
by Alexandre Moreira, Sara Rodrigues, Lucas Ferreira, Nuno E. Formigo and Sara C. Antunes
Water 2025, 17(15), 2274; https://doi.org/10.3390/w17152274 - 30 Jul 2025
Abstract
The Water Framework Directive (WFD) was designed to protect the quality of all water resources. For reservoirs, the ecological potential classification assesses biological parameters, evaluating only the phytoplankton community. Thus, this study aimed to evaluate the effectiveness of using fish communities to determine water quality in reservoirs. A literature review was conducted to gather information on how fish community data were integrated into reservoir water quality assessment under the WFD. This work includes an exploratory case study of the Aguieira Reservoir (Portugal), evaluating the ichthyofauna community, along with physical, chemical, and biological assessment of the water. The results of the review show that fish abundance and composition (sensitive metrics) should be used to develop ecological indices for assessing water quality in reservoirs. However, the effects of anthropogenic pressures and invasive species are not included in the calculation of most proposed indices. The case study serves as an illustrative example: it shows low fish abundance, an impoverished community composition, and a high percentage of invasive species, indicating poor water quality according to the fish-based biotic index (F-IBIP). Nevertheless, including these metrics in the classification of ecological potential can help guide restoration strategies to mitigate the effects of anthropogenic pressures. Full article
(This article belongs to the Section Water Quality and Contamination)

21 pages, 563 KiB  
Article
Optimized Interdisciplinary Research Team Formation Using a Genetic Algorithm and Publication Metadata Records
by Christian-Daniel Curiac, Mihai Micea, Traian-Radu Plosca, Daniel-Ioan Curiac and Alex Doboli
AI 2025, 6(8), 171; https://doi.org/10.3390/ai6080171 - 30 Jul 2025
Abstract
Forming interdisciplinary research teams is challenging, especially when the pool of candidates is large and/or the addressed research projects require multi-disciplinary expertise. Based on their previous research outputs, like published work, a data-driven team formation procedure selects the researchers that are likely to work well together while covering all areas and offering all skills required by the multi-disciplinary topic. The description of the research team formation problem proposed in this paper uses novel quantitative metrics about the team candidates computed from bibliographic metadata records. The proposed methodology first analyzes the metadata fields that provide useful information and then computes four synthetic indicators regarding candidates’ skills and their interpersonal traits. Interdisciplinary teams are formed by solving a complex combinatorial multi-objective weighted set cover optimization problem, defined as equations involving the synthetic indicators. Problem solving uses the NSGA-II genetic algorithm. The proposed methodology is validated and compared with other similar approaches using a dataset on researchers from Politehnica University of Timisoara extracted from the IEEE Xplore database. Experimental results show that the method can identify potential research teams in situations for which other related algorithms fail. Full article

19 pages, 7161 KiB  
Article
Dynamic Snake Convolution Neural Network for Enhanced Image Super-Resolution
by Weiqiang Xin, Ziang Wu, Qi Zhu, Tingting Bi, Bing Li and Chunwei Tian
Mathematics 2025, 13(15), 2457; https://doi.org/10.3390/math13152457 - 30 Jul 2025
Abstract
Image super-resolution (SR) is essential for enhancing image quality in critical applications, such as medical imaging and satellite remote sensing. However, existing methods are often limited in their ability to effectively process and integrate multi-scale information, from fine textures to global structures. To address these limitations, this paper proposes DSCNN, a dynamic snake convolution neural network for enhanced image super-resolution. DSCNN optimizes feature extraction and network architecture to enhance both performance and efficiency. To improve feature extraction, the core innovation is a feature extraction and enhancement module with dynamic snake convolution, which dynamically adjusts the convolution kernel’s shape and position to better fit the image’s geometric structures. To optimize the network’s structure, DSCNN employs an enhanced residual network framework that utilizes parallel convolutional layers and a global feature fusion mechanism to further strengthen feature extraction capability and gradient flow efficiency. Additionally, the network incorporates a SwishReLU-based activation function and a multi-scale convolutional concatenation structure. This multi-scale design effectively captures both local details and global image structure, enhancing SR reconstruction. In summary, the proposed DSCNN outperforms existing methods in both objective metrics and visual perception (e.g., it achieved optimal PSNR and SSIM results on the Set5 ×4 dataset). Full article
(This article belongs to the Special Issue Structural Networks for Image Application)

30 pages, 37977 KiB  
Article
Text-Guided Visual Representation Optimization for Sensor-Acquired Video Temporal Grounding
by Yun Tian, Xiaobo Guo, Jinsong Wang and Xinyue Liang
Sensors 2025, 25(15), 4704; https://doi.org/10.3390/s25154704 - 30 Jul 2025
Abstract
Video temporal grounding (VTG) aims to localize a semantically relevant temporal segment within an untrimmed video based on a natural language query. The task continues to face challenges arising from cross-modal semantic misalignment, which is largely attributed to redundant visual content in sensor-acquired video streams, linguistic ambiguity, and discrepancies in modality-specific representations. Most existing approaches rely on intra-modal feature modeling, processing video and text independently throughout the representation learning stage. However, this isolation undermines semantic alignment by neglecting the potential of cross-modal interactions. In practice, a natural language query typically corresponds to spatiotemporal content in video signals collected through camera-based sensing systems, encompassing a particular sequence of frames and its associated salient subregions. We propose a text-guided visual representation optimization framework tailored to enhance semantic interpretation over video signals captured by visual sensors. This framework leverages textual information to focus on spatiotemporal video content, thereby narrowing the cross-modal gap. Built upon the unified cross-modal embedding space provided by CLIP, our model leverages video data from sensing devices to structure representations and introduces two dedicated modules to semantically refine visual representations across spatial and temporal dimensions. First, we design a Spatial Visual Representation Optimization (SVRO) module to learn spatial information within intra-frames. It selects salient patches related to the text, capturing more fine-grained visual details. Second, we introduce a Temporal Visual Representation Optimization (TVRO) module to learn temporal relations from inter-frames. Temporal triplet loss is employed in TVRO to enhance attention on text-relevant frames and capture clip semantics. Additionally, a self-supervised contrastive loss is introduced at the clip–text level to improve inter-clip discrimination by maximizing semantic variance during training. Experiments on Charades-STA, ActivityNet Captions, and TACoS, widely used benchmark datasets, demonstrate that our method outperforms state-of-the-art methods across multiple metrics. Full article
(This article belongs to the Section Sensing and Imaging)

34 pages, 1156 KiB  
Systematic Review
Mathematical Modelling and Optimization Methods in Geomechanically Informed Blast Design: A Systematic Literature Review
by Fabian Leon, Luis Rojas, Alvaro Peña, Paola Moraga, Pedro Robles, Blanca Gana and Jose García
Mathematics 2025, 13(15), 2456; https://doi.org/10.3390/math13152456 - 30 Jul 2025
Abstract
Background: Rock–blast design is a canonical inverse problem that joins elastodynamic partial differential equations (PDEs), fracture mechanics, and stochastic heterogeneity. Objective: Guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol, a systematic review of mathematical methods for geomechanically informed blast modelling and optimisation is provided. Methods: A Scopus–Web of Science search (2000–2025) retrieved 2415 records; semantic filtering and expert screening reduced the corpus to 97 studies. Topic modelling with Bidirectional Encoder Representations from Transformers Topic (BERTopic) and bibliometrics organised them into (i) finite-element and finite–discrete element simulations, including arbitrary Lagrangian–Eulerian (ALE) formulations; (ii) geomechanics-enhanced empirical laws; and (iii) machine-learning surrogates and multi-objective optimisers. Results: High-fidelity simulations delimit blast-induced damage with ≤0.2 m mean absolute error; extensions of the Kuznetsov–Ram equation cut median-size mean absolute percentage error (MAPE) from 27% to 15%; Gaussian-process and ensemble learners reach a coefficient of determination (R² > 0.95) while providing closed-form uncertainty; Pareto optimisers lower peak particle velocity (PPV) by up to 48% without productivity loss. Synthesis: Four themes emerge: surrogate-assisted PDE-constrained optimisation, probabilistic domain adaptation, Bayesian model fusion for digital-twin updating, and entropy-based energy metrics. Conclusions: Persisting challenges in scalable uncertainty quantification, coupled discrete–continuous fracture solvers, and rigorous fusion of physics-informed and data-driven models position blast design as a fertile test bed for advances in applied mathematics, numerical analysis, and machine-learning theory. Full article

15 pages, 273 KiB  
Article
Use of Household Apparent Food Intake Data to Estimate Micronutrient Inadequacy in Comparison to the 24-h Recall Data Among Women of Reproductive Age in Kasungu District, Malawi
by Alexander A. Kalimbira, Zione Kalumikiza-Chikumbu, Gareth Osman, Bridget Mkama, Edward J. M. Joy, Elaine L. Ferguson, Lucia Segovia de la Revilla, Louise E. Ander, Sarah Pedersen, Omar Dary, Jennifer Yourkavitch and Monica Woldt
Nutrients 2025, 17(15), 2485; https://doi.org/10.3390/nu17152485 - 30 Jul 2025
Abstract
Objective: The aim of this study was to compare micronutrient intake and inadequacy estimates using household consumption and expenditure survey (HCES) and quantitative 24-h recall (24HR) data among women of reproductive age (WRA) in Kasungu district, Malawi. Methods: We conducted a secondary data analysis utilizing HCES dietary data from a subsample of households in rural areas of Kasungu district, sourced from the 2019/20 Malawi Fifth Integrated Household Survey (n = 183); 24HR data were obtained from WRA in a community-based Addressing Hidden Hunger with Agronomy (AHHA) trial in the same district (n = 177). Micronutrient intakes and inadequacy were estimated under two alternative scenarios of large-scale food fortification (LSFF). We standardized apparent nutrient intakes from the HCES data using the adult female equivalent metric. Results: Estimated prevalence of micronutrient inadequacy fell within 20 percentage points between HCES and 24HR for iron (Fe), zinc (Zn), and vitamins B2 and B9 under both no-fortification and fortification scenarios. There were discrepancies for the remaining B vitamins, which were consistently large for vitamin B3. Conclusions: In the absence of 24HR data, HCES data can be used to make inferences about some micronutrient intakes and inadequacies among rural WRA in Malawi and to inform decisions regarding LSFF, including vehicle selection and coverage. However, additional efforts are needed to improve HCES for dietary nutrient surveillance given existing limitations. Full article
(This article belongs to the Section Nutrition Methodology & Assessment)
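The abstract above standardizes household-level nutrient acquisition by adult female equivalents (AFEs). A minimal sketch of that standardization, assuming a reference adult-female energy requirement of 2100 kcal/day and placeholder member requirements (not the values used in the study):

```python
ADULT_FEMALE_KCAL = 2100.0  # assumed reference energy requirement (kcal/day)

def afe_weight(energy_requirement_kcal: float) -> float:
    """Express one household member's energy requirement in AFE units."""
    return energy_requirement_kcal / ADULT_FEMALE_KCAL

def apparent_intake_per_afe(household_nutrient_total: float,
                            member_energy_reqs: list) -> float:
    """Apparent daily nutrient intake per adult female equivalent."""
    total_afe = sum(afe_weight(e) for e in member_energy_reqs)
    return household_nutrient_total / total_afe

# Hypothetical household of two adults and one child, acquiring 21 mg
# of iron per day; requirements in kcal/day are illustrative.
members = [2100.0, 2600.0, 1300.0]
print(apparent_intake_per_afe(21.0, members))  # mg of iron per AFE per day
```

Dividing by total AFEs rather than headcount credits each member in proportion to their energy needs, which is what lets household acquisition data approximate an adult woman's intake.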
19 pages, 660 KiB  
Article
Exploring the Relationship Between Game Performance and Physical Demands in Youth Male Basketball Players
by Javier Espasa-Labrador, Carlos Martínez-Rubio, Franc García, Azahara Fort-Vanmeerhaeghe, Jordi Guarch and Julio Calleja-González
J. Funct. Morphol. Kinesiol. 2025, 10(3), 293; https://doi.org/10.3390/jfmk10030293 - 29 Jul 2025
Abstract
Background: Understanding the relationship between physical demands and game performance is essential to optimize player development and management in basketball. This study aimed to examine the association between game performance and physical demands in youth male basketball players. Methods: Fifteen players (16.3 ± 0.7 years) from a Spanish 4th division team were monitored over seven official games. Game performance variables were extracted from official statistics, including traditional and advanced metrics. Physical demands were monitored using an Electronic Performance Tracking System device, combining a positioning system and inertial sensors. Partial correlations, controlling for minutes played, were calculated to explore associations between physical demands and performance variables, both for the entire team and by playing position. Results: Significant correlations between physical demands and game performance were observed. Points scored correlated strongly with total distance and high-intensity accelerations, while assists correlated with high-intensity decelerations. Inertial metrics, such as player load and the number of jumps, showed large correlations with points, two-point attempts, and the efficiency rating. Positional analysis revealed stronger and more numerous correlations for centers compared to guards and forwards. Inertial sensor-derived metrics exhibited a greater number and strength of correlations than positioning metrics. Conclusions: Game performance and physical demands are intrinsically related, with specific patterns varying by playing position. Inertial sensors provide valuable complementary information to positioning systems for assessing physical demands in basketball. These findings can assist practitioners in tailoring monitoring and training strategies to optimize performance and manage player workload effectively. Full article
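The study above uses partial correlations controlling for minutes played. A minimal sketch of the residual-regression form of partial correlation, with simulated (not study) data in which both variables depend on minutes played:

```python
import numpy as np

def partial_corr(x, y, z):
    """Partial correlation of x and y controlling for z (residual method)."""
    x, y, z = (np.asarray(a, float) for a in (x, y, z))
    Z = np.column_stack([np.ones_like(z), z])          # design matrix [1, z]
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residuals of x on z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residuals of y on z
    return float(np.corrcoef(rx, ry)[0, 1])

# Illustrative data: both points and distance scale with minutes played,
# so their raw correlation is inflated by playing time.
rng = np.random.default_rng(0)
minutes = rng.uniform(10, 35, 50)
distance = 120 * minutes + rng.normal(0, 50, 50)   # total distance (m)
points = 0.4 * minutes + rng.normal(0, 2, 50)      # points scored

print(round(partial_corr(points, distance, minutes), 3))
```

Regressing out minutes played before correlating removes the shared exposure-time component, so the reported associations reflect intensity rather than simply how long a player was on court.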