Search Results (4,015)

Search Parameters:
Keywords = boxing performance

26 pages, 1689 KB  
Article
Simulation-Based Evaluation of Incident Commander (IC) Competencies: A Multivariate Analysis of Certification Outcomes in South Korea
by Jin-chan Park, Ji-hoon Suh and Jung-min Chae
Fire 2025, 8(9), 340; https://doi.org/10.3390/fire8090340 (registering DOI) - 25 Aug 2025
Abstract
This study investigates the certification outcomes of intermediate-level ICs at the National Fire Service Academy in South Korea through a comprehensive quantitative analysis of their evaluated competencies. Using assessment data from 141 candidates collected from 2022 to 2024, we examine how scores on six higher-order competencies—comprising 35 sub-competencies—influence pass or fail results. Descriptive statistics, correlation analysis, logistic regression (a statistical model for binary outcomes), random forest modeling (an ensemble decision-tree machine-learning method), and principal component analysis (PCA; a dimensionality reduction technique) were applied to identify significant predictors of certification success. Visualization techniques, including heatmaps, box plots, and importance bar charts, were used to illustrate performance gaps between successful and unsuccessful candidates. Results indicate that competencies related to decision-making under pressure and crisis leadership most strongly correlate with positive outcomes. Furthermore, unsupervised clustering analysis (a data-driven grouping method) revealed distinctive performance patterns among candidates. These findings suggest that current evaluation frameworks effectively differentiate command readiness but also highlight specific skill domains that may require enhanced instructional focus. The study offers practical implications for fire training academies, policymakers, and certification bodies, particularly in refining curriculum design, competency benchmarks, and evaluation criteria to improve fireground leadership training and assessment standards. Full article
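For readers unfamiliar with the listed methods, the following is a minimal sketch of a comparable pipeline (logistic regression, random forest importance ranking, and PCA) in scikit-learn; the synthetic data merely stands in for the 141 candidates and 35 sub-competency scores and is not the study's data.

```python
# Minimal sketch: logistic regression, random forest importance ranking, and PCA
# over competency scores (synthetic stand-in for 141 candidates x 35 sub-competencies).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(141, 35))                       # 35 sub-competency scores per candidate
y = (X[:, :6].mean(axis=1) + rng.normal(0, 0.5, 141) > 0).astype(int)  # pass/fail (synthetic)

X_std = StandardScaler().fit_transform(X)

# Logistic regression: how well do the scores separate pass from fail?
logit = LogisticRegression(max_iter=1000)
print("logistic regression CV accuracy:", cross_val_score(logit, X_std, y, cv=5).mean())

# Random forest: rank sub-competencies by importance.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_std, y)
print("top-5 important features:", np.argsort(rf.feature_importances_)[::-1][:5])

# PCA: compress the 35 scores into a few components for visualization or clustering.
pca = PCA(n_components=2).fit(X_std)
print("explained variance ratio:", pca.explained_variance_ratio_)
```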

40 pages, 48075 KB  
Article
Directional Lighting-Based Deep Learning Models for Crack and Spalling Classification
by Sanjeetha Pennada, Jack McAlorum, Marcus Perry, Hamish Dow and Gordon Dobie
J. Imaging 2025, 11(9), 288; https://doi.org/10.3390/jimaging11090288 (registering DOI) - 25 Aug 2025
Abstract
External lighting is essential for autonomous inspections of concrete structures in low-light environments. However, previous studies have primarily relied on uniformly diffused lighting to illuminate images and faced challenges in detecting complex crack patterns. This paper proposes two novel algorithms that use directional lighting to classify concrete defects. The first method, named fused neural network, uses the maximum intensity pixel-level image fusion technique and selects the maximum intensity pixel values from all directional images for each pixel to generate a fused image. The second proposed method, named multi-channel neural network, generates a five-channel image, with each channel representing the grayscale version of images captured in the Right (R), Down (D), Left (L), Up (U), and Diffused (A) directions, respectively. The proposed multi-channel neural network model achieved the best performance, with accuracy, precision, recall, and F1 score of 96.6%, 96.3%, 97%, and 96.6%, respectively. It also outperformed the FusedNet and other models found in the literature, with no significant change in evaluation time. The results from this work have the potential to improve concrete crack classification in environments where external illumination is required. Future research focuses on extending the concepts of multi-channel and image fusion to white-box techniques. Full article
(This article belongs to the Section AI in Imaging)
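The maximum-intensity pixel-level fusion described above reduces to an element-wise maximum across the directional images; a minimal NumPy sketch, with synthetic arrays standing in for the directional captures, might look like this:

```python
# Minimal sketch of maximum-intensity pixel-level fusion: for each pixel, keep the brightest
# value observed across the directional captures (synthetic arrays stand in for the
# Right/Down/Left/Up/Diffused grayscale images).
import numpy as np

rng = np.random.default_rng(0)
h, w = 224, 224
directional = [rng.integers(0, 256, (h, w), dtype=np.uint8) for _ in range(5)]  # R, D, L, U, A

stack = np.stack(directional, axis=0)            # shape: (5, H, W)
fused = stack.max(axis=0)                        # fused image for the fused-network input

# The multi-channel variant keeps all five grayscale images as separate input channels.
multi_channel = np.transpose(stack, (1, 2, 0))   # shape: (H, W, 5)
print(fused.shape, multi_channel.shape)
```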

11 pages, 2637 KB  
Article
AI Enhances Lung Ultrasound Interpretation Across Clinicians with Varying Expertise Levels
by Seyed Ehsan Seyed Bolouri, Masood Dehghan, Mahdiar Nekoui, Brian Buchanan, Jacob L. Jaremko, Dornoosh Zonoobi, Arun Nagdev and Jeevesh Kapur
Diagnostics 2025, 15(17), 2145; https://doi.org/10.3390/diagnostics15172145 (registering DOI) - 25 Aug 2025
Abstract
Background/Objective: Lung ultrasound (LUS) is a valuable tool for detecting pulmonary conditions, but its accuracy depends on user expertise. This study evaluated whether an artificial intelligence (AI) tool could improve clinician performance in detecting pleural effusion and consolidation/atelectasis on LUS scans. Methods: In this multi-reader, multi-case study, 14 clinicians of varying experience reviewed 374 retrospectively selected LUS scans (cine clips from the PLAPS point, obtained using three different probes) from 359 patients across six centers in the U.S. and Canada. In phase one, readers scored the likelihood (0–100) of pleural effusion and consolidation/atelectasis without AI. After a 4-week washout, they re-evaluated all scans with AI-generated bounding boxes. Performance metrics included area under the curve (AUC), sensitivity, specificity, and Fleiss’ Kappa. Subgroup analyses examined effects by reader experience. Results: For pleural effusion, AUC improved from 0.917 to 0.960, sensitivity from 77.3% to 89.1%, and specificity from 91.7% to 92.9%. Fleiss’ Kappa increased from 0.612 to 0.774. For consolidation/atelectasis, AUC rose from 0.870 to 0.941, sensitivity from 70.7% to 89.2%, and specificity from 85.8% to 89.5%. Kappa improved from 0.427 to 0.756. Conclusions: AI assistance enhanced clinician detection of pleural effusion and consolidation/atelectasis in LUS scans, particularly benefiting less experienced users. Full article
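A rough sketch of the reader-study metrics named above (AUC, sensitivity, specificity, Fleiss' Kappa), computed here on synthetic ratings rather than the study's data, using scikit-learn and statsmodels:

```python
# Minimal sketch of reader-study metrics on synthetic labels and ratings (illustration only).
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, 100)                                   # reference labels (synthetic)
reader_scores = np.clip(truth * 60 + rng.normal(30, 20, 100), 0, 100)  # 0-100 likelihood ratings

auc = roc_auc_score(truth, reader_scores)
pred = (reader_scores >= 50).astype(int)                          # assumed decision threshold
tn, fp, fn, tp = confusion_matrix(truth, pred).ravel()
sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
print(f"AUC={auc:.3f} sens={sensitivity:.3f} spec={specificity:.3f}")

# Fleiss' Kappa across several readers rating the same scans (synthetic ratings).
ratings = rng.integers(0, 2, size=(100, 14))                      # 100 scans x 14 readers
table, _ = aggregate_raters(ratings)
print("Fleiss kappa:", fleiss_kappa(table))
```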

18 pages, 3256 KB  
Article
Facilitated Effects of Closed-Loop Assessment and Training on Trans-Radial Prosthesis User Rehabilitation
by Huimin Hu, Yi Luo, Ling Min, Lei Li and Xing Wang
Sensors 2025, 25(17), 5277; https://doi.org/10.3390/s25175277 (registering DOI) - 25 Aug 2025
Abstract
(1) Background: Integrating assessment with training helps to enhance precision prosthetic rehabilitation of trans-radial amputees. This study aimed to validate a self-developed closed-loop rehabilitation platform combining accurate measurement in comprehensive assessment and immediate interaction in virtual reality (VR) training in refining patient-centered myoelectric prosthesis rehabilitation. (2) Methods: The platform consisted of two modules: a multimodal assessment module and an sEMG-driven VR game training module. The former included clinical scales (OPUS, DASH), task performance metrics (modified Box and Block Test), kinematics analysis (inertial sensors), and surface electromyography (sEMG) recording, verified on six trans-radial amputees and four healthy subjects. The latter aimed for muscle coordination training driven by four-channel sEMG, tested on three amputees. After the 1-week training, task performance and sEMG metrics (wrist flexion/extension activation) were re-evaluated. (3) Results: The sEMG in the residual limb of the amputees improved by 4.8% after the 1-week training, as did the subjects’ number of gold coins or game scores. Subjects uniformly agreed or strongly agreed with all the items on the user questionnaire. In reassessment after training, the average completion time (CT) of all three amputees in both tasks decreased. The CTs of A1 and A3 in the placing task were reduced by 49.52% and 50.61%, respectively, and the CTs for the submitting task were reduced by 19.67% and 55.44%, respectively. The average CT of all three amputees in the ADL task after training was 9.97 s, significantly lower than the pre-training time of 15.17 s. (4) Conclusions: The closed-loop platform promotes patients’ performance in prosthesis motor-control tasks through accurate measurement and immediate interaction according to the sensorimotor recalibration principle, demonstrating a potential tool for precision rehabilitation. Full article
(This article belongs to the Section Wearables)

35 pages, 7622 KB  
Article
Bayesian Optimization Meets Explainable AI: Enhanced Chronic Kidney Disease Risk Assessment
by Jianbo Huang, Long Li, Mengdi Hou and Jia Chen
Mathematics 2025, 13(17), 2726; https://doi.org/10.3390/math13172726 (registering DOI) - 25 Aug 2025
Abstract
Chronic kidney disease (CKD) affects over 850 million individuals worldwide, yet conventional risk stratification approaches fail to capture complex disease progression patterns. Current machine learning approaches suffer from inefficient parameter optimization and limited clinical interpretability. We developed an integrated framework combining advanced Bayesian optimization with explainable artificial intelligence for enhanced CKD risk assessment. Our approach employs XGBoost ensemble learning with intelligent parameter optimization through Optuna (a Bayesian optimization framework) and comprehensive interpretability analysis using SHAP (SHapley Additive exPlanations) to explain model predictions. To address algorithmic “black-box” limitations and enhance clinical trustworthiness, we implemented four-tier risk stratification using stratified cross-validation and balanced evaluation metrics that ensure equitable performance across all patient risk categories, preventing bias toward common cases while maintaining sensitivity for high-risk patients. The optimized model achieved exceptional performance with 92.4% accuracy, 91.9% F1-score, and 97.7% ROC-AUC, significantly outperforming 16 baseline algorithms by 7.9–18.9%. Bayesian optimization reduced computational time by 74% compared to traditional grid search while maintaining robust generalization. Model interpretability analysis identified CKD stage, albumin-creatinine ratio, and estimated glomerular filtration rate as primary predictors, fully aligning with established clinical guidelines. This framework delivers superior predictive accuracy while providing transparent, clinically-meaningful explanations for CKD risk stratification, addressing critical challenges in medical AI deployment: computational efficiency, algorithmic transparency, and equitable performance across diverse patient populations. Full article
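A minimal sketch of the general pattern described above: Bayesian hyperparameter search with Optuna around an XGBoost classifier, followed by SHAP explanations. It runs on synthetic data with illustrative parameter ranges, not the authors' configuration.

```python
# Minimal sketch of Optuna-driven hyperparameter search plus SHAP explanations for XGBoost
# (synthetic data; search space and trial budget are illustrative, not the study's setup).
import optuna
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

def objective(trial):
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 100, 500),
        "max_depth": trial.suggest_int("max_depth", 3, 8),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
    }
    model = xgb.XGBClassifier(eval_metric="logloss", **params)
    return cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)

best = xgb.XGBClassifier(eval_metric="logloss", **study.best_params).fit(X, y)
explainer = shap.TreeExplainer(best)            # SHAP attributes each prediction to features
shap_values = explainer.shap_values(X)
print("best CV ROC-AUC:", study.best_value)
```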

19 pages, 2069 KB  
Article
Learning Guided Binary PSO Algorithm for Feature Selection and Reconstruction of Ultrasound Contrast Images in Endometrial Region Detection
by Zihao Zhang, Yongjun Liu, Haitong Zhao, Yu Zhou, Yifei Xu and Zhengyu Li
Biomimetics 2025, 10(9), 567; https://doi.org/10.3390/biomimetics10090567 (registering DOI) - 25 Aug 2025
Abstract
Accurate identification of the endometrial region is critical for the early detection of endometrial lesions. However, current detection models still face two major challenges when processing endometrial imaging data: (1) In complex and noisy environments, recognition accuracy remains limited, partly due to the insufficient exploitation of color information within the images; (2) Traditional Two-dimensional PCA-based (2DPCA-based) feature selection methods have limited capacity to capture and represent key characteristics of the endometrial region. To address these challenges, this paper proposes a novel algorithm named Feature-Level Image Fusion and Improved Swarm Intelligence Optimization Algorithm (FLFSI), which integrates a learning guided binary particle swarm optimization (BPSO) strategy with an image feature selection and reconstruction framework to enhance the detection of endometrial regions in clinical ultrasound images. Specifically, FLFSI contributes to improving feature selection accuracy and image reconstruction quality, thereby enhancing the overall performance of region recognition tasks. First, we enhance endometrial image representation by incorporating feature engineering techniques that combine structural and color information, thereby improving reconstruction quality and emphasizing critical regional features. Second, the BPSO algorithm is introduced into the feature selection stage, improving the accuracy of feature selection and its global search ability while effectively reducing the impact of redundant features. Furthermore, we refined the BPSO design to accelerate convergence and enhance optimization efficiency during the selection process. The proposed FLFSI algorithm can be integrated into mainstream detection models such as YOLO11 and YOLOv12. When applied to YOLO11, FLFSI achieves 96.6% Box mAP and 87.8% Mask mAP. With YOLOv12, it further improves the Mask mAP to 88.8%, demonstrating excellent cross-model adaptability and robust detection performance. Extensive experimental results validate the effectiveness and broad applicability of FLFSI in enhancing endometrial region detection for clinical ultrasound image analysis. Full article
(This article belongs to the Special Issue Exploration of Bio-Inspired Computing: 2nd Edition)
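As context for the swarm-based feature selection described above, here is a minimal sketch of a plain binary PSO feature selector with a sigmoid transfer function (a generic baseline, not the paper's learning-guided FLFSI variant), run on synthetic data:

```python
# Minimal sketch of plain binary PSO feature selection; fitness is cross-validated
# accuracy of a k-NN classifier on the selected feature subset (synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=30, n_informative=8, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask.astype(bool)], y, cv=3).mean()

n_particles, dim = 20, X.shape[1]
pos = rng.integers(0, 2, (n_particles, dim))          # binary positions = feature masks
vel = rng.normal(0, 1, (n_particles, dim))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(15):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))                 # sigmoid transfer function
    pos = (rng.random((n_particles, dim)) < prob).astype(int)
    fits = np.array([fitness(p) for p in pos])
    improved = fits > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fits[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", np.flatnonzero(gbest), "best CV accuracy:", pbest_fit.max())
```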

26 pages, 30652 KB  
Article
Hybrid ViT-RetinaNet with Explainable Ensemble Learning for Fine-Grained Vehicle Damage Classification
by Ananya Saha, Mahir Afser Pavel, Md Fahim Shahoriar Titu, Afifa Zain Apurba and Riasat Khan
Vehicles 2025, 7(3), 89; https://doi.org/10.3390/vehicles7030089 - 25 Aug 2025
Abstract
Efficient and explainable vehicle damage inspection is essential due to the increasing complexity and volume of vehicular incidents. Traditional manual inspection approaches are not time-effective, prone to human error, and lead to inefficiencies in insurance claims and repair workflows. Existing deep learning methods, such as CNNs, often struggle with generalization, require large annotated datasets, and lack interpretability. This study presents a robust and interpretable deep learning framework for vehicle damage classification, integrating Vision Transformers (ViTs) and ensemble detection strategies. The proposed architecture employs a RetinaNet backbone with a ViT-enhanced detection head, implemented in PyTorch using the Detectron2 object detection technique. It is pretrained on COCO weights and fine-tuned through focal loss and aggressive augmentation techniques to improve generalization under real-world damage variability. The proposed system applies the Weighted Box Fusion (WBF) ensemble strategy to refine detection outputs from multiple models, offering improved spatial precision. To ensure interpretability and transparency, we adopt numerous explainability techniques—Grad-CAM, Grad-CAM++, and SHAP—offering semantic and visual insights into model decisions. A custom vehicle damage dataset with 4500 images has been built, consisting of approximately 60% curated images collected through targeted web scraping and crawling covering various damage types (such as bumper dents, panel scratches, and frontal impacts), along with 40% COCO dataset images to support model generalization. Comparative evaluations show that Hybrid ViT-RetinaNet achieves superior performance with an F1-score of 84.6%, mAP of 87.2%, and 22 FPS inference speed. In an ablation analysis, WBF, augmentation, transfer learning, and focal loss significantly improve performance, with focal loss increasing F1 by 6.3% for underrepresented classes and COCO pretraining boosting mAP by 8.7%. Additional architectural comparisons demonstrate that our full hybrid configuration not only maintains competitive accuracy but also achieves up to 150 FPS, making it well suited for real-time use cases. Robustness tests under challenging conditions, including real-world visual disturbances (smoke, fire, motion blur, varying lighting, and occlusions) and artificial noise (Gaussian; salt-and-pepper), confirm the model’s generalization ability. This work contributes a scalable, explainable, and high-performance solution for real-world vehicle damage diagnostics. Full article
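Weighted Box Fusion, as used above, merges overlapping detections from several models; a minimal sketch using the open-source ensemble-boxes package (box coordinates normalized to [0, 1]; all values invented for illustration):

```python
# Minimal sketch of Weighted Box Fusion across two detectors' outputs, using the
# open-source ensemble-boxes package (coordinates normalized to [0, 1]; values made up).
from ensemble_boxes import weighted_boxes_fusion

# Detector A and detector B predictions for one image: [x1, y1, x2, y2] per box.
boxes_list = [
    [[0.10, 0.20, 0.45, 0.60], [0.50, 0.50, 0.90, 0.95]],   # model A
    [[0.12, 0.22, 0.47, 0.58]],                              # model B
]
scores_list = [[0.90, 0.75], [0.85]]
labels_list = [[0, 1], [0]]          # e.g. 0 = dent, 1 = scratch (illustrative classes)

boxes, scores, labels = weighted_boxes_fusion(
    boxes_list, scores_list, labels_list,
    weights=[2, 1],                  # trust model A twice as much (illustrative weighting)
    iou_thr=0.55, skip_box_thr=0.1,
)
print(boxes, scores, labels)
```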

25 pages, 4739 KB  
Article
YOLOv5s-F: An Improved Algorithm for Real-Time Monitoring of Small Targets on Highways
by Guo Jinhao, Geng Guoqing, Sun Liqin and Ji Zhifan
World Electr. Veh. J. 2025, 16(9), 483; https://doi.org/10.3390/wevj16090483 - 25 Aug 2025
Abstract
To address the challenges of real-time monitoring via highway vehicle-mounted cameras—specifically, the difficulty in detecting distant pedestrians and vehicles in real time—this study proposes an enhanced object detection algorithm, YOLOv5s-F. Firstly, the FasterNet network structure is adopted to improve the model’s runtime speed. Secondly, the attention mechanism BRA, which is derived from the Transformer algorithm, and a 160 × 160 small-object detection layer are introduced to enhance small target detection performance. Thirdly, the improved upsampling operator CARAFE is incorporated to boost the localization and classification accuracy of small objects. Finally, Focal EIoU is employed as the localization loss function to accelerate model training convergence. Quantitative experiments on high-speed sequences show that Focal EIoU reduces bounding box jitter by 42.9% and improves tracking stability (consecutive frame overlap) by 11.4% compared to CIoU, while accelerating convergence by 17.6%. Results show that compared with the YOLOv5s baseline network, the proposed algorithm reduces computational complexity and parameter count by 10.1% and 24.6%, respectively, while increasing detection speed and accuracy by 15.4% and 2.1%, respectively. Transfer learning experiments on the VisDrone2019 and Highway-100k datasets demonstrate that the algorithm outperforms YOLOv5s in average precision across all target categories. On NVIDIA Jetson Xavier NX, YOLOv5s-F achieves 32 FPS after quantization, meeting the real-time requirements of in-vehicle monitoring. The YOLOv5s-F algorithm not only meets the real-time detection and accuracy requirements for small objects but also exhibits strong generalization capabilities. This study clarifies core challenges in highway small-target detection and achieves accuracy–speed improvements via three key innovations, with all experiments being reproducible. Researchers who need the code and dataset of this study may contact the authors by email. Full article
(This article belongs to the Special Issue Recent Advances in Autonomous Vehicles)

30 pages, 6393 KB  
Review
Electrochemical Sensors for Chloramphenicol: Advances in Food Safety and Environmental Monitoring
by Matiar M. R. Howlader, Wei-Ting Ting and Md Younus Ali
Pharmaceuticals 2025, 18(9), 1257; https://doi.org/10.3390/ph18091257 - 24 Aug 2025
Abstract
Excessive use of antibiotics can lead to antibiotic resistance, posing a significant threat to human health and the environment. Chloramphenicol (CAP), once widely used, has been banned in many regions for over 20 years due to its toxicity. Detecting CAP residues in food products is crucial for regulating safe use and preventing unnecessary antibiotic exposure. Electrochemical sensors are low-cost, sensitive, and easily detect CAP. This paper reviews recent research on electrochemical sensors for CAP detection, with a focus on the materials and fabrication techniques employed. The sensors are evaluated based on key performance parameters, including limit of detection, sensitivity, linear range, selectivity, and the ability to perform simultaneous detection. Specifically, we highlight the use of metal and carbon-based electrode modifications, including gold nanoparticles (AuNPs), nickel–cobalt (Ni-Co) hollow nano boxes, platinum–palladium (Pt-Pd), graphene (Gr), and covalent organic frameworks (COFs), as well as molecularly imprinted polymers (MIPs) such as polyaniline (PANI) and poly(o-phenylenediamine) (P(o-PD)). The mechanisms by which these modifications enhance CAP detection are discussed, including improved conductivity, increased surface-to-volume ratio, and enhanced binding site availability. The reviewed sensors demonstrated promising results, with some exhibiting high selectivity and sensitivity, and the effective detection of CAP in complex sample matrices. This review aims to support the development of next-generation sensors for antibiotic monitoring and contribute to global efforts to combat antibiotic resistance. Full article
(This article belongs to the Special Issue Application of Biosensors in Pharmaceutical Research)

17 pages, 471 KB  
Review
On the Continuum of Foundational Validity: Lessons from Eyewitness Science for Latent Fingerprint Examination
by Adele Quigley-McBride and T. L. Blackall
Behav. Sci. 2025, 15(9), 1145; https://doi.org/10.3390/bs15091145 - 22 Aug 2025
Viewed by 105
Abstract
Whether forensic disciplines have established foundational validity—sufficient empirical evidence that a method reliably produces a predictable level of performance—has become a question of growing interest among scientists and legal professionals. This paper evaluates the foundational validity of two sources of forensic evidence relied upon in criminal cases: eyewitness identification decisions and latent fingerprint examiners’ conclusions. Importantly, establishing foundational validity and estimating accuracy are conceptually and functionally different. Though eyewitnesses can often be mistaken, identification procedures recommended by researchers are grounded in decades of programmatic research that justifies the use of methods that improve the reliability of eyewitness decisions. In contrast, latent print research suggests that expert examiners can be very accurate, but foundational validity in this field is limited by an overreliance on a handful of black-box studies, the dismissal of smaller-scale, yet high-quality, research, and a tendency to treat foundational validity as a fixed destination rather than a continuum. Critically, the lack of a standardized method means that any estimates of examiner performance are not tied to any specific approach to latent print examination. Despite promising early work, until the field adopts and tests well-defined procedures, foundational validity in latent print examination will remain a goal still to be achieved. Full article
(This article belongs to the Special Issue Forensic and Legal Cognition)

26 pages, 625 KB  
Article
Statistical Optimization in the Fermentation Stage for Organic Ethanol: A Sustainable Approach
by Eliani Sosa-Gómez, Irenia Gallardo Aguilar, Ana Celia de Armas Mártínez and Guillermo Sosa-Gómez
Processes 2025, 13(9), 2675; https://doi.org/10.3390/pr13092675 (registering DOI) - 22 Aug 2025
Viewed by 114
Abstract
The growing demand for organic products is having a transformative effect on the alcoholic beverage industry. This work investigates the possibility of producing organic ethanol only from sugarcane final molasses as a nutrient vector and Saccharomyces cerevisiae in the absence of inorganic nitrogen or phosphorus compounds. The Plackett–Burman design included the pseudo-factors (X4–X6) due to the experimental design requirements. These factors represent the possible influence of uncontrolled variables, such as pH or nutrient interactions. Subsequently, a predictive quadratic model using Box–Behnken design with the real variables (sugar concentration, yeast dose, and incubation time) was developed and validated internally (R² = 0.977); given the lack of replications and the sample size, this value should be interpreted with caution and not as generalizable predictive evidence. Further experiments with replications and cross-validation will be required to confirm its predictive capacity. Through statistical optimization, the maximum cell proliferation of 432 × 10⁶ cells/mL was achieved under optimal conditions of 8 °Brix sugar concentration, 20 g/L dry yeast, and 3 h incubation time. The optimized fermentation process produced 7.8% v/v ethanol with a theoretical fermentation efficiency of 78.52%, an alcohol-to-substrate yield of 62.15%, and a productivity of 1.86 g/L·h, representing significant improvements of 21.9%, 24.6%, 31.0%, and 10.1%, respectively, compared with non-optimized conditions. The fermentation time was reduced from 48 to 42 h while maintaining superior performance. These results demonstrate the technical feasibility of producing organic ethanol using certified organic molasses and no chemical additives. Overall, these findings should be regarded as proof of concept. All experiments were single-run without biological or technical replicates; consequently, the optimization and models are preliminary and require confirmation with replicated experiments and external validation. Full article
(This article belongs to the Section Chemical Processes and Systems)
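For orientation, a three-factor Box–Behnken design with a quadratic response-surface fit can be sketched as follows; the factor ranges and responses are synthetic placeholders, not the study's measurements.

```python
# Minimal sketch of a three-factor Box-Behnken design (coded levels) and a quadratic
# response-surface fit; factor ranges and responses are synthetic placeholders.
import numpy as np
from itertools import combinations, product
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Box-Behnken: +/-1 on every pair of factors with the third held at 0, plus center points.
runs = []
for i, j in combinations(range(3), 2):
    for a, b in product((-1, 1), repeat=2):
        row = [0, 0, 0]
        row[i], row[j] = a, b
        runs.append(row)
coded = np.array(runs + [[0, 0, 0]] * 3, dtype=float)          # 12 edge runs + 3 center runs

lows, highs = np.array([6.0, 10.0, 1.0]), np.array([10.0, 30.0, 5.0])  # °Brix, g/L yeast, h (illustrative)
real = lows + (coded + 1) / 2 * (highs - lows)                 # decode to real factor levels

y = np.random.default_rng(1).normal(300, 50, len(real))        # synthetic cell counts (x10^6 cells/mL)

quad = PolynomialFeatures(degree=2, include_bias=False)        # linear, interaction, and squared terms
model = LinearRegression().fit(quad.fit_transform(real), y)
print("R^2 on the design points:", round(model.score(quad.transform(real), y), 3))
```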

21 pages, 9378 KB  
Article
Integrated Approach for the Optimization of the Sustainable Extraction of Polyphenols from a South American Abundant Edible Plant: Neltuma ruscifolia
by Giuliana S. Seling, Roy C. Rivero, Camila V. Sisi, Verónica M. Busch and M. Pilar Buera
Foods 2025, 14(17), 2927; https://doi.org/10.3390/foods14172927 - 22 Aug 2025
Viewed by 169
Abstract
The pods from Neltuma ruscifolia (vinal), an underutilized species, are rich in bioactive functional compounds. However, the extraction procedures to obtain the highest proportion of these compounds, considering sustainability aspects, have not been optimized. This study aimed to optimize and compare three affordable extraction methods—dynamic maceration (DME), ultrasound-assisted extraction (UE), and microwave-assisted extraction (ME)—to obtain enriched extracts. The effects of temperature, ethanol-to-water ratio in the solvent, extraction time, and frequency (for ME) were evaluated using a Box–Behnken design and response surface methodology to optimize total polyphenolic content (TPC), total flavonoids (TF), and antioxidant capacity (DPPH). Energy consumption and carbon footprints were also assessed, and phenolic compounds in the optimized extracts were identified by HPLC. The ethanol-to-water ratio emerged as the most influential factor, showing synergistic effects with both time and temperature, enabling optimal yields at intermediate ethanol concentrations. Gallic acid, rutin, and theobromine were found to be the most abundant components, followed by cinnamic, caffeic, and chlorogenic acids. Although UE exhibited the lowest energy consumption (0.64 ± 0.03 Wh/mg of TPC), the simple and easily implementable DME—optimized at 40 min, 50 °C, and 42% ethanol—proved to be the most efficient method, combining high extractive performance (TPC 1432 mg GAE/100 g Dw), reduced solvent use, and intermediate energy efficiency (1.84 Wh/mg of TPC). These findings highlight the potential of vinal as a natural source of bioactive ingredients obtained through simple and cost-effective techniques adaptable to small producers while underscoring the value of experimental design in optimizing sustainable extraction technologies and elucidating the interactions between key processing factors. Full article

20 pages, 5323 KB  
Article
An Object-Based Deep Learning Approach for Building Height Estimation from Single SAR Images
by Babak Memar, Luigi Russo, Silvia Liberata Ullo and Paolo Gamba
Remote Sens. 2025, 17(17), 2922; https://doi.org/10.3390/rs17172922 - 22 Aug 2025
Viewed by 177
Abstract
The accurate estimation of building heights using very-high-resolution (VHR) synthetic aperture radar (SAR) imagery is crucial for various urban applications. This paper introduces a deep learning (DL)-based methodology for automated building height estimation from single VHR COSMO-SkyMed images: an object-based regression approach based on bounding box detection followed by height estimation. This model was trained and evaluated on a unique multi-continental dataset comprising eight geographically diverse cities across Europe, North and South America, and Asia, employing a cross-validation strategy to explicitly assess out-of-distribution (OOD) generalization. The results demonstrate highly promising performance, particularly on European cities where the model achieves a Mean Absolute Error (MAE) of approximately one building story (2.20 m in Munich), significantly outperforming recent state-of-the-art methods in similar OOD scenarios. Despite the increased variability observed when generalizing to cities in other continents, particularly in Asia with its distinct urban typologies and the prevalence of high-rise structures, this study underscores the significant potential of DL for robust cross-city and cross-continental transfer learning in building height estimation from single VHR SAR data. Full article

17 pages, 891 KB  
Article
LLaVA-Pose: Keypoint-Integrated Instruction Tuning for Human Pose and Action Understanding
by Dewen Zhang, Tahir Hussain, Wangpeng An and Hayaru Shouno
Sensors 2025, 25(16), 5213; https://doi.org/10.3390/s25165213 - 21 Aug 2025
Viewed by 222
Abstract
Current vision–language models (VLMs) are well-adapted for general visual understanding tasks. However, they perform inadequately when handling complex visual tasks related to human poses and actions due to the lack of specialized vision–language instruction-following data. We introduce a method for generating such data by integrating human keypoints with traditional visual features such as captions and bounding boxes, enabling more precise understanding of human-centric scenes. Our approach constructs a dataset comprising 200,328 samples tailored to fine-tune models for human-centric tasks, focusing on three areas: conversation, detailed description, and complex reasoning. We establish an Extended Human Pose and Action Understanding Benchmark (E-HPAUB) to assess model performance on human pose and action understanding. We fine-tune the LLaVA-1.5-7B model using this dataset and evaluate our resulting LLaVA-Pose model on the benchmark, achieving significant improvements. Experimental results show an overall improvement of 33.2% compared to the original LLaVA-1.5-7B model. These findings highlight the effectiveness of keypoint-integrated data in enhancing multimodal models for human-centric visual understanding. Full article
(This article belongs to the Section Intelligent Sensors)

17 pages, 1601 KB  
Article
Influence of Anthropometric Characteristics and Muscle Performance on Punch Impact
by Manuel Pinto, João Crisóstomo, Christopher Kirk, Javier Abián-Vicén and Luís Monteiro
Sports 2025, 13(8), 281; https://doi.org/10.3390/sports13080281 - 21 Aug 2025
Viewed by 191
Abstract
Despite the known relevance of punch impact in boxing, limited evidence exists regarding how anthropometric and muscle performance variables contribute to it. This study investigated the relationship between anthropometric characteristics, muscle power and strength performance, and punch impact power in 69 boxing practitioners (mean ± SD age: 27.0 ± 6.1 years). Anthropometric variables (body height (BH), armspan (AS), body mass (BM)) and muscle power and strength tests (countermovement jump (CMJ), one repetition maximum in bench press (1RM BP), and handgrip strength (HS)) were assessed. Punch impact power was assessed with PowerKube (PK), a specific device designed to measure punch impact power. Punch impact power was positively correlated with BH, AS, and BM. Linear regression indicated that BH and AS explained about 36% of the variance in Straight punch impact power and 30–34% in Hook punch impact power. BM showed weaker predictive capacity, explaining 10% of the variance in Straight punch impact power and 11% in Hook punch impact power. When comparing punch impact power differences across groups with varying BH, AS, and BM, it was found that groups with High BH exhibited higher punch impact power than the groups with Low and Medium BH for both Straight and Hook punches. For AS, the High AS group also demonstrated higher punch impact power, with similar trends for BM, where significant differences were observed only between the High and Low BM groups. Additionally, our findings confirm significant relationships between anthropometric characteristics, muscle power, and strength performance. These findings highlight the importance of a comprehensive assessment of anthropometric profiles, alongside muscle power and strength evaluations, to better predict punch impact power. This approach provides valuable insights for boxing training and may also inform exercise programming for the general population. Full article
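The regression reported above (variance in punch impact power explained by height and arm span) follows the standard ordinary-least-squares pattern; a minimal sketch on synthetic numbers:

```python
# Minimal sketch of regressing punch impact power on height and arm span; all numbers
# are synthetic and the coefficients are arbitrary, for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
height = rng.normal(178, 7, 69)                                # cm, 69 practitioners (synthetic)
armspan = height + rng.normal(2, 3, 69)                        # cm, correlated with height
power = 20 * height + 15 * armspan + rng.normal(0, 400, 69)    # arbitrary impact-power units

X = np.column_stack([height, armspan])
model = LinearRegression().fit(X, power)
print("R^2 (variance explained):", round(model.score(X, power), 2))
print("coefficients (height, arm span):", model.coef_)
```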
