Search Results (3,516)

Search Parameters:
Keywords = competition experiments

19 pages, 5195 KB  
Article
Study on Experiment and Molecular Dynamics Simulation of Variation Laws of Crude Oil Distribution States in Nanopores
by Yukun Chen, Hui Zhao, Yongbin Wu, Rui Guo, Yaoli Shi and Yuhui Zhou
Appl. Sci. 2025, 15(21), 11308; https://doi.org/10.3390/app152111308 - 22 Oct 2025
Viewed by 24
Abstract
This study is based on an experiment and a molecular dynamics simulation to investigate the distribution states and property variation laws of crude oil in nanopores, aiming to provide theoretical support for efficient unconventional oil and gas development. Focus is placed on the distribution mechanisms of multicomponent crude oil in oil-wet siltstone (SiO2) and dolomitic rock (dolomite, CaMg3(CO3)4) nanopores, with comprehensive consideration of key factors including pore size, rock type, and CO2 flooding on crude oil distribution at 353 K and 40 MPa. It is revealed that aromatic hydrocarbons (toluene) in multicomponent crude oil are preferentially adsorbed on pore walls due to π-π interactions, while n-hexane diffuses toward the pore center driven by hydrophobic effects. Pore size significantly affects the distribution states of crude oil: ordered adsorption structures form for n-hexane in 2 nm pores, whereas distributions become dispersed in 9 nm pores, with adsorption energy changing as pore size increases. Dolomite exhibits a significantly higher adsorption energy than SiO2 due to surface roughness and calcium–magnesium ion crystal fields. CO2 weakens the interaction between crude oil and pore walls through competitive adsorption and reduces viscosity via dissolution, promoting crude oil mobility. Nuclear magnetic resonance (NMR) experiments further verified the effect of CO2 on crude oil stripping in pores. This study not only clarifies the collaborative adsorption mechanisms and displacement regulation laws of multi-component crude oil in nanopores but also provides a solid theoretical basis for CO2 injection strategies in unconventional reservoir development. Full article
(This article belongs to the Special Issue Advances and Innovations in Unconventional Enhanced Oil Recovery)

31 pages, 2757 KB  
Article
Human–Machine Collaborative Learning for Streaming Data-Driven Scenarios
by Fan Yang, Xiaojuan Zhang and Zhiwen Yu
Sensors 2025, 25(21), 6505; https://doi.org/10.3390/s25216505 - 22 Oct 2025
Viewed by 28
Abstract
Deep learning has been broadly applied in many fields and has greatly improved efficiency compared to traditional approaches. However, it does not resolve issues well when there is a lack of training samples, and in some varying cases it cannot give a clear output. Human beings and machines working in a collaborative and equal mode to address complicated streaming data-driven tasks can achieve higher accuracy and clearer explanations. A novel framework is proposed which integrates human intelligence and machine intelligent computing, taking advantage of both strengths to work out complex tasks. Human beings are responsible for the highly decisive aspects of the task and provide empirical feedback to the model, whereas the machines undertake the repetitive computing aspects. The framework is executed in a flexible way through an interactive human–machine cooperation mode, and is more robust for recognizing hard samples. We tested the framework in video anomaly detection, person re-identification, and sound event detection scenarios, and found that the human–machine collaborative learning mechanism obtained much better accuracy. The final decision is confirmed after fusing human knowledge with deep-learning processing. In addition, we conducted extensive experiments to verify the effectiveness of the framework and obtained competitive performance at the cost of a small amount of human intervention. The approach is a new form of machine learning, especially in dynamic and untrustworthy conditions. Full article
(This article belongs to the Special Issue Smart Sensing System for Intelligent Human Computer Interaction)

22 pages, 3177 KB  
Article
RECAD: Retinex-Based Efficient Channel Attention with Dark Area Detection for Underwater Images Enhancement
by Tianchi Zhang, Qiang Liu, Hongwei Qin and Xing Liu
J. Mar. Sci. Eng. 2025, 13(11), 2027; https://doi.org/10.3390/jmse13112027 - 22 Oct 2025
Viewed by 39
Abstract
Focusing on visual target detection for Autonomous Underwater Vehicles (AUVs), this paper investigates enhancement methods for weakly illuminated underwater images, which typically suffer from blurring, color distortion, and non-uniform illumination. Although deep learning-based approaches have received considerable attention, existing methods still face limitations such as insufficient feature extraction, poor detail detection, and high computational costs. To address these issues, we propose RECAD—a lightweight and efficient underwater image enhancement method based on Retinex theory. The approach incorporates a dark region detection mechanism to significantly improve feature extraction from low-light areas, along with an efficient channel attention module to reduce computational complexity. A residual learning strategy is adopted in the image reconstruction stage to effectively preserve structural consistency, achieving an SSIM value of 0.91. Extensive experiments on the UIEB and LSUI benchmark datasets demonstrate that RECAD outperforms state-of-the-art models including FUnIEGAN and U-Transformer, achieving a high SSIM of 0.91 and competitive UIQM scores (up to 3.19), while improving PSNR by 3.77 dB and 0.69–1.09 dB, respectively, and attaining a leading inference speed of 97 FPS, all while using only 0.42 M parameters, which substantially reduces computational resource consumption. Full article
(This article belongs to the Section Ocean Engineering)
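The PSNR gains reported above follow the standard peak signal-to-noise ratio definition. A minimal sketch, assuming 8-bit images with peak value 255 (the toy arrays below are illustrative, not from the paper):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Two toy "images" differing by a constant offset of 10 -> MSE = 100
a = np.full((4, 4), 100.0)
b = np.full((4, 4), 110.0)
print(round(psnr(a, b), 2))  # 10*log10(255^2/100) ≈ 28.13
```

A 3.77 dB improvement therefore means the mean squared error dropped by a factor of about 10^0.377 ≈ 2.4.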

22 pages, 3720 KB  
Article
Adaptive Curve-Guided Convolution for Robust 3D Hand Pose Estimation from Corrupted Point Clouds
by Lihuang She, Haonan Sun, Hui Zou, Hanze Liang, Xiangli Guo and Yehan Chen
Electronics 2025, 14(21), 4133; https://doi.org/10.3390/electronics14214133 - 22 Oct 2025
Viewed by 115
Abstract
3D hand pose estimation has achieved remarkable progress in human computer interaction and computer vision; however, real-world hand point clouds often suffer from structural distortions such as partial occlusions, sensor noise, and environmental interference, which significantly degrade the performance of conventional point cloud-based methods. To address these challenges, this study proposes a curve fitting-based framework for robust 3D hand pose estimation from corrupted point clouds, integrating an Adaptive Sampling (AS) module and a Hand-Curve Guide Convolution (HCGC) module. The AS module dynamically selects structurally informative key points according to local density and anatomical importance, mitigating sampling bias in distorted regions, while the HCGC module generates guided curves along fingers and employs dynamic momentum encoding and cross-suppression strategies to preserve anatomical continuity and capture fine-grained geometric features. Extensive experiments on the MSRA, ICVL, and NYU datasets demonstrate that our method consistently outperforms state-of-the-art approaches under local point removal across fixed missing-point ratios ranging from 30% to 50% and noise interference, achieving an average Robustness Curve Area (RCA) of 30.8, outperforming advanced methods such as TriHorn-Net. Notably, although optimized for corrupted point clouds, the framework also achieves competitive accuracy on intact datasets, demonstrating that enhanced robustness does not compromise general performance. These results validate that adaptive curve guided local structure modeling provides a reliable and generalizable solution for realistic 3D hand pose estimation and emphasize its potential for deployment in practical applications where point cloud quality cannot be guaranteed. Full article

14 pages, 1187 KB  
Article
Scapular Dyskinesis and Associated Factors in Adult Elite Swimmers
by Se Young Joo and Young Kyun Kim
Medicina 2025, 61(10), 1885; https://doi.org/10.3390/medicina61101885 - 21 Oct 2025
Viewed by 160
Abstract
Background and Objectives: Swimmers are repeatedly exposed to overhead shoulder movements, which overload the surrounding soft tissue and may contribute to shoulder pain. These repetitive demands have also been implicated in the development of scapular dyskinesis (SD). This cross-sectional study aimed to determine the prevalence of SD and to examine its associations with extrinsic and intrinsic factors in adult elite swimmers. Materials and Methods: Fifty competitive swimmers (mean age, 23.9 years; mean training experience, 13.6 years) participated in this study. SD was graded using the Scapular Dyskinesis Test. Extrinsic factors included dominant side, breathing side, years of experience, and primary stroke. Intrinsic factors included Lateral Scapular Slide Test (LSST) distance, pectoralis minor length, glenohumeral internal rotation (IR) range of motion (ROM), shoulder pain, and Penn Shoulder Score. Results: SD was identified in 46% of swimmers. Years of experience and primary stroke showed no significant association with SD; however, obvious SD was observed only in butterfly and freestyle specialists. Increasing SD severity was associated with shorter pectoralis minor length (p < 0.001) and reduced IR ROM (p = 0.013), particularly in the obvious group. Although SD was not related to shoulder pain, it was significantly related to lower Penn Shoulder Scores (p = 0.039). Conclusions: SD is common in adult elite swimmers and is associated with shortened pectoralis minor, reduced IR ROM, and impaired shoulder function, but not with pain. Full article
(This article belongs to the Special Issue Sports Injuries: Prevention, Treatment and Rehabilitation)

21 pages, 2245 KB  
Article
Frequency-Aware and Interactive Spatial-Temporal Graph Convolutional Network for Traffic Flow Prediction
by Guoqing Teng, Han Wu, Hao Wu, Jiahao Cao and Meng Zhao
Appl. Sci. 2025, 15(20), 11254; https://doi.org/10.3390/app152011254 - 21 Oct 2025
Viewed by 169
Abstract
Accurate traffic flow prediction is pivotal for intelligent transportation systems; yet, existing spatial-temporal graph neural networks (STGNNs) struggle to jointly capture the long-term structural stability, short-term dynamics, and multi-scale temporal patterns of road networks. To address these shortcomings, we propose FISTGCN, a Frequency-Aware Interactive Spatial-Temporal Graph Convolutional Network. FISTGCN enriches raw traffic flow features with learnable spatial and temporal embeddings, thereby providing comprehensive spatial-temporal representations for subsequent modeling. Specifically, it utilizes an interactive dynamic graph convolutional block that generates a time-evolving fused adjacency matrix by combining adaptive and dynamic adjacency matrices. It then applies dual sparse graph convolutions with cross-scale interactions to capture multi-scale spatial dependencies. The gated spectral block projects the input features into the frequency domain and adaptively separates low- and high-frequency components using a learnable threshold. It then employs learnable filters to extract features from different frequency bands and adopts a gating mechanism to adaptively fuse low- and high-frequency information, thereby dynamically highlighting short-term fluctuations or long-term trends. Extensive experiments on four benchmark datasets demonstrate that FISTGCN delivers state-of-the-art predictive accuracy while maintaining competitive computational efficiency. Full article
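The gated spectral block described above can be sketched in simplified form: project a signal into the frequency domain, split it at a threshold into low- and high-frequency bands, and fuse the bands through a gate. In FISTGCN the threshold, filters, and gate are learnable; in this sketch they are fixed scalars for illustration:

```python
import numpy as np

def spectral_split(x, cutoff):
    """Split a 1-D signal into low- and high-frequency components via the
    real FFT; `cutoff` is the bin index separating the two bands.
    (FISTGCN learns this threshold and per-band filters; here both are fixed.)"""
    spec = np.fft.rfft(x)
    low_spec, high_spec = spec.copy(), spec.copy()
    low_spec[cutoff:] = 0.0   # keep the long-term trend
    high_spec[:cutoff] = 0.0  # keep the short-term fluctuations
    low = np.fft.irfft(low_spec, n=len(x))
    high = np.fft.irfft(high_spec, n=len(x))
    return low, high

def gated_fuse(low, high, gate):
    """Convex gating between the bands; `gate` in [0, 1] stands in for the
    learned gating mechanism."""
    return gate * low + (1.0 - gate) * high

t = np.linspace(0, 1, 64, endpoint=False)
x = np.sin(2 * np.pi * t) + 0.3 * np.sin(2 * np.pi * 20 * t)  # slow + fast component
low, high = spectral_split(x, cutoff=5)
assert np.allclose(low + high, x)  # the two bands partition the signal exactly
```

Because the two bands zero out disjoint frequency bins, their sum reconstructs the input exactly, so the gate only redistributes emphasis between trends and fluctuations.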

24 pages, 797 KB  
Article
Towards a Sustainable Workforce in Big Data Analytics: Skill Requirements Analysis from Online Job Postings Using Neural Topic Modeling
by Fatih Gurcan, Ahmet Soylu and Akif Quddus Khan
Sustainability 2025, 17(20), 9293; https://doi.org/10.3390/su17209293 - 20 Oct 2025
Viewed by 222
Abstract
Big data analytics has become a cornerstone of modern industries, driving advancements in business intelligence, competitive intelligence, and data-driven decision-making. This study applies Neural Topic Modeling (NTM) using the BERTopic framework and N-gram-based textual content analysis to examine job postings related to big data analytics in real-world contexts. A structured analytical process was conducted to derive meaningful insights into workforce trends and skill demands in the big data analytics domain. First, expertise roles and tasks were identified by analyzing job titles and responsibilities. Next, key competencies were categorized into analytical, technical, developer, and soft skills and mapped to corresponding roles. Workforce characteristics such as job types, education levels, and experience requirements were examined to understand hiring patterns. In addition, essential tasks, tools, and frameworks in big data analytics were identified, providing insights into critical technical proficiencies. The findings show that big data analytics requires expertise in data engineering, machine learning, cloud computing, and AI-driven automation. They also emphasize the importance of continuous learning and skill development to sustain a future-ready workforce. By connecting academia and industry, this study provides valuable implications for educators, policymakers, and corporate leaders seeking to strengthen workforce sustainability in the era of big data analytics. Full article
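The N-gram-based textual content analysis mentioned above reduces, at its core, to counting frequent word sequences across the job-posting corpus. A minimal sketch of that step (BERTopic itself is not reproduced here, and the sample postings are invented):

```python
from collections import Counter
import re

def top_ngrams(texts, n=2, k=3):
    """Return the k most frequent word n-grams across a corpus --
    the N-gram frequency step of the textual content analysis."""
    counts = Counter()
    for text in texts:
        tokens = re.findall(r"[a-z]+", text.lower())
        counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return counts.most_common(k)

# Hypothetical job-posting snippets, not from the study's dataset:
postings = [
    "big data engineer with machine learning experience",
    "machine learning and big data pipelines",
    "cloud computing for big data analytics",
]
print(top_ngrams(postings))  # ('big', 'data') dominates with 3 occurrences
```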

41 pages, 2159 KB  
Systematic Review
Predicting Website Performance: A Systematic Review of Metrics, Methods, and Research Gaps (2010–2024)
by Mohammad Ghattas, Suhail Odeh and Antonio M. Mora
Computers 2025, 14(10), 446; https://doi.org/10.3390/computers14100446 - 20 Oct 2025
Viewed by 289
Abstract
Website performance directly impacts user experience, trust, and competitiveness. While numerous studies have proposed evaluation methods, there is still no comprehensive synthesis that integrates performance metrics with predictive models. This study conducts a systematic literature review (SLR) following the PRISMA framework across seven academic databases (2010–2024). From 6657 initial records, 30 high-quality studies were included after rigorous screening and quality assessment. In addition, 59 website performance metrics were identified and validated through an expert survey, resulting in 16 core indicators. The review highlights a dominant reliance on traditional evaluation metrics (e.g., Load Time, Page Size, Response Time) and reveals limited adoption of machine learning and deep learning approaches. Most existing studies focus on e-government and educational websites, with little attention to e-commerce, healthcare, and industry domains. Furthermore, the geographic distribution of research remains uneven, with a concentration in Asia and limited contributions from North America and Africa. This study contributes by (i) consolidating and validating a set of 16 critical performance metrics, (ii) critically analyzing current methodologies, and (iii) identifying gaps in domain coverage and intelligent prediction models. Future research should prioritize cross-domain benchmarks, integrate machine learning for scalable predictions, and address the lack of standardized evaluation protocols. Full article
(This article belongs to the Section Human–Computer Interactions)

25 pages, 2523 KB  
Article
Reference-Less Evaluation of Machine Translation: Navigating Through the Resource-Scarce Scenarios
by Archchana Sindhujan, Diptesh Kanojia and Constantin Orăsan
Information 2025, 16(10), 916; https://doi.org/10.3390/info16100916 - 18 Oct 2025
Viewed by 154
Abstract
Reference-less evaluation of machine translation, or Quality Estimation (QE), is vital for low-resource language pairs where high-quality references are often unavailable. In this study, we investigate segment-level QE methods comparing encoder-based models such as MonoTransQuest, CometKiwi, and xCOMET with various decoder-based methods (Tower+, ALOPE, and other instruction-fine-tuned language models). Our work primarily focused on utilizing eight low-resource language pairs, involving both English on the source side and the target side of the translation. Results indicate that while fine-tuned encoder-based models remain strong performers across most low-resource language pairs, decoder-based Large Language Models (LLMs) show clear improvements when adapted through instruction tuning. Importantly, the ALOPE framework further enhances LLM performance beyond standard fine-tuning, demonstrating its effectiveness in narrowing the gap with encoder-based approaches and highlighting its potential as a viable strategy for low-resource QE. In addition, our experiments demonstrate that with adaptation techniques such as LoRA (Low-Rank Adapters) and quantization, decoder-based QE models can be trained with competitive GPU memory efficiency, though they generally require substantially more disk space than encoder-based models. Our findings highlight the effectiveness of encoder-based models for low-resource QE and suggest that advances in cross-lingual modeling will be key to improving LLM-based QE in the future. Full article
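The memory efficiency attributed to LoRA above comes from freezing the pretrained weight and training only a low-rank update, W' = W + (alpha/r)·B·A. A toy NumPy sketch of that idea (dimensions and names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 16, 2, 4.0  # toy sizes; r << min(d_out, d_in)

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialised

def lora_forward(x):
    """Adapted layer: frozen W plus the scaled low-rank update."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B zero-initialised the adapter is a no-op, as in LoRA at training step 0:
assert np.allclose(lora_forward(x), W @ x)
```

Only A and B (r·(d_in + d_out) parameters) are trained, versus d_out·d_in for full fine-tuning, which is where the GPU memory savings originate.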
31 pages, 950 KB  
Article
How Does the Green Credit Policy Influence Corporate Carbon Information Disclosure?—A Quasi-Natural Experiment Based on the Green Credit Guidelines
by Xiuxiu Chen and Jing Peng
Sustainability 2025, 17(20), 9256; https://doi.org/10.3390/su17209256 - 18 Oct 2025
Viewed by 218
Abstract
The 2012 Green Credit Guidelines (GCG) release is used as a quasi-natural experiment in this study, which employs a sample of Chinese A-share-listed businesses from 2008 to 2023. We use the difference-in-differences method to examine the impact of enacting green credit policies on corporate carbon information disclosure. The findings demonstrate that green credit policies affect carbon information disclosure through several channels: the signal transmission effect, the external pressure effect, and the environmental ethics effect. Furthermore, market competition has exerted a positive influence on the implementation of these policies. The heterogeneity results suggest that the policies’ beneficial impact is more significant in non-state-owned enterprises, firms with substantial financial constraints, and non-high-tech firms. Additionally, the study finds that increased disclosure of carbon information can elevate firm value and reduce audit fees. These findings contribute to theoretical research on green credit policies and carbon information disclosure, offering important guidance for relevant authorities in standardizing green credit operations and promoting carbon information transparency. Full article
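The difference-in-differences method used above compares the change in the treated group (firms affected by the GCG) against the change in the control group around the 2012 policy. The basic estimator on group means, with purely illustrative numbers (not the study's estimates):

```python
# Difference-in-differences on toy group means: the estimate is
# (treated_post - treated_pre) - (control_post - control_pre).
means = {
    ("treated", "pre"): 0.30, ("treated", "post"): 0.55,
    ("control", "pre"): 0.28, ("control", "post"): 0.38,
}

def did(means):
    """DiD estimate: treated change net of the control-group trend."""
    treated = means[("treated", "post")] - means[("treated", "pre")]
    control = means[("control", "post")] - means[("control", "pre")]
    return treated - control

print(round(did(means), 2))  # 0.25 - 0.10 = 0.15
```

The control-group change absorbs the common time trend, so the residual 0.15 is attributed to the policy under the parallel-trends assumption.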

22 pages, 2321 KB  
Article
Biohydrogen Production from Industrial Waste: The Role of Pretreatment Methods
by Weronika Cieciura-Włoch, Wiktoria Hajduk, Marta Ikert, Tobiasz Konopski, Min Hein Khant, Jarosław Domański, Bolin Zhang and Dorota Kręgiel
Energies 2025, 18(20), 5497; https://doi.org/10.3390/en18205497 - 18 Oct 2025
Viewed by 218
Abstract
This study aimed to investigate the effectiveness of dark fermentation in biohydrogen production from agro-industrial wastes, including apple pomace, brewer’s grains, molasses, and potato powder, subjected to different pretreatment methods. The experiments were conducted at a laboratory scale, using 1000 cm3 anaerobic reactors at a temperature of 35 °C and anaerobic sludge as the inoculum. The highest yield of hydrogen was obtained from pre-treated apple pomace (101 cm3/g VS). Molasses, a less complex substrate compared to the other raw materials, produced 25% more hydrogen yield following pretreatment. Methanogens are sensitive to high temperatures and low-pH conditions. Nevertheless, methane constituted 1–6% of the total biogas under these conditions. The key factor was appropriate treatment of the inoculum, to limit competition from methanogens. Increasing the inoculum dose from 150 cm3/dm3 to 250 cm3/dm3 had no further effect on biogas production. The physicochemical parameters and VFA data confirmed the stability and usefulness of activated sludge as a source of microbial cultures for H2 production via dark fermentation. Full article

26 pages, 10675 KB  
Article
DFAS-YOLO: Dual Feature-Aware Sampling for Small-Object Detection in Remote Sensing Images
by Xiangyu Liu, Shenbo Zhou, Jianbo Ma, Yumei Sun, Jianlin Zhang and Haorui Zuo
Remote Sens. 2025, 17(20), 3476; https://doi.org/10.3390/rs17203476 - 18 Oct 2025
Viewed by 433
Abstract
In remote sensing imagery, detecting small objects is challenging due to the limited representational ability of feature maps when resolution changes. This issue is mainly reflected in two aspects: (1) upsampling causes feature shifts, making feature fusion difficult to align; (2) downsampling leads to the loss of details. Although recent advances in object detection have been remarkable, small-object detection remains unresolved. In this paper, we propose Dual Feature-Aware Sampling YOLO (DFAS-YOLO) to address these issues. First, the Soft-Aware Adaptive Fusion (SAAF) module corrects upsampling by applying adaptive weighting and spatial attention, thereby reducing errors caused by feature shifts. Second, the Global Dense Local Aggregation (GDLA) module employs parallel convolution, max pooling, and average pooling with channel aggregation, combining their strengths to preserve details after downsampling. Furthermore, the detection head is redesigned based on object characteristics in remote sensing imagery. Extensive experiments on the VisDrone2019 and HIT-UAV datasets demonstrate that DFAS-YOLO achieves competitive detection accuracy compared with recent models. Full article
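The adaptive weighting in the SAAF module can be illustrated in simplified form: normalise learnable per-branch scalars with a softmax and take a weighted sum of the upsampled and lateral feature maps. This is a stand-in sketch under that assumption; the paper's spatial-attention correction is omitted and the names are hypothetical:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def adaptive_fuse(upsampled, lateral, logits):
    """Weight two feature maps by softmax-normalised scalars before summing --
    a simplified stand-in for SAAF's adaptive weighting (spatial attention omitted)."""
    w = softmax(logits)
    return w[0] * upsampled + w[1] * lateral

f_up = np.ones((1, 4, 4))    # toy upsampled feature map
f_lat = np.zeros((1, 4, 4))  # toy lateral feature map
fused = adaptive_fuse(f_up, f_lat, np.array([0.0, 0.0]))
assert np.allclose(fused, 0.5)  # equal logits -> equal weights
```

Learning the logits lets the network down-weight the upsampled branch wherever its features are shifted relative to the lateral ones.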

15 pages, 3084 KB  
Article
Selective Regulatory Effects of Lactobacillus Plantarum Fermented Milk: Enhancing the Growth of Staphylococcus Epidermidis and Inhibiting Staphylococcus aureus and Escherichia coli
by Yajuan Sun, Ying Wang, Zixia Ren, Shasha Wang, Yun Ding, Nan Liu, Cheng Yang and Bingtian Zhao
Cosmetics 2025, 12(5), 232; https://doi.org/10.3390/cosmetics12050232 - 17 Oct 2025
Viewed by 249
Abstract
To address the limitation of traditional broad-spectrum antimicrobial agents in compromising skin microbiota homeostasis, this study developed Lactobacillus plantarum fermented milk (FM) as an innovative strategy for selectively regulating microbial communities to restore skin microbiota balance. FM was produced through protease hydrolysis in combination with L. plantarum fermentation. Selective antibacterial properties were evaluated via monoculture experiments (Escherichia coli, Staphylococcus aureus, and Staphylococcus epidermidis) and pathogen–commensal co-culture systems. It was found that FM can selectively inhibit pathogens (E. coli and S. aureus) and promote the growth of commensal bacteria (S. epidermidis) in monoculture, and can reduce the growth and competitiveness of E. coli and S. aureus while relatively increasing the colony count of S. epidermidis in the co-culture system. Metabolomic profiling was further performed to identify metabolic alterations induced by FM. It was found that FM can activate the pyruvate metabolic node, significantly enhancing the metabolic fluxes of lactic acid, citric acid, and short-chain fatty acids, which triggered the acid stress response of pathogenic bacteria while consuming a considerable amount of energy, attenuating their reproductive capacity without impacting the growth of commensal bacteria. Overall, FM showed selective antimicrobial activity against pathogens (E. coli and S. aureus) and preservation of commensal S. epidermidis, offering a foundational reference for the development of postbiotics aimed at maintaining cutaneous microbial homeostasis. Full article
(This article belongs to the Special Issue Functional Molecules as Novel Cosmetic Ingredients)

29 pages, 7838 KB  
Article
MSLNet and Perceptual Grouping for Guidewire Segmentation and Localization
by Adrian Barbu
Sensors 2025, 25(20), 6426; https://doi.org/10.3390/s25206426 - 17 Oct 2025
Viewed by 156
Abstract
Fluoroscopy (real-time X-ray) images are used for monitoring minimally invasive coronary angioplasty operations such as stent placement. During these operations, a thin wire called a guidewire is used to guide different tools, such as a stent or a balloon, in order to repair the vessels. However, fluoroscopy images are noisy, and the guidewire is very thin, practically invisible in many places, making its localization very difficult. Guidewire segmentation is the task of finding the guidewire pixels, while guidewire localization is the higher-level task aimed at finding a parameterized curve describing the guidewire points. This paper presents a method for guidewire localization that starts from a guidewire segmentation, from which it extracts a number of initial curves as pixel chains and uses a novel perceptual grouping method to merge these initial curves into a small number of curves. The paper also introduces a novel guidewire segmentation method that uses a residual network (ResNet) as a feature extractor and predicts a coarse segmentation that is refined only in promising locations to a fine segmentation. Experiments on two challenging datasets, one with 871 frames and one with 23,449 frames, show that the method obtains results competitive with existing segmentation methods such as Res-UNet and nnU-Net, while having no skip connections and a faster inference time. Full article
(This article belongs to the Special Issue Advanced Deep Learning for Biomedical Sensing and Imaging)

15 pages, 994 KB  
Article
Physiological Distinctions Between Elite and Non-Elite Fencers: A Comparative Analysis of Endurance, Explosive Power, and Lean Mass Using Sport-Specific Assessments
by Bartosz Hekiert, Adam Prokopczyk, Jamie O’Driscoll and Przemysław Guzik
Life 2025, 15(10), 1622; https://doi.org/10.3390/life15101622 - 17 Oct 2025
Viewed by 222
Abstract
Fencing demands a unique blend of endurance, explosive power, and asymmetric neuromuscular control. This study compared physiological profiles of elite (top 25 nationally ranked, n = 16) and non-elite (positions 26–102, n = 33) Polish male fencers using the Fencing Endurance Test (FET), countermovement jump (CMJ), 5-m sprint, body composition, and heart rate (HR) metrics. FET duration, CMJ-derived explosive power (flight time, reactive strength index), and relative lean mass were also assessed in relation to competitive experience. Quantile regression (age & BMI-adjusted), ROC analysis, and Spearman correlations evaluated group differences. Elite fencers demonstrated superior FET duration (median difference: +1.84 min, p < 0.0001), CMJ performance (e.g., 10.4 W/kg higher peak power, p = 0.014), and relative lean mass (+7.7%, p < 0.001), despite comparable 5-m sprint times. Elite athletes also showed more efficient HR recovery (HRR1) and lower pre-FET resting HR (p < 0.05). Competitive experience correlated strongly with FET endurance (rho = 0.62), CMJ power (rho = 0.42), and lean mass (rho = 0.55). ROC analysis identified FET ≥ 14.3 min, CMJ flight time ≥0.581 s, and ≥10 years of experience as optimal discriminators of elite status (AUCs 0.86–0.90). These findings confirm that elite performance is characterized by superior sport-specific endurance and explosive power, independent of age/BMI. The FET and CMJ emerge as practical tools for monitoring training progress, with identified thresholds serving as benchmarks for elite preparation. Training programs should prioritize individualized development of these traits, acknowledging inter-athlete variability in physiological strengths. Future research should explore sport-specific acceleration metrics and extended FET protocols for elite athletes. Full article
(This article belongs to the Section Physiology and Pathology)
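The ROC analysis above discriminates elite from non-elite athletes; the reported AUC equals the Mann-Whitney probability that a randomly chosen elite score exceeds a randomly chosen non-elite score. A minimal sketch, with illustrative FET durations rather than the study's data:

```python
def roc_auc(pos, neg):
    """AUC as the Mann-Whitney probability that a positive (elite) score
    exceeds a negative (non-elite) one; ties count half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical FET durations in minutes, not the study's measurements:
elite = [15.1, 14.8, 16.0, 13.9]
non_elite = [12.5, 13.0, 14.0, 11.8]
print(roc_auc(elite, non_elite))  # 15 of 16 pairs correctly ordered -> 0.9375
```

An AUC of 0.86–0.90, as reported for the FET and CMJ thresholds, means roughly nine out of ten such elite/non-elite pairs are ranked correctly by the measure.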
