Search Results (146,082)

Search Parameters:
Keywords = accuracy

25 pages, 1477 KB  
Article
A Data-Driven Method for Identifying Similarity in Transmission Sections Considering Energy Storage Regulation Capabilities
by Leibao Wang, Wei Zhao, Junru Gong, Jifeng Liang, Yangzhi Wang and Yifan Su
Electronics 2026, 15(4), 851; https://doi.org/10.3390/electronics15040851 - 17 Feb 2026
Abstract
To address the challenges of real-time control in power systems with high renewable penetration, identifying historical transmission sections similar to future scenarios enables efficient reuse of mature control strategies. However, existing data-driven identification methods exhibit two primary limitations: they typically rely on static Total Transfer Capacity (TTC), ignoring the rapid regulation capability of Energy Storage Systems (ESS) in alleviating congestion; and they employ fixed weights for similarity measurement, failing to distinguish the varying importance of different features (e.g., critical line flows vs. ordinary voltages). To overcome these issues, this paper proposes a similarity identification method for transmission sections considering ESS regulation capabilities and adaptive feature weights. First, a hierarchical decision model is utilized to screen basic grid features. An optimization model incorporating ESS charge/discharge constraints and emergency power support potential is established to calculate the Dynamic TTC, constructing a multi-scale feature set that reflects the real-time safety margin of the grid. Second, a Dispersion-Weighted Fuzzy C-Means (DW-FCM) clustering algorithm is proposed. By introducing a dispersion-weighting mechanism, the algorithm utilizes data distribution characteristics to automatically learn and assign higher weights to key features with high distinguishability during the iteration process, overcoming the subjectivity of manual weighting. Furthermore, fuzzy validity indices (XB, PC, FS) are introduced to adaptively determine the optimal number of clusters. Finally, case studies on the IEEE 39-bus system verify that the proposed method significantly improves identification accuracy compared to traditional methods and provides more reliable references for dispatching decisions. Full article
(This article belongs to the Special Issue Security Defense Technologies for the New-Type Power System)
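The abstract above centers on a Dispersion-Weighted Fuzzy C-Means (DW-FCM) variant in which feature weights are learned from data distribution rather than set manually. As a rough, non-authoritative sketch of that idea, the following Python snippet runs a standard FCM loop with fixed dispersion-derived feature weights; the coefficient of variation is an assumed stand-in for the paper's dispersion measure, and the validity-index-based choice of cluster number (XB, PC, FS) is not reproduced.

```python
import numpy as np

def dispersion_weights(X):
    """Per-feature weights from data dispersion (coefficient of variation is an
    assumed stand-in for the paper's dispersion-weighting mechanism)."""
    cv = X.std(axis=0) / (np.abs(X.mean(axis=0)) + 1e-12)
    return cv / cv.sum()

def weighted_fcm(X, n_clusters, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Fuzzy C-Means whose distance uses fixed dispersion-derived feature weights."""
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    w = dispersion_weights(X)
    U = rng.dirichlet(np.ones(n_clusters), size=n)        # fuzzy memberships (n, c)
    for _ in range(n_iter):
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]          # cluster centers (c, d)
        diff = X[:, None, :] - C[None, :, :]              # (n, c, d)
        D = np.fmax(np.einsum("ncd,d->nc", diff ** 2, w), 1e-12)  # weighted sq. distances
        U_new = 1.0 / ((D[:, :, None] / D[:, None, :]) ** (1.0 / (m - 1))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return U, C, w

# toy demo: two well-separated operating-state clusters in a 4-feature space
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (60, 4)), rng.normal(4.0, 1.0, (60, 4))])
U, C, w = weighted_fcm(X, n_clusters=2)
print(U.argmax(axis=1)[:5], w.round(3))
```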
24 pages, 6631 KB  
Article
Application of Computer Vision to the Automated Extraction of Metadata from Natural History Specimen Labels: A Case Study on Herbarium Specimens
by Jacopo Zacchigna, Weiwei Liu, Felice Andrea Pellegrino, Adriano Peron, Francesco Roma-Marzio, Lorenzo Peruzzi and Stefano Martellos
Plants 2026, 15(4), 637; https://doi.org/10.3390/plants15040637 - 17 Feb 2026
Abstract
Metadata extraction from natural history collection labels is a pivotal task for the online publication of digitized specimens. However, given the scale of these collections—which are estimated to host more than 2 billion specimens worldwide, including ca. 400 million herbarium specimens—manual metadata extraction is an extremely time-consuming task. Thus, automated data extraction from digital images of specimens and their labels is a promising application of state-of-the-art computer vision techniques. Extracting information from herbarium specimen labels normally involves three main steps: text segmentation, multilingual and handwriting recognition, and data parsing. The primary bottleneck in this workflow lies in the limitations of Optical Character Recognition (OCR) systems. This study explores how the general knowledge embedded in multimodal Transformer models can be transferred to the specific task of herbarium specimen label digitization. The final goal is to develop an easy-to-use, end-to-end solution to mitigate the limitations of classic OCR approaches while offering greater flexibility to adapt to different label formats. Donut-base, a pre-trained visual document understanding (VDU) transformer, was the base model selected for fine-tuning. A dataset from the University of Pisa served as a test bed. The initial attempt achieved an accuracy of 85%, measured using the Tree Edit Distance (TED), demonstrating the feasibility of fine-tuning for this task. Cases with low accuracies were also investigated to identify limitations of the approach. In particular, specimens with multiple labels, especially if combining handwritten and typewritten text, proved to be the most challenging. Strategies aimed at addressing these weaknesses are discussed. Full article
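The abstract names Donut-base as the fine-tuned model. A minimal inference sketch with the Hugging Face transformers library is shown below; the fine-tuned checkpoint, the file name, the task token "<s_herbarium>", and the metadata schema are placeholders rather than the authors' artifacts, and a real run would use their fine-tuned weights.

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# the pre-trained base checkpoint named in the abstract; a fine-tuned herbarium
# checkpoint would replace it, and "<s_herbarium>" is a placeholder task token
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base").eval()

image = Image.open("specimen_label.jpg").convert("RGB")   # placeholder file name
pixel_values = processor(image, return_tensors="pt").pixel_values

decoder_input_ids = processor.tokenizer(
    "<s_herbarium>", add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(
        pixel_values,
        decoder_input_ids=decoder_input_ids,
        max_length=512,
        pad_token_id=processor.tokenizer.pad_token_id,
        eos_token_id=processor.tokenizer.eos_token_id,
    )

# after fine-tuning, the decoder emits a token sequence that token2json turns
# into structured metadata (e.g., collector, collection date, locality, taxon)
print(processor.token2json(processor.batch_decode(outputs)[0]))
```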
32 pages, 2137 KB  
Article
Research on Distribution Network Supply Reliability Based on Hierarchical Recursion, Entropy Measurement, and Fuzzy Membership Quantification Strategy
by Jikang Dong and Xianming Sun
Energies 2026, 19(4), 1048; https://doi.org/10.3390/en19041048 - 17 Feb 2026
Abstract
In the field of modern power systems, power supply reliability has become a core indicator for measuring distribution network performance. It serves not only as a fundamental criterion for judging the continuous power supply capacity of distribution networks but also as a key benchmark for evaluating their power quality. Considering the current status of reliability assessment for distribution network power supply, this study conducts an in-depth analysis of a series of key indicators, namely outage duration, outage frequency, the number of affected customers, power supply reliability rate, and the proportion of affected customers. Through a detailed deconstruction of these indicators, an evaluation model for distribution network power supply reliability is established. In the process of model construction, this study innovatively combines the hierarchical recursive weighting method with the entropy measurement weight determination method to accurately define the weights of each evaluation dimension. On this basis, a fuzzy membership quantification strategy is introduced to precisely determine the classification level of distribution networks, and Monte Carlo simulation combined with triangular fuzzy number is used to carry out uncertainty modeling on the reliability score, realizing the transformation from deterministic evaluation to probabilistic evaluation. This strategy is developed to transform qualitative issues into quantitative analysis, effectively clarify the fuzzy and complex interrelationships among multiple influencing factors, and thereby realize a comprehensive evaluation of power supply reliability for distribution networks. To verify the effectiveness and practicality of the proposed method, a distribution network in a specific region is selected as the research object. The aforementioned model and method are applied to assess its power supply reliability, and the precise classification of distribution network levels in this region is successfully realized. This combined model significantly improves the accuracy of evaluation while ensuring the scientific rigor and fairness of the evaluation process. It provides an innovative and practical method for the field of distribution network power supply reliability assessment, and offers substantive reference and support for relevant decision-making and practical operations. Full article
(This article belongs to the Section A1: Smart Grids and Microgrids)
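The abstract combines hierarchical recursive weighting with the entropy measurement weight determination method. The snippet below is only a sketch of the standard entropy weight step on illustrative data; the hierarchical recursion, fuzzy membership quantification, and Monte Carlo step with triangular fuzzy numbers are not reproduced, and the toy indicator values are invented for demonstration.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: indicators whose values vary more across samples
    carry more information and receive larger weights."""
    m, _ = X.shape
    # min-max normalise each indicator column to [0, 1]
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    P = Xn / (Xn.sum(axis=0) + 1e-12)                 # proportion contributed by each sample
    with np.errstate(divide="ignore", invalid="ignore"):
        E = -np.nansum(P * np.log(P), axis=0) / np.log(m)   # entropy per indicator
    d = 1.0 - E                                       # degree of divergence
    return d / d.sum()

# toy data: rows = feeders; columns = outage duration, outage frequency,
# affected customers, supply reliability rate, share of affected customers
X = np.array([[1.2, 3, 420, 99.91, 0.08],
              [0.6, 1, 150, 99.97, 0.03],
              [2.4, 5, 980, 99.80, 0.15]])
print(entropy_weights(X).round(3))
```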
30 pages, 18301 KB  
Article
Optimizing Computer Vision for Edge Deployment in Industry 4.0: A Framework and Experimental Evaluation
by Eman Azab, Mohamed Ehab, Lamia Shihata and Maggie Mashaly
Technologies 2026, 14(2), 126; https://doi.org/10.3390/technologies14020126 - 17 Feb 2026
Abstract
Integrating high-performance computer vision (CV) into Industry 4.0 environments remains a challenge due to the computational disparity between state-of-the-art (SOTA) models and resource-constrained edge hardware. This study proposes a hardware-aware optimization framework designed to bridge this gap, focusing on real-time object detection for high-speed, omnidirectional conveyor systems. Unlike conventional benchmarking, the proposed framework employs a multi-stage optimization pipeline—integrating backbone refinement, hyperparameter tuning, and quantization—to transition diverse architectures from baseline configurations (Mbase) to hardware-optimized variants (Mopt). The framework’s efficacy is validated using a custom-built standalone experimental platform detecting package features, brands, and disruptions on an omnidirectional-wheeled conveyor. A comprehensive comparative analysis is conducted across a heterogeneous edge ecosystem, including the NVIDIA Jetson Nano (GPU), Raspberry Pi 4 (CPU), and Google Coral (TPU). Our findings demonstrate that through systematic tuning, the YOLOv10n variant emerged as the superior architecture, achieving a precision of 98.1% and an mAP50:95 of 81.22%. Post-deployment characterization reveals that the optimized YOLOv10n model on the NVIDIA Jetson Nano achieved a peak inference speed of 25 frames per second (FPS), successfully striking the “Pareto-optimal” balance between predictive accuracy and real-time processing. The primary contributions of this work include a reproducible optimization methodology, a comparative performance map across three distinct hardware backends, and the release of a specialized industrial conveyor dataset. Full article
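The paper's full multi-stage pipeline is not public; as a hedged sketch of the fine-tune-then-quantize step only, the following snippet uses the ultralytics package with a YOLOv10n baseline. The dataset file "conveyor.yaml" and the training hyperparameters are placeholders, not the authors' configuration.

```python
from ultralytics import YOLO

model = YOLO("yolov10n.pt")          # baseline (Mbase) nano detector

# fine-tune on a conveyor dataset (values illustrative only)
model.train(data="conveyor.yaml", epochs=100, imgsz=640, batch=16)

# hardware-oriented exports (Mopt):
model.export(format="engine", int8=True, data="conveyor.yaml")  # TensorRT INT8 for Jetson Nano
model.export(format="tflite", int8=True, data="conveyor.yaml")  # INT8 TFLite, compilable for Coral TPU
model.export(format="onnx")                                     # CPU runtimes such as Raspberry Pi 4
```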
26 pages, 3774 KB  
Article
A Multimodal Dual-Stream Cross-Attention Deep Learning Framework for Diabetic Foot Ulcer Classification
by Mehmet Umut Salur
Appl. Sci. 2026, 16(4), 1993; https://doi.org/10.3390/app16041993 - 17 Feb 2026
Abstract
Finding diabetic foot ulcers (DFUs) early and accurately is essential for improving patients’ quality of life and lowering the risk of amputation. RGB images, commonly used in automated DFU detection, have limitations such as lighting variations, color inconsistencies, and inability to directly reflect physiological information. Background/Objectives: Although thermal images can capture temperature anomalies associated with inflammation and circulatory disorders, they cannot provide consistent performance due to their low spatial resolution and limited availability in clinical datasets. Furthermore, the lack of paired RGB–thermal image pairs makes it difficult to develop effective multimodal deep learning models. Methods: This study proposes a two-stage multimodal deep learning approach to overcome these limitations. In the first stage, an RGB2T-cGAN (RGB to Thermal cGAN) model based on pix2pix was designed to generate synthetic thermal representations from RGB images that resemble clinical patterns, thereby addressing the missing modality problem. In the second stage, the Multimodal Dual-Stream Multi-Head Cross-Attention (MDS-MHCA) classifier model was developed, which processes DFU RGB and generated synthetic thermal images through separate streams, enabling the dynamic modeling of complementary information across modalities. Results: The proposed MDS-MHCA model achieved 99.06% accuracy, 99.09% recall, and 99.06% F1-score on the test set, demonstrating a clear advantage over models based solely on RGB (91.51% accuracy) or thermal (96.23% accuracy) modalities. Furthermore, patient-based 10-fold GroupKFold cross-validation results demonstrate that the model offers high generalization capability across different patient groups, with an average accuracy of 96.49 ± 1.04 and an AUC value of 0.9927 ± 0.0067. Conclusions: The findings reveal that the proposed approach, through the integration of synthetic thermal information and cross-attention-based multimodal fusion, overcomes the fundamental limitations of single-modality-based systems and offers a DFU detection system that is more robust and reliable and holds potential for integration into clinical decision support systems. Full article
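The MDS-MHCA classifier described above processes RGB and synthetic thermal images in separate streams fused by multi-head cross-attention. The PyTorch sketch below illustrates that dual-stream cross-attention pattern in a generic form; the ResNet-18 backbones, token pooling, and dimensions are assumptions for demonstration, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class DualStreamCrossAttention(nn.Module):
    """Two CNN streams (RGB and synthetic thermal) whose token maps attend to
    each other via multi-head cross-attention before classification."""
    def __init__(self, num_classes=2, dim=512, heads=8):
        super().__init__()
        rgb_backbone = models.resnet18(weights=None)
        thermal_backbone = models.resnet18(weights=None)
        self.rgb = nn.Sequential(*list(rgb_backbone.children())[:-2])      # (B, 512, H/32, W/32)
        self.thermal = nn.Sequential(*list(thermal_backbone.children())[:-2])
        self.cross_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_th = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, x_rgb, x_thermal):
        f_r = self.rgb(x_rgb).flatten(2).transpose(1, 2)      # (B, N, 512) tokens
        f_t = self.thermal(x_thermal).flatten(2).transpose(1, 2)
        # each stream queries the other modality's tokens
        a_r, _ = self.cross_rgb(query=f_r, key=f_t, value=f_t)
        a_t, _ = self.cross_th(query=f_t, key=f_r, value=f_r)
        fused = torch.cat([a_r.mean(dim=1), a_t.mean(dim=1)], dim=1)
        return self.head(fused)

logits = DualStreamCrossAttention()(torch.randn(2, 3, 224, 224),
                                    torch.randn(2, 3, 224, 224))
print(logits.shape)   # (2, 2)
```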
11 pages, 787 KB  
Article
Role of Next-Generation Sequencing in Excluding the Nosocomial Origin of a Case of Legionnaires’ Disease Integrating Environmental Surveillance and Clinical Diagnosis
by Francesco Paglione, Cataldo Maria Mannavola, Marilena La Sorda, Maria Luisa Ricci, Maria Scaturro, Silvia Laura Bosello, Roberta Masnata, Francesca Romana Monzo, Sara Vincenti, Patrizia Laurenti, Maurizio Sanguinetti and Flavio De Maio
Microorganisms 2026, 14(2), 486; https://doi.org/10.3390/microorganisms14020486 - 17 Feb 2026
Abstract
Legionella pneumophila (Lp) remains one of the major causes of community- and hospital-acquired pneumonia, yet its diagnosis and source attribution continue to pose significant challenges. Here, we describe the case of an immunocompromised patient who developed Legionnaires’ disease during hospitalization. Following activation of the hospital’s internal surveillance system, Lp and Legionella anisa (L. anisa) were recovered from multiple water distribution points using a simplified culture-based protocol. Whole-genome sequencing (WGS) demonstrated that all environmental isolates belonged to a single clonal strain, whereas the clinical isolate was genetically unrelated, thereby excluding the hospital water system as the source of infection. Although not implicated in the patient’s disease, the detection of both Lp and L. anisa within the plumbing system highlighted underlying structural contamination and the potential masking effect of non-L. pneumophila species during culture-based surveillance. These findings support the integration of conventional microbiological methods with high-resolution genomic tools to enhance surveillance accuracy, support outbreak investigations, and strengthen public health responses. Overall, this case underscores the value of WGS as a decisive tool for source attribution, including the robust exclusion of a suspected nosocomial source, in complex clinical and environmental scenarios. Full article
30 pages, 4364 KB  
Article
Research on an Automatic Solution Method for Plane Frames Based on Computer Vision
by Dejiang Wang and Shuzhe Fan
Sensors 2026, 26(4), 1299; https://doi.org/10.3390/s26041299 - 17 Feb 2026
Abstract
In the internal force analysis of plane frames, traditional mechanics solutions require the cumbersome derivation of equations and complex numerical calculations, a process that is both time-consuming and error-prone. While general-purpose Finite Element Analysis (FEA) software offers rapid and precise calculations, it is limited by tedious modeling pre-processing and a steep learning curve, making it difficult to meet the demand for rapid and intelligent solutions. To address these challenges, this paper proposes a deep learning-based automatic solution method for plane frames, enabling the extraction of structural information from printed plane structural schematics and automatically completing the internal force analysis and visualization. First, images of printed plane frame schematics are captured using a smartphone, followed by image pre-processing steps such as rectification and enhancement. Second, the YOLOv8 algorithm is utilized to detect and recognize the plane frame, obtaining structural information including node coordinates, load parameters, and boundary constraints. Finally, the extracted data is input into a static analysis program based on the Matrix Displacement Method to calculate the internal forces of nodes and elements, and to generate the internal force diagrams of the frame. This workflow was validated using structural mechanics problem sets and the analysis of a double-span portal frame structure. Experimental results demonstrate that the detection accuracy of structural primitives reached 99.1%, and the overall solution accuracy of mechanical problems in the final test set exceeded 90%, providing a more convenient and efficient computational method for the analysis of plane frames. Full article
(This article belongs to the Special Issue Object Detection and Recognition Based on Deep Learning)
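After detection, the abstract feeds the extracted structure into a solver based on the Matrix Displacement Method. The snippet below is a textbook sketch of its core building block, the global stiffness matrix of a single 2D frame element, not the authors' solver; the section properties in the demo call are illustrative.

```python
import numpy as np

def frame_element_stiffness(E, A, I, x1, y1, x2, y2):
    """Global 6x6 stiffness matrix of a plane frame element (3 DOFs per node:
    u, v, theta), the block assembled by the Matrix Displacement Method before
    applying supports and solving K d = F."""
    L = np.hypot(x2 - x1, y2 - y1)
    c, s = (x2 - x1) / L, (y2 - y1) / L
    EA_L, EI_L3, EI_L2, EI_L = E * A / L, E * I / L**3, E * I / L**2, E * I / L
    k_local = np.array([
        [ EA_L,         0,         0, -EA_L,         0,         0],
        [    0,  12*EI_L3,   6*EI_L2,     0, -12*EI_L3,   6*EI_L2],
        [    0,   6*EI_L2,    4*EI_L,     0,  -6*EI_L2,    2*EI_L],
        [-EA_L,         0,         0,  EA_L,         0,         0],
        [    0, -12*EI_L3,  -6*EI_L2,     0,  12*EI_L3,  -6*EI_L2],
        [    0,   6*EI_L2,    2*EI_L,     0,  -6*EI_L2,    4*EI_L],
    ])
    r = np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])
    T = np.block([[r, np.zeros((3, 3))], [np.zeros((3, 3)), r]])
    return T.T @ k_local @ T

# one column of a portal frame: steel, 3 m tall (illustrative section properties)
k = frame_element_stiffness(E=210e9, A=5.38e-3, I=83.6e-6, x1=0, y1=0, x2=0, y2=3)
print(k.shape)   # (6, 6)
```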
20 pages, 580 KB  
Article
A Maturation-Aware Machine Learning Framework for Screening the Nutritional Status of Adolescents
by Hatem Ghouili, Zouhaier Farhani, Narimen Yousfi, Halil İbrahim Ceylan, Amel Dridi, Andrea de Giorgio, Nicola Luigi Bragazzi, Noomen Guelmami, Ismail Dergaa and Anissa Bouassida
Nutrients 2026, 18(4), 660; https://doi.org/10.3390/nu18040660 - 17 Feb 2026
Abstract
Background: Malnutrition in adolescents remains a significant public health issue worldwide, with undernutrition and overweight often coexisting. Accurate nutritional screening during adolescence is complicated by variability in biological maturation and class imbalance, particularly among underweight adolescents. Objective: This study aims to develop and validate machine learning models for classifying the nutritional status of adolescents, accounting for class imbalance and biological maturation, and to evaluate model stability and variable importance at different stages of peak height velocity (PHV). Methods: In this cross-sectional study, 4232 adolescents aged 11 to 18 years were recruited from nine educational institutions in Tunisia. Their nutritional status was classified according to the International Obesity Task Force (IOTF) BMI thresholds into three categories: underweight (14.4%), normal weight (68.3%), and overweight (17.2%). Ten anthropometric, behavioral, and maturation-related predictors were analyzed. Six supervised machine learning algorithms were evaluated using a 70/30 stratified split between training and test sets, with five-fold cross-validation. Class imbalance was addressed by ROSE combined with cost-sensitive learning. Model performance was assessed using accuracy, Cohen’s kappa coefficient, macro F1 score, sensitivity, specificity, and AUC. Results: The cost-sensitive Random Forest (RF) model achieved the best overall performance, with an accuracy of 0.830, a macro F1 score of 0.767, a macro-AUC of 0.921, and a macro-sensitivity of 0.743. The class-specific sensitivities were 0.70 (underweight), 0.91 (normal weight), and 0.62 (overweight), with no major misclassification between the extreme categories. Performance remained stable across the different maturation phases (accuracy from 0.823 to 0.839), with optimal discrimination in the pre-PHV (macro-AUC = 0.936; sensitivity for underweight = 0.82) and post-PHV (macro-AUC = 0.931) periods. Body mass was the main predictor (importance = 1.00), followed by waist circumference (0.34–0.53). The importance of age for classifying underweight increased significantly from the pre-PHV (0.10) to the post-PHV (0.75) period. A two-stage hierarchical model further improved underweight detection (stage 1 AUC = 0.911; sensitivity = 0.732). Conclusions: A cost-sensitive RF model, combined with ROSE, provides robust classification of adolescents’ nutritional status across maturation stages, significantly improving underweight detection while preserving overall accuracy. This approach is particularly well-suited to public health screening in schools as a first-stage assessment that requires clinical confirmation and promotes a maturation-aware interpretation of nutritional risk among adolescents. Full article
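As a hedged sketch of the evaluation setup named in the Methods (70/30 stratified split, five-fold cross-validation, cost-sensitive Random Forest), the scikit-learn snippet below uses synthetic data with the reported class shares. ROSE is an R resampling package, so class weighting alone stands in for the combined ROSE plus cost-sensitive setup, and the feature matrix is invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(4232, 10))                                 # 10 placeholder predictors
y = rng.choice([0, 1, 2], size=4232, p=[0.144, 0.683, 0.173])   # under/normal/overweight shares

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, stratify=y, random_state=42)

clf = RandomForestClassifier(
    n_estimators=500,
    class_weight="balanced",        # cost-sensitive: penalise errors on minority classes
    random_state=42,
)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
print("CV macro F1:", cross_val_score(clf, X_tr, y_tr, cv=cv, scoring="f1_macro").mean())
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```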
19 pages, 1689 KB  
Article
Bio-Adaptive Robot Control: Integrating Biometric Feedback and Gesture-Based Interfaces for Intuitive Human–Robot Interaction (HRI)
by Antonio Di Tecco, Daniele Leonardis, Edoardo Ragusa, Antonio Frisoli and Claudio Loconsole
Robotics 2026, 15(2), 45; https://doi.org/10.3390/robotics15020045 - 17 Feb 2026
Abstract
AI-driven assistance can help the user perform complex teleoperated tasks, introduce autonomous patterns, or adapt the workbench to objects of interest. On the other hand, the level of assistance should be responsive to the user’s response and adapt accordingly to promote a positive and effective experience. Envisaging this final goal, this article investigates whether physiological signals can be used to estimate the user’s performance and response in a teleoperation setup, with and without AI-driven assistance. In more detail, a teleoperated pick-and-place task was performed with or without AI-driven assistance during the grasping phase. A deep-learning algorithm for affordance detection provided assistance, helping participants align the robotic hand with the target object. Physiological and kinematic data were measured and processed by machine learning models to predict the effects of AI assistance on task performance during teleoperation. Results showed that AI-driven assistance, as expected, affected pick-and-place performance. Beyond this, the assistance affected the participant’s fatigue level, which the machine learning models could predict with an average accuracy of 84% based on the physiological response. In addition, the success or failure of the pick-and-place task could be predicted with an average accuracy of 88%. These findings highlight the potential of integrating deep learning with biometric feedback and gesture-based control to create more intuitive and adaptive HRI systems. Full article
(This article belongs to the Special Issue AI for Robotic Exoskeletons and Prostheses, 2nd Edition)
25 pages, 4998 KB  
Article
Pareto-Aware Dual-Preference Optimization for Task-Oriented Dialogue
by Shenghui Bao and Mideth Abisado
Symmetry 2026, 18(2), 372; https://doi.org/10.3390/sym18020372 - 17 Feb 2026
Abstract
Task-oriented dialogue systems face a tension between comprehensive constraint elicitation (task adequacy) and conversational efficiency (minimizing turns). Current preference learning frameworks treat preferences as static, unable to capture the dynamic evolution of interaction states that evolve across dialogue progression. We present Dual-DPO, a framework that embeds multi-objective preferences into data construction via turn-aware scoring. Our approach decouples objective balancing from policy updates through offline preference scalarization, addressing the optimization instability challenges in online multi-objective reinforcement learning. Experiments on MultiWOZ 2.4 demonstrate 28–35% dialogue turn reduction while maintaining Joint Goal Accuracy > 89% (p<0.001). Pareto frontier analysis shows 94% coverage with hypervolume HV=0.847. Independent expert evaluation by 10 PhD-level researchers (n=300 assessments, inter-rater agreement α=0.78) confirms 32% user satisfaction improvement (p<0.001). Theoretical analysis demonstrates that offline scalarization, which correlates with improved optimization stability, achieves 3.2× lower gradient variance than online multi-reward optimization by eliminating sampling stochasticity. Our approach enables balanced treatment of competing objectives through Pareto-optimal trade-offs. These results highlight a symmetric and balanced treatment of competing objectives within a Pareto-optimal optimization framework. Full article
(This article belongs to the Section Computer)
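The central idea above is offline preference scalarization: each candidate response receives a single turn-aware score combining task adequacy and turn efficiency, and pairs ordered by that score are then used for DPO-style training. The sketch below illustrates the scoring and pair construction only; the weights, the turn-aware schedule, and the field names are assumptions, not the paper's values.

```python
def scalarized_score(adequacy, turns, turn_index, max_turns=12,
                     w_adequacy=1.0, w_efficiency=0.5):
    """Scalarize two objectives into one score; efficiency is weighted more
    heavily as the dialogue progresses (turn-aware scoring, assumed form)."""
    efficiency = 1.0 - turns / max_turns
    late_factor = turn_index / max_turns
    return w_adequacy * adequacy + w_efficiency * late_factor * efficiency

def build_preference_pair(cand_a, cand_b, turn_index):
    """cand_* = dict(text=..., adequacy=..., turns=...); returns (chosen, rejected)."""
    s_a = scalarized_score(cand_a["adequacy"], cand_a["turns"], turn_index)
    s_b = scalarized_score(cand_b["adequacy"], cand_b["turns"], turn_index)
    return (cand_a, cand_b) if s_a >= s_b else (cand_b, cand_a)

chosen, rejected = build_preference_pair(
    {"text": "Which area and price range do you prefer?", "adequacy": 0.9, "turns": 6},
    {"text": "Anything else?",                             "adequacy": 0.4, "turns": 9},
    turn_index=5,
)
print(chosen["text"])
```

The resulting (chosen, rejected) pairs would then feed a standard offline preference-optimization trainer, which is the step the paper credits for the lower gradient variance compared with online multi-reward optimization.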
14 pages, 1432 KB  
Article
A Lung Ultrasound Radiomics-Based Machine Learning Model for Diagnosing Acute Heart Failure in the Emergency Department
by Jifei Cai, Nan Tong, Chenchen Hang, Xuan Qi, Lulu Su and Shubin Guo
Diagnostics 2026, 16(4), 598; https://doi.org/10.3390/diagnostics16040598 - 17 Feb 2026
Abstract
Background/Objectives: Acute heart failure (AHF) is a common critical condition in emergency departments, and traditional diagnostic methods have limitations, including high subjectivity and limited accuracy. This study aimed to develop an integrated machine learning model based on lung ultrasound (LUS) radiomics and clinical data for diagnosing AHF in patients presenting with acute dyspnea. Methods: A total of 301 patients were included and randomly split into training (n = 210) and testing (n = 91) sets. Using PyRadiomics 3.0, 107 radiomics features were extracted from standardized 6-zone LUS images, combined with 52 clinical features. Three random forest models were developed: clinical-only, radiomics-only, and integrated models. Results: The integrated model achieved optimal performance on the testing set with an AUC of 0.976 (95% CI: 0.950–0.994), accuracy of 90.1%, sensitivity of 91.1%, and specificity of 89.1%, significantly outperforming the radiomics model (AUC 0.940, p = 0.046) and clinical model (AUC 0.931, p = 0.111). Feature importance analysis revealed that radiomics features contributed 75.6% of the model’s predictive power, with gray level run length matrix (GLRLM) features dominating the top-ranked features. Conclusions: As a proof-of-concept study, this research demonstrates the potential value of multimodal data fusion strategies for AHF diagnosis in the emergency department; however, external validation and prospective studies are required to further confirm its clinical applicability. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
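The abstract states that 107 radiomics features were extracted with PyRadiomics 3.0 from zoned lung-ultrasound images. The snippet below is a hedged sketch of that extraction step; the file names, mask, and 2D setting are placeholders, and the authors' exact extractor configuration is not reproduced.

```python
from radiomics import featureextractor

# force2D because LUS frames are 2D images; this is an assumed setting
extractor = featureextractor.RadiomicsFeatureExtractor(force2D=True)
extractor.enableAllFeatures()        # firstorder, GLCM, GLRLM, GLSZM, GLDM, NGTDM, shape

# one standardized lung-zone image and its region-of-interest mask (placeholders)
features = extractor.execute("lus_zone1.nii.gz", "lus_zone1_mask.nii.gz")

# GLRLM features dominated the reported importance ranking
glrlm = {k: v for k, v in features.items() if "glrlm" in k}
print(len(glrlm), "GLRLM features extracted")
```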
22 pages, 2946 KB  
Article
Tissue IL-6/LIF/LIFR and CXCL9 Expression Correlates with High-Risk NBI Patterns and Squamous Cell Carcinoma in Vocal Fold Lesions
by Magda Barańska, Katarzyna Taran and Wioletta Pietruszewska
Int. J. Mol. Sci. 2026, 27(4), 1923; https://doi.org/10.3390/ijms27041923 - 17 Feb 2026
Abstract
Laryngeal squamous cell carcinoma (SCC) remains a major clinical challenge due to substantial mortality and limited preoperative risk stratification. Narrow-Band Imaging (NBI) enables real-time visualization of mucosal microvasculature, yet the molecular correlates of high-risk NBI phenotypes in vocal fold lesions are incompletely defined. In a prospective cohort of 145 patients with vocal fold lesions, NBI microvascular patterns were graded using the Ni classification and dichotomized using a pre-specified high-risk threshold (Ni ≥ 4 vs. Ni ≤ 3). Histopathology was classified according to WHO 2017. Epithelial expression of IL-6, LIF, LIFR and CXCL9 was quantified by immunohistochemistry using the immunoreactive score (IRS). Associations were tested using non-parametric methods and logistic regression, and diagnostic performance was assessed by ROC analysis. SCC was diagnosed in 63/145 cases. The Ni category showed a strong stepwise association with WHO 2017 histopathological severity. Using Ni ≥ 4, diagnostic performance for SCC was balanced (sensitivity 82.5%, specificity 82.9%; accuracy 82.8%). LIF and LIFR expression decreased with increasing histopathological severity and higher-NBI-risk categories, whereas CXCL9 increased with more suspicious NBI patterns; epithelial IL-6 did not differ across lesion categories. In multivariable logistic regression, Ni ≥ 4 was the strongest independent predictor of SCC (adjusted OR 8.90), while higher LIF (adjusted OR 0.73) and LIFR (adjusted OR 0.78) were independently associated with lower odds of SCC (model AUC 0.943). Multivariable analysis confirmed NBI as the strongest independent predictor of carcinoma, while epithelial LIF and LIFR expression showed inverse associations with histological malignancy and high-risk NBI vascular patterns. LIF/LIFR and CXCL9 show distinct, biologically plausible associations with NBI risk phenotypes, suggesting that selected tissue markers may complement NBI for refined SCC risk stratification. Full article
(This article belongs to the Special Issue Pathogenesis and Treatments of Head and Neck Cancer: 2nd Edition)
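For the multivariable step reported above (logistic regression on the NBI category and IRS marker scores, with adjusted odds ratios and ROC analysis), the scikit-learn sketch below shows the generic pipeline on synthetic placeholder data; it is not the study cohort and the predictor coding is assumed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
n = 145
X = np.column_stack([
    rng.integers(0, 2, n),        # Ni >= 4 (high-risk NBI pattern), binary
    rng.integers(0, 13, n),       # LIF IRS (0-12)
    rng.integers(0, 13, n),       # LIFR IRS (0-12)
    rng.integers(0, 13, n),       # CXCL9 IRS (0-12)
])
y = rng.integers(0, 2, n)         # SCC vs. non-SCC (synthetic labels)

model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)[:, 1]
print("adjusted odds ratios:", np.exp(model.coef_).round(2))
print("AUC:", roc_auc_score(y, prob).round(3))
fpr, tpr, thresholds = roc_curve(y, prob)   # points for the ROC plot
```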
14 pages, 898 KB  
Article
A Cross-Corpus Evaluation on Spontaneous and Dynamic Facial Expressions for Automated Emotion Classification
by Yifan Bian, Hyunwoo Kim and Eva G. Krumhuber
Electronics 2026, 15(4), 849; https://doi.org/10.3390/electronics15040849 - 17 Feb 2026
Abstract
The growing availability of facial expression databases (FEDBs) has accelerated the development of empathic AI systems designed to promote emotional awareness and well-being. However, most existing systems are trained solely on posed (acted), static databases featuring exaggerated and stereotypical displays. Such portrayals may not accurately represent the real-world expressions that are often subtle, heterogeneous, and ambiguous, raising concerns about the performance of these AI systems in inferring human emotions. Furthermore, the lack of cross-database evaluation has limited assessments of how well these systems generalize to diverse facial behaviors. To address these gaps, the present study evaluates five spontaneous and dynamic databases that provide more ecologically valid representations of affective responses observed in everyday life. We assessed the performance of a widely adopted affective computing system, AFFDEX (v1.0; iMotions, Copenhagen, Denmark), to examine how basic emotions are inferred from spontaneous facial movements. Results reveal substantial variability in decoding accuracy across emotion categories, database contexts, and demographic factors. Prototypical and complex expressions were decoded more accurately than subtle or heterogeneous ones, while ambiguous expressions that blend multiple affective signals impaired machine predictions. Together, these findings underscore the crucial need to train and validate affective computing systems using diverse FEDBs that encompass a wider spectrum of behaviors to improve robustness and real-world generalizability. Full article
20 pages, 1435 KB  
Article
A Multi-Modal Expert-Driven ISAC Framework with Hierarchical Federated Learning for 6G Network
by Behzod Mukhiddinov, Di He, Wenxian Yu and Trieu-Kien Truong
Sensors 2026, 26(4), 1298; https://doi.org/10.3390/s26041298 - 17 Feb 2026
Abstract
We propose a novel Expert-Driven Conditional Auxiliary Classifier Generative Adversarial Network (AC-GAN) framework tailored for heterogeneous multi-modal federated learning at edge AI devices such as the NVIDIA Jetson Orin Nano. Unlike prior works that assume idealized distributions or rely on centralized data, our approach jointly addresses statistical non-IID data, model heterogeneity, privacy protection, and resource constraints through an expert-guided training pipeline and hierarchical model updates. Specifically, we introduce a collaborative synthesis and aggregation mechanism where local experts guide conditional data generation, enabling realistic data augmentation on resource-constrained edge nodes and enhancing global model generalization without sharing raw data. Through hierarchical updates between client and server levels, our method mitigates bias from skewed local distributions and significantly reduces communication overhead compared to classical federated averaging baselines. We demonstrate that while “perfect precision” is theoretically unattainable under non-IID and real-world conditions, our framework achieves substantially improved precision and false positive trade-offs (e.g., precision 0.89) relative to benchmarks, validating robustness in practical multi-modal settings. Extensive experiments across synthetic and real datasets show that the proposed AC-GAN approach consistently outperforms federated baselines in accuracy, convergence stability, and privacy preservation. Our results suggest that expert-guided conditional generative modeling is a promising direction for scalable, privacy-aware edge intelligence. Full article
16 pages, 2674 KB  
Article
Research on Multi-Feature Fusion and Lightweight Recognition for Radar Compound Jamming
by Weiyu Zha, Jianyin Cao, Hao Wang and Wenming Yu
Sensors 2026, 26(4), 1296; https://doi.org/10.3390/s26041296 - 17 Feb 2026
Abstract
To recognize radar compound jamming under complex electromagnetic environments, this paper proposes a lightweight multi-feature fusion network for compound jamming recognition. Three complementary time–frequency representations are employed to extract various features of compound jamming, which are processed by a multi-branch architecture for parallel, multi-scale feature learning. Attention mechanisms are incorporated to enhance the discriminative characteristics of jamming, and a weighted fusion strategy is adopted to integrate multi-channel features effectively. Furthermore, an improved lightweight module, GSENet, is introduced to construct the recognition network with low complexity. Experiments on simulated radar jamming datasets demonstrate that the proposed network achieves over 87% recognition accuracy for seven compound jamming types under low jamming-to-noise ratio (JNR) conditions while maintaining a parameter count below 0.14 M. These results indicate that the proposed network provides an effective trade-off between recognition performance and model complexity, making it suitable for electronic counter-countermeasure (ECCM) applications. Full article
(This article belongs to the Section Radar Sensors)
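The network above takes three complementary time-frequency representations as input channels; the abstract does not name the specific trio, so the SciPy sketch below only illustrates building one such channel, an STFT magnitude spectrogram of a simulated pulse plus spot-noise jamming, with illustrative signal parameters.

```python
import numpy as np
from scipy.signal import stft

fs = 20e6                                        # 20 MHz sampling rate (illustrative)
t = np.arange(0, 100e-6, 1 / fs)
rng = np.random.default_rng(0)

lfm = np.exp(1j * np.pi * 5e10 * t**2)           # linear-FM radar pulse, ~5 MHz sweep
noise_jam = (rng.normal(size=t.size) + 1j * rng.normal(size=t.size)) \
            * np.exp(2j * np.pi * 3e6 * t)       # spot-noise jamming centered at 3 MHz
x = lfm + 2.0 * noise_jam                        # compound received signal

f, tau, Z = stft(x, fs=fs, nperseg=256, return_onesided=False)
tf_image = 20 * np.log10(np.abs(Z) + 1e-12)      # dB magnitude map fed to a CNN branch
print(tf_image.shape)
```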