Search Results (3,873)

Search Parameters:
Keywords = subjective classification

31 pages, 2512 KB  
Systematic Review
Optimization of Loss Determination in Claims Settlement Using Smart Industry Tools: A Systematic Review and Implications for the Construction Industry
by Jorge Acevedo-Bastías, Sebastián González Fernández, Luis López-Quijada and Vinicius Minatogawa
Buildings 2026, 16(6), 1175; https://doi.org/10.3390/buildings16061175 - 17 Mar 2026
Abstract
The claims resolution process is a cornerstone of the insurance industry, aiming to fairly and accurately determine the economic losses caused by adverse events. Traditionally, adjusters have relied heavily on expert judgment to perform this task. While this approach is essential, it often suffers from subjectivity, inconsistent criteria, and difficulty integrating complex data sources into objective analyses. In this context, Smart Industry tools—such as Artificial Intelligence (AI), Machine Learning (ML), Computer Vision (CV), and the Internet of Things (IoT)—have demonstrated high potential to automate damage detection and assessment; however, their effective integration into loss determination remains uneven across different productive sectors. This study addresses this problem through two objectives. First, we conducted a systematic literature review following PRISMA guidelines to identify which Smart Industry tools are currently used in the insurance sector for loss determination and to analyze their level of maturity in different productive sectors. We searched the Web of Science and Scopus databases, identifying 253 studies, of which 23 met our inclusion criteria. Second, based on the gaps we identified between the construction sector and more advanced industries such as automotive, we propose a methodological framework based on Building Information Modeling (BIM). Our results show that most solutions focus on the detection and technical classification of damage, especially in the automotive sector, while construction lacks methods to convert these technical findings into operational economic estimates. The proposed framework addresses this gap by standardizing technical and economic data from the underwriting stage, enabling more automated, traceable, and objective loss determination for infrastructure claims.

25 pages, 3328 KB  
Article
End-to-End Acoustic Classification of Respiratory Sounds Using Multi-Architecture Deep Neural Networks
by Btissam Bouzammour, Ghita Zaz, Malika Alami Marktani, Abdellah Touhafi, Anas El Ouali and Mohammed Jorio
Technologies 2026, 14(3), 178; https://doi.org/10.3390/technologies14030178 - 16 Mar 2026
Abstract
Respiratory diseases constitute a major global health burden, necessitating accurate and reliable diagnostic support tools. Conventional auscultation, despite its widespread clinical use, remains inherently subjective and susceptible to inter-observer variability. In this study, we propose a unified deep learning framework for the automated classification of respiratory sound recordings into four clinically relevant categories: Normal, Crackles, Wheezes, and Crackles + Wheezes. The experimental evaluation was conducted on a publicly available dataset comprising heterogeneous respiratory recordings collected from both patients with pulmonary pathologies and healthy individuals. All audio signals were subjected to standardized preprocessing procedures to enhance signal consistency and ensure reliable feature extraction across acquisition conditions. To ensure methodological rigor and prevent optimistic bias, a strict subject-independent validation strategy was adopted using 5-fold GroupKFold cross-validation based on patient identifiers. Six deep learning architectures were systematically implemented and comparatively evaluated under a controlled and reproducible training protocol, including convolutional (1D-CNN, Deep-CNN), recurrent hybrid (CNN–LSTM, CNN–BiLSTM), and attention-based (CNN–Attention, CNN–Transformer) models. Performance metrics were reported as mean ± standard deviation across folds. The CNN–Attention architecture achieved the best overall performance, yielding a Balanced Accuracy of 90.1% ± 1.8% and a macro F1-score of 89.7% ± 2.1%, demonstrating stable inter-patient generalization. These findings indicate that attention-enhanced hybrid architectures effectively capture both local spectral structures and long-range temporal dependencies inherent in respiratory signals. 
The proposed framework provides a robust foundation for subject-independent automated lung sound classification and contributes to the development of clinically reliable decision-support systems.
(This article belongs to the Section Assistive Technologies)
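The subject-independent validation protocol described in this abstract can be sketched with scikit-learn's `GroupKFold`, which guarantees that no patient's recordings appear in both the training and validation folds. All data below are synthetic stand-ins; the feature count, class labels, and patient-ID generation are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))             # 100 recordings, 8 features each (illustrative)
y = rng.integers(0, 4, size=100)          # 4 classes: Normal, Crackles, Wheezes, both
patients = rng.integers(0, 20, size=100)  # 20 hypothetical patient identifiers

# 5-fold GroupKFold keyed on patient IDs, as in the paper's protocol.
gkf = GroupKFold(n_splits=5)
folds = list(gkf.split(X, y, groups=patients))

# Verify that no patient ID leaks across a train/validation split.
leaked = any(set(patients[tr]) & set(patients[va]) for tr, va in folds)
```

Grouped splitting is what prevents the optimistic bias the authors mention: random per-recording splits would let a model memorize patient-specific acoustics.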

36 pages, 10741 KB  
Article
Remote Sensing Recognition Framework for Straw Burning Integrating Spatio-Temporal Weights and Semi-Supervised Learning
by Xiangguo Lyu, Hui Chen, Ye Tian, Change Zheng and Guolei Chen
Remote Sens. 2026, 18(6), 903; https://doi.org/10.3390/rs18060903 - 15 Mar 2026
Abstract
Straw burning is a major source of regional air pollution. However, its reliable remote sensing detection faces problems in distinguishing agricultural fires from non-agricultural thermal anomalies, adequately leveraging burning seasonality, and overcoming the scarcity of pixel-level annotations. To comprehensively address these issues, this study proposes an end-to-end framework for straw burning identification that integrates spatio-temporal weighting and semi-supervised learning. The framework introduces a data-driven spatial weight optimization method to automatically learn discriminative weights for diverse land cover types (e.g., farmland, industry), replacing subjective empirical settings. Furthermore, a temporal weighting model, developed using Kernel Density Estimation, dynamically adjusts classification confidence according to historical burning seasonality, enhancing recall during peak seasons while suppressing off-season false positives. Finally, an adapted Dual-Backbone Dynamic Mutual Training (DB-DMT) strategy collaboratively leverages both limited labeled (24.5%) and abundant unlabeled (75.5%) high-resolution imagery, significantly improving model generalization in label-scarce scenarios. Validation across five representative regions of China demonstrated the framework’s superior performance, achieving a semantic segmentation mean Intersection over Union (mIoU) improvement of 3.33% (to 71.92%) and increasing precision in Henan from 95.21% to 97.71%. Crucially, the framework effectively reduced the off-season false positive rate (FPR) from 5.14% to a mere 0.23% in highly industrialized regions like Tianjin. By systematically mitigating both spatial geolocation bias and seasonal phenology confusion, our approach offers a robust and scalable solution for straw burning monitoring and a transferable paradigm for other environmental remote sensing applications.
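The Kernel Density Estimation idea behind the temporal weighting can be sketched as follows: a KDE fitted to historical detection dates (day of year) yields a seasonal weight that is high in the burning season and near zero off-season. The synthetic spring-season dates and the peak-normalization rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
# Hypothetical historical fire detections clustered in a spring burning season.
history_doy = rng.normal(loc=100, scale=10, size=500)  # day-of-year around day 100

kde = gaussian_kde(history_doy)
peak_density = kde([100.0])[0]  # density at the (assumed) seasonal peak

def seasonal_weight(day_of_year):
    """Density normalized to ~1 at the seasonal peak, ~0 off-season."""
    return kde([day_of_year])[0] / peak_density

w_peak = seasonal_weight(100.0)  # in-season: weight near 1
w_off = seasonal_weight(300.0)   # off-season: weight near 0
```

Multiplying a detector's raw confidence by such a weight raises in-season recall and suppresses off-season false positives, which is the behavior the abstract reports.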

33 pages, 7928 KB  
Article
eXCube2: Explainable Brain-Inspired Spiking Neural Network Framework for Emotion Recognition from Audio, Visual and Multimodal Audio–Visual Data
by N. K. Kasabov, A. Yang, Z. Wang, I. Abouhassan, A. Kassabova and T. Lappas
Biomimetics 2026, 11(3), 208; https://doi.org/10.3390/biomimetics11030208 - 14 Mar 2026
Abstract
This paper introduces a biomimetic framework and novel brain-inspired AI (BIAI) models based on spiking neural networks (SNNs) for emotional state recognition from audio (speech), visual (face), and integrated multimodal audio–visual data. The developed framework, named eXCube2, uses a three-dimensional SNN architecture NeuCube that is spatially structured according to a human brain template. The BIAI models developed in eXCube2 are trainable on spatio- and spectro-temporal data using brain-inspired learning rules. Such models are explainable in terms of revealing patterns in data and are adaptable to new data. The eXCube2 models are implemented as software systems and tested on speech and video data of subjects expressing emotional states. The use of a brain template for the SNN structure enables brain-inspired tonotopic and stereo mapping of audio inputs, topographic mapping of visual data, and the combined use of both modalities. This novel approach brings AI-based emotional state recognition closer to human perception and provides better explainability and adaptability than existing AI systems. It also results in higher or competitive accuracy, even though this was not the main goal here. This is demonstrated through experiments on benchmark datasets, achieving classification accuracy above 80% on single-modality data and 88.9% when multimodal audio–visual data are used and a “don’t know” output is introduced. The paper further discusses possible applications of the proposed eXCube2 framework to other audio, visual, and audio–visual data for solving challenging problems, such as recognizing emotional states of people from different origins; brain state diagnosis (e.g., Parkinson’s disease, Alzheimer’s disease, ADHD, dementia); measuring response to treatment over time; evaluating satisfaction responses from online clients; cognitive robotics; human–robot interaction; chatbots; and interactive computer games.
The SNN-based implementation of BIAI also enables the use of neuromorphic chips and platforms, leading to reduced power consumption, smaller device size, higher performance accuracy, and improved adaptability and explainability. This research represents a step toward building brain-inspired AI systems.

23 pages, 2450 KB  
Article
A Lightweight and Explainable AI Framework Toward Automated Infraocclusion Detection in Pediatric Panoramic Radiographs
by Zeliha Hatipoglu Palaz, Ecem Elif Cege, Bamoye Maiga, Yaser Dalveren, Gonca Gokce Menekse Dalveren, Ali Kara, Ahmet Soylu and Mohammad Derawi
Diagnostics 2026, 16(6), 866; https://doi.org/10.3390/diagnostics16060866 - 14 Mar 2026
Abstract
Background/Objectives: Infraocclusion in pediatric patients may result in space loss, malocclusion and the need for complex orthodontic treatment if not detected early. Conventional diagnosis may be subject to human error and can be challenging, particularly in pediatric cases. The aim of this study is to design and evaluate a lightweight, two-stage deep learning framework with integrated explainable AI (XAI) techniques for automated infraocclusion detection in pediatric panoramic radiographs. Methods: Annotated panoramic radiographs of pediatric patients aged 7–11 years were used for training and validation. In the first stage, a MobileNet V2 Lite model was used to detect the region of interest (ROI) comprising premolars and molars. In the second stage, a custom CNN classifier was proposed to distinguish between infraocclusion and no infraocclusion. Model performance was evaluated in terms of diagnostic accuracy, computational complexity, and statistical significance. XAI techniques were also incorporated to visualize model attention and enhance interpretability. Results: The detection stage achieved high reliability, with precision, recall, F1-score, and AP50 values of 0.99 and an AP75 of 0.89, indicating accurate ROI localization. The classification stage reached an overall accuracy of 98.78%, with class-specific accuracies of 99.25% for infraocclusion and 98.31% for no infraocclusion cases. The framework also demonstrated computational efficiency, requiring only 1.88 M trainable parameters (7.19 MB), with short training times and low inference latency (0.8 ms for classification and 19 ms for detection). XAI visualizations consistently highlighted clinically relevant regions, such as occlusal margins and interproximal areas, confirming the model’s alignment with radiographic features recognized by clinicians.
Conclusions: The proposed two-stage framework provides an accurate, computationally efficient, and interpretable solution for automated infraocclusion detection in pediatric patients. Its modular design and reduced complexity support practical integration into routine clinical workflows, including resource-constrained environments. These findings indicate that lightweight, explainable AI systems may enhance early infraocclusion detection while maintaining clinical transparency.

20 pages, 3878 KB  
Article
A Hybrid Multimodal Cancer Diagnostic Framework Integrating Deep Learning of Histopathology and Whispering Gallery Mode Optical Sensors
by Shereen Afifi, Amir R. Ali, Nada Haytham Abdelbasset, Youssef Poulis, Yasmin Yousry, Mohamed Zinal, Hatem S. Abdullah, Miral Y. Selim and Mohamed Hamed
Diagnostics 2026, 16(6), 848; https://doi.org/10.3390/diagnostics16060848 - 12 Mar 2026
Abstract
Background/Objectives: Biopsy examination remains the gold standard for cancer diagnosis, relying on histopathological assessment of tissue samples to identify malignant changes. However, manual interpretation of histopathological slides is time-consuming, subjective, and susceptible to inter-observer variability. The digitization of histopathological images enables automated analysis and offers opportunities to support clinicians with more consistent and objective diagnostic tools. This study aims to enhance cancer diagnosis by proposing a hybrid framework that integrates deep-learning-based histopathological image analysis with Whispering Gallery Mode (WGM) optical sensing for complementary tissue characterization. Methods: The proposed framework combines automated tumor classification from histopathological images with biochemical signal analysis obtained from WGM optical sensors. Deep learning models, including EfficientNet-B0, InceptionV3, and Vision Transformer (ViT), were employed for binary and multi-class tumor classification using the BreakHis dataset. To address class imbalance, a Deep Convolutional Generative Adversarial Network (DCGAN) was utilized to generate synthetic histopathological images alongside conventional data augmentation techniques. In parallel, WGM optical sensors were incorporated to capture subtle tissue-specific signatures, with machine learning algorithms enabling automated feature extraction and classification of the acquired signals. Results: In multi-class classification, InceptionV3 combined with DCGAN-based augmentation achieved an accuracy of 94.45%, while binary classification reached 96.49%. Fine-tuned Vision Transformer models achieved a higher classification accuracy of 98% on the BreakHis dataset. The integration of WGM optical sensing provided additional biochemical information, offering complementary insights to image-based analysis and supporting more robust diagnostic decision-making. 
Conclusions: The proposed hybrid framework demonstrates the potential of combining deep-learning-based histopathological image analysis with WGM optical sensing to improve the accuracy and reliability of cancer classification. By integrating morphological and biochemical information, the framework offers a promising approach for enhanced, objective, and supportive cancer diagnostic systems.

26 pages, 2632 KB  
Article
Automated Malaria Ring Form Classification in Blood Smear Images Using Ensemble Parallel Neural Networks
by Pongphan Pongpanitanont, Naparat Suttidate, Manit Nuinoon, Natthida Khampeeramao, Sakhone Laymanivong and Penchom Janwan
J. Imaging 2026, 12(3), 127; https://doi.org/10.3390/jimaging12030127 - 12 Mar 2026
Abstract
Manual microscopy for malaria diagnosis is labor-intensive and prone to inter-observer variability. This study presents an automated binary classification approach for detecting malaria ring-form infections in thin blood smear single-cell images using a parallel neural network framework. Utilizing a balanced Kaggle dataset of 27,558 erythrocyte crops, images were standardized to 128 × 128 pixels and subjected to on-the-fly augmentation. The proposed architecture employs a dual-branch fusion strategy, integrating a convolutional neural network for local morphological feature extraction with a multi-head self-attention branch to capture global spatial relationships. Performance was rigorously evaluated using 10-fold stratified cross-validation and an independent 10% hold-out test set. Results demonstrated high-level discrimination, with all models achieving an ROC–AUC of approximately 0.99. The primary model (Model#1) attained a peak mean accuracy of 0.9567 during cross-validation and 0.97 accuracy (macro F1-score: 0.97) on the independent test set. In contrast, increasing architectural complexity in Model#3 led to a performance decline (0.95 accuracy) due to higher false-positive rates. These findings suggest that moderate-capacity feature fusion, combining convolutional descriptors with attention-based aggregation, provides a robust and generalizable solution for automated malaria screening without the risks associated with over-parameterization. Despite strong performance, immediate clinical use remains limited because the model was developed on pre-segmented single-cell images, and external validation is still required before routine implementation.
(This article belongs to the Section AI in Imaging)

28 pages, 5635 KB  
Article
Interpretable Multimodal Framework for Human-Centered Street Assessment: Integrating Visual-Language Models for Perceptual Urban Diagnostics
by Kaiqing Yuan, Haotian Lan, Yao Gao and Kun Wang
Land 2026, 15(3), 449; https://doi.org/10.3390/land15030449 - 12 Mar 2026
Abstract
While objective street metrics derived from imagery or GIS have become standard in urban analytics, they remain insufficient to capture subjective perceptions essential to inclusive urban design. This study introduces a novel Multimodal Street Evaluation Framework (MSEF) that fuses a vision transformer (VisualGLM-6B) with a large language model (GPT-4), enabling interpretable dual-output assessment of streetscapes. Leveraging over 15,000 annotated street-view images from Harbin, China, we fine-tune the framework using Low-Rank Adaptation (LoRA) and P-Tuning v2 for parameter-efficient adaptation. The model achieves an F1 score of 0.863 on objective features and 89.3% agreement with aggregated resident perceptions, validated across stratified socioeconomic geographies. Beyond classification accuracy, MSEF captures context-dependent contradictions: for instance, informal commerce boosts perceived vibrancy while simultaneously reducing pedestrian comfort. It also identifies nonlinear and semantically contingent patterns—such as the divergent perceptual effects of architectural transparency across residential and commercial zones—revealing the limits of universal spatial heuristics. By generating natural-language rationales grounded in attention mechanisms, the framework bridges sensory data with socio-affective inference, enabling transparent diagnostics aligned with Sustainable Development Goal 11 (SDG 11). This work offers both methodological innovation in urban perception modeling and practical utility for planning systems seeking to reconcile infrastructural precision with lived experience.
(This article belongs to the Special Issue Big Data-Driven Urban Spatial Perception)

17 pages, 354 KB  
Article
Multicenter Analytical Performance Evaluation of the BD Phoenix NMIC-461 Panel for Carbapenemase Classification and Antimicrobial Susceptibility Testing of Enterobacterales, Pseudomonas aeruginosa, and Acinetobacter spp.
by Jingjia Zhang, Liying Sun, Ge Zhang, Wei Kang, Tong Wang, Jin Li, Haotian Gao, Qiwen Yang, Kuixia Sun, Qian Wang and Hongli Sun
Antibiotics 2026, 15(3), 286; https://doi.org/10.3390/antibiotics15030286 - 12 Mar 2026
Abstract
Objectives: To evaluate the capability of the BD Phoenix NMIC-461 panel in the detection and classification of carbapenemase production and antimicrobial susceptibility testing of 10 antimicrobial agents among Enterobacterales, Pseudomonas aeruginosa, and Acinetobacter spp. Methods: A total of 714 non-repetitive clinical isolates from three tertiary hospitals in China were enrolled. Carbapenemase production was confirmed by the modified carbapenem inactivation method (mCIM), while carbapenemase typing was validated by polymerase chain reaction (PCR) and Sanger sequencing. Antimicrobial susceptibility testing (AST) for ten antimicrobial agents was performed using broth microdilution (BMD) as the reference method. Results: The sensitivity and specificity of carbapenemase detection were 98.8% (95% CI, 96.6–99.6) and 92.4% (95% CI, 89.5–94.6), respectively, compared to sequencing. Classification accuracy was compromised by carbapenemase-positive unclassified strains, particularly reducing sensitivity for Enterobacterales. Excluding unclassified strains, the sensitivity and specificity were: for class A, 100% (95% CI, 94.0–100) and 97.3% (95% CI, 95.6–98.4); for class B, 97.1% (95% CI, 89.7–99.2) and 97.6% (95% CI, 96.0–98.6); and for class D, 94.0% (95% CI, 87.9–97.3) and 99.1% (95% CI, 97.8–99.7). The panel was subject to limitations for carbapenemase detection when applied to Pseudomonas aeruginosa. The NMIC-461 panel demonstrated excellent performance for ten BMD-evaluated agents across four bacterial categories, with essential agreement (EA) exceeding 95% and category agreement (CA) exceeding 90% except for levofloxacin, and major error (ME) and very major error (VME) rates below 3% and 1.5%, respectively. Conclusions: The BD Phoenix NMIC-461 panel provides reliable AST results for commonly encountered Gram-negative bacterial isolates.
Regarding carbapenemase detection, the panel demonstrates high sensitivity but only moderate specificity in classifying carbapenemase-producing organisms (CPO), with a relatively high proportion of positive unclassified isolates among Enterobacterales and low specificity for P. aeruginosa. Overall, the implementation of NMIC-461 testing holds promise for significantly reducing turnaround time in both carbapenemase detection and classification.

14 pages, 4793 KB  
Article
Scale-Free Neurodynamics as Functional Fingerprint of Brain Regions
by Karolina Armonaite, Franca Tecchio, Baingio Pinna, Camillo Porcaro and Livio Conti
Bioengineering 2026, 13(3), 323; https://doi.org/10.3390/bioengineering13030323 - 11 Mar 2026
Abstract
This study investigates the ongoing electrical activity of local neural networks—referred to as neurodynamics—across 37 anatomically defined brain regions. We analyzed stereotactic intracranial EEG (sEEG) recordings from 106 subjects during wakeful rest, focusing on scale-free (power-law) properties to determine whether distinct brain regions exhibit unique neurodynamic signatures. Results revealed a power-law regime in two frequency ranges (approximately 0.5–4 Hz and 33–80 Hz). Notably, the power-law exponent (slope) in the high-frequency band differed significantly between cortical and subcortical areas (p < 0.01). These findings suggest that local neurodynamics, as reflected in scale-free characteristics, may serve as a functional “fingerprint” for brain region classification. This approach may contribute to functional brain parcellation efforts and offer new insights into the intrinsic organization of neuronal networks as revealed by resting-state activity analysis.
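The power-law exponent this abstract refers to is conventionally estimated as the slope of a straight-line fit to log-power versus log-frequency within the band of interest. A minimal sketch, using a synthetic 1/f² spectrum with a known exponent of -2 (the sampling rate, signal length, and 33–80 Hz band edges here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 500.0                                    # assumed sampling rate (Hz)
freqs = np.fft.rfftfreq(10_000, d=1.0 / fs)   # frequency bins of a 10,000-sample record

# Synthetic power spectrum with exponent -2 plus mild multiplicative noise,
# restricted to the high-frequency power-law band mentioned in the abstract.
band = (freqs >= 33) & (freqs <= 80)
power = freqs[band] ** -2.0 * np.exp(rng.normal(0.0, 0.05, band.sum()))

# The scale-free exponent is the slope of the log-log linear fit.
slope, intercept = np.polyfit(np.log(freqs[band]), np.log(power), 1)
```

On real sEEG one would first estimate the spectrum (e.g., by Welch averaging) before fitting; the fitted slope then serves as the per-region "fingerprint" feature.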

10 pages, 2733 KB  
Proceeding Paper
Mild Cognitive Impairment Identification System Based on Physiological Characteristics and Interactive Games
by Ming-An Chung, Zhi-Xuan Zhang, Jun-Hao Zhang, Chia-Chun Hsu, Yi-Ju Yao, Jin-Hong Chou, Ming-Chun Hsieh, Sung-Yun Chai, Shang-Jui Huang, Kai-Xiang Chen, Chia-Wei Lin and Pin-Han Chen
Eng. Proc. 2026, 128(1), 19; https://doi.org/10.3390/engproc2026128019 - 10 Mar 2026
Abstract
As the global aging population increases, the early detection and prevention of Alzheimer’s disease (AD) have become important in public health. To solve the problems of subjectivity and low timeliness of traditional assessment methods, this paper proposes a multimodal dementia prevention system that combines physiological sensing, a gamification interface, and a classification model. The system includes an interactive joystick to measure pulse and blood pressure. A Chinese music game app increases the participation of the elderly and reduces their sense of rejection through gamified interaction. After the physiological data were standardized by Z-score, they were input into three small-sample classifiers (Gaussian Naïve Bayes, Fisher Linear Discriminant Analysis, and Logistic Regression) for the binary classification of AD. The system performance was evaluated using the Leave-One-Out cross-validation method. Experimental results show that Logistic Regression performed best in situations with extremely small samples and class imbalance, achieving an F1-score of 0.700, higher than the other two classifiers. Dynamic features and model fusion technologies still need to be integrated to further enhance the clinical application potential of the system in the early prediction of dementia.
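The evaluation pipeline this abstract describes (Z-score standardization, Logistic Regression, Leave-One-Out cross-validation, F1 scoring) can be sketched directly in scikit-learn. The data below are synthetic stand-ins for the pulse and blood-pressure features; the feature names and sample size are illustrative assumptions only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 30                                   # small-sample regime, as in the study
X = rng.normal(size=(n, 3))              # e.g. pulse, systolic, diastolic (hypothetical)
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)  # synthetic binary labels

# Z-score standardization inside the pipeline so each LOO fold is scaled
# on its own training portion, avoiding leakage into the held-out sample.
model = make_pipeline(StandardScaler(), LogisticRegression())
pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
f1 = f1_score(y, pred)
```

Wrapping the scaler in the pipeline matters at these sample sizes: fitting the Z-score on all 30 subjects before splitting would leak held-out statistics into every fold.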

26 pages, 6684 KB  
Article
AI-Based Automated Visual Condition Assessment of Municipal Road Infrastructure Using High-Resolution 3D Street-Level Imagery
by Elia Ferrari, Jonas Meyer and Stephan Nebiker
Infrastructures 2026, 11(3), 90; https://doi.org/10.3390/infrastructures11030090 - 10 Mar 2026
Abstract
The effective management of municipal road infrastructure requires up-to-date, standardized and reliable condition information to support sustainable maintenance. While visual road-condition assessment methods based on established standards are widely applied to municipal roads, they remain largely manual, time-consuming, costly and subjective. This study presents an end-to-end workflow for the automated visual inspection and condition assessment of municipal road infrastructure using high-resolution, 3D street-level imagery acquired by professional mobile mapping systems. The proposed approach integrates an efficient preprocessing pipeline for precise road-surface extraction with deep learning models trained for the specific task and an advanced postprocessing method for robust results aggregation. For this purpose, a large dataset covering approximately 352 km of municipal roads across eight municipalities was created by combining street-level imagery with expert-annotated road-condition index (RCI) values. Two neural network variants were implemented: a regression model predicting standardized RCI values and a binary classifier distinguishing between roads requiring maintenance and those in good condition. To ensure decision-oriented outputs at the infrastructure-asset level, frame-based predictions are aggregated into homogeneous road segments using outlier detection and change-point analysis along the road axis. The regression model achieved a mean absolute error of 0.48 RCI values at frame level and 0.40 RCI values at road-segment level, outperforming conventional inter-expert variability, while the binary classification model reached an F1-score of 0.85. These findings demonstrate that AI-based visual road-condition assessment using professional mobile mapping data can provide accurate, standardized and scalable condition information for municipal road infrastructure. 
The proposed workflow supports maintenance prioritization and infrastructure management decisions without requiring explicit detection of individual pavement defects, offering a practical pathway toward automated, cost-effective road-condition monitoring.
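The aggregation step described above — turning noisy frame-level RCI predictions into homogeneous road segments via outlier suppression and change-point analysis along the road axis — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the median-filter window, jump threshold, and minimum segment length are assumed parameters.

```python
import numpy as np

def aggregate_segments(frame_rci, win=10, jump_thresh=0.8):
    """Aggregate frame-level RCI predictions into homogeneous segments.

    1. A rolling median filter suppresses single-frame outliers.
    2. Each interior position is scored by the jump between the mean of
       the `win` frames before it and the `win` frames after it.
    3. A change point is a local maximum of that score above `jump_thresh`.
    4. Each resulting segment is summarized by its median RCI.
    """
    x = np.asarray(frame_rci, dtype=float)
    n = len(x)

    # 1) Outlier suppression with a width-5 median filter.
    pad = np.pad(x, 2, mode="edge")
    x = np.array([np.median(pad[i:i + 5]) for i in range(n)])

    # 2) Jump score between adjacent windows along the road axis.
    score = np.zeros(n)
    for i in range(win, n - win):
        score[i] = abs(x[i:i + win].mean() - x[i - win:i].mean())

    # 3) Change points: thresholded local maxima of the jump score.
    boundaries = [0]
    for i in range(win, n - win):
        if (score[i] > jump_thresh
                and score[i] == score[max(0, i - win):i + win].max()
                and i - boundaries[-1] >= win):
            boundaries.append(i)
    boundaries.append(n)

    # 4) Median RCI per homogeneous segment: (start, end, median_rci).
    return [(s, e, float(np.median(x[s:e])))
            for s, e in zip(boundaries[:-1], boundaries[1:])]
```

On a synthetic road with a clear condition change, the function recovers the two homogeneous stretches and their median RCI values.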
(This article belongs to the Section Infrastructures Inspection and Maintenance)

18 pages, 2234 KB  
Article
A Gated Attention-Based Multiple Instance Learning and Test-Time Augmentation Approach for Diagnosing Active Sacroiliitis in Sacroiliac Joint MRI Scans
by Zeynep Keskin, Onur İnan, Ömer Özberk, Reyhan Bilici, Sema Servi, Selma Özlem Çelikdelen and Mehmet Yıldırım
J. Clin. Med. 2026, 15(6), 2101; https://doi.org/10.3390/jcm15062101 - 10 Mar 2026
Abstract
Background and Objective: Axial spondyloarthritis (axSpA) is a group of chronic inflammatory diseases that primarily affect the sacroiliac joints. Early diagnosis is crucial for preventing irreversible structural damage. Magnetic Resonance Imaging (MRI) is the gold standard for detecting early inflammatory changes such as sacroiliitis. However, conventional MRI interpretation is inherently subjective and susceptible to both intra- and inter-observer variability. Therefore, artificial intelligence (AI)-driven diagnostic solutions are increasingly being explored. Among them, the Gated Attention Multiple Instance Learning (MIL) framework holds strong potential in modeling heterogeneous inflammatory distributions, thanks to its slice-level attention mechanism. This study aims to evaluate the diagnostic performance of a deep learning model based on Gated Attention MIL for automated sacroiliitis detection. Furthermore, its results are compared with a baseline deep learning architecture (standard ResNet-18), and its consistency with radiologist annotations is analyzed. Materials and Methods: The dataset included 554 subjects, comprising 276 patients diagnosed with axSpA and 278 healthy controls. All MRI data were derived from axial T2-weighted fat-suppressed (T2_TSE_TRA_FS) sequences. Patient-wise data splitting was employed to construct training, validation, and independent test sets. The proposed model architecture integrates ResNet-18-based feature extraction, a gated attention mechanism for instance-level weighting, and bag-level classification. Additionally, Test-Time Augmentation (TTA) was implemented to enhance robustness during inference. Results: On the independent test set, the model achieved an accuracy of 85.88%, sensitivity of 92.86%, specificity of 79.07%, and an F1-score of 86.67%. Attention heatmaps generated by the MIL module showed strong spatial overlap with bone marrow edema regions annotated by expert radiologists. 
Implementation of TTA led to an approximate 10% improvement in overall classification accuracy. Conclusions: The Gated Attention MIL framework demonstrated high diagnostic performance for sacroiliitis detection, indicating its value as a reliable decision support tool for early axSpA diagnosis. Validation on larger, multi-center datasets is warranted to ensure generalizability and to support clinical integration in routine radiology workflows.
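The gated attention mechanism for instance-level weighting described in this abstract follows the standard gated-attention MIL formulation: each slice embedding is scored through a tanh branch multiplied elementwise by a sigmoid gate, and the softmax-normalized scores weight the slices into a single bag embedding. The sketch below illustrates that pooling step with NumPy; the dimensions and random inputs are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D score vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def gated_attention_pool(H, V, U, w):
    """Gated attention pooling over one bag of slice embeddings.

    H : (n_slices, d)  per-slice features (e.g. from a ResNet-18 backbone)
    V, U : (d, m)      projections for the tanh and sigmoid branches
    w : (m,)           attention scoring vector
    Returns the bag embedding and the per-slice attention weights.
    """
    gate = np.tanh(H @ V) * (1.0 / (1.0 + np.exp(-(H @ U))))  # (n, m)
    a = softmax(gate @ w)                                     # (n,)
    return a @ H, a

# Illustrative usage: a bag of 6 slices with 8-dim features, 4-dim gate.
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 8))
V, U = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
w = rng.normal(size=4)
bag_embedding, attention = gated_attention_pool(H, V, U, w)
```

The attention weights sum to one, so slices carrying inflammatory signal can dominate the bag embedding while uninformative slices are down-weighted — this is what makes the attention heatmaps interpretable against radiologist annotations.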
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)

19 pages, 1065 KB  
Article
Entropy-Based Dual-Teacher Distillation for Efficient Motor Imagery EEG Classification
by Zefeng Xu and Zhuliang Yu
Entropy 2026, 28(3), 310; https://doi.org/10.3390/e28030310 - 10 Mar 2026
Abstract
Motor imagery (MI) EEG classification is a key component of noninvasive brain–computer interfaces (BCIs) and often must satisfy strict latency constraints in online or edge deployments. Although ensembling can reliably improve MI decoding accuracy, its inference cost grows linearly with the number of ensemble members, making it impractical for low-latency applications. To address these issues, we propose an entropy-based dual-teacher distillation framework that transfers ensemble teacher knowledge to a single deployable backbone. From an information theoretic perspective, two failure modes are common in small and noisy MI datasets: elevated predictive entropy (noisy decisions) and large fluctuation across late training epochs (unstable convergence and unreliable checkpoint selection). Thus, we introduce an exponential moving average (EMA) teacher with entropy-gated activation as a low-pass filter in parameter space to reduce the student’s prediction noise. In addition, a two-stage cosine annealing schedule is employed to suppress late-stage oscillations and improve the robustness of final checkpoint selection. Experiments on two public MI benchmarks (BCI Competition IV-2a and IV-2b) with three representative backbones (EEGNet, ShallowConvNet, and ATCNet) under the subject dependent protocol show consistent accuracy gains over the ensemble teacher and strong distillation baselines. On IV-2a, our method achieves an average accuracy of 0.7713 across the backbones, surpassing both the original models (0.7222) and the corresponding ensembles (0.7482); on IV-2b, it achieves 0.8583 versus 0.8432 (original) and 0.8529 (ensemble).
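The core mechanism named in the abstract — an EMA teacher whose update is gated by the student's predictive entropy, acting as a low-pass filter in parameter space — can be sketched as follows. This is a minimal illustration of the idea under assumed hyperparameters (momentum, entropy gate); it omits the paper's dual-teacher structure and two-stage cosine schedule.

```python
import numpy as np

def predictive_entropy(p):
    # Shannon entropy (nats) of one softmax output vector.
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

class EntropyGatedEMATeacher:
    """EMA copy of the student that only absorbs an update when the
    student's prediction entropy is below a gate, so noisy (high-entropy)
    steps do not perturb the teacher's parameters."""

    def __init__(self, params, momentum=0.99, entropy_gate=0.5):
        self.params = {k: v.copy() for k, v in params.items()}
        self.m = momentum
        self.gate = entropy_gate

    def update(self, student_params, student_probs):
        if predictive_entropy(student_probs) >= self.gate:
            return False  # noisy decision: keep the teacher frozen
        for k, v in student_params.items():
            self.params[k] = self.m * self.params[k] + (1 - self.m) * v
        return True
```

A confident prediction (low entropy) moves the teacher a small step toward the student; a near-uniform prediction leaves it untouched, which is exactly the low-pass filtering behavior the abstract describes.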
(This article belongs to the Special Issue Entropy Analysis of Electrophysiological Signals)

11 pages, 1102 KB  
Article
Characteristics of Recurrent Hepatocellular Carcinoma Based on Serum AFP, PIVKA-II, and Genetic Mutations
by In Soo Cho, Keun Soo Ahn, Sangkyun Jeong, Tae-Seok Kim, Min Jae Kim, Seung Kyoung Yang, Sunwha Cho and Yong Hoon Kim
Medicina 2026, 62(3), 508; https://doi.org/10.3390/medicina62030508 - 10 Mar 2026
Abstract
Background and Objectives: Reliable tools for evaluating tumor biology and forecasting clinical outcomes in recurrent hepatocellular carcinoma (HCC) remain scarce, and molecular characterization through genetic profiling is equally limited in this setting. This investigation explores whether serum tumor marker expression patterns correlate with genomic mutation profiles, and whether such correlations may facilitate more accurate prediction of tumor biology and patient prognosis in recurrent HCC. Materials and Methods: We analyzed a cohort of 20 patients who underwent curative-intent resection for both primary and recurrent HCC. Tumor specimens collected at the time of each operation were subjected to targeted next-generation sequencing for mutation profiling. Based on pre-operative serum levels of AFP (alpha-fetoprotein) and PIVKA-II (Protein Induced by Vitamin K Absence or Antagonist-II) measured before each surgery, patients were stratified into four biomarker subgroups. Those who maintained the same biomarker subgroup at both operations were designated the ‘serum concordant group’, whereas those who transitioned between subgroups were classified as the ‘serum discordant group’. Clinical characteristics and mutation data were subsequently compared between these two classifications. Results: The interval from primary surgery to disease recurrence was significantly shorter in the serum concordant group relative to the serum discordant group (mean 11.16 ± 1.86 vs. 44.8 ± 9.45 months, p < 0.001). Additionally, disease-free survival following reoperation was significantly inferior in the concordant group compared with the discordant group (p = 0.039). Regarding mutational patterns, the concordant group demonstrated shared gene mutations between primary and recurrent lesions, while the discordant group exhibited divergent mutational landscapes across both timepoints. 
Conclusions: The concordance or discordance of serum tumor marker profiles between primary and recurrent HCC lesions may serve as a clinically accessible surrogate for underlying tumor biology and prognostic stratification. These results are preliminary and hypothesis-generating. Further studies in larger, independent cohorts are warranted to confirm the observed associations.
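The stratification logic described in the methods — assigning each operation to one of four AFP/PIVKA-II subgroups and labeling patients concordant or discordant across the two operations — reduces to a small amount of code. The cutoffs below are hypothetical placeholders (common clinical thresholds); the abstract does not state the cutoffs the authors used.

```python
# Hypothetical cutoffs for illustration only; not taken from the paper.
AFP_CUTOFF = 20.0     # ng/mL, assumed
PIVKA_CUTOFF = 40.0   # mAU/mL, assumed

def biomarker_subgroup(afp, pivka):
    """Assign one of the four AFP / PIVKA-II biomarker subgroups."""
    return ("AFP+" if afp >= AFP_CUTOFF else "AFP-",
            "PIVKA+" if pivka >= PIVKA_CUTOFF else "PIVKA-")

def concordance(primary, recurrent):
    """'concordant' if the (AFP, PIVKA-II) subgroup is unchanged
    between the primary and recurrent operations, else 'discordant'.

    primary, recurrent: (afp, pivka) pairs measured pre-operatively.
    """
    same = biomarker_subgroup(*primary) == biomarker_subgroup(*recurrent)
    return "concordant" if same else "discordant"
```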
(This article belongs to the Section Gastroenterology & Hepatology)
