Search Results (3,766)

Search Parameters:
Keywords = automated classification

23 pages, 2368 KB  
Article
MitoGEx: An Integrated Platform for Streamlined Human Mitochondrial Genome Analysis
by Kongpop Jeenkeawpiam, Pemikar Srifa, Natakorn Nokchan, Natthapon Khongcharoen, Anas Binkasem and Surasak Sangkhathat
Genes 2026, 17(3), 338; https://doi.org/10.3390/genes17030338 - 18 Mar 2026
Abstract
Background/Objectives: Mitochondrial DNA (mtDNA) is an important resource for understanding human ancestry, population diversity, and the molecular mechanisms of mitochondrial diseases. However, thorough mtDNA analysis often requires advanced bioinformatics skills and command-line knowledge. To address this challenge, we created Mitochondrial Genome Explorer (MitoGEx), a user-friendly computational pipeline optimized for human mtDNA analysis that combines multiple analysis modules within a single graphical user interface. Methods: The platform simplifies key analytical steps such as quality control, sequence alignment, alignment quality assessment, variant detection, haplogroup classification, and phylogenetic reconstruction. Users can choose between Quick and Advanced modes, which offer default settings or customizable options depending on their analysis needs. To demonstrate its effectiveness, we analyzed 15 whole-exome sequencing (WES) samples from Songklanagarind Hospital using MitoGEx. Results: The sequencing data were of high quality, with over 92 percent of bases exceeding the Phred quality threshold and consistent GC content across all samples. Variant detection using the GATK mitochondrial pipeline and annotation with ANNOVAR and the MitImpact database revealed multiple high-confidence variants. Haplogroup classification with Haplogrep 3 and phylogenetic analysis with IQ-TREE 2 confirmed diverse maternal lineages within the cohort. Conclusions: Taken together, MitoGEx facilitates mitochondrial genome analysis in a reproducible and accessible manner for both research and clinical bioinformatics applications. Its analytical results are concordant with those obtained using standalone bioinformatic tools, demonstrating analytical correctness. By integrating all analysis steps into a single automated workflow, MitoGEx reduces execution time and limits the human error inherent to manual, multi-step pipelines.
(This article belongs to the Special Issue Molecular Basis in Rare Genetic Disorders)

24 pages, 2081 KB  
Article
Research on Large Language Model-Based Bibliographic Cataloging Agent in the CNMARC Context
by Zhuoxi Tan, Xin Yang, Qinyu Chen and Tao Chen
Publications 2026, 14(1), 19; https://doi.org/10.3390/publications14010019 - 18 Mar 2026
Abstract
To address the efficiency and cost limitations of traditional manual cataloging, this study proposes a large language model-driven automated cataloging workflow in which the Metadata Extraction Agent (MEA), Description Cataloging Agent (DCA), Subject Analysis & Indexing Agent (SAIA), and Quality Control Agent (QCA) collaborate to perform cataloging tasks. Experiments are conducted on a dataset of over 33,000 CNMARC bibliographic records from a university library, together with data from the Chinese Library Classification (5th edition). The agent-based workflow employs large language models directly, without additional enhancement techniques, thereby providing a useful experimental benchmark for evaluating future AI-assisted cataloging systems. The results show that the framework performs well in metadata recognition, bibliographic description, and macro-level classification tasks, and can generate standardized records with reasonable stability. However, limitations remain in fine-grained semantic indexing and the interpretation of complex contexts. In light of these capability limitations, the study argues that fully automated end-to-end cataloging relying solely on generative AI is not yet feasible. Future improvements should integrate techniques such as retrieval-augmented generation, supervised fine-tuning, and structured reasoning prompts, while establishing traceability mechanisms to enhance the reliability of intelligent cataloging.
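The four-agent hand-off described in the abstract can be pictured as a sequential pipeline in which each agent enriches a shared record. The sketch below is a toy illustration only: the agent names (MEA, DCA, SAIA, QCA) come from the paper, but the record fields, rule logic, and LLM-free stubs are hypothetical placeholders, not the authors' prompts or schema.

```python
def mea(raw_text):
    """Metadata Extraction Agent: pull title/author out of raw text."""
    title, _, author = raw_text.partition("|")
    return {"title": title.strip(), "author": author.strip()}

def dca(record):
    """Description Cataloging Agent: map metadata to a MARC-like field."""
    record["200"] = f"$a{record['title']}$f{record['author']}"
    return record

def saia(record):
    """Subject Analysis & Indexing Agent: assign a (toy) class number."""
    record["690"] = "TP3" if "data" in record["title"].lower() else "Z0"
    return record

def qca(record):
    """Quality Control Agent: verify mandatory fields are present."""
    record["ok"] = all(k in record for k in ("200", "690"))
    return record

def catalog(raw_text):
    # Agents run sequentially, each enriching the shared record.
    record = mea(raw_text)
    for agent in (dca, saia, qca):
        record = agent(record)
    return record
```

In the paper each stage would call an LLM; the point here is only the chained, quality-checked hand-off.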
(This article belongs to the Special Issue Overview on Today’s AI Tools for Authors)

13 pages, 1912 KB  
Article
Accelerating Evidence Synthesis: A BERT-Assisted Workflow for Meta-Analyses of Radiotherapy Complications in Nasopharyngeal Carcinoma
by Tsair-Fwu Lee, Wen-Ping Yun, Hung-Wei Hsu, Jyun-Jie Wu, Ya-Shin Kuan, Yi-Lun Liao, Cheng-Shie Wuu, Liyun Chang, Yang-Wei Hsieh and Pei-Ju Chao
Reports 2026, 9(1), 90; https://doi.org/10.3390/reports9010090 - 18 Mar 2026
Abstract
Background/Objectives: This study developed and evaluated a BERT-assisted literature screening workflow to support meta-analyses of postradiotherapy complications in nasopharyngeal carcinoma patients. The aim was to automate key screening steps to improve screening efficiency and consistency while minimizing the time and bias of manual review. Materials and Methods: A bidirectional encoder representations from transformers (BERT) model was integrated into a standard systematic review pipeline for studies on postradiotherapy complications in nasopharyngeal carcinoma. The workflow combined automated BERT-based classification with manual verification and followed PRISMA and PICOS guidelines for literature identification, screening, and eligibility assessment. Model training involved hyperparameter tuning and comparison of different optimizers to maximize screening performance against a manually curated reference set, with particular attention to discrimination (AUC) and processing time. Results: From an initial corpus of 6496 records, the combined automated and manual workflow identified 23 eligible studies for meta-analysis. The included studies showed substantial heterogeneity (I² = 86.85%), supporting the use of a random-effects model to pool outcomes. The BERT model optimized with the Adagrad optimizer achieved an AUC of 0.77 for relevant-study classification and reduced screening time to 1142 s. As a downstream application, a meta-analysis of the 23 included studies was conducted, in which a random forest model evaluated across those studies achieved an AUC of 0.92 under a fixed-effect analysis for predicting postradiotherapy complications.
Conclusions: Integrating BERT into the literature screening phase of meta-analysis for postradiotherapy nasopharyngeal carcinoma complications markedly improved screening efficiency while maintaining acceptable classification performance. This workflow demonstrates the feasibility of transformer-based assistance for systematic reviews and provides a foundation for developing disease-specific, AI-augmented evidence synthesis pipelines in oncology.
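The substantial heterogeneity reported above (I² = 86.85%) is what motivates random-effects pooling. The standard DerSimonian-Laird estimator can be sketched in a few lines; this is the generic textbook formulation, not the authors' exact statistical pipeline:

```python
def dersimonian_laird(effects, variances):
    """Pool study effects under a random-effects model.

    Returns (pooled_effect, tau2, I2_percent), where
    Q = sum w_i (y_i - y_fixed)^2 and
    tau^2 = max(0, (Q - df) / (S1 - S2 / S1)).
    """
    w = [1.0 / v for v in variances]
    s1 = sum(w)
    y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / s1
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    s2 = sum(wi ** 2 for wi in w)
    tau2 = max(0.0, (q - df) / (s1 - s2 / s1))
    i2 = max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0
    # Re-weight by total (within + between) variance and pool.
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2, i2
```

I² near 87% means most observed variability reflects between-study differences rather than sampling error, which is why a fixed-effect pooled estimate would be misleading here.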

12 pages, 710 KB  
Article
FTIR-Based Machine Learning Identification of Virgin and Recycled Polyester for Textile Recycling in Industry 4.0
by Maria Inês Barbosa, Ana Margarida Teixeira, Maria Leonor Sousa, Pedro Ribeiro, Clara Sousa and Pedro Miguel Rodrigues
Processes 2026, 14(6), 964; https://doi.org/10.3390/pr14060964 - 18 Mar 2026
Abstract
Advances in Industry 4.0 manufacturing have accelerated the adoption of machine learning (ML) for automated classification. Polyester (PES), a widely used synthetic fiber, competes with natural fibers such as cotton as well as other synthetics, highlighting the need for continuous research and improvement. In the textile sector, distinguishing recycled polyester (rPES) from virgin polyester (vPES) remains challenging due to overlapping chemical signatures and material variability, and the combination of Fourier transform infrared (FTIR) spectroscopy and ML had not previously been explored for this purpose. In this study, we evaluated ML models to discriminate three PES fiber types (45 vPES, 65 rPES, and 55 mixed PES) using 165 FTIR spectra across four spectral regions (R1, R2, R3, and R4) as well as their combined representation. Six ML approaches were tested on data reduced with fast independent component analysis (FastICA) (1–30 components) using an 80/20 train–test split. The Decision Tree classifier achieved the highest accuracy in four of the five spectral evaluations, with classification accuracies ranging from 66.67% to 77.78% for region R4, which also showed a balanced classification profile with an area under the curve (AUC) of 0.81. Notably, despite the moderate overall accuracy, the model achieved 100% discrimination of rPES from both mixed and vPES fibers. Mixed fibers remained the most difficult to classify, highlighting the need for improved feature representation.
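AUC figures like the 0.81 quoted above can be computed directly from labels and scores via the Mann-Whitney interpretation of the ROC area. A minimal sketch, independent of the authors' toolchain:

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive outscores a
    randomly chosen negative (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Unlike accuracy, this measure is threshold-free, which is why it gives a more balanced picture when class counts differ (here 45/65/55 across the three fiber types, evaluated one-vs-rest).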

26 pages, 977 KB  
Article
KE-MLLM: A Knowledge-Enhanced Multi-Sensor Learning Framework for Explainable Fake Review Detection
by Jiaying Chen, Jingyi Liu, Yiwen Liang and Mengjie Zhou
Appl. Sci. 2026, 16(6), 2909; https://doi.org/10.3390/app16062909 - 18 Mar 2026
Abstract
The proliferation of fake reviews on e-commerce and social platforms has severely undermined consumer trust and market integrity, necessitating robust and interpretable real-time detection mechanisms with multi-sensor data fusion capabilities. While traditional machine learning approaches have shown promise in identifying fraudulent reviews, they often lack transparency and fail to leverage the rich contextual knowledge embedded in large-scale datasets. In this paper, we propose KE-MLLM (Knowledge-Enhanced Multimodal Large Language Model), a unified framework that integrates knowledge-enhanced prompting with parameter-efficient fine-tuning for explainable fake review detection. Our approach employs LoRA (Low-Rank Adaptation) to fine-tune lightweight large language models (LLaMA-3-8B) on review text, while incorporating multimodal behavioral sensor signals, including temporal patterns, user metadata, and social network characteristics, for comprehensive anomaly sensing. To address the critical need for interpretability in fraud detection systems, we implement a Chain-of-Thought (CoT) reasoning module that generates human-understandable explanations for classification decisions, highlighting linguistic anomalies, sentiment inconsistencies, and behavioral red flags. We further enhance the model's discriminative capability through a knowledge distillation strategy that transfers domain-specific expertise from larger teacher models while maintaining computational efficiency suitable for edge sensing devices. Extensive experiments on two benchmark datasets, YelpChi and Amazon Reviews from the DGL Fraud Dataset, show that KE-MLLM achieves strong performance, reaching an F1-score of 94.3% and an AUC-ROC of 96.7% on YelpChi and outperforming the strongest baseline in our comparison by 5.8 and 4.2 percentage points, respectively.
Furthermore, human evaluation indicates that the generated explanations achieve 89.5% consistency with expert annotations, suggesting that the framework can improve the interpretability and practical usefulness of automated fraud detection systems. The proposed framework is a useful step toward more accurate and interpretable fake review detection and offers a practical reference for building more transparent and accountable AI systems in high-stakes applications.
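The LoRA adaptation mentioned above keeps the pretrained weight matrix frozen and trains only a low-rank update, merged at inference as W + (alpha / r) * B @ A. A dependency-free sketch with toy shapes; the ranks and layer choices used in the paper are not specified here:

```python
def matmul(a, b):
    """Plain nested-list matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_merged_weight(w, a, b, alpha):
    """Return W + (alpha / r) * B @ A, the LoRA-adapted weight.

    W is d_out x d_in (frozen); B is d_out x r; A is r x d_in.
    Only A and B, i.e. r * (d_in + d_out) numbers, are trained.
    """
    r = len(a)                       # rank = number of rows of A
    delta = matmul(b, a)             # low-rank update, d_out x d_in
    scale = alpha / r
    return [[w[i][j] + scale * delta[i][j] for j in range(len(w[0]))]
            for i in range(len(w))]
```

The appeal for an 8B-parameter model is that the trainable parameter count scales with r rather than with d_in * d_out, which is what makes the fine-tuning "lightweight".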

29 pages, 5790 KB  
Article
Self-Supervised Reservoir Water Area Detection Across Multi-Source Optical Imagery
by Guiyan Mo, Qing Yang and Xiaofeng Zhou
Remote Sens. 2026, 18(6), 918; https://doi.org/10.3390/rs18060918 - 18 Mar 2026
Abstract
Reservoirs are critical infrastructure for water and energy security, and informed decisions require accurate and timely monitoring of reservoir water extent. Optical remote sensing provides frequent, large-area observations; however, automated water extraction is often complicated by dam operation and surface heterogeneity, which increase spectral variability. Supervised methods, though widely used, generally require manual labels and often perform poorly when transferred across sensors and regions, limiting operational deployment. In this paper, we develop a geo-spectral feature-guided Self-Supervised Water Detection (SWD) framework, an automated algorithm designed for multi-source optical imagery. SWD consists of two stages: pixel-level classification and object-level refinement. First, SWD integrates spatial priors with spectral features to automatically derive high-confidence samples, which are then used to parameterize a Gaussian mixture model representing the multimodal spectral distribution throughout the image. Superpixel-constrained region growing is then applied to refine the shoreline and ensure object-level consistency. We validated SWD across 36 test cases comprising three sensors, six reservoirs, and two hydrological conditions. Compared with Random Forest and U-Net, SWD achieved the best performance. Specifically, (1) in cross-scale tests, SWD achieved high consistency with IoU ≥ 0.774; (2) in cross-region transfers, SWD maintained stable generalization (SD: 0.010); and (3) in hydrological response assessments, SWD captured water-level fluctuations with minimal bias variation (ΔRE < 1%). In addition, the SWD framework is computationally efficient, with processing times of 0.49–1.29 s/Mpx on a standard CPU. This study demonstrates that SWD effectively addresses spectral variability and surface complexity in reservoir water area detection across multi-source optical imagery. It operates without manual labels or model training, enabling automated, large-scale, multi-temporal reservoir water monitoring.
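The pixel-level stage parameterizes a Gaussian mixture from automatically derived high-confidence samples. A minimal 1D two-component EM sketch illustrates the idea; the actual method works on multi-band spectra with prior-guided sample selection, which this toy version omits:

```python
import math

def fit_gmm_1d(xs, iters=50):
    """Fit a two-component 1D Gaussian mixture by plain EM,
    initialized from a crude sorted split of the data."""
    xs = sorted(xs)
    half = len(xs) // 2
    mu = [sum(xs[:half]) / half, sum(xs[half:]) / (len(xs) - half)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, xs)) / nk
            var[k] = max(var[k], 1e-6)  # guard against collapse
    return mu, var, pi
```

In SWD the per-component densities would then score every pixel as water or non-water before the superpixel-constrained refinement.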

24 pages, 2611 KB  
Article
MF-DFA–Enhanced Deep Learning for Robust Sleep Disorder Classification from EEG Signals
by Abdulaziz Alorf
Fractal Fract. 2026, 10(3), 199; https://doi.org/10.3390/fractalfract10030199 - 18 Mar 2026
Abstract
Sleep disorders are prevalent worldwide and lead to severe health issues such as cardiovascular disease and cognitive impairment. Conventional polysomnography-based diagnosis relies on manual EEG analysis by trained specialists, which is time-consuming and subject to inter-rater variability. Although deep learning (DL) models have shown promise for EEG-based sleep classification, they often fail to capture the multiscale temporal dynamics that characterize physiological signals. In this work, a hybrid model combining a CNN with multifractal detrended fluctuation analysis (MF-DFA) was proposed to capture both localized temporal features and long-term fractal dynamics of single-channel EEG recordings. The model was tested on two separate polysomnographic datasets: the CAP Sleep Dataset for five-class sleep disorder classification (Healthy, Insomnia, Narcolepsy, PLM, and RBD) and the ISRUC Sleep Dataset for three-class subject-independent validation. On the CAP dataset, the framework achieved an accuracy of 86.38%. Cross-dataset transfer to the ISRUC Sleep Dataset, where only the classification head was fine-tuned on a small labeled subset while all feature-extraction layers remained frozen from CAP training, achieved 87.50% accuracy, demonstrating that the learned representations generalize across differing recording protocols, sampling rates, and diagnostic label spaces. Ablation experiments confirmed the importance of the MF-DFA features; removing them led to markedly lower classification rates. The findings demonstrate the clinical feasibility of combining fractal analysis with DL to detect sleep disorders in an automated, generalizable manner suitable for large-scale monitoring and resource-constrained clinical environments.
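The MF-DFA features build on ordinary detrended fluctuation analysis. The core computation at a single window size can be sketched as below; MF-DFA additionally varies a moment order q and fits scaling exponents across window sizes, which is omitted here:

```python
def dfa_fluctuation(signal, scale):
    """Detrended fluctuation at one window size (scale >= 2).

    Integrate the mean-centered signal into a profile, split it into
    non-overlapping windows of `scale` samples, remove a least-squares
    linear trend per window, and return the RMS residual.
    """
    mean = sum(signal) / len(signal)
    profile, total = [], 0.0
    for x in signal:
        total += x - mean
        profile.append(total)
    residuals, n_windows = 0.0, len(profile) // scale
    for w in range(n_windows):
        seg = profile[w * scale:(w + 1) * scale]
        ts = range(scale)
        t_mean = (scale - 1) / 2
        s_mean = sum(seg) / scale
        denom = sum((t - t_mean) ** 2 for t in ts)
        slope = sum((t - t_mean) * (s - s_mean)
                    for t, s in zip(ts, seg)) / denom
        for t, s in zip(ts, seg):
            residuals += (s - (s_mean + slope * (t - t_mean))) ** 2
    return (residuals / (n_windows * scale)) ** 0.5
```

Plotting log F(scale) against log scale gives the scaling exponent; the multifractal variant yields a spectrum of such exponents that serves as the fractal feature vector fed to the classifier.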
(This article belongs to the Special Issue Fractals in Physiology and Medicine)

23 pages, 5091 KB  
Article
Multiclass Anomaly Detection in Bridge Health Monitoring Data via Attention Enhancement and Class Imbalance Mitigation
by Wenda Ma, Qizhi Tang, Lei Huang and Shihao Zhang
Buildings 2026, 16(6), 1181; https://doi.org/10.3390/buildings16061181 - 17 Mar 2026
Abstract
Bridge structural health monitoring (BSHM) systems are essential for assessing the operational performance and safety of long-span bridges. However, monitoring data are often affected by factors such as sensor malfunctions, environmental disturbances, or power interruptions, producing various types of anomalous data. Moreover, the multiclass imbalance of the data poses a major challenge to traditional anomaly detection methods. To address this issue, a novel multiclass anomaly detection method based on an improved deep convolutional neural network is proposed. Specifically, a ResNet50 architecture integrated with the convolutional block attention module (CBAM) is developed to enhance the extraction of discriminative features. Additionally, the Focal Loss function is introduced to increase the loss weight of minority samples and reduce the influence of majority classes, thereby effectively overcoming the class imbalance problem in multiclass anomaly detection. The proposed method is trained and validated using measured acceleration data collected from a large-scale cable-stayed bridge. The experimental results indicate that the model achieves an overall accuracy of 98.28% while markedly improving the classification performance on minority categories. The method further reproduces the spatiotemporal distribution of anomalies in a full month of monitoring data, confirming its robustness and engineering applicability for large-scale automated anomaly diagnosis in BSHM systems.
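The Focal Loss used here counters class imbalance by down-weighting well-classified samples. A minimal sketch of the binary form; the paper applies the multiclass variant inside the network's training loop:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=1.0):
    """Focal loss for one binary prediction.

    p: predicted probability of the positive class; y: true label (0/1).
    With gamma = 0 this reduces to ordinary cross-entropy; gamma > 0
    shrinks the loss of easy, confident samples by (1 - p_t)^gamma,
    so minority-class errors dominate the gradient.
    """
    pt = p if y == 1 else 1.0 - p
    return -alpha * (1.0 - pt) ** gamma * math.log(pt)
```

For a correct, confident prediction (p_t = 0.9, gamma = 2) the loss is scaled by 0.01, while a borderline prediction keeps most of its cross-entropy weight, which is exactly the minority-emphasis effect the abstract describes.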

31 pages, 1934 KB  
Review
Artificial Intelligence for Detecting Electoral Disinformation on Social Media: Models, Datasets, and Evaluation
by Félix Díaz, Nhell Cerna, Rafael Liza and Bryan Motta
Information 2026, 17(3), 292; https://doi.org/10.3390/info17030292 - 17 Mar 2026
Abstract
During elections, information manipulation on social media has accelerated the use of artificial intelligence, yet the evidence is difficult to interpret without an integrated view of methods, data, and evaluation. We mapped 557 English-language journal articles from Scopus and Web of Science, combining performance indicators, science mapping, and a focused full-text synthesis of highly cited papers. The literature grows sharply after 2019, peaks in 2025, and shows geographically uneven production, with collaboration structured around a small set of hubs. The thematic structure suggests that, during the pandemic era, infodemic-related research served as a catalyst, intensifying scientific attention to fake news and disinformation and expanding the associated detection and monitoring agendas. In addition, socio-political harm constructs such as hate speech, extremism, and polarization appear as recurrent and structurally central targets, highlighting that election-relevant work often extends beyond veracity assessment toward monitoring discourse risks. Blockchain also emerges as a novel, adjacent integrity theme, aligned with authenticity- and provenance-oriented mitigation rather than mainstream detection pipelines. AI for electoral disinformation is not reducible to veracity classification, as influential studies also target automation and coordinated behavior, verification support, diffusion analysis, and estimation frameworks focused on exposure and impact. Evaluation remains heterogeneous and is often shaped by benchmark settings, making high accuracy values hard to compare and potentially misleading when labeling quality, topic leakage, or context shift is not characterized.
Overall, the findings motivate evaluation protocols that align operational objectives with modeling roles and explicitly address robustness to temporal and platform changes, asymmetric error costs during election windows, and representativeness across electoral contexts and languages, while also guiding future work on emerging integrity challenges and governance-relevant deployment settings.
(This article belongs to the Section Artificial Intelligence)

31 pages, 2512 KB  
Systematic Review
Optimization of Loss Determination in Claims Settlement Using Smart Industry Tools: A Systematic Review and Implications for the Construction Industry
by Jorge Acevedo-Bastías, Sebastián González Fernández, Luis López-Quijada and Vinicius Minatogawa
Buildings 2026, 16(6), 1175; https://doi.org/10.3390/buildings16061175 - 17 Mar 2026
Abstract
The claims resolution process is a cornerstone of the insurance industry, aiming to determine fairly and accurately the economic losses caused by adverse events. Traditionally, adjusters have relied heavily on expert judgment to perform this task. While this expertise is essential, the approach often suffers from subjectivity, inconsistent criteria, and difficulty integrating complex data sources into objective analyses. In this context, Smart Industry tools such as Artificial Intelligence (AI), Machine Learning (ML), Computer Vision (CV), and the Internet of Things (IoT) have demonstrated high potential to automate damage detection and assessment; however, their effective integration into loss determination remains uneven across productive sectors. This study addresses the problem through two objectives. First, we conducted a systematic literature review following PRISMA guidelines to identify which Smart Industry tools are currently used in the insurance sector for loss determination and to analyze their level of maturity in different productive sectors. We searched the Web of Science and Scopus databases, identifying 253 studies, of which 23 met our inclusion criteria. Second, based on the gaps we identified between the construction sector and more advanced industries such as automotive, we propose a methodological framework based on Building Information Modeling (BIM). Our results show that most solutions focus on the detection and technical classification of damage, especially in the automotive sector, while construction lacks methods to convert these technical findings into operational economic estimates. The proposed framework addresses this gap by standardizing technical and economic data from the underwriting stage onward, enabling more automated, traceable, and objective loss determination for infrastructure claims.

15 pages, 2783 KB  
Article
CFSS-YOLO: A Detection Method for Cotton Top Bud in Real Farmland
by Xi Wu, Tingting Zhu, Sheng Xue, Jian Wu, Hongzhen Guo and Chao Ni
Agriculture 2026, 16(6), 672; https://doi.org/10.3390/agriculture16060672 - 16 Mar 2026
Abstract
Accurate identification of the cotton top bud is a prerequisite for automated cotton topping. However, detection accuracy for the cotton top bud is low due to the small target size and its similarity to the background. Therefore, a method named CFSS-YOLO was proposed to detect the cotton top bud, based on YOLOv12 with an attention mechanism. First, a Convolutional Block Attention Module (CBAM) was introduced into the neck structure of YOLOv12 to suppress background interference and improve target recognition accuracy. Second, a new loss function, FSSLoss, was designed in which Shape-IoU (Intersection over Union) optimized by Focaler-IoU was used for the localization loss and Slideloss was integrated to improve the classification loss. The improved loss function aims to balance classification and localization losses and accelerate model convergence. The experimental results show that the precision, recall, and mAP50 of the proposed CFSS-YOLO are 87.6%, 75.3%, and 84.8%, respectively. The detection performance of the proposed method is superior to mainstream object detection models such as YOLOv12s, YOLOv5s, SSD, RT-DETR, and DEIM-R18 in real farmland conditions. These results demonstrate that CFSS-YOLO has high potential for application in the cotton top bud recognition task.
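Shape-IoU and Focaler-IoU are refinements of the plain Intersection-over-Union overlap measure that underlies both the localization loss and the mAP50 metric quoted above. The baseline IoU can be sketched as follows; the paper's variants add shape- and difficulty-aware terms not shown here:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

mAP50 counts a detection as correct when its IoU with a ground-truth box is at least 0.5; the loss-function variants instead use 1 minus a (modified) IoU as the regression target.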
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

26 pages, 12081 KB  
Article
DEPART: Multi-Task Interpretable Depression and Parkinson’s Disease Detection from In-the-Wild Video Data
by Elena Ryumina, Alexandr Axyonov, Mikhail Dolgushin, Dmitry Ryumin and Alexey Karpov
Big Data Cogn. Comput. 2026, 10(3), 89; https://doi.org/10.3390/bdcc10030089 - 16 Mar 2026
Abstract
Automated video-based detection of cognitive disorders can enable scalable, non-invasive health monitoring. However, existing methods focus on a single disease and provide limited interpretability, whereas real-world videos often contain co-occurring conditions. We propose a novel unified multi-task method to detect depression and Parkinson's disease (PD) from in-the-wild video data, called DEPART (DEpression and PArkinson's Recognition Technique). It performs body region extraction, Contrastive Language-Image Pre-training (CLIP)-based visual encoding, Transformer-based temporal modeling, and prototype-aware classification with a gated fusion technique. Gradient-based attention maps are used to visualize the task-specific regions that drive predictions. Experiments on the In-the-Wild Speech Medical (WSM) corpus demonstrate competitive performance: the multi-task model achieves Recall of 82.39% for depression and 78.20% for PD, compared with 87.76% and 78.20% for the best single-task models. Multi-task learning initially increases false positives for healthy persons in the PD subset, mainly due to annotation–modality mismatches, static visual content misinterpreted as motor impairment, and occasional body detection failures. After cleaning the test data, Recall for healthy individuals becomes comparable across models, and the multi-task model improves Recall for both depression (from 82.39% to 87.50%) and PD (from 78.20% to 86.14%), suggesting better robustness for real-life clinical applications.
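The gated fusion step can be illustrated in its generic element-wise form. The exact gating used in DEPART is not specified in the abstract, so the sigmoid-gated convex combination below is an assumption, not the authors' implementation:

```python
import math

def gated_fusion(feat_a, feat_b, gate_logits):
    """Element-wise gated fusion of two feature vectors.

    g = sigmoid(logit); output = g * a + (1 - g) * b.
    The learned gate lets the model lean on whichever feature
    stream is more reliable, per dimension.
    """
    out = []
    for a, b, z in zip(feat_a, feat_b, gate_logits):
        g = 1.0 / (1.0 + math.exp(-z))
        out.append(g * a + (1.0 - g) * b)
    return out
```

In a multi-task setting such gates let the depression and PD heads weight the shared representation differently, which is one way co-occurring conditions can be handled without separate models.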

25 pages, 3328 KB  
Article
End-to-End Acoustic Classification of Respiratory Sounds Using Multi-Architecture Deep Neural Networks
by Btissam Bouzammour, Ghita Zaz, Malika Alami Marktani, Abdellah Touhafi, Anas El Ouali and Mohammed Jorio
Technologies 2026, 14(3), 178; https://doi.org/10.3390/technologies14030178 - 16 Mar 2026
Abstract
Respiratory diseases constitute a major global health burden, necessitating accurate and reliable diagnostic support tools. Conventional auscultation, despite its widespread clinical use, remains inherently subjective and susceptible to inter-observer variability. In this study, we propose a unified deep learning framework for the automated classification of respiratory sound recordings into four clinically relevant categories: Normal, Crackles, Wheezes, and Crackles + Wheezes. The experimental evaluation was conducted on a publicly available dataset comprising heterogeneous respiratory recordings collected from both patients with pulmonary pathologies and healthy individuals. All audio signals were subjected to standardized preprocessing procedures to enhance signal consistency and ensure reliable feature extraction across acquisition conditions. To ensure methodological rigor and prevent optimistic bias, a strict subject-independent validation strategy was adopted using 5-fold GroupKFold cross-validation based on patient identifiers. Six deep learning architectures were systematically implemented and comparatively evaluated under a controlled and reproducible training protocol, including convolutional (1D-CNN, Deep-CNN), recurrent hybrid (CNN–LSTM, CNN–BiLSTM), and attention-based (CNN–Attention, CNN–Transformer) models. Performance metrics were reported as mean ± standard deviation across folds. The CNN–Attention architecture achieved the best overall performance, yielding a Balanced Accuracy of 90.1% ± 1.8% and a macro F1-score of 89.7% ± 2.1%, demonstrating stable inter-patient generalization. These findings indicate that attention-enhanced hybrid architectures effectively capture both local spectral structures and long-range temporal dependencies inherent in respiratory signals. The proposed framework provides a robust foundation for subject-independent automated lung sound classification and contributes to the development of clinically reliable decision-support systems. Full article
(This article belongs to the Section Assistive Technologies)
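The subject-independent validation described in the abstract hinges on one invariant: no patient's recordings may appear in both the train and test side of a fold. A minimal sketch of such a splitter, with hypothetical patient identifiers, is shown below (the paper itself uses scikit-learn-style `GroupKFold`; this pure-NumPy version only illustrates the grouping guarantee):

```python
import numpy as np

def group_kfold_indices(groups, n_splits=5):
    """Yield (train_idx, test_idx) pairs in which every group (here: a
    patient) falls entirely on either the train or the test side."""
    groups = np.asarray(groups)
    unique = np.unique(groups)
    folds = np.array_split(unique, n_splits)  # simple, unstratified assignment
    for held_out in folds:
        test_mask = np.isin(groups, held_out)
        yield np.where(~test_mask)[0], np.where(test_mask)[0]

# ten recordings from four patients (hypothetical identifiers)
patient_ids = ["p1", "p1", "p2", "p2", "p2", "p3", "p3", "p4", "p4", "p4"]
splits = list(group_kfold_indices(patient_ids, n_splits=2))
```

Splitting on patient identifiers rather than on individual recordings is what prevents the optimistic bias the abstract warns about: a model can otherwise memorize patient-specific acoustics seen during training.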

17 pages, 10058 KB  
Article
AI-Based Potato Crop Abiotic Stress Detection via Instance Segmentation
by Emmanouil Savvakis, Dimitrios Kapetas, María del Carmen Martínez-Ballesta, Nikolaos Katsoulas and Eleftheria Maria Pechlivani
AI 2026, 7(3), 111; https://doi.org/10.3390/ai7030111 - 16 Mar 2026
Abstract
Background: Automated monitoring of crop health and the precise detection of abiotic stress, such as herbicide damage, are demanding challenges for modern agriculture: abiotic stresses are responsible for up to 82% of yield losses in major food crops. To address this, researchers are increasingly leveraging artificial intelligence (AI) to automate the detection and management of these stressors. Methods: In particular, this paper presents an instance segmentation framework to precisely detect interveinal chlorosis and leaf curling on potato leaves, two common symptoms of herbicide damage and soft wind. Within the context of precision agriculture and the need to address the inherent ambiguity in manual leaf assessment, this study employs a partial label learning approach to refine the dataset. This method utilizes an EfficientNet-b1 model to classify ambiguous samples, generating high-confidence pseudo-labels for instances that are difficult to categorize visually. The core of the proposed framework is a Mask2Former model, which is first fine-tuned on a general potato leaf dataset to enhance its segmentation capabilities and then transferred to the refined, pseudo-labeled dataset. Results & Conclusions: This two-stage approach yields a highly accurate segmentation tool, achieving 89% mAP50 and a pseudo-label classification accuracy of 95%, designed for integration into smart agriculture systems such as ground-level robotics or unmanned aerial vehicles for real-time, automated crop monitoring. Full article
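The high-confidence pseudo-labeling step can be sketched in a few lines. This is a generic confidence-threshold filter, not the paper's exact procedure; the softmax outputs, class names, and the 0.95 threshold below are all illustrative assumptions.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.95):
    """Keep only predictions whose top-class probability clears the
    confidence threshold; everything else stays unlabeled."""
    probs = np.asarray(probs)
    conf = probs.max(axis=1)        # top-class confidence per sample
    labels = probs.argmax(axis=1)   # predicted class per sample
    keep = conf >= threshold
    return np.where(keep)[0], labels[keep]

# hypothetical softmax outputs for four ambiguous leaf crops,
# classes: 0 = interveinal chlorosis, 1 = leaf curling
probs = np.array([
    [0.98, 0.02],   # confident chlorosis -> pseudo-labeled
    [0.60, 0.40],   # ambiguous          -> discarded
    [0.03, 0.97],   # confident curling  -> pseudo-labeled
    [0.50, 0.50],   # ambiguous          -> discarded
])
idx, labels = select_pseudo_labels(probs, threshold=0.95)
```

Only the confidently classified crops would then be added to the training set for the segmentation model, which is how partial label learning turns visually ambiguous samples into usable supervision.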

22 pages, 2762 KB  
Article
Automated Classification of Medical Image Modality and Anatomy
by Jean de Smidt, Kian Anderson and Andries Engelbrecht
Algorithms 2026, 19(3), 222; https://doi.org/10.3390/a19030222 - 16 Mar 2026
Abstract
Radiological departments face challenges in efficiency and diagnostic consistency. The interpretation of radiographs remains highly variable between practitioners, which creates potential disparities in patient care. This study explores how artificial intelligence (AI), specifically transfer learning techniques, can automate parts of the radiological workflow to improve service quality and efficiency. Transfer learning methods were applied to various convolutional neural network (CNN) architectures and compared to classify medical images across different modalities, i.e., X-rays, ultrasound, magnetic resonance imaging (MRI), and angiography, through a two-component model: medical image modality prediction and anatomical region prediction. Several publicly available datasets were combined to create a representative dataset to evaluate residual networks (ResNet), dense networks (DenseNet), efficient networks (EfficientNet), and the Swin Transformer (Swin-T). The models were evaluated through accuracy, precision, recall, and F1-score metrics with macro-averaging to account for class imbalance. The results demonstrate that lightweight transfer learning methods effectively classify medical imagery, with an accuracy of 97.21% on test data for the combined transfer learning pipeline. EfficientNet-B4 demonstrated the best performance on both components of the proposed pipeline and achieved a 99.6% accuracy for modality prediction and 99.21% accuracy for anatomical region prediction on unseen test data. This approach offers the potential for streamlined radiological workflows while maintaining diagnostic quality. The strong model performance across diverse modalities and anatomical regions indicates robust generalisability for practical implementation in clinical settings. Full article
(This article belongs to the Special Issue Advances in Deep Learning-Based Data Analysis)
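The two-component pipeline in the abstract (modality prediction, then anatomical-region prediction) amounts to a simple routing structure. The sketch below uses stub callables standing in for the fine-tuned networks; the image representation, thresholds, and class names are hypothetical, and only the routing logic reflects the described design.

```python
def classify_image(image, modality_model, region_models):
    """Two-component pipeline: predict the imaging modality first,
    then route the image to that modality's anatomical-region model."""
    modality = modality_model(image)
    region = region_models[modality](image)
    return modality, region

# stub models standing in for the trained classifiers (illustrative only)
modality_model = lambda img: "xray" if img["mean_intensity"] > 0.5 else "mri"
region_models = {
    "xray": lambda img: "chest",
    "mri": lambda img: "brain",
}

result = classify_image({"mean_intensity": 0.7}, modality_model, region_models)
```

Keeping the modality and region predictors separate lets each be trained and evaluated on its own label space, which matches the per-component accuracies reported in the abstract.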
