Search Results (4,186)

Search Parameters:
Keywords = artifacts

29 pages, 3365 KB  
Article
A Hybrid Automatic Model for Circle Detection in X-Ray Imagery: A Case Study on Hip Prosthesis Wear
by Mehmet Öztürk and Yahia Adwan
Bioengineering 2026, 13(2), 235; https://doi.org/10.3390/bioengineering13020235 - 17 Feb 2026
Abstract
This study presents a fully automatic hybrid framework for circle detection and geometric feature extraction from anteroposterior (AP) X-ray images. Detecting circular structures in X-ray imagery is challenging due to low contrast, noise, and metal-induced artifacts, which often limit the robustness of purely learning-based or purely geometric approaches. To address these challenges, a hybrid deep learning and computer vision pipeline is proposed that combines data-driven region localization with robust geometric fitting. A YOLOv5-based detector is first employed to identify a compact region of interest (ROI) containing circular components. Within this ROI, edge-based processing using Canny detection is applied, followed by an Edge-Snap refinement stage and robust RANSAC-based circle fitting with a Hough-transform fallback to ensure anatomically plausible circle estimation. The resulting circle centers and radii provide stable geometric parameters that can be consistently extracted across images with varying contrast, noise levels, and prosthesis appearances. The applicability of the proposed framework is demonstrated through a case study on hip prosthesis wear analysis, where the automatically detected circle parameters are used to compute medial, superior, and resultant displacement components using established two-dimensional radiographic formulations. Experimental evaluation on AP hip radiographs shows that the YOLOv5 detector achieves high ROI localization performance (mAP@0.5 = 0.971) and that the hybrid pipeline produces consistent circle parameters across longitudinal image sequences. Overall, the proposed method provides an end-to-end automatic solution for robust circle detection in X-ray imagery, with hip prosthesis wear presented solely as a case study without clinical or diagnostic claims. Full article
(This article belongs to the Section Biosignal Processing)
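
As a rough illustration of the geometric stage described in this abstract (Canny edges within a detected ROI, with a Hough-transform circle search as the fallback estimator), the sketch below uses OpenCV; all parameter values and the file path are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of the edge-based fallback stage: Canny edges inside a
# pre-localized ROI, then a Hough-transform circle search. Parameters are
# illustrative only, not taken from the paper.
import cv2
import numpy as np

def detect_circles_in_roi(roi_gray: np.ndarray):
    """Return an edge map and (x, y, r) circle candidates found inside a grayscale ROI."""
    blurred = cv2.GaussianBlur(roi_gray, (5, 5), 1.5)    # suppress noise before edge detection
    edges = cv2.Canny(blurred, 50, 150)                  # edge map a RANSAC fitter would consume
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                               param1=150, param2=40, minRadius=10, maxRadius=200)
    if circles is None:
        return edges, []
    return edges, [(float(x), float(y), float(r)) for x, y, r in circles[0]]

# roi = cv2.imread("hip_roi.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
# _, candidates = detect_circles_in_roi(roi)
```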

18 pages, 990 KB  
Perspective
From Network Governance to Real-World-Time Learning: A High-Reliability Operating Model for Rare Cancers
by Bruno Fuchs, Anna L. Falkowski, Ruben Jaeger, Barbara Kopf, Christian Rothermundt, Kim van Oudenaarde, Ralph Zacchariah, Philip Heesen, Georg Schelling and Gabriela Studer
Cancers 2026, 18(4), 643; https://doi.org/10.3390/cancers18040643 - 16 Feb 2026
Abstract
Background: Rare cancers combine low incidence with high biological heterogeneity and multi-institutional care trajectories. These features make single-center learning structurally incomplete and render pathway fragmentation a dominant driver of preventable harm, variability, and waste. In this context, care quality is best understood as a property of pathway integrity across routing, diagnostics (imaging/biopsy planning), multidisciplinary intent-setting, definitive treatment, and surveillance—rather than as a department-level attribute. Objective: To define a pragmatic, transferable operating blueprint for a rare-cancer Learning Health System (LHS) that turns routine care into continuous, auditable learning under explicit governance, while maintaining claims discipline and protecting measurement validity. Approach: We synthesize an implementation-oriented operating model using the Swiss Sarcoma Network (SSN) as an exemplar. The blueprint couples clinical governance (Integrated Practice Unit logic, hub-and-spoke routing, auditable multidisciplinary team decision systems) with an interoperable real-world-time data backbone designed for benchmarking, pathway mapping, and feedback. The operating logic is expressed as a closed-loop control cycle: capture → harmonize → benchmark → learn → implement → re-measure, with explicit owners, minimum requirements, and failure modes. Results/Blueprint: (i) The model specifies a minimal set of data primitives—time-stamped and traceable decision points covering baseline and tumor characteristics, pathway timing, treatment exposure, outcomes and complications, and feasible longitudinal PROMs and PREMs; (ii) a VBHC-ready, multi-domain measurement backbone spanning outcomes, harms, timeliness, function, process fidelity, and resource stewardship; and (iii) two non-negotiable validity guardrails: explicit applicability (“N/A”) rules and mandatory case-mix/complexity stratification. Implementation is treated as a governed step with defined workflow levers, fidelity criteria, balancing measures, and escalation thresholds to prevent “dashboard medicine” and surrogate-driven optimization. Conclusions: This perspective contributes an operating model—not a platform or single intervention—that enables credible improvement science and establishes prerequisites for downstream causal learning and minimum viable digital twins. By distinguishing enabling infrastructure from the governed clinical system as the primary intervention, the blueprint supports scalable, learnable excellence in rare-cancer care while protecting against gaming, inequity, and inference drift. Distinct from generic LHS or VBHC frameworks, this blueprint specifies validity gates required for rare-cancer benchmarking—explicit applicability (“N/A”) rules, denominator integrity/capture completeness disclosure, anti-gaming safeguards, and escalation governance. These elements are critical in rare cancers because small denominators, high heterogeneity, and multi-institutional pathways otherwise make benchmarking prone to artifacts and unsafe inferences. Full article

17 pages, 630 KB  
Article
Prevalence of Pulmonary Nematodes in Cats and Lung Ultrasound Findings in Separate Animal Cohorts: A Coprological, Molecular and Clinical Study
by Dawid Jańczak, Agata Moroz-Fik, Karolina Radziejewska, Aleksandra Kornelia Maj, Piotr Górecki, Jakub Kędziorek, Mateusz Antecki, Anna Maria Pyziel and Olga Szaluś-Jordanow
Animals 2026, 16(4), 622; https://doi.org/10.3390/ani16040622 - 15 Feb 2026
Abstract
Background: Pulmonary nematodes are an underrecognized cause of respiratory disease in domestic cats, with diagnosis often complicated by nonspecific clinical signs and limitations of fecal-based testing. Methods: This study aimed to assess the prevalence of feline lungworms in Poland and to describe lung ultrasound findings in a separate clinical cohort of cats. A nationwide coprological survey was conducted using pooled fecal samples from 1058 cats examined with Baermann and flotation techniques, supported by molecular diagnostics where available. Results: Overall, 9.83% of cats were positive for at least one parasite. Aelurostrongylus abstrusus was the most frequently detected lungworm (7.18%), followed by Eucoleus aerophilus (2.17%) and Troglostrongylus brevior (0.47%). Lungworm infections were strongly associated with younger age and showed marked seasonal variation, with higher prevalence in autumn and winter. Lung ultrasound consistently revealed diffuse B-line artifacts and other signs of reduced lung aeration, often in the absence of severe respiratory signs. Following treatment with topical imidacloprid/moxidectin, complete resolution of ultrasonographic abnormalities and clinical signs was observed. Conclusions: These findings confirm that feline pulmonary nematodes are present in Poland and may be underdiagnosed. Lung ultrasound represents a sensitive and non-invasive tool for detecting and monitoring lung involvement, but should be interpreted in conjunction with epidemiological data, parasitological results and therapeutic response. Full article
(This article belongs to the Special Issue Respiratory Diseases of Companion Animals)

26 pages, 26398 KB  
Article
WEMFusion: Wavelet-Driven Hybrid-Modality Enhancement and Discrepancy-Aware Mamba for Optical–SAR Image Fusion
by Jinwei Wang, Yongjin Zhao, Liang Ma, Bo Zhao, Fujun Song and Zhuoran Cai
Remote Sens. 2026, 18(4), 612; https://doi.org/10.3390/rs18040612 - 15 Feb 2026
Abstract
Optical and synthetic aperture radar (SAR) imagery are highly complementary in terms of texture details and structural scattering characterization. However, their imaging mechanisms and statistical distributions differ substantially. In particular, pseudo-high-frequency components introduced by SAR coherent speckle can be easily entangled with genuine optical edges, leading to texture mismatch, structural drift, and noise diffusion. To address these issues, we propose WEMFusion, a wavelet-prior-driven framework for frequency-domain decoupling and discrepancy-aware state-space fusion. Specifically, a multi-scale discrete wavelet transform (DWT) explicitly decomposes the inputs into low-frequency structural components and directional high-frequency sub-bands, providing an interpretable frequency-domain constraint for cross-modality alignment. We design a hybrid-modality enhancement (HME) module: in the high-frequency branch, it effectively injects optical edges and directional textures while suppressing the propagation of pseudo-high-frequency artifacts, and in the low-frequency branch, it reinforces global structural consistency and prevents speckle perturbations from leaking into the structural component, thereby mitigating structural drift. Furthermore, we introduce a discrepancy-aware gated Mamba fusion (DAG-MF) block, which generates dynamic gates from modality differences and complementary responses to modulate the parameters of a directionally scanned two-dimensional state-space model, so that long-range dependency modeling focuses on discrepant regions while preserving directional coherence. Extensive quantitative evaluations and qualitative comparisons demonstrate that WEMFusion consistently improves structural fidelity and edge detail preservation across multiple optical–SAR datasets, achieving superior fusion quality with lower computational overhead. Full article
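
To make the frequency-domain decoupling step concrete, here is a minimal sketch of the multi-scale DWT split into a low-frequency structural band and directional high-frequency sub-bands, using PyWavelets; the wavelet, decomposition level, and random inputs are illustrative assumptions, not the paper's configuration.

```python
# Sketch of the DWT decoupling: each modality is split into a low-frequency
# approximation (global structure) and per-level (LH, HL, HH) detail bands.
# Wavelet choice, level, and the random inputs are illustrative assumptions.
import numpy as np
import pywt

def dwt_decompose(img: np.ndarray, wavelet: str = "db2", level: int = 2):
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    return coeffs[0], coeffs[1:]          # (low-frequency band, list of detail-band tuples)

def dwt_reconstruct(low, highs, wavelet: str = "db2") -> np.ndarray:
    return pywt.waverec2([low, *highs], wavelet=wavelet)

optical = np.random.rand(256, 256)        # stand-ins for co-registered optical/SAR patches
sar = np.random.rand(256, 256)
opt_low, opt_high = dwt_decompose(optical)
sar_low, sar_high = dwt_decompose(sar)
# A fusion rule (the paper's HME and DAG-MF modules) would combine these bands
# before inverse transformation with dwt_reconstruct.
```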

20 pages, 1394 KB  
Article
Verifying Machine Learning Interpretability and Explainability Requirements Through Provenance
by Lynn Vonderhaar, Juan Couder, Tyler Thomas Procko, Eva Lueddeke, Daryela Cisneros and Omar Ochoa
Software 2026, 5(1), 9; https://doi.org/10.3390/software5010009 - 14 Feb 2026
Abstract
Machine learning (ML) engineering increasingly incorporates principles from software and requirements engineering to improve development rigor; however, key non-functional requirements (NFRs) such as interpretability and explainability remain difficult to specify and verify using traditional requirements practices. Although prior work defines these qualities conceptually, their lack of measurable criteria prevents systematic verification. This paper presents a novel provenance-driven approach that decomposes ML interpretability and explainability NFRs into verifiable functional requirements (FRs) by leveraging model and data provenance to make model behavior transparent. The approach identifies the specific provenance artifacts required to validate each FR and demonstrates how their verification collectively establishes compliance with interpretability and explainability NFRs. The results show that ML provenance can operationalize otherwise abstract NFRs, transforming interpretability and explainability into quantifiable, testable properties and enabling more rigorous, requirements-based ML engineering. Full article
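
As one way to picture the decomposition described here, the sketch below records provenance artifacts against a decomposed functional requirement and checks that each requirement is backed by at least one artifact; the record fields, requirement IDs, and URI are hypothetical illustrations, not the paper's schema.

```python
# Hypothetical sketch of tying provenance artifacts to decomposed functional
# requirements; field names, IDs, and the URI are illustrative, not the paper's schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    requirement_id: str      # e.g. "FR-EXPL-03: per-prediction attributions are stored"
    artifact_type: str       # "dataset", "training_run", "explanation", ...
    artifact_uri: str        # where the artifact lives (hypothetical URI below)
    produced_by: str         # pipeline step or tool that emitted the artifact
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def is_verified(requirement_id: str, records: list[ProvenanceRecord]) -> bool:
    """Treat an FR as verified when at least one provenance artifact traces to it."""
    return any(r.requirement_id == requirement_id for r in records)

records = [ProvenanceRecord("FR-EXPL-03", "explanation",
                            "s3://example-bucket/shap/run42.json", "attribution_step")]
print(is_verified("FR-EXPL-03", records))   # True
```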

31 pages, 1196 KB  
Review
Beyond the Cuff: State-of-the-Art on Cuffless Blood Pressure Monitoring
by Yaheya Shafti, Steven Hughes, William Taylor, Muhammad A. Imran, David Owens and Shuja Ansari
Sensors 2026, 26(4), 1243; https://doi.org/10.3390/s26041243 - 14 Feb 2026
Abstract
Blood pressure (BP) monitoring is crucial for identifying high BP (hypertension) and is an important aspect of patient care. However, traditional cuff-based methods for BP monitoring are unsuitable for continuous monitoring and can cause discomfort to patients. This survey critically examines the emerging field of cuffless BP monitoring, highlighting advances beyond traditional cuff-based methods. Technologies such as radar, optical, acoustic, and capacitive sensors offer the potential for continuous, non-invasive BP estimation, enabling applications in remote health monitoring and ambient clinical intelligence. We introduce a unifying taxonomy covering sensing modalities, physiological measurement principles, signal processing techniques, and translational challenges. Emphasis is placed on methods that eliminate subject-specific calibration, overcome motion artifacts, and satisfy international validation standards. The review also analyses Machine Learning (ML) and sensor fusion approaches that enhance predictive accuracy. Despite encouraging results, challenges remain in achieving clinically acceptable accuracy across diverse populations and real-world conditions. This work delineates the current landscape, benchmarks performance against gold standards, and identifies key future directions for scalable, explainable, and regulatory-compliant BP monitoring systems. Full article
(This article belongs to the Section Biomedical Sensors)

31 pages, 6599 KB  
Article
A Generative AI-Driven Industrial Design Framework for Human–GenAI Co-Creation
by Chen Chen, Fangmin Cheng, Boyi Zhang, Ruozhen Jin, Chaoyi Dong, Zhixue Sun and Yaxuan Zhou
Symmetry 2026, 18(2), 352; https://doi.org/10.3390/sym18020352 - 13 Feb 2026
Abstract
Generative AI (GenAI) is accelerating design space exploration and multimodal prototyping in industrial design (ID), bringing new efficiencies and possibilities to early-stage ideation and cross-media expression. Yet many studies do not clearly define stage-wise human–GenAI roles, preserve constraints as traceable cross-stage artifacts, or provide verifiable stage-wise evaluation, undermining traceability in both concept convergence and concept-to-engineering handover. To address these issues, this paper proposes GID-HGCC, a GenAI-driven human–GenAI co-creation ID framework that links four core stages: requirements confirmation, concept generation, concept evaluation, and 3D modeling. First, it specifies stage-wise responsibilities and defines the corresponding inputs and outputs. Second, it establishes a traceable cross-stage artifact flow—“structured prompts–candidate concepts–evaluation outputs–3D engineering issue list”—to support continuous constraint transmission and explicit documentation. Third, it integrates a multi-dimensional evaluation criteria system with IVIFNs–CRITIC–TOPSIS for concept ranking, and further strengthens convergence reliability via preference–consistency diagnostics. The framework is validated through a case study on a portable passive cervical spine rehabilitation training device. Expert preferences over stage-wise co-creation artifacts exhibit an overall medium-to-high level of consistency, and the Top-5 overlap between each expert and the group ranking ranges from 0.80 to 1.00. These results demonstrate that GID-HGCC offers an operational reference for constraint-guided human–GenAI co-creation in ID, improving traceability and handover reliability from requirements confirmation to engineering refinement. Full article
(This article belongs to the Section Computer)
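
For readers unfamiliar with the ranking step, the sketch below shows plain (crisp) TOPSIS over a weighted decision matrix; the paper's full pipeline uses interval-valued intuitionistic fuzzy numbers and CRITIC-derived weights, which are omitted here, and all scores and weights are illustrative.

```python
# Crisp TOPSIS sketch for ranking candidate concepts. The paper's IVIFN and
# CRITIC components are omitted; scores and weights below are illustrative.
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Return closeness coefficients (higher = better) for each alternative (row)."""
    norm = matrix / np.linalg.norm(matrix, axis=0)            # vector-normalize each criterion
    v = norm * weights                                        # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))   # positive ideal solution
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))    # negative ideal solution
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

scores = np.array([[7, 8, 3], [9, 6, 4], [6, 9, 2]], dtype=float)  # 3 concepts x 3 criteria
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, True, False])      # third criterion treated as a cost
cc = topsis(scores, weights, benefit)
print(np.argsort(cc)[::-1])                  # concept indices, best first
```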

35 pages, 5490 KB  
Article
An Evaluation Method for Model Maturity Supporting Model-Based Systems Engineering at the Conceptual Design Stage
by Chong Jiang, Wu Zhao, Tianxiang Li and Jun Li
Processes 2026, 14(4), 639; https://doi.org/10.3390/pr14040639 - 12 Feb 2026
Abstract
Multi-level models are core artifacts of Model-Based Systems Engineering (MBSE) for cross-disciplinary collaboration and staged evolution, yet assessing their maturity in the conceptual design phase remains difficult. This paper proposes a systematic, model-centric maturity assessment method for instrumentation conceptual design. By tailoring ISO/IEC 25010 to instrumentation characteristics, we establish a seven-dimensional quality attribute framework (functional suitability, performance efficiency, interaction capability, reliability, maintainability, flexibility, and structural completeness) and an L0–L4 maturity scale for multi-level MBSE models. The indicators are structured using a Quality Attribute Utility Tree. CRITIC derives the objective weights by jointly considering score dispersion and inter-indicator correlation, and Dempster–Shafer evidence theory maps the indicator values and expert ratings onto basic belief assignments and fuses the multi-source evidence into output maturity levels with explicit confidence and uncertainty. A case study of an automatic dosing instrument for solid foam drainage agents at a high-pressure gas wellhead yields an overall maturity of L1 (Structured), with BetP(L1) = 0.424 and an overall unknown mass of 0.186. The results highlight reliability and performance efficiency as the main bottlenecks and support targeted model refinement and resource allocation in early-stage design. Full article
(This article belongs to the Section Petroleum and Low-Carbon Energy Process Engineering)
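
The CRITIC weighting mentioned above can be sketched in a few lines: each indicator's weight combines its score dispersion with its conflict (one minus correlation) against the other indicators. The Dempster–Shafer fusion stage is omitted and the rating matrix is synthetic.

```python
# Sketch of CRITIC objective weighting: weight = dispersion x conflict, then
# normalized. The Dempster-Shafer fusion stage is omitted; ratings are synthetic.
import numpy as np

def critic_weights(scores: np.ndarray) -> np.ndarray:
    """scores: alternatives x indicators matrix, assumed normalized to [0, 1]."""
    std = scores.std(axis=0, ddof=1)                 # score dispersion per indicator
    corr = np.corrcoef(scores, rowvar=False)         # inter-indicator correlation
    conflict = (1.0 - corr).sum(axis=0)              # disagreement with other indicators
    info = std * conflict                            # information content
    return info / info.sum()

ratings = np.random.default_rng(0).random((6, 7))    # e.g. 6 model views x 7 quality attributes
print(critic_weights(ratings).round(3))
```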

17 pages, 4681 KB  
Article
Towards Adaptive Adverse Weather Removal via Semantic and Low-Level Visual Perceptual Priors
by Wei Dong, Han Zhou, Terry Ji and Jun Chen
Mach. Learn. Knowl. Extr. 2026, 8(2), 45; https://doi.org/10.3390/make8020045 - 12 Feb 2026
Abstract
Adverse weather removal aims to restore images degraded by haze, rain, or snow. However, existing unified models often rely on implicit degradation cues, making them vulnerable to inaccurate weather perception and insufficient semantic guidance, which leads to over-smoothing or residual artifacts in real scenes. In this work, we propose AWR-VIP, a prior-guided adverse weather removal framework that explicitly extracts semantic and perceptual priors using a frozen vision–language model (VLM). Given a degraded input, we first employ a degradation-aware prompt extractor to produce a compact set of semantic tags describing key objects and regions, and simultaneously perform weather-type perception by prompting the VLM with explicit weather definitions. Conditioned on the predicted weather type and selected tags, the VLM further generates two levels of restoration guidance: a global instruction that summarizes image-level enhancement goals (e.g., visibility/contrast) and local instructions that specify tag-aware refinement cues (e.g., recover textures for specific regions). These textual outputs are encoded by a text encoder into a pair of priors (Pglobal and Plocal), which are injected into a UNet-based restorer through global-prior-modulated normalization and instruction-guided attention, enabling weather-adaptive and content-aware restoration. Extensive experiments on a combined benchmark show that AWR-VIP consistently outperforms state-of-the-art methods. Moreover, the VLM-derived priors are plug-and-play and can be integrated into other restoration backbones to further improve performance. Full article
(This article belongs to the Section Learning)
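
One plausible reading of "global-prior-modulated normalization" is a FiLM-style layer in which the encoded global instruction predicts per-channel scale and shift; the sketch below is that interpretation only, with illustrative dimensions, and is not claimed to match the paper's module.

```python
# FiLM-style sketch of injecting the global text prior (P_global) into a UNet
# feature map: the prior predicts per-channel scale and shift applied after
# instance normalization. An interpretation with illustrative dimensions only.
import torch
import torch.nn as nn

class PriorModulatedNorm(nn.Module):
    def __init__(self, channels: int, prior_dim: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.to_scale_shift = nn.Linear(prior_dim, 2 * channels)

    def forward(self, feat: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
        gamma, beta = self.to_scale_shift(prior).chunk(2, dim=-1)    # (B, C) each
        gamma, beta = gamma[:, :, None, None], beta[:, :, None, None]
        return self.norm(feat) * (1 + gamma) + beta

feat = torch.randn(2, 64, 32, 32)     # restorer feature map
p_global = torch.randn(2, 512)        # text-encoded global instruction (assumed dimension)
print(PriorModulatedNorm(64, 512)(feat, p_global).shape)   # torch.Size([2, 64, 32, 32])
```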

21 pages, 891 KB  
Article
Architectural Constraints in LLM-Simulated Cognitive Decline: In Silico Dissociation of Memory Deficits and Generative Language as Candidate Digital Biomarkers
by Rubén Pérez-Elvira, Javier Oltra-Cucarella, María Agudo Juan, Luis Polo-Ferrero, Manuel Quintana Díaz, Jorge Bosch-Bayard, Alfonso Salgado Ruiz, A. N. M. Mamun Or Rashid and Raúl Juárez-Vela
AI 2026, 7(2), 69; https://doi.org/10.3390/ai7020069 - 12 Feb 2026
Abstract
This study examined whether large language models (LLMs) can generate clinically realistic profiles of cognitive decline and whether simulated deficits reflect architectural constraints rather than superficial role-playing artifacts. Using GPT-4o-mini, we generated synthetic cohorts (n = 10 per group) representing healthy aging, mild cognitive impairment (MCI), and Alzheimer’s disease (AD), assessed through a conversational neuropsychological battery covering episodic memory, verbal fluency, narrative production, orientation, naming, and comprehension. Experiment 1 tested whether synthetic subjects exhibited graded cognitive profiles consistent with clinical progression (Control > MCI > AD). Experiment 2 systematically manipulated prompt context in AD subjects (short, rich biographical, and few-shot prompts) to dissociate robust from manipulable deficits. Significant cognitive gradients emerged (p < 0.001) across eight of thirteen domains. AD subjects showed impaired episodic memory (Cohen’s d = 4.71), increased memory intrusions, and reduced narrative length (d = 3.07). Critically, structurally constrained memory tasks (episodic recall, digit span) were invariant to prompting (p > 0.05), whereas generative tasks (narrative length, verbal fluency) showed high sensitivity (F > 100, p < 0.001). Rich biographical prompts paradoxically increased memory intrusions by 343%, indicating semantic interference rather than cognitive rescue. These results demonstrate that LLMs can serve as in silico test benches for exploring candidate digital biomarkers and clinical training protocols, while highlighting architectural constraints that may inform computational hypotheses about memory and language processing. Full article
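
The effect sizes quoted above are Cohen's d; for reference, the pooled-standard-deviation form is shown below on synthetic recall scores (the numbers are made up, not the study's data).

```python
# Pooled-SD Cohen's d, the effect size reported above (e.g. d = 4.71 for
# episodic memory). The recall scores below are synthetic, not the study's data.
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

control = np.array([9, 8, 9, 7, 8, 9, 8, 9, 7, 8], dtype=float)   # n = 10 per group
ad = np.array([3, 2, 4, 3, 2, 3, 4, 2, 3, 3], dtype=float)
print(round(cohens_d(control, ad), 2))
```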

12 pages, 2881 KB  
Article
Hairless Image Preprocessing for Accurate Skin Lesion Detection and Segmentation
by Muhammet Pasaoglu and Irem Demirkan
Appl. Sci. 2026, 16(4), 1819; https://doi.org/10.3390/app16041819 - 12 Feb 2026
Abstract
Skin cancer is a widespread and potentially fatal disease for which early and accurate detection is essential to effective treatment. Automated analysis of dermatoscopic images is hampered by artifacts such as hair, low contrast, and irregular lesion edges that interfere with segmentation and classification. This study proposes an automated image preprocessing pipeline designed to remove such artifacts while preserving lesion texture and boundaries. The pipeline combines several computer vision techniques to produce a hairless dermatoscopic image, and lesion segmentation is subsequently performed using the HSV color space and binary masking. The effectiveness of the proposed preprocessing approach is evaluated using five state-of-the-art models: VGG16, ResNet50, InceptionV3, EfficientNet-B4, and DenseNet121. Full article
(This article belongs to the Section Biomedical Engineering)
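
A common hair-removal sketch (black-hat morphology plus inpainting) followed by an HSV threshold mask is shown below, in the spirit of the pipeline described above; kernel sizes, thresholds, and the file path are illustrative assumptions, not the authors' settings.

```python
# Common hair-removal sketch (black-hat morphology + inpainting) followed by an
# HSV threshold mask. Kernel sizes and thresholds are illustrative assumptions.
import cv2
import numpy as np

def remove_hair(bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)    # highlights dark hair strands
    _, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    return cv2.inpaint(bgr, hair_mask, 3, cv2.INPAINT_TELEA)         # fill masked hair pixels

def segment_lesion(bgr: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 0), (179, 255, 200))             # darker, saturated region
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((9, 9), np.uint8))

# img = cv2.imread("dermoscopy.jpg")      # hypothetical path
# lesion_mask = segment_lesion(remove_hair(img))
```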

6 pages, 699 KB  
Proceeding Paper
Towards Electoral Digitization: Automatic Classification of Handwritten Numbers in PREP System Records
by Miguel Angel Camargo Rojas, Gabriel Sánchez Pérez, José Portillo-Portillo, Linda Karina Toscano Medina, Aldo Hernández Suárez, Jesús Olivares Mercado, Héctor Manuel Pérez Meana and Luis Javier García Villalba
Eng. Proc. 2026, 123(1), 35; https://doi.org/10.3390/engproc2026123035 - 12 Feb 2026
Abstract
The digitization of electoral processes requires robust systems for processing handwritten numerical data from voting documents. This paper presents a convolutional neural network study for handwritten digit recognition in Mexico’s PREP (Programa de Resultados Electorales Preliminares) system. Rather than individual digit classification, we approach the problem as direct 1000-class classification, treating each three-digit combination as a single class to maximize accuracy and simplify inference. We evaluated eight CNN architectures including ResNet variants, MobileNetV3, ShuffleNetV2, and EfficientNet, with ResNet-18 emerging as optimal for balancing accuracy and computational efficiency under CPU-only deployment. To address dataset challenges including class imbalance and image artifacts, we developed a customized RandAugment strategy applying photometric and limited geometric transformations that preserve semantic integrity. Our methodology demonstrates feasibility of deploying robust digit recognition systems in resource-constrained electoral environments while maintaining high accuracy. The research provides a practical framework for automated electoral data processing adaptable to similar systems across Latin America. Full article
(This article belongs to the Proceedings of First Summer School on Artificial Intelligence in Cybersecurity)
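
The 1000-class formulation is straightforward to set up; the sketch below sizes a ResNet-18 head for one class per three-digit combination (weights untrained, batch and input size illustrative).

```python
# Sketch of the 1000-class setup: each three-digit tally (000-999) is a single
# label predicted by ResNet-18. Weights are untrained; input size is illustrative.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)             # CPU-friendly backbone selected in the paper
model.fc = nn.Linear(model.fc.in_features, 1000)  # one class per three-digit combination

crops = torch.randn(4, 3, 224, 224)               # batch of tally-cell crops
pred = model(crops).argmax(dim=1)
print([f"{int(p):03d}" for p in pred])            # e.g. ['037', '512', ...]
```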

19 pages, 2743 KB  
Article
Anti-Aliasing for Downsampling in CNNs Based on Gaussian Filter Convolution
by Guangyu Zheng, Xiqiang Ma, Xin Jin, Jiaran Du, Mengjie Zuo and Yaoyao Li
Electronics 2026, 15(4), 780; https://doi.org/10.3390/electronics15040780 - 12 Feb 2026
Abstract
Convolutional neural networks leverage their efficient ability to extract common features of images, playing a crucial role in numerous computer vision tasks. Key details such as edges and textures in images often present themselves in the form of high-frequency components, which contain rich semantic information and are essential for accurate image recognition and understanding. However, during the downsampling process, these high-frequency components are improperly mapped to low-frequency components, leading to signal aliasing. This aliasing results in the loss of image detail information and blurred features, significantly affecting the precise extraction of image features by convolutional neural networks and ultimately reducing the performance of the model in various tasks. To effectively address this challenge, this study innovatively proposes the Gaussian Filter Convolution (GFC) module. This module ingeniously utilizes convolution kernels with filtering functions, which can specifically suppress the high-frequency components in the image, reducing the occurrence of signal aliasing at its source, thereby significantly alleviating the aliasing artifacts generated during downsampling. Experimental data show that the model integrated with GFC has significant improvements in key indicators such as model accuracy. Full article
(This article belongs to the Section Artificial Intelligence)
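
The underlying idea, low-pass filtering before subsampling, can be sketched with a fixed depthwise Gaussian convolution applied at stride 2; kernel size, sigma, and stride below are illustrative choices and not the paper's exact GFC configuration.

```python
# Anti-aliasing sketch: a fixed (non-learned) depthwise Gaussian convolution
# attenuates high frequencies while downsampling at stride 2. Kernel size,
# sigma, and stride are illustrative, not the paper's exact GFC configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel2d(size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return k / k.sum()

class GaussianDownsample(nn.Module):
    def __init__(self, channels: int, size: int = 5, sigma: float = 1.0, stride: int = 2):
        super().__init__()
        kernel = gaussian_kernel2d(size, sigma).repeat(channels, 1, 1, 1)   # (C, 1, k, k)
        self.register_buffer("kernel", kernel)
        self.stride, self.padding, self.channels = stride, size // 2, channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.conv2d(x, self.kernel, stride=self.stride,
                        padding=self.padding, groups=self.channels)

print(GaussianDownsample(64)(torch.randn(1, 64, 56, 56)).shape)   # torch.Size([1, 64, 28, 28])
```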

42 pages, 3053 KB  
Review
A Comprehensive Review of Deepfake Detection Techniques: From Traditional Machine Learning to Advanced Deep Learning Architectures
by Ahmad Raza, Abdul Basit, Asjad Amin, Zeeshan Ahmad Arfeen, Muhammad I. Masud, Umar Fayyaz and Touqeer Ahmed Jumani
AI 2026, 7(2), 68; https://doi.org/10.3390/ai7020068 - 11 Feb 2026
Abstract
Deepfake technology poses unprecedented threats to the authenticity of digital media, and demand is high for reliable detection systems. This systematic review analyses deepfake detection methods from 2018 to 2025, spanning deep learning approaches, machine learning methods, and classical image processing, with a specific focus on the trade-offs among accuracy, computational efficiency, and cross-dataset generalization. Through extensive analysis of peer-reviewed studies using three benchmark datasets (FaceForensics++, DFDC, Celeb-DF), we surface findings that call some of the field's prevailing assumptions into question. Our analysis yields three results that reshape the understanding of detection capabilities and limitations. Transformer-based architectures generalize across datasets significantly better (11.33% performance decline) than CNN-based architectures (more than 15% decline), at the expense of 3–5× more computation. Conversely, deep learning is not unambiguously superior: traditional machine learning methods (in our case, Random Forest) achieve comparable performance (99.64% accuracy on DFDC) with dramatically lower computational needs, opening prospects for resource-constrained deployment scenarios. Most critically, we demonstrate systematic performance deterioration (10–15% on average) across all methodological classes and provide empirical support that current detection systems largely learn dataset-specific compression artifacts rather than generalizable deepfake characteristics. These results highlight the importance of moving from accuracy-focused evaluation toward more comprehensive approaches that balance generalization capability, computational feasibility, and practical deployment constraints, and they direct future research toward detection systems that can be deployed in practical applications. Full article
(This article belongs to the Section Medical & Healthcare AI)

19 pages, 6143 KB  
Article
Research on Density-Adaptive Feature Enhancement and Lightweight Spectral Fine-Tuning Algorithm for 3D Point Cloud Analysis
by Wenquan Huang, Teng Li, Qing Cheng, Ping Qi and Jing Zhu
Information 2026, 17(2), 184; https://doi.org/10.3390/info17020184 - 11 Feb 2026
Abstract
To address fragile feature representation in sparse regions and detail loss in occluded scenes caused by uneven sampling density in 3D point cloud semantic segmentation on the SemanticKITTI dataset, this article proposes a framework that integrates density-adaptive feature enhancement with lightweight spectral fine-tuning, in which frequency-domain transformations (e.g., Fast Fourier Transform) are applied to point cloud features to optimize computational efficiency and enhance robustness in sparse regions. The method begins by calculating each point's local neighborhood density using a KD-tree radius search and injecting it as an additional feature channel, enabling the network to adapt to density variations. A density-aware loss function is then employed, dynamically increasing the classification loss weights by approximately 40% in low-density areas to strongly penalize misclassifications and enhance feature robustness for sparse points. Additionally, a multi-view projection fusion mechanism projects point clouds onto multiple 2D views, capturing detailed information via mature 2D models; this information is then fused with the original 3D features through backprojection, complementing geometric relationships and texture details to alleviate occlusion artifacts. Experiments on the SemanticKITTI dataset for semantic segmentation show significant performance improvements over the baseline, achieving a Precision of 0.91, Recall of 0.89, and F1-Score of 0.90. In low-density regions, the F1-Score improved from 0.73 to 0.80. Ablation studies highlight the contributions of density feature injection, multi-view fusion, and density-aware loss, which improve F1-Score by 3.8%, 2.5%, and 5.0%, respectively. This framework offers an effective approach for accurate and robust point cloud analysis through density-aware techniques and spectral-domain fine-tuning. Full article
(This article belongs to the Section Artificial Intelligence)
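
As a rough sketch of the density feature and loss weighting described above: per-point neighbor counts from a KD-tree radius query are normalized, appended as an extra feature channel, and used to boost the loss weight by roughly 40% for sparse points. The radius, sparsity threshold, and random point cloud are illustrative assumptions.

```python
# Sketch of the density channel and density-aware weighting: KD-tree radius
# counts -> normalized density feature -> ~40% extra loss weight for sparse
# points. Radius, threshold, and the random points are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def local_density(points: np.ndarray, radius: float = 0.5) -> np.ndarray:
    tree = cKDTree(points)
    neighbors = tree.query_ball_point(points, r=radius)
    counts = np.array([len(n) - 1 for n in neighbors])       # exclude the point itself
    return counts / max(counts.max(), 1)                     # normalize to [0, 1]

points = np.random.rand(2048, 3) * 20.0                      # stand-in for a LiDAR scan
density = local_density(points)
features = np.concatenate([points, density[:, None]], axis=1)   # xyz + density channel

loss_weight = np.where(density < 0.2, 1.4, 1.0)   # ~40% heavier loss in sparse regions
print(features.shape, round(float(loss_weight.mean()), 3))
```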