Search Results (8,710)

Search Parameters:
Keywords = computer architectures

17 pages, 3167 KB  
Article
MedScanGAN: Synthetic PET & CT Scan Generation Using Conditional Generative Adversarial Networks for Medical AI Data Augmentation
by Agorastos-Dimitrios Samaras, Ioannis D. Apostolopoulos and Nikolaos Papandrianos
Bioengineering 2026, 13(3), 281; https://doi.org/10.3390/bioengineering13030281 - 27 Feb 2026
Abstract
This study tackles the challenge of data scarcity in medical AI, focusing on Non-Small-Cell Lung Cancer (NSCLC) diagnosis from Positron Emission Tomography (PET) and Computed Tomography (CT) images. We introduce MedScanGAN, a conditional Generative Adversarial Network designed to generate high-fidelity synthetic PET and CT images of Solitary Pulmonary Nodules (SPNs) to enhance computer-aided diagnosis systems. The framework incorporates advanced architectural features, including residual blocks, spectral normalization, and stabilized training strategies. MedScanGAN produces realistic images—particularly for PET representations—capable of plausibly misleading medical professionals. More importantly, when used to augment training datasets for established deep learning models such as YOLOv8, VGG-16, ResNet, and MobileNet, the synthetic data significantly improves NSCLC classification performance. Accuracy gains of up to +5.8 absolute percentage points were observed, with YOLOv8 achieving the best results at 94.14% accuracy, 93.12% specificity, and 95.33% sensitivity using the augmented dataset. The conditional generation mechanism enables the targeted synthesis of underrepresented classes, effectively addressing class imbalance. Overall, this work demonstrates both state-of-the-art medical image synthesis and its practical value in improving real-world diagnostic systems, bridging generative AI research and clinical pulmonary oncology. Full article
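The conditional mechanism the abstract credits with addressing class imbalance is, in standard conditional GANs, a class label fed to the generator alongside the noise vector. A minimal sketch of that interface; the noise dimension, class labels, and plain-list representation are illustrative assumptions, not MedScanGAN's actual design:

```python
import random

def make_generator_input(label: int, num_classes: int, z_dim: int = 8):
    """Standard cGAN conditioning: concatenate a noise vector with a one-hot
    class label so the generator can be asked for a specific class."""
    z = [random.gauss(0.0, 1.0) for _ in range(z_dim)]
    one_hot = [1.0 if i == label else 0.0 for i in range(num_classes)]
    return z + one_hot

# Targeted synthesis of an underrepresented class: request its label directly.
BENIGN, MALIGNANT = 0, 1
x = make_generator_input(MALIGNANT, num_classes=2)
assert x[-2:] == [0.0, 1.0]  # the conditioning suffix selects the class
```

Augmentation then amounts to sampling such inputs with the rare label over-represented until the training classes balance.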
31 pages, 15013 KB  
Article
BiFusion-LDSeg: A Latent Diffusion Framework with Bi-Directional Attention Fusion for Landslide Segmentation in Satellite Imagery
by Bingxin Shi, Hongmei Guo, Yin Sun, Jianyu Long, Li Yang, Yadong Zhou, Jingjing Jiao, Jingren Zhou, Yusen He and Huajin Li
Remote Sens. 2026, 18(5), 719; https://doi.org/10.3390/rs18050719 - 27 Feb 2026
Abstract
Rapid and accurate mapping of earthquake-triggered landslides from satellite imagery is critical for emergency response and hazard assessment, yet remains challenging due to irregular boundaries, extreme size variations, and atmospheric noise. This paper proposes BiFusion-LDSeg, a novel bi-directional fusion enhanced latent diffusion framework that synergistically combines CNN-Transformer architectures with generative diffusion models for robust landslide segmentation. The framework introduces three key innovations: (1) a dual-encoder with Bi-directional Attention Gates (Bi-AG) enabling sophisticated cross-modal feature calibration between local CNN textures and global Transformer context; (2) a conditional latent diffusion process operating in learned low-dimensional landslide shape manifolds, reducing computational complexity by 100× while enabling inference with only 10 sampling steps versus 1000+ in standard diffusion models; and (3) a boundary-aware progressive decoder employing multi-scale reverse attention mechanisms for precise boundary delineation. Comprehensive experiments on three earthquake datasets from Sichuan Province, China (Lushan Mw 7.0, Jiuzhaigou Mw 6.5, Luding Mw 6.8) demonstrate superior performance, outperforming state-of-the-art methods by 7–13% in IoU and 5–7% in DSC across all three datasets. The framework exhibits exceptional noise robustness, strong cross-dataset generalization, and inherent uncertainty quantification, enabling reliable deployment for post-earthquake landslide inventory mapping at regional scales. Full article
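The abstract's efficiency claims compose two independent factors: fewer elements per denoising step in the latent space, and 10 sampling steps instead of 1000+. A back-of-the-envelope cost model; the tile size and the 10x-per-dimension downsampling are assumptions chosen only to reproduce the stated 100x per-step figure:

```python
# Rough diffusion cost ~ (elements processed per step) x (number of steps).
H = W = 250          # hypothetical satellite tile size
down = 10            # assumed per-dimension latent downsampling factor

pixel_elems = H * W
latent_elems = (H // down) * (W // down)

per_step = pixel_elems / latent_elems   # 100.0: the abstract's 100x reduction
steps_saved = 1000 / 10                 # 10 latent steps vs 1000+ pixel steps
total_speedup = per_step * steps_saved  # the two factors multiply
assert per_step == 100.0 and total_speedup == 10000.0
```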
21 pages, 4471 KB  
Article
MCS-YOLO: A Mamba-Enhanced Coordinate and Spatial YOLO Network for Lightweight Weed Detection
by Qi Yan, Ning Jin, Si Li, Huaji Zhu and Huarui Wu
Agriculture 2026, 16(5), 539; https://doi.org/10.3390/agriculture16050539 - 27 Feb 2026
Abstract
Precision weeding is crucial for maximizing crop yields and minimizing herbicide use. However, deploying standard deep learning models in agriculture faces challenges due to the high morphological diversity of weeds and the computational constraints of edge devices. Hence, this study proposes MCS-YOLO, a lightweight detection model based on the YOLOv8 architecture. First, a channel-level Mamba module is integrated into the backbone to model long-range feature dependencies and enhance global texture representation. The LMAB module employs parallel depthwise separable convolutions with varying receptive fields and coordinate attention to improve multi-scale weed discrimination. To mitigate feature blurring and misalignment during upsampling, the LCAU module adopts dynamic offset sampling beyond fixed interpolation methods. Finally, the SCS-Head integrates dual-branch depthwise separable convolution with channel shuffling to reduce parameter redundancy while preserving efficient feature expression. Experimental results on the Weed-Crop dataset demonstrate that MCS-YOLO achieves 76.4% mAP@50 and 38.3% mAP@50–95, outperforming YOLOv8s by 3.1% and 1.5%, respectively. Furthermore, the parameter count is reduced by 20.7%, from 11.13 M to 8.83 M, and GFLOPs are reduced by 39.6%, from 28.5 to 17.2. These results confirm that MCS-YOLO effectively balances a lightweight design with high detection accuracy, offering a viable solution for real-time weed detection and automated weeding on embedded agricultural platforms. Full article
(This article belongs to the Section Crop Protection, Diseases, Pests and Weeds)
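The reported compression figures follow directly from the parameter and GFLOP counts quoted in the abstract, and can be checked in two lines:

```python
# Figures quoted in the abstract (parameter counts in millions).
params_before, params_after = 11.13, 8.83
gflops_before, gflops_after = 28.5, 17.2

param_cut = (params_before - params_after) / params_before * 100
gflop_cut = (gflops_before - gflops_after) / gflops_before * 100

assert round(param_cut, 1) == 20.7   # reported parameter reduction
assert round(gflop_cut, 1) == 39.6   # reported GFLOPs reduction
```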

23 pages, 2799 KB  
Review
The Architecture of Intelligent Governance (AIG): A Conceptual Framework for Integrating AI, Quantum Computing, and Global Resource Resilience
by Ali Ayoub
Sustainability 2026, 18(5), 2312; https://doi.org/10.3390/su18052312 - 27 Feb 2026
Abstract
Artificial intelligence is transforming global resource systems and reshaping the foundations of corporate governance. This paper develops the Architecture of Intelligent Governance (AIG), a hybrid governance framework that integrates AI-enabled analytical capabilities with human judgment, ethical reasoning, and strategic foresight. Drawing on evidence from energy systems, supply chains, critical mineral dependencies, agribusiness, and emerging quantum-computing infrastructures, the analysis demonstrates how AI enhances forecasting precision, strengthens transparency, and supports more adaptive decision-making in environments characterized by volatility and interdependence. At the same time, the paper introduces a criticality perspective to examine the systemic risks associated with AI, including energy intensity, technological concentration, and algorithmic opacity. These risks underscore the need for leadership models that extend beyond technical expertise to encompass interpretive judgment, ethical stewardship, cultural competence, and long-term strategic thinking. The unified leadership framework presented here positions leadership as the human anchor of intelligent governance, ensuring that AI-enabled decisions remain aligned with organizational values and societal expectations. The AIG model offers a comprehensive approach to governing AI-intensive systems, advancing a vision of corporate governance that is resilient, transparent, and oriented toward long-term sustainability. Full article
33 pages, 2674 KB  
Review
Application of Artificial Intelligence in Environmental Analysis for Decision Making in Energy Efficiency in University Classrooms Monitored with IoT
by Ana Bustamante-Mora, Francisco Escobar-Jara, Jaime Díaz-Arancibia, Gabriel Mauricio Ramírez and Javier Medina-Gómez
Appl. Sci. 2026, 16(5), 2322; https://doi.org/10.3390/app16052322 - 27 Feb 2026
Abstract
The integration of Artificial Intelligence (AI) and Internet of Things (IoT) technologies in educational buildings represents an emerging opportunity to enhance intelligent environmental monitoring, data analysis, and energy optimization. This article presents a systematic literature review focused on AI-based applications in IoT-enabled learning environments, with special attention to indoor air quality (IAQ) management. A total of 585 documents were initially retrieved from Web of Science, Scopus, and IEEE Xplore using two targeted search strings. After removing duplicates and applying successive relevance filters based on title, abstract, and pertinence, 128 final documents were selected for full-text analysis. This study addresses four research questions: (RQ1) Which AI techniques are applied to environmental data analysis in educational contexts? (RQ2) What methods are used to detect sensor anomalies in IoT-based monitoring systems? (RQ3) How is AI applied in real-time decision making based on air quality indicators? (RQ4) What AI-driven strategies support energy efficiency in classrooms? The results reveal a growing use of machine learning and deep learning models, such as convolutional neural networks, decision trees, and LSTM architectures, particularly in applications focused on air quality classification, fault detection, and predictive control. Supervised learning methods were the most frequently applied, with CNN-based models leading in air quality prediction tasks and decision trees being preferred for anomaly detection. Deep learning approaches showed higher accuracy but required greater computational resources, limiting their use in low-cost educational environments. However, the literature also shows a lack of contextualized implementations, especially in low-resource or Latin American environments, and a limited focus on user-centered and educationally integrable systems. In addition, the review identifies a research gap regarding the integration of environmental and educational data, suggesting the potential for future empirical studies that evaluate real classroom conditions using IoT devices to inform AI-driven energy optimization strategies in academic settings. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in the Internet of Things)

32 pages, 4314 KB  
Article
A Hardware-Aware Federated Meta-Learning Framework for Intraday Return Prediction Under Data Scarcity and Edge Constraints
by Zhe Wen, Xin Cheng, Ruixin Xue, Jinao Ye, Zhongfeng Wang and Meiqi Wang
Appl. Sci. 2026, 16(5), 2319; https://doi.org/10.3390/app16052319 - 27 Feb 2026
Abstract
Although deep learning has achieved remarkable success in time-series prediction, intraday algorithmic trading is characterized by frequent regime shifts (concept drift), which can rapidly render models trained on historical data obsolete in real applications. This motivates on-device adaptation at edge trading terminals. However, practical deployment is constrained by a tripartite bottleneck: real-time samples are scarce, hardware resources on edge are limited, and communication overhead between cloud and edge must be kept low to satisfy stringent latency requirements. To address these challenges, we develop a hardware-aware edge learning framework that combines federated learning (FL) and meta-learning to enable rapid few-shot personalization without exposing local data. Importantly, the framework incorporates our proposed Sleep Node Algorithm (SNA), which turns the “FL + meta-learning” combination into a practical and efficient edge solution. Specifically, SNA dynamically deactivates “inertial” (insensitive) network components during adaptation: it provides a structural regularizer that stabilizes few-shot updates and mitigates overfitting under concept drift, while inducing sparsity that reduces both on-device computation and cloud-edge communication. To efficiently leverage these unstructured zero nodes introduced by SNA, we further design a dedicated accelerator, EPAST (Energy-efficient Pipelined Accelerator for Sparse Training). EPAST adopts a heterogeneous architecture and introduces a dedicated Backward Pipeline (BPIP) dataflow that overlaps backpropagation stages, thereby improving hardware utilization under irregular sparse workloads. Experimental results demonstrate that our system consistently outperforms strong baselines, including DQN, GARCH-XGBoost, and LRU, in terms of Pearson IC. A 55 nm CMOS ASIC implementation further validates robust learning under an extreme 5-shot setting (IC = 0.1176), achieving an end-to-end training speed-up of 11.35× and an energy efficiency of 45.78 TOPS/W. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Industrial Engineering)
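The Sleep Node Algorithm is described only at a high level in the abstract. A toy sketch of the core idea it states (rank network units by a sensitivity score and deactivate the least sensitive ones during adaptation); the gradient-magnitude score and the keep ratio are assumptions, not the paper's actual criteria:

```python
def sleep_mask(sensitivities, keep_ratio=0.5):
    """Toy sleep-node masking: keep only the most sensitive units and put the
    rest to sleep (mask to zero), inducing the sparsity that cuts both
    on-device computation and cloud-edge traffic."""
    k = max(1, int(len(sensitivities) * keep_ratio))
    threshold = sorted(sensitivities, reverse=True)[k - 1]
    return [1 if s >= threshold else 0 for s in sensitivities]

grad_magnitudes = [0.9, 0.05, 0.4, 0.01, 0.7, 0.02]   # hypothetical scores
mask = sleep_mask(grad_magnitudes, keep_ratio=0.5)
assert mask == [1, 0, 1, 0, 1, 0]   # the three "inertial" units sleep
```

Only the unmasked units would then be updated locally and communicated to the cloud, which is where the communication savings come from.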

25 pages, 24156 KB  
Article
MLCANet: Multi-Level Composite Attention-Guided Network for Non-Homogeneous Image Dehazing in Adverse Weather Conditions
by Yongsheng Qiu
Sensors 2026, 26(5), 1505; https://doi.org/10.3390/s26051505 - 27 Feb 2026
Abstract
Image dehazing is a challenging ill-posed problem in low-level computer vision tasks, requiring the restoration of high-quality, haze-free images from complex and foggy conditions. Deep learning-based dehazing methods struggle to effectively remove non-homogeneous fog distributions due to the uneven and dense nature of fog patches, making it difficult to handle real-world fog variations. A key challenge for non-homogeneous image dehazing algorithms is efficiently capturing the spatial distribution of haze in areas with varying fog densities while restoring fine image details. To address these challenges, we propose MLCANet, a multi-level composite attention-guided network for non-homogeneous image dehazing. MLCANet mitigates the impact of uneven haze areas through two main components: the Multi-level Composite Attention Generation Network (MCAGN) and the Dehazed Image Reconstruction Network (DIRN). The MCAGN integrates channel attention (CA), spatial attention (SA), and multi-scale pixel attention (MSPA) to capture haze features at different spatial scales. The DIRN, based on an encoder-decoder architecture, combines multi-scale dilated convolutions and deformable convolutions to restore fine image details more flexibly and efficiently. Extensive qualitative and quantitative experiments, along with ablation studies, demonstrate the effectiveness and feasibility of this method for non-homogeneous image dehazing. Full article
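Of the three attention types the MCAGN combines, channel attention is the simplest to illustrate. A minimal squeeze-and-excitation-style sketch; the abstract does not specify MLCANet's exact CA design, so the global-average squeeze and sigmoid gate here are assumptions:

```python
import math

def channel_attention(feature_maps):
    """Minimal channel attention: squeeze each channel (one 2-D map) to its
    global average, pass it through a sigmoid gate, and re-weight the channel
    so informative channels are emphasised."""
    weights = []
    for fmap in feature_maps:
        avg = sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
        weights.append(1.0 / (1.0 + math.exp(-avg)))   # sigmoid gate
    return [[[v * w for v in row] for row in fmap]
            for fmap, w in zip(feature_maps, weights)]

maps = [[[1.0, 1.0], [1.0, 1.0]],    # active channel
        [[0.0, 0.0], [0.0, 0.0]]]    # silent channel
out = channel_attention(maps)
assert out[1] == [[0.0, 0.0], [0.0, 0.0]]   # silent channel stays silent
```

Spatial and pixel attention follow the same gate-and-reweight pattern, but compute the gate per location rather than per channel.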

26 pages, 8499 KB  
Article
Research into and Application of Lightweight Models Based on Model Pruning and Knowledge Distillation in Desert Grassland Plant Recognition
by Hongxing Ma, Lin Li, Kaiwen Chen, Jintai Chi, Shuhua Wei, Xiaobin Ren, Wei Sun and Jianping Gou
Agriculture 2026, 16(5), 526; https://doi.org/10.3390/agriculture16050526 - 27 Feb 2026
Abstract
Accurate plant recognition in desert grasslands is essential for ecological monitoring, yet existing models face critical limitations: poor generalization in complex natural environments and excessive computational demands for mobile deployment. This study proposes YOLOv11-PKD, a lightweight model integrating structured pruning and knowledge distillation for efficient desert grassland plant identification. First, we develop YOLOv11-STC, a high-capacity teacher model incorporating the SPPCSPC module for multi-scale feature extraction, Triplet Attention for spatial refinement, and a GSConv-based Slim Neck for optimized feature fusion. This architecture achieves 88.3% mAP50 on the DGPlant48 dataset, outperforming the baseline YOLOv11n by 6.8%. To enable edge deployment, we apply channel pruning guided by BatchNorm scaling factors, compressing the model by 19.75% in parameters and 20% in GFLOPS (YOLOv11-Pruned: 79.5% mAP50, 4.7 MB). Subsequently, L2-based knowledge distillation recovers performance, yielding YOLOv11-PKD with 87.9% mAP50—approaching teacher-level accuracy—while maintaining 5.0 MB size, 2.150 M parameters, and 5.5 GFLOPS. The model is successfully deployed via a mobile application, achieving ~1 s response times for field-based plant identification. This work demonstrates a practical balance between accuracy and efficiency for resource-constrained ecological monitoring. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
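Channel pruning guided by BatchNorm scaling factors, as the abstract describes, treats each channel's learned gamma as an importance score and cuts the channels with the smallest magnitudes. A minimal sketch (the prune fraction and example gammas are illustrative):

```python
def prune_channels(gammas, prune_fraction=0.2):
    """Return the indices of channels to keep: those whose BatchNorm scaling
    factor |gamma| is largest. Channels with small |gamma| contribute little
    to the layer's output and are considered prunable."""
    n_prune = int(len(gammas) * prune_fraction)
    order = sorted(range(len(gammas)), key=lambda i: abs(gammas[i]))
    pruned = set(order[:n_prune])
    return [i for i in range(len(gammas)) if i not in pruned]

gammas = [0.8, 0.01, 0.5, -0.02, 0.9, 0.3, -0.7, 0.04, 0.6, 0.2]
kept = prune_channels(gammas, prune_fraction=0.2)
assert len(kept) == 8 and 1 not in kept and 3 not in kept
```

In practice this is usually paired with an L1 sparsity penalty on the gammas during training so that unimportant channels are driven toward zero before pruning.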

42 pages, 3268 KB  
Article
LITO: Lemur-Inspired Task Offloading for Edge–Fog–Cloud Continuum Systems
by Asma Almulifi and Heba Kurdi
Sensors 2026, 26(5), 1497; https://doi.org/10.3390/s26051497 - 27 Feb 2026
Abstract
Edge, fog, and cloud continuum architectures that interconnect resource-constrained devices, intermediate edge servers, and remote cloud data centers face persistent challenges in handling heterogeneous and latency-sensitive workloads while reducing energy consumption and improving resource utilization. Classical task offloading approaches either rely on static heuristics, which lack adaptability to dynamic conditions, or on metaheuristic optimizers, which often incur high computational overhead and centralized coordination. This paper proposes LITO, a lemur-inspired task offloading algorithm for edge, fog, and cloud continuum systems that models the infrastructure as a social system in which computing nodes assume distinct roles that mirror lemur social hierarchies. Building on an abstracted model of lemur group behavior, LITO incorporates two key lemur-inspired mechanisms: an energy-aware task assignment mechanism based on sun basking, a thermoregulation behavior in which lemurs seek favorable warm spots, mapped here to selecting energetically efficient execution nodes, and a cooperative scheduling policy based on huddling, group clustering under stress, mapped here to sharing load among overloaded nodes. These mechanisms are combined with a continual supervised policy-learning layer with contextual bandit feedback that refines offloading decisions from online feedback. The resulting multi-objective formulation jointly minimizes energy consumption and deadline violations while maximizing resource utilization and throughput under high-load conditions in the edge and fog segment of the continuum. Simulations under diverse workload regimes and task complexities show that LITO outperforms representative multi-objective offloading baselines in terms of energy consumption, resource utilization, latency, Service Level Agreement (SLA) violations, and throughput in congested scenarios. Full article
(This article belongs to the Section Internet of Things)
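The "sun basking" mechanism maps to a simple rule: among nodes that can still meet the task's deadline, offload to the most energy-favourable one. A toy sketch of that selection step; the node fields and numbers are hypothetical simplifications of the paper's model, and the huddling and bandit-learning layers are not shown:

```python
def sun_bask(nodes, task_deadline):
    """Energy-aware assignment: filter nodes by deadline feasibility, then
    pick the one with the lowest estimated energy cost per task."""
    feasible = [n for n in nodes if n["latency"] <= task_deadline]
    if not feasible:
        return None
    return min(feasible, key=lambda n: n["energy_per_task"])

nodes = [
    {"id": "edge-1",  "latency": 5,   "energy_per_task": 2.0},
    {"id": "fog-1",   "latency": 20,  "energy_per_task": 1.2},
    {"id": "cloud-1", "latency": 120, "energy_per_task": 0.6},
]
assert sun_bask(nodes, task_deadline=50)["id"] == "fog-1"   # cloud too slow
assert sun_bask(nodes, task_deadline=10)["id"] == "edge-1"  # only edge fits
```

The trade-off the example shows is the crux of the continuum: the cheapest node energetically (cloud) is often infeasible for latency-sensitive tasks.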

21 pages, 1627 KB  
Article
EGTJ: An Unsupervised and Non-Parametric Approach for Efficient Text Classification Under Resource-Limited Environments
by Haifeng Lv and Yong Ding
Mathematics 2026, 14(5), 801; https://doi.org/10.3390/math14050801 - 27 Feb 2026
Abstract
Deep neural networks (DNNs) dominate text classification but suffer from high computational costs and poor generalization in data-scarce or Out-of-Distribution (OOD) environments. Conversely, compression-based non-parametric methods offer robustness but incur prohibitive inference latency due to their reliance on exhaustive pairwise comparisons. To bridge this gap, this study proposes EGTJ, a training-free framework that introduces a novel retrieval-augmented compression architecture. Unlike prior works that apply similarity metrics in isolation, EGTJ utilizes an inverted-index pre-filtering mechanism to dynamically constrain the comparison scope, effectively reducing algorithmic complexity from linear to constant time relative to the training set size. Furthermore, a tri-metric fusion strategy is introduced that integrates information-theoretic (gzip), lexical (TF-IDF), and structural (Jaccard) similarities to mitigate the inherent biases of individual metrics. Experimental results across five in-distribution and four OOD datasets demonstrate that EGTJ achieves superior accuracy over all baseline methods—notably outperforming BERT by over 30% in 5-shot OOD scenarios—while simultaneously slashing inference latency by orders of magnitude compared to standard compression-based approaches. These findings present EGTJ as a scalable, high-performance alternative for resource-constrained NLP, effectively solving the scalability bottleneck of non-parametric classification. Full article
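The information-theoretic component is concrete enough to sketch with the standard library: normalized compression distance under gzip, applied as nearest-neighbour classification over a pre-filtered candidate set. The inverted-index pre-filter and the TF-IDF/Jaccard fusion are assumed and not shown; the documents and labels are invented for illustration:

```python
import gzip

def ncd(a: str, b: str) -> float:
    """Normalized compression distance with gzip as the compressor:
    texts that share structure compress better together than apart."""
    ca = len(gzip.compress(a.encode()))
    cb = len(gzip.compress(b.encode()))
    cab = len(gzip.compress((a + " " + b).encode()))
    return (cab - min(ca, cb)) / max(ca, cb)

def classify(query, labelled, candidate_ids):
    """1-NN over candidate_ids only; restricting the comparison scope to a
    retrieved candidate set is what removes the linear scan over training data."""
    best = min(candidate_ids, key=lambda i: ncd(query, labelled[i][0]))
    return labelled[best][1]

docs = [("the stock market fell sharply today", "finance"),
        ("the team won the championship game", "sports"),
        ("quarterly earnings beat market expectations", "finance")]
label = classify("markets and earnings news", docs, candidate_ids=[0, 1, 2])
```

On short strings like these NCD is noisy; the method's strength, per the abstract, comes from fusing it with lexical and structural similarities.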

21 pages, 472 KB  
Article
Efficient CNN–GRU Transfer Learning for Edge IoT Intrusion Detection
by Amjad Gamlo, Sanaa Sharaf and Rania Molla
Electronics 2026, 15(5), 981; https://doi.org/10.3390/electronics15050981 - 27 Feb 2026
Abstract
Intrusion detection in Internet of Things (IoT) environments is challenged by severe class imbalance, evolving attack patterns, and the limited computational resources of edge devices. To address these challenges, this paper proposes a lightweight transfer-learning framework based on a combined architecture of Convolutional Neural Network and Gated Recurrent Unit (CNN–GRU) for IoT intrusion detection. The model is first pretrained on a large-scale source dataset containing mixed benign and attack traffic, then adapted to a smaller and structurally different target dataset using partial finetuning. To enable efficient edge adaptation, early convolutional layers are frozen while only the GRU and classification head are updated on the target domain. A leakage-free, group-aware data preparation strategy with overlapping temporal windows is employed to ensure reliable evaluation. Experimental results demonstrate that the proposed lightweight transfer approach achieves solid macro-level detection performance while reducing training cost compared to full finetuning. Additional analysis using a CPU-based inference proxy shows low latency and a small model footprint. This supports the feasibility of edge deployment. The results confirm that lightweight transfer learning offers an effective balance between detection performance and adaptation efficiency for resource-constrained IoT intrusion detection systems. Full article
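The partial fine-tuning step (freeze the early convolutional layers, update only the GRU and classification head) reduces to masking which parameters receive gradient updates. A framework-free toy sketch; the layer names and scalar "weights" are hypothetical stand-ins for real tensors:

```python
def finetune_step(params, grads, frozen, lr=0.5):
    """One partial fine-tuning update: parameters named in `frozen` keep
    their pretrained values; all others take a gradient step."""
    return {name: (w if name in frozen else w - lr * grads[name])
            for name, w in params.items()}

params = {"conv1": 1.0, "conv2": 2.0, "gru": 3.0, "head": 4.0}
grads  = {name: 2.0 for name in params}          # dummy gradients
new = finetune_step(params, grads, frozen={"conv1", "conv2"})

assert new["conv1"] == 1.0 and new["conv2"] == 2.0   # frozen layers untouched
assert new["gru"] == 2.0 and new["head"] == 3.0      # only GRU/head updated
```

Besides cutting compute, freezing the early layers preserves the generic traffic features learned on the large source dataset, which is the usual rationale for this split.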

11 pages, 590 KB  
Article
Design and Performance Evaluation of Communication Systems Based on Non-Orthogonal Overlapped Chirp Modulation
by Guoping Liu, Jiaju Zhang, Qiusheng Gao, Wenjiang Pei, Junpeng Zhang and Sinuo Jiao
Symmetry 2026, 18(3), 412; https://doi.org/10.3390/sym18030412 - 27 Feb 2026
Abstract
With the evolution of smart grids, power communication networks are increasingly required to support high-bandwidth and diversified services such as high-definition video, real-time control, and positioning—services that impose dual challenges of communication capacity and spectrum constraints—under severe resource limitations. Conventional orthogonal modulation schemes exhibit significant limitations in spectral efficiency and concurrent access capabilities, particularly in supporting high-density user environments. To address this, we propose a communication system based on non-orthogonal overlapped chirp modulation, in which the intrinsic symmetry properties of chirp waveforms are utilized to enhance system design and performance. We first construct the system architecture with a multi-symbol concurrent transmission scheme and introduce continuous orthogonal phase modulation to improve symbol distinguishability and mitigate inter-symbol interference—an approach that effectively harnesses signal symmetry for interference suppression. At the receiver, a low-complexity demodulation algorithm based on correlation matrix computation is developed, further improved through oversampling techniques that exploit temporal and spectral symmetry in signal design. Monte Carlo simulations confirm that the proposed system outperforms traditional orthogonal chirp and orthogonal frequency division multiplexing systems in bit error rate performance and spectral efficiency across varying signal-to-noise ratios and modulation schemes. The proposed NOOC system achieves spectral efficiency scaling linearly with concurrency level K, reaching up to 16 bits/s/Hz for K = 16 with BPSK, compared to 1 bit/s/Hz in orthogonal systems. The study provides both a theoretical foundation and practical insights for developing symmetry-aware, efficient, and reliable air interface technologies suitable for future power-private networks. Full article
(This article belongs to the Section Engineering and Materials)
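Correlation-based chirp demodulation, the receiver principle the abstract names, can be sketched in a few lines: correlate the received waveform against each candidate reference chirp and pick the largest magnitude. The chirp parameters and two-symbol alphabet here are illustrative; the paper's non-orthogonal overlapped scheme with phase modulation is more involved:

```python
import cmath, math

def chirp(f0, rate, n=64):
    """Discrete linear chirp: phase = 2*pi*(f0*t + rate*t^2/2), t = k/n."""
    return [cmath.exp(2j * math.pi * (f0 * k / n + rate * (k / n) ** 2 / 2))
            for k in range(n)]

def correlate(x, y):
    """Magnitude of the inner product <x, y>."""
    return abs(sum(a * b.conjugate() for a, b in zip(x, y)))

# Two reference chirps differing only in start frequency.
refs = {0: chirp(f0=2, rate=8), 1: chirp(f0=10, rate=8)}
received = chirp(f0=10, rate=8)            # noiseless symbol "1"
decoded = max(refs, key=lambda s: correlate(received, refs[s]))
assert decoded == 1
```

With overlapped non-orthogonal symbols the correlations are no longer zero or full-scale, which is why the paper needs the correlation-matrix formulation and oversampling to keep symbols distinguishable.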

23 pages, 3630 KB  
Article
Improving Object Detection in Generalized Foggy Conditions of Insulator Defect Detection Based on Drone Images
by Abdulrahman Kariri and Khaled Elleithy
Electronics 2026, 15(5), 979; https://doi.org/10.3390/electronics15050979 - 27 Feb 2026
Abstract
Routine evaluation of insulator performance is important for maintaining the reliability and safety of power system operations. The use of unmanned aerial vehicles (UAVs) has been a significant advancement in transmission line monitoring, effectively replacing traditional manual inspection methods. With the rapid advancement of deep learning techniques, methods based on these models for detecting insulator defects have attracted increasing research interest and achieved notable advancements. Nevertheless, existing approaches primarily emphasize constructing sophisticated and intricate network architectures, which consequently lead to greater inference complexity when applied in practical scenarios. On the other hand, foggy scenarios pose challenges for learning algorithms due to difficulties in obtaining and labeling samples, as well as the poor performance of detectors trained on clear-weather samples. This study proposes a YOLO-based adaptive enhancement framework with robustness and domain generalization under fog-induced distribution shifts. It optimizes at multiple scales and enhances images as input to a detector in a single pipeline. Experimental results demonstrate improved performance on the public UPID and SFID insulator defect datasets, improving insulator defect detection precision without increased computational complexity or inference resources, which is of great significance for advancing object detection in adverse weather. The proposed method achieves real-time performance, with an end-to-end inference speed exceeding 25 FPS and a model-only speed of approximately 38 FPS on 678 images from UPID, demonstrating both practical applicability and computational efficiency. Full article
(This article belongs to the Special Issue Feature Papers in Networks: 2025–2026 Edition)
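The abstract above describes enhancement feeding directly into a detector in a single pipeline. A minimal sketch of that structure, with a simple gamma adjustment standing in for the paper's adaptive enhancement and a stub in place of the YOLO detector (both illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def enhance(img, gamma=0.8):
    """Gamma-based contrast adjustment as a simple stand-in for the
    paper's adaptive fog enhancement (illustrative only)."""
    scaled = img.astype(np.float32) / 255.0
    return (scaled ** gamma * 255.0).astype(np.uint8)

def detect(img):
    """Detector stub: a real pipeline would run a YOLO model here."""
    return [{"label": "insulator_defect", "score": 0.90}]

def pipeline(img):
    """Enhancement output feeds the detector directly, in one pass."""
    return detect(enhance(img))

foggy = np.full((4, 4, 3), 128, dtype=np.uint8)  # flat, low-contrast patch
detections = pipeline(foggy)
```

Because enhancement is a preprocessing stage inside the same forward pass, detection gains under fog need not add model parameters or inference resources, consistent with the abstract's claim.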
19 pages, 899 KB  
Article
Investigating Epistemic Uncertainty in PCB Defect Detection: A Comparative Study Using Monte Carlo Dropout
by Efosa Osagie and Rebecca Balasundaram
J. Exp. Theor. Anal. 2026, 4(1), 11; https://doi.org/10.3390/jeta4010011 - 27 Feb 2026
Abstract
Deep learning models have become central to automated Printed Circuit Board (PCB) defect detection. However, recent work has raised concerns about how reliably these models express confidence in their predictions, particularly when deployed in safety-critical inspection systems. This study conducts an empirical investigation of epistemic uncertainty across representative architectures used in PCB inspection: the two-stage Faster R-CNN detector, the one-stage YOLOv8 detector, and their corresponding classification counterparts, ResNet-50 and YOLOv8-Cls. Monte Carlo Dropout (MCD) was applied during inference to compute predictive entropy, mutual information, softmax variance, and bounding-box variability across multiple stochastic forward passes on both multiclass and binary inspection datasets. On the multiclass SolDef_AI dataset, Faster R-CNN achieved substantially stronger detection performance (mAP = 0.7607, F1 = 0.9304) and lower predictive entropy, with more stable localisation. In contrast, YOLOv8 produced markedly weaker performance (mAP = 0.2369, F1 = 0.3130) alongside higher entropy and greater bounding-box variability. On the binary Jiafuwen datasets, the YOLOv8-Cls model achieved higher overall performance (F1 = 0.6493) compared with the ResNet-50 classifier (F1 = 0.4904), reflecting its strength in simpler binary inspection tasks. Across uncertainty metrics, predictive entropy and mutual information were more sensitive to dataset size, showing higher and more variable values in the smaller multiclass dataset, whereas softmax variance and bounding-box variability appeared more architecture-dependent. These findings demonstrate that architectural choice, dataset structure, and task formulation jointly influence both performance and uncertainty behaviour. By integrating conventional metrics with uncertainty estimates, this study provides a transparent benchmark for assessing model confidence in automated optical inspection of PCBs. Full article
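The uncertainty metrics named in the abstract (predictive entropy, mutual information, softmax variance) can be computed from the softmax outputs of multiple stochastic forward passes with dropout left active. A minimal sketch under that assumption; the sample probabilities are invented for illustration and are not from the study's datasets:

```python
import numpy as np

def mc_dropout_uncertainty(probs, eps=1e-12):
    """Uncertainty metrics from T stochastic forward passes.

    probs: (T, C) array of softmax outputs obtained with dropout kept
    active at inference time (Monte Carlo Dropout).
    """
    mean_p = probs.mean(axis=0)  # predictive distribution
    predictive_entropy = -np.sum(mean_p * np.log(mean_p + eps))
    # Mean entropy of the individual passes (aleatoric component)
    expected_entropy = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    mutual_information = predictive_entropy - expected_entropy  # epistemic part
    softmax_variance = probs.var(axis=0).mean()
    return predictive_entropy, mutual_information, softmax_variance

# Five stochastic passes over three classes for one input
samples = np.array([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.8, 0.1, 0.1],
    [0.5, 0.4, 0.1],
    [0.7, 0.2, 0.1],
])
pe, mi, sv = mc_dropout_uncertainty(samples)
```

Mutual information isolates disagreement between passes (epistemic uncertainty), which is why the study can separate it from the total predictive entropy; bounding-box variability would analogously be the spread of box coordinates across passes.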
18 pages, 1079 KB  
Article
Feasibility of Using Large Language Models for Structured Medication Extraction from Clinical Text: A Comparative Analysis of Zero-Shot and Few-Shot Paradigms
by Evan Schulte, Mohamed Abusharkh, Kushal Dahal, Michael Klepser and Minji Sohn
Appl. Sci. 2026, 16(5), 2300; https://doi.org/10.3390/app16052300 - 27 Feb 2026
Abstract
The digitization of healthcare has been accompanied by a rapid expansion of electronic health records (EHRs); however, a significant proportion of critical patient data, specifically medication regimens, remains entrapped within unstructured clinical narratives. The inability to seamlessly compute this data hinders advancements in pharmacovigilance, clinical decision support, and population health management. This study presents a comprehensive, rigorous evaluation of the feasibility of deploying Large Language Models (LLMs) to automate the extraction of structured dosage information (Dose, Daily Frequency, Duration) from outpatient antimicrobial clinical notes sourced from the Collaboration to Harmonize Antimicrobial Registry Measures (CHARM) registry. We scrutinized the performance of five distinct open-weight architectures, namely GPT-OSS:20B, Gemma 2:9B, Mistral 7B, Qwen3:14B and Llama 3.2, across both Zero-Shot and Retrieval Augmented Generation (RAG)-based Few-Shot prompting paradigms. Our analysis reveals a fundamental architectural trade-off: the reasoning-optimized GPT-OSS:20B dominates the zero-shot landscape (F1 > 0.90) by leveraging abstract schema understanding, whereas the instruction-tuned Gemma 2:9B excels in the few-shot setting (F1 ~ 0.99), effectively utilizing examples as guardrails to surpass larger models. Conversely, smaller models (Mistral, Llama) exhibit a prohibitive “hallucination barrier,” rendering them unsafe for unsupervised clinical application. Furthermore, we identify “Inconsistent Unit Handling” and “Complex Temporal Logic” as persistent failure modes that resist simple scaling laws. This report provides a definitive framework for selecting model architectures based on the availability of few-shot examples and highlights the necessity of dynamic RAG strategies to achieve production-grade reliability in medical informatics. Full article
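The zero-shot versus RAG-based few-shot distinction above comes down to whether retrieved (note, output) exemplars are prepended to the extraction instruction. A hedged sketch of that prompt assembly; the schema wording, helper name, and example notes are illustrative, not the CHARM registry pipeline:

```python
def build_prompt(note, examples=None):
    """Assemble a medication-extraction prompt.

    Zero-shot when `examples` is None; few-shot when (note, json) pairs
    (e.g. retrieved by a RAG step) are supplied. The target schema
    mirrors the fields studied: Dose, Daily Frequency, Duration.
    """
    instruction = (
        "Extract medication details from the clinical note below. "
        'Return JSON with keys "Dose", "DailyFrequency", "Duration". '
        "Use null for any field not stated."
    )
    parts = [instruction]
    for ex_note, ex_json in examples or []:
        parts.append(f"Note: {ex_note}\nOutput: {ex_json}")
    parts.append(f"Note: {note}\nOutput:")
    return "\n\n".join(parts)

note = "Amoxicillin 500 mg three times daily for 10 days"
zero_shot = build_prompt(note)
few_shot = build_prompt(note, examples=[(
    "Cephalexin 250 mg twice daily for 7 days",
    '{"Dose": "250 mg", "DailyFrequency": 2, "Duration": "7 days"}',
)])
```

The study's finding that examples act as "guardrails" for instruction-tuned models corresponds to the exemplar lines here fixing the output format and unit conventions before the target note is presented.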