Search Results (2,393)

Search Parameters:
Keywords = hardware challenges

24 pages, 3302 KB  
Systematic Review
Performance Trade-Offs in Multi-Tenant IoT–Cloud Security: A Systematic Review of Emerging Technologies
by Bader Alobaywi, Mohammed G. Almutairi and Frederick T. Sheldon
IoT 2026, 7(1), 21; https://doi.org/10.3390/iot7010021 - 22 Feb 2026
Abstract
Multi-tenancy is essential for scalable IoT–Cloud systems; however, it introduces complex security vulnerabilities at the intersection of shared cloud infrastructures and resource-constrained IoT environments. This systematic review evaluates next-generation security frameworks designed to enforce tenant isolation without violating the strict latency (<10 ms) and energy bounds of lightweight sensors. Adhering to PRISMA guidelines, we analyze selected high-quality studies to categorize intersectional threats, including cross-tenant data leakage, side-channel attacks, and privilege escalation. Our analysis identifies a critical, unresolved conflict: existing mitigation strategies often incur a 12% computational and communication overhead, creating a significant barrier for real-time applications. Furthermore, we critically analyze emerging technologies, including Zero Trust Architectures (ZTA), adaptive Artificial Intelligence (AI), blockchain, and Post-Quantum Cryptography (PQC). We find that direct PQC deployment is currently infeasible for LPWAN protocols due to key-size constraints (1.6 KB) that exceed typical payload limits. To address these challenges, we propose a novel multi-layer security design principle that offloads heavy isolation and cryptographic workloads to hardware-accelerated edge gateways, thereby maintaining tenant isolation without compromising real-time performance. Finally, this review serves as a roadmap for future research, highlighting federated learning and hardware enclaves as essential pathways for securing next-generation multi-tenant IoT ecosystems.

17 pages, 2032 KB  
Article
Coordinated Inertia Synthesis and Stability Design for PV Systems Utilizing DC-Link Capacitors
by Qi Hua, Lunbo Deng, Qiao Peng and Yongheng Yang
Energies 2026, 19(4), 1100; https://doi.org/10.3390/en19041100 - 22 Feb 2026
Abstract
The increasing penetration of inverter-based resources (IBRs) has been reducing system inertia and intensifying frequency stability challenges. Hence, various grid demands have been imposed on grid-connected systems, e.g., requiring the provision of an auxiliary service to the grid. In this context, this paper investigates the provision of synthesized inertia from the DC-link capacitors in grid-connected photovoltaic (PV) systems. For this configuration, the PV converter adopts a frequency–voltage droop control (FVDC) strategy, while a virtual synchronous generator (VSG) is employed on the grid side to emulate a synchronous generator, enabling the DC-link energy to contribute to primary frequency support. To quantify the virtual inertia and evaluate the closed-loop stability, a small-signal model of the inverter system is established. An eigenvalue analysis reveals that while increasing the DC-link voltage or capacitance enhances the achievable virtual inertia, it simultaneously narrows the stability margin. As such, comparative stability assessments under different parameter settings are performed, highlighting the distinct impacts of the DC-link voltages and capacitances on the emulated inertia and stability margins. The study provides insights into the maximum virtual inertia achievable via DC-link capacitors and offers practical guidelines for coordinating the controller and DC-link design to enhance frequency robustness in low-inertia power systems. Real-time hardware-in-the-loop (RT-HIL) tests validate the analytical findings.
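The inertia contribution described above ultimately comes from the energy stored in the DC-link capacitor. A minimal sketch of that energy-budget arithmetic follows; the paper's FVDC/VSG control laws and small-signal model are not reproduced, and all parameter values below are illustrative, not taken from the study.

```python
def dc_link_energy(c_f, v_dc):
    """Energy stored in a DC-link capacitor (Joules): E = 0.5 * C * V^2."""
    return 0.5 * c_f * v_dc ** 2

def usable_energy(c_f, v_nom, v_min):
    """Energy releasable for frequency support if the DC-link voltage is
    allowed to dip from v_nom to v_min during a frequency event."""
    return dc_link_energy(c_f, v_nom) - dc_link_energy(c_f, v_min)

def virtual_inertia_constant(c_f, v_nom, v_min, s_rated):
    """Equivalent inertia constant H (seconds): usable energy normalized
    by the converter's rated power, mirroring H = E_k / S for a machine."""
    return usable_energy(c_f, v_nom, v_min) / s_rated

# Illustrative numbers only: 10 mF link, 700 V nominal with a 10% dip
# allowance, 10 kW rated converter.
H = virtual_inertia_constant(c_f=0.01, v_nom=700.0, v_min=630.0, s_rated=10e3)
```

The sketch also makes the paper's headline trade-off visible: raising `c_f` or `v_nom` grows `H` quadratically, which is exactly the knob the eigenvalue analysis shows narrows the stability margin.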
24 pages, 10729 KB  
Article
DenseDuckMOT: A Real-Time Detection-Tracking Coupled Counting Framework for Complex Avicultural Environments
by Jiaxing Xie, Jiatao Wu, Liye Chen, Yue Cao, Zihao Chen, Meiyi Lu, Yujian Lin, Chunxi Tu, Weixing Wang and Jinshui Lin
Animals 2026, 16(4), 684; https://doi.org/10.3390/ani16040684 - 21 Feb 2026
Abstract
The Liancheng White Duck is a nationally protected breed in China, but its high-density farming environment poses significant challenges for target detection and behavior recognition, particularly due to occlusion, motion blur, and flock aggregation, making practical flock monitoring and counting labor intensive and prone to error in real barns. To address these issues, we propose DenseDuckMOT, an integrated detection-tracking framework for practical farm monitoring that combines the improved DuckNet detector with the AKFTrack tracker, using existing fixed surveillance cameras at minimal additional hardware cost. DuckNet, based on YOLOv11, incorporates BiFPN, GLSA, and ESDH. It achieves high performance with 98.19% precision, 94.79% mAP@0.75, 97.70% F1-score, and 97.72% recall, while maintaining a lightweight design of only 1.90M parameters and a model size of 4485 KB. AKFTrack introduces adaptive Kalman prediction and a two-stage association scheme. It is evaluated on five dense white duck surveillance videos, where it outperforms or ranks second in MOTA, IDF1, and recall compared to DeepSORT, StrongSORT, and ByteTrack, especially in crowded and occluded scenes. Experimental results, ablation studies, and LayerCAM visualizations confirm the complementary advantages of BiFPN, GLSA, and ESDH, as well as the robustness of AKFTrack in handling occlusion and rapid motion. DenseDuckMOT provides accurate, efficient, and stable real-time monitoring in dynamic poultry farms, offering a scalable solution for intelligent farming.
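AKFTrack's adaptive Kalman prediction builds on the standard prediction step used by SORT-style trackers. A minimal sketch of that baseline step under a constant-velocity motion model is shown below; AKFTrack's adaptive tuning of the noise intensity is the paper's contribution and is not reproduced, so `q` is simply fixed here.

```python
import numpy as np

def cv_predict(x, P, dt=1.0, q=1e-2):
    """One Kalman prediction step under a constant-velocity motion model.
    x = [position, velocity] for one axis, P = its 2x2 covariance.
    q (process-noise intensity) is fixed; an adaptive tracker would
    adjust it online from innovation statistics."""
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])                       # state transition
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])              # white-acceleration noise
    return F @ x, F @ P @ F.T + Q

# A target at position 0 moving at 2 px/frame is predicted one frame ahead.
x_pred, P_pred = cv_predict(np.array([0.0, 2.0]), np.eye(2))
```

In a full tracker this predicted state is then matched to new detections (the two-stage association the abstract mentions) before the Kalman update step corrects it.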

14 pages, 3086 KB  
Article
SQ-LoRA: Memory-Efficient Language Model Compression Through Stable-Rank-Guided Quantization for Edge Computing Applications
by Seda Bayat Toksöz and Gültekin Işik
Appl. Sci. 2026, 16(4), 2113; https://doi.org/10.3390/app16042113 - 21 Feb 2026
Abstract
The deployment of transformer-based language models on resource-constrained edge devices presents fundamental challenges in computational efficiency and memory utilization. We introduce SQ-LoRA (Stable-rank Quantized Low-Rank Adaptation), a theoretically grounded compression framework that achieves unprecedented efficiency through the synergistic integration of adaptive low-rank decomposition, hardware-accelerated structured sparsity, and intelligent hybrid quantization. Our primary contribution establishes the first rigorous mathematical connection between the matrix stable rank and optimal LoRA rank selection, formalized in Theorem I, which provides bounded approximation guarantees. SQ-LoRA implements: (1) adaptive rank allocation via stable-rank analysis to automatically determine layer-wise compression ratios; (2) 4:8 structured sparsity patterns, enabling 2× hardware acceleration on modern edge processors; and (3) a three-tier quantization scheme that combines 4-bit NormalFloat storage with selective 3-bit/8-bit precision to preserve outliers. A comprehensive evaluation on four diverse natural language processing (NLP) benchmarks demonstrates that SQ-LoRA achieves a 320 MB memory footprint (96.7% reduction) and a 10 ms inference latency (91.7% improvement), and maintains 82.0% average accuracy (within 0.15% of the full model). Statistical significance testing (p < 0.001) confirms its superiority over state-of-the-art methods. This framework enables the deployment of sophisticated language models on devices with 2 GB of RAM, advancing practical edge-AI applications.
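The stable-rank quantity driving SQ-LoRA's rank allocation has a standard definition: sr(W) = ‖W‖²_F / ‖W‖²₂, which is always at most rank(W). A minimal sketch of that computation follows; the mapping from stable rank to a concrete LoRA rank (the paper's Theorem I) is not reproduced here.

```python
import numpy as np

def stable_rank(w):
    """Stable rank sr(W) = ||W||_F^2 / ||W||_2^2 (Frobenius norm squared
    over spectral norm squared). A numerically robust proxy for rank(W):
    equals the rank for orthogonal-column matrices, and degrades smoothly
    as singular values decay."""
    s = np.linalg.svd(np.asarray(w, dtype=float), compute_uv=False)
    return float((s ** 2).sum() / s[0] ** 2)
```

Layers whose weight matrices have low stable rank tolerate aggressive low-rank compression; allocating LoRA rank proportionally to sr(W) is the intuition the abstract describes.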
(This article belongs to the Section Computing and Artificial Intelligence)
20 pages, 2394 KB  
Article
Multi-Frequency-Scale Distributed Recurrence Plot-Based Fault Diagnosis for PMSM
by Jun Sun, Ziling Nie, Yu Zhou, Pan Sun, Yangwei Zhou, Yihui Xia and Huayu Li
Sensors 2026, 26(4), 1361; https://doi.org/10.3390/s26041361 - 20 Feb 2026
Abstract
Conventional permanent magnet synchronous motor (PMSM) fault diagnosis methods rely on one-dimensional (1-D) time-series signals. These approaches face challenges such as complex signal processing, difficulty in extracting fault features, and limited noise immunity. To address these issues, a novel approach is proposed. Its core process includes wavelet packet decomposition (WPD), distributed recurrence plot (DRP) generation, and image transformation. This approach enables feature representation of the original signal across multiple frequency bands, overcoming the shortcomings of traditional recurrence plots in terms of feature redundancy and long-sequence representation. On this basis, a lightweight multi-frequency-scale fault diagnosis model is developed, consisting of a multi-frequency-scale convolutional neural network (CNN), a convolutional block attention module (CBAM), and a global average pooling (GAP) layer. Experimental results demonstrate that the proposed method achieves high diagnostic accuracy and strong noise immunity. Under identical hardware and dataset conditions, the inference time of the proposed method is only 12.35% of that of a traditional recurrence plot-based CNN and 50.03% of that of an asymmetric recurrence plot-based CNN.
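The recurrence plot that this pipeline builds on turns a 1-D signal into a binary image by thresholding pairwise distances. A minimal sketch of that basic construction follows; the paper's distributed, multi-band variant (WPD sub-bands each feeding a DRP) is its contribution and is not reproduced.

```python
import numpy as np

def recurrence_plot(signal, eps):
    """Binary recurrence matrix of a 1-D signal:
    R[i, j] = 1 iff |x[i] - x[j]| <= eps.
    The resulting 2-D image is what a CNN consumes downstream."""
    x = np.asarray(signal, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])          # pairwise distances
    return (dist <= eps).astype(np.uint8)
```

Because the matrix is symmetric with a unit diagonal, traditional recurrence plots carry redundant structure, which is one of the shortcomings the DRP variant targets.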
(This article belongs to the Special Issue AI-Assisted Condition Monitoring and Fault Diagnosis)

49 pages, 908 KB  
Review
A Review of Resilient IoT Systems: Trends, Challenges, and Future Directions
by Bandar Alotaibi
Appl. Sci. 2026, 16(4), 2079; https://doi.org/10.3390/app16042079 - 20 Feb 2026
Abstract
The Internet of Things (IoT) is increasingly embedded in critical infrastructures across healthcare, energy, transportation, and industrial automation, yet its pervasiveness introduces substantial security and resilience challenges. This paper presents a comprehensive review of recent advances in IoT resilience, focusing on developments reported between 2022 and 2025. A layered taxonomy is proposed to organize resilience strategies across hardware, network, learning, application, and governance layers, addressing adversarial, environmental, and hybrid stressors. The survey systematically classifies and compares more than forty representative studies encompassing deep learning under adversarial attack, generative and ensemble intrusion detection, hardware and protocol-level defenses, federated and distributed learning, and trust and governance-based approaches. A comparative analysis shows that while adversarial training, GAN-based augmentation, and decentralized learning improve robustness, their evidence is often confined to specific datasets or attack scenarios, with limited validation in large-scale deployments. The study highlights challenges in benchmarking adaptivity, cross-layer integration, and explainable resilience, concluding with future directions for creating antifragile IoT systems that can self-heal and adapt to evolving cyber–physical threats.
34 pages, 851 KB  
Review
Frequency-Domain Vision Transformers: Architectures, Applications, and Open Challenges
by Muhammet Fatih Aslan, Busra Aslan and Kadir Sabanci
Appl. Sci. 2026, 16(4), 2024; https://doi.org/10.3390/app16042024 - 18 Feb 2026
Abstract
Vision Transformers (ViTs) have achieved strong performance in computer vision but suffer from limited inductive bias, high data requirements, and reduced sensitivity to high-frequency visual details. To address these limitations, Frequency-Domain ViTs (FD-ViTs) incorporate spectral representations—such as Fourier, wavelet, and discrete cosine transforms—into the Transformer pipeline to improve feature expressiveness and robustness. This survey provides a systematic review of FD-ViT architectures and introduces a unified taxonomy based on spectral transformation type, integration level, and computational characteristics. We summarize empirical findings across image classification, image restoration, and domain-specific applications, including medical imaging and remote sensing, highlighting consistent performance patterns and task-dependent trade-offs. Our analysis shows that frequency-domain integration yields modest, context-dependent gains in large-scale classification, while offering more consistent advantages in frequency-sensitive tasks such as image restoration and noise-robust visual analysis. We further discuss key open challenges, including spectral aliasing, phase information loss, evaluation inconsistency, and deployment efficiency, and outline emerging directions toward dynamic spectral operators, multimodal integration, and hardware-aware designs. To the best of our knowledge, this work constitutes the first systematic survey that consolidates the growing body of research on FD-ViT, providing a structured conceptual and methodological reference for future studies on spectral representations in Transformer-based visual learning.
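One of the simplest frequency-domain substitutes for self-attention in the family this survey covers is FNet-style Fourier token mixing. The sketch below shows only that core operation as an illustration; wavelet- and DCT-based FD-ViT variants follow the same drop-in pattern, and which variant any surveyed architecture actually uses is not implied here.

```python
import numpy as np

def fourier_token_mixing(tokens):
    """FNet-style spectral mixing: a 2-D FFT over the (sequence, hidden)
    axes of a token matrix, keeping only the real part. Parameter-free,
    O(n log n), and globally mixes information across all tokens, which
    is why it serves as a cheap stand-in for self-attention."""
    return np.real(np.fft.fft2(tokens))
```

Discarding the imaginary part is also a concrete instance of the "phase information loss" the survey lists among open challenges.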
(This article belongs to the Special Issue The Age of Transformers: Emerging Trends and Applications)

36 pages, 3628 KB  
Article
FEGW-YOLO: A Feature-Complexity-Guided Lightweight Framework for Real-Time Multi-Crop Detection with Advanced Sensing Integration on Edge Devices
by Yaojiang Liu, Hongjun Tian, Yijie Yin, Yuhan Zhou, Wei Li, Yang Xiong, Yichen Wang, Zinan Nie, Yang Yang, Dongxiao Xie and Shijie Huang
Sensors 2026, 26(4), 1313; https://doi.org/10.3390/s26041313 - 18 Feb 2026
Abstract
Real-time object detection on resource-constrained edge devices remains a critical challenge in precision agriculture and autonomous systems, particularly when integrating advanced multi-modal sensors (RGB-D, thermal, hyperspectral). This paper introduces FEGW-YOLO, a lightweight detection framework explicitly designed to bridge the efficiency-accuracy gap for fine-grained visual perception on edge hardware while maintaining compatibility with multiple sensor modalities. The core innovation is a Feature Complexity Descriptor (FCD) metric that enables adaptive, layer-wise compression based on the information-bearing capacity of network features. This compression-guided approach is coupled with (1) Feature Engineering-driven Ghost Convolution (FEG-Conv) for parameter reduction, (2) Efficient Multi-Scale Attention (EMA) for compensating compression-induced information loss, and (3) Wise-IoU loss for improved localization in dense, occluded scenes. The framework follows a principled “Compress, Compensate, and Refine” philosophy that treats compression and compensation as co-designed objectives rather than isolated knobs. Extensive experiments on a custom strawberry dataset (11,752 annotated instances) and cross-crop validation on apples, tomatoes, and grapes demonstrate that FEGW-YOLO achieves 95.1% mAP@0.5 while reducing model parameters by 54.7% and computational cost (GFLOPs) by 53.5% compared to a strong YOLO-Agri baseline. Real-time inference on NVIDIA Jetson Xavier achieves 38 FPS at 12.3 W, enabling 40+ hours of continuous operation on typical agricultural robotic platforms. Multi-modal fusion experiments with RGB-D sensors demonstrate that the lightweight architecture leaves sufficient computational headroom for parallel processing of depth and visual data, a capability essential for practical advanced sensing systems. Field deployment in commercial strawberry greenhouses validates an 87.3% harvesting success rate with a 2.1% fruit damage rate, demonstrating feasibility for autonomous systems. The proposed framework advances the state-of-the-art in efficient agricultural sensing by introducing a principled metric-guided compression strategy, comprehensive multi-modal sensor integration, and empirical validation across diverse crop types and real-world deployment scenarios. This work bridges the gap between laboratory research and practical edge deployment of advanced sensing systems, with direct relevance to autonomous harvesting, precision monitoring, and other resource-constrained agricultural applications.
30 pages, 6791 KB  
Review
Fault-Tolerance Strategies in Multilevel Converters: An Overview
by Juan Angel González-Flores, Rodolfo Amalio Vargas-Méndez, Adolfo R. Lopez, Gloria Lilia Osorio-Gordillo, Carlos Aguilar-Castillo, Ma. del Carmen Toledo-Pérez and Omar Rodríguez-Benítez
Processes 2026, 14(4), 688; https://doi.org/10.3390/pr14040688 - 18 Feb 2026
Abstract
This paper presents an overview of fault-tolerance strategies in multilevel converters, with emphasis on fault diagnosis as a fundamental stage. Classical multilevel converter topologies and their main application areas, such as motor drives, renewable energy systems, and smart grids, are first introduced, along with the most common faults affecting power semiconductor devices. Fault diagnosis techniques reported in the literature are then reviewed and classified into model-based, signal-based, hardware-based, and hybrid approaches. The operating principles, measured variables, and implementation requirements of each category are analyzed, with particular attention to the fault detection times. A comparative analysis is provided, highlighting the fastest diagnostic strategies and their application to different multilevel converter topologies. This review consolidates recent advances and identifies current trends and challenges, providing a useful reference for the development of faster and more reliable fault-tolerant solutions in multilevel power converters.

40 pages, 8354 KB  
Article
System-Level Optimization of AUV Swarm Control and Perception: An Energy-Aware Federated Meta-Transfer Learning Framework with Digital Twin Validation
by Zinan Nie, Hongjun Tian, Yijie Yin, Yuhan Zhou, Wei Li, Yang Xiong, Yichen Wang, Zitong Zhang, Yang Yang, Dongxiao Xie, Manlin Wang and Shijie Huang
J. Mar. Sci. Eng. 2026, 14(4), 384; https://doi.org/10.3390/jmse14040384 - 18 Feb 2026
Abstract
Deep-sea exploration increasingly relies on Autonomous Underwater Vehicles (AUVs) to enable persistent, wide-area surveying in harsh and uncertain environments. In practice, however, deployments are constrained by tight energy budgets and bandwidth-limited, intermittent acoustic links, which complicate mission-level coordination. Moreover, many existing systems treat perception and control as loosely coupled modules, often resulting in redundant sensing, inefficient communication, and degraded overall performance—particularly under heterogeneous sensing modalities and shifting geological conditions. To address these challenges, we propose a hierarchical Federated Meta-Transfer Learning (FMTL) framework that tightly integrates collaborative perception with adaptive control for swarm optimization. The framework operates at three levels: (1) Representation Learning aligns heterogeneous sensors in a shared latent space via a physics-informed contrastive objective, substantially reducing communication overhead; (2) Meta-Learning Adaptation enables rapid transfer and convergence in new environments with minimal data exchange; and (3) Energy-Aware Control realizes closed-loop exploration by coupling Federated Explainable AI (FXAI) with decentralized multi-agent reinforcement learning (MARL) for path planning under energy constraints. Validated in high-fidelity hardware-in-the-loop simulations and a digital-twin environment, FMTL outperforms state-of-the-art baselines, achieving an AUC of 0.94 for target identification. Furthermore, an energy–intelligence Pareto analysis demonstrates a 4.5× improvement in information gain per Joule. Overall, this work provides a physically consistent and communication-efficient blueprint for the optimization and control of next-generation intelligent marine swarms.
(This article belongs to the Special Issue System Optimization and Control of Unmanned Marine Vehicles)

21 pages, 1805 KB  
Article
Introducing LEAF: LLM Edge Assessment Framework for Generative AI on the Edge
by Mustafa Abdulkadhim and Sandor R. Repas
Mach. Learn. Knowl. Extr. 2026, 8(2), 48; https://doi.org/10.3390/make8020048 - 18 Feb 2026
Abstract
The transition of Large Language Models (LLMs) from centralized clouds to edge environments is critical for addressing privacy concerns, latency bottlenecks, and operational costs. However, existing edge benchmarking frameworks remain tailored to discriminative Deep Learning tasks (e.g., object detection), failing to capture the multidimensional challenges of generative AI, specifically the trade-offs between token generation speed, semantic accuracy, and hardware sustainability. To address this gap, we introduce LEAF (LLM Edge Assessment Framework), a novel evaluation methodology that integrates Circular Economy principles directly into performance metrics. LEAF assesses edge deployments across five synergistic pillars: Circular Economy Score, Energy Efficiency (Joules/Token), Performance Speed (Tokens/Second), semantic accuracy (BERTScore), and End-to-End Latency. We validate LEAF through an extensive experimental analysis of five distinct hardware classes, ranging from embedded IoT devices (Raspberry Pi 4 and 5, NVIDIA Jetson Nano) to professional edge servers (NVIDIA T400) and repurposed legacy workstations (NVIDIA GTX 1050 Ti). Utilizing 4-bit quantized models via the Ollama runtime, our results reveal a counterintuitive insight: repurposed consumer hardware significantly outperforms modern purpose-built edge SoCs. The legacy GTX 1050 Ti achieved a 20× speedup over the Raspberry Pi 4 and maintained superior energy-per-task efficiency compared to low-power ARM architectures by minimizing active runtime. These findings challenge the prevailing narrative that newer silicon is essential for Edge AI, demonstrating that sustainable, high-performance inference can be achieved by extending the lifecycle of existing hardware. LEAF thus provides a blueprint for a “Green Edge” ecosystem that balances computational capability with environmental responsibility.
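The Joules/Token and Tokens/Second pillars named above reduce to simple arithmetic; the sketch below is one plausible reading of those two metrics (LEAF's exact measurement protocol, sampling window, and the remaining three pillars are not specified here).

```python
def joules_per_token(avg_power_w, active_runtime_s, tokens_generated):
    """Energy Efficiency pillar (assumed form): total energy drawn during
    active runtime, divided by the number of tokens produced."""
    return (avg_power_w * active_runtime_s) / tokens_generated

def tokens_per_second(tokens_generated, active_runtime_s):
    """Performance Speed pillar: raw generation throughput."""
    return tokens_generated / active_runtime_s
```

The per-task framing also explains the paper's counterintuitive result: a 30 W GPU that finishes a 1000-token task in 20 s spends less energy than a 4 W board that needs 400 s for the same task, because energy is power integrated over active runtime.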
(This article belongs to the Section Data)

19 pages, 2421 KB  
Article
Modeling of a Hardware and Software System for Non-Invasive Monitoring of the Feeding Behavior of Farm Animals
by Oleg Ivashchuk, Zhanat Kenzhebayeva, Alexei Zhigalov, Moldir Allaniyazova, Gulnara Kaziyeva, Kaiyrbek Makulov, Vyacheslav Fedorov and Olga Ivashchuk
Technologies 2026, 14(2), 127; https://doi.org/10.3390/technologies14020127 - 18 Feb 2026
Abstract
This paper presents the design of a hardware–software system for non-invasive automated monitoring of feeding behavior in livestock with biometric identification of individual animals. Neural network models for animal identification from images and individual recognition have been developed and trained. A solution is proposed to address the challenge of acquiring a sufficient number of personalized animal images for training the identification neural network. A transfer learning approach is introduced for pig identification, where the network is first trained on a large-scale dataset of more than three million human face images obtained from open sources and subsequently fine-tuned by training the upper layers on a significantly smaller dataset consisting of 5610 pig face images. Experimental results demonstrated the high effectiveness of the system: the Top-1 identification accuracy reached 95.1%, while the ROC AUC in open-set recognition tasks achieved 0.95. The processing time per frame on an NVIDIA RTX 4090 GPU was 1.4 ms (724 FPS).
(This article belongs to the Special Issue IoT-Enabling Technologies and Applications—2nd Edition)

30 pages, 18301 KB  
Article
Optimizing Computer Vision for Edge Deployment in Industry 4.0: A Framework and Experimental Evaluation
by Eman Azab, Mohamed Ehab, Lamia Shihata and Maggie Mashaly
Technologies 2026, 14(2), 126; https://doi.org/10.3390/technologies14020126 - 17 Feb 2026
Abstract
Integrating high-performance computer vision (CV) into Industry 4.0 environments remains a challenge due to the computational disparity between state-of-the-art (SOTA) models and resource-constrained edge hardware. This study proposes a hardware-aware optimization framework designed to bridge this gap, focusing on real-time object detection for high-speed, omnidirectional conveyor systems. Unlike conventional benchmarking, the proposed framework employs a multi-stage optimization pipeline—integrating backbone refinement, hyperparameter tuning, and quantization—to transition diverse architectures from baseline configurations (Mbase) to hardware-optimized variants (Mopt). The framework's efficacy is validated using a custom-built standalone experimental platform detecting package features, brands, and disruptions on an omnidirectional-wheeled conveyor. A comprehensive comparative analysis is conducted across a heterogeneous edge ecosystem, including the NVIDIA Jetson Nano (GPU), Raspberry Pi 4 (CPU), and Google Coral (TPU). Our findings demonstrate that through systematic tuning, the YOLOv10n variant emerged as the superior architecture, achieving a precision of 98.1% and an mAP50:95 of 81.22%. Post-deployment characterization reveals that the optimized YOLOv10n model on the NVIDIA Jetson Nano achieved a peak inference speed of 25 frames per second (FPS), successfully striking the “Pareto-optimal” balance between predictive accuracy and real-time processing. The primary contributions of this work include a reproducible optimization methodology, a comparative performance map across three distinct hardware backends, and the release of a specialized industrial conveyor dataset.
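The quantization stage of such a pipeline typically maps float weights or activations to 8-bit integers. The sketch below shows the core arithmetic of generic symmetric post-training int8 quantization as an illustration only; the paper's abstract does not specify its exact scheme, so this is not a reproduction of its method.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor int8 quantization: q = clip(round(x / scale)),
    with scale chosen so the largest magnitude maps to 127."""
    scale = float(np.abs(x).max()) / 127.0
    if scale == 0.0:
        scale = 1.0                                  # all-zero tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    """Recover an approximation of the original values."""
    return q.astype(np.float32) * scale
```

The round-trip error is bounded by half a quantization step (scale / 2), which is the accuracy cost traded for roughly 4× smaller tensors and integer-only edge kernels (e.g. on the Coral TPU, which requires int8 models).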
30 pages, 58691 KB  
Article
MMPFNet: A Novel Lightweight Road Target Detection Method of FMCW Radar Based on Hypergraph Mechanism and Attention Enhancement
by Dongdong Huang, Dawei Xu and Yongjie Zhai
Sensors 2026, 26(4), 1291; https://doi.org/10.3390/s26041291 - 16 Feb 2026
Abstract
Road target detection is a crucial aspect of current research in automotive advanced driver assistance systems and intelligent transportation systems, where accuracy, speed, and lightweight design are key considerations. Compared to various sensors employed in driving assistance systems, millimeter-wave radar offers advantages such [...] Read more.
Road target detection is a crucial aspect of current research in automotive advanced driver assistance systems and intelligent transportation systems, where accuracy, speed, and lightweight design are key considerations. Compared to various sensors employed in driving assistance systems, millimeter-wave radar offers advantages such as all-weather operation, low hardware cost, strong penetration capability, and the ability to extract rich spatial information about targets. This paper tackles the challenges posed by the characteristics of Range-Angle map data from 77 GHz Frequency-Modulated Continuous Wave radar—namely, non-visible light imagery, abstract representation, rich fine details, and overlapping features. To this end, this paper proposes MMPFNet, a lightweight model based on the hypergraph mechanism with attention enhancement, as an extension of YOLOv13. First, an M-DSC3k2 module is proposed based on the hypergraph mechanism to enhance attention toward small targets. Second, a detection head with a double-bottleneck inverted MBConv-block structure is designed to improve the model’s accuracy and generalization capability. Third, a lightweight PPLConv module is customized to restructure the backbone network, reducing model size and computation at a slight cost in accuracy. Considering the differences from traditional visible light datasets, the Focus Expansion-IoU loss function is introduced into the model to focus attention on different regression samples. Compared to the baseline YOLOv13n model, MMPFNet achieves significant improvements in detecting common road targets such as pedestrians, bicycles, cars, and trucks on the Frequency-Modulated Continuous Wave radar Range-Angle dataset: mAP50-95 increases by 16%, precision improves by 6%, and recall rises by 8.7%. MMPFNet is also evaluated on other non-visible light datasets such as CRUW-ONRD and soundprint datasets. Compared to commonly used detection models like FCOS and RetinaNet, MMPFNet achieves significant performance gains, attaining state-of-the-art results. Full article
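The Focus Expansion-IoU loss mentioned above belongs to the IoU-family of box-regression losses. Its exact formulation is specific to the paper, but all such losses build on plain axis-aligned IoU, sketched here for boxes in (x1, y1, x2, y2) form.

```python
# Plain axis-aligned IoU between two boxes in (x1, y1, x2, y2) form --
# the base quantity that IoU-family regression losses (GIoU, CIoU, and
# variants like the paper's Focus Expansion-IoU) extend.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height clamp to zero when disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.1429
```

A regression loss is then typically formed as 1 − IoU plus penalty terms that reweight hard or easy samples, which is the role the abstract attributes to Focus Expansion-IoU.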
18 pages, 2759 KB  
Article
Research on Lightweight Rose Disease Detection Based on Transferable Feature Representation
by Li Liu, Tao Yin, Yuyan Bai, Bingjie Yang and Jianping Yang
Plants 2026, 15(4), 623; https://doi.org/10.3390/plants15040623 - 16 Feb 2026
Viewed by 190
Abstract
Rose leaf diseases severely reduce yield and product quality, and traditional disease monitoring relies on manual visual inspection by experts, which is inefficient for large-scale cultivation. However, deploying accurate and lightweight detectors in field environments remains challenging due to two main obstacles. First, [...] Read more.
Rose leaf diseases severely reduce yield and product quality, and traditional disease monitoring relies on manual visual inspection by experts, which is inefficient for large-scale cultivation. However, deploying accurate and lightweight detectors in field environments remains challenging due to two main obstacles. First, models trained under controlled laboratory conditions suffer performance degradation due to domain shift when deployed in complex field environments. Second, the computational capacity of hardware deployable in the field is often limited. To address these problems, this study proposes a practical knowledge distillation approach based on transferable feature representations from a pre-trained teacher model, rather than on a complex distillation architecture. A high-capacity YOLOv12-L teacher, pre-trained on laboratory images, guided the training of a compact YOLOv12-N student using field images. The distilled YOLOv12-N student model achieved an mAP@50 of 81.1% on the field test set, representing a 3.5% improvement over the baseline YOLOv12-N model, while maintaining a highly efficient architecture of only 2.56 million parameters and 6.3 GFLOPs. Several ablation studies confirm the core contribution of this work, namely that the performance gains in lightweight detection stem primarily from the transfer of the teacher model’s feature representations, rather than from modifications to the distillation algorithm or the student model’s architecture, thus clarifying the importance of high-quality feature transfer in cross-domain agricultural vision tasks. This approach provides a generalizable and efficient solution for real-time rose leaf disease detection in precision agriculture. Full article
(This article belongs to the Section Plant Modeling)
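The feature-transfer idea at the core of this abstract can be sketched as a feature-level distillation loss: the student’s feature map is projected to the teacher’s channel width and penalized by mean-squared error against the frozen teacher features. The shapes and the 1×1 projection below are illustrative assumptions, not the paper’s exact recipe.

```python
import numpy as np

# Minimal sketch of feature-level knowledge distillation: project the
# student feature map to the teacher's channel width, then penalise the
# mismatch with MSE. Shapes and the projection are illustrative.

def feature_distill_loss(student_feat, teacher_feat, proj):
    """student_feat: (C_s, H, W); teacher_feat: (C_t, H, W); proj: (C_t, C_s)."""
    # A 1x1 convolution is a channel-wise linear map at each location.
    projected = np.einsum("ts,shw->thw", proj, student_feat)
    return float(np.mean((projected - teacher_feat) ** 2))

rng = np.random.default_rng(0)
student = rng.standard_normal((16, 8, 8))          # narrow student features
proj = rng.standard_normal((64, 16)) * 0.1         # 1x1 projection weights
teacher = np.einsum("ts,shw->thw", proj, student)  # perfectly aligned case

print(feature_distill_loss(student, teacher, proj))  # 0.0 when aligned
```

In training, this term would be added to the ordinary detection loss so the student inherits the teacher’s representations while its own head learns the task.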