Search Results (3,092)

Search Parameters:
Keywords = efficient hardware

30 pages, 4741 KiB  
Article
TriViT-Lite: A Compact Vision Transformer–MobileNet Model with Texture-Aware Attention for Real-Time Facial Emotion Recognition in Healthcare
by Waqar Riaz, Jiancheng (Charles) Ji and Asif Ullah
Electronics 2025, 14(16), 3256; https://doi.org/10.3390/electronics14163256 - 16 Aug 2025
Abstract
Facial emotion recognition has become increasingly important in healthcare, where understanding delicate cues like pain, discomfort, or unconsciousness can support more timely and responsive care. Yet, recognizing facial expressions in real-world settings remains challenging due to varying lighting, facial occlusions, and hardware limitations in clinical environments. To address this, we propose TriViT-Lite, a lightweight yet powerful model that blends three complementary components: MobileNet, for capturing fine-grained local features efficiently; Vision Transformers (ViT), for modeling global facial patterns; and handcrafted texture descriptors, such as Local Binary Patterns (LBP) and Histograms of Oriented Gradients (HOG), for added robustness. These multi-scale features are brought together through a texture-aware cross-attention fusion mechanism that helps the model focus on the most relevant facial regions dynamically. TriViT-Lite is evaluated on both benchmark datasets (FER2013, AffectNet) and a custom healthcare-oriented dataset covering seven critical emotional states, including pain and unconsciousness. It achieves a competitive accuracy of 91.8% on FER2013 and of 87.5% on the custom dataset while maintaining real-time performance (~15 FPS) on resource-constrained edge devices. Our results show that TriViT-Lite offers a practical and accurate solution for real-time emotion recognition, particularly in healthcare settings. It strikes a balance between performance, interpretability, and efficiency, making it a strong candidate for machine-learning-driven pattern recognition in patient-monitoring applications. Full article
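
The texture-aware cross-attention fusion is the core of this design. As a rough sketch of how handcrafted texture descriptors could steer attention over the deep features, the following PyTorch module (all dimensions, projections, and the seven-class head are hypothetical, not the authors' implementation) uses LBP/HOG-derived tokens as queries against concatenated MobileNet and ViT tokens:

```python
import torch
import torch.nn as nn

class TextureAwareCrossAttention(nn.Module):
    """Fuse CNN, ViT, and handcrafted texture features via cross-attention.

    Hypothetical sketch: sizes and projections are illustrative, not the
    TriViT-Lite authors' implementation.
    """
    def __init__(self, cnn_dim=96, vit_dim=192, tex_dim=64, d=128, heads=4):
        super().__init__()
        self.q_proj = nn.Linear(tex_dim, d)   # texture tokens ask "where to look"
        self.k_proj = nn.Linear(cnn_dim + vit_dim, d)
        self.v_proj = nn.Linear(cnn_dim + vit_dim, d)
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.head = nn.Linear(d, 7)           # seven emotional states

    def forward(self, cnn_tokens, vit_tokens, tex_tokens):
        # cnn_tokens: (B, N, cnn_dim), vit_tokens: (B, N, vit_dim), tex_tokens: (B, N, tex_dim)
        kv = torch.cat([cnn_tokens, vit_tokens], dim=-1)
        q, k, v = self.q_proj(tex_tokens), self.k_proj(kv), self.v_proj(kv)
        fused, _ = self.attn(q, k, v)          # texture-guided attention over deep features
        return self.head(fused.mean(dim=1))   # pool tokens, classify

logits = TextureAwareCrossAttention()(
    torch.randn(2, 49, 96), torch.randn(2, 49, 192), torch.randn(2, 49, 64))
print(logits.shape)  # torch.Size([2, 7])
```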

37 pages, 2287 KiB  
Article
Parameterised Quantum SVM with Data-Driven Entanglement for Zero-Day Exploit Detection
by Steven Jabulani Nhlapo, Elodie Ngoie Mutombo and Mike Nkongolo Wa Nkongolo
Computers 2025, 14(8), 331; https://doi.org/10.3390/computers14080331 - 15 Aug 2025
Abstract
Zero-day attacks pose a persistent threat to computing infrastructure by exploiting previously unknown software vulnerabilities that evade traditional signature-based network intrusion detection systems (NIDSs). To address this limitation, machine learning (ML) techniques offer a promising approach for enhancing anomaly detection in network traffic. This study evaluates several ML models on a labeled network traffic dataset, with a focus on zero-day attack detection. Ensemble learning methods, particularly eXtreme gradient boosting (XGBoost), achieved perfect classification, identifying all 6231 zero-day instances without false positives and maintaining efficient training and prediction times. While classical support vector machines (SVMs) performed modestly at 64% accuracy, their performance improved to 98% with the use of the borderline synthetic minority oversampling technique (SMOTE) and SMOTE + edited nearest neighbours (SMOTEENN). To explore quantum-enhanced alternatives, a quantum SVM (QSVM) is implemented using three-qubit and four-qubit quantum circuits simulated on the aer_simulator_statevector. The QSVM achieved high accuracy (99.89%) and strong F1-scores (98.95%), indicating that nonlinear quantum feature maps (QFMs) can increase sensitivity to zero-day exploit patterns. Unlike prior work that applies standard quantum kernels, this study introduces a parameterised quantum feature encoding scheme, where each classical feature is mapped using a nonlinear function tuned by a set of learnable parameters. Additionally, a sparse entanglement topology is derived from mutual information between features, ensuring a compact and data-adaptive quantum circuit that aligns with the resource constraints of noisy intermediate-scale quantum (NISQ) devices. Our contribution lies in formalising a quantum circuit design that enables scalable, expressive, and generalisable quantum architectures tailored for zero-day attack detection. This extends beyond conventional usage of QSVMs by offering a principled approach to quantum circuit construction for cybersecurity. While these findings are obtained via noiseless simulation, they provide a theoretical proof of concept for the viability of quantum ML (QML) in network security. Future work should target real quantum hardware execution and adaptive sampling techniques to assess robustness under decoherence, gate errors, and dynamic threat environments. Full article
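
The data-driven entanglement idea can be illustrated outside any quantum SDK. The sketch below is a hypothetical reconstruction (the paper's estimator and threshold may differ): it discretizes features, estimates pairwise mutual information with scikit-learn, and keeps only the strongest pairs as the entangling connections of a sparse circuit:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def sparse_entanglement_pairs(X, n_qubits, n_bins=8, top_k=None):
    """Pick qubit pairs to entangle from feature-feature mutual information.

    Illustrative reconstruction of the idea, not the paper's exact recipe:
    features are discretized, pairwise MI is estimated by counting, and only
    the strongest pairs (default: n_qubits - 1, a tree-like sparse topology)
    receive entangling gates.
    """
    bins = [np.digitize(X[:, j], np.histogram_bin_edges(X[:, j], n_bins))
            for j in range(n_qubits)]
    scores = {}
    for i in range(n_qubits):
        for j in range(i + 1, n_qubits):
            scores[(i, j)] = mutual_info_score(bins[i], bins[j])
    top_k = top_k or (n_qubits - 1)          # sparse: linear in qubit count
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[:, 1] += X[:, 0]                            # the correlated pair should be selected
print(sparse_entanglement_pairs(X, n_qubits=4))
```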

19 pages, 7468 KiB  
Article
A Comparative Study of Hybrid Machine-Learning vs. Deep-Learning Approaches for Varroa Mite Detection and Counting
by Amira Ghezal and Andreas König
Sensors 2025, 25(16), 5075; https://doi.org/10.3390/s25165075 - 15 Aug 2025
Abstract
This study presents a comparative evaluation of traditional machine-learning (ML) and deep-learning (DL) approaches for detecting and counting Varroa destructor mites in hyperspectral images. As Varroa infestations pose a serious threat to honeybee health, accurate and efficient detection methods are essential. The ML pipeline—based on Principal Component Analysis (PCA), k-Nearest Neighbors (kNN), and Support Vector Machine (SVM)—was previously published and achieved high performance (precision = 0.9983, recall = 0.9947), with training and inference completed in seconds on standard CPU hardware. In contrast, the DL approach, employing Faster R-CNN with ResNet-50 and ResNet-101 backbones, was fine-tuned on the same manually annotated images. Despite requiring GPU acceleration and longer training times, and presenting reproducibility challenges, the deep-learning models achieved precision of 0.966 and 0.971, recall of 0.757 and 0.829, and F1-scores of 0.848 and 0.894 for ResNet-50 and ResNet-101, respectively. Qualitative results further demonstrate the robustness of the ML method under limited-data conditions. These findings highlight the differences between ML and DL approaches in resource-constrained scenarios and offer practical guidance for selecting suitable detection strategies. Full article
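
For reference, the classical pipeline in question is only a few lines of scikit-learn. This sketch shows the PCA → kNN/SVM structure; the hyperparameters are generic placeholders rather than the published, tuned values:

```python
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical shapes: X holds per-pixel (or per-patch) hyperspectral vectors,
# y marks mite vs. background. All hyperparameters are placeholders.
def build_pipelines(n_components=10):
    pca_knn = make_pipeline(StandardScaler(), PCA(n_components),
                            KNeighborsClassifier(n_neighbors=5))
    pca_svm = make_pipeline(StandardScaler(), PCA(n_components),
                            SVC(kernel="rbf", C=1.0))
    return pca_knn, pca_svm

# Usage with real data: pipe.fit(X_train, y_train); pipe.predict(X_test)
```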

18 pages, 1981 KiB  
Article
Enrichment of the HEPscore Benchmark by Energy Consumption Assessment
by Taras V. Panchenko and Nikita D. Piatygorskiy
Technologies 2025, 13(8), 362; https://doi.org/10.3390/technologies13080362 - 15 Aug 2025
Abstract
The HEPscore benchmark, widely used for evaluating computational performance in high-energy physics, has been identified as requiring energy consumption metrics to address the increasing importance of energy efficiency in large-scale computing infrastructures. This study introduces an energy measurement extension for HEPscore, designed to operate across diverse hardware platforms without requiring administrative privileges or physical modifications. The extension utilizes the Running Average Power Limit (RAPL) interface available in modern processors and dynamically selects the most suitable measurement method based on system capabilities. When RAPL access is unavailable, the system automatically switches to alternative measurement approaches. To validate the accuracy of the software-based measurements, external hardware monitoring devices were used to collect reference data directly from the power supply circuit. The obtained results demonstrate a significant correlation across multiple test platforms running standard HEP workloads. The developed extension integrates energy consumption data into standard HEPscore reports, enabling the calculation of energy efficiency metrics such as HEPscore/Watt. This implementation meets the requirements of the HEPiX Benchmarking Working Group, providing a reliable and portable solution for quantifying energy efficiency alongside computational performance. The proposed method supports informed decision making in resource planning and hardware acquisition for HEP computing environments. Full article
(This article belongs to the Section Information and Communication Technologies)
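
On Linux, the RAPL counters such an extension relies on are exposed through the powercap sysfs interface. A minimal sketch (package-0 domain only; a production tool must also handle multiple sockets, repeated counter wrap-around, and the fallback paths the paper describes) might look like this:

```python
import time
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package-0 domain; path varies by system

def read_uj():
    return int((RAPL / "energy_uj").read_text())

def measure(workload):
    """Estimate a workload's energy via the RAPL powercap sysfs counter.

    Sketch only: single domain, single wrap handled via max_energy_range_uj;
    access permissions on the sysfs files differ across distributions.
    """
    wrap = int((RAPL / "max_energy_range_uj").read_text())
    t0, e0 = time.time(), read_uj()
    workload()
    t1, e1 = time.time(), read_uj()
    joules = ((e1 - e0) % wrap) / 1e6        # microjoules -> joules, wrap-safe
    return joules, joules / (t1 - t0)        # total energy, average watts

# A score-per-watt metric could then be: hepscore / measure(run_benchmark)[1]
```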

26 pages, 663 KiB  
Article
Multi-Scale Temporal Fusion Network for Real-Time Multimodal Emotion Recognition in IoT Environments
by Sungwook Yoon and Byungmun Kim
Sensors 2025, 25(16), 5066; https://doi.org/10.3390/s25165066 - 14 Aug 2025
Abstract
This paper introduces EmotionTFN (Emotion-Multi-Scale Temporal Fusion Network), a novel hierarchical temporal fusion architecture that addresses key challenges in IoT emotion recognition by processing diverse sensor data while maintaining accuracy across multiple temporal scales. The architecture integrates physiological signals (EEG, PPG, and GSR), visual, and audio data using hierarchical temporal attention across short-term (0.5–2 s), medium-term (2–10 s), and long-term (10–60 s) windows. Edge computing optimizations, including model compression, quantization, and adaptive sampling, enable deployment on resource-constrained devices. Extensive experiments on MELD, DEAP, and G-REx datasets demonstrate 94.2% accuracy on discrete emotion classification and 0.087 mean absolute error on dimensional prediction, outperforming the best baseline (87.4%). The system maintains sub-200 ms latency on IoT hardware while achieving a 40% improvement in energy efficiency. Real-world deployment validation over four weeks achieved 97.2% uptime and user satisfaction scores of 4.1/5.0 while ensuring privacy through local processing. Full article
(This article belongs to the Section Internet of Things)
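
To make the multi-scale idea concrete, here is a hypothetical PyTorch sketch: the fused sensor stream is summarized at three window lengths (frame counts standing in for the 0.5–2 s, 2–10 s, and 10–60 s scales) and a learned query attends over the per-scale summaries. All sizes are illustrative, not the EmotionTFN architecture itself:

```python
import torch
import torch.nn as nn

class MultiScaleTemporalFusion(nn.Module):
    """Attend over short/medium/long temporal summaries of a feature stream."""
    def __init__(self, d=64, windows=(8, 40, 240)):   # frames per scale (assumed)
        super().__init__()
        self.windows = windows
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.query = nn.Parameter(torch.randn(1, 1, d))

    def forward(self, x):                      # x: (B, T, d) fused sensor features
        summaries = []
        for w in self.windows:
            w = min(w, x.size(1))
            summaries.append(x[:, -w:].mean(dim=1, keepdim=True))  # one token per scale
        scales = torch.cat(summaries, dim=1)   # (B, 3, d)
        q = self.query.expand(x.size(0), -1, -1)
        fused, weights = self.attn(q, scales, scales)
        return fused.squeeze(1), weights       # an emotion head would consume `fused`

out, w = MultiScaleTemporalFusion()(torch.randn(2, 300, 64))
print(out.shape, w.shape)  # torch.Size([2, 64]) torch.Size([2, 1, 3])
```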

28 pages, 14601 KiB  
Article
Balancing Accuracy and Computational Efficiency: A Faster R-CNN with Foreground-Background Segmentation-Based Spatial Attention Mechanism for Wild Plant Recognition
by Zexuan Cui, Zhibo Chen and Xiaohui Cui
Plants 2025, 14(16), 2533; https://doi.org/10.3390/plants14162533 - 14 Aug 2025
Abstract
Computer vision recognition technology, due to its non-invasive and convenient nature, can effectively avoid damage to fragile wild plants during recognition. However, balancing model complexity, recognition accuracy, and data processing difficulty on resource-constrained hardware is a critical issue that needs to be addressed. To tackle these challenges, we propose an improved lightweight Faster R-CNN architecture named ULS-FRCN. This architecture includes three key improvements: a Light Bottleneck module based on depthwise separable convolution to reduce model complexity; a Split SAM lightweight spatial attention mechanism to improve recognition accuracy without increasing model complexity; and unsharp masking preprocessing to enhance model performance while reducing data processing difficulty and training costs. We validated the effectiveness of ULS-FRCN using five representative wild plants from the PlantCLEF 2015 dataset. Ablation experiments and multi-dataset generalization tests show that ULS-FRCN significantly outperforms the baseline model in terms of mAP, mean F1 score, and mean recall, with improvements of 12.77%, 0.01, and 9.07%, respectively. Compared to the original Faster R-CNN, our lightweight design and attention mechanism reduce training parameters, improve inference speed, and enhance computational efficiency. This approach is suitable for deployment on resource-constrained forestry devices, enabling efficient plant identification and management without the need for high-performance servers. Full article
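
Of the three improvements, the unsharp-masking preprocessing step is easy to reproduce. A generic OpenCV version (default kernel, sigma, and amount; the paper's tuned values may differ) is:

```python
import cv2

def unsharp_mask(img, ksize=(5, 5), sigma=1.0, amount=1.5):
    """Classic unsharp masking: sharpened = img + amount * (img - blurred).

    Generic defaults, not the values tuned in the ULS-FRCN paper.
    """
    blurred = cv2.GaussianBlur(img, ksize, sigma)
    # addWeighted computes img*(1+amount) + blurred*(-amount) + 0
    return cv2.addWeighted(img, 1.0 + amount, blurred, -amount, 0)

# Usage: img = cv2.imread("plant.jpg"); cv2.imwrite("plant_sharp.jpg", unsharp_mask(img))
```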

15 pages, 1496 KiB  
Article
Simultaneous Reductions in NOx Emissions, Combustion Instability, and Efficiency Loss in a Lean-Burn CHP Engine via Hydrogen-Enriched Natural Gas
by Johannes Fichtner, Jan Ninow and Joerg Kapischke
Energies 2025, 18(16), 4339; https://doi.org/10.3390/en18164339 - 14 Aug 2025
Abstract
This study demonstrates that hydrogen enrichment in lean-burn spark-ignition engines can simultaneously improve three key performance metrics (thermal efficiency, combustion stability, and nitrogen oxide emissions) without requiring modifications to the engine hardware or ignition timing. This finding offers a novel control approach to a well-documented trade-off in existing research, where typically only two of these factors are improved at the expense of the third. Unlike previous studies, the present work achieves simultaneous improvement of all three metrics without hardware modification or ignition timing adjustment, relying solely on the optimization of the air–fuel equivalence ratio λ. Experiments were conducted on a six-cylinder engine for combined heat and power application, fueled with hydrogen–natural gas blends containing up to 30% hydrogen by volume. By optimizing only the air–fuel equivalence ratio, it was possible to extend the lean-burn limit from λ = 1.6 to λ > 1.9, reduce nitrogen oxide emissions by up to 70%, enhance thermal efficiency by up to 2.2 percentage points, and significantly improve combustion stability, reducing cycle-by-cycle variations from 2.1% to 0.7%. A defined λ window was identified in which all three key performance indicators simultaneously meet or exceed the natural gas baseline. Within this window, balanced improvements in nitrogen oxide emissions, efficiency, and stability are achievable, although the individual maxima occur at different operating points. Cylinder pressure analysis confirmed that combustion dynamics can be realigned with original equipment manufacturer characteristics via mixture leaning alone, mitigating hydrogen-induced pressure increases to just 11% above the natural gas baseline. These results position hydrogen as a performance booster for natural gas engines in stationary applications, enabling cleaner, more efficient, and smoother operation without added system complexity. The key result is the identification of a λ window that enables simultaneous optimization of nitrogen oxide emissions, efficiency, and combustion stability using only mixture control. Full article
(This article belongs to the Special Issue Advances in Hydrogen Energy and Fuel Cell Technologies)
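
The λ-window selection itself is a simple intersection of constraints. The sketch below illustrates the logic on invented placeholder curves (not the paper's measurements): keep the λ points where NOx, efficiency, and cycle-by-cycle variation all meet or beat the natural gas baseline:

```python
import numpy as np

# Placeholder curves over the lean-burn sweep; values are invented for
# illustration only, not the measured data.
lam = np.round(np.arange(1.50, 1.95, 0.05), 2)
nox = np.array([6.0, 5.0, 4.0, 3.2, 2.5, 2.0, 1.6, 1.3, 1.1])          # g/kWh
eff = np.array([38.0, 38.6, 39.2, 39.6, 39.9, 40.0, 39.8, 39.3, 38.5])  # thermal eff., %
cov = np.array([1.2, 1.0, 0.9, 0.8, 0.7, 0.8, 1.0, 1.4, 2.0])           # COV of IMEP, %

nox_base, eff_base, cov_base = 4.0, 39.0, 2.1   # hypothetical natural gas baseline
window = lam[(nox <= nox_base) & (eff >= eff_base) & (cov <= cov_base)]
print(window)   # lambda values where all three criteria hold simultaneously
```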

20 pages, 3278 KiB  
Article
Design and Implementation Process of an Intelligent Automotive Chassis Domain Controller System Based on AUTOSAR
by Yanlin Jin, Yinong Li, Ling Zheng, Guangxuan Li and Xiaoyu Huang
Sensors 2025, 25(16), 5056; https://doi.org/10.3390/s25165056 - 14 Aug 2025
Abstract
With the rapid development of intelligent automobiles, the chassis serves as an essential carrier of intelligence and a necessary condition for achieving high-level autonomous driving. Its electronic and electrical architecture is evolving toward centralized development, which is also significantly increasing the complexity of system functions. Meanwhile, with the integration of more sensors and an increase in data volume, stricter requirements have been placed on software scalability, portability, and maintainability. This paper presents a system software design and implementation approach for the chassis domain controller by integrating the AUTOSAR standard with model-based design (MBD). The developed software is subsequently deployed on a domain controller hardware platform based on the Renesas u2a16 chip for integrated testing. The software algorithm development, model-in-the-loop (MIL) testing, hardware-in-the-loop (HIL) testing, and real vehicle calibration processes are described in detail, focusing on the roll stability control software component in the chassis domain controller. A detailed definition of the toolchain for each development stage is also provided. The feasibility and effectiveness of the proposed chassis domain controller software system development process, based on the combination of the AUTOSAR standard and model-based design, are validated through test results. This method effectively achieves software–hardware decoupling and enhances software scalability, module reusability, and reliability, which is of great significance for improving the efficiency and iteration speed of chassis domain controller development. Full article
(This article belongs to the Section Vehicular Sensing)

28 pages, 968 KiB  
Article
EVuLLM: Ethereum Smart Contract Vulnerability Detection Using Large Language Models
by Eleni Mandana, George Vlahavas and Athena Vakali
Electronics 2025, 14(16), 3226; https://doi.org/10.3390/electronics14163226 - 14 Aug 2025
Abstract
Smart contracts have become integral to decentralized applications, yet their programmability introduces critical security risks, exemplified by high-profile exploits such as the DAO and Parity Wallet incidents. Existing vulnerability detection methods, including static and dynamic analysis, as well as machine learning-based approaches, often struggle with emerging threats and rely heavily on large, labeled datasets. This study investigates the effectiveness of open-source, lightweight large language models (LLMs) fine-tuned using parameter-efficient techniques, including Quantized Low-Rank Adaptation (QLoRA), for smart contract vulnerability detection. We introduce the EVuLLM dataset to address the scarcity of diverse evaluation resources and demonstrate that our fine-tuned models achieve up to 94.78% accuracy, surpassing the performance of larger proprietary models, while significantly reducing computational requirements. Moreover, we emphasize the advantages of lightweight models deployable on local hardware, such as enhanced data privacy, reduced reliance on internet connectivity, lower infrastructure costs, and improved control over model behavior, factors that are especially critical in security-sensitive blockchain applications. We also explore Retrieval-Augmented Generation (RAG) as a complementary strategy, achieving competitive results with minimal training. Our findings highlight the practicality of using locally hosted LLMs for secure, efficient, and reproducible smart contract analysis, paving the way for broader adoption of AI-driven security in blockchain ecosystems. Full article
(This article belongs to the Special Issue Network Security and Cryptography Applications)
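
The parameter-efficient setup the paper uses (QLoRA) takes only a few lines with Hugging Face transformers and peft. The sketch below shows the standard pattern; the base model name, rank, and target modules are placeholders, not the EVuLLM configuration:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Hypothetical base model and hyperparameters; the paper fine-tunes open,
# lightweight LLMs with QLoRA, but these exact values are not theirs.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained("some-open-7b-model",  # placeholder name
                                             quantization_config=bnb)
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()   # only adapter weights train; the base stays 4-bit

# Training then proceeds with a standard Trainer over
# (contract source, vulnerability label) pairs.
```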

14 pages, 2671 KiB  
Article
Reconfigurable Smart-Pixel-Based Optical Convolutional Neural Networks Using Crossbar Switches: A Conceptual Study
by Young-Gu Ju
Electronics 2025, 14(16), 3219; https://doi.org/10.3390/electronics14163219 - 13 Aug 2025
Abstract
This study presents a reconfigurable optical convolutional neural network (CNN) architecture that integrates a crossbar switch network into a smart-pixel-based optical CNN (SPOCNN) framework. The SPOCNN leverages smart pixel light modulators (SPLMs), enabling high-speed and massively parallel optical computation. To address the challenge of data rearrangement between CNN layers—especially in multi-channel and deep-layer processing—a crossbar switch network is introduced to perform dynamic spatial permutation and multicast operations efficiently. This integration significantly reduces the number of processing steps required for core operations such as convolution, max pooling, and local response normalization, enhancing throughput and scalability. The architecture also supports bidirectional data flow and modular expansion, allowing the simulation of deeper networks within limited hardware layers. Performance analysis based on an AlexNet-style CNN indicates that the proposed system can complete inference in fewer than 100 instruction cycles, achieving processing speeds of over 1 million frames per second. The proposed architecture offers a promising solution for real-time optical AI applications. The further development of hardware prototypes and co-optimization strategies between algorithms and optical hardware is suggested to fully harness its capabilities. Full article
(This article belongs to the Section Optoelectronics)
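
Functionally, the crossbar's job between CNN layers is a configurable permutation/multicast of data lines. A plain NumPy model of that role (a 0/1 routing matrix; purely conceptual, saying nothing about the optical implementation) is:

```python
import numpy as np

def crossbar(routing, x):
    """Model an N x N crossbar as a 0/1 routing matrix applied to data lines.

    routing[i, j] = 1 routes input j to output i: a permutation matrix is a
    pure rearrangement, and a row with several ones sums (fans in) inputs,
    while repeating a column value across rows multicasts it.
    """
    return routing @ x

n = 8
perm = np.eye(n, dtype=int)[np.random.default_rng(1).permutation(n)]
x = np.arange(n)
print(crossbar(perm, x))   # the same values, rearranged between "layers"
```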

19 pages, 887 KiB  
Article
A Protocol for Ultra-Low-Latency and Secure State Exchange Based on Non-Deterministic Ethernet by the Example of MVDC Grids
by Michael Steinke and Wolfgang Hommel
Electronics 2025, 14(16), 3214; https://doi.org/10.3390/electronics14163214 - 13 Aug 2025
Abstract
Modern networked industrial applications often require low-latency communication. Some applications evolve over time yet remain tied to existing infrastructures, such as power grids spanning large areas. For instance, medium voltage direct current (MVDC) grids are evolving into a promising alternative to traditional medium voltage alternating current (MVAC) grids due to their efficiency and suitability for novel use cases such as electric mobility. MVDC grids, however, require an active control and fault-handling strategy. Some strategies demand a continuous state exchange between the converter substations via a communication channel with a latency of less than 1 millisecond. While some communication approaches for MVDC grids are described in the literature, none of them is inherently designed to be secure. In this paper, we present a protocol for ultra-low-latency secure state exchange (PULLSE) based on conventional non-deterministic Ethernet and AES-GCM. We chose Ethernet so as not to limit the approach's usability in terms of hardware requirements or communication patterns. PULLSE is designed to prevent traffic eavesdropping, replay, and manipulation attacks. Full article
(This article belongs to the Special Issue Modern Circuits and Systems Technologies (MOCAST 2024))
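
The cryptographic building block is standard AES-GCM; the interesting part is binding a monotonically increasing sequence number into the nonce so replayed frames are rejected. A Python sketch using the cryptography library follows (the framing, field sizes, and replay rule are illustrative, not the PULLSE wire format):

```python
import struct
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative state-exchange framing: 8-byte sequence number in the clear,
# AES-GCM ciphertext after it. Field sizes and the replay rule are guesses.
key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)

def seal(seq, station_id, state_bytes):
    nonce = struct.pack(">IQ", station_id, seq)          # 12-byte nonce: id + counter
    return struct.pack(">Q", seq) + aead.encrypt(nonce, state_bytes, b"PULLSE")

def open_frame(packet, station_id, last_seq):
    seq = struct.unpack(">Q", packet[:8])[0]
    if seq <= last_seq:
        raise ValueError("replayed or reordered frame")  # replay protection
    nonce = struct.pack(">IQ", station_id, seq)
    # GCM tag verification rejects any manipulated payload or header
    return seq, aead.decrypt(nonce, packet[8:], b"PULLSE")

pkt = seal(1, 7, b"\x00\x01 converter state")
print(open_frame(pkt, 7, last_seq=0))
```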

7 pages, 188 KiB  
Proceeding Paper
Lightweight Post-Quantum Cryptography: Applications and Countermeasures in Internet of Things, Blockchain, and E-Learning
by Chin-Ling Chen, Kuang-Wei Zeng, Wei-Ying Li, Chin-Feng Lee, Ling-Chun Liu and Yong-Yuan Deng
Eng. Proc. 2025, 103(1), 14; https://doi.org/10.3390/engproc2025103014 - 12 Aug 2025
Abstract
With the rapid advancement of quantum computing technology, traditional encryption methods are encountering unprecedented challenges in the Internet of Things (IoT), blockchain systems, and digital learning (e-learning) platforms. Therefore, we systematically reviewed the applications and countermeasures of lightweight post-quantum cryptographic techniques, focusing on the requirements of resource-constrained IoT devices and decentralized systems. We compared the encryption methods based on ring learning with errors (Ring-LWE), Binary Ring-LWE, and ring-ExpLWE, the collaborative key generation framework Q-SECURE, and hardware accelerators for the CRYSTALS-Dilithium digital signature scheme. To meet the high security and efficiency demands for data transmission and user interaction in e-learning platforms, we developed lightweight encryption schemes. By reviewing existing research achievements, we analyzed the application challenges in IoT, blockchain, and e-learning scenarios and explored strategies for optimizing post-quantum encryption schemes for effective deployment. Full article
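
To see why Ring-LWE variants suit constrained devices, note that the entire encryption scheme reduces to a few polynomial operations. Below is a toy version over Z_q[x]/(x^n + 1) with deliberately tiny, insecure parameters; real schemes such as those surveyed here add NTT-based multiplication, proper error sampling, and message encodings:

```python
import numpy as np

# Toy Ring-LWE encryption for intuition only: n and q are far too small to
# be secure, and the sampling is simplistic.
n, q = 16, 7681
rng = np.random.default_rng(0)
small = lambda: rng.integers(-1, 2, n)            # ternary secrets/noise

def mul(a, b):                                    # negacyclic convolution mod x^n + 1
    c = np.convolve(a, b)
    lo, hi = c[:n], np.concatenate([c[n:], [0]])
    return (lo - hi) % q

s, e = small(), small()
a = rng.integers(0, q, n)
b = (mul(a, s) + e) % q                           # public key (a, b), secret key s

m = rng.integers(0, 2, n)                         # n plaintext bits
r, e1, e2 = small(), small(), small()
u = (mul(a, r) + e1) % q
v = (mul(b, r) + e2 + (q // 2) * m) % q           # ciphertext (u, v)

d = (v - mul(u, s)) % q                           # decrypt: small noise + (q/2)*m
recovered = ((d > q // 4) & (d < 3 * q // 4)).astype(int)
assert (recovered == m).all()                     # noise stays below q/4 here
```
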
24 pages, 1346 KiB  
Article
Energy-Efficient Resource Allocation Scheme Based on Reinforcement Learning in Distributed LoRa Networks
by Ryota Ariyoshi, Aohan Li, Mikio Hasegawa and Tomoaki Ohtsuki
Sensors 2025, 25(16), 4996; https://doi.org/10.3390/s25164996 - 12 Aug 2025
Abstract
The rapid growth of Long Range (LoRa) devices has led to network congestion, reducing spectrum and energy efficiency. To address this problem, we propose an energy-efficient reinforcement learning method for distributed LoRa networks, enabling each device to independently select appropriate transmission parameters, i.e., channel, transmission power (TP), and bandwidth (BW), based on acknowledgment (ACK) feedback and energy consumption. Our method employs the UCB1-tuned variant of the Upper Confidence Bound (UCB) algorithm and incorporates energy metrics into the reward function, achieving lower power consumption and high transmission success rates. Designed to be lightweight for resource-constrained IoT devices, it was implemented on real LoRa hardware and tested in dense network scenarios. Experimental results show that the proposed method outperforms fixed allocation, adaptive data rate low-complexity (ADR-Lite), and ϵ-greedy methods in both transmission success rate and energy efficiency. Full article
(This article belongs to the Section Internet of Things)
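
The UCB1-tuned rule is compact enough to run on a constrained device. Here is a sketch of the decision loop, where each arm is a channel/TP/BW combination; the energy-aware reward weighting is an illustrative guess, not the paper's exact function:

```python
import math

class UCB1Tuned:
    """UCB1-tuned over transmission-parameter combinations (sketch)."""
    def __init__(self, arms):
        self.arms = arms
        self.n = [0] * len(arms)        # pulls per arm
        self.s = [0.0] * len(arms)      # reward sums
        self.sq = [0.0] * len(arms)     # squared-reward sums (for variance)
        self.t = 0

    def select(self):
        self.t += 1
        for i, c in enumerate(self.n):
            if c == 0:
                return i                 # play every arm once first
        def index(i):
            mean = self.s[i] / self.n[i]
            var = self.sq[i] / self.n[i] - mean ** 2
            v = var + math.sqrt(2 * math.log(self.t) / self.n[i])
            return mean + math.sqrt(math.log(self.t) / self.n[i] * min(0.25, v))
        return max(range(len(self.arms)), key=index)

    def update(self, i, ack, energy_j, e_max=1.0):
        # Assumed reward shape: success discounted by normalized energy cost.
        reward = 1.0 - energy_j / e_max if ack else 0.0
        self.n[i] += 1; self.s[i] += reward; self.sq[i] += reward ** 2

arms = [(ch, tp, bw) for ch in range(2) for tp in (2, 14) for bw in (125, 500)]
agent = UCB1Tuned(arms)
i = agent.select()                       # transmit with arms[i], observe ACK + energy
agent.update(i, ack=True, energy_j=0.3)
```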

20 pages, 16838 KiB  
Article
Multi-Criteria Visual Quality Control Algorithm for Selected Technological Processes Designed for Budget IIoT Edge Devices
by Piotr Lech
Electronics 2025, 14(16), 3204; https://doi.org/10.3390/electronics14163204 - 12 Aug 2025
Abstract
This paper presents an innovative multi-criteria visual quality control algorithm designed for deployment on cost-effective Edge devices within the Industrial Internet of Things environment. Traditional industrial vision systems are typically associated with high acquisition, implementation, and maintenance costs. The proposed solution addresses the need to reduce these costs while maintaining high defect detection efficiency. The developed algorithm largely eliminates the need for time- and energy-intensive neural network training or retraining, though these capabilities remain optional. Consequently, the reliance on human labor, particularly for tasks such as manual data labeling, has been significantly reduced. The algorithm is optimized to run on low-power computing units typical of budget industrial computers, making it a viable alternative to server- or cloud-based solutions. The system supports flexible integration with existing industrial automation infrastructure, but it can also be deployed at manual workstations. The algorithm's primary application is to assess the spread quality of thick liquid mold filling; however, its effectiveness has also been demonstrated for 3D printing processes. The proposed hybrid algorithm combines three approaches: (1) the classical SSIM image quality metric, (2) depth estimation using Intel MiDaS technology, combined with analysis of depth-map visualizations and histograms, and (3) feature extraction using selected artificial intelligence models based on the OpenCLIP framework and publicly available pretrained models. This combination allows the individual methods to compensate for each other's limitations, resulting in improved defect detection performance. The use of hybrid metrics in defective sample selection has been shown to yield superior algorithmic performance compared to the application of individual methods independently. Experimental tests confirmed the high effectiveness and practical applicability of the proposed solution while preserving its low hardware requirements. Full article
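
The first criterion, SSIM against a known-good reference, is directly reproducible with scikit-image; the sketch below fuses it with precomputed stand-ins for the MiDaS depth and OpenCLIP feature criteria (the weights and fusion rule are illustrative guesses, not the paper's):

```python
import numpy as np
from skimage.metrics import structural_similarity

def hybrid_defect_score(sample, reference, depth_score=0.0, clip_score=0.0,
                        weights=(0.4, 0.3, 0.3)):
    """Combine three defect criteria into one score (higher = more defective).

    Only the SSIM criterion is computed here; depth_score (MiDaS depth-map
    analysis) and clip_score (OpenCLIP feature distance) are assumed to be
    precomputed elsewhere. Weights and fusion rule are illustrative.
    """
    ssim = structural_similarity(sample, reference,
                                 data_range=sample.max() - sample.min())
    parts = [1.0 - ssim, depth_score, clip_score]
    return float(np.dot(weights, parts))

ref = np.random.rand(64, 64)
bad = ref.copy(); bad[20:30, 20:30] = 0.0        # simulated local defect
print(hybrid_defect_score(bad, ref, depth_score=0.1, clip_score=0.2))
```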

24 pages, 948 KiB  
Review
A Review on Deep Learning Methods for Glioma Segmentation, Limitations, and Future Perspectives
by Cecilia Diana-Albelda, Álvaro García-Martín and Jesus Bescos
J. Imaging 2025, 11(8), 269; https://doi.org/10.3390/jimaging11080269 - 11 Aug 2025
Abstract
Accurate and automated segmentation of gliomas from Magnetic Resonance Imaging (MRI) is crucial for effective diagnosis, treatment planning, and patient monitoring. However, the aggressive nature and morphological complexity of these tumors pose significant challenges that call for advanced segmentation techniques. This review provides a comprehensive analysis of Deep Learning (DL) methods for glioma segmentation, with a specific focus on bridging the gap between research performance and practical clinical deployment. We evaluate over 80 state-of-the-art models published up to 2025, categorizing them into CNN-based, Pure Transformer, and Hybrid CNN-Transformer architectures. The primary objective of this paper is to critically assess these models not only on their segmentation accuracy but also on their computational efficiency and suitability for real-world medical environments by incorporating hardware resource considerations. We present a comparison of model performance on the BraTS benchmark datasets and introduce a suitability analysis for top-performing models based on their robustness, efficiency, and completeness of tumor region delineation. By identifying current trends, limitations, and key trade-offs, this review offers future research directions aimed at optimizing the balance between technical performance and clinical usability to improve diagnostic outcomes for glioma patients. Full article
(This article belongs to the Section Medical Imaging)
