Search Results (566)

Search Parameters:
Keywords = INTEL

30 pages, 28721 KB  
Article
Dual-Arm Robotic Textile Unfolding with Depth-Corrected Perception and Fold Resolution
by Tilla Egerhei Båserud, Joakim Johansen, Ajit Jha and Ilya Tyapin
Robotics 2026, 15(4), 78; https://doi.org/10.3390/robotics15040078 - 8 Apr 2026
Abstract
Reliable textile recycling requires automated unfolding to expose hidden hard components such as zippers, buttons, and metal fasteners, which otherwise risk damaging machinery and compromising downstream processes. This paper presents the design and implementation of an automated textile unfolding system based on a dual-arm robotic manipulation framework. The system uses two Interbotix WidowX 250s 6-DoF robotic arms and an Intel RealSense L515 LiDAR camera for visual perception. The unfolding process consists of three stages: initial dual-arm stretching to reduce major folds, refinement through a second stretch targeting the lower region, and a machine-learning stage that employs a YOLOv11 framework trained on depth-encoded textile images, followed by a depth-gradient-based estimator for fold direction. The system applies an extremity-based grasping strategy that selects leftmost and rightmost textile points from a custom error-corrected depth map, enabling robust grasp point selection, and a fold direction estimation method based on depth gradients around the detected fold. The most confident fold region is selected, an unfolding direction is determined using depth ranking, and the textile is manipulated until a flat state is confirmed through depth uniformity. Experiments show that depth correction significantly reduces spatial error in the robot frame, while segmentation and extremity detection achieve high accuracy across varied fold configurations, and the YOLOv11n-based model reaches 98.8% classification accuracy, while fold direction is estimated correctly in 87% of test cases. By enabling robust, largely autonomous textile unfolding, the system demonstrates a practical approach that could support safer and more efficient automated textile recycling workflows. Full article
(This article belongs to the Section Sensors and Control in Robotics)
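
The fold-direction step described above (depth gradients around the detected fold) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the depth map is assumed to be a row-major grid in metres, the fold point is assumed given by the detector, and the folded-over layer is assumed to sit closer to the camera (smaller depth), so the unfolding pull goes toward the shallower side.

```python
def fold_direction(depth, row, col, win=1):
    """Estimate the unfolding direction at a detected fold point by
    comparing mean depth in four neighbouring windows.  Returns the
    direction of the shallowest side (assumed to hold the folded-over
    layer, which lies closer to the camera)."""
    def mean(cells):
        vals = [depth[r][c] for r, c in cells
                if 0 <= r < len(depth) and 0 <= c < len(depth[0])]
        # Windows fully outside the image never win the comparison.
        return sum(vals) / len(vals) if vals else float("inf")

    candidates = {
        "up":    [(row - d, col) for d in range(1, win + 1)],
        "down":  [(row + d, col) for d in range(1, win + 1)],
        "left":  [(row, col - d) for d in range(1, win + 1)],
        "right": [(row, col + d) for d in range(1, win + 1)],
    }
    return min(candidates, key=lambda k: mean(candidates[k]))
```

The actual system ranks depth over a YOLOv11-detected fold region rather than a single point; the windowed comparison above only captures the core gradient idea.
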

32 pages, 29579 KB  
Article
A Unified Parameter-Adaptive MPC Framework for Motion Control of Heterogeneous AGVs with Different Actuation Topologies
by Shengyu Zhou, Yixin Su, Huawei Zhang and Zhaoqi Kang
Actuators 2026, 15(4), 188; https://doi.org/10.3390/act15040188 - 28 Mar 2026
Abstract
The deployment of heterogeneous Automated Guided Vehicles (AGVs) in smart manufacturing requires control strategies that can accommodate distinct actuation characteristics and constraints. This paper proposes a Multi-Factor Coupled Parameter-Adaptive Model Predictive Control (MFCP-AMPC) framework. Unlike conventional approaches requiring vehicle-specific tuning, this framework unifies differential-drive, dual-steer, and mecanum-wheel platforms under a single parameter-varying state-space model that respects the specific actuation limits of each topology. A key contribution is the multi-factor coupling mechanism that dynamically adjusts the prediction horizon and weighting matrices based on path curvature, vehicle speed, and tracking error. Experiments on industrial AGV prototypes demonstrate that the framework achieves robust tracking precision under varying payloads. Crucially, by acknowledging physical limits, the framework achieves strict millimeter-level accuracy (RMSE < 7 mm) in quasi-static low-speed complex maneuvers (v ≤ 0.3 m/s), and maintains highly competitive industrial precision (RMSE ≈ 15–25 mm) under aggressive high-speed tracking (v ≥ 1.0 m/s). Moreover, the proposed method significantly improves the control input smoothness (Smoothness Index > 0.75), thereby reducing mechanical wear and preventing actuator saturation. Real-time validation (12 ms average solve time on an Intel i7 IPC) confirms its suitability for resource-constrained industrial controllers. Full article
(This article belongs to the Section Control Systems)
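
The multi-factor coupling mechanism (prediction horizon and weights driven by path curvature, speed, and tracking error) can be illustrated with a toy scheduling law. All thresholds and gains below are invented placeholders, not the MFCP-AMPC tuning; the point is only the qualitative coupling the abstract describes.

```python
def adapt_mpc_params(curvature, speed, err,
                     n_min=10, n_max=40, q_base=1.0):
    """Illustrative multi-factor coupling: shorten the prediction
    horizon on tight curves (the linearised model degrades quickly),
    lengthen it with speed (more look-ahead needed), and raise the
    tracking weight as the error grows.  Units: curvature in 1/m,
    speed in m/s, err in m; all non-negative."""
    tight = min(1.0, curvature / 2.0)   # 0 = straight, 1 = very tight curve
    fast = min(1.0, speed / 2.0)        # saturates at 2 m/s
    n = round(n_min + (n_max - n_min) * max(0.0, fast - 0.5 * tight))
    q = q_base * (1.0 + 5.0 * min(err, 0.1) / 0.1)
    return max(n_min, min(n_max, n)), q
```

In a real MPC loop these two outputs would repopulate the horizon length and the Q weighting matrix before each solve.
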

18 pages, 972 KB  
Article
CPU Deployment-Oriented Evaluation of Compact Neural Networks for Remaining Useful Life Prediction
by Ali Naderi Bakhtiyari, Vahid Hassani and Mohammad Omidi
Machines 2026, 14(4), 375; https://doi.org/10.3390/machines14040375 - 28 Mar 2026
Abstract
Remaining Useful Life (RUL) prediction is a key component of prognostics and health management for modern industrial systems. While deep learning methods have significantly improved prediction accuracy, many existing approaches rely on large neural networks that are difficult to deploy on resource-constrained edge devices. This study presents a deployment-oriented evaluation of compact neural networks for RUL prediction using the NASA C-MAPSS turbofan engine benchmark. Two lightweight hybrid architectures, CNN–GRU and CNN–TCN, were developed with approximately 28k–32k parameters to represent realistic models for CPU-based edge inference. A systematic experimental analysis was conducted across all four C-MAPSS subsets (FD001–FD004), which represent increasing levels of operational and fault complexity. In addition to baseline performance, two post-training compression techniques (i.e., global unstructured magnitude pruning and dynamic INT8 quantization) were evaluated. To assess real deployment behavior, inference latency was measured on both a high-performance Intel x86 workstation and a resource-constrained ARM platform. Results show that CNN–GRU generally achieves higher predictive accuracy, whereas CNN–TCN provides more consistent and lower inference latency due to its convolution-only temporal modeling. Unstructured pruning can yield modest improvements in prediction accuracy, suggesting a regularization effect, but it does not reliably reduce model size or latency on standard CPUs due to the overhead associated with pruning masks. Dynamic quantization substantially reduces model size (particularly for CNN–GRU) while preserving predictive accuracy; however, it increases runtime latency because of additional quantization and dequantization operations. 
These findings demonstrate that compression techniques commonly used for large models do not necessarily translate into deployment benefits for already compact RUL architectures and highlight the importance of hardware-aware evaluation when designing edge prognostics systems. Full article
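
The mechanics behind the two compression techniques, and why their deployment benefits are not automatic, can be shown with minimal plain-Python stand-ins (a real pipeline would use a framework's pruning and dynamic-quantization utilities; the functions below are illustrative only).

```python
def magnitude_prune(weights, sparsity):
    """Global unstructured magnitude pruning: zero the `sparsity`
    fraction of weights with the smallest magnitude.  The tensor
    keeps its dense shape, which is why pruning alone does not
    shrink model size or CPU latency."""
    k = int(len(weights) * sparsity)
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    dead = set(order[:k])
    return [0.0 if i in dead else w for i, w in enumerate(weights)]

def quantize_int8(weights):
    """Symmetric per-tensor INT8 quantization: store int8 codes plus
    one float scale (about 4x smaller than float32)."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """The on-the-fly convert-back step; with dynamic quantization it
    runs at inference time, which is the latency overhead noted above."""
    return [v * scale for v in q]
```
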

8 pages, 1600 KB  
Article
Impact of Low-Frequency RF Injection on Leakage Behavior in Nanoscale NMOS Devices
by Mohammad Abedi, Zahra Abedi, Payman Zarkesh-Ha, Sameer Hemmady and Edl Schamiloglu
Electronics 2026, 15(6), 1244; https://doi.org/10.3390/electronics15061244 - 17 Mar 2026
Abstract
The goal of this research is to develop a predictive model that determines how low-frequency Electromagnetic Interference (EMI) affects the leakage current behavior of CMOS transistors. Although developed and validated using NMOS devices, the modeling framework can be extended to PMOS transistors; experimental validation of PMOS devices is planned for future work. The model provides essential physical parameter-based analysis of nanoscale device EMI susceptibility during low-frequency operation. The model demonstrates high accuracy and practicality through experimental verification of test chips built with standard TSMC CMOS technology nodes. The findings highlight that modern CMOS designs must account for low-frequency EMI, which can induce leakage shifts significant enough to impact EMC compliance, functional robustness, and reliability in ultra-low-power and near-threshold applications. The research delivers a practical method for designers to evaluate and reduce EMI-induced leakage in integrated circuits. Full article

30 pages, 11120 KB  
Article
ParaTaintGX: Detecting Memory Corruption Vulnerabilities in SGX Applications via Parameter-Taint Model
by Chao Li, Yifan Xu, Zhe Sun, Yongjie Liu, Jun Zhang and Fan Li
Mathematics 2026, 14(6), 1007; https://doi.org/10.3390/math14061007 - 16 Mar 2026
Abstract
Intel Software Guard Extensions (SGX) have been widely studied and adopted in privacy-preserving information systems to enhance the security and privacy guarantees of sensitive data computation. By constructing a protected enclave within the processor, SGX provides hardware-enforced confidentiality and integrity for sensitive data and critical code. Nevertheless, due to inevitable interactions between trusted enclaves and untrusted host environments, SGX applications remain vulnerable to memory corruption attacks. Existing detection techniques exhibit fundamental limitations, including the lack of systematic induction of SGX-specific memory corruption behaviors, the absence of fine-grained parameter-level taint modeling during call-chain construction, and relatively inefficient call-chain exploration strategies over large search spaces. To address these issues, we propose ParaTaintGX, an analysis framework that integrates parameter-level taint states into vulnerability detection. ParaTaintGX constructs fine-grained call-chain nodes that capture both functions and the taint states of their parameters. It further introduces a Multi-node Heuristic Priority Search Algorithm to guide call-chain exploration. In addition, a backtracking-based pruning strategy is applied during path analysis to efficiently identify memory corruption vulnerabilities. Our evaluation demonstrates that ParaTaintGX discovers 12 vulnerabilities across 10 open-source SGX projects, outperforming the best baseline tool by two vulnerabilities. It achieves 19.35% precision, surpassing the most precise existing tool by 8.37 percentage points. These results highlight its superior detection capability and precision. Full article
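
The idea of letting parameter-level taint guide call-chain exploration can be sketched with a toy best-first search. Everything here is hypothetical (the call graph, the taint map, the sink names, and the scoring rule); ParaTaintGX's actual node model and Multi-node Heuristic Priority Search are richer than this.

```python
import heapq

def explore(call_graph, taint, start, sinks):
    """Best-first exploration of call chains, prioritising chains whose
    functions receive more tainted parameters.  `taint[f]` is the set
    of tainted parameter indices of function `f`; `sinks` are functions
    where a tainted parameter could corrupt memory."""
    # Heap entries are (-taint_score, chain); more taint pops first.
    heap = [(-len(taint.get(start, ())), [start])]
    found = []
    while heap:
        score, chain = heapq.heappop(heap)
        fn = chain[-1]
        if fn in sinks:
            found.append(chain)          # candidate vulnerable chain
            continue
        for callee in call_graph.get(fn, []):
            if callee not in chain:      # avoid cycles
                s = score - len(taint.get(callee, ()))
                heapq.heappush(heap, (s, chain + [callee]))
    return found
```
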

20 pages, 9746 KB  
Article
SGX-Based Efficient Three-Factor Authentication Scheme with Online Registration for Industrial Internet of Things
by Zhenbin Guo, Yang Liu, Wenchen He, Xiaoxu Hu, Hua Zhang and Tengfei Tu
Electronics 2026, 15(6), 1180; https://doi.org/10.3390/electronics15061180 - 12 Mar 2026
Abstract
The Industrial Internet of Things (IIoT) enhances industrial efficiency but also introduces substantial security challenges. Authentication is a key building block for securing IIoT networks. However, many recent IoT authentication schemes rely on offline registration and transmit temporary identity credentials in plaintext during registration, which exposes them to privileged-user attacks and limits their practicality in complex deployment scenarios. To address these issues, this paper presents an efficient three-factor authentication scheme with secure online registration for IIoT. The proposed scheme leverages Intel Software Guard Extensions (SGX) to protect the registration master key and support online registration. In addition, a dynamic credential update mechanism is introduced to mitigate privileged-user attacks. The security of the scheme is validated through ProVerif-based formal verification and informal security analysis, while its performance is evaluated through comparative analysis and NS-3 simulations. The results demonstrate that the proposed scheme provides enhanced security with low overhead, making it suitable for IIoT environments. Full article
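
One common way to realise a dynamic credential update of the kind described above is a keyed hash chain: after each successful authentication the stored temporary credential is replaced, so a credential read once by a privileged insider cannot be replayed later. This is a standard-library sketch of the general mechanism, not the paper's concrete SGX-backed construction.

```python
import hashlib
import hmac

def next_credential(master_key: bytes, cred: bytes, nonce: bytes) -> bytes:
    """Derive the next temporary credential as
    HMAC-SHA256(master_key, cred || nonce).  In the scheme sketched
    here the master key would live inside the SGX enclave, so even a
    privileged host user never sees it."""
    return hmac.new(master_key, cred + nonce, hashlib.sha256).digest()
```
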

22 pages, 1151 KB  
Article
Directed and Resolution-Adaptive Louvain Community Method for Hardware Trojan Detection and Localization in Gate-Level Netlists
by Hongxu Gao, Dong Ding, Cai Zhen, Xin Liu, Yu Li, Jinping Li, Yuning Zhao and Quan Wang
Electronics 2026, 15(5), 1027; https://doi.org/10.3390/electronics15051027 - 28 Feb 2026
Abstract
The increasing complexity of modern gate-level circuits significantly degrades the efficiency of existing Hardware Trojan detection methods. Community partitioning is an efficient structural decomposition technique to address efficiency and scalability issues, yet current community-based detection schemes rely primarily on undirected graph modeling. To address these issues, we propose an improved structure-aware community detection method for gate-level netlists, aiming to enhance the detection and localization capabilities of small-scale Hardware Trojans. First, an expanded dataset with structural diversity of clean and Trojan-inserted circuits is constructed by extending Trust-Hub benchmark circuits. Then, a directed and resolution-adaptive Louvain community detection algorithm is proposed: by introducing directed modularity, resolution parameters, and logic-gate semantic weighting, fine-grained community partitioning is achieved. On this basis, topological, functional, and anomaly features are extracted from community subgraphs, and a detection framework is built by combining graph neural networks and traditional detection models. All experiments are conducted on a unified platform equipped with an Intel(R) Core(TM) i7-10750H processor and an NVIDIA GeForce RTX 2060 GPU. Experimental results show that compared with configurations using the original Louvain partitioning and traditional features, the proposed method achieves significant improvements in both detection accuracy and localization capability. After introducing the improved community partitioning and feature design, the optimal model (CommunityGAT) yields a 3.3% increase in TPR and a 10.8% increase in ALC, verifying the method’s effectiveness in detecting small-scale concealed Trojans. Full article
(This article belongs to the Special Issue New Trends in Cybersecurity and Hardware Design for IoT)
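
The quantity that a directed, resolution-adaptive Louvain step optimises can be computed directly. Assuming the Leicht–Newman directed form, Q = (1/m) Σ_ij [A_ij − γ k_i^out k_j^in / m] δ(c_i, c_j), where γ > 1 favours smaller communities (useful for isolating small Trojan structures). The O(n²) toy below is for illustration, not the paper's weighted, semantics-aware variant.

```python
def directed_modularity(edges, community, gamma=1.0):
    """Directed modularity with resolution parameter gamma.
    `edges` is a list of (u, v) arcs; `community` maps node -> id."""
    m = len(edges)
    out_deg, in_deg = {}, {}
    for u, v in edges:
        out_deg[u] = out_deg.get(u, 0) + 1
        in_deg[v] = in_deg.get(v, 0) + 1
    nodes = set(out_deg) | set(in_deg) | set(community)
    q = 0.0
    for i in nodes:
        for j in nodes:
            if community.get(i) != community.get(j):
                continue                       # delta(c_i, c_j)
            a = sum(1 for e in edges if e == (i, j))
            q += a - gamma * out_deg.get(i, 0) * in_deg.get(j, 0) / m
    return q / m
```
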

9 pages, 3625 KB  
Proceeding Paper
A Framework for Integrity Monitoring for Positioning Through Graph-Based SLAM Optimization
by Sam Bekkers and Heiko Engwerda
Eng. Proc. 2026, 126(1), 25; https://doi.org/10.3390/engproc2026126025 - 25 Feb 2026
Abstract
As satellite navigation systems show vulnerabilities in specific circumstances such as urban canyons or jamming and spoofing situations, additional sensors such as cameras may be incorporated on the platform. Despite advancements in the robotics and computer vision community, which have led to increasingly accurate Simultaneous Localization and Mapping (SLAM) positioning solutions, visual navigation has its own vulnerabilities. It therefore remains of critical importance for many applications to study the integrity of fused navigation algorithms and their components, which is done less for SLAM than for satellite navigation. In this paper, a framework for integrity monitoring (IM) of a visual SLAM algorithm is proposed. A sensor-level IM scheme analyses feature reprojection errors. It is demonstrated that, in dynamic environments, multiple hypotheses can be generated from different subsets of extracted features. Additionally, the factor graph-based framework employs a fusion-level IM scheme which deals with these multiple hypotheses and selects the most probable one by calculating the sum of weighted measurement residuals. These concepts are applied to scenarios from real and simulated experiments in order to demonstrate applicability. Full article
(This article belongs to the Proceedings of European Navigation Conference 2025)
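
The fusion-level selection step (pick the hypothesis with the best sum of weighted measurement residuals) reduces to a small function. Representation is assumed, not taken from the paper: each hypothesis carries the (residual, weight) pairs produced by its feature subset.

```python
def select_hypothesis(hypotheses):
    """Return the index of the hypothesis with the smallest sum of
    weighted squared residuals: a minimal stand-in for the fusion-level
    integrity-monitoring scheme, which would also gate the result
    against an integrity threshold."""
    def cost(h):
        return sum(w * r * r for r, w in h)
    return min(range(len(hypotheses)), key=lambda i: cost(hypotheses[i]))
```

In a dynamic scene, one hypothesis might track static background features (small residuals) and another a moving object (large residuals against the ego-motion model); the first would be selected.
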

20 pages, 1420 KB  
Article
High-Level Synthesis (HLS)-Enabled Field-Programmable Gate Array (FPGA) Algorithms for Latency-Critical Plasma Diagnostics and Neural Trigger Prototyping in Next-Generation Energy Projects
by Radosław Cieszewski, Krzysztof Poźniak, Ryszard Romaniuk and Maciej Linczuk
Energies 2026, 19(4), 1091; https://doi.org/10.3390/en19041091 - 21 Feb 2026
Abstract
Large-scale advanced energy systems, including fusion devices, high-power plasma sources, and accelerator-driven energy platforms, increasingly depend on real-time, hardware-level data processing for diagnostics, control, and protection. In such installations, ultra-low latency, deterministic throughput, and multi-decade operational lifetimes are not optional design goals but strict system-level requirements. While similar timing constraints exist in high-energy physics infrastructures, energy applications place a stronger emphasis on long-term stability, maintainability, and reproducibility of digital signal processing pipelines. This work investigates whether high-level synthesis (HLS) provides a practical and sustainable design methodology for implementing both classical pattern-based and compact neural network (NN) trigger logic on Field-Programmable Gate Arrays (FPGAs) under realistic energy-system constraints. Using representative commercial toolchains (Intel HLS and hls4ml) as reference workflows, we demonstrate the capabilities of fixed-point, fully pipelined streaming architectures, while also identifying critical shortcomings of pragma-driven HLS approaches in terms of architecture transparency, long-term portability, and systematic multi-objective design-space exploration, all of which are crucial for long-lived energy projects and plasma diagnostic systems. These limitations directly motivate the development of a custom, vendor-agnostic, extensible HLS framework (PyHLS), specifically oriented toward deterministic latency, reproducibility, and physics-grade verification demands of advanced energy infrastructures. Gas Electron Multipliers (GEMs) are modern gaseous detectors increasingly employed in plasma diagnostics, radiation monitoring, and high-power energy experiments, where high rate capability, fine spatial resolution, and radiation tolerance are required. 
Their massively parallel signal structure and continuous data streams make GEMs a representative and demanding benchmark for FPGA-based real-time trigger and preprocessing systems in energy-related environments. The primary objective of this study is to establish a pragmatic technological baseline, demonstrating that contemporary HLS workflows can reliably support both template-based and neural inference-based trigger architectures within strict timing, resource, and power constraints typical for advanced energy installations. Furthermore, we outline a scalable development path toward multi-channel and two-dimensional (pixelated) GEM readout architectures, directly applicable to fusion diagnostics, plasma accelerators, beam–plasma interaction studies, and radiation-hard energy monitoring platforms. Although the proposed methodology remains fully transferable to large-scale physics trigger systems, its principal relevance is directed toward real-time diagnostics and protection layers in next-generation energy systems. Full article
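
Fixed-point, fully pipelined streaming datapaths start from coefficient quantization. A minimal Q-format helper shows the rounding and saturation an HLS flow applies before synthesis; the 16-bit word and 12 fraction bits are illustrative choices, not parameters from the paper.

```python
def to_q(x, frac_bits=12, word=16):
    """Round a real coefficient into two's-complement Qm.n fixed point.
    Returns (integer code, value actually represented); out-of-range
    inputs saturate, mirroring a hardware saturation stage."""
    lo, hi = -(1 << (word - 1)), (1 << (word - 1)) - 1
    code = max(lo, min(hi, round(x * (1 << frac_bits))))
    return code, code / (1 << frac_bits)
```

Comparing the represented value against the original coefficient is the usual first check of whether a chosen word length meets a pipeline's accuracy budget.
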

16 pages, 1467 KB  
Article
ECG Heartbeat Classification Using Echo State Networks with Noisy Reservoirs and Variable Activation Function
by Ioannis P. Antoniades, Anastasios N. Tsiftsis, Christos K. Volos, Andreas D. Tsigopoulos, Konstantia G. Kyritsi and Hector E. Nistazakis
Computation 2026, 14(2), 49; https://doi.org/10.3390/computation14020049 - 13 Feb 2026
Abstract
In this work, we use an Echo State Network (ESN) model, which is essentially a recurrent neural network (RNN) operating according to the reservoir computing (RC) paradigm, to classify individual ECG heartbeats using the MIT-BIH arrhythmia database. The aim is to evaluate the performance of ESN in a challenging task that involves classification of complex, unprocessed one-dimensional signals, distributed into five classes. Moreover, we investigate the performance of the ESN in the presence of (i) noise in the dynamics of the internal variables of the hidden (reservoir) layer and (ii) random variability in the activation functions of the hidden layer cells (neurons). The overall accuracy of the best-performing ESN, without noise and variability, exceeded 96% with per-class accuracies ranging from 90.2% to 99.1%, which is higher than in previous studies using CNNs and more complex machine learning approaches. The top-performing ESN required only 40 min of training on the CPU of an HP laptop (Intel i5-1235U @ 1.3 GHz). Notably, an alternative ESN configuration that matched the accuracy of a prior CNN-based study (93.4%) required only 6 min of training, whereas a CNN would typically require an estimated training time of 2–3 days. Surprisingly, ESN performance proved to be very robust when Gaussian noise was added to the dynamics of the reservoir hidden variables, even for high noise amplitudes. Moreover, the success rates remained essentially the same when random variability was imposed in the activation functions of the hidden layer cells. The stability of ESN performance under noisy conditions and random variability in the hidden layer (reservoir) cells demonstrates the potential of analog hardware implementations of ESNs to be robust in time-series classification tasks. Full article
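
A single reservoir update of a leaky-tanh ESN, including the Gaussian state noise whose effect the study probes, fits in a few lines. This is a generic textbook-style sketch (two units, hand-picked weights), not the authors' configuration, which used a much larger reservoir and a trained linear readout.

```python
import math
import random

def esn_step(x, u, w_in, w_res, leak=1.0, noise=0.0, rng=random):
    """One reservoir update:
    x' = (1 - leak) * x + leak * tanh(W_in u + W_res x + xi),
    where xi ~ N(0, noise) is optional state noise."""
    n = len(x)
    new = []
    for i in range(n):
        pre = sum(w_in[i][k] * u[k] for k in range(len(u)))
        pre += sum(w_res[i][j] * x[j] for j in range(n))
        if noise:
            pre += rng.gauss(0.0, noise)
        new.append((1 - leak) * x[i] + leak * math.tanh(pre))
    return new
```

Classification then reduces to running each heartbeat segment through the reservoir and fitting only the linear readout, which is what keeps ESN training minutes rather than days.
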

22 pages, 3543 KB  
Article
Benchmarking Post-Quantum Signatures and KEMs on General-Purpose CPUs Using a TCP Client–Server Testbed
by Jesus Algar-Fernandez, Andrea Villacís-Vanegas, Ysabel Amaro-Aular and Maria-Dolores Cano
Computers 2026, 15(2), 116; https://doi.org/10.3390/computers15020116 - 9 Feb 2026
Abstract
Quantum computing threatens widely deployed public-key cryptosystems, accelerating the adoption of Post-Quantum Cryptography (PQC) in practical systems. Beyond asymptotic security, the feasibility of PQC deployments depends on measured performance on real hardware and on implementation-level overheads. This paper presents an experimental evaluation of five post-quantum digital signature schemes (CRYSTALS-Dilithium, HAWK, SQISign, SNOVA, and SPHINCS+) and three key encapsulation mechanisms (Kyber, HQC, and BIKE) selected to cover multiple PQC design families and parameterizations used in practice. We implement a TCP client–server testbed in Python that invokes C implementations for each primitive—via standalone executables and, where provided, in-process dynamic libraries—and benchmarks key generation, encapsulation/decapsulation, and signature generation/verification on two Windows 11 commodity processors: an AMD Ryzen 7 4000 (8 cores, 16 threads, 1.8 GHz) and an Intel Core i5-1035G1 (4 cores, 8 threads, 1.0 GHz). Each operation is repeated ten times under a low-interference setup, and results are aggregated as mean (with 95% confidence intervals) timings over repeated runs. Across the evaluated configurations, lattice-based schemes (Kyber, Dilithium, HAWK) show the lowest computational cost, while code-based KEMs (HQC, BIKE), isogeny-based (SQISign), and multivariate (SNOVA) signatures incur higher overhead. Hash-based SPHINCS+ exhibits larger artifacts and higher signing latency depending on the parameterization. The AMD platform consistently outperforms the Intel platform, illustrating the impact of CPU characteristics on observed PQC overheads. These results provide comparative evidence to support primitive selection and capacity planning for quantum-resistant deployments, while motivating future end-to-end validation in protocol and web service settings. Full article
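
The measurement protocol (repeated runs, mean with a 95% confidence interval) is reproducible with the standard library alone. The sketch below uses the normal-approximation interval; for n = 10 a Student-t factor (about 2.26 instead of 1.96) would be slightly wider, and the paper does not state which it uses.

```python
import statistics
import time

def benchmark(fn, repeats=10):
    """Time `fn` `repeats` times and return (mean_seconds, half_width)
    where mean +/- half_width is an approximate 95% confidence
    interval for the mean latency."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    mean = statistics.fmean(samples)
    half = 1.96 * statistics.stdev(samples) / len(samples) ** 0.5
    return mean, half
```

In the paper's setting `fn` would wrap one primitive operation, e.g. a Kyber encapsulation invoked through its C library binding.
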

13 pages, 1612 KB  
Article
Rethinking the Security Assurances of MTD: A Gap Analysis for Network Defense
by Łukasz Jalowski, Marek Zmuda, Mariusz Rawski and Paulina Rekosz
Future Internet 2026, 18(2), 89; https://doi.org/10.3390/fi18020089 - 7 Feb 2026
Abstract
Moving Target Defense (MTD) is a paradigm that has the potential to revolutionize the approach to network security. Although a significant number of papers have been published on the topic, there are still no standards or any dominant implementations of this concept. This article identifies and attempts to bridge the gap in understanding various aspects of MTD security while also defining the research directions necessary to implement MTD techniques in real-world scenarios. It discusses the security of key design principles of MTD, considers problems regarding applying MTD to real networks, and finally addresses threat modeling in the context of MTD. By aggregating various security aspects related to MTD, some of which have not been typically discussed in the available literature on the subject, this work aims to assist in designing future MTD schemes, help navigate around various security caveats, and highlight possible research directions that have not been sufficiently explored in the existing literature. Full article
(This article belongs to the Special Issue Internet of Things and Cyber-Physical Systems, 3rd Edition)

24 pages, 2572 KB  
Article
Measurement of the Time of Boarding and Alighting from Trams Using the Traditional Method, and the Possibility of Using the YOLOs10 Algorithm
by Mikołaj Szyca, Emil Smyk, Krzysztof Radtke and Ján Dižo
Smart Cities 2026, 9(2), 25; https://doi.org/10.3390/smartcities9020025 - 2 Feb 2026
Abstract
This article examines differences between conventional manual measurements of tram operations and data extracted automatically using the REWIZOR program, based on the Yolo10s algorithm. The study addresses the broader question of how artificial intelligence can support analyses of passenger exchange processes in public transport and improve the efficiency of data collection. Measurements conducted in four Polish cities included tram types, stop times, and detailed boarding and alighting durations, while the REWIZOR software enabled automatic detection of stop times and passenger flows based on video recordings. The results show that, although both approaches yield consistent qualitative information regarding doors and passenger counts, significant quantitative discrepancies arise. These differences stem mainly from methodological inconsistencies and varying definitions of boarding, alighting, and stop times, as well as from software-related detection errors. The findings indicate that AI-based measurements require calibration against reference methods to allow reliable comparison with conventional datasets. As currently implemented, REWIZOR can be used effectively for internal analyses of passenger flows, if all compared data come from the same system. Further development—such as implementing simultaneous tracking of people and heads—may considerably improve accuracy and facilitate wider applicability in public transport studies. Full article
(This article belongs to the Special Issue Computer Vision for Creating Sustainable Smart Cities of Tomorrow)
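
The calibration the findings call for (mapping AI-derived measurements onto a manual reference) is, in its simplest form, a least-squares fit. The abstract argues that such calibration is needed but does not prescribe one; the linear model below is an illustrative choice.

```python
def calibrate(auto, manual):
    """Fit manual ~ a * auto + b by ordinary least squares, e.g. to map
    automatically detected stop times onto manually measured ones.
    Returns (a, b)."""
    n = len(auto)
    mx = sum(auto) / n
    my = sum(manual) / n
    sxx = sum((x - mx) ** 2 for x in auto)
    sxy = sum((x - mx) * (y - my) for x, y in zip(auto, manual))
    a = sxy / sxx
    return a, my - a * mx
```

A slope near 1 with a constant offset would indicate a systematic definitional difference (e.g. when a "stop" begins), while a slope far from 1 would point to detection errors that scale with stop duration.
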

36 pages, 11446 KB  
Article
SIFT-SNN for Traffic-Flow Infrastructure Safety: A Real-Time Context-Aware Anomaly Detection Framework
by Munish Rathee, Boris Bačić and Maryam Doborjeh
J. Imaging 2026, 12(2), 64; https://doi.org/10.3390/jimaging12020064 - 31 Jan 2026
Abstract
Automated anomaly detection in transportation infrastructure is essential for enhancing safety and reducing the operational costs associated with manual inspection protocols. This study presents an improved neuromorphic vision system, which extends the prior SIFT-SNN (scale-invariant feature transform–spiking neural network) proof-of-concept by incorporating temporal feature aggregation for context-aware and sequence-stable detection. Analysis of classical stitching-based pipelines exposed sensitivity to motion and lighting variations, motivating the proposed temporally smoothed neuromorphic design. SIFT keypoints are encoded into latency-based spike trains and classified using a leaky integrate-and-fire (LIF) spiking neural network implemented in PyTorch. Evaluated across three hardware configurations (an NVIDIA RTX 4060 GPU, an Intel i7 CPU, and a simulated Jetson Nano), the system achieved 92.3% accuracy and a macro F1 score of 91.0% under five-fold cross-validation. Inference latencies were measured at 9.5 ms, 26.1 ms, and ~48.3 ms per frame, respectively. Memory footprints were under 290 MB, and power consumption was estimated to be between 5 and 65 W. The classifier distinguishes between safe, partially dislodged, and fully dislodged barrier pins, which are critical failure modes for the Auckland Harbour Bridge's Movable Concrete Barrier (MCB) system. Temporal smoothing further improves recall for ambiguous cases. By achieving a compact model size (2.9 MB), low-latency inference, and minimal power demands, the proposed framework offers a deployable, interpretable, and energy-efficient alternative to conventional CNN-based inspection tools. Future work will explore the generalisability and transferability of the presented approach, additional input sources, and human–computer interaction paradigms for various deployment infrastructures. Full article
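The two core ingredients named in the abstract, latency coding (stronger features spike earlier) and a leaky integrate-and-fire neuron, can be sketched in a few lines. All parameters (t_max, tau, threshold, weight) are assumed for illustration; the paper's PyTorch implementation and SIFT front end are not reproduced here.

```python
import math
from collections import Counter

def latency_encode(strength, t_max=20):
    """Latency coding: map a feature strength in (0, 1] to a spike time,
    with stronger features firing earlier; returns None for no spike."""
    if strength <= 0:
        return None
    return min(t_max - 1, int(round((1.0 - strength) * (t_max - 1))))

def lif_run(spike_times, t_max=20, tau=5.0, v_th=1.0, w=0.6):
    """Leaky integrate-and-fire neuron driven by weighted input spikes.
    The membrane potential decays by exp(-1/tau) each step, integrates
    incoming spikes, and resets to zero after crossing the threshold."""
    decay = math.exp(-1.0 / tau)
    counts = Counter(t for t in spike_times if t is not None)
    v, out = 0.0, []
    for t in range(t_max):
        v = v * decay + w * counts.get(t, 0)
        if v >= v_th:
            out.append(t)
            v = 0.0
    return out

# Two strong, nearly coincident features push the neuron over threshold;
# a single weak (hence late) feature decays away without firing.
strong = lif_run([latency_encode(0.90), latency_encode(0.85)])
weak = lif_run([latency_encode(0.20)])
```

The design choice this illustrates is that timing carries information: coincident early spikes from strong features sum before the leak dissipates them, so the neuron fires, while isolated weak inputs do not.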
14 pages, 286 KB  
Article
Trusted Yet Flexible: High-Level Runtimes for Secure ML Inference in TEEs
by Nikolaos-Achilleas Steiakakis and Giorgos Vasiliadis
J. Cybersecur. Priv. 2026, 6(1), 23; https://doi.org/10.3390/jcp6010023 - 27 Jan 2026
Abstract
Machine learning inference is increasingly deployed on shared and cloud infrastructures, where both user inputs and model parameters are highly sensitive. Confidential computing promises to protect these assets using Trusted Execution Environments (TEEs), yet existing TEE-based inference systems remain fundamentally constrained: they rely almost exclusively on low-level, memory-unsafe languages to enforce confinement, sacrificing developer productivity, portability, and access to modern ML ecosystems. At the same time, mainstream high-level runtimes, such as Python, are widely considered incompatible with enclave execution due to their large memory footprints and unsafe model-loading mechanisms that permit arbitrary code execution. To bridge this gap, we present the first Python-based ML inference system that executes entirely inside Intel SGX enclaves while safely supporting untrusted third-party models. Our design enforces standardized, declarative model representations (ONNX), eliminating deserialization-time code execution and confining model behavior through interpreter-mediated execution. The entire inference pipeline (including model loading, execution, and I/O) remains enclave-resident, with cryptographic protection and integrity verification throughout. Our experimental results show that Python incurs modest overheads for small models (≈17%) and outperforms a low-level baseline on larger workloads (97% vs. 265% overhead), demonstrating that enclave-resident high-level runtimes can achieve competitive performance. Overall, our findings indicate that Python-based TEE inference is practical and secure, enabling the deployment of untrusted models with strong confidentiality and integrity guarantees while maintaining developer productivity and ecosystem advantages. Full article
(This article belongs to the Section Security Engineering & Applications)
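The gist of eliminating deserialization-time code execution is to accept only a declarative format and verify integrity before any parsing. A hypothetical sketch using a pinned SHA-256 digest and a crude format check follows; the paper's SGX plumbing, attestation, and ONNX parsing are not shown, and all names here are illustrative.

```python
import hashlib

def verify_model_bytes(model_bytes, pinned_sha256):
    """Integrity gate: compare the candidate model's SHA-256 against a
    digest pinned at provisioning time, before any parsing happens."""
    digest = hashlib.sha256(model_bytes).hexdigest()
    if digest != pinned_sha256:
        raise ValueError("model integrity check failed")
    return model_bytes

def rejects_pickle(model_bytes):
    """Rough structural check: ONNX files are protobuf messages, never
    Python pickles; refuse anything carrying the pickle protocol marker."""
    return not model_bytes.startswith(b"\x80")

# Usage: the digest is pinned at provisioning time; verification would
# run inside the enclave before the model is handed to the interpreter.
blob = b"example-model-bytes"            # stand-in for an ONNX payload
pinned = hashlib.sha256(blob).hexdigest()
ok = verify_model_bytes(blob, pinned)
safe = rejects_pickle(blob)
```

Contrast this with pickle-based loading, where merely deserializing a file can execute attacker-supplied code: a declarative format plus an integrity gate means the worst a malformed model can do is fail to parse.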