Search Results (90)

Search Parameters:
Keywords = offline streaming

19 pages, 10903 KB  
Article
Robot-Driven Calibration and Accuracy Assessment of Meta Quest 3 Inside-Out Tracking Using a TECHMAN TM5-900 Collaborative Robot
by Josep Lopez-Xarbau, Marco Antonio Rodriguez-Fernandez, Marcos Faundez-Zanuy, Jordi Calvo-Sanz and Juan Jose Garcia-Tirado
Sensors 2026, 26(8), 2285; https://doi.org/10.3390/s26082285 - 8 Apr 2026
Viewed by 343
Abstract
We present a systematic evaluation of the positional and rotational tracking accuracy of the Meta Quest 3 mixed-reality headset using a TECHMAN TM5-900 collaborative robot (±0.05 mm repeatability) as a highly repeatable robot-driven reference. The headset was rigidly attached to the robot’s tool flange and subjected to single-axis translational motions (200 mm along X, Y, and Z) and rotational motions (Roll ± 65°, Pitch ± 85°, and Yaw ± 85°). Each test was repeated three times, and the resulting trajectories were averaged to improve statistical robustness. Both data sources were integrated into a single Python-based application running on the same computer. The headset streamed its data via UDP, while the robot, implemented as an ROS2 node, published its data to the same host. This configuration enabled simultaneous acquisition of both streams, ensuring temporal consistency without the need for offline interpolation. All comparisons were performed in a relative reference frame, thereby avoiding the need for absolute hand–eye calibration. Coordinate-frame alignment was achieved using Singular Value Decomposition (SVD)-based rigid-body Procrustes analysis. Over 2848 synchronized samples spanning 151.46 s, the Meta Quest 3 achieved a mean translational RMSE of 0.346 mm (3D RMSE = 0.621 mm) and a mean rotational RMSE of 0.143°, with Pearson correlation coefficients greater than 0.9999 on all axes. These results show sub-millimeter positional tracking and sub-degree rotational tracking under controlled conditions, supporting the potential of the Meta Quest 3 for precision-oriented mixed-reality applications in industrial and research settings. Full article
(This article belongs to the Section Sensors and Robotics)
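The SVD-based rigid-body Procrustes alignment described in this abstract can be sketched in a few lines of Python (an illustrative reconstruction, not the authors' code; the function name `rigid_align` is ours):

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid alignment (rotation R, translation t) mapping
    source points onto target points via SVD-based Procrustes analysis."""
    src = np.asarray(source, dtype=float)
    tgt = np.asarray(target, dtype=float)
    src_c = src - src.mean(axis=0)           # centre both point clouds
    tgt_c = tgt - tgt.mean(axis=0)
    H = src_c.T @ tgt_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Applying the recovered (R, t) to one trajectory before computing RMSE is what lets the comparison stay in a relative frame, without absolute hand-eye calibration.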

23 pages, 352 KB  
Article
Performance Comparison of Python-Based Complex Event Processing Engines for IoT Intrusion Detection: Faust Versus Streamz
by Maryam Abbasi, Filipe Cardoso, Paulo Váz, José Silva, Filipe Sá and Pedro Martins
Computers 2026, 15(3), 200; https://doi.org/10.3390/computers15030200 - 23 Mar 2026
Viewed by 418
Abstract
The proliferation of Internet of Things (IoT) devices has intensified the need for efficient real-time anomaly and intrusion detection, making the selection of an appropriate Complex Event Processing (CEP) engine a critical architectural decision for security-aware data pipelines. Python-based CEP frameworks offer compelling advantages through the seamless integration with data science and machine learning ecosystems; however, rigorous comparative evaluations of such frameworks under realistic IoT security workloads remain absent from the literature. This study presents the first systematic comparative evaluation of Faust and Streamz—two Python-native CEP engines representing fundamentally different architectural philosophies—specifically in the context of IoT network intrusion detection. Faust was selected for its actor-based stateful processing model with native Kafka integration and distributed table support, while Streamz was selected for its reactive, lightweight pipeline design targeting high-throughput stateless processing, making them representative of the two dominant paradigms in Python stream processing. Although both engines target different application niches, their performance characteristics under realistic CEP workloads have never been rigorously compared, leaving practitioners without empirical guidance. The primary evaluation employs an IoT network intrusion dataset comprising 583,485 events from 83 heterogeneous devices. To assess whether the observed performance characteristics are specific to this single dataset or generalize across different workload profiles, a secondary IoT-adjacent benchmark is included: the PaySim financial transaction dataset (6.4 million records), selected because its event schema, fraud-pattern temporal structure, and volume differ substantially from the intrusion dataset, providing a stress test for cross-workload robustness rather than a claim of domain equivalence. 
A second IoT-specific intrusion dataset (such as TON_IoT or Bot-IoT) would provide more directly comparable validation; this is identified as a priority for future work. The load levels used in scalability experiments (up to 5000 events per second) intentionally exceed the dataset’s natural rate to stress-test each engine’s architectural ceiling and identify saturation thresholds relevant to large-scale or multi-sensor IoT deployments. We conducted controlled experiments with comprehensive statistical analysis. Our results demonstrate that Streamz achieves superior throughput at 4450 events per second with 89% efficiency and minimal resource consumption (40 MB memory, 12 ms median latency), while Faust provides robust intrusion pattern detection with 93–98% accuracy and stable, predictable resource utilization (1.4% CPU standard deviation). A multi-framework comparison including Apache Kafka Streams and offline scikit-learn baselines confirms that Faust achieves detection quality competitive with JVM-based alternatives (Faust: 96.2%; Kafka Streams: 96.8%; absolute difference of 0.6 percentage points, not statistically significant at p=0.318) while retaining the Python ecosystem advantages. Statistical analysis confirms significant performance differences across all metrics (p<0.001, Cohen’s d>0.8). Critical scalability thresholds are identified: Streamz maintains efficiency above 95% up to 3500 events per second, while Faust degrades beyond 2500 events per second. These findings provide IoT security engineers and system architects with actionable, empirically grounded guidance for CEP engine selection, establish reproducible benchmarking methodology applicable to future Python-based stream processing evaluations, and advance theoretical understanding of the accuracy–throughput trade-off in stateful versus stateless Python CEP architectures. Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
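The stateful-versus-stateless contrast at the heart of the Faust/Streamz comparison can be illustrated with plain Python generators (a toy sketch, not either engine's API; the event fields and threshold below are invented for illustration):

```python
from collections import defaultdict

def stateless_filter(events, predicate):
    """Streamz-style stateless stage: each event is handled independently,
    so no per-key memory is needed and throughput scales easily."""
    for ev in events:
        if predicate(ev):
            yield ev

def stateful_counter(events, key, threshold=2):
    """Faust-style stateful stage: a table accumulates per-key counts and
    flags keys whose count reaches the threshold (toy intrusion pattern)."""
    table = defaultdict(int)
    for ev in events:
        table[ev[key]] += 1
        if table[ev[key]] == threshold:
            yield ev[key]
```

The stateful variant must keep (and in a distributed engine, replicate) the table, which is one root of the accuracy-throughput trade-off the paper quantifies.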

23 pages, 5079 KB  
Article
Dual-Stream Transformer with Kalman-Based Sensor Fusion for Wearable Fall Detection
by Abheek Pradhan, Sana Alamgeer, Rakesh Suvvari, Syed Tousiful Haque and Anne H. H. Ngu
Big Data Cogn. Comput. 2026, 10(3), 90; https://doi.org/10.3390/bdcc10030090 - 17 Mar 2026
Viewed by 515
Abstract
Wearable fall detection systems face a fundamental challenge: while gyroscope data provide valuable orientation cues, naively combining raw gyroscope and accelerometer signals can degrade performance due to noise contamination. To overcome this challenge, we present a dual-stream transformer architecture that incorporates (i) Kalman-based sensor fusion to convert noisy gyroscope angular velocities into stable orientation estimates (roll, pitch, yaw), maintaining an internal state of body pose, and (ii) processing accelerometer and orientation streams in separate encoder pathways before fusion to prevent cross-modal interference. Our architecture further integrates Squeeze-and-Excitation channel attention and Temporal Attention Pooling to focus on fall-critical temporal patterns. Evaluated on the SmartFallMM dataset using 21-fold leave-one-subject-out cross-validation, the dual-stream Kalman transformer achieves 91.10% F1, outperforming single-stream Kalman transformers (89.80% F1) by 1.30% and single-stream baseline transformers (88.96% F1) by 2.14%. We further evaluate the model in real time using a watch-based SmartFall App on five participants, maintaining an average F1 score of 83% and an accuracy of 90%. These results indicate robust performance in both offline and real-world deployment settings, establishing a new state-of-the-art for inertial-measurement-unit-based fall detection on commodity smartwatch devices. Full article
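The Kalman-based conversion of noisy angular rates into stable orientation estimates can be illustrated with a scalar filter for a single angle (a simplified sketch of the general idea; the paper's filter tracks roll, pitch, and yaw jointly, and the noise parameters here are placeholders):

```python
def kalman_angle(gyro_rates, accel_angles, dt=0.02, q=0.01, r=0.5):
    """Scalar Kalman filter: predict the angle by integrating the gyro
    rate, then correct with the noisy but drift-free accelerometer angle."""
    angle, p = accel_angles[0], 1.0
    out = []
    for rate, meas in zip(gyro_rates, accel_angles):
        angle += rate * dt           # predict from angular velocity
        p += q                       # prediction uncertainty grows
        k = p / (p + r)              # Kalman gain
        angle += k * (meas - angle)  # correct with accelerometer angle
        p *= (1 - k)
        out.append(angle)
    return out
```

Feeding the filtered orientation, rather than raw gyro rates, into its own encoder pathway is what the dual-stream design uses to keep gyro noise from contaminating the accelerometer stream.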

23 pages, 2010 KB  
Article
Visibility-Prior Guided Dual-Stream Mixture-of-Experts for Robust Facial Expression Recognition Under Complex Occlusions
by Siyuan Ma, Long Liu, Mingzhi Cheng, Peijun Qin, Zixuan Han, Cui Chen, Shizhao Yang and Hongjuan Wang
Electronics 2026, 15(6), 1230; https://doi.org/10.3390/electronics15061230 - 16 Mar 2026
Viewed by 326
Abstract
Facial occlusion induces sample-wise reliability shifts in facial expression recognition (FER), where the usefulness of global context and local discriminative cues varies dramatically with the amount of visible facial information. Existing occlusion-robust FER studies often evaluate under limited or homogeneous occlusion settings and commonly adopt static fusion strategies, which are insufficient for complex and heterogeneous real-world occlusions. In this work, we establish a rigorous occlusion robustness evaluation protocol by constructing a fixed offline test benchmark with diverse synthetic occlusion patterns (e.g., masks, sunglasses, texture blocks, and mixed occlusions) on top of public FER test splits. We further propose a Dual-Stream Adaptive Weighting Mixture-of-Experts framework (DS-AW-MoE) that fuses a global contextual expert and a local discriminative expert via an occlusion-aware weighting network. Crucially, we introduce a facial visibility assessment as a task-agnostic prior to explicitly regulate expert contributions, enabling dynamic re-allocation of model capacity according to input-dependent feature reliability. Extensive experiments on public datasets and the constructed occlusion benchmark demonstrate that DS-AW-MoE achieves more stable recognition under complex occlusions, characterized by a smaller and more consistent performance drop. To support reproducibility under dataset license constraints, we will release an anonymous, fully runnable repository containing the complete occlusion synthesis pipeline, evaluation protocol, and configuration files, allowing researchers to reproduce the benchmark after obtaining the original datasets. Full article
(This article belongs to the Topic Computer Vision and Image Processing, 3rd Edition)
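The occlusion-aware weighting idea — gating two experts by a visibility prior — can be sketched as follows (a toy illustration only; the direction and form of the gating are our assumptions, not the learned DS-AW-MoE weighting network):

```python
def fuse_experts(global_logits, local_logits, visibility):
    """Visibility-prior gating: a score in [0, 1] shifts weight toward the
    local discriminative expert when more of the face is visible, and
    toward the global-context expert under heavy occlusion (assumed rule)."""
    w = visibility  # hypothetical linear gate; the paper learns this weighting
    return [w * l + (1 - w) * g for g, l in zip(global_logits, local_logits)]
```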

23 pages, 6668 KB  
Article
Development of a Visual SLAM-Based Autonomous UAV System for Greenhouse Plant Monitoring
by Jing-Heng Lin and Ta-Te Lin
Drones 2026, 10(3), 205; https://doi.org/10.3390/drones10030205 - 15 Mar 2026
Viewed by 733
Abstract
Autonomous monitoring is essential for precision agriculture in greenhouses, yet deploying unmanned aerial vehicles (UAVs) in confined, GPS-denied environments remains limited by payload, power, and cost constraints. This study developed and validated an autonomous UAV system for reliable, low-cost operation in such conditions. The proposed system employs a dual-link edge-computing architecture: a lightweight onboard controller handles flight control and sensor acquisition, while visual simultaneous localization and mapping (V-SLAM) is offloaded to an edge computer via the FPV video link. Phenotyping (flower detection and tracking/counting) is performed offline from the side-view RGB stream and does not participate in the flight control loop. Using muskmelon (Cucumis melo L.) flower development as a case study, the UAV autonomously executed daily missions for 27 days in a commercial greenhouse, performing flower detection and tracking to monitor phenological dynamics. Localization and control accuracy were evaluated against a validated UWB reference system, achieving 5.4–8.0 cm 2D RMSE for trajectory tracking and 12.7 cm translation RMSE for greenhouse mapping. This work demonstrates a practical architecture for autonomous monitoring in GPS-denied agricultural environments, with operational boundaries characterized through the sustained field deployment. The system’s design principles may extend to other indoor or communication-limited scenarios requiring lightweight, intelligent robotic operation. Full article
(This article belongs to the Section Drones in Agriculture and Forestry)
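The trajectory-accuracy metric reported here (2D RMSE against a UWB reference) is straightforward to compute (a minimal sketch, assuming time-aligned position pairs):

```python
import math

def rmse_2d(traj, ref):
    """Root-mean-square 2D position error of an estimated trajectory
    against a reference track (e.g., V-SLAM vs. UWB ground truth)."""
    se = [(x - rx) ** 2 + (y - ry) ** 2
          for (x, y), (rx, ry) in zip(traj, ref)]
    return math.sqrt(sum(se) / len(se))
```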

31 pages, 23615 KB  
Article
A Memory-Efficient Class-Incremental Learning Framework for Remote Sensing Scene Classification via Feature Replay
by Yunze Wei, Yuhan Liu, Ben Niu, Xiantai Xiang, Jingdun Lin, Yuxin Hu and Yirong Wu
Remote Sens. 2026, 18(6), 896; https://doi.org/10.3390/rs18060896 - 15 Mar 2026
Viewed by 365
Abstract
Most existing deep learning models for remote sensing scene classification (RSSC) adopt an offline learning paradigm, where all classes are jointly optimized on fixed-class datasets. In dynamic real-world scenarios with streaming data and emerging classes, such paradigms are inherently prone to catastrophic forgetting when models are incrementally trained on new data. Recently, a growing number of class-incremental learning (CIL) methods have been proposed to tackle these issues, some of which achieve promising performance by rehearsing training data from previous tasks. However, implementing such a strategy in real-world scenarios is often challenging, as the requirement to store historical data frequently conflicts with strict memory constraints and data privacy protocols. To address these challenges, we propose a novel memory-efficient feature-replay CIL framework (FR-CIL) for RSSC that retains compact feature embeddings, rather than raw images, as exemplars for previously learned classes. Specifically, a progressive multi-scale feature enhancement (PMFE) module is proposed to alleviate representation ambiguity. It adopts a progressive construction scheme to enable fine-grained and interactive feature enhancement, thereby improving the model’s representation capability for remote sensing scenes. Then, a specialized feature calibration network (FCN) is trained in a transductive learning paradigm with manifold consistency regularization to adapt stored feature descriptors to the updated feature space, thereby effectively compensating for feature space drift and enabling a unified classifier. Following feature calibration, a bias rectification (BR) strategy is employed to mitigate prediction bias by exclusively optimizing the classifier on a balanced exemplar set. As a result, this memory-efficient CIL framework not only addresses data privacy concerns but also mitigates representation drift and classifier bias. 
Extensive experiments on public datasets demonstrate the effectiveness and robustness of the proposed method. Notably, FR-CIL outperforms the leading state-of-the-art CIL methods in mean accuracy by margins of 3.75%, 3.09%, and 2.82% on the six-task AID, seven-task RSI-CB256, and nine-task NWPU-45 datasets, respectively. At the same time, it reduces memory storage requirements by over 94.7%, highlighting its strong potential for real-world RSSC applications under strict memory constraints. Full article
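The core idea of feature replay — retaining compact per-class embeddings instead of raw images as exemplars — can be sketched as a small exemplar store (illustrative only; the class name and fixed-capacity policy are ours, not FR-CIL's):

```python
class FeatureReplayMemory:
    """Stores low-dimensional feature embeddings per class instead of raw
    images, cutting exemplar storage and avoiding retention of raw data."""

    def __init__(self, per_class=5):
        self.per_class = per_class
        self.store = {}

    def add(self, label, feature):
        # Keep at most `per_class` embeddings per previously learned class.
        feats = self.store.setdefault(label, [])
        if len(feats) < self.per_class:
            feats.append(list(feature))

    def replay(self):
        """Yield (label, feature) pairs for rehearsal on the next task."""
        for label, feats in self.store.items():
            for f in feats:
                yield label, f
```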

17 pages, 840 KB  
Article
Attention-Enhanced LSTM for Real-Time Curling Stone Trajectory Prediction on Resource-Constrained Devices
by Guanyu Chen, Shimpei Aihara and Yoshinari Takegawa
Appl. Sci. 2026, 16(5), 2612; https://doi.org/10.3390/app16052612 - 9 Mar 2026
Viewed by 336
Abstract
Real-time trajectory forecasting for curling stones is essential for on-ice decision support, yet prior work often emphasizes offline analysis, fixed-window predictors, or physics-driven models that require additional measurements, and it rarely reports end-to-end feasibility under edge-computing constraints (latency and memory). This leaves a practical gap between accurate trajectory reconstruction and deployable rink-side guidance. To bridge this gap, we propose an online forecaster based on low-dimensional (x,y) coordinate streams and a lightweight attention-enhanced Long Short-Term Memory (LSTM) architecture optimized for edge devices. The model uses a four-second sliding window (240 frames at 59.94 Hz) to predict fifteen seconds of future positions (900 frames) in a single multi-step forward pass, and an overlapping publication scheme is adopted to retain longer temporal context and stabilize continuous updates. We further provide a TensorFlow Lite (TFLite) conversion and quantization workflow to support on-device inference. Quantitatively, experiments on the CurlTracer dataset (1033 throws at 59.94 Hz) show that the proposed attention–LSTM achieves trajectory-level MAE/MdAE of 0.25/0.22 m over the full prediction horizon, improving over a plain LSTM (0.30/0.24 m) and a physics-based pivot-slide baseline (3.52/3.54 m). At two checkpoints, the first-step MAE/MdAE are 0.14/0.11 m and the mid-step MAE/MdAE are 0.21/0.18 m. For real-time feasibility, on a Raspberry Pi 4B the per-window latency is approximately 0.25 s (including I/O and post-processing), while CPU benchmarks show that TFLite variants provide 7–8× speedups over the original Keras runtime with only minor accuracy loss (e.g., window-level MAE 0.30–0.41 m across FP32/DRQ/FP16/INT8). Qualitatively, representative trajectory visualizations show good agreement in near/mid horizons and reasonable stopping-region guidance, supporting integration with a stone-mounted interface for actionable feedback. 
Full article
(This article belongs to the Special Issue Advances in Winter Sports and Data Science)
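The 4 s input / 15 s horizon windowing described above (240 past frames predicting 900 future frames at 59.94 Hz) amounts to a standard sliding-window split of the coordinate stream (a minimal sketch; the defaults match the paper's setup, and the test uses small numbers):

```python
def sliding_windows(stream, past=240, future=900, stride=1):
    """Split a coordinate stream into (input window, target horizon)
    pairs: `past` frames of history predict the next `future` frames,
    produced in a single multi-step pass per window."""
    pairs = []
    for start in range(0, len(stream) - past - future + 1, stride):
        pairs.append((stream[start:start + past],
                      stream[start + past:start + past + future]))
    return pairs
```

An overlapping stride (stride < past) is what lets consecutive predictions retain longer temporal context, as the abstract's overlapping publication scheme describes.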

50 pages, 3734 KB  
Article
DT-LCAF: Digital Twin-Enabled Life Cycle Assessment Framework for Real-Time Embodied Carbon Optimization in Smart Building Construction
by Naif Albelwi
Sustainability 2026, 18(5), 2321; https://doi.org/10.3390/su18052321 - 27 Feb 2026
Viewed by 580
Abstract
The construction sector contributes approximately 39% of global carbon emissions, with embodied carbon—emissions from material extraction, manufacturing, transportation, and construction—representing a systematically underestimated yet increasingly critical component of building life cycle environmental impacts. Traditional Life Cycle Assessment (LCA) methods suffer from static database dependencies, delayed feedback cycles, and limited integration with active construction decision-making, creating a fundamental gap between environmental assessment and construction operations. This paper presents the Digital Twin-Enabled Life Cycle Assessment Framework (DT-LCAF), a dynamic construction-phase embodied carbon accounting system aligned with the EN 15978 standard (stages A1–A5) that integrates Building Information Modeling (BIM), Internet of Things (IoT) sensor networks, and machine learning designed to support real-time sustainability decision-making during smart building construction, with computational performance validated through the offline processing of historical datasets. The framework introduces two enabling mechanisms: (1) a Multi-Scale Carbon Prediction Network (MSCPN) employing hierarchical graph attention networks to capture material interdependencies across component, system, and building scales; and (2) a Reinforcement Learning-based Carbon Optimization Engine (RL-COE) that generates constraint-aware recommendations for material substitution, supplier selection, and construction sequencing while respecting structural, economic, and temporal constraints. 
Experimental evaluation employs two complementary validation strategies using proxy embodied carbon labels (not ground-truth construction measurements): embodied carbon prediction accuracy is assessed using proxy carbon labels derived from the CBECS dataset (5900 commercial buildings) combined with the ICE Database v3.0 emission factors, achieving a 10.24% MAPE, representing a 23.7% improvement over the best-performing baseline in predicting these proxy estimates; temporal responsiveness and streaming data ingestion capabilities are validated using the Building Data Genome Project 2 (1636 buildings, 3053 energy meters). The RL-COE optimization engine demonstrates an 18.4% mean carbon reduction rate within the proxy label framework across building types while maintaining cost and schedule feasibility. A BIM-based case study illustrates the framework’s construction-phase update loop, showing how embodied carbon estimates evolve dynamically as construction progresses. The limitations regarding the proxy-based nature of embodied carbon labels and the absence of ground-truth construction-phase measurements are explicitly discussed. The framework contributes to smart city sustainability by enabling scalable, data-driven embodied carbon intelligence across building portfolios. All quantitative results are based on proxy embodied carbon estimates derived from building characteristics and standard emission factor databases, rather than measured project data. The reported performance therefore demonstrates a proof-of-concept within the proxy system, and real-project, measurement-based validation remains future work. Full article
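The A1–A5 accounting that DT-LCAF updates dynamically reduces, at its core, to quantity-times-emission-factor sums per material (a minimal static sketch; the material names and factors below are invented, and real factors would come from a database such as ICE v3.0):

```python
def embodied_carbon(bill_of_materials, emission_factors):
    """A1-A5 embodied carbon (kgCO2e) as the sum of material quantity
    times its cradle-to-site emission factor."""
    return sum(qty * emission_factors[mat]
               for mat, qty in bill_of_materials.items())
```

In the framework's update loop, this sum would be recomputed whenever BIM quantities or IoT-observed deliveries change, which is what makes the estimate evolve as construction progresses.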

26 pages, 3681 KB  
Article
Intelligent Acquisition of Dynamic Targets via Multi-Source Information: A Fusion Framework Integrating Deep Reinforcement Learning with Evidence Theory
by Jiyao Yu, Bin Zhu, Yi Chen, Bo Xie, Xuanling Feng, Hongfei Yan, Jian Zeng and Runhua Wang
Remote Sens. 2026, 18(5), 689; https://doi.org/10.3390/rs18050689 - 26 Feb 2026
Viewed by 309
Abstract
Accurate acquisition of low-observable targets with a minimal radar cross-section (RCS) poses a significant challenge for multi-source remote sensing systems, such as integrated radar–electro-optical (REO) platforms, particularly in complex electromagnetic environments characterized by strong noise interference and a high false-alarm rate. Conventional methods, which often treat data association and fusion from heterogeneous sensors as separate, offline processes, struggle with the dynamic uncertainties and real-time decision requirements of such scenarios. To address these limitations, this paper proposes a novel Evidence–Reinforcement Learning-based Decision and Control (ERL-DC) framework. It operates through a closed-loop architecture consisting of three core modules: A static assessment model for initial target prioritization, a Dempster–Shafer (D–S) evidence-based multi-source data decision generator for dynamic information fusion and uncertainty-aware target selection, and a Deep Reinforcement Learning (DRL) controller for noise-robust sensor steering. A high-fidelity simulation environment was developed to model the multi-source data stream, encompassing radar detection with clutter and false targets, as well as the physical constraints of the electro-optical (EO) servo system. Based on the averaged results from multiple Monte Carlo simulations, the proposed ERL-DC framework reduced the Average Decision Time (ADT) from 7.51 s to 4.53 s, corresponding to an absolute reduction of 2.98 s when compared to the conventional method integrating threshold logic with Model Predictive Control (MPC). Furthermore, the Net Discrimination Accuracy (NDA), derived from the statistical outcomes across all the simulation runs, exhibited an absolute increase of 37.8 percentage points, rising from 57.8% to 95.6%. These results indicate that ERL-DC achieves a more favorable trade-off in terms of scheduling efficiency, decision robustness, and resource utilization. 
The primary contribution is an intelligent, closed-loop architecture that tightly couples high-level evidential reasoning for multi-source data fusion with low-level adaptive control. Within the simulated environment characterized by clutter, false targets, and angular measurement noise, ERL-DC demonstrates improved target discrimination accuracy and decision efficiency compared to conventional methods. Future work will focus on online parameter adaptation and validation on physical platforms. Full article
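The Dempster–Shafer fusion step can be illustrated with the classic rule of combination for two mass functions (a generic textbook sketch, not the paper's decision generator; focal elements are represented as frozensets):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination: multiply masses of intersecting
    focal elements and normalise out the conflicting (empty) mass."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict  # renormalisation constant
    return {h: v / k for h, v in combined.items()}
```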

25 pages, 5757 KB  
Article
A Device-Free Human Detection System Using 2.4 GHz Wireless Networks and an RSSI Distribution-Based Method with Autonomous Threshold
by Charernkiat Pochaiya, Apidet Booranawong, Dujdow Buranapanichkit, Kriangkrai Tassanavipas and Hiroshi Saito
Electronics 2026, 15(2), 491; https://doi.org/10.3390/electronics15020491 - 22 Jan 2026
Viewed by 596
Abstract
A device-free human detection system based on a received signal strength indicator (RSSI) monitors and analyzes changes in RSSI signals to detect human movements in a wireless network. This study proposes and implements a real-time, device-free human detection system based on an RSSI distribution-based detection method with an autonomous threshold. The novelty and contribution of our solution are that the RSSI distribution concept is considered and used to calculate the optimal threshold setting for human detection, while thresholds can be automatically determined from RSSI data streams gathered from test environments. The proposed system can efficiently work without requiring an offline phase, as introduced in many existing works in the research literature. Experiments using 2.4 GHz IEEE 802.15.4 technology have been carried out in indoor environments in two laboratory rooms with different numbers of wireless links, human movement patterns, and movement speeds. Experimental results show that, in all test scenarios, the proposed method can monitor and detect human movement in a wireless network in real time. It outperforms a comparative method and achieves high accuracy (i.e., 100% detection accuracy) with a low computational complexity requirement. Full article
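The autonomous-threshold idea — deriving detection bounds directly from the RSSI distribution of the incoming stream rather than from an offline training phase — can be sketched as follows (a simplified illustration; the paper's method is more elaborate, and `k` here is a placeholder):

```python
import statistics

def autonomous_threshold(rssi_stream, k=3.0):
    """Detection bounds derived from the stream's own RSSI distribution:
    samples beyond k standard deviations of the baseline flag movement."""
    mu = statistics.fmean(rssi_stream)
    sigma = statistics.pstdev(rssi_stream)
    return mu - k * sigma, mu + k * sigma

def detect(sample, lo, hi):
    """Flag a human-movement event when RSSI leaves the quiet band."""
    return not (lo <= sample <= hi)
```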

21 pages, 323 KB  
Article
PhishCluster: Real-Time, Density-Based Discovery of Malicious URL Campaigns from Semantic Embeddings
by Dimitrios Karapiperis, Georgios Feretzakis and Sarandis Mitropoulos
Information 2026, 17(1), 64; https://doi.org/10.3390/info17010064 - 9 Jan 2026
Viewed by 559
Abstract
The proliferation of algorithmically generated malicious URLs has overwhelmed traditional threat intelligence systems, necessitating a paradigm shift from reactive, single-instance analysis to proactive, automated campaign discovery. Existing systems excel at finding semantically similar URLs given a known malicious seed but fail to provide a real-time, macroscopic view of emerging and evolving attack campaigns from high-velocity data streams. This paper introduces PhishCluster, a novel framework designed to bridge this critical gap. PhishCluster implements a two-phase, online–offline architecture that synergistically combines large-scale Approximate Nearest Neighbor (ANN) search with advanced density-based clustering. The online phase employs an ANN-accelerated maintenance algorithm to process a stream of URL embeddings at unprecedented throughput, summarizing the data into compact, evolving Campaign Micro-Clusters (CMCs). The offline, on-demand phase then applies a hierarchical density-based algorithm to these CMCs, enabling the discovery of arbitrarily shaped, varying-density campaigns without prior knowledge of their number. Our comprehensive experimental evaluation on a synthetic billion-point dataset, designed to mimic real-world campaign dynamics, demonstrates that PhishCluster’s architecture resolves the fundamental trade-off between speed and quality in streaming data analysis. The results validate that PhishCluster achieves an order-of-magnitude improvement in processing throughput over state-of-the-art streaming clustering baselines while simultaneously attaining a superior clustering quality and campaign detection fidelity. Full article
(This article belongs to the Section Information and Communications Technology)
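The online phase's micro-cluster maintenance can be sketched as nearest-centroid assignment with a radius guard (a toy version, without the ANN index, decay, or cluster summaries that PhishCluster's CMCs would need at billion-point scale):

```python
def assign_to_micro_clusters(embeddings, radius=1.0):
    """Online micro-clustering sketch: each embedding joins the nearest
    centroid within `radius` (running-mean update), else seeds a new
    micro-cluster. Returns a list of [centroid, count] summaries."""
    clusters = []
    for e in embeddings:
        best, best_d = None, radius
        for c in clusters:
            d = sum((a - b) ** 2 for a, b in zip(e, c[0])) ** 0.5
            if d <= best_d:
                best, best_d = c, d
        if best is None:
            clusters.append([list(e), 1])
        else:
            n = best[1]
            best[0] = [(a * n + b) / (n + 1) for a, b in zip(best[0], e)]
            best[1] = n + 1
    return clusters
```

The offline phase would then run density-based clustering over these compact summaries instead of the raw stream, which is where the speed/quality trade-off is resolved.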

23 pages, 998 KB  
Article
A SIEM-Integrated Cybersecurity Prototype for Insider Threat Anomaly Detection Using Enterprise Logs and Behavioural Biometrics
by Mohamed Salah Mohamed and Abdullahi Arabo
Electronics 2026, 15(1), 248; https://doi.org/10.3390/electronics15010248 - 5 Jan 2026
Abstract
Insider threats remain a serious concern for organisations in both public and private sectors. Detecting anomalous behaviour in enterprise environments is critical for preventing insider incidents. While many prior studies demonstrate promising results using deep learning on offline datasets, few address real-time operationalisation or calibrated alert control within a Security Information and Event Management (SIEM) workflow. This paper presents a SIEM-integrated prototype that fuses the Computer Emergency Response Team Insider Threat Test Dataset (CERT) enterprise logs (Logon, Device, HTTP, and Email) with behavioural biometrics from the Balabit mouse dynamics dataset. Per-modality one-dimensional convolutional neural network (1D CNN) branches are trained independently using imbalance-aware strategies, including downsampling, class weighting, and focal loss. A unified 20 × N feature schema ensures train–serve parity and consistent feature validation during live inference. Post-training calibration using Platt and isotonic regression enables analyst-controlled threshold tuning and stable alert budgeting inside the SIEM. The models are deployed in Splunk’s Machine Learning Toolkit (MLTK), where dashboards visualise anomaly timelines, risky users or hosts, and cross-stream overlaps. Evaluation emphasises operational performance, precision–recall balance, calibration stability, and throughput rather than headline accuracy. Results show calibrated, controllable alert volumes: for Device, precision ≈0.70 at recall ≈0.30 (PR-AUC = 0.468, ROC-AUC = 0.949); for Logon, ROC-AUC = 0.936 with an ultra-low false-positive rate at a conservative threshold. Batch CPU inference sustains ≈70.5 k windows/s, confirming real-time feasibility. This study’s main contribution is to demonstrate a calibrated, multi-modal CNN framework that integrates directly within a live SIEM pipeline. It provides a reproducible path from offline anomaly detection research to Security Operations Centre (SOC)-ready deployment, bridging the gap between academic models and operational cybersecurity practice. Full article
(This article belongs to the Special Issue AI in Cybersecurity, 2nd Edition)
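The "analyst-controlled threshold tuning and stable alert budgeting" step can be made concrete with a small sketch: given calibrated per-window anomaly probabilities, pick the score threshold that caps the number of alerts at an analyst's budget. The budgeting rule below is our illustrative reading, not the authors' exact mechanism, and it assumes Platt or isotonic calibration has already been applied upstream.

```python
import numpy as np

def threshold_for_budget(calibrated_scores, budget):
    """Pick the score threshold that yields at most `budget` alerts.

    Assumes `calibrated_scores` are per-window anomaly probabilities that
    have already been through Platt or isotonic calibration; this simple
    top-k rule is an assumption standing in for the paper's alert-budgeting
    mechanism.
    """
    s = np.sort(np.asarray(calibrated_scores, dtype=float))[::-1]
    if budget >= len(s):
        return 0.0  # budget exceeds volume: alert on everything
    # Alert only on scores strictly above the (budget+1)-th largest value.
    return float(s[budget])
```

Because the scores are calibrated, the chosen threshold also reads directly as a probability, which is what makes the alert volume stable and explainable to an analyst.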

27 pages, 4420 KB  
Article
Real-Time Quarry Truck Monitoring with Deep Learning and License Plate Recognition: Weighbridge Reconciliation for Production Control
by Ibrahima Dia, Bocar Sy, Ousmane Diagne, Sidy Mané and Lamine Diouf
Mining 2025, 5(4), 84; https://doi.org/10.3390/mining5040084 - 14 Dec 2025
Abstract
This paper presents a real-time quarry truck monitoring system that combines deep learning and license plate recognition (LPR) for operational monitoring and weighbridge reconciliation. Rather than estimating load volumes directly from imagery, the system ensures auditable matching between detected trucks and official weight records. Deployed at quarry checkpoints, fixed cameras stream to an edge stack that performs truck detection, line-crossing counts, and per-frame plate Optical Character Recognition (OCR); a temporal voting and format-constrained post-processing step consolidates plate strings for registry matching. The system exposes a dashboard with auditable session bundles (model/version hashes, Region of Interest (ROI)/line geometry, thresholds, logs) to ensure replay and traceability between offline evaluation and live operations. We evaluate detection (precision, recall, mAP@0.5, and mAP@0.5:0.95), tracking (ID metrics), and LPR usability, and we quantify operational validity by reconciling estimated shift-level tonnage T against weighbridge tonnage T* using Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), R², and Bland–Altman analysis. Results show stable convergence of the detection models, reliable plate usability under varied optics (day, dusk, night, and dust), low-latency processing suitable for commodity hardware, and close agreement with weighbridge references at the shift level. The study demonstrates that vision-based counting coupled with plate linkage can provide regulator-ready KPIs and auditable evidence for production control in quarry operations. Full article
(This article belongs to the Special Issue Mine Management Optimization in the Era of AI and Advanced Analytics)
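The "temporal voting and format-constrained post-processing" step lends itself to a compact sketch: filter per-frame OCR reads through a plate-format regex, then majority-vote each character position across the surviving reads. The plate format below is hypothetical; a real deployment would substitute the local registry's pattern.

```python
import re
from collections import Counter

# Hypothetical plate format (two letters, four digits, two letters);
# the paper's actual format constraint is not specified in the abstract.
PLATE_RE = re.compile(r"^[A-Z]{2}\d{4}[A-Z]{2}$")

def consolidate_plate(frame_reads):
    """Consolidate noisy per-frame OCR reads into one plate string.

    Two-step sketch of temporal voting with a format constraint:
    drop reads that violate the plate regex, then take a per-position
    character majority vote over the remaining equal-length strings.
    """
    valid = [r for r in frame_reads if PLATE_RE.match(r)]
    if not valid:
        return None  # no frame produced a format-conforming read
    # The regex fixes the length, so we can vote column by column.
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*valid))
```

A single misread frame (a dropped character, a confused digit) is outvoted by the other frames of the same truck, which is what makes per-frame OCR usable for registry matching.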

41 pages, 2890 KB  
Article
STREAM: A Semantic Transformation and Real-Time Educational Adaptation Multimodal Framework in Personalized Virtual Classrooms
by Leyli Nouraei Yeganeh, Yu Chen, Nicole Scarlett Fenty, Amber Simpson and Mohsen Hatami
Future Internet 2025, 17(12), 564; https://doi.org/10.3390/fi17120564 - 5 Dec 2025
Abstract
Most adaptive learning systems personalize around content sequencing and difficulty adjustment rather than transforming instructional material within the lesson itself. This paper presents the STREAM (Semantic Transformation and Real-Time Educational Adaptation Multimodal) framework. This modular pipeline decomposes multimodal educational content into semantically tagged, pedagogically annotated units for regeneration into alternative formats while preserving source traceability. STREAM is designed to integrate automatic speech recognition, transformer-based natural language processing, and planned computer vision components to extract instructional elements from teacher explanations, slides, and embedded media. Each unit receives metadata, including time codes, instructional type, cognitive demand, and prerequisite concepts, designed to enable format-specific regeneration with explicit provenance links. For a predefined visual-learner profile, the system generates annotated path diagrams, two-panel instructional guides, and entity pictograms with complete back-link coverage. Ablation studies confirm that individual components contribute measurably to output completeness without compromising traceability. This paper reports results from a tightly scoped feasibility pilot that processes a single five-minute elementary STEM video offline under clean audio–visual conditions. We position the pilot’s limitations as testable hypotheses that require validation across diverse content domains, authentic deployments with ambient noise and bandwidth constraints, multiple learner profiles, including multilingual students and learners with disabilities, and controlled comprehension studies. The contribution is a transparent technical demonstration of feasibility and a methodological scaffold for investigating whether within-lesson content transformation can support personalized learning at scale. Full article
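To make the "semantically tagged, pedagogically annotated units" concrete, the sketch below models one unit with the metadata fields the abstract enumerates (time codes, instructional type, cognitive demand, prerequisites, provenance) and a stub regeneration that preserves the back-link. The schema and every field name are our assumptions, not STREAM's published data model.

```python
from dataclasses import dataclass, field

@dataclass
class ContentUnit:
    """One semantically tagged, pedagogically annotated unit.

    Field names mirror the metadata listed in the abstract; the exact
    schema is our assumption, not STREAM's.
    """
    text: str
    start_s: float                  # time code into the source video
    end_s: float
    instructional_type: str         # e.g. "definition", "worked-example"
    cognitive_demand: str           # e.g. "recall", "apply"
    prerequisites: list = field(default_factory=list)
    source_id: str = ""             # provenance back-link to the lesson

def regenerate_for_visual_learner(unit):
    """Regenerate a unit as a (stub) diagram spec, keeping the back-link."""
    return {"kind": "path-diagram",
            "label": unit.text,
            "provenance": (unit.source_id, unit.start_s, unit.end_s)}
```

The point of the provenance tuple is the "complete back-link coverage" the abstract reports: every regenerated artifact stays traceable to a specific span of the source lesson.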

41 pages, 6103 KB  
Article
H-RT-IDPS: A Hierarchical Real-Time Intrusion Detection and Prevention System for the Smart Internet of Vehicles via TinyML-Distilled CNN and Hybrid BiLSTM-XGBoost Models
by Ikram Hamdaoui, Chaymae Rami, Zakaria El Allali and Khalid El Makkaoui
Technologies 2025, 13(12), 572; https://doi.org/10.3390/technologies13120572 - 5 Dec 2025
Abstract
The integration of connected vehicles into smart city infrastructure introduces critical cybersecurity challenges for the Internet of Vehicles (IoV), where resource-constrained vehicles and powerful roadside units (RSUs) must collaborate for secure communication. We propose H-RT-IDPS, a hierarchical real-time intrusion detection and prevention system targeting two high-priority IoV security pillars: availability (traffic overload) and integrity/authenticity (spoofing), with spoofing evaluated across multiple subclasses (GAS, RPM, SPEED, and steering wheel). In the offline phase, deep learning and hybrid models were benchmarked on the vehicular CAN bus dataset CICIoV2024, with the BiLSTM-XGBoost hybrid chosen for its balance between accuracy and inference speed. Real-time deployment uses a TinyML-distilled CNN on vehicles for ultra-lightweight, low-latency detection, while RSU-level BiLSTM-XGBoost performs a deeper temporal analysis. A Kafka–Spark Streaming pipeline supports localized classification, prevention, and dashboard-based monitoring. In baseline, stealth, and coordinated modes, the evaluation achieved accuracy, precision, recall, and F1-scores all above 97%. The mean end-to-end inference latency was 148.67 ms, and the resource usage was stable. The framework remains robust in both high-traffic and low-frequency attack scenarios, enhancing operator situational awareness through real-time visualizations. These results demonstrate a scalable, explainable, and operator-focused IDPS well suited for securing SC-IoV deployments against evolving threats. Full article
(This article belongs to the Special Issue Research on Security and Privacy of Data and Networks)
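The "TinyML-distilled CNN" suggests standard knowledge distillation, which can be sketched as a temperature-softened KL term between teacher and student logits. This is the generic Hinton-style recipe under assumed hyperparameters (temperature, and the omission of the usual hard-label cross-entropy term), not the authors' training objective.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Soft-target distillation term: KL(teacher || student) at temperature T.

    Illustrates how an on-vehicle TinyML CNN could be distilled from the
    larger RSU-side model; the temperature and the dropped hard-label term
    are simplifications on our part.
    """
    p_t = softmax(teacher_logits, T)   # softened teacher distribution
    p_s = softmax(student_logits, T)   # softened student distribution
    # Scale by T^2 so gradients keep their magnitude as T grows.
    return float(T**2 * np.sum(p_t * (np.log(p_t) - np.log(p_s))))
```

The softened targets expose the teacher's inter-class similarity structure (e.g. which spoofing subclasses it finds confusable), which is the signal a small student can exploit.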
