Search Results (157)

Search Parameters:
Keywords = event-driven architecture

29 pages, 2920 KB  
Article
Advancing Energy Flexibility Protocols for Multi-Energy System Integration
by Haihang Chen, Fadi Assad and Konstantinos Salonitis
Energies 2026, 19(3), 588; https://doi.org/10.3390/en19030588 - 23 Jan 2026
Abstract
This study investigates the incorporation of a standardised flexibility protocol within a physics-based model to enable controllable demand-side flexibility in residential energy systems. A heating subsystem is developed using MATLAB/Simulink and Simscape, serving as a testbed for protocol-driven control within a Multi-Energy System (MES). A conventional thermostat controller is first established, followed by the implementation of an OpenADR event engine in Stateflow. Simulations conducted under consistent boundary conditions reveal that protocol-enabled control enhances system performance in several respects. It maintains a more stable and pronounced indoor–outdoor temperature differential, thereby improving thermal comfort. It also reduces fuel consumption by curtailing or shifting heat output during demand-response events, while remaining within acceptable comfort limits. Additionally, it improves operational stability by dampening high-frequency fluctuations in mdot_fuel. The resulting co-simulation pipeline offers a modular and reproducible framework for analysing the propagation of grid-level signals to device-level actions. The research contributes a simulation-ready architecture that couples standardised demand-response signalling with a physics-based MES model, alongside quantitative evidence that protocol-compliant actuation can deliver comfort-preserving flexibility in residential heating. The framework is readily extensible to other energy assets, such as cooling systems, electric vehicle charging, and combined heat and power (CHP), and is adaptable to additional protocols, thereby supporting future cross-vector investigations into digitally enabled energy flexibility. Full article
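
As a rough, hypothetical illustration of the protocol-driven control idea (a plain-Python sketch, not the authors' MATLAB/Stateflow implementation), a demand-response event can be modelled as a timed override of the thermostat setpoint:

```python
from dataclasses import dataclass

@dataclass
class DREvent:
    """Simplified demand-response event; fields loosely modelled on OpenADR signals (assumption)."""
    start_s: float          # event start, seconds
    duration_s: float       # event length, seconds
    offset_c: float         # curtailment: lower the heating setpoint by this many degrees C

def effective_setpoint(t_s: float, base_c: float, events: list[DREvent]) -> float:
    """Return the setpoint at time t_s, applying any active curtailment event."""
    for ev in events:
        if ev.start_s <= t_s < ev.start_s + ev.duration_s:
            return base_c - ev.offset_c
    return base_c

def heater_on(t_indoor_c: float, setpoint_c: float, hysteresis_c: float = 0.5) -> bool:
    """Bang-bang thermostat: fire the burner only below the (possibly curtailed) setpoint."""
    return t_indoor_c < setpoint_c - hysteresis_c

events = [DREvent(start_s=3600, duration_s=7200, offset_c=2.0)]   # 2 h event from t = 1 h
print(effective_setpoint(4000, 21.0, events))                     # -> 19.0 during the event
print(heater_on(19.4, effective_setpoint(4000, 21.0, events)))    # -> False: heat is curtailed
```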

45 pages, 15149 KB  
Review
A New Era in Computing: A Review of Neuromorphic Computing Chip Architecture and Applications
by Guang Chen, Meng Xu, Yuying Chen, Fuge Yuan, Lanqi Qin and Jian Ren
Chips 2026, 5(1), 3; https://doi.org/10.3390/chips5010003 - 22 Jan 2026
Abstract
Neuromorphic computing, an interdisciplinary field combining neuroscience and computer science, aims to create efficient, bio-inspired systems. Different from von Neumann architectures, neuromorphic systems integrate memory and processing units to enable parallel, event-driven computation. By simulating the behavior of biological neurons and networks, these systems excel in tasks like pattern recognition, perception, and decision-making. Neuromorphic computing chips, which operate similarly to the human brain, offer significant potential for enhancing the performance and energy efficiency of bio-inspired algorithms. This review introduces a novel five-dimensional comparative framework—process technology, scale, power consumption, neuronal models, and architectural features—that systematically categorizes and contrasts neuromorphic implementations beyond existing surveys. We analyze notable neuromorphic chips, such as BrainScaleS, SpiNNaker, TrueNorth, and Loihi, comparing their scale, power consumption, and computational models. The paper also explores the applications of neuromorphic computing chips in artificial intelligence (AI), robotics, neuroscience, and adaptive control systems, and discusses the challenges these systems face in hardware limitations, algorithms, and system scalability and integration. Full article
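
As a minimal Python sketch of the event-driven paradigm these chips share, a leaky integrate-and-fire neuron emits discrete spike events only when its integrated input crosses a threshold (all parameters are illustrative, not tied to any chip named above):

```python
import numpy as np

def lif_spikes(input_current, dt=1e-3, tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: continuous input in, discrete spike events out."""
    v, spike_times = v_rest, []
    for step, i_in in enumerate(input_current):
        v += (dt / tau) * (v_rest - v + i_in)   # leaky integration (forward Euler)
        if v >= v_thresh:                       # threshold crossing emits an event
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

rng = np.random.default_rng(0)
print(len(lif_spikes(rng.uniform(0.0, 2.5, size=1000))), "spikes in 1 s of input")
```
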
20 pages, 6325 KB  
Article
A Rapid Prediction Model of Rainstorm Flood Targeting Power Grid Facilities
by Shuai Wang, Lei Shi, Xiaoli Hao, Xiaohua Ren, Qing Liu, Hongping Zhang and Mei Xu
Hydrology 2026, 13(1), 37; https://doi.org/10.3390/hydrology13010037 - 19 Jan 2026
Viewed by 87
Abstract
Rainstorm floods constitute one of the major natural hazards threatening the safe and stable operation of power grid facilities. Constructing a rapid and accurate prediction model is of great significance for enhancing the disaster prevention capacity of the power grid. This study proposes a deep learning-based rapid prediction model for urban rainstorm floods targeting power grid facilities. The model utilizes computational results of high-precision mechanistic models as data-driven input and adopts a dual-branch spatial-temporal prediction architecture: the spatial prediction module employs a multi-layer perceptron (MLP), and the temporal prediction module integrates a convolutional neural network (CNN), a long short-term memory network (LSTM), and an attention mechanism (ATT). The constructed hydrodynamic model of the right bank of the Liangshui River in Fengtai District, Beijing, has been verified to be reliable in simulating the July 2023 ("23·7") extreme rainstorm event in Beijing, providing high-quality training and validation data for the deep learning-based surrogate model (SM model). Compared with traditional high-precision mechanistic models, the SM model shows distinctive advantages: the R² of the spatial prediction module's overall inundation water depth prediction reaches 0.9939 with a mean absolute water-depth error of 0.013 m, and the R² values of the temporal prediction module's water-depth time series at all substations exceed 0.92. Given only rainfall data as input, the model outputs water depths at power grid facilities within seconds, providing an effective tool for rapid assessment of flood risks to power grid facilities. In summary, the main contribution of this study is the SM model driven by a high-precision mechanistic model, which, through its dual-branch spatial and temporal modules, achieves for the first time second-scale, high-precision prediction from rainfall input to water-depth output in power grid flood-risk scenarios, offering an extensible method for real-time simulation of complex physical processes. Full article
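
A compact PyTorch sketch of the dual-branch idea described above; layer sizes, sequence length, and feature counts are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class TemporalBranch(nn.Module):
    """CNN -> LSTM -> attention over time, as in the temporal module; shapes are illustrative."""
    def __init__(self, n_feat=8, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_feat, 32, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, time, n_feat) rainfall sequence
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        out, _ = self.lstm(h)                   # (batch, time, hidden)
        w = torch.softmax(self.attn(out), dim=1)
        return self.head((w * out).sum(dim=1))  # attention-weighted summary -> water depth

spatial_branch = nn.Sequential(                 # MLP branch: rainfall features -> depth at a cell
    nn.Linear(8, 128), nn.ReLU(), nn.Linear(128, 1))

x = torch.randn(4, 24, 8)                       # 4 samples, 24 time steps, 8 rainfall features
print(TemporalBranch()(x).shape, spatial_branch(x[:, -1, :]).shape)  # both (4, 1)
```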

23 pages, 620 KB  
Article
CharSPBench: An Interaction-Aware Micro-Architecture Characterization Framework for Smartphone Benchmarks
by Chenghao Ouyang, Zhong Yang and Guohui Li
Electronics 2026, 15(2), 432; https://doi.org/10.3390/electronics15020432 - 19 Jan 2026
Viewed by 171
Abstract
Mobile application workloads are inherently driven by user interactions and are characterized by short execution phases and frequent behavioral changes. These properties make it difficult for traditional micro-architecture analysis approaches, which typically assume stable execution behavior, to accurately capture performance bottlenecks in realistic mobile scenarios. To address this challenge, this paper presents CharSPBench, an interaction-aware micro-architecture characterization framework for analyzing mobile benchmarks under representative user interaction scenarios. CharSPBench organizes micro-architecture performance events in a structured and semantically consistent manner and enables systematic attribution of performance bottlenecks across different interaction conditions. The framework also supports intensity-based workload analysis to identify workload tendencies, such as memory-intensive and frontend-bound behaviors, under interaction-driven execution. Using the proposed framework, 126 micro-architecture performance events are systematically organized, leading to the identification of 19 key, semantically non-redundant features that are grouped into five major micro-architecture subsystems. Based on this structured representation, eight representative interaction-dependent micro-architecture insights are extracted to characterize performance behavior across mobile benchmarks. These quantitative results demonstrate that CharSPBench complements existing micro-architecture analysis techniques and provides practical support for interaction-aware benchmark design and mobile processor performance evaluation. Full article
(This article belongs to the Section Computer Science & Engineering)
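
A toy Python sketch of the kind of structured event grouping and intensity analysis the framework performs; the event names, subsystem mapping, and ratios are invented for illustration:

```python
# Hypothetical event names and subsystem grouping; the paper's 126-event taxonomy is not shown here.
SUBSYSTEMS = {
    "frontend": ["stall_frontend", "l1i_cache_refill"],
    "memory":   ["l2d_cache_refill", "mem_access"],
    "core":     ["inst_retired", "cpu_cycles"],
}

def characterize(counters: dict[str, int]) -> dict[str, float]:
    """Attribute raw counter values to subsystems and derive per-cycle intensity ratios."""
    cycles = counters["cpu_cycles"]
    per_subsystem = {
        name: sum(counters.get(ev, 0) for ev in events) / cycles
        for name, events in SUBSYSTEMS.items()
    }
    per_subsystem["ipc"] = counters["inst_retired"] / cycles
    return per_subsystem

sample = {"cpu_cycles": 1_000_000, "inst_retired": 850_000,
          "stall_frontend": 220_000, "l1i_cache_refill": 9_000,
          "l2d_cache_refill": 31_000, "mem_access": 410_000}
print(characterize(sample))
```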

45 pages, 14932 KB  
Article
An Intelligent Predictive Maintenance Architecture for Substation Automation: Real-World Validation of a Digital Twin and AI Framework of the Badra Oil Field Project
by Sarmad Alabbad and Hüseyin Altınkaya
Electronics 2026, 15(2), 416; https://doi.org/10.3390/electronics15020416 - 17 Jan 2026
Viewed by 120
Abstract
The increasing complexity of modern electrical substations—driven by renewable integration, advanced automation, and asset aging—necessitates a transition from reactive maintenance toward intelligent, data-driven strategies. Predictive maintenance (PdM), supported by artificial intelligence, enables early fault detection and remaining useful life (RUL) estimation, while Digital Twin (DT) technology provides synchronized cyber–physical representations for situational awareness and risk-free validation of maintenance decisions. This study proposes a five-layer DT-enabled PdM architecture integrating standards-based data acquisition, semantic interoperability (IEC 61850, CIM, and OPC UA Part 17), hybrid AI analytics, and cyber-secure decision support aligned with IEC 62443. The framework is validated using utility-grade operational data from the SS1 substation of the Badra Oil Field, comprising approximately one million multivariate time-stamped measurements and 139 confirmed fault events across transformer, feeder, and environmental monitoring systems. Fault detection is formulated as a binary classification task using event-window alignment to the 1 min SCADA timeline, preserving realistic operational class imbalance. Five supervised learning models—a Random Forest, Gradient Boosting, a Support Vector Machine, a Deep Neural Network, and a stacked ensemble—were benchmarked, with the ensemble embedded within the DT core representing the operational predictive model. Experimental results demonstrate strong performance, achieving an F1-score of 0.98 and an AUC of 0.995. The results confirm that the proposed DT–AI framework provides a scalable, interoperable, and cyber-resilient foundation for deployment-ready predictive maintenance in modern substation automation systems. Full article
(This article belongs to the Section Artificial Intelligence)
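
A small pandas sketch of the event-window alignment step, assuming a hypothetical 5-minute window around each confirmed fault on the 1 min SCADA timeline (the window width is our choice, not the paper's):

```python
import pandas as pd

def label_fault_windows(timeline: pd.DatetimeIndex, faults: pd.Series, window="5min") -> pd.Series:
    """Binary labels on the SCADA timeline: 1 inside a window around each confirmed fault,
    0 elsewhere, preserving the natural class imbalance of normal operation."""
    labels = pd.Series(0, index=timeline)
    half = pd.Timedelta(window)
    for t in faults:
        labels[(timeline >= t - half) & (timeline <= t + half)] = 1
    return labels

timeline = pd.date_range("2024-01-01", periods=60, freq="1min")
faults = pd.Series(pd.to_datetime(["2024-01-01 00:30:00"]))
print(label_fault_windows(timeline, faults).sum())  # 11 positive minutes around the fault
```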

20 pages, 857 KB  
Article
Hybrid Spike-Encoded Spiking Neural Networks for Real-Time EEG Seizure Detection: A Comparative Benchmark
by Ali Mehrabi, Neethu Sreenivasan, Upul Gunawardana and Gaetano Gargiulo
Biomimetics 2026, 11(1), 75; https://doi.org/10.3390/biomimetics11010075 - 16 Jan 2026
Viewed by 250
Abstract
Reliable and low-latency seizure detection from electroencephalography (EEG) is critical for continuous clinical monitoring and emerging wearable health technologies. Spiking neural networks (SNNs) provide an event-driven computational paradigm that is well suited to real-time signal processing, yet achieving competitive seizure detection performance with constrained model complexity remains challenging. This work introduces a hybrid spike encoding scheme that combines Delta–Sigma (change-based) and stochastic rate representations, together with two spiking architectures designed for real-time EEG analysis: a compact feed-forward HybridSNN and a convolution-enhanced ConvSNN incorporating depthwise-separable convolutions and temporal self-attention. The architectures are intentionally designed to operate on short EEG segments and to balance detection performance with computational practicality for continuous inference. Experiments on the CHB–MIT dataset show that the HybridSNN attains 91.8% accuracy with an F1-score of 0.834 for seizure detection, while the ConvSNN further improves detection performance to 94.7% accuracy and an F1-score of 0.893. Event-level evaluation on continuous EEG recordings yields false-alarm rates of 0.82 and 0.62 per day for the HybridSNN and ConvSNN, respectively. Both models exhibit inference latencies of approximately 1.2 ms per 0.5 s window on standard CPU hardware, supporting continuous real-time operation. These results demonstrate that hybrid spike encoding enables spiking architectures with controlled complexity to achieve seizure detection performance comparable to larger deep learning models reported in the literature, while maintaining low latency and suitability for real-time clinical and wearable EEG monitoring. Full article
(This article belongs to the Special Issue Bioinspired Engineered Systems)
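
A NumPy sketch of the two spike representations being hybridised, under assumed thresholds and amplitude normalisation:

```python
import numpy as np

def delta_encode(x: np.ndarray, threshold: float) -> np.ndarray:
    """Change-based (Delta-Sigma-style) encoding: +1/-1 spike when the signal has moved
    by at least `threshold` since the last spike."""
    spikes = np.zeros(len(x), dtype=int)
    ref = x[0]
    for t in range(1, len(x)):
        if x[t] - ref >= threshold:
            spikes[t], ref = 1, x[t]
        elif ref - x[t] >= threshold:
            spikes[t], ref = -1, x[t]
    return spikes

def rate_encode(x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Stochastic rate encoding: spike probability proportional to normalised amplitude."""
    p = (x - x.min()) / (x.max() - x.min() + 1e-9)
    return (rng.random(x.shape) < p).astype(int)

rng = np.random.default_rng(1)
eeg = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.standard_normal(256)
hybrid = np.stack([delta_encode(eeg, 0.2), rate_encode(eeg, rng)])  # 2-channel spike train
print(hybrid.shape)  # (2, 256)
```
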

22 pages, 2873 KB  
Article
Resource-Constrained Edge AI Solution for Real-Time Pest and Disease Detection in Chili Pepper Fields
by Hoyoung Chung, Jin-Hwi Kim, Junseong Ahn, Yoona Chung, Eunchan Kim and Wookjae Heo
Agriculture 2026, 16(2), 223; https://doi.org/10.3390/agriculture16020223 - 15 Jan 2026
Viewed by 197
Abstract
This paper presents a low-cost, fully on-premise Edge Artificial Intelligence (AI) system designed to support real-time pest and disease detection in open-field chili pepper cultivation. The proposed architecture integrates AI-Thinker ESP32-CAM module (ESP32-CAM) image acquisition nodes (“Sticks”) with a Raspberry Pi 5–based edge server (“Module”), forming a plug-and-play Internet of Things (IoT) pipeline that enables autonomous operation upon simple power-up, making it suitable for aging farmers and resource-limited environments. A Leaf-First 2-Stage vision model was developed by combining YOLOv8n-based leaf detection with a lightweight ResNet-18 classifier to improve the diagnostic accuracy for small lesions commonly occurring in dense pepper foliage. To address network instability, which is a major challenge in open-field agriculture, the system adopted a dual-protocol communication design using Hyper Text Transfer Protocol (HTTP) for Joint Photographic Experts Group (JPEG) transmission and Message Queuing Telemetry Transport (MQTT) for event-driven feedback, enhanced by Redis-based asynchronous buffering and state recovery. Deployment-oriented experiments under controlled conditions demonstrated an average end-to-end latency of 0.86 s from image capture to Light Emitting Diode (LED) alert, validating the system’s suitability for real-time decision support in crop management. Compared to heavier models (e.g., YOLOv11 and ResNet-50), the lightweight architecture reduced the computational cost by more than 60%, with minimal loss in detection accuracy. This study highlights the practical feasibility of resource-constrained Edge AI systems for open-field smart farming by emphasizing system-level integration, robustness, and real-time operability, and provides a deployment-oriented framework for future extension to other crops. Full article
(This article belongs to the Special Issue Smart Sensor-Based Systems for Crop Monitoring)
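
A minimal sketch of the MQTT half of the dual-protocol design using the standard paho-mqtt client (broker address, topic layout, and payload fields are assumptions; the HTTP/JPEG upload path and Redis buffering are omitted):

```python
import json
import paho.mqtt.client as mqtt

# paho-mqtt >= 2.0 API; broker address and topic names are illustrative assumptions.
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect("192.168.0.10", 1883)        # Raspberry Pi 5 edge server ("Module") on the LAN
client.loop_start()

def publish_detection(stick_id: str, label: str, confidence: float) -> None:
    """Event-driven feedback: small JSON messages over MQTT, while JPEG frames travel over HTTP."""
    payload = json.dumps({"stick": stick_id, "label": label, "conf": confidence})
    client.publish(f"farm/detections/{stick_id}", payload, qos=1)

publish_detection("stick-07", "anthracnose", 0.91)
```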

29 pages, 4853 KB  
Article
ROS 2-Based Architecture for Autonomous Driving Systems: Design and Implementation
by Andrea Bonci, Federico Brunella, Matteo Colletta, Alessandro Di Biase, Aldo Franco Dragoni and Angjelo Libofsha
Sensors 2026, 26(2), 463; https://doi.org/10.3390/s26020463 - 10 Jan 2026
Viewed by 447
Abstract
As interest in the adoption of autonomous vehicles (AVs) continues to grow, it is essential to design new software architectures that meet stringent real-time, safety, and scalability requirements while integrating heterogeneous hardware and software solutions from different vendors and developers. This paper presents a lightweight, modular, and scalable architecture grounded in Service-Oriented Architecture (SOA) principles and implemented in ROS 2 (Robot Operating System 2). The proposed design leverages ROS 2’s Data Distribution Service (DDS)-based Quality-of-Service model to provide reliable communication, structured lifecycle management, and fault containment across distributed compute nodes. The architecture is organized into Perception, Planning, and Control layers with decoupled sensor access paths to satisfy heterogeneous frequency and hardware constraints. The decision-making core follows an event-driven policy that prioritizes fresh updates without enforcing global synchronization, applying zero-order hold where inputs are not refreshed. The architecture was validated on a 1:10-scale autonomous vehicle operating on a city-like track. The test environment covered canonical urban scenarios (lane-keeping, obstacle avoidance, traffic-sign recognition, intersections, overtaking, parking, and pedestrian interaction), with absolute positioning provided by an indoor GPS (Global Positioning System) localization setup. This work shows that the end-to-end Perception–Planning pipeline consistently met worst-case deadlines, yielding deterministic behaviour even under stress. The proposed architecture can be deemed compliant with real-time application standards for our use case on the 1:10 test vehicle, providing a robust foundation for deployment and further refinement. Full article
(This article belongs to the Special Issue Sensors and Sensor Fusion for Decision Making for Autonomous Driving)
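
A minimal rclpy sketch of the event-driven, zero-order-hold consumption pattern described above; topic names, rates, and thresholds are invented:

```python
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy
from std_msgs.msg import Float32

class Planner(Node):
    """Event-driven consumer: reacts to fresh perception updates, holds the last value otherwise."""
    def __init__(self):
        super().__init__("planner")
        qos = QoSProfile(depth=1, reliability=ReliabilityPolicy.BEST_EFFORT)
        self.last_obstacle_dist = float("inf")    # zero-order hold state
        self.create_subscription(Float32, "/perception/obstacle_dist", self.on_update, qos)
        self.create_timer(0.05, self.plan)        # 20 Hz control step, no global synchronization

    def on_update(self, msg: Float32) -> None:
        self.last_obstacle_dist = msg.data        # a fresh update replaces the held value

    def plan(self) -> None:
        cmd = 0.0 if self.last_obstacle_dist < 0.5 else 1.0   # stop if an obstacle is near
        self.get_logger().debug(f"speed cmd {cmd}")

def main():
    rclpy.init()
    rclpy.spin(Planner())

if __name__ == "__main__":
    main()
```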

21 pages, 4706 KB  
Article
Near-Real-Time Integration of Multi-Source Seismic Data
by José Melgarejo-Hernández, Paula García-Tapia-Mateo, Juan Morales-García and Jose-Norberto Mazón
Sensors 2026, 26(2), 451; https://doi.org/10.3390/s26020451 - 9 Jan 2026
Viewed by 170
Abstract
The reliable and continuous acquisition of seismic data from multiple open sources is essential for real-time monitoring, hazard assessment, and early-warning systems. However, the heterogeneity among existing data providers such as the United States Geological Survey, the European-Mediterranean Seismological Centre, and the Spanish National Geographic Institute creates significant challenges due to differences in formats, update frequencies, and access methods. To overcome these limitations, this paper presents a modular and automated framework for the scheduled near-real-time ingestion of global seismic data using open APIs and semi-structured web data. The system, implemented using a Docker-based architecture, automatically retrieves, harmonizes, and stores seismic information from heterogeneous sources at regular intervals using a cron-based scheduler. Data are standardized into a unified schema, validated to remove duplicates, and persisted in a relational database for downstream analytics and visualization. The proposed framework adheres to the FAIR data principles by ensuring that all seismic events are uniquely identifiable, source-traceable, and stored in interoperable formats. Its lightweight and containerized design enables deployment as a microservice within emerging data spaces and open environmental data infrastructures. Experimental validation was conducted using a two-phase evaluation: a high-frequency 24 h stress test followed by a seven-day continuous deployment under steady-state conditions. The system maintained stable operation with 100% availability across all sources, successfully integrating 4533 newly published seismic events during the seven-day period and identifying 595 duplicated detections across providers. These results demonstrate that the framework provides a robust foundation for the automated, interoperable integration of multi-source seismic catalogs, supporting the construction of more comprehensive and globally accessible earthquake datasets for research and near-real-time applications and strengthening data-driven research and situational awareness across regions and institutions worldwide. Full article
(This article belongs to the Special Issue Advances in Seismic Sensing and Monitoring)
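
A short Python sketch of one ingestion step against the public USGS FDSN endpoint, mapped into an invented unified schema with a naive cross-provider duplicate test (the field names and thresholds are assumptions, not the paper's):

```python
import requests

USGS = "https://earthquake.usgs.gov/fdsnws/event/1/query"   # public FDSN event service

def fetch_usgs(start: str, end: str, min_mag: float = 4.0) -> list[dict]:
    """Pull events as GeoJSON and map them into a unified schema (schema is our own)."""
    params = {"format": "geojson", "starttime": start, "endtime": end, "minmagnitude": min_mag}
    features = requests.get(USGS, params=params, timeout=30).json()["features"]
    return [{
        "source": "USGS",
        "source_id": f["id"],
        "time_ms": f["properties"]["time"],
        "mag": f["properties"]["mag"],
        "lon": f["geometry"]["coordinates"][0],
        "lat": f["geometry"]["coordinates"][1],
    } for f in features]

def is_duplicate(a: dict, b: dict, dt_ms: int = 60_000, deg: float = 0.5) -> bool:
    """Cross-provider dedup: same event if close in time and epicentre (illustrative thresholds)."""
    return (abs(a["time_ms"] - b["time_ms"]) < dt_ms
            and abs(a["lat"] - b["lat"]) < deg and abs(a["lon"] - b["lon"]) < deg)

print(len(fetch_usgs("2024-01-01", "2024-01-02")), "events ingested")
```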

31 pages, 2310 KB  
Article
Deep Learning-Based Multi-Source Precipitation Fusion and Its Utility for Hydrological Simulation
by Zihao Huang, Changbo Jiang, Yuannan Long, Shixiong Yan, Yue Qi, Munan Xu and Tao Xiang
Atmosphere 2026, 17(1), 70; https://doi.org/10.3390/atmos17010070 - 8 Jan 2026
Viewed by 248
Abstract
High-resolution satellite precipitation products are key inputs for basin-scale rainfall estimation, but they still exhibit substantial biases in complex terrain and during heavy rainfall. Recent multi-source fusion studies have shown that simply stacking multiple same-type microwave satellite products yields only limited additional gains for high-quality precipitation estimates and may even introduce local degradation, suggesting that targeted correction of a single, widely validated high-quality microwave product (such as IMERG) is a more rational strategy. Focusing on the mountainous, gauge-sparse Lüshui River basin with pronounced relief and frequent heavy rainfall, we use GPM IMERG V07 as the primary microwave product and incorporate CHIRPS, ERA5 evaporation, and a digital elevation model as auxiliary inputs to build a daily attention-enhanced CNN–LSTM (A-CNN–LSTM) bias-correction framework. Under a unified IMERG-based setting, we compare three network architectures—LSTM, CNN–LSTM, and A-CNN–LSTM—and test three input configurations (single-source IMERG, single-source CHIRPS, and combined IMERG + CHIRPS) to jointly evaluate impacts on corrected precipitation and SWAT runoff simulations. The IMERG-driven A-CNN–LSTM markedly reduces daily root-mean-square error and improves the intensity and timing of 10–50 mm·d⁻¹ rainfall events; the single-source IMERG configuration also outperforms multi-source setups that include CHIRPS in terms of correlation, RMSE, and performance across rainfall-intensity classes. When the corrected IMERG product is used to force SWAT, daily Nash-Sutcliffe Efficiency increases from about 0.71/0.70 to 0.85/0.79 in the calibration/validation periods, and RMSE decreases from 87.92 to 60.98 m³·s⁻¹, while flood peaks and timing closely match simulations driven by gauge-interpolated precipitation. Overall, the results demonstrate that, in gauge-sparse mountainous basins, correcting a single high-quality, widely validated microwave product with a small set of heterogeneous covariates is more effective for improving precipitation inputs and their hydrological utility than simply aggregating multiple same-type satellite products. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
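
For reference, the two verification metrics quoted above (NSE and RMSE) compute as follows; the discharge values below are made up:

```python
import numpy as np

def nse(sim: np.ndarray, obs: np.ndarray) -> float:
    """Nash-Sutcliffe Efficiency: 1 - SSE / variance of observations (1.0 is a perfect fit)."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(sim: np.ndarray, obs: np.ndarray) -> float:
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

obs = np.array([120.0, 340.0, 560.0, 410.0, 230.0])   # illustrative daily discharge, m^3/s
sim = np.array([135.0, 310.0, 590.0, 395.0, 250.0])
print(f"NSE={nse(sim, obs):.3f}  RMSE={rmse(sim, obs):.2f} m^3/s")
```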

28 pages, 515 KB  
Review
From Cues to Engagement: A Comprehensive Survey and Holistic Architecture for Computer Vision-Based Audience Analysis in Live Events
by Marco Lemos, Pedro J. S. Cardoso and João M. F. Rodrigues
Multimodal Technol. Interact. 2026, 10(1), 8; https://doi.org/10.3390/mti10010008 - 8 Jan 2026
Viewed by 246
Abstract
The accurate measurement of audience engagement in real-world live events remains a significant challenge, with the majority of existing research confined to controlled environments like classrooms. This paper presents a comprehensive survey of Computer Vision AI-driven methods for real-time audience engagement monitoring and proposes a novel, holistic architecture to address this gap; this architecture is the paper's main contribution. The paper identifies and defines five core constructs essential for a robust analysis: Attention, Emotion and Sentiment, Body Language, Scene Dynamics, and Behaviours. Through a selective review of state-of-the-art techniques for each construct, the necessity of a multimodal approach that surpasses the limitations of isolated indicators is highlighted. The work synthesises a fragmented field into a unified taxonomy and introduces a modular architecture that integrates these constructs with practical, business-oriented metrics such as Commitment, Conversion, and Retention. Finally, by integrating cognitive, affective, and behavioural signals, this work provides a roadmap for developing operational systems that can transform live event experience and management through data-driven, real-time analytics. Full article

16 pages, 1561 KB  
Article
TSAformer: A Traffic Flow Prediction Model Based on Cross-Dimensional Dependency Capture
by Haoning Lv, Xi Chen and Weijie Xiu
Electronics 2026, 15(1), 231; https://doi.org/10.3390/electronics15010231 - 4 Jan 2026
Viewed by 185
Abstract
Accurate multivariate traffic flow forecasting is critical for intelligent transportation systems yet remains challenging due to the complex interplay of temporal dynamics and spatial interactions. While Transformer-based models have shown promise in capturing long-range temporal dependencies, most existing approaches compress multidimensional observations into flattened sequences—thereby neglecting explicit modeling of cross-dimensional (i.e., spatial or inter-variable) relationships, which are essential for capturing traffic propagation, network-wide congestion, and node-specific behaviors. To address this limitation, we propose TSAformer, a novel Transformer architecture that explicitly preserves and jointly models time and dimension as dual structural axes. TSAformer begins with a multimodal input embedding layer that encodes raw traffic values alongside temporal context (time-of-day and day-of-week) and node-specific positional features, ensuring rich semantic representation. The core of TSAformer is the Two-Stage Attention (TSA) module, which first models intra-dimensional temporal evolution via time-axis self-attention then captures inter-dimensional spatial interactions through a lightweight routing mechanism—avoiding quadratic complexity while enabling all-to-all cross-node communication. Built upon TSA, a hierarchical encoder–decoder (HED) structure further enhances forecasting by modeling traffic patterns across multiple temporal scales, from fine-grained fluctuations to macroscopic trends, and fusing predictions via cross-scale attention. Extensive experiments on three real-world traffic datasets—including urban road networks and highway systems—demonstrate that TSAformer consistently outperforms state-of-the-art baselines across short-term and long-term forecasting horizons. Notably, it achieves top-ranked performance in 36 out of 58 critical evaluation scenarios, including peak-hour and event-driven congestion prediction. By explicitly modeling both temporal and dimensional dependencies without structural compromise, TSAformer provides a scalable, interpretable, and high-performance solution for spatiotemporal traffic forecasting. Full article
(This article belongs to the Special Issue Artificial Intelligence for Traffic Understanding and Control)
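
A compact PyTorch sketch of a router-based two-stage attention block in the spirit of TSA: temporal self-attention per node, then all-to-all node communication through a small set of routing tokens. Dimensions and the routing design are illustrative, not the paper's exact module:

```python
import torch
import torch.nn as nn

class TwoStageAttention(nn.Module):
    """Stage 1: self-attention along time, per node. Stage 2: attention across nodes via a small
    set of learned routers, avoiding quadratic node-node cost. Sizes are illustrative."""
    def __init__(self, d_model=64, n_heads=4, n_routers=8):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.collect = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.dispatch = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.routers = nn.Parameter(torch.randn(n_routers, d_model))

    def forward(self, x):                          # x: (batch, nodes, time, d_model)
        b, n, t, d = x.shape
        h = x.reshape(b * n, t, d)
        h, _ = self.time_attn(h, h, h)             # stage 1: temporal, intra-node
        h = h.reshape(b, n, t, d).transpose(1, 2).reshape(b * t, n, d)
        r = self.routers.unsqueeze(0).expand(b * t, -1, -1)
        agg, _ = self.collect(r, h, h)             # routers gather from all nodes
        h, _ = self.dispatch(h, agg, agg)          # nodes read back: all-to-all via routers
        return h.reshape(b, t, n, d).transpose(1, 2)

print(TwoStageAttention()(torch.randn(2, 10, 12, 64)).shape)  # (2, 10, 12, 64)
```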

22 pages, 777 KB  
Data Descriptor
Dataset on AI- and VR-Supported Communication and Problem-Solving Performance in Undergraduate Courses: A Clustered Quasi-Experiment in Mexico
by Roberto Gómez Tobías
Data 2026, 11(1), 6; https://doi.org/10.3390/data11010006 - 2 Jan 2026
Viewed by 250
Abstract
Behavioral and educational researchers increasingly rely on rich datasets that capture how students respond to technology-enhanced instruction, yet few open resources document the full pipeline from experimental design to data curation in authentic classroom settings. This data descriptor presents a clustered quasi-experimental dataset on the impact of an instructional architecture that combines virtual reality (VR) simulations with artificial intelligence (AI)-driven formative feedback to enhance undergraduate students’ communication and problem-solving performance. The study was conducted at a large private university in Mexico during the 2024–2025 academic year and involved six intact classes (three intervention, three comparison; n = 180). Exposure to AI and VR was operationalized as a session-level “dose” (minutes of use, number of feedback events, number of scenarios, perceived presence), while performance was assessed with analytic rubrics (six criteria for communication and seven for problem solving) scored independently by two raters, with interrater reliability estimated via ICC (2, k). Additional Likert-type scales measured presence, perceived usefulness of feedback and self-efficacy. The curated dataset includes raw and cleaned tabular files, a detailed codebook, scoring guides and replication scripts for multilevel models and ancillary analyses. By releasing this dataset, we seek to enable reanalysis, methodological replication and cross-study comparisons in technology-enhanced education, and to provide an authentic resource for teaching statistics, econometrics and research methods in the behavioral sciences. Full article

20 pages, 6216 KB  
Article
High-Speed Signal Digitizer Based on Reference Waveform Crossings and Time-to-Digital Conversion
by Arturs Aboltins, Sandis Migla, Nikolajs Tihomorskis, Jakovs Ratners, Rihards Barkans and Viktors Kurtenoks
Electronics 2026, 15(1), 153; https://doi.org/10.3390/electronics15010153 - 29 Dec 2025
Viewed by 220
Abstract
This work presents an experimental evaluation of a high-speed analog-to-digital conversion method based on passive reference waveform crossings combined with time-to-digital converter (TDC) time-tagging. Unlike conventional level-crossing event-driven analog-to-digital converters (ADCs) that require dynamically updated digital-to-analog converters (DACs), the proposed architecture compares the input waveform against a broadband periodic sampling function without active threshold control. Crossing instants are detected by a high-speed comparator and converted into rising and falling edge timestamps using a multi-channel TDC. A commercial ScioSense GPX2-based time-tagger with 30 ps single-shot precision was used for validation. A range of test signals—including 5 MHz sine, sawtooth, damped sine, and frequency-modulated chirp waveforms—were acquired using triangular, sinusoidal, and sawtooth sampling functions. Stroboscopic sampling was demonstrated using reference frequencies lower than the signal of interest, enabling effective undersampling of periodic radio frequency (RF) waveforms. The method achieved effective bandwidths approaching 100 MHz, with amplitude reconstruction errors of 0.05–0.30 RMS for sinusoidal signals and 0.15–0.40 RMS for sawtooth signals. Timing jitter showed strong dependence on the relative slope between the acquired waveform and sampling function: steep regions produced jitter near 5 ns, while shallow regions exhibited jitter up to 20 ns. The study has several limitations, including the bandwidth and dead-time constraints of the commercial TDC, the finite slew rate and noise of the comparator front-end, and the limited frequency range of the generated sampling functions. These factors influence the achievable timing precision and reconstruction accuracy, especially in low-gradient signal regions. Overall, the passive waveform-crossing method demonstrates strong potential for wideband, sparse, and rapidly varying signals, with natural scalability to multi-channel systems. Potential application domains include RF acquisition, ultra-wideband (UWB) radar, integrated sensing and communication (ISAC) systems, high-speed instrumentation, and wideband timed antenna arrays. Full article
(This article belongs to the Special Issue Analog/Mixed Signal Integrated Circuit Design)
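
A NumPy sketch of the core reconstruction idea: detect where the input crosses a known periodic reference waveform, then read the amplitude from the reference at those instants (signal and reference parameters are illustrative):

```python
import numpy as np

def crossing_samples(t, signal, reference):
    """Comparator model: find sign changes of (signal - reference); each crossing yields a
    timestamp (via linear interpolation) and an amplitude read off the known reference."""
    diff = signal - reference
    idx = np.where(np.signbit(diff[:-1]) != np.signbit(diff[1:]))[0]
    frac = diff[idx] / (diff[idx] - diff[idx + 1])        # sub-step zero-crossing position
    t_cross = t[idx] + frac * (t[idx + 1] - t[idx])
    a_cross = np.interp(t_cross, t, reference)            # amplitude is known from the reference
    return t_cross, a_cross

fs = 2e9                                                  # dense grid standing in for analog time
t = np.arange(0, 2e-6, 1 / fs)
sig = 0.8 * np.sin(2 * np.pi * 5e6 * t)                   # 5 MHz sine under test
ref = 2 * np.abs(2 * ((25e6 * t) % 1) - 1) - 1            # 25 MHz triangular sampling function
t_c, a_c = crossing_samples(t, sig, ref)
print(f"{len(t_c)} crossings captured over 2 us")
```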

31 pages, 3484 KB  
Article
CEDAR: An Ontology-Based Framework Using Event Abstractions to Contextualise Financial Data Processes
by Aya Tafech and Fethi Rabhi
Electronics 2026, 15(1), 145; https://doi.org/10.3390/electronics15010145 - 29 Dec 2025
Viewed by 193
Abstract
Financial institutions face data quality (DQ) challenges in regulatory reporting due to complex architectures where data flows through multiple systems. Data consumers struggle to assess quality because traditional DQ tools operate on data snapshots without capturing temporal event sequences and business contexts that determine whether anomalies represent genuine issues or valid behavior. Existing approaches address either semantic representation (ontologies for static knowledge) or temporal pattern detection (event processing without semantics), but not their integration. This paper presents CEDAR (Contextual Events and Domain-driven Associative Representation), integrating financial ontologies with event-driven processing for context-aware DQ assessment. Novel contributions include (1) ontology-driven rule derivation that automatically translates OWL business constraints into executable detection logic; (2) temporal ontological reasoning extending static quality assessment with event stream processing; (3) explainable assessment tracing anomalies through causal chains to violated constraints; and (4) standards-based design using W3C technologies with FIBO extensions. Following the Design Science Research Methodology, we document the first, early-stage iteration focused on design novelty and technical feasibility. We present conceptual models, a working prototype, controlled validation with synthetic equity derivative data, and comparative analysis against existing approaches. The prototype successfully detects context-dependent quality issues and enables ontological root cause exploration. Contributions: A novel integration of ontologies and event processing for financial DQ management with validated technical feasibility, demonstrating how semantic web technologies address operational challenges in event-driven architectures. Full article
(This article belongs to the Special Issue Visual Analysis of Software Engineering Data)
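
A pure-Python sketch of the ontology-driven rule-derivation idea: declarative constraints (standing in for OWL restrictions; all identifiers invented) are compiled into executable checks whose violations carry provenance back to the originating constraint:

```python
# Declarative constraints standing in for OWL business restrictions (identifiers are invented).
CONSTRAINTS = [
    {"id": "fibo-ext:PositiveNotional", "field": "notional", "op": "gt", "value": 0},
    {"id": "cedar:SettleAfterTrade", "field": "settle_ts", "op": "gt_field", "value": "trade_ts"},
]

OPS = {
    "gt": lambda ev, f, v: ev[f] > v,              # field must exceed a literal
    "gt_field": lambda ev, f, v: ev[f] > ev[v],    # field must exceed another field
}

def derive_rules(constraints):
    """Compile each declarative constraint into a (check, provenance) pair."""
    return [(lambda ev, c=c: OPS[c["op"]](ev, c["field"], c["value"]), c["id"])
            for c in constraints]

def assess(event_stream, rules):
    """Context-aware DQ assessment: emit explainable violations traced to constraints."""
    for ev in event_stream:
        for check, constraint_id in rules:
            if not check(ev):
                yield {"event": ev["id"], "violated": constraint_id}

events = [{"id": "t1", "notional": 5e6, "trade_ts": 100, "settle_ts": 300},
          {"id": "t2", "notional": -1.0, "trade_ts": 200, "settle_ts": 150}]
print(list(assess(events, derive_rules(CONSTRAINTS))))   # t2 violates both constraints
```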
