Technologies, Volume 13, Issue 12 (December 2025) – 58 articles

Cover Story: Modern industrial environments increasingly rely on reliable wireless communication to support automated systems and autonomous mobile robots. In intralogistics, these robots require stable data exchange during motion in complex and dynamic environments. Wi-Fi networks are widely used due to their low cost and rapid deployment; however, their performance may fluctuate under industrial conditions. Private 5G networks represent an alternative designed to provide more stable and predictable communication. This study experimentally compares Wi-Fi and a private 5G network using an autonomous mobile robot operating under real conditions. Both networks were connected in parallel, enabling a direct comparison of their behaviour under identical operating conditions.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
34 pages, 15045 KB  
Article
Integration of Road Data Collected Using LSB Audio Steganography
by Adam Stančić, Ivan Grgurević, Marko Matulin and Marko Periša
Technologies 2025, 13(12), 597; https://doi.org/10.3390/technologies13120597 - 18 Dec 2025
Viewed by 258
Abstract
Modern traffic-monitoring systems increasingly rely on supplemental analytical data to complement video recordings, yet such data are rarely integrated into video containers without altering the original footage. This paper proposes a lightweight audio-based approach for embedding road-condition information using a Least Significant Bit (LSB) steganography framework. The method operates by serializing sensor data, encoding it into the LSB positions of synthetically generated audio, and subsequently compressing the audio track while preserving imperceptibility and video integrity. A series of controlled experiments evaluates how waveform type, sampling rate, amplitude, and frequency influence the storage efficiency and quality of WAV and FLAC stego-audio files. Additional tests examine the impact of embedding capacity and output-quality settings on compression behavior. Results reveal clear trade-offs between audio quality, data capacity, and file size, demonstrating that the proposed framework enables efficient, secure, and scalable integration of metadata into surveillance recordings. The findings establish practical guidelines for deploying LSB-based audio embedding in real traffic-monitoring environments. Full article
(This article belongs to the Special Issue IoT-Enabling Technologies and Applications—2nd Edition)
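The core embedding operation described in the abstract can be sketched in a few lines. This is an illustrative LSB routine over 16-bit PCM sample values; the function names and payload framing are assumptions, not the authors' implementation:

```python
def embed_lsb(samples, payload: bytes):
    """Hide payload bits in the least-significant bit of each audio sample."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(samples):
        raise ValueError("payload exceeds carrier capacity")
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the LSB
    return out

def extract_lsb(samples, n_bytes: int):
    """Recover n_bytes hidden by embed_lsb from the sample LSBs."""
    bits = [s & 1 for s in samples[: n_bytes * 8]]
    return bytes(
        int("".join(map(str, bits[i : i + 8])), 2) for i in range(0, len(bits), 8)
    )
```

Each carrier sample changes by at most one quantization step, which is why the payload stays imperceptible; and because the bits live in the sample values themselves, the surrounding compression must be lossless (WAV or FLAC, as in the study).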
26 pages, 8192 KB  
Article
Enhancing Deep Learning Models with Attention Mechanisms for Interpretable Detection of Date Palm Diseases and Pests
by Amine El Hanafy, Abdelaaziz Hessane and Yousef Farhaoui
Technologies 2025, 13(12), 596; https://doi.org/10.3390/technologies13120596 - 18 Dec 2025
Viewed by 335
Abstract
Deep learning has become a powerful tool for diagnosing pests and plant diseases, although conventional convolutional neural networks (CNNs) generally suffer from limited interpretability and suboptimal focus on important image features. This study examines the integration of attention mechanisms into two prevalent CNN architectures—ResNet50 and MobileNetV2—to improve the interpretability and classification of diseases impacting date palm trees. Four attention modules—Squeeze-and-Excitation (SE), Efficient Channel Attention (ECA), Soft Attention, and the Convolutional Block Attention Module (CBAM)—were systematically integrated into ResNet50 and MobileNetV2 and assessed on the Palm Leaves dataset. Using transfer learning, the models were trained and evaluated through accuracy, F1-score, Grad-CAM visualizations, and quantitative metrics such as entropy and Attention Focus Scores. Analysis was also performed on the model’s complexity, including parameters and FLOPs. To confirm generalization, we tested the improved models on field data that was not part of the dataset used for learning. The experimental results demonstrated that the integration of attention mechanisms substantially improved both predictive accuracy and interpretability across all evaluated architectures. For MobileNetV2, the best performance and the most compact attention maps were obtained with SE and ECA (reaching 91%), while Soft Attention improved accuracy but produced broader, less concentrated activation patterns. For ResNet50, SE achieved the most focused and symptom-specific heatmaps, whereas CBAM reached the highest classification accuracy (up to 90.4%) but generated more spatially diffuse Grad-CAM activations. Overall, these findings demonstrate that attention-enhanced CNNs can provide accurate, interpretable, and robust detection of palm tree diseases and pests under real-world agricultural conditions. Full article
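As one example of the attention modules compared, the Squeeze-and-Excitation recalibration can be sketched in pure Python on toy channel data. The real module operates on feature-map tensors inside ResNet50/MobileNetV2; the weights and shapes here are illustrative only:

```python
import math

def se_recalibrate(channel_maps, w1, w2):
    """Squeeze-and-Excitation sketch: pool each channel, pass the pooled
    vector through a bottleneck MLP, then rescale channels by sigmoid gates."""
    # squeeze: global average pooling, one scalar per channel
    z = [sum(fmap) / len(fmap) for fmap in channel_maps]
    # excitation: FC -> ReLU -> FC -> sigmoid
    hidden = [max(0.0, sum(zi * wi for zi, wi in zip(z, row))) for row in w1]
    gates = [
        1.0 / (1.0 + math.exp(-sum(h * wi for h, wi in zip(hidden, row))))
        for row in w2
    ]
    # scale: reweight every channel by its learned gate
    return [[g * v for v in fmap] for g, fmap in zip(gates, channel_maps)]
```

The gates lie in (0, 1), so the module can only attenuate or preserve channels; this channel-wise reweighting is what concentrates the Grad-CAM activations the abstract reports for SE.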
19 pages, 6764 KB  
Article
A Dual-Validation Framework for Temporal Robustness Assessment in Brain–Computer Interfaces for Motor Imagery
by Mohamed A. Hanafy, Saykhun Yusufjonov, Payman SharafianArdakani, Djaykhun Yusufjonov, Madan M. Rayguru and Dan O. Popa
Technologies 2025, 13(12), 595; https://doi.org/10.3390/technologies13120595 - 18 Dec 2025
Viewed by 340
Abstract
Brain–computer interfaces using motor imagery (MI-BCIs) offer a promising noninvasive communication pathway between humans and engineered equipment such as robots. However, for MI-BCIs based on electroencephalography (EEG), the reliability of the interface across recording sessions is limited by temporal non-stationary effects. Overcoming this barrier is critical to translating MI-BCIs from controlled laboratory environments to practical uses. In this paper, we present a comprehensive dual-validation framework to rigorously evaluate the temporal robustness of EEG signals of an MI-BCI. We collected data from six participants performing four motor imagery tasks (left/right hand and foot). Features were extracted using Common Spatial Patterns, and ten machine learning classifiers were assessed within a unified pipeline. Our method integrates within-session evaluation (stratified K-fold cross-validation) with cross-session testing (bidirectional train/test), complemented by stability metrics and performance heterogeneity assessment. Findings reveal minimal performance loss between conditions, with an average accuracy drop of just 2.5%. The AdaBoost classifier achieved the highest within-session performance (84.0% system accuracy, F1-score: 83.8%/80.9% for hand/foot), while the K-nearest neighbors (KNN) classifier demonstrated the best cross-session robustness (81.2% system accuracy, F1-score: 80.5%/80.2% for hand/foot, 0.663 robustness score). This study shows that robust performance across sessions is attainable for MI-BCI evaluation, supporting the pathway toward reliable, real-world clinical deployment. Full article
(This article belongs to the Collection Selected Papers from the PETRA Conference Series)
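The dual-validation protocol itself is easy to illustrate with a deliberately simple stand-in classifier. The sketch below uses 1-nearest-neighbour on toy feature vectors in place of the paper's CSP features and ten classifiers:

```python
def nn1_accuracy(train, test):
    """Classify each test point by its nearest training point (1-NN)."""
    hits = 0
    for x, label in test:
        nearest = min(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))
        hits += nearest[1] == label
    return hits / len(test)

def dual_validation(session_a, session_b):
    """Within-session (2-fold) vs. bidirectional cross-session accuracy."""
    def within(sess):
        half = len(sess) // 2
        return (nn1_accuracy(sess[:half], sess[half:])
                + nn1_accuracy(sess[half:], sess[:half])) / 2
    within_acc = (within(session_a) + within(session_b)) / 2
    cross_acc = (nn1_accuracy(session_a, session_b)
                 + nn1_accuracy(session_b, session_a)) / 2
    return within_acc, cross_acc, within_acc - cross_acc
```

The returned difference plays the role of the paper's accuracy drop: the smaller it stays as session B drifts away from session A, the more temporally robust the feature/classifier pair.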
20 pages, 1200 KB  
Article
Tax Compliance and Technological Innovation: Case Study on the Development of Tools to Assist Sales Tax Inspections to Curb Tax Fraud
by Vera Lucia Reiko Yoshida Shidomi and Joshua Onome Imoniana
Technologies 2025, 13(12), 594; https://doi.org/10.3390/technologies13120594 - 17 Dec 2025
Viewed by 347
Abstract
This paper mainly studies tax inspection decision-making technology, aiming to improve the accuracy and robustness of target recognition, state estimation, and autonomous decision making in complex environments by constructing an application that integrates visual, radar, and inertial navigation information. Tax inspection is a universally complex phenomenon, but little is known about the use of innovative technology to arm tax auditors with tools for monitoring it. Thus, under legitimacy theory, there is an agreement between taxpayers and the tax authorities regarding adequate compliance with tax legislation. The use of systemic controls by tax authorities is essential to track stakeholders’ contracts and ensure the upholding of this mandate. The case study is exploratory, using participant observation and an interventionist approach to tax auditing. The results indicated that the partnership between experienced tax auditors and IT tax auditors offered several tangible benefits for the in-house development and monitoring of an innovative application. It also indicates that OCR supports a data lake for inspectors in which stored information is available on standby during inspection. Furthermore, auditors’ use of mobile applications programmed with intelligent perception and tracking resources, instead of searches on mainframes, streamlined the inspection process. The integration of professional skepticism, empathy among users, and technological innovation created a surge in independence among tax auditors and ensured focus. This paper’s contribution lies in the discussion of the enhancement of tax inspection through target recognition, drawing on legitimacy theory to rethink the relationship between taxpayers and tax authorities regarding adequate compliance with tax legislation, and presenting an exploratory case study using a participant-observation, interventionist approach focused on a tax auditor. The implications of this study for policy makers, auditors, and academics are only the tip of the iceberg, as innovation in public administration presupposes efficiency. As a suggestion for future research, we recommend the infusion of AI into these tools for further efficacy and effectiveness in mitigating fraud in the undue appropriation of taxes and undue competition. Full article
(This article belongs to the Section Information and Communication Technologies)
19 pages, 444 KB  
Article
Enhancing Cascade Object Detection Accuracy Using Correctors Based on High-Dimensional Feature Separation
by Andrey V. Kovalchuk, Andrey A. Lebedev, Olga V. Shemagina, Irina V. Nuidel, Vladimir G. Yakhno and Sergey V. Stasenko
Technologies 2025, 13(12), 593; https://doi.org/10.3390/technologies13120593 - 16 Dec 2025
Cited by 1 | Viewed by 314
Abstract
This study addresses the problem of correcting systematic errors in classical cascade object detectors under severe data scarcity and distribution shift. We focus on the widely used Viola–Jones framework enhanced with a modified Census transform and propose a modular “corrector” architecture that can be attached to an existing detector without retraining it. The key idea is to exploit the blessing of dimensionality: high-dimensional feature vectors constructed from multiple cascade stages are transformed by PCA and whitening into a space where simple linear Fisher discriminants can reliably separate rare error patterns from normal operation using only a few labeled examples. The approach involves image partitioning through a sliding window of fixed aspect ratio and a modified Census transform in which pixel intensity is compared to the mean value within a rectangular neighborhood. Training samples for false-negative and false-positive correctors are selected using dual Intersection-over-Union (IoU) thresholds and probabilistic sampling of true-positive and true-negative fragments. Corrector models are trained on the principles of high-dimensional separability within the paradigm of one- and few-shot learning, utilizing features derived from the cascade stages of the detector. Decision boundaries are optimized using Fisher’s rule, with adaptive thresholding to guarantee zero false acceptance. On two railway image datasets with only about one thousand images each, the proposed correctors increase Precision from 0.36 to 0.65 on identifier detection while maintaining high Recall (0.98 → 0.94), and improve digit detection Recall from 0.94 to 0.98 with negligible loss in Precision (0.92 → 0.91). These results demonstrate that even under scarce training data, high-dimensional feature separation enables effective one-/few-shot error correction for cascade detectors with minimal computational overhead. Full article
(This article belongs to the Special Issue Image Analysis and Processing)
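The corrector's decision rule can be made concrete: after PCA and whitening the class covariances are near identity, so Fisher's direction reduces to the difference of class means, and the adaptive threshold is placed at the highest negative-class score to enforce zero false acceptance on the training negatives. A minimal sketch, assuming already-whitened inputs (not the authors' code):

```python
def fisher_corrector(positives, negatives):
    """Few-shot linear corrector: Fisher direction on whitened features,
    threshold chosen so that no training negative is accepted."""
    dim = len(positives[0])
    mean = lambda pts, i: sum(p[i] for p in pts) / len(pts)
    # in whitened space, Fisher's discriminant direction is the mean difference
    w = [mean(positives, i) - mean(negatives, i) for i in range(dim)]
    score = lambda x: sum(wi * xi for wi, xi in zip(w, x))
    threshold = max(score(x) for x in negatives)  # zero false acceptance
    return lambda x: score(x) > threshold
```

Because only two class means and a max are estimated, a handful of labeled error examples suffices, which is the one-/few-shot regime the abstract describes.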
14 pages, 1284 KB  
Article
A Comparative Study of Machine and Deep Learning Approaches for Smart Contract Vulnerability Detection
by Mohammed Yaseen Alhayani, Wisam Hazim Gwad, Shahab Wahhab Kareem and Moustafa Fayad
Technologies 2025, 13(12), 592; https://doi.org/10.3390/technologies13120592 - 16 Dec 2025
Viewed by 498
Abstract
The increasing use of blockchain smart contracts has introduced new security challenges, as small coding errors can lead to major financial losses. While rule-based static analyzers remain the most common detection tools, their limited adaptability often results in false positives and outdated vulnerability patterns. This study presents a comprehensive comparative analysis of machine learning (ML) and deep learning (DL) methods for smart contract vulnerability detection using the BCCC-SCsVuls-2024 benchmark dataset. Six models (Random Forest, k-Nearest Neighbors, Simple and Deep Multilayer Perceptron, and Simple and Deep one-dimensional Convolutional Neural Networks) were evaluated under a unified experimental framework combining RobustScaler normalization and Principal Component Analysis (PCA) for dimensionality reduction. Our experimental results from a five-fold cross-validation show that the Random Forest classifier achieved the best overall performance with an accuracy of 89.44% and an F1-score of 93.20%, outperforming both traditional and neural models in stability and generalization. PCA-based feature analysis revealed that opcode-level features, particularly stack and memory manipulation instructions (PUSH, DUP, SWAP, and RETURNDATASIZE), were the most influential in defining contract behavior. Full article
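The preprocessing step is straightforward to sketch: RobustScaler centres each feature on its median and divides by the interquartile range, so a handful of extreme opcode counts cannot dominate the scale. A stdlib approximation of sklearn's defaults (quartile interpolation details differ slightly):

```python
import statistics

def robust_scale(column):
    """Median/IQR scaling of one feature column, in the spirit of
    sklearn's RobustScaler defaults."""
    med = statistics.median(column)
    q1, _, q3 = statistics.quantiles(column, n=4)
    iqr = (q3 - q1) or 1.0  # guard against constant features
    return [(x - med) / iqr for x in column]
```

The scaled columns would then feed PCA for dimensionality reduction before the Random Forest, mirroring the pipeline order described above.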
59 pages, 7553 KB  
Review
Turn-Taking Modelling in Conversational Systems: A Review of Recent Advances
by Rutherford Agbeshi Patamia, Ha Pham Thien Dinh, Ming Liu and Akansel Cosgun
Technologies 2025, 13(12), 591; https://doi.org/10.3390/technologies13120591 - 15 Dec 2025
Viewed by 1272
Abstract
Effective turn-taking is fundamental to conversational interactions, shaping the fluidity of communication across human dialogues and interactions with spoken dialogue systems (SDS). Despite its apparent simplicity, conversational turn-taking involves complex timing mechanisms influenced by various linguistic, prosodic, and multimodal cues. This review synthesises recent theoretical insights and practical advancements in understanding and modelling conversational timing dynamics, emphasising critical phenomena such as voice activity (VA), turn floor offsets (TFO), and predictive turn-taking. We first discuss foundational concepts, such as voice activity detection (VAD) and inter-pausal units (IPUs), and highlight their significance for systematically representing dialogue states. Central to the challenge of interactive systems is distinguishing moments when conversational roles shift versus when they remain with the current speaker, encapsulated by the concepts of “hold” and “shift”. The timing of these transitions, measured through TFOs, aligns closely with minimal human reaction times, suggesting biological underpinnings while exhibiting cross-linguistic variability. This review further explores computational turn-taking heuristics and models, noting that simplistic strategies may reduce interruptions yet risk introducing unnatural delays. Integrating multimodal signals (prosodic, verbal, and visual) with predictive mechanisms is emphasised as essential for achieving human-like conversational responsiveness in future systems. Full article
(This article belongs to the Special Issue Collaborative Robotics and Human-AI Interactions)
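The VAD-to-IPU step mentioned above can be made concrete: an inter-pausal unit groups voiced frames, and only a silence longer than a chosen pause threshold closes the unit. A minimal sketch, with illustrative frame-based thresholds (e.g. 20 frames of 10 ms each for a 200 ms pause):

```python
def segment_ipus(vad_frames, max_pause_frames=20):
    """Group binary VAD frames into inter-pausal units (IPUs): silences no
    longer than max_pause_frames stay inside a unit; longer ones split it.
    Returns (start_frame, end_frame) pairs ending on voiced frames."""
    units, start, end, silence = [], None, None, 0
    for i, voiced in enumerate(vad_frames):
        if voiced:
            if start is None:
                start = i  # a new unit opens on the first voiced frame
            end, silence = i, 0
        elif start is not None:
            silence += 1
            if silence > max_pause_frames:
                units.append((start, end))  # pause too long: close the unit
                start, silence = None, 0
    if start is not None:
        units.append((start, end))
    return units
```

The gaps between consecutive units are exactly the candidate turn-transition points whose offsets (TFOs) the review analyses.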
24 pages, 2437 KB  
Article
Optimization of Compressor Preheating to Increase Efficiency, Comfort, and Lifespan
by Anton Dianov
Technologies 2025, 13(12), 590; https://doi.org/10.3390/technologies13120590 - 15 Dec 2025
Viewed by 303
Abstract
Various compressors found in appliances such as air conditioners, refrigerators, and dehumidifiers are gaining popularity in areas including industry, retail, and consumer electronics. This market is growing fast, attracting numerous manufacturers who compete closely with each other. Simultaneously, the requirements for compressor drive efficiency and for reducing their carbon footprint are becoming tougher, prompting manufacturers to pay serious attention to this problem. Compressor drives operate in many modes, and almost all of them have been studied and optimized. The exception is the preheating mode, which is required to warm the lubricating oil before compressor operation begins. This mode is rarely used in warm climates; therefore, previous researchers have ignored it. However, as compressor applications spread into countries with colder climates, the significance of the preheating mode has increased. This study examines the preheating mode of compressor drives and proposes several techniques that increase its efficiency by 4.15% and decrease the preheating time by a factor of 3.6. Furthermore, the author developed an algorithm that distributes the load more evenly across the inverter and motor phases, thus increasing the lifespan of compressors and reducing their carbon footprint. Full article
31 pages, 51329 KB  
Article
Numerical Simulation and Optimization of Spray Cooling on High-Temperature Surfaces in Industrial Rotary Coolers
by Fangshuo Fan, Zuobing Chen, Yanhui Lai, Jiawei Liu and Ya Mao
Technologies 2025, 13(12), 589; https://doi.org/10.3390/technologies13120589 - 15 Dec 2025
Viewed by 323
Abstract
Spray cooling efficiency plays a critical role in the heat dissipation process from the external surface of industrial low-carbon cement rotary coolers. This study numerically investigated the thermal performance of high-temperature zones by examining four spray parameters: spray angle, nozzle distance, spray height, and mass flow rate. Multi-objective optimization design (MOD) was subsequently performed using response surface methodology (RSM). RSM reveals spray angle as the most significant parameter affecting heat transfer. With temperature uniformity as a constraint, MOD yields the following optimal parameters: 89° spray angle, 380 mm nozzle distance, and 663.5 mm spray height. This configuration achieves an average surface temperature of 814.33 K and a heat flux of 131,588.3 W/m². The optimized spray parameters ensure high heat flux and uniform surface temperature while enlarging the heat transfer area and strengthening the synergistic heat transfer between dual nozzles. This approach provides a reliable technical pathway for efficient thermal management in industrial rotary cooler exteriors. Full article
(This article belongs to the Special Issue Technological Advances in Science, Medicine, and Engineering 2025)
34 pages, 5913 KB  
Article
Smart Device Development for Gait Monitoring: Multimodal Feedback in an Interactive Foot Orthosis, Walking Aid, and Mobile Application
by Stefan Resch, André Kousha, Anna Carroll, Noah Severinghaus, Felix Rehberg, Marco Zatschker, Yunus Söyleyici and Daniel Sanchez-Morillo
Technologies 2025, 13(12), 588; https://doi.org/10.3390/technologies13120588 - 13 Dec 2025
Viewed by 483
Abstract
Smart assistive technologies such as sensor-based footwear and walking aids offer promising opportunities for gait rehabilitation through real-time feedback and patient-centered monitoring. While biofeedback applications show great potential, current research rarely explores integrated closed-loop systems with device- and modality-specific feedback. In this work, we present a modular sensor-based system combining a smart foot orthosis and an instrumented forearm crutch to deliver real-time vibrotactile biofeedback. The system integrates plantar pressure and motion sensing, vibrotactile feedback, and wireless communication via a smartphone application. We conducted a user study with eight participants to validate the system’s feasibility for mobile gait detection and app usability, and to evaluate different vibrotactile feedback types across the orthosis and forearm crutch. The results indicate that pattern-based vibrotactile feedback was rated as more useful and suitable for regular use than simple vibration alerts. Moreover, participants reported clear perceptual differences between feedback delivered via the orthosis and the forearm crutch, indicating device-dependent feedback perception. The findings highlight the relevance of feedback strategy design beyond hardware implementation and inform the development of user-centered haptic biofeedback systems. Full article
25 pages, 429 KB  
Article
CALM: Continual Associative Learning Model via Sparse Distributed Memory
by Andrey Nechesov and Janne Ruponen
Technologies 2025, 13(12), 587; https://doi.org/10.3390/technologies13120587 - 13 Dec 2025
Viewed by 648
Abstract
Sparse Distributed Memory (SDM) provides a biologically inspired mechanism for associative and online learning. Transformer architectures, despite exceptional inference performance, remain static and vulnerable to catastrophic forgetting. This work introduces the Continual Associative Learning Model (CALM), a conceptual framework that defines the theoretical base and integration logic for a cognitive model seeking to establish continual, lifelong adaptation without retraining by combining an SDM system with lightweight dual-transformer modules. The architecture proposes an always-online associative memory for episodic storage (System 1), together with a pair of asynchronous transformers that consolidate experience in the background for uninterrupted reasoning and gradual model evolution (System 2). The framework remains compatible with standard transformer benchmarks, establishing a shared evaluation basis for both reasoning accuracy and continual learning stability. Preliminary experiments using the SDMPreMark benchmark evaluate algorithmic behavior across multiple synthetic sets, confirming a critical radius-threshold phenomenon in SDM recall. These results represent a deterministic characterization of SDM dynamics at the component level, preceding integration at the model level with transformer-based semantic tasks. The CALM framework provides a reproducible foundation for studying continual memory and associative learning in hybrid transformer architectures, although future work should involve experiments with non-synthetic, high-load data to confirm scalable behavior under high interference. Full article
(This article belongs to the Special Issue Collaborative Robotics and Human-AI Interactions)
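The radius-threshold behaviour reported for SDM recall can be reproduced with a textbook Kanerva-style memory, in which hard locations activate only within a Hamming-distance radius of the access address. A minimal sketch, not the CALM implementation:

```python
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

class SparseDistributedMemory:
    """Kanerva-style SDM: bit counters at random hard locations, activated
    when a location lies within `radius` Hamming distance of the address."""
    def __init__(self, n_locations, dim, radius, seed=0):
        rng = random.Random(seed)
        self.addresses = [
            [rng.randint(0, 1) for _ in range(dim)] for _ in range(n_locations)
        ]
        self.counters = [[0] * dim for _ in range(n_locations)]
        self.radius = radius

    def _active(self, addr):
        return (c for a, c in zip(self.addresses, self.counters)
                if hamming(a, addr) <= self.radius)

    def write(self, addr, data):
        for ctr in self._active(addr):
            for i, bit in enumerate(data):
                ctr[i] += 1 if bit else -1  # increment for 1-bits, decrement for 0

    def read(self, addr):
        sums = [0] * len(addr)
        for ctr in self._active(addr):
            for i, c in enumerate(ctr):
                sums[i] += c
        return [1 if s > 0 else 0 for s in sums]  # majority vote per bit
```

With the radius near the mean inter-address distance, enough locations activate for the majority vote to recover a stored pattern; shrink the radius below the critical value and activations, and with them recall, collapse, which is the threshold phenomenon the benchmark characterizes.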
36 pages, 829 KB  
Review
AUV Intelligent Decision-Making System Empowered by Deep Learning: Evolution, Challenges and Future Prospects
by Qiulin Ding, Lugang Ye, Hao Chen, Hongyuan Liu, Aoming Liang and Weicheng Cui
Technologies 2025, 13(12), 586; https://doi.org/10.3390/technologies13120586 - 12 Dec 2025
Viewed by 453
Abstract
The intelligent decision-making systems of Autonomous Underwater Vehicles (AUVs) are undergoing a significant transformation, shifting from traditional control theories to data-driven paradigms. Deep learning (DL) serves as the primary driving force behind this evolution; however, its application in complex and unstructured underwater environments continues to present unique challenges. To systematically analyze the development, current obstacles, and future directions of DL-enhanced AUV decision-making systems, this paper proposes an innovative ‘four-module’ decomposition framework consisting of information processing, understanding, judgment, and output. This framework enables a structured review of the progression of DL technologies across each stage of the AUV decision-making information flow. To further bridge the gap between theoretical advancements and practical implementation, we introduce a task complexity–environment uncertainty four-quadrant analytical matrix, offering strategic guidance for selecting appropriate DL architectures across diverse operational scenarios. Additionally, this work identifies key challenges in the field as well as anticipates future developments to solve these challenges. This paper aims to provide researchers and engineers with a comprehensive and strategic overview to support the design and optimization of next-generation AUV decision-making architectures. Full article
22 pages, 1158 KB  
Article
High-Speed Architecture for Hybrid Arithmetic–Huffman Data Compression
by Yair Wiseman
Technologies 2025, 13(12), 585; https://doi.org/10.3390/technologies13120585 - 12 Dec 2025
Viewed by 542
Abstract
This paper proposes a hardware–software co-design for adaptive lossless compression based on Hybrid Arithmetic–Huffman Coding, a table-driven approximation of arithmetic coding that preserves near-optimal compression efficiency while eliminating the multiplicative precision and sequential bottlenecks that have traditionally prevented arithmetic coding deployment in resource-constrained embedded systems. The compression pipeline is partitioned as follows: flexible software on the processor core dynamically builds and adapts the prefix coding (usually Huffman Coding) frontend for accurate probability estimation and binarization; the resulting binary stream is fed to a deeply pipelined systolic hardware accelerator that performs binary arithmetic coding using pre-calibrated finite state transition tables, dedicated renormalization logic, and carry propagation mitigation circuitry instantiated in on-chip memory. The resulting implementation achieves compression ratios consistently within 0.4% of the theoretical entropy limit, multi-gigabit per second throughput in 28 nm/FinFET nodes, and approximately 68% lower energy per compressed byte than optimized software arithmetic coding, making it ideally suited for real-time embedded vision, IoT sensor networks, and edge multimedia applications. Full article
(This article belongs to the Special Issue Optimization Technologies for Digital Signal Processing)
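The software frontend's job, building an adaptive prefix code that binarizes symbols before the hardware arithmetic stage, can be sketched with a standard Huffman construction (illustrative only; the table-driven arithmetic backend is not shown):

```python
import heapq

def build_huffman(freqs):
    """Build a Huffman prefix-code table {symbol: bitstring} from
    symbol frequencies, merging the two least-frequent subtrees each step."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)  # unique integers keep tuple comparisons total
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]
```

The resulting prefix-free bitstream is what the pipelined backend would then re-encode arithmetically; keeping probability estimation in software is what lets the code adapt while the hardware tables stay fixed.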

32 pages, 983 KB  
Review
Innovations and Future Perspectives in the Use of Artificial Intelligence for Cybersecurity: A Scoping Review
by Cristian Randieri, Francesca Fiani, Kevin Lubrano and Christian Napoli
Technologies 2025, 13(12), 584; https://doi.org/10.3390/technologies13120584 - 11 Dec 2025
Viewed by 489
Abstract
Cybersecurity is a field in which the integration of artificial intelligence (AI) represents a significant direction for protection against cyber threats. This scoping review explores the current impact and future prospects of AI in four key areas of cybersecurity: threat detection, endpoint security, phishing and fraud detection, and network security. The main goal was to answer the research question, ‘Is AI an effective method to enhance current infrastructures’ cybersecurity?’ Method: Through the PRISMA-ScR protocol, 2548 records were identified from the Google Scholar database from January 2020 to April 2025. The following search terms were used to identify available literature: “Artificial Intelligence Cybersecurity”, “Machine Learning Cybersecurity”, “Cybersecurity Innovation AI”, “AI Future Perspective Cybersecurity”, “Machine Learning Innovation Cybersecurity”. The search included only articles in English; no grey literature was included. Articles focused on performance optimization, cost analysis, or business models, without attention to privacy and security, were discarded. Results: The impact and performance of AI algorithms were highlighted through a selection of 20 articles. Both Machine Learning and Neural Network methods have been employed in the literature, with Decision Trees and Random Forest being the most common approaches. Discussion: The main limitations common to the analyzed articles are discussed, highlighting possible future directions of research to tackle them. Conclusions: Despite the evidenced limitations, AI showed promising results in improving cybersecurity, especially for cyberattack detection and classification, with methods that achieve very high accuracy and trustworthiness. Full article
(This article belongs to the Collection Review Papers Collection for Advanced Technologies)

33 pages, 849 KB  
Review
Transport and Application Layer Protocols for IoT: Comprehensive Review
by Ionel Petrescu, Elisabeta Niculae, Viorel Vulturescu, Andrei Dimitrescu and Liviu Marian Ungureanu
Technologies 2025, 13(12), 583; https://doi.org/10.3390/technologies13120583 - 11 Dec 2025
Viewed by 652
Abstract
The Internet of Things (IoT) connects billions of heterogeneous devices, necessitating lightweight, efficient, and secure communication protocols to support a diverse range of use cases. While physical and network-layer technologies enable connectivity, transport and application-layer protocols determine how IoT devices exchange, manage, and secure information. The diverse and constrained nature of IoT devices presents a challenge in selecting appropriate communication protocols, with no one-size-fits-all solution existing. This article provides a comprehensive review of key transport and application protocols in IoT, including MQTT, MQTT-SN, CoAP, LwM2M, AMQP, XMPP, WebSockets, HTTP/HTTPS, and OPC UA. Each protocol is examined in terms of its design principles, communication patterns, reliability mechanisms, and security features. The discussion highlights their suitability for different deployment scenarios, ranging from resource-constrained sensor networks to industrial automation and cloud-integrated consumer devices. By mapping protocol characteristics to IoT requirements, such as scalability, interoperability, power efficiency, and manageability, the article provides guidelines for selecting a protocol stack that optimizes IoT system performance and long-term sustainability. Our analysis reveals that while MQTT dominates cloud telemetry, CoAP and LwM2M are superior in IP-based constrained networks, and emerging solutions like OSCORE are critical for end-to-end security. Full article
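As a concrete taste of one protocol mechanic the review covers, the sketch below implements MQTT's topic-filter wildcard matching, where '+' matches exactly one topic level and '#', allowed only as the last level, matches all remaining levels including the parent. This is a simplified illustration that ignores spec edge cases such as '$'-prefixed system topics:

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Check an MQTT topic filter against a concrete topic name.

    Core wildcard rules: '+' matches exactly one level; '#' (last
    level only) matches the remaining levels, including zero levels.
    """
    f_levels = filter_.split("/")
    t_levels = topic.split("/")
    for i, f in enumerate(f_levels):
        if f == "#":
            return True  # matches the parent level and anything deeper
        if i >= len(t_levels):
            return False  # filter is longer than the topic
        if f != "+" and f != t_levels[i]:
            return False  # literal level mismatch
    return len(f_levels) == len(t_levels)
```

For example, `topic_matches("sensors/+/temperature", "sensors/room1/temperature")` holds, while `topic_matches("sensors/+", "sensors")` does not, because '+' requires a level to be present.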

23 pages, 3058 KB  
Article
Research on the Adhesive Properties of Flax Fiber in the Production of Composite Materials
by Sergiy Lavrenko, Olga Gorach and Nataliia Lavrenko
Technologies 2025, 13(12), 582; https://doi.org/10.3390/technologies13120582 - 11 Dec 2025
Viewed by 348
Abstract
The article presents the results of theoretical and experimental research into the adhesive properties of flax fiber and their impact on the scientific development of composite material production. The research established that combining natural fibers with a polymer material or matrix increases the complexity of the composite forming process and causes problems in the physicochemical processes of matrix–filler interaction. This is explained by the low wettability of flax bast (13.0–14.5 g). It was found that the presence of cutins on oil flax fibers determines their high degree of hydrophobicity. To improve the adhesive properties of the bast, it was chemically treated to remove cutins and other cellulose companions, which are high-molecular-weight compounds. The bast was chemically treated using the oxidative method. After chemical treatment, a fiber enriched with cellulose and freed from waxy substances was obtained: the cellulose content increased from 47.67–53.33% to 90.01–97.68%, and the waxy substances were almost completely removed, their content decreasing from 18.13–18.57% in the bast to 0.01–0.04% after treatment. After chemical treatment, the wettability of the fiber increased to the required levels of 104.94–122.78 g, indicating that the adhesive properties were significantly improved. The results of studies on physical and mechanical indicators demonstrate the high quality of the obtained composites. In terms of fluidity, all samples were superior to the control sample reinforced with cotton fiber. The theoretical and experimental research enabled the preparation of experimental samples of composite materials. Full article

23 pages, 4602 KB  
Article
A Two-Step Method for Diode Package Characterization Based on Small-Signal Behavior Analysis
by Hidai A. Cárdenas-Herrera and Roberto S. Murphy-Arteaga
Technologies 2025, 13(12), 581; https://doi.org/10.3390/technologies13120581 - 11 Dec 2025
Viewed by 265
Abstract
This article presents a comprehensive and intuitive analysis of the impact of packaging on diode performance and a two-step method for packaging parameter extraction. This is performed using a single forward bias point, one-port measurements and probe tips on a conventional printed circuit board (PCB). A PIN diode was used to validate the method, biased from reverse (−5 V) to forward (1.22 V) bias. Measurements were performed up to 27 gigahertz (GHz). The complete diode characterization process—from the design and the electrical modeling of the test fixture to the extraction of the unpackaged diode measurements—is detailed. The parameters of the package model were extracted, its effects were removed from the measurement, and the behavior of the unpackaged diode was determined. Three operating regions based on their radiofrequency and direct current (RF-DC) behavior were proposed, and an electrical model of the unpackaged diode was derived for each region. The results showed that the influence of the package caused the diode's behavior to remain unchanged under different biases, indicating that it no longer rectified. The results presented herein are validated by the excellent correlation between the diode’s measured S-parameters, impedance, and admittance and their corresponding models. Full article
(This article belongs to the Special Issue Microelectronics and Electronic Packaging for Advanced Sensor System)
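The de-embedding step described here can be sketched numerically. Assuming a deliberately simple two-element package model (series lead inductance plus shunt pad capacitance, with illustrative values not taken from the paper), embedding a known intrinsic impedance and then inverting the package model recovers it exactly:

```python
import math

LS = 0.8e-9   # series lead inductance (H), assumed value
CP = 0.3e-12  # shunt pad capacitance (F), assumed value
F = 10e9      # test frequency (Hz)
W = 2 * math.pi * F  # angular frequency

def embed(zd: complex) -> complex:
    """Measured impedance: package model wrapped around the die."""
    z_par = 1 / (1 / zd + 1j * W * CP)  # die in parallel with pad capacitance
    return 1j * W * LS + z_par           # plus series lead inductance

def deembed(zm: complex) -> complex:
    """Invert the package model to recover the intrinsic impedance."""
    z1 = zm - 1j * W * LS                # strip the series inductance
    return 1 / (1 / z1 - 1j * W * CP)    # strip the shunt capacitance

zd_true = complex(5.0, -20.0)  # illustrative intrinsic diode impedance
zd_back = deembed(embed(zd_true))
```

In practice the package parameters would first be extracted from measurements, as the paper's two-step method does; the point here is only that a known package topology can be peeled off analytically.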

19 pages, 635 KB  
Article
Formal Verification of Transcompiled Mobile Applications Using First-Order Logic
by Ahmad Ahmad Muhammad, Mahitap Ayman, Samer A. Elhossany, Walaa Medhat, Sahar Selim, Hala Zayed, Ahmed H. Yousef, Axel Jantsch and Nahla Elaraby
Technologies 2025, 13(12), 580; https://doi.org/10.3390/technologies13120580 - 10 Dec 2025
Viewed by 568
Abstract
The increasing interest in automated code conversion and transcompilation—driven by the need to support multiple platforms efficiently—has raised new challenges in verifying that translated code preserves the intended behavior of the original. Although it has not yet been widely adopted, transcompilation offers promising applications in software reuse and cross-platform migration. With the growing use of Large Language Models (LLMs) in code translation, where internal reasoning remains inaccessible, verifying the equivalence of their generated outputs has become increasingly essential. However, existing evaluation metrics—such as BLEU and CodeBLEU, which are commonly used as baselines in transcompiler evaluation—primarily measure syntactic similarity, which does not guarantee semantic correctness. This syntactic bias often leads to misleading evaluations, where structurally different but semantically equivalent code is penalized. To address this limitation, we propose a formal verification framework based on equivalence checking using First-Order Logic (FOL). The approach models core programming constructs—such as loops, conditionals, and function calls—as logical axioms, enabling equivalence to be assessed at the behavioral level rather than by textual similarity. We initially used the Z3 solver to manually encode Swift and Java code into FOL. To improve scalability and automation, we later integrated ANTLR to parse and translate both the source and transcompiled codes into logical representations. Although the framework is language-agnostic, we demonstrate its effectiveness through a case study of Swift-to-Java transcompilation. The experimental results demonstrated that our method effectively identifies semantic equivalence even when syntax differs significantly. Our method achieves an average semantic accuracy of 86.1%, compared to BLEU’s syntactic accuracy of 64.45%. This framework bridges the gap between code translation and formal semantic verification. These results highlight the potential for formal equivalence checking to serve as a more reliable validation method in code translation tasks, enabling more trustworthy cross-language code conversion. Full article
(This article belongs to the Section Information and Communication Technologies)
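The gap between syntactic similarity and behavioral equivalence that motivates this paper can be illustrated without a solver. The sketch below is a toy stand-in for the authors' FOL/Z3 encoding (the source snippets and the token-overlap "BLEU proxy" are invented for illustration): two implementations share almost no tokens, yet agree on every input in an exhaustive test domain:

```python
def sum_loop(n: int) -> int:
    # Iterative form, as transcompiled output might look
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n: int) -> int:
    # Closed form: syntactically different, semantically equal
    return n * (n + 1) // 2

def token_overlap(a: str, b: str) -> float:
    """Crude syntactic-similarity proxy (Jaccard over whitespace tokens)."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb)

# Literal source snippets stand in for parsed code
SRC_LOOP = "total = 0\nfor i in range(1, n + 1):\n    total += i\nreturn total"
SRC_FORMULA = "return n * (n + 1) // 2"

syntactic = token_overlap(SRC_LOOP, SRC_FORMULA)
semantically_equal = all(sum_loop(n) == sum_formula(n) for n in range(200))
```

A syntax-based metric scores this pair low even though they are behaviorally identical, which is exactly the kind of case the paper's FOL equivalence checking is designed to credit.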

24 pages, 2462 KB  
Article
Two-Layer Low-Carbon Optimal Dispatch of Integrated Energy Systems Based on Stackelberg Game
by Fan Zhang, Jijing Yan, Yuxi Li and Ziwei Zhu
Technologies 2025, 13(12), 579; https://doi.org/10.3390/technologies13120579 - 10 Dec 2025
Viewed by 204
Abstract
As a key node of the energy internet, the park-level integrated energy system undertakes the dual functions of improving energy supply reliability and promoting low-carbon development in the transformation of the global energy structure. The need to simultaneously meet terminal energy demand and market regulation requirements constrains operational optimization due to factors such as energy price fluctuations. Future research should focus on supply–demand coordination mechanisms and energy efficiency improvement strategies to advance the high-quality development of such systems. To this end, this study constructs a collaborative optimization framework integrating demand response based on a dual-compensation mechanism and dynamic multi-energy pricing and incorporates it into a Stackelberg game-based low-carbon economic dispatch model. By incorporating a dynamic multi-energy pricing mechanism, the model coordinates and optimizes the interests of the upper-level park integrated energy system operator (PIESO) and the lower-level park users. On the supply side, the model couples a two-stage power-to-gas (P2G) device with a stepwise carbon trading mechanism, forming a low-carbon dispatch system enabling source–grid–load coordination. On the demand side, an integrated demand response mechanism with dual compensation is introduced to enhance the coupling intensity of multi-energy flows and the adjustability of price elasticity. The simulation results show that, compared with traditional models, the proposed optimization framework achieves improvements in three dimensions: carbon emissions, economic benefits, and user costs. Specifically, the carbon emission intensity is reduced by 28.04%, the operating income of the PIESO is increased by 29.53%, and the users’ energy consumption cost is decreased by 13.05%, which verifies the effectiveness and superiority of the proposed model. Full article
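The stepwise carbon trading mechanism the model couples to the P2G device can be sketched as a tiered cost function: emissions above the free quota are priced in ladder intervals, each successive interval at a higher unit price. All parameter values below are illustrative assumptions, not the paper's:

```python
def stepwise_carbon_cost(emissions: float, quota: float,
                         base_price: float = 100.0,
                         interval: float = 500.0,
                         growth: float = 0.25) -> float:
    """Ladder-type carbon trading cost (illustrative sketch).

    Emissions above the quota are charged tier by tier: each successive
    `interval` tonnes costs (1 + growth) times more per tonne than the
    previous tier. Emissions below quota earn revenue at the base price
    (returned as a negative cost).
    """
    excess = emissions - quota
    if excess <= 0:
        return base_price * excess  # selling surplus allowances
    cost, price, remaining = 0.0, base_price, excess
    while remaining > 0:
        step = min(remaining, interval)
        cost += price * step
        price *= 1 + growth   # next ladder tier is more expensive
        remaining -= step
    return cost
```

The increasing marginal price is what gives the dispatch model an incentive to keep emissions near or below the quota rather than simply paying a flat rate.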

26 pages, 5681 KB  
Article
Physiological Artifact Suppression in EEG Signals Using an Efficient Multi-Scale Depth-Wise Separable Convolution and Variational Attention Deep Learning Model for Improved Neurological Health Signal Quality
by Vandana Akshath Raj, Tejasvi Parupudi, Vishnumurthy Kedlaya K, Ananthakrishna Thalengala and Subramanya G. Nayak
Technologies 2025, 13(12), 578; https://doi.org/10.3390/technologies13120578 - 9 Dec 2025
Viewed by 474
Abstract
Artifacts remain a major challenge in electroencephalogram (EEG) recordings, often degrading the accuracy of clinical diagnosis, brain computer interface (BCI) systems, and cognitive research. Although recent deep learning approaches have advanced EEG denoising, most still struggle to model long-range dependencies, maintain computational efficiency, and generalize to unseen artifact types. To address these challenges, this study proposes MDSC-VA, an efficient denoising framework that integrates multi-scale (M) depth-wise separable convolution (DSConv), variational autoencoder-based (VAE) latent encoding, and a multi-head self-attention mechanism. This unified architecture effectively balances denoising accuracy and model complexity while enhancing generalization to unseen artifact types. Comprehensive evaluations on three open-source EEG datasets, including EEGdenoiseNet, a Motion Artifact Contaminated Multichannel EEG dataset, and the PhysioNet EEG Motor Movement/Imagery dataset, demonstrate that MDSC-VA consistently outperforms state-of-the-art methods, achieving a higher signal-to-noise ratio (SNR), lower relative root mean square error (RRMSE), and stronger correlation coefficient (CC) values. Moreover, the model preserved over 99% of the dominant neural frequency band power, validating its ability to retain physiologically relevant rhythms. These results highlight the potential of MDSC-VA for reliable clinical EEG interpretation, real-time BCI systems, and advancement towards sustainable healthcare technologies in line with SDG-3 (Good Health and Well-Being). Full article
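The efficiency of the depth-wise separable convolution (DSConv) building block named above comes from factoring a standard convolution into a depth-wise stage plus a 1×1 point-wise stage. A quick parameter count (bias terms ignored; the layer sizes are arbitrary, not the MDSC-VA configuration) shows the saving:

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in a standard k x k convolution layer (no bias)."""
    return k * k * c_in * c_out

def dsconv_params(k: int, c_in: int, c_out: int) -> int:
    """Depth-wise separable: k x k depth-wise + 1 x 1 point-wise."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 64)    # one 3x3 filter bank per output channel
dsc = dsconv_params(3, 64, 64)  # per-channel 3x3 filters, then channel mixing
reduction = 1 - dsc / std       # fraction of weights saved
```

For a 3×3 layer with 64 input and 64 output channels, the separable factorization uses well under a fifth of the weights, which is why such blocks suit efficiency-focused denoising models.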

34 pages, 20812 KB  
Article
Surreal AI: The Generation, Reconstruction, and Assessment of Surreal Images and 3D Models
by Naai-Jung Shih
Technologies 2025, 13(12), 577; https://doi.org/10.3390/technologies13120577 - 8 Dec 2025
Cited by 1 | Viewed by 679
Abstract
Surrealism applies metaphors to create a vocabulary of contexts and scenes. Can AI interpret surrealism? What occurs if a negative prompt is input for 3D reconstruction? This study aims to generate surreal images in AI and to assess the subsequent 3D reconstructed models as an exemplification of context. This AI interpretation study uses 87 sets of conflicting prompts to generate images with novel 3D structural and visual details. Eight characteristic 3D models were selected with geometric features modified by functions, such as noise reduction, to identify the changes made to the original shape, with upper and lower bounds of 92.11% and 47.89% for area and of 20.51% and 1.46% for volume, indicating structural details. This study creates a unique numeric identity of surreal images upon 3D reconstruction in terms of the relative percentage of change made to the original shape. AI can create a connection between 2D surreal imagination and the 3D physical world, in which the images and models are also appropriate for video morphing, situated elaboration in AR scenes, and verified 3D RP prints. Full article

25 pages, 8829 KB  
Article
Numerical and Experimental Investigations on Oil Supply Characteristics of a Multi-Passage Lubrication System for a Three-Stage Planetary Transmission in a Tracked Vehicle
by Jing Zhang, Peng Jin, Xiaozhou Hu and Yangmei Yuan
Technologies 2025, 13(12), 576; https://doi.org/10.3390/technologies13120576 - 8 Dec 2025
Viewed by 263
Abstract
The multi-passage lubrication system is adopted to meet the demand of the main heat generation parts (gears and bearings) in the three-stage planetary transmission system of a large tracked vehicle. As rotational speed increases, the flow regime inside the passages with multi-oil outlets becomes highly complex. Under high-speed conditions, the flow rate in Zone 2 decreases sharply, and some oil outlets even drop to zero flow (a 100% reduction), which results in an unstable oil supply for heat generation parts and even potential lubrication cut-off. In the present work, the lubrication characteristics of the oil supply system for the three-stage planetary transmission system are investigated by a combination of CFD (computational fluid dynamics) simulations and experiments. A complete CFD model of the multi-passage lubrication system is established, comprising a stationary oil passage, a main oil passage, and a three-stage variable-speed oil passage. A transient calculation method based on sliding mesh rotation domain control is used to simulate the oil-filling process in the oil passages, and the oil supply characteristics of the variable-speed oil passage are investigated. A test bench for the multi-stage planetary transmission system is designed and constructed to collect oil flow data from outlets of planetary gear sets. The comparison between simulated and experimental results confirms the validity of the proposed numerical method. Additionally, numerical simulations are conducted to investigate the effects of key factors, including input speed, oil supply pressure, and oil temperature, on the oil flow rate of outlets. The results indicate that the rotational speed is the major parameter affecting the oil flow rate at the oil passage outlets. This work provides practical guidance for optimizing lubrication design in complex multi-stage planetary transmission systems. Full article

21 pages, 2313 KB  
Review
A Bibliometric and Network Analysis of Digital Twins and BIM in Water Distribution Systems
by Chiamba Ricardo Chiteculo Canivete, Mercy Chitauro, Martina Flörke and Maduako E. Okorie
Technologies 2025, 13(12), 575; https://doi.org/10.3390/technologies13120575 - 8 Dec 2025
Viewed by 426
Abstract
The increasing complexity of water distribution systems (WDSs) and the growing demand for sustainable infrastructure management have spurred interest in Building Information Modelling (BIM) and Digital Twin (DT) technologies. This study presents a comprehensive bibliometric and thematic literature review aiming to identify key trends, research clusters, and knowledge gaps at the intersection of BIM, DT, and WDSs. Using the Scopus database, 95 relevant publications from 2004 to 2024 were systematically analyzed. VOSviewer was applied to create, visualize, and analyze maps of countries, journals, documents, and keywords based on citation, co-citation, collaboration, and co-occurrence data. The results indicate a sharp rise in scholarly attention after 2020, with dominant contributions from European institutions. Co-authorship networks show limited global interconnectedness, suggesting a need for more inclusive and diverse research partnerships, particularly involving developing countries, around integrated DT and BIM. This study characterizes the state of the art and future requirements for research on the use of DT and BIM technologies in WDSs and makes a noteworthy contribution to the body of knowledge. Future research should focus on integrating DT and BIM technologies with machine learning, addressing the scalability challenges of real-time anomaly-detection models and advancing decision-making and operational resilience in WDSs. Full article

32 pages, 4849 KB  
Systematic Review
Artificial Intelligence in Solar-Assisted Greenhouse Systems: A Technical, Systematic and Bibliometric Review of Energy Integration and Efficiency Advances
by Edwin Villagran, John Javier Espitia, Fabián Andrés Velázquez, Andres Sarmiento, Diego Alejandro Salinas Velandia and Jader Rodriguez
Technologies 2025, 13(12), 574; https://doi.org/10.3390/technologies13120574 - 6 Dec 2025
Viewed by 767
Abstract
Protected agriculture increasingly requires solutions that reduce energy consumption and environmental impacts while maintaining stable microclimatic conditions. The integration of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) with solar technologies has emerged as a pathway toward autonomous and energy-efficient greenhouses and solar dryers. This study analyzes the scientific and technological evolution of this convergence using a mixed bibliometric and systematic review approach, following PRISMA 2020 guidelines. From Scopus records (2012–2025), 115 documents were screened and 79 met the inclusion criteria. Bibliometric results reveal accelerated growth since 2019, led by Engineering, Computer Science, and Energy, with China, India, Saudi Arabia, and the United Kingdom as dominant contributors. Thematic analysis identifies four major research fronts: (i) thermal modeling and energy efficiency, (ii) predictive control and microclimate automation, (iii) integration of photovoltaic–thermal (PV/T) systems and phase change materials (PCMs), and (iv) sustainability and agrivoltaics. Systematic evidence shows that AI-, ML-, and DL-based models improve solar forecasting, microclimate regulation, and energy optimization; model predictive control (MPC), deep reinforcement learning (DRL), and energy management systems (EMS) enhance operational efficiency; and PV/T–PCM hybrids strengthen heat recovery and storage. Remaining gaps include long-term validation, metric standardization, and cross-context comparability. Overall, the field is advancing toward near-zero-energy greenhouses powered by the Internet of Things (IoT), AI, and solar energy, enabling resilient, efficient, and decarbonized agro-energy systems. Full article

22 pages, 3542 KB  
Article
Dual Resource Scheduling Method of Production Equipment and Rail-Guided Vehicles Based on Proximal Policy Optimization Algorithm
by Nengqi Zhang, Bo Liu and Jian Zhang
Technologies 2025, 13(12), 573; https://doi.org/10.3390/technologies13120573 - 5 Dec 2025
Viewed by 1599
Abstract
In the context of intelligent manufacturing, the integrated scheduling problem of dual rail-guided vehicles (RGVs) and multiple parallel processing equipment in flexible manufacturing systems has gained increasing importance. This problem exhibits spatiotemporal coupling and dynamic constraint characteristics, making traditional optimization methods ineffective at finding optimal solutions. At the problem formulation level, the dual resource scheduling task is modeled as a mixed-integer optimization problem. An intelligent scheduling framework based on action mask-constrained Proximal Policy Optimization (PPO) deep reinforcement learning is proposed to achieve integrated decision-making for production equipment allocation and RGV path planning. The approach models the scheduling problem as a Markov Decision Process, designing a high-dimensional state space, along with a multi-discrete action space that integrates machine selection and RGV motion control. The framework employs a shared feature extraction layer and dual-head Actor-Critic network architecture, combined with parallel experience collection and synchronous parameter update mechanisms. In computational experiments across different scales, the proposed method achieves an average makespan reduction of 15–20% compared with numerical methods, while exhibiting excellent robustness under uncertain conditions including processing time fluctuations. Full article
(This article belongs to the Section Manufacturing Technology)
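Action masking, as used in this paper's PPO policy head, is typically implemented by forcing the logits of invalid actions to negative infinity before the softmax, so they receive exactly zero probability. A minimal sketch in plain Python (the logits and mask are made up; at least one action must be valid):

```python
import math

def masked_policy(logits: list[float], mask: list[int]) -> list[float]:
    """Masked softmax: invalid actions (mask == 0) get probability zero.

    Invalid logits are set to -inf before normalization, the same idea
    as an action-mask-constrained policy head. Assumes at least one
    valid action, otherwise the normalizer would be zero.
    """
    masked = [l if m else float("-inf") for l, m in zip(logits, mask)]
    mx = max(masked)  # subtract max for numerical stability
    exps = [math.exp(l - mx) for l in masked]  # math.exp(-inf) == 0.0
    z = sum(exps)
    return [e / z for e in exps]

# Highest raw logit (3.0) belongs to a masked action and gets zero probability.
probs = masked_policy([2.0, 1.0, 0.5, 3.0], [1, 1, 0, 0])
```

Masking this way keeps the policy gradient well defined: zero-probability actions are never sampled and contribute nothing to the log-likelihood.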

41 pages, 6103 KB  
Article
H-RT-IDPS: A Hierarchical Real-Time Intrusion Detection and Prevention System for the Smart Internet of Vehicles via TinyML-Distilled CNN and Hybrid BiLSTM-XGBoost Models
by Ikram Hamdaoui, Chaymae Rami, Zakaria El Allali and Khalid El Makkaoui
Technologies 2025, 13(12), 572; https://doi.org/10.3390/technologies13120572 - 5 Dec 2025
Viewed by 570
Abstract
The integration of connected vehicles into smart city infrastructure introduces critical cybersecurity challenges for the Internet of Vehicles (IoV), where resource-constrained vehicles and powerful roadside units (RSUs) must collaborate for secure communication. We propose H-RT-IDPS, a hierarchical real-time intrusion detection and prevention system targeting two high-priority IoV security pillars: availability (traffic overload) and integrity/authenticity (spoofing), with spoofing evaluated across multiple subclasses (GAS, RPM, SPEED, and steering wheel). In the offline phase, deep learning and hybrid models were benchmarked on the vehicular CAN bus dataset CICIoV2024, with the BiLSTM-XGBoost hybrid chosen for its balance between accuracy and inference speed. Real-time deployment uses a TinyML-distilled CNN on vehicles for ultra-lightweight, low-latency detection, while RSU-level BiLSTM-XGBoost performs a deeper temporal analysis. A Kafka–Spark Streaming pipeline supports localized classification, prevention, and dashboard-based monitoring. In baseline, stealth, and coordinated modes, the evaluation achieved accuracy, precision, recall, and F1-scores all above 97%. The mean end-to-end inference latency was 148.67 ms, and the resource usage was stable. The framework remains robust in both high-traffic and low-frequency attack scenarios, enhancing operator situational awareness through real-time visualizations. These results demonstrate a scalable, explainable, and operator-focused IDPS well suited for securing SC-IoV deployments against evolving threats. Full article
(This article belongs to the Special Issue Research on Security and Privacy of Data and Networks)

22 pages, 698 KB  
Article
Model Predictive Load Frequency Control for Virtual Power Plants: A Mixed Time- and Event-Triggered Approach Dependent on Performance Standard
by Liangyi Pu, Jianhua Hou, Song Wang, Haijun Wei, Yanghaoran Zhu, Xiong Xu and Xiongbo Wan
Technologies 2025, 13(12), 571; https://doi.org/10.3390/technologies13120571 - 5 Dec 2025
Viewed by 406
Abstract
To improve the load frequency control (LFC) performance of power systems incorporating virtual power plants (VPPs) while reducing network resource consumption, a model predictive control (MPC) method based on a mixed time/event-triggered mechanism (MTETM) is proposed. This mechanism integrates an event-triggered mechanism (ETM) with a time-triggered mechanism (TTM), where ETM avoids unnecessary signal transmission and TTM ensures fundamental control performance. Subsequently, for the LFC system incorporating VPPs, an MPC problem with hard state constraints is formulated and transformed into a “min-max” optimisation problem. Through linear matrix inequalities, the original optimisation problem is equivalently transformed into an auxiliary optimisation problem, with the optimal control law solved via rolling optimisation. Theoretical analysis demonstrates that the proposed auxiliary optimisation problem possesses recursive feasibility, whilst the closed-loop system satisfies input-to-state stability. Finally, validation through case studies of two regional power systems demonstrates that the MPC approach based on MTETM outperforms the ETM-based MPC approach in terms of control performance while maintaining a triggering rate of 33.3%. Compared with the TTM-based MPC algorithm, the MTETM-based MPC method reduces the triggering rate by 66.7%, while maintaining nearly equivalent control performance. Consequently, the results validate the effectiveness of the MTETM-based MPC approach in conserving network resources while maintaining control performance. Full article
(This article belongs to the Special Issue Next-Generation Distribution System Planning, Operation, and Control)
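The mixed triggering idea in the abstract above reduces to a simple gate on each sampled state: transmit when an event condition fires, or unconditionally once a maximum inter-transmission time elapses. A minimal Python sketch, where the relative-threshold event rule and the values of `sigma` and `t_max` are illustrative assumptions rather than the paper's exact conditions:

```python
def mtetm_should_transmit(x, x_last, t_since_last, sigma=0.1, t_max=5):
    """Decide whether to transmit the current state sample.

    Event rule (ETM part): transmit when the deviation from the last
    transmitted state exceeds a threshold proportional to the current
    state magnitude.
    Time rule (TTM part): transmit anyway once t_max sampling periods
    have elapsed, guaranteeing a minimum update rate.
    """
    event = abs(x - x_last) > sigma * abs(x)
    timeout = t_since_last >= t_max
    return event or timeout
```

With `sigma = 0.1`, a 20% drift in the state triggers immediately, while an unchanged state is still transmitted every `t_max` periods; it is this time-triggered fallback that keeps the effective update rate bounded from below.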
32 pages, 6175 KB  
Article
Comprehensive Image-Based Validation Framework for Particle Motion in DEM Models Under Field-like Conditions
by Kuře Jiří and Kuřetová Barbora
Technologies 2025, 13(12), 570; https://doi.org/10.3390/technologies13120570 - 5 Dec 2025
Viewed by 327
Abstract
Accurate numerical prediction of particle–tool interaction requires validation methods that closely reflect the complexity of real operating conditions. This study introduces a comprehensive methodology for validating the motion of particulate material modeled using the Discrete Element Method (DEM) under field-like conditions, with experimental measurements conducted directly during agricultural processing. The proposed framework integrates image analysis with manual extraction of experimental particle trajectories, providing an efficient, flexible, and cost-effective validation approach. A multilayer perceptron artificial neural network (ANN) trained on 94,939 calibration samples was employed to transform pixel coordinates from two synchronized cameras into 3D spatial positions. To the best of the authors’ knowledge, this represents the first application of an ANN-based trajectory reconstruction method under laboratory soil-channel conditions that replicate field-representative geometry and operating velocities. Experiments were conducted in a laboratory soil channel using a full-scale agricultural chisel operating at 1.0 and 1.5 m·s−1, corresponding to realistic tillage velocities. The ANN achieved excellent accuracy (R2 = 0.9994, 0.9993, and 0.9988 for the X-, Y-, and Z-axes; average deviation 2.7 mm), and the subsequent comparison with DEM simulations resulted in an average nRMSE of 4.7% for 1.0 m·s−1 and 9.41% for 1.5 m·s−1. The results confirm that the proposed methodology enables precise reconstruction of particle trajectories and provides a robust framework for the validation and calibration of DEM models under conditions closely approximating real field environments.
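The pixel-to-position step amounts to regressing (X, Y, Z) from the four pixel coordinates supplied by the two synchronized cameras. A numpy-only sketch on synthetic stand-in data, using an illustrative one-hidden-layer network trained with plain gradient descent; the paper's architecture, training procedure, and calibration data are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the calibration data: pixel coordinates from
# two cameras (u1, v1, u2, v2) and corresponding 3-D positions (X, Y, Z).
# The real study used 94,939 measured calibration samples.
U = rng.uniform(-1, 1, size=(500, 4))
W_true = rng.normal(size=(4, 3))
Y = np.tanh(U) @ W_true

# One-hidden-layer MLP; layer size and learning rate are guesses.
W1 = rng.normal(scale=0.5, size=(4, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 3)); b2 = np.zeros(3)
mse0 = float((((np.tanh(U @ W1 + b1) @ W2 + b2) - Y) ** 2).mean())

lr = 0.05
for _ in range(2000):
    H = np.tanh(U @ W1 + b1)              # hidden activations
    P = H @ W2 + b2                       # predicted (X, Y, Z)
    G = 2 * (P - Y) / len(U)              # dLoss/dP for MSE loss
    gW2 = H.T @ G; gb2 = G.sum(0)
    GH = (G @ W2.T) * (1 - H ** 2)        # backprop through tanh
    gW1 = U.T @ GH; gb1 = GH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

P = np.tanh(U @ W1 + b1) @ W2 + b2
mse = float(((P - Y) ** 2).mean())        # should be well below mse0
```

In the paper's setting the trained network replaces an explicit stereo camera model, which is why per-axis R² values rather than reprojection errors are the natural accuracy measure.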
19 pages, 3720 KB  
Article
From RGB to Synthetic NIR: Image-to-Image Translation for Pineapple Crop Monitoring Using Pix2PixHD
by Darío Doria Usta, Ricardo Hundelshaussen, Carlos Martínez López, Delio Salgado Chamorro, César López Martínez, João Felipe Coimbra Leite Costa and Marcel Arcari Bassani
Technologies 2025, 13(12), 569; https://doi.org/10.3390/technologies13120569 - 5 Dec 2025
Viewed by 450
Abstract
Near-infrared (NIR) imaging plays a crucial role in precision agriculture; however, the high cost of multispectral sensors limits its widespread adoption. In this study, we generate synthetic NIR images (2592 × 1944 pixels) of pineapple crops from standard RGB drone imagery using the Pix2PixHD framework. The model was trained for 580 epochs, saving the first model after epoch 1 and then every 10 epochs thereafter. While models trained beyond epoch 460 achieved marginally higher metrics, they introduced visible artifacts. Model 410 was identified as the most effective, offering consistent quantitative performance while producing artifact-free results. Evaluation of Model 410 across 229 test images showed a mean SSIM of 0.6873, PSNR of 29.92, RMSE of 8.146, and PCC of 0.6565, indicating moderate to high structural similarity and reliable spectral accuracy of the synthetic NIR data. The proposed approach demonstrates that reliable NIR information can be obtained without expensive multispectral equipment, reducing costs and enhancing accessibility for farmers. By enabling advanced tasks such as vegetation segmentation and crop health monitoring, this work highlights the potential of deep learning–based image translation to support sustainable and data-driven agricultural practices. Future directions include extending the method to other crops, environmental conditions, and real-time drone monitoring.
(This article belongs to the Special Issue AI-Driven Optimization in Robotics and Precision Agriculture)
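The quantitative evaluation above rests on standard full-reference image metrics. A minimal sketch of three of them (RMSE, PSNR, and Pearson correlation) for a real/synthetic NIR pair, assuming 8-bit intensities; SSIM requires a windowed implementation (e.g. `skimage.metrics.structural_similarity`) and is omitted here:

```python
import numpy as np

def nir_metrics(real, fake, max_val=255.0):
    """RMSE, PSNR, and Pearson correlation coefficient between a real
    NIR image and its synthetic counterpart (equal-shape arrays)."""
    real = np.asarray(real, dtype=float)
    fake = np.asarray(fake, dtype=float)
    rmse = float(np.sqrt(np.mean((real - fake) ** 2)))
    # PSNR = 10*log10(max^2 / MSE) = 20*log10(max / RMSE)
    psnr = float(20 * np.log10(max_val / rmse)) if rmse > 0 else float("inf")
    pcc = float(np.corrcoef(real.ravel(), fake.ravel())[0, 1])
    return rmse, psnr, pcc
```

Note that PCC is invariant to a global brightness shift while RMSE is not, which is why the two metrics together say more about spectral fidelity than either alone.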
23 pages, 1517 KB  
Article
Bridging Heterogeneous Agents: A Neuro-Symbolic Knowledge Transfer Approach
by Artem Isakov, Artem Zaglubotskii, Ivan Tomilov, Natalia Gusarova, Aleksandra Vatian and Alexander Boukhanovsky
Technologies 2025, 13(12), 568; https://doi.org/10.3390/technologies13120568 - 4 Dec 2025
Viewed by 522
Abstract
This paper presents a neuro-symbolic approach for constructing distributed knowledge graphs to facilitate cooperation through communication among spatially proximate agents. We develop a graph autoencoder (GAE) that learns rich representations from heterogeneous modalities. The method employs density-adaptive k-nearest neighbor (k-NN) construction with Gabriel pruning to build proximity graphs that balance local density awareness with geometric consistency. When the agents enter the bridging zone, their individual knowledge graphs are aggregated into hypergraphs using a construction algorithm, for which we derive the theoretical bounds on the minimum number of hyperedges required for connectivity under arity and locality constraints. We evaluate the approach in PettingZoo’s communication-oriented environment, observing improvements of approximately 10% in episode rewards and up to 40% in individual agent rewards compared to Deep Q-Network (DQN) baselines, while maintaining comparable policy loss values. The explicit graph structures may offer interpretability benefits for applications requiring auditability. This work explores how structured knowledge representations can support cooperation in distributed multi-agent systems with heterogeneous observations.
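Gabriel pruning removes a k-NN edge whenever a third point falls inside the circle that has that edge as its diameter, which discards geometrically inconsistent shortcuts. A brute-force sketch of the pruning step alone, assuming the density-adaptive k-NN edge list has already been built upstream:

```python
import numpy as np

def gabriel_prune(points, edges):
    """Keep edge (a, b) only if no third point c lies strictly inside
    the circle with diameter a-b, i.e. the edge survives when
    d(a,c)^2 + d(b,c)^2 >= d(a,b)^2 for every other point c."""
    P = np.asarray(points, dtype=float)
    kept = []
    for a, b in edges:
        d_ab = np.sum((P[a] - P[b]) ** 2)
        ok = True
        for c in range(len(P)):
            if c in (a, b):
                continue
            if np.sum((P[a] - P[c]) ** 2) + np.sum((P[b] - P[c]) ** 2) < d_ab:
                ok = False          # c witnesses a shorter two-hop path
                break
        if ok:
            kept.append((a, b))
    return kept
```

The quadratic inner loop is fine for the small per-agent graphs implied here; a spatial index would be the natural optimization for larger point sets.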