Search Results (3,567)

Search Parameters:
Keywords = Industrial Internet

30 pages, 24743 KB  
Article
EACCO: Optimizing the Computation and Communication in Resource-Constrained IoT Devices for Energy-Efficient Swarm Robotics
by Amir Ijaz, Hashem Haghbayan, Ethiopia Nigussie, Abdul Malik and Juha Plosila
Sensors 2026, 26(9), 2839; https://doi.org/10.3390/s26092839 - 1 May 2026
Abstract
Energy consumption is a critical concern for Internet of Things (IoT) platforms lacking abundant resources, particularly for swarm robotic systems that rely on numerous devices operating collaboratively over extended periods. This study presents a comprehensive design strategy for improving processing and communication to enhance system efficiency and reduce energy consumption. We incorporate energy harvesting (photovoltaic and RF), dynamic power management, and energy-efficient communication protocols (e.g., duty cycle, power control, data compression) into two complementary platforms built for swarm robotics: MCU-based nodes (TI MSP430 with LoRa transceiver), which serve as the experimental prototype for validating energy-aware communication, compression, and scheduling mechanisms; and edge platforms (Jetson Nano and TX2), which are used for high-level power profiling and system-level evaluation, particularly for computation-intensive workloads and comparative analysis. Our technique involves analyzing the device’s energy usage and harvesting processes, developing efficient communication protocols, and validating the system through simulations and hardware prototypes. Experimental results under outdoor and indoor conditions show that the device maintains an energy neutrality ratio well above unity, even with limited ambient energy. Key findings include significant reductions in energy per bit transmitted and reliable long-term operation. These insights pave the way for deploying swarms of autonomous IoT-based robots with minimal maintenance and maximal longevity. Full article
(This article belongs to the Section Internet of Things)
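The energy neutrality ratio reported above has a standard reading: harvested energy divided by consumed energy over an observation window, with values above unity meaning the node sustains itself. A minimal sketch of that bookkeeping and a duty-cycle adjustment driven by it; the traces, window, and adjustment rule are illustrative assumptions, not the paper's actual controller.

```python
# Minimal sketch of energy-neutral duty cycling, assuming the common
# definition ENR = harvested / consumed over a sliding window.
# The traces and the adjustment rule are illustrative only.

def energy_neutrality_ratio(harvested_mj, consumed_mj):
    """ENR > 1 means the node harvests more energy than it spends."""
    return sum(harvested_mj) / max(sum(consumed_mj), 1e-9)

def adjust_duty_cycle(duty, enr, step=0.05, lo=0.01, hi=1.0):
    """Grow the duty cycle when energy-positive, shrink it otherwise."""
    duty = duty + step if enr > 1.0 else duty - step
    return min(max(duty, lo), hi)

# Hypothetical per-slot energy figures in millijoules.
harvested = [4.2, 3.9, 5.1, 0.8, 0.2]   # e.g. photovoltaic + RF
consumed  = [2.0, 2.0, 2.1, 1.9, 2.0]   # MCU + LoRa transmissions

enr = energy_neutrality_ratio(harvested, consumed)
print(f"ENR = {enr:.2f}, new duty cycle = {adjust_duty_cycle(0.5, enr):.2f}")
```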
23 pages, 7922 KB  
Article
Hardware-Assisted Security Enhancements for an FPGA-ARM Embedded Vision System in IoT Applications
by Tomyslav Sledevič and Darius Andriukaitis
Electronics 2026, 15(9), 1887; https://doi.org/10.3390/electronics15091887 - 29 Apr 2026
Viewed by 1
Abstract
Embedded Field-Programmable Gate Array (FPGA)-Advanced RISC Machine (ARM) systems used in industrial and Internet of Things (IoT) environments increasingly operate as network-connected edge devices. While such connectivity enables distributed processing and remote monitoring, it also exposes embedded vision nodes to security threats, including command injection, frame replay, data tampering, and abnormal communication traffic. This paper presents a hardware-assisted security architecture for an FPGA-ARM embedded vision system designed for high-speed image acquisition and network streaming. The proposed solution integrates several lightweight protection mechanisms directly into the FPGA processing pipeline, including frame replay detection, cyclic redundancy check (CRC)-based frame integrity verification, frame sequence monitoring, authenticated command execution, communication anomaly monitoring, and hardware-rooted trust primitives, such as a ring-oscillator physical unclonable function (PUF) and a pseudo-random generator. Optional secure communication is provided via a lightweight ASCON-authenticated encryption core. The architecture was implemented on a Cyclone V System-on-Chip (SoC) platform using an industrial Camera Link camera and evaluated in a low-latency image-acquisition setup operating at 100 fps, with data throughput exceeding 1 Gbps. Experimental results demonstrate that the proposed security architecture introduces only about 1.6% additional FPGA logic utilization while maintaining full real-time acquisition performance. The presented approach demonstrates that practical hardware-level security mechanisms can be integrated into FPGA-based embedded vision nodes with minimal architectural modifications and negligible performance overhead. Full article
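The CRC-based integrity verification and frame-sequence monitoring above run in FPGA logic in the paper; a host-side software sketch of the same two checks is shown below, assuming a hypothetical frame layout with a 32-bit sequence prefix and a CRC-32 trailer.

```python
import struct
import zlib

def seal_frame(payload: bytes, seq: int) -> bytes:
    """Prefix a 32-bit sequence number, append a CRC-32 trailer."""
    body = struct.pack(">I", seq) + payload
    return body + struct.pack(">I", zlib.crc32(body) & 0xFFFFFFFF)

def check_frame(frame: bytes, last_seq: int):
    """Return (seq, payload) if integrity and ordering hold, else None."""
    body = frame[:-4]
    (crc,) = struct.unpack(">I", frame[-4:])
    if zlib.crc32(body) & 0xFFFFFFFF != crc:
        return None                      # tampered frame
    seq = struct.unpack(">I", body[:4])[0]
    if seq <= last_seq:
        return None                      # replayed or out-of-order frame
    return seq, body[4:]

frame = seal_frame(b"image-line-0", seq=7)
print(check_frame(frame, last_seq=6))    # accepted
print(check_frame(frame, last_seq=7))    # rejected as a replay
```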
31 pages, 2825 KB  
Article
IIoT-Based Remote Monitoring System for Temperature, Current, and Vibration Using PLC and Node-RED in a Data Center Cooling Compressor: A Condition-Based Maintenance Framework
by Jefferson Damián Pinza Apolo, Jonathan Lizandro Bravo Robles, José Luis Dumán Zhicay, Ramiro Xavier Cazares Guerrero, Wilmer Fabian Albarracin Guarochico and Paul Francisco Baldeón Egas
Sensors 2026, 26(9), 2772; https://doi.org/10.3390/s26092772 - 29 Apr 2026
Viewed by 38
Abstract
Climate control systems are critical to ensuring the continuous operation of data centers, as they maintain the environmental conditions required by sensitive electronic equipment. In this context, continuous supervision of refrigeration compressors is essential to prevent failures that may compromise thermal stability. This work presents the design, implementation, and experimental validation of a remote monitoring and condition-based maintenance framework built on Industrial Internet of Things (IIoT) technologies for air-conditioning compressors used in data centers. The proposed architecture integrates industrial-grade sensors for temperature, electric current, and vibration, a Siemens LOGO! programmable logic controller (PLC) for signal acquisition and scaling, a Node-RED middleware layer for data flow management, and the ThingSpeak cloud platform for remote storage and analysis. The novel contributions of this work are: (i) a fully integrated low-cost IIoT stack validated on a Copeland ZR144KCE-TF5 scroll compressor under real operating conditions over a continuous 49-day monitoring period; (ii) a hybrid anomaly detection model that combines Z-score statistical baselines with moving-average prediction error to reduce false positives from transient events; and (iii) a condition-based maintenance decision framework that maps the three monitored variables to ISO 10816-3 vibration severity zones and manufacturer-referenced thermal and electrical thresholds, producing recommended maintenance actions. The framework was applied to the acquired dataset, confirming predominantly stable operation (93.4% of samples in ISO 10816-3 Zones A–B) while detecting an emergent mechanical-wear trend (5.64% of samples in Zone C) concentrated in the final days of the monitoring period and demonstrating the feasibility of the proposed architecture as a scalable and replicable solution for condition monitoring and maintenance decision support in critical technological infrastructures. Full article
(This article belongs to the Section Industrial Sensors)
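Contribution (ii) pairs a Z-score baseline with a moving-average prediction error so that a sample is flagged only when both tests agree, which is what suppresses false positives from transient events. A minimal sketch of that combination; the window length and thresholds are illustrative, not the paper's values.

```python
import numpy as np

def hybrid_anomalies(x, window=20, z_thr=3.0, err_thr=2.0):
    """Flag samples where BOTH a Z-score test against a static baseline
    and a moving-average prediction-error test fire, suppressing false
    positives from short transients (thresholds are illustrative)."""
    x = np.asarray(x, dtype=float)
    z = np.abs((x - x.mean()) / (x.std() + 1e-9))
    # Moving-average one-step "prediction" and its absolute error.
    kernel = np.ones(window) / window
    pred = np.convolve(x, kernel, mode="same")
    err = np.abs(x - pred)
    return (z > z_thr) & (err > err_thr * err.std())

rng = np.random.default_rng(0)
vib = rng.normal(2.0, 0.1, 500)      # hypothetical vibration trace (mm/s)
vib[400:405] += 1.5                  # injected mechanical-wear excursion
print(np.flatnonzero(hybrid_anomalies(vib)))
```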
28 pages, 2920 KB  
Article
NIDS-Mamba: Lightweight Network Intrusion Detection for IoT Sensor Networks via State Space Models
by Zixiang Ding, Jiahao Zheng and Xianyun Wu
Sensors 2026, 26(9), 2766; https://doi.org/10.3390/s26092766 - 29 Apr 2026
Viewed by 63
Abstract
The ubiquity of resource-constrained Internet of Things (IoT) nodes creates an urgent demand for network intrusion detection systems (NIDSs) optimized for edge devices with limited computing power. In this paper, we propose NIDS-Mamba, a new NIDS based on Mamba. NIDS-Mamba uses a dynamic sparse attention mechanism and a lightweight state-space model to jointly learn from short-term anomaly and long-term attack patterns. We use the standardized NF-UNSW-NB15 and NF-CSE-CIC-IDS2018 datasets to verify the effectiveness of the NIDS-Mamba model. We find that NIDS-Mamba is very effective in dealing with extreme class imbalance. On the NF-CSE-CIC-IDS2018 dataset, the model achieves 98.32% accuracy, 96.98% F1-score, and an AUC of 0.9996. Most notably, the model remains robust under the more extreme class imbalance of the NF-UNSW-NB15 dataset, achieving 97.03% G-Mean, 0.7915 MCC, and 0.9983 AUC, far exceeding the other baseline models. Compared to Transformer-based baselines, NIDS-Mamba achieves nearly an order-of-magnitude improvement in throughput while maintaining a parameter footprint compatible with edge deployment constraints. The proposed architecture effectively mitigates the quadratic complexity and memory wall inherent in standard Transformers, ensuring compatibility with limited RAM and strict energy constraints. The proposed model achieves a compact design with 1.12 million parameters and a peak inference memory of 5.4 MB, ensuring its feasibility for edge-based IoT nodes. These properties make NIDS-Mamba a strong candidate for deployment on IoT gateways and edge sensor nodes in smart home, industrial IoT, and critical infrastructure scenarios. Full article
(This article belongs to the Section Intelligent Sensors)
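The state-space side of NIDS-Mamba can be pictured as a linear recurrence over the flow sequence. The sketch below shows only a fixed-parameter state-space scan; Mamba-style layers make the parameters input-dependent ("selective") and fuse the scan into a hardware-aware kernel, and the dimensions here are hypothetical.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Linear state-space recurrence h_t = A h_{t-1} + B x_t, y_t = C h_t.
    Mamba-style layers make A, B, C input-dependent ("selective"); this
    fixed-parameter scan only illustrates the O(L) sequential structure."""
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:                      # one step per flow-feature vector
        h = A @ h + B @ x_t
        ys.append(C @ h)
    return np.stack(ys)

rng = np.random.default_rng(1)
d_state, d_in = 16, 8
A = np.eye(d_state) * 0.9             # stable decay of the hidden state
B = rng.normal(0, 0.1, (d_state, d_in))
C = rng.normal(0, 0.1, (4, d_state))
flows = rng.normal(0, 1, (64, d_in))  # hypothetical flow-feature sequence
print(ssm_scan(flows, A, B, C).shape) # (64, 4)
```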
50 pages, 29943 KB  
Systematic Review
Hybrid Approaches of Machine Learning Algorithms in Predictive Maintenance: A Systematic Literature Review
by Jorge Paredes, Danilo Chavez, Ramiro Isa-Jara and Diego Vargas
Appl. Syst. Innov. 2026, 9(5), 90; https://doi.org/10.3390/asi9050090 - 29 Apr 2026
Viewed by 200
Abstract
The advent of Industry 4.0 has precipitated the digitization of myriad industrial processes, a feat attributable to the implementation of sophisticated digital enablers such as artificial intelligence (AI) and the Internet of Things (IoT). These technological advances have facilitated the implementation of various innovative applications, especially in the field of predictive maintenance. This approach facilitates more precise estimation of the remaining useful life (RUL) of equipment, determination of the health index (HI) of machinery, and planning of effective maintenance schedules that circumvent unexpected and costly shutdowns in industrial operations. The employment of hybrid approaches founded on machine learning algorithms in the domain of predictive maintenance signifies a perpetually evolving field of research, wherein novel techniques, methodologies, and strategies are proposed to enhance maintenance efficiency and reliability. In order to furnish a substantial and exhaustive compendium of information, a methodical literature review is hereby presented, offering a meticulous survey of the hybrid approaches utilized within this domain. The study analyzed 77 of the 914 papers found on the topic to organize the body of knowledge, and presents a lucid taxonomy, the primary algorithms employed in hybrid approaches, the most prevalent datasets, the applicable technology architectures, and the maturity level of these solutions. This study provides a robust conceptual foundation for future research, underscoring the significance of hybrid approaches as a promising field of study, with considerable potential for advancement in the realm of industrial predictive maintenance. Full article
33 pages, 1749 KB  
Article
LLM-Conductor: A Closed-Loop Resource-Adaptive Architecture for Secure LLM Deployment in Industrial Sensor Networks and IIoT Systems
by Kai Xu, Diming Zhang and Xuguo Wang
Sensors 2026, 26(9), 2733; https://doi.org/10.3390/s26092733 - 28 Apr 2026
Viewed by 567
Abstract
To address the bottlenecks of a missing decision-making closed loop, insufficient experience reuse, and decoupled resource scheduling in industrial LLM deployment, this paper proposes LLM-Conductor, a three-layer collaborative architecture that enables monitoring-feedback autonomous decision-making, structured policy memory, and joint policy-resource optimization. Through ablation studies, horizontal comparisons with ISOLATEGPT and ReAct, and graded resource-reduction experiments across six tiers, the results demonstrate that the security risk incidence rate is reduced from 70.6 percent to 1.3 percent, the multi-application collaborative task completion rate reaches 100 percent, and token utilization improves to 88.9 percent. Under constraints of at least 512 MB of memory and at least a 0.5 GHz CPU, the core task completion rate remains above 95 percent. By deeply coupling decision-making with resource scheduling, this architecture provides an integrated pathway toward efficient, secure, and reliable LLM deployment in Industrial Internet of Things scenarios. Current validation focuses on software-layer interaction patterns under simulated resource-constrained environments, with physical-layer industrial integration reserved for future work. Full article
(This article belongs to the Section Intelligent Sensors)
21 pages, 6011 KB  
Article
Informer-Based Prediction of Mold Level Anomalies in Continuous Casting via Temporal and Frequency-Domain Features
by Xin Xin, Meixia Fu, Wei Li, Hongbing Wang, Qu Wang, Yifan Lu, Zhenqian Wang, Yuntian Brian Bai, Tao Gu, Changyuan Yu and Jianquan Wang
Metals 2026, 16(5), 474; https://doi.org/10.3390/met16050474 - 27 Apr 2026
Viewed by 140
Abstract
The stability of mold level fluctuations (MLFs) is crucial for product quality and process efficiency in continuous casting. Abnormal mold level fluctuations, which are typically associated with multiple factors including stopper rod opening, casting speed, and mold width, are known to lead to slab quality defects. In this paper, an Informer-based prediction framework is proposed for the early detection of abnormal MLF. A threshold-based labeling method is developed to quantify the future likelihood and severity of anomalies across different time horizons. Considering the importance of frequency-domain features in mold level prediction, power spectral density (PSD) features are incorporated and smoothed using the exponential moving average (EMA) to enhance predictive performance. Through the integration of temporal and processed spectral features, early indicators of abnormality can be captured, and proactive warnings can be issued. The proposed architecture is validated using approximately 32.5 million data points from a real-world continuous casting process. This approach provides a robust and data-driven solution for predicting and diagnosing abnormal MLF events in continuous casting. Experimental results show that the mean ROC-AUC and PR-AUC reach 0.821 and 0.418, respectively. Full article
(This article belongs to the Section Computation and Simulation on Metals)
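The frequency-domain pipeline described above (power spectral density features smoothed with an exponential moving average) is sketched below using a Welch PSD estimate; the sampling rate, band edges, and smoothing factor are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import welch

def psd_band_power(x, fs, band=(0.05, 0.5), nperseg=256):
    """Welch PSD, integrated over a frequency band of interest."""
    f, pxx = welch(x, fs=fs, nperseg=nperseg)
    mask = (f >= band[0]) & (f <= band[1])
    return np.trapz(pxx[mask], f[mask])

def ema(values, alpha=0.1):
    """Exponential moving average: s_t = alpha*x_t + (1-alpha)*s_{t-1}."""
    out, s = [], values[0]
    for v in values:
        s = alpha * v + (1 - alpha) * s
        out.append(s)
    return np.asarray(out)

rng = np.random.default_rng(2)
fs = 2.0                              # hypothetical 2 Hz level sampling
level = rng.normal(0, 1, 4096)        # mold-level fluctuation trace
windows = level.reshape(16, 256)      # non-overlapping analysis windows
raw = [psd_band_power(w, fs) for w in windows]
print(ema(raw)[:4])                   # smoothed spectral feature stream
```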
27 pages, 3983 KB  
Article
Low-Latency DDoS Detection for IIoT and SCADA Networks Using Proximal Policy Optimisation and Deep Reinforcement Learning
by Mikiyas Alemayehu, Mohamed Chahine Ghanem, Hamza Kheddar, Dipo Dunsin, Chaker Abdelaziz Kerrache and Geetanjali Rathee
Information 2026, 17(5), 412; https://doi.org/10.3390/info17050412 - 26 Apr 2026
Viewed by 129
Abstract
Industrial Internet of Things (IIoT) and SCADA-connected networks are increasingly vulnerable to Distributed Denial of Service (DDoS) attacks, which can disrupt time-sensitive industrial processes and compromise operational continuity. Effective mitigation requires accurate and low-latency attack detection at the network edge, where industrial gateways operate under strict constraints in computation, memory, and energy. This study investigates Deep Reinforcement Learning (DRL) for real-time binary DDoS detection and proposes a detector based on Proximal Policy Optimisation (PPO) for deployment in resource-constrained IIoT environments. Four DRL agents, namely Deep Q-Network (DQN), Double DQN, Dueling DQN, and PPO, are trained and evaluated within a unified experimental pipeline incorporating automatic label mapping, numerical feature selection, robust scaling, and class balancing. Experiments are conducted on three representative benchmark datasets: CIC-DDoS2019, Edge-IIoTset, and CICIoT23. Performance is assessed using accuracy, precision, recall, F1-score, false positive rate, false negative rate, and CPU inference latency. The reward function is asymmetric: +1 for correct classification, −1 for false positive, and −2 for false negative, penalising missed attacks more heavily for IIoT safety. The results show that PPO provides a competitive accuracy–latency tradeoff across all three datasets, achieving the highest mean accuracy of 97.65% and ranking first on CIC-DDoS2019 with a score of 95.92%, while remaining competitive on Edge-IIoTset (99.11%) and CICIoT23 (97.92%). PPO also converges faster than the value-based baselines. Inference latency is below 0.8 ms per sample on a standard CPU (Intel i7-11800H), confirming real-time feasibility. To support practical deployment, the trained PPO policies are exported to ONNX format (≈9 KB per model), enabling lightweight and PyTorch-independent inference on industrial edge gateways. Full article
(This article belongs to the Special Issue Reinforcement Learning for Cyber Security: Methods and Applications)
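The asymmetric reward is fully specified in the abstract, so it can be written out directly: +1 for a correct classification, −1 for a false positive, and −2 for a false negative.

```python
def ddos_reward(predicted_attack: bool, actual_attack: bool) -> int:
    """Asymmetric reward from the paper: correct = +1, false positive = -1,
    false negative = -2, so missed attacks cost twice as much as alarms."""
    if predicted_attack == actual_attack:
        return 1
    return -2 if actual_attack else -1

# One hypothetical step per (prediction, ground truth) pair.
for pred, truth in [(True, True), (True, False), (False, True)]:
    print(pred, truth, "->", ddos_reward(pred, truth))
```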
38 pages, 6938 KB  
Article
DeepSense: An Adaptive Scalable Ensemble Framework for Industrial IoT Anomaly Detection
by Amir Firouzi and Ali A. Ghorbani
Sensors 2026, 26(9), 2662; https://doi.org/10.3390/s26092662 - 24 Apr 2026
Viewed by 613
Abstract
The Industrial Internet of Things (IIoT) has become a cornerstone of modern industrial automation, enabling real-time monitoring, intelligent decision-making, and large-scale connectivity across cyber–physical systems. However, the growing scale, heterogeneity, and dynamic behavior of IIoT environments significantly expand the attack surface and challenge the effectiveness of conventional security mechanisms. In this paper, we propose DeepSense, a hybrid and adaptive anomaly and intrusion detection framework specifically designed for resource-constrained and heterogeneous IIoT deployments. DeepSense integrates three complementary components: DataSense, a realistic data pipeline and experimental testbed supporting synchronized sensor and network data processing; RuleSense, a lightweight rule-based detection layer that provides fast, deterministic, and interpretable anomaly screening at the edge; and NeuroSense, a learning-driven detection module comprising an adaptive ensemble of 22 machine learning and deep learning models spanning classical, neural, hybrid, and Transformer-based architectures. NeuroSense operates as a second detection stage that validates suspicious events flagged by RuleSense and enables both coarse-grained and fine-grained attack classification. To support rigorous and practical assessment, this work further introduces a comprehensive performance evaluation framework that extends beyond accuracy-centric metrics by jointly considering detection quality, latency, resource efficiency, and detection coverage, alongside an optimization-based process for selecting Pareto-optimal model ensembles under realistic IIoT constraints. Extensive experiments across diverse detection scenarios demonstrate that DeepSense exhibits strong generalization, lower false positive rates, and robust performance under evolving attack behaviors. The proposed framework provides a scalable and efficient IIoT security solution that meets the operational requirements of Industry 4.0 and the resilience-oriented objectives of Industry 5.0. Full article
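The RuleSense-to-NeuroSense hand-off follows a common two-stage pattern: a cheap deterministic screen at the edge, with the learned ensemble re-scoring only the flagged events. A minimal sketch of the pattern; the rules and the placeholder classifier below are hypothetical stand-ins for the paper's rule base and 22-model ensemble.

```python
import numpy as np

def rule_screen(samples):
    """Stage 1 (RuleSense-style): cheap, deterministic edge checks.
    Thresholds here are hypothetical stand-ins for the actual rules."""
    rate, size = samples[:, 0], samples[:, 1]
    return (rate > 1000) | (size > 9000)       # pkts/s, bytes

def validate(flagged, model):
    """Stage 2 (NeuroSense-style): a learned model re-scores only the
    suspicious events, keeping the expensive path off the hot loop."""
    return model.predict(flagged)

class DummyModel:                              # placeholder for the ensemble
    def predict(self, x):
        return (x[:, 0] > 1500).astype(int)

rng = np.random.default_rng(3)
traffic = np.column_stack([rng.gamma(2, 300, 1000),
                           rng.normal(500, 80, 1000)])
suspects = rule_screen(traffic)
alerts = validate(traffic[suspects], DummyModel())
print(suspects.sum(), "screened,", alerts.sum(), "confirmed")
```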
40 pages, 1948 KB  
Systematic Review
Edge–Cloud Collaboration for Machine Condition Monitoring: A Comprehensive Review of Mechanisms, Models, and Applications
by Liyuan Yu, Jitao Fang, Qiuyan Wang, Fajia Li and Haining Liu
Machines 2026, 14(5), 476; https://doi.org/10.3390/machines14050476 - 24 Apr 2026
Viewed by 149
Abstract
Machine condition monitoring increasingly depends on distributed sensing, edge intelligence, and cloud analytics, yet timely and trustworthy health assessment remains constrained by latency, bandwidth, privacy, and reliability requirements. Cloud-only architectures provide scalable computation and historical data integration but often fail to satisfy real-time industrial needs, whereas edge-only deployments are limited by restricted computing resources and fragmented local knowledge. Edge–cloud collaboration has, therefore, emerged as a practical architecture for distributing perception, inference, learning, and coordination across hierarchical industrial systems. This review examines 147 publications on edge–cloud collaboration for machine condition monitoring published between 2019 and February 2026. A four-dimensional taxonomy is developed to organize the literature into model-centric, data-centric, resource and task-centric, and architecture and trust-centric mechanisms, while 13 survey and review papers are considered separately for contextual comparison. On this basis, the review analyzes representative collaboration mechanisms and enabling technologies, with particular attention to federated learning, transfer learning, knowledge distillation, digital twins, and deep reinforcement learning, and surveys their deployment in manufacturing, energy, transportation, and infrastructure monitoring scenarios. The literature remains dominated by model-centric collaboration, while architecture and trust-centric studies increasingly provide the system foundations required for practical deployment. The review further identifies major open challenges, including robust generalization under changing operating conditions, efficient data transmission, real-time resource coordination, interoperability, and trustworthy large-scale deployment, and outlines future directions in foundation-model-based edge–cloud collaboration, continual learning, dual digital twins, trustworthy collaboration, and privacy-preserving industrial ecosystems. Full article
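Of the collaboration mechanisms this review covers, federated learning is the simplest to show in miniature: edge clients train locally and the cloud aggregates a data-weighted average of their parameters (FedAvg), so raw condition data never leaves the machine. A sketch with hypothetical weight vectors:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average client parameter vectors weighted by
    local dataset size; only model parameters travel to the cloud."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Three hypothetical edge nodes, each with a locally trained weight vector.
w = [np.array([0.9, -0.2]), np.array([1.1, -0.1]), np.array([1.0, 0.0])]
n = [1200, 300, 500]           # local sample counts per machine
print(fedavg(w, n))            # global model pushed back to the edge
```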
15 pages, 1316 KB  
Article
Study of Graphene-Based Strain Sensing Output Signals Under External Electromagnetic Interference Conditions
by Furong Kang, Shuqi Han, Kaixi Bi, Jian He and Xiujian Chou
Nanomaterials 2026, 16(9), 509; https://doi.org/10.3390/nano16090509 - 23 Apr 2026
Viewed by 495
Abstract
Graphene possesses exceptional mechanical strength, high electrical conductivity, and a stable lattice structure, making it an ideal material for sensors in advanced manufacturing. However, these sensors face stability challenges due to complex electromagnetic interference (EMI) environments generated by electrical equipment. Therefore, investigating the influence of EMI on sensor performance is of significant importance. In this study, simulations were performed to analyze electrical parameter perturbations of intrinsic graphene films under EMI conditions. The Magnetic Fields, Solid Mechanics, and Electrostatics modules in COMSOL Multiphysics were employed to construct a coupled model of a three-phase power transformer and a graphene-based pressure sensor. The results indicate that EMI can induce baseline drift on the order of ~5% full scale (FS) in the graphene current density, accompanied by degradation in signal-to-noise ratio (SNR) exceeding ~15 dB under typical simulation conditions. Graphene in direct contact with metal electrodes shows enhanced sensitivity to EMI, with more pronounced noise amplification due to interfacial coupling effects. In contrast, cavity-suspended graphene configurations exhibit relatively improved robustness, suggesting that suspended membrane architectures can mitigate EMI by reducing parasitic coupling and enhancing mechanical isolation. Compared with previous studies, this work highlights the role of multiphysics coupling and membrane suspension in influencing EMI-induced perturbations, providing theoretical guidance for the design of graphene-based sensors in power system and industrial Internet of Things (IoT) applications. Full article
(This article belongs to the Section Nanoelectronics, Nanosensors and Devices)
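The two reported effects, baseline drift as a percentage of full scale (FS) and SNR degradation in dB, follow from standard definitions, sketched below on synthetic stand-ins for the simulated current-density traces.

```python
import numpy as np

def baseline_drift_pct_fs(signal, reference, full_scale):
    """Baseline drift expressed as a percentage of full scale (FS)."""
    return 100.0 * (signal.mean() - reference.mean()) / full_scale

def snr_db(signal, noise):
    """SNR in decibels: 10*log10(P_signal / P_noise)."""
    return 10.0 * np.log10(np.mean(signal**2) / np.mean(noise**2))

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 2000)
clean = np.sin(2 * np.pi * 5 * t)               # sensor output, a.u.
emi = clean + 0.05 + rng.normal(0, 0.2, t.size) # drifted + noisy copy
noise = emi - clean
print(f"drift = {baseline_drift_pct_fs(emi, clean, full_scale=1.0):.1f}% FS")
print(f"SNR under EMI = {snr_db(clean, noise):.1f} dB")
```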
18 pages, 604 KB  
Review
A Narrative Review on Internet of Things and Artificial Intelligence for Poultry Production
by Anjan Dhungana, Bidur Paneru, Samin Dahal and Lilong Chai
Animals 2026, 16(9), 1285; https://doi.org/10.3390/ani16091285 - 22 Apr 2026
Viewed by 410
Abstract
Recently, poultry production has increased worldwide to address the growing demand for affordable animal-sourced protein. To meet this requirement, poultry production operations have become more concentrated, introducing management challenges related to disease control, productivity, and animal welfare. Manual flock monitoring and management have become impractical in such cases, creating a need for automatic data-driven management approaches. In this context, the Internet of Things (IoT) has emerged as a potential technological solution for continuous flock monitoring, data sharing, and decision-making. Despite this, its adoption in poultry production is limited compared with its widespread use in the crop production, transportation, and manufacturing sectors. Furthermore, advanced analytical techniques such as artificial intelligence (AI), applied to data gathered by IoT-enabled devices, have shown promising results by generating actionable information. The existing literature suggests that the integration of IoT and AI can address the major challenges associated with modern large-scale poultry production systems. While most applications remain at the research scale, such technologies have the potential to improve flock monitoring, enhance productivity, and ensure proper animal welfare. This narrative review examines the current state of IoT- and AI-based technologies, together or in part, and identifies the limitations, research gaps, and opportunities for future development. Full article
(This article belongs to the Section Poultry)
20 pages, 2659 KB  
Article
A Security-Aware Ambient Intelligence Framework for Detecting Violent Language in Airline Customer Reviews
by Fahad Alanazi and Osama Rabie
Future Internet 2026, 18(5), 224; https://doi.org/10.3390/fi18050224 - 22 Apr 2026
Viewed by 241
Abstract
The aviation industry operates in a security-sensitive environment where customer feedback may contain not only expressions of satisfaction or dissatisfaction but also threatening or violent language with potential security implications. While conventional sentiment analysis effectively captures customer opinions, it remains insufficient for identifying security-relevant linguistic cues that could signal risks requiring proactive intervention. This study addresses this gap by introducing a security-aware ambient intelligence framework for detecting violent language in airline customer reviews. This framework supports intelligent internet-based monitoring systems and real-time threat detection. We present the first annotated dataset of airline reviews specifically labeled for violent and threatening content, derived from 3629 reviews and balanced through manual resampling to achieve equal representation across positive, neutral, negative, and violent classes. The proposed framework employs VADER-based sentiment analysis for initial polarity estimation, combined with a validated annotation process to identify violent or threat-related content, followed by comprehensive feature engineering combining TF-IDF (2000 features) with text statistics and sentiment scores. We systematically evaluate individual classifiers (Random Forest, Decision Tree, SVM, Naive Bayes) against ensemble methods (Voting, Stacking, Boosting) using accuracy, precision, recall, F1-score, and ROC AUC metrics. Results demonstrate that Stacking achieves the highest raw performance (98.57% accuracy, F1-macro 0.9856), while Naive Bayes offers an optimal balance between effectiveness and computational efficiency (81.79% accuracy, F1-macro 0.8172, training time 0.03 s). This is the first dataset and framework designed for security-aware analysis of airline reviews. The selected Naive Bayes model achieves per-class F1-scores of 0.9978 for neutral, 0.7814 for negative, 0.7482 for violent, and 0.7415 for positive reviews, with a macro-average ROC AUC of 0.7123. The framework is deployed with serialized components enabling real-time prediction, supporting both single-review analysis and batch processing for integration into airline security monitoring systems. This work establishes a foundation for security-aware natural language processing in critical infrastructure contexts, bridging the gap between conventional sentiment analysis and proactive threat detection. Full article
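The feature and model pipeline maps almost directly onto scikit-learn. A condensed sketch follows; the 2000-feature TF-IDF cap comes from the paper, while the toy reviews, the reduced set of base learners, and the hyperparameters are illustrative (VADER scores and text statistics are omitted for brevity).

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the annotated airline reviews (four classes).
texts = ["great flight and friendly crew", "flight was fine",
         "terrible delay and rude staff", "i will hurt the crew"] * 25
labels = ["positive", "neutral", "negative", "violent"] * 25

# TF-IDF capped at 2000 features, as in the paper; the base learners and
# final estimator below are a reduced version of the evaluated ensemble.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("nb", MultinomialNB())],
    final_estimator=LogisticRegression(max_iter=1000),
)
model = make_pipeline(TfidfVectorizer(max_features=2000), stack)
model.fit(texts, labels)
print(model.predict(["the crew threatened passengers"]))
```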
21 pages, 2215 KB  
Article
Optimal Consensus Tracking Control for Nonlinear Multi-Agent Systems via Actor–Critic Reinforcement Learning
by Yi Mo, Xinsuo Li, Kunyu Xiang and Dengguo Xu
Symmetry 2026, 18(4), 691; https://doi.org/10.3390/sym18040691 - 21 Apr 2026
Viewed by 251
Abstract
This paper presents an adaptive optimal consensus tracking control scheme for canonical nonlinear multi-agent systems (MASs) with unknown dynamics, employing an actor–critic reinforcement learning (RL) framework. The scheme integrates a sliding mode mechanism to suppress tracking errors and ensure consensus tracking between the followers and the leader. Additionally, optimal control is designed to find a Nash equilibrium in a graphical game. To address the intractability of obtaining an analytical solution for the coupled Hamilton–Jacobi–Bellman (HJB) equation, a policy iteration algorithm is utilized. Within this algorithm, a critic neural network (NN) approximates the gradient of the optimal value function, while an actor NN approximates the optimal control policy. Together, these networks form a compact actor–critic (AC) architecture that achieves optimal consensus tracking. Furthermore, the proposed method guarantees the boundedness of all closed-loop signals while ensuring consensus tracking. Finally, two simulations are conducted to verify the effectiveness and advantages of the proposed method. Full article
(This article belongs to the Special Issue Symmetry in Control Systems: Theory, Design, and Application)
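For reference, in graphical-game formulations of this kind each agent minimizes a cost over its local consensus error, and the critic approximates the value gradient of a coupled HJB equation. A schematic form, using notation standard in this literature (the paper's exact cost terms may differ):

```latex
% Local consensus error of agent i over its neighbor set N_i
% (a_{ij}: adjacency weights, b_i: pinning gain to the leader x_0):
%   \delta_i = \sum_{j \in N_i} a_{ij} (x_i - x_j) + b_i (x_i - x_0)
% Coupled HJB condition whose value gradient the critic NN approximates
% and whose minimizing policy the actor NN approximates:
0 = \min_{u_i} \Big[ \delta_i^{\top} Q_i \, \delta_i
      + u_i^{\top} R_{ii} \, u_i
      + \sum_{j \in N_i} u_j^{\top} R_{ij} \, u_j
      + \big( \nabla V_i^{*} \big)^{\top} \dot{\delta}_i \Big]
```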
28 pages, 3851 KB  
Article
Joint Service Chain Orchestration and Computation Offloading via GNN-Based QMIX in Industrial IoT
by Xinzhi Huang and Bingxin Tian
Sensors 2026, 26(8), 2559; https://doi.org/10.3390/s26082559 - 21 Apr 2026
Viewed by 189
Abstract
In IIoT edge computing, multi-edge-server collaborative scheduling faces two core issues arising from random task arrivals, heterogeneous resources, and complex topology: traditional model-driven methods cannot adapt their decisions to dynamic environments, and conventional multi-agent reinforcement learning (MARL) fails to characterize inter-node topological dependencies and load correlations. To address these issues, this paper investigates the joint optimization of task offloading, computing resource allocation, and service function chain (SFC) orchestration in IIoT, constructs a cloud-edge-end collaborative architecture, and models the problem as a partially observable Markov decision process (POMDP) to minimize the overall system cost under multiple constraints. A graph-guided value-decomposition MARL method is proposed, which extracts spatial topology and neighborhood-load features of edge nodes via a graph neural network (GNN) and combines them with the QMIX framework to realize multi-agent centralized training and distributed execution. Simulations show that the algorithm converges stably under different server scales and task loads, significantly outperforms benchmark algorithms, and suppresses performance degradation in high-load scenarios, demonstrating its robustness and scalability in complex industrial environments. Full article
(This article belongs to the Special Issue Artificial Intelligence and Edge Computing in IoT-Based Applications)
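The QMIX half of the method rests on one structural constraint worth showing: per-agent Q-values are mixed by a network whose weights are generated from the global state and forced non-negative, so the joint value is monotone in every agent's Q and greedy per-agent actions maximize it. A numpy sketch with hypothetical stand-ins for the hypernetworks:

```python
import numpy as np

def qmix_mix(agent_qs, state, hyper_w1, hyper_w2):
    """Monotonic QMIX mixing: hypernetworks map the global state to mixing
    weights, and taking |.| keeps dQ_tot/dQ_i >= 0, so the argmax of Q_tot
    matches per-agent greedy actions (the CTDE decomposition condition).
    The original QMIX uses ELU and biases; ReLU here for brevity."""
    w1 = np.abs(hyper_w1(state))          # (n_agents, hidden)
    w2 = np.abs(hyper_w2(state))          # (hidden,)
    hidden = np.maximum(agent_qs @ w1, 0)
    return hidden @ w2

rng = np.random.default_rng(5)
n_agents, hidden, state_dim = 4, 8, 16
W1 = rng.normal(0, 1, (state_dim, n_agents * hidden))
W2 = rng.normal(0, 1, (state_dim, hidden))
hyper_w1 = lambda s: (s @ W1).reshape(n_agents, hidden)  # stand-in hypernets
hyper_w2 = lambda s: s @ W2

state = rng.normal(0, 1, state_dim)        # global edge-topology state
agent_qs = np.array([1.2, 0.4, -0.3, 0.9]) # per-edge-server Q-values
print(qmix_mix(agent_qs, state, hyper_w1, hyper_w2))
```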