Search Results (577)

Search Parameters:
Keywords = smart learning service

25 pages, 1501 KB  
Article
MA-JTATO: Multi-Agent Joint Task Association and Trajectory Optimization in UAV-Assisted Edge Computing System
by Yunxi Zhang and Zhigang Wen
Drones 2026, 10(4), 267; https://doi.org/10.3390/drones10040267 - 7 Apr 2026
Abstract
With the rapid development of applications such as smart cities and the industrial internet, the computation-intensive tasks generated by massive sensing devices pose significant challenges to traditional cloud computing paradigms. Unmanned aerial vehicle (UAV)-assisted edge computing systems, leveraging their high mobility and wide-area coverage capabilities, offer an innovative architecture for low-latency and highly reliable edge services. However, the practical deployment of such systems faces a highly complex multi-objective optimization problem characterized by the tight coupling of task offloading decisions, UAV trajectory planning, and edge server resource allocation. Conventional optimization methods struggle to adapt to the dynamic and high-dimensional characteristics of this problem, leading to suboptimal system performance. To address this critical challenge, this paper constructs an intelligent collaborative optimization framework for UAV-assisted edge computing systems and formulates the system quality of service (QoS) optimization problem as a mixed-integer non-convex programming problem with the dual objectives of minimizing task processing latency and reducing overall system energy consumption. A multi-agent joint task association and trajectory optimization (MA-JTATO) algorithm based on hybrid reinforcement learning is proposed to solve this intractable problem; it innovatively decouples the original coupled optimization problem into three interrelated subproblems and solves them collaboratively and efficiently.
Specifically, the Advantage Actor-Critic (A2C) algorithm is adopted to realize dynamic and optimal task association between UAVs and edge servers for discrete decision-making requirements; the multi-agent deep deterministic policy gradient (MADDPG) method is employed to achieve cooperative and energy-efficient trajectory planning for multiple UAVs to meet the needs of continuous control in dynamic environments; and convex optimization theory is applied to obtain a closed-form optimal solution for the efficient allocation of computational resources on edge servers. Simulation results demonstrate that the proposed MA-JTATO algorithm significantly outperforms traditional baseline algorithms in enhancing overall QoS, effectively validating the framework’s superior performance and robustness in dynamic and complex scenarios. Full article
(This article belongs to the Section Drone Communications)
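The abstract states that convex optimization theory yields a closed-form optimal allocation of edge-server computational resources. The paper's exact formulation is not reproduced here, so this is a minimal sketch assuming a standard latency-minimizing model: split a CPU-frequency budget F across tasks with cycle demands c_i to minimize total latency Σ c_i/f_i, whose KKT solution is f_i ∝ √c_i. Function names are illustrative, not from the paper.

```python
import math

def allocate_cpu(cycles, total_f):
    """Closed-form CPU split minimizing sum(c_i / f_i) subject to
    sum(f_i) = total_f; KKT conditions give f_i proportional to sqrt(c_i)."""
    denom = sum(math.sqrt(c) for c in cycles)
    return [total_f * math.sqrt(c) / denom for c in cycles]

def total_latency(cycles, freqs):
    # total processing latency of all tasks under a given allocation
    return sum(c / f for c, f in zip(cycles, freqs))
```

For two tasks of 1 and 4 Gcycles on a 10 GHz budget, the closed form assigns frequencies in a 1:2 ratio and beats an equal split (0.9 s versus 1.0 s of total latency).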

22 pages, 2316 KB  
Article
Operational Management of Multi-Vendor Wi-Fi Networks in Smart Campus Environments
by Weerapatr Ta-Armart and Charuay Savithi
Technologies 2026, 14(4), 204; https://doi.org/10.3390/technologies14040204 - 30 Mar 2026
Viewed by 252
Abstract
Digital transformation in higher education increasingly hinges on the robustness and governability of Information and Communication Technology (ICT) infrastructures, with campus Wi-Fi networks serving as the operational backbone of digital learning, research collaboration, and administrative services. In large universities, these networks typically evolve into heterogeneous, multi-vendor environments, introducing ongoing challenges in monitoring coherence, configuration governance, and cross-platform performance diagnosis. Despite the centrality of these issues, smart campus scholarship has paid limited attention to day-to-day operational management. This study examines the design and operational performance of a dual-platform Wi-Fi network management architecture implemented at Mahasarakham University, Thailand. The architecture strategically integrates SolarWinds and LibreNMS to combine centralized network-wide visibility with fine-grained, device-level diagnostics across a multi-vendor infrastructure. An engineering-oriented mixed-method approach was employed, drawing on production monitoring logs and semi-structured interviews with campus network engineers. Findings indicate that SolarWinds strengthens configuration oversight and campus-level situational awareness, whereas LibreNMS enhances detailed performance analytics and accelerates fault isolation. Their coordinated deployment improves operational stability, diagnostic clarity, and long-term maintainability of campus Wi-Fi systems. The study provides practical architectural guidance for managing heterogeneous ICT infrastructures in smart campus and enterprise-scale environments. Full article
(This article belongs to the Section Information and Communication Technologies)
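The dual-platform design pairs a central, campus-wide monitor with a device-level one. A minimal sketch of the reconciliation step such an architecture needs: merging per-device status from two monitoring sources and flagging disagreements for manual fault isolation. The dict shapes are hypothetical; no real SolarWinds or LibreNMS API is used.

```python
def merge_status(central, detailed):
    """Merge device status from two monitors, each a hypothetical
    mapping {device_id: "up" | "down"}. Agreement is kept as-is,
    one-sided data is accepted, and contradictions are surfaced as
    "conflict" for an engineer to diagnose rather than overwritten."""
    merged = {}
    for dev in set(central) | set(detailed):
        a, b = central.get(dev), detailed.get(dev)
        if a == b:
            merged[dev] = a
        elif a is None or b is None:
            merged[dev] = a or b       # only one platform saw the device
        else:
            merged[dev] = "conflict"   # candidate for fault isolation
    return merged
```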

17 pages, 2985 KB  
Article
EDIN: An Enhanced Deep Inertial Navigation Method for Pedestrian Localization
by Jin Wu, Gong Cheng and Jianga Shang
Electronics 2026, 15(6), 1306; https://doi.org/10.3390/electronics15061306 - 20 Mar 2026
Viewed by 264
Abstract
Indoor pedestrian navigation tasks, as a key part of smart cities and navigation services, face dual challenges of accuracy and cost in complex building environments. Neural inertial navigation is at the vanguard of research in indoor pedestrian navigation, and existing studies have achieved positive results. However, the exploration of deep learning solutions is still insufficient, mainly reflected in the lack of exploration of model training configurations. Based on testing results under different deep learning schemes, this paper proposes EDIN, an enhanced deep inertial navigation approach. This method benefits from a proprietary neural network based on ResNeXt with a Convolutional Block Attention Module (CBAM) to predict the relationship between inertial data and motion trajectory. Compared to existing projects, this paper also improves the model training process, thereby improving the predictive effect of the trained model. Specifically, this paper innovatively uses Logcosh as the loss function and combines data rotation and additive noise as data augmentation methods. To assess EDIN's performance, extensive tests were conducted using three publicly available datasets: RoNIN, OXIOD, and RIDI. The results clearly indicate EDIN's superior performance relative to other neural inertial navigation systems. Notably, localization accuracy improved significantly, with an average enhancement of 16.06% compared to the RoNIN-ResNet method. Full article
(This article belongs to the Section Computer Science & Engineering)
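The two training choices the abstract names are concrete enough to sketch: the log-cosh loss (quadratic near zero, linear for large errors) and horizontal-plane rotation as an inertial-data augmentation. This is a generic illustration of both techniques, not the paper's implementation; the stable identity log(cosh(x)) = |x| + log1p(exp(-2|x|)) - log 2 avoids overflow.

```python
import math

def logcosh_loss(pred, target):
    """Mean log-cosh loss over paired sequences, in a numerically
    stable form that never evaluates cosh of a large argument."""
    total = 0.0
    for p, t in zip(pred, target):
        x = abs(p - t)
        total += x + math.log1p(math.exp(-2.0 * x)) - math.log(2.0)
    return total / len(pred)

def rotate_xy(ax, ay, theta):
    """Yaw-rotate one horizontal accelerometer sample by theta radians;
    applied to whole windows, this augments IMU training data."""
    c, s = math.cos(theta), math.sin(theta)
    return c * ax - s * ay, s * ax + c * ay
```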

23 pages, 6306 KB  
Article
Trustless Federated Reinforcement Learning for VPP Dispatch
by Xin Zhang and Fan Liang
Electronics 2026, 15(6), 1303; https://doi.org/10.3390/electronics15061303 - 20 Mar 2026
Viewed by 238
Abstract
Large-scale Virtual Power Plants (VPPs) are increasingly essential as Distributed Energy Resources (DERs) assume ancillary service duties once supplied by conventional generation, yet scaling a VPP exposes a persistent trilemma among economic efficiency, data privacy, and operational security. Centralized coordination can approach optimal revenue but requires collecting fine-grained DER operational data and creates a single point of compromise. Federated Learning (FL) mitigates raw data centralization by keeping measurements and experience local, but it introduces a fragile trust assumption that the aggregator will correctly and fairly combine model updates. This trust gap is acute in reinforcement learning-based VPP control because aggregation deviations, including selectively dropping updates, manipulating weights, replaying stale models, or injecting a replacement model, can silently bias the learned policy and degrade both profit and compliance. We propose a zero-knowledge federated reinforcement learning framework for trustless VPP coordination in which each DER trains a local deep reinforcement learning agent to solve a multi-objective dispatch problem that balances ancillary service revenue against battery degradation under operational and grid constraints, while the global aggregation step is made externally verifiable. In each round, participants bind membership via signed receipts and commit to their updates, and the aggregator produces a zk-SNARK, proving that the published global parameters equal the agreed aggregation rule applied to the receipt-bound set of committed updates under a fixed-point encoding with range constraints. Verification is lightweight and can be performed independently by each DER, removing the need to trust the aggregator for aggregation integrity without centralizing raw DER operational data or trajectories. The proposed design does not aim to hide model updates from the aggregator. 
Instead, it provides external verifiability of the aggregation computation while keeping raw measurements and local experience local. We formalize the threat model and verifiable security properties for aggregation correctness and update inclusion, present a circuit construction with proof complexity characterized by model dimension and fleet size, and evaluate the approach in power and cyber co-simulation on the IEEE 33-bus feeder with ancillary service signals. Results show near-centralized economic performance under benign conditions and improved robustness to aggregator-side deviations compared to standard federated reinforcement learning. Full article
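The commit-then-verify flow can be illustrated with a deliberately simplified stand-in: participants fix-point-encode their updates with a range check, publish hash commitments, and re-check that the published model equals the agreed aggregation rule applied to the committed inputs. This is NOT a zk-SNARK (verifiers here see the updates and re-run the rule); in the real scheme that re-computation is replaced by a succinct proof. All names and constants are illustrative.

```python
import hashlib

SCALE = 10**6   # fixed-point scale for encoding float weights
BOUND = 10**9   # range constraint on encoded values

def encode(update):
    """Fixed-point encode one model update, enforcing the range bound."""
    enc = [round(w * SCALE) for w in update]
    assert all(-BOUND <= v <= BOUND for v in enc), "range check failed"
    return enc

def commit(enc):
    # binding (but not hiding) hash commitment to an encoded update
    return hashlib.sha256(repr(enc).encode()).hexdigest()

def aggregate(encoded_updates):
    """Agreed aggregation rule: element-wise integer-averaged mean."""
    n = len(encoded_updates)
    return [sum(col) // n for col in zip(*encoded_updates)]

def verify(commitments, encoded_updates, published):
    """A participant checks commitments bind the inputs and the
    published global model matches the rule applied to them."""
    ok = all(commit(e) == c for e, c in zip(encoded_updates, commitments))
    return ok and published == aggregate(encoded_updates)
```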

12 pages, 1019 KB  
Proceeding Paper
Intelligent Drone Patrolling with Real-Time Object Detection and GPS-Based Path Adaptation
by Gurugubelli V. S. Narayana, Shiba Prasad Swain, Debabrata Pattnayak, Manas Ranjan Pradhan and P. Ankit Krishna
Eng. Proc. 2026, 124(1), 82; https://doi.org/10.3390/engproc2026124082 - 18 Mar 2026
Viewed by 382
Abstract
Background: The need for autonomous aerial surveillance originates from weaknesses in manual monitoring, such as late response, low scalability, and rigid patrol plans. AI- and GPS-driven smart aerial monitoring presents an attractive solution for continuous, adaptive, wide-area surveillance. Objective: In this paper, we aim to design and experimentally validate a low-cost, drone-based autonomous patrolling system with waypoint navigation, real-time video backhauling, AI-based human/object detection, and GPS path re-planning when an event occurs, ensuring the safety of patrol missions under battery constraints. Methods: The proposed architecture combines autonomous navigation and embedded flight control with online analog video streaming and ground-station-based computer vision processing. Deep learning-based object detection is applied to live aerial video, and the system's performance is tested at different altitudes, lighting states, and GPS patrol plans. Results: Experimental results show that the proposed method achieves stable waypoint tracking with a clear real-time video downlink during patrol missions. The system adaptively modifies paths in reaction to detected events and commences safe return-to-home functionality during low-battery conditions. The detection model obtains a mean average precision of 87.4%, an F1-score of 0.89, and real-time inference latency (20–25 ms per frame) that enables uninterrupted service during surveillance deployment. Conclusions: These results indicate that low-cost autonomous drone patrolling with AI-based detection and GPS path adaptation is practical for real-world surveillance missions. Full article
(This article belongs to the Proceedings of The 6th International Electronic Conference on Applied Sciences)
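The two safety behaviours the abstract describes, event-driven path re-planning and low-battery return-to-home, can be sketched as pure functions over a waypoint list. This is a schematic illustration under stated assumptions (waypoints as coordinate pairs, a fixed battery reserve), not the system's flight-control logic.

```python
def replan(waypoints, event, position):
    """On a detection, divert from the current position to the event
    location, then resume the remaining patrol plan. Waypoints are
    (lat, lon) tuples; a real controller would also check battery
    margin before accepting the diversion."""
    return [position, event] + list(waypoints)

def needs_rth(battery_pct, reserve_pct=20):
    """Trigger safe return-to-home once battery falls to the reserve.
    The 20% reserve is an illustrative default, not from the paper."""
    return battery_pct <= reserve_pct
```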

20 pages, 1122 KB  
Article
A Robust Fingerprint-Based Machine Learning Model for Indoor Navigation in Real Time
by Md. Selim Al Mamun and Fatema Akhter
Signals 2026, 7(2), 26; https://doi.org/10.3390/signals7020026 - 16 Mar 2026
Viewed by 369
Abstract
Accurate positioning in indoor environments has become crucial for many location-based services, mainly where global positioning systems (GPSs) are unavailable or fail to navigate correctly. Conventional fingerprint-based approaches face challenges with instability, low accuracy, and sensitivity to changes in the environment. This study proposes a robust fingerprint-based machine learning (ML) model for real-time indoor navigation in dynamic environments. The proposed model uses link quality indicator (LQI) values from IEEE 802.15.4 as fingerprints together with supervised learning algorithms, showing high accuracy and a strong ability to adapt to environmental changes. A room within a building floor is regarded as the unit of location identification instead of the user's exact coordinates, making the model more relevant under practical conditions. The model was trained and tested using a real LQI dataset collected from varied indoor conditions to ensure the system can adapt effectively and operate consistently across dynamic environments and signal conditions. The results show that the proposed model surpasses conventional fingerprint-based indoor navigation in room detection accuracy and adaptability to environmental changes. An implemented prototype demonstrated the real-time capability of the proposal in smart buildings, hospitals, and industrial IoT settings. Full article
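Room-level fingerprinting reduces to classifying a vector of LQI readings, one per fixed anchor node. A minimal nearest-neighbour sketch of that idea follows; the paper evaluates several supervised learners, so this 1-NN classifier and its data shapes are an assumption for illustration only.

```python
def predict_room(fingerprints, lqi):
    """Predict the room for one LQI reading vector by 1-nearest
    neighbour against reference fingerprints. `fingerprints` maps
    room label -> list of reference LQI vectors (same anchor order)."""
    def dist_sq(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    best_room, best_d = None, float("inf")
    for room, refs in fingerprints.items():
        for ref in refs:
            d = dist_sq(ref, lqi)
            if d < best_d:
                best_room, best_d = room, d
    return best_room
```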

22 pages, 10587 KB  
Article
Accelerating Optimal Building Control Through Reinforcement Learning with Surrogate Building Models
by Andres Sebastian Cespedes Cubides, Christian Friborg Laursen and Muhyiddine Jradi
Appl. Sci. 2026, 16(6), 2790; https://doi.org/10.3390/app16062790 - 13 Mar 2026
Viewed by 425
Abstract
Buildings account for a substantial share of global energy use, yet the adoption of advanced optimal control strategies remains limited due to high computational costs and the difficulty of safe deployment. This paper presents a fully Python-based, data-driven deep reinforcement learning (DRL) supervisory control framework that leverages gray box surrogate modeling and Imitation Learning to overcome these barriers. The novelty of this work lies in the integration of an ontology-based Twin4Build surrogate model with Imitation Learning and Deep Reinforcement Learning, enabling efficient training of building control policies in a low-cost environment before transfer to a high-fidelity BOPTEST emulator. Results demonstrate that the trade-off of using a lower-accuracy surrogate accelerates training by a factor of 11 compared to high-fidelity models. Furthermore, the RL agent successfully learned load-shifting and peak-shaving strategies, eliminating start-up power spikes and achieving energy savings of up to 28.9%. Beyond substantial energy reductions, this pipeline yields a calibrated digital twin suitable for ongoing building services like anomaly detection, presenting a scalable path for real-world smart building optimization. Full article
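The load-shifting and peak-shaving behaviour the agent learned is typically induced through reward shaping. The paper's reward is not given here, so this is a hypothetical sketch of one common form: penalize energy cost, demand above a peak limit, and comfort deviation, with illustrative weights.

```python
def reward(power_kw, price, temp_c, setpoint_c, peak_kw,
           w_energy=1.0, w_peak=0.5, w_comfort=2.0):
    """Shaped reward for a building-control RL agent (illustrative,
    not the paper's formula): negative weighted sum of energy cost,
    excess over the demand peak, and thermal-comfort deviation."""
    energy_cost = power_kw * price
    peak_excess = max(0.0, power_kw - peak_kw)   # peak-shaving term
    discomfort = abs(temp_c - setpoint_c)        # comfort term
    return -(w_energy * energy_cost + w_peak * peak_excess
             + w_comfort * discomfort)
```

Under this shaping, an action that holds the setpoint at modest power scores strictly better than one that spikes demand and drifts from comfort, which is exactly the gradient a load-shifting policy needs.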

17 pages, 1817 KB  
Review
Research Advances in Decision-Making Technologies for Precision Pesticide Application in Crops
by Xiaofu Feng, Tongye Shi, Huimin Wu, Mengran Yang, Mengyao Luo, Jiali Li and Changling Wang
Agronomy 2026, 16(6), 605; https://doi.org/10.3390/agronomy16060605 - 12 Mar 2026
Viewed by 335
Abstract
Global agricultural production is severely threatened by the intensification of crop diseases and pests. Traditional pesticide application methods, characterized by inefficiency and frequent phytotoxicity, necessitate the urgent development of smart plant protection technologies that feature precision, dosage reduction, and high efficiency. This study focuses on the core component of intelligent decision-making, systematically delineating the technological trajectory of the field through a three-tier analytical framework: “model evolution–system integration–application form.” Analysis reveals that decision-making models have transitioned from rule-driven and data-driven approaches to fusion-driven paradigms. This evolution marks a shift from the codification of empirical experience to data learning, culminating in the synergistic integration of multi-source information and domain knowledge. At the system application level, the core technical architecture—comprising multi-dimensional information sensing, real-time edge computing, and precise control execution—has facilitated the translation of intelligent pesticide application from laboratory settings to field deployment. Future decision-making systems are projected to evolve towards causal understanding, cluster collaboration, and ubiquitous service, providing critical technical support for the green transformation and sustainable development of agriculture. Full article

25 pages, 3570 KB  
Article
A Context-Aware Flood Warning Framework Integrating Ensemble Learning and LLMs
by Adnan Ahmed Abi Sen, Fares Hamad Aljohani, Nour Mahmoud Bahbouh, Adel Ben Mnaouer, Omar Tayan and Ahmad B. Alkhodre
GeoHazards 2026, 7(1), 35; https://doi.org/10.3390/geohazards7010035 - 11 Mar 2026
Viewed by 395
Abstract
Smart cities require effective disaster management (like flooding, solar storms, sandstorms, or hurricanes), as it directly impacts people’s lives. The key challenges of disaster management are timely detection and effective notification during the crisis. This research presents a smart multi-layer framework for notification classification and management before and during flooding disasters. The framework includes an early detection module as the main phase in the alerting process. This step depends on an Ensemble Learning (EL) model based on a triad of the three best selected models (Deep Learning (DL), Random Forest (RF), and K-nearest Neighbor (KNN)) to analyze data collected continuously from the Internet of Things (IoT) layer. In the boosting phase, the framework utilizes Large Language Models (LLMs) with DL to analyze social textual crowdsourcing data. The results will enable the framework to identify the most affected areas during a flood. The framework adds a fog computing layer alongside a cloud layer to enable instantaneous processing of user responses and generate specialized alerts based on contextual factors such as location, time, risk level, alert type, and user characteristics. Through testing and implementation, the proposed algorithms demonstrated an accuracy rate of over 98% in detecting threats using a dataset of real, collected weather and flooding data. Additionally, the framework proposes a centralized control panel and a design of a smartphone application that offers essential services and facilitates communication among managed civil defense teams, citizens, and volunteers. Full article
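The detection core is a majority vote over the selected triad of base models (DL, RF, and KNN in the paper). A minimal sketch of that ensemble step, with the base models abstracted to any callables returning a class label:

```python
from collections import Counter

def ensemble_predict(models, x):
    """Majority vote over base classifiers. `models` is a list of
    callables mapping a feature vector to a label; with three voters
    (as in the paper's triad) a strict majority always exists for
    binary labels."""
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]
```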

31 pages, 5508 KB  
Article
An Edge–Fog–Cloud IoT Framework for Real-Time Cardiac Monitoring and Rapid Clinical Alerts in Hospital Wards
by Tehseen Baig, Nauman Riaz Chaudhry, Reema Choudhary, Pankaj Yadav, Younus Ahamad Shaik and Ayesha Rashid
Future Internet 2026, 18(3), 130; https://doi.org/10.3390/fi18030130 - 2 Mar 2026
Viewed by 504
Abstract
The difficulties of continuously monitoring cardiac patients in general hospital wards persist because of manual charting systems and slow clinical reactions to worsening physiological states. This paper outlines an edge- and fog-based Internet of Things (IoT) healthcare system to acquire, process, and prioritize patients' vital signs in real time, minimizing alert latency and improving the timeliness of clinical interventions. Wearable 12-lead ECG sensors transmit physiological measurements, such as heart rate, blood pressure, and oxygen saturation, to an intelligent edge service, where preprocessing, threshold-based triage, and machine learning ECG classification are performed; selective synchronization of physiological data with a cloud backend and data delivery to clinicians are made possible by a mobile application. The proposed architecture combines a ribbon-like streaming scheme, Flask-based gateway services, and Firebase Firestore to coordinate scalable mobile/cloud multi-client data dissemination. To capture borderline clinical deterioration, which is often unnoticed by conventional threshold systems, physiological parameters are classified into normal, alarming, emergency, and a new state, average. The Pan–Tompkins++ peak detection algorithm and multiple edge-resident classifiers, such as random forest, XGBoost, decision tree, naive Bayes, K-nearest neighbor, and support vector machine, are used to analyze the ECG waveforms. Experimental analysis on PhysioNet datasets and tests in real wards show that the ensemble models reach an ECG classification precision of 91.96 percent and that snapshot-driven mobile alerts decrease routine patient evaluation time by several minutes, to an average of 15.23 ± 2.71 s. These results suggest that edge-centric IoT systems are appropriate for latency-critical hospital settings and that fog-based coordination is useful in next-generation smart healthcare systems.
Full article
(This article belongs to the Special Issue Edge and Fog Computing for the Internet of Things, 2nd Edition)
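The four-level triage (normal, average, alarming, emergency) amounts to banding each vital sign, with the intermediate "average" band catching borderline deterioration that a binary threshold would miss. The thresholds below are hypothetical heart-rate bands for illustration; the paper's clinical cut-offs are not reproduced here.

```python
def triage(hr_bpm):
    """Classify a heart-rate sample into the paper's four states.
    Bands are illustrative, not clinically validated: the 'average'
    tier flags values just outside normal before they become alarming."""
    if 60 <= hr_bpm <= 100:
        return "normal"
    if 50 <= hr_bpm < 60 or 100 < hr_bpm <= 110:
        return "average"
    if 40 <= hr_bpm < 50 or 110 < hr_bpm <= 130:
        return "alarming"
    return "emergency"
```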

27 pages, 2900 KB  
Review
Electric Mobility Transition, Intelligent Digital Platforms, and Grid–Vehicle Integration Models: A Systematic Review
by Eduardo Javier Pozo-Burgos, Luis Omar Alpala and Argenis Lissander Heredia-Campaña
World Electr. Veh. J. 2026, 17(3), 123; https://doi.org/10.3390/wevj17030123 - 28 Feb 2026
Cited by 1 | Viewed by 1158
Abstract
The transition to electric mobility requires the coordinated evolution of vehicles, charging infrastructure, power systems, and intelligent digital platforms. This study examines the role of Industry 4.0 technologies in enabling large-scale electric vehicle (EV) adoption and effective EV grid integration and synthesizes the existing evidence into a coherent analytical framework to support planning and policy decision-making. A systematic review of 27 peer-reviewed studies published between 2018 and 2025 was conducted in accordance with PRISMA 2020 guidelines, capturing the acceleration of electromobility following the consolidation of Industry 4.0 technologies and the emergence of large-scale policy commitments worldwide. The analysis covers six technology families, including the Internet of Things, big data and analytics, artificial intelligence and machine learning, blockchain, digital twins, and extended reality, and examines their applications in smart charging, grid–vehicle coordination, fleet optimization, and vehicle-to-grid services. The findings show that analytics and artificial intelligence consistently enhance operational reliability and efficiency, while digital twins are increasingly applied to infrastructure siting, grid impact assessment, and scenario analysis. Building on these results, the study proposes a three-layer analytical framework composed of physical, digital, and decision layers, together with a functional EV–grid integration model that links technology readiness to system-level deployment. In addition, a transition timeline for the 2025–2040 period and a concise set of key performance indicators are introduced to support evaluation and comparison. Policy implications for Ecuador and Latin America emphasize interoperability, data governance, realistic cost assessment, and a phased approach to vehicle-to-grid deployment. Full article
(This article belongs to the Section Charging Infrastructure and Grid Integration)

40 pages, 916 KB  
Review
Machine Learning-Enabled 5G and 6G Networks: Methods, Challenges, and Opportunities
by Muhammad Owais and Thokozani Shongwe
Appl. Sci. 2026, 16(4), 2071; https://doi.org/10.3390/app16042071 - 20 Feb 2026
Viewed by 925
Abstract
Fifth-generation (5G) and sixth-generation (6G) wireless communications aim to achieve significantly higher data speeds, remarkably low latency, and substantial improvements in the efficiency of base stations. With the rapid increase in the utilization of broadband data driven by Internet of Things (IoT) gadgets, smart home systems, autonomous vehicles, and virtual reality devices, 5G and 6G networks are set to overcome the limitations of earlier telecommunication technologies and serve as key enablers for future IoT applications. Anticipated as the primary infrastructure for delivering emerging services, 5G cellular networks introduce new requirements and challenges that complicate the achievement of desired objectives. This paper provides a comprehensive overview of machine learning (ML) methods and their application in 5G and 6G wireless networks, covering supervised, unsupervised, and reinforcement learning (RL) approaches. ML is set to play a central and important role in 6G systems for these wireless networks. Subsequently, this paper thoroughly explores a series of challenges within the domain of 5G and 6G networks and examines research opportunities for applying ML techniques to address these challenges. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

18 pages, 6606 KB  
Data Descriptor
Annotated IoT Dataset of Waste Collection Events
by Peter Tarábek, Andrej Michalek, Roman Hriník, Ľubomír Králik and Karol Decsi
Data 2026, 11(2), 38; https://doi.org/10.3390/data11020038 - 11 Feb 2026
Viewed by 536
Abstract
This work presents a curated dataset of multimodal sensor measurements from Internet of Things (IoT) units mounted on waste collection vehicles. Each unit records multiple data streams including GPS position, vehicle velocity, radar-based container presence, accelerometer readings of the lifting arm, and RFID tag identifiers of the bins. The dataset provides two complementary forms of annotation: (1) algorithmically generated events that were manually cleaned through visual inspection of sensor signals, offering large-scale coverage across 5 vehicles over a total of 25 collection days, and (2) manually validated events derived from synchronized video recordings, representing ground truth for 3 vehicles over 8 collection days. In total, the dataset contains 12,391 annotated waste collection events. The dataset spans diverse operational conditions with varying container sizes and includes both RFID-equipped and non-RFID bins. It can be used to train and evaluate machine learning models for event detection, anomaly recognition, or explainability studies, and to support practical applications such as Pay-as-you-throw (PAYT) waste management schemes. By combining multimodal sensor signals with reliable annotations, the dataset represents a unique resource for advancing research in smart waste collection and the broader field of IoT-enabled urban services. Full article
(This article belongs to the Section Information Systems and Data Management)

31 pages, 3500 KB  
Article
Lightweight Protection Mechanisms for IoT Networks Based on Trust Modelling
by Andric Rodríguez, Asdrúbal López-Chau, Leticia Dávila-Nicanor, Víctor Landassuri-Moreno and Saul Lazcano-Salas
IoT 2026, 7(1), 18; https://doi.org/10.3390/iot7010018 - 10 Feb 2026
Abstract
Since its deployment, the Internet of Things (IoT) has transformed everyday life by enabling intelligent environments that improve efficiency and automate services in domains such as agriculture, healthcare, smart cities, and industry. However, the rapid proliferation of IoT devices has introduced significant security challenges, largely driven by the heterogeneity of devices, resource constraints, and the increasing exposure of network communications. This work proposes a lightweight security protection mechanism for IoT networks based on trust modelling. The proposed approach integrates machine learning techniques to evaluate IoT node behavior using network-layer (Layer 3) traffic features under different labeling granularities, including binary, categorical, and subcategorical classifications. By focusing on network-layer observations, the model remains applicable across heterogeneous IoT devices while preserving a low computational footprint. In addition, the Common Vulnerability Scoring System (CVSS) is incorporated as a standardized vulnerability severity metric, enabling the integration of probabilistic security evidence with contextual information about potential impact. This combination allows the estimation of trust to reflect not only the likelihood of anomalous behavior but also its associated severity. Experimental evaluation was conducted using a representative IoT traffic dataset, multiple preprocessing strategies, and several classical machine learning models. The results demonstrate that aggregating traffic-based intrusion detection outputs with vulnerability severity metrics enables a more robust, flexible, and interpretable trust estimation process. This approach supports the early identification of potentially compromised nodes while maintaining scalability and efficiency, making it suitable for deployment in heterogeneous IoT environments. Full article
(This article belongs to the Special Issue Cybersecurity in the Age of the Internet of Things)
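The abstract's core idea of fusing a classifier's anomaly probability with CVSS severity into a single trust estimate could be sketched as follows. The linear fusion rule, function name, and weighting here are illustrative assumptions, not the paper's actual formula:

```python
def trust_score(p_anomaly: float, cvss: float) -> float:
    """Illustrative trust estimate: anomaly likelihood weighted by
    normalized CVSS base severity (0-10 scale).
    This is a sketch of the fusion idea, not the paper's exact rule."""
    if not 0.0 <= p_anomaly <= 1.0:
        raise ValueError("p_anomaly must be in [0, 1]")
    if not 0.0 <= cvss <= 10.0:
        raise ValueError("CVSS base score must be in [0, 10]")
    risk = p_anomaly * (cvss / 10.0)  # severity-weighted anomaly evidence
    return 1.0 - risk                 # high trust only when risk is low

# A node flagged as likely anomalous AND exposed to a critical
# vulnerability receives low trust, e.g. trust_score(0.9, 9.8).
```

The multiplicative form captures the abstract's point that trust should reflect both the likelihood of anomalous behavior and its potential impact: a high anomaly probability against a low-severity vulnerability degrades trust far less than the same probability against a critical one.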

18 pages, 714 KB  
Article
LoRa-Based IoT Multi-Hop Architecture for Smart Vineyard Monitoring: Simulation Framework and System Design
by Chiara Suraci, Pietro Zema, Giuseppe Marrara, Angelo Tropeano, Alessandro Campolo, Mariateresa Russo and Giuseppe Araniti
Sensors 2026, 26(4), 1112; https://doi.org/10.3390/s26041112 - 9 Feb 2026
Abstract
The growing interest in precision agriculture has led, in recent years, to an increase in the adoption of Internet of Things (IoT) technologies in the service of smart agriculture, optimizing agricultural production processes through the monitoring of environmental conditions and the prevention of food loss. This work stems from research conducted as part of the Tech4You project, where the enabling digital technologies developed in Spoke 6 contribute to the advanced solutions envisaged by Spoke 3 to facilitate the transition to a sustainable agrifood system. In particular, we present the design and evaluation of a multi-hop Device-to-Device (D2D) communication architecture that leverages Long Range (LoRa) technology, specifically designed for monitoring vineyards in the context of passito wine production. The proposed framework addresses the challenge of monitoring mobile containers for grapes during the drying phase, a critical stage in which inadequate temperatures and humidity can promote the growth of fungi and the formation of mycotoxins. This work integrates simulation-based performance evaluation with a multi-layer system architecture, with the objective of comparing how different routing strategies choose data forwarding paths to the gateway. The simulation results show that the proposed routing strategy, which combines learning with energy awareness, offers good performance. In particular, it achieves packet delivery rates of over 92% and preserves over 95% of active nodes after 2 h of operation. Energy-aware routing strategies also perform well compared to those that only consider the distance from the destination, but overall, the proposed strategy achieves a better trade-off on the metrics analyzed. Full article
(This article belongs to the Special Issue 5G/6G Networks for Wireless Communication and IoT—2nd Edition)
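The trade-off the abstract describes, preferring forwarders that are both closer to the gateway and better supplied with residual energy, might be sketched as a weighted next-hop score. The field names, normalization, and weights below are assumptions for illustration, not the paper's actual routing metric:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    dist_to_gw: float  # metres to the LoRa gateway
    energy: float      # residual battery fraction in [0, 1]

def pick_next_hop(neighbors, w_dist=0.5, w_energy=0.5, max_dist=1000.0):
    """Choose the neighbor maximizing a combined score: closeness to
    the gateway plus residual energy. Weights and the distance
    normalization are illustrative assumptions."""
    def score(n):
        closeness = 1.0 - min(n.dist_to_gw, max_dist) / max_dist
        return w_dist * closeness + w_energy * n.energy
    return max(neighbors, key=score)

nodes = [Node("a", 800, 0.9), Node("b", 300, 0.2), Node("c", 400, 0.7)]
# With equal weights, "c" wins: it balances proximity and battery
# better than the nearer but nearly drained "b".
```

Setting `w_energy = 0` recovers a purely distance-based strategy, which is the kind of baseline the abstract compares against; the energy term is what helps keep over 95% of nodes alive after sustained operation.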
