
Future Internet

Future Internet is an international, peer-reviewed, open access journal on internet technologies and the information society, published monthly online by MDPI.

JCR Quartile Ranking: Q2 (Computer Science, Information Systems)

All Articles (3,207)

The explosive growth of the Internet of Things (IoT) has increased the use of resource-constrained Low-Power and Lossy Networks (LLNs), where the IPv6 Routing Protocol for LLNs (RPL) is the de facto routing standard. Despite its lightweight architecture, RPL remains highly susceptible to routing-layer attacks such as Blackhole, Lowered Rank, version-number manipulation, and Flooding, owing to the dynamic nature of IoT deployments and the lack of in-protocol security. Because traditional cryptographic countermeasures are often infeasible for LLNs, lightweight, data-driven intrusion detection methods are needed. However, the lack of RPL-specific control-plane semantics in current cybersecurity datasets restricts the use of machine learning (ML) for practical anomaly identification. To close this gap, this work creates a novel, large-scale multiclass RPL attack dataset using Contiki-NG’s Cooja simulator, modeling both static and mobile networks under benign and adversarial settings. A protocol-aware feature extraction pipeline is developed to record detailed packet-level and control-plane activity, including DODAG Information Object (DIO), DODAG Information Solicitation (DIS), and Destination Advertisement Object (DAO) message statistics, along with forwarding and dropping patterns and objective-function fluctuations. The dataset is used to evaluate fifteen classifiers, including Logistic Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), k-Nearest Neighbors (KNN), Random Forest (RF), Extra Trees (ET), Gradient Boosting (GB), AdaBoost (AB), and XGBoost (XGB), together with several ensemble strategies such as soft/hard voting, stacking, and bagging, as part of a comprehensive ML-based detection system. Extensive experiments show that ensemble approaches offer better generalization and predictive performance. With overfitting gaps below 0.006 and low cross-validation variance, the Soft Voting Classifier achieves the highest accuracy of 99.47%, closely followed by XGBoost at 99.45% and Random Forest at 99.44%.
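
As a rough illustration of the soft-voting strategy highlighted in the results, the sketch below builds a scikit-learn soft-voting ensemble for multiclass attack classification. The dataset file name, feature columns, and choice of base learners are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal sketch of a soft-voting ensemble for multiclass RPL attack detection.
# The CSV path, label values, and feature set are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

df = pd.read_csv("rpl_attack_dataset.csv")   # hypothetical export of the simulated traces
X = df.drop(columns=["label"])               # e.g. DIO/DIS/DAO counts, forwarding/drop rates
y = df["label"]                              # benign, blackhole, lowered-rank, version, flooding

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Soft voting averages the class probabilities predicted by the base learners.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
        ("et", ExtraTreesClassifier(n_estimators=200, random_state=42)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))
```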

7 January 2026

Network simulation environments used for data collection under different benign, attack, and mobility configurations: (a) 11-node benign static topology, (b) 11-node attack-injected topology, (c) 11-node topology with mobility, (d) 21-node benign static topology, (e) 21-node attack-injected topology, (f) 21-node topology with mobility.

Performance Assessment of DL for Network Intrusion Detection on a Constrained IoT Device

  • Armin Mazinani
  • Daniele Antonucci
  • Luca Davoli
  • + 1 author

This work investigates the deployment of Deep Learning (DL) models for network intrusion detection on resource-constrained IoT devices, using the public CICIoT2023 dataset. In particular, we consider the following DL models: Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), Temporal Convolutional Network (TCN), and Multi-Layer Perceptron (MLP). Bayesian optimization is employed to fine-tune the models’ hyperparameters and ensure reliable performance evaluation across both binary (2-class) and multi-class (8-class, 34-class) intrusion detection. Then, the computational complexity of each DL model is analyzed, in terms of the number of Multiply-ACCumulate operations (MACCs), RAM usage, and inference time, through the STMicroelectronics Cube.AI Analyzer tool, with models being deployed on an STM32H7S78-DK board. To assess the practical deployability of the considered DL models, a trade-off score (balancing classification accuracy and computational efficiency) is introduced: according to this score, our experimental results indicate that MLP and TCN outperform the other models. Furthermore, Post-Training Quantization (PTQ) to 8-bit integer precision is applied, allowing the model size to be reduced by more than 90% with negligible performance degradation. This demonstrates the effectiveness of quantization in optimizing DL models for real-world deployment on resource-constrained IoT devices.
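
To make the quantization step concrete, the sketch below shows a generic 8-bit post-training quantization pass with TensorFlow Lite. The paper itself analyzes and deploys the models through STMicroelectronics Cube.AI on an STM32H7S78-DK board; the toy MLP, feature dimension, and calibration data here are illustrative assumptions only.

```python
# Generic int8 Post-Training Quantization sketch with TensorFlow Lite.
# Architecture, input size, and calibration samples are hypothetical.
import numpy as np
import tensorflow as tf

# Small MLP stand-in for a tabular intrusion-detection model (binary case).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(46,)),            # assumed flow-feature vector size
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
# ... model.compile(...) and model.fit(...) on the training split would go here ...

def representative_dataset():
    # Calibration samples let the converter estimate int8 scales and zero-points.
    for _ in range(100):
        yield [np.random.rand(1, 46).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_int8 = converter.convert()
with open("ids_mlp_int8.tflite", "wb") as f:
    f.write(tflite_int8)                           # quantized model, ready for import into Cube.AI
```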

7 January 2026

Applications of smart cities, smart buildings, smart agriculture systems, smart grids, and other smart systems benefit from Internet of Things (IoT) protocols, networks, and architecture. Wireless Sensor Networks (WSNs) in smart systems that employ IoT use wireless communication technologies between sensors in the Things layer and the Fog layer hub. Such wireless protocols and networks include WiFi, Bluetooth, and Zigbee, among others. However, the payload formats of these protocols are heterogeneous, and thus they lack a unified frame format that ensures interoperability. In this paper, a lightweight, interoperable frame format for low-rate, small-size WSNs in IoT-based systems is designed, implemented, and tested. The practicality of this system is underscored by the development of a gateway that transfers collected data from sensors that use the unified frame to online servers via Message Queuing Telemetry Transport (MQTT) secured with Transport Layer Security (TLS), ensuring interoperability using the JavaScript Object Notation (JSON) format. The proposed frame is tested using market-available technologies such as Bluetooth and Zigbee, and then applied to smart home applications. The smart home scenario is chosen because it encompasses various smart subsystems, such as healthcare monitoring systems, energy monitoring systems, and entertainment systems, among others. The proposed system offers several advantages, including a low-cost architecture, ease of setup, improved interoperability, high flexibility, and a lightweight frame that can be applied to other wireless-based smart systems and applications.
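
To illustrate the gateway's upstream path, here is a minimal sketch of publishing a unified sensor reading as JSON over MQTT secured with TLS, using the paho-mqtt Python client. The broker address, topic, CA certificate, and JSON field names are assumptions, not the paper's exact frame definition.

```python
# Minimal sketch: forward a reading, translated from the unified frame, to an
# online server as JSON over MQTT with TLS. All identifiers are placeholders.
import json
import paho.mqtt.publish as publish

# A unified reading re-encoded as JSON for interoperability.
reading = {
    "node_id": 7,
    "subsystem": "healthcare",   # e.g. healthcare, energy, entertainment
    "sensor": "heart_rate",
    "value": 72,
    "unit": "bpm",
}

publish.single(
    topic="smart_home/sensors/7",
    payload=json.dumps(reading),
    hostname="broker.example.org",     # hypothetical broker
    port=8883,                         # conventional MQTT-over-TLS port
    tls={"ca_certs": "ca.crt"},        # verify the broker's certificate
    qos=1,
)
```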

7 January 2026

Knowledge graphs (KGs) offer a structured and collaborative approach to integrating diverse knowledge from various domains. However, constructing knowledge graphs typically requires significant manual effort and relies heavily on pretrained models, limiting their adaptability to specific sub-domains. This paper proposes an innovative, efficient, and locally deployable knowledge graph construction framework that leverages low-rank adaptation (LoRA) to fine-tune large language models (LLMs) in order to reduce noise. By integrating iterative optimization, consistency-guided filtering, and prompt-based extraction, the proposed method achieves a balance between precision and coverage, enabling the robust extraction of standardized subject–predicate–object triples from raw long texts. This makes it highly effective for knowledge graph construction and downstream reasoning tasks. We used the open-source model Qwen3-14B with parameter-efficient fine-tuning, and experimental results on the SciERC dataset show that, under strict matching (i.e., requiring exact matching of all components), our method achieved an F1 score of 0.358, outperforming the baseline model’s F1 score of 0.349. Under fuzzy matching (allowing some parts of the triples to be unmatched), the F1 score reached 0.447, outperforming the baseline model’s F1 score of 0.392, demonstrating the effectiveness of our approach. Ablation studies validate the robustness and generalization potential of our method, highlighting the contribution of each component to the overall performance.
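
As a rough sketch of the LoRA fine-tuning step, the snippet below attaches a low-rank adapter to a causal LLM using Hugging Face transformers and peft. The model identifier, target modules, rank, and prompt format are assumptions and may differ from the authors' actual configuration.

```python
# Sketch: wrap a causal LLM with a LoRA adapter for triple-extraction fine-tuning.
# Model id, adapter hyperparameters, and prompt wording are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen3-14B"                    # assumed Hugging Face model id
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

lora_config = LoraConfig(
    r=8,                                         # rank of the low-rank update
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],         # attention projections (assumed choice)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()               # only the adapter weights are trainable

# Prompt-based extraction: the model is trained to emit subject-predicate-object
# triples for a given passage, for example:
prompt = ("Extract subject-predicate-object triples from the text below.\n"
          "Text: Low-rank adaptation fine-tunes large language models efficiently.\n"
          "Triples:")
# The prompt plus target triples would then feed a standard supervised fine-tuning loop.
```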

6 January 2026

Reprints of Collections

IoT Security: Threat Detection, Analysis and Defense (Reprint)
Editors: Olivier Markowitch, Jean-Michel Dricot

Virtual Reality and Metaverse: Impact on the Digital Transformation of Society II (Reprint)
Editor: Diego Vergara

Future Internet - ISSN 1999-5903