Future Internet, Volume 16, Issue 2 (February 2024) – 31 articles

Cover Story: Tiny Machine Learning (TinyML) and Big Data, enhanced by Edge Artificial Intelligence, are essential for effectively managing the extensive data produced by numerous connected devices. Our study introduces a set of TinyML algorithms designed and developed to improve Big Data management in large-scale IoT systems. These algorithms, named TinyCleanEDF, EdgeClusterML, CompressEdgeML, CacheEdgeML, and TinyHybridSenseQ, operate together to enhance data processing, storage, and quality control in IoT networks, utilizing the capabilities of Edge AI. Our experimental evaluation of the proposed techniques includes executing all the algorithms on varying numbers of Raspberry Pi devices, ranging from one to ten. Ultimately, we anticipate that the proposed algorithms will offer a comprehensive and efficient approach to managing the complexities of IoT, Big Data, and Edge AI.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
21 pages, 3761 KiB  
Article
Energy-Efficient De-Duplication Mechanism for Healthcare Data Aggregation in IoT
by Muhammad Nafees Ulfat Khan, Weiping Cao, Zhiling Tang, Ata Ullah and Wanghua Pan
Future Internet 2024, 16(2), 66; https://doi.org/10.3390/fi16020066 - 19 Feb 2024
Cited by 1 | Viewed by 1597
Abstract
The rapid development of the Internet of Things (IoT) has opened the way for transformative advances in numerous fields, including healthcare. IoT-based healthcare systems provide unprecedented opportunities to gather patients’ real-time data and make appropriate decisions at the right time. Yet, the deployed sensors generate normal readings most of the time, which are transmitted to Cluster Heads (CHs). Handling these voluminous duplicated data is quite challenging. The existing techniques have high energy consumption, storage costs, and communication costs. To overcome these problems, this paper presents an innovative Energy-Efficient Fuzzy Data Aggregation System (EE-FDAS). At the first level, it checks whether sensors generate normal or critical readings. In the first case, a reading is converted to the Boolean digit 0, so it occupies only a single digit, which considerably reduces energy consumption. In the second scenario, irregular readings are transmitted in their original 16- or 32-bit form. The data are then aggregated and transmitted to the respective CHs. Afterwards, these data are further transmitted to Fog servers, from which doctors have access. Lastly, the data are stored in the cloud server for later use. To check the proficiency of the proposed EE-FDAS scheme, extensive simulations were performed using NS-2.35. The results show that EE-FDAS performs well in terms of aggregation factor, energy consumption, packet drop rate, and communication and storage costs. Full article
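A rough sketch of the encoding idea summarized in this abstract is given below: normal readings collapse to a single flag, critical readings keep their full-width value, and a cluster head aggregates both. The threshold band, payload layout, and function names are illustrative assumptions, not the authors’ EE-FDAS implementation.

```python
# Minimal sketch of the normal/critical encoding idea described above.
# Thresholds, payload layout, and helper names are illustrative assumptions.
import struct

NORMAL_RANGE = (36.0, 37.5)  # e.g., body temperature considered "normal"

def encode_reading(value: float) -> bytes:
    """Encode a sensor reading: flag byte 0 for normal readings,
    flag 1 followed by the full 32-bit float for critical ones."""
    lo, hi = NORMAL_RANGE
    if lo <= value <= hi:
        return b"\x00"                         # normal: a single flag/digit
    return b"\x01" + struct.pack("!f", value)  # critical: full-precision value

def aggregate_at_cluster_head(packets: list[bytes]) -> dict:
    """Cluster-head aggregation: count normals, keep critical values verbatim."""
    normals, criticals = 0, []
    for pkt in packets:
        if pkt[0] == 0:
            normals += 1
        else:
            criticals.append(struct.unpack("!f", pkt[1:])[0])
    return {"normal_count": normals, "critical_values": criticals}

readings = [36.8, 36.9, 39.4, 37.1, 35.1]
packets = [encode_reading(r) for r in readings]
print(aggregate_at_cluster_head(packets))
# e.g. {'normal_count': 3, 'critical_values': [~39.4, ~35.1]}
```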

18 pages, 1478 KiB  
Article
IoTwins: Implementing Distributed and Hybrid Digital Twins in Industrial Manufacturing and Facility Management Settings
by Paolo Bellavista and Giuseppe Di Modica
Future Internet 2024, 16(2), 65; https://doi.org/10.3390/fi16020065 - 17 Feb 2024
Viewed by 1889
Abstract
A Digital Twin (DT) refers to a virtual representation or digital replica of a physical object, system, process, or entity. This concept involves creating a detailed, real-time digital counterpart that mimics the behavior, characteristics, and attributes of its physical counterpart. DTs have the potential to improve efficiency, reduce costs, and enhance decision-making by providing a detailed, real-time understanding of the physical systems they represent. While this technology is finding application in numerous fields, such as energy, healthcare, and transportation, it appears to be a key component of the digital transformation of industries fostered by the fourth Industrial revolution (Industry 4.0). In this paper, we present the research results achieved by IoTwins, a European research project aimed at investigating opportunities and issues of adopting DTs in the fields of industrial manufacturing and facility management. Particularly, we discuss a DT model and a reference architecture for use by the research community to implement a platform for the development and deployment of industrial DTs in the cloud continuum. Guided by the devised architectures’ principles, we implemented an open platform and a development methodology to help companies build DT-based industrial applications and deploy them in the so-called Edge/Cloud continuum. To prove the research value and the usability of the implemented platform, we discuss a simple yet practical development use case. Full article
(This article belongs to the Section Internet of Things)

15 pages, 510 KiB  
Article
Online Optimization of Pickup and Delivery Problem Considering Feasibility
by Ryo Matsuoka, Koichi Kobayashi and Yuh Yamashita
Future Internet 2024, 16(2), 64; https://doi.org/10.3390/fi16020064 - 17 Feb 2024
Viewed by 1573
Abstract
A pickup and delivery problem by multiple agents has many applications, such as food delivery services and disaster rescue. In this problem, there are cases where fuel must be considered (e.g., the case of using drones as agents). In addition, there are cases where demand forecasting should be considered (e.g., the case where a large number of orders are carried by a small number of agents). In this paper, we study an online pickup and delivery problem that accounts for fuel and demand forecasting. First, the pickup and delivery problem with fuel constraints is formulated. The information on demand forecasting is included in the cost function. Based on the orders, the agents’ paths (e.g., the paths from stores to customers) are calculated. We suppose that the target area is given by an undirected graph. Using a given graph, several constraints, such as the moves and fuel levels of the agents, are introduced. This problem is reduced to a mixed integer linear programming (MILP) problem. Next, in online optimization, the MILP problem is solved depending on the acceptance of orders. Owing to new orders, the calculated future paths may be changed. Finally, by using a numerical example, we present the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in Japan 2022–2023)
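As a lightweight illustration of the kind of constraint the abstract describes (moves that consume fuel on an undirected graph), the sketch below searches for a shortest feasible path while tracking remaining fuel; it is not the paper’s MILP formulation, and the graph, costs, and refueling nodes are invented.

```python
# Illustrative sketch only: shortest feasible path on an undirected graph when
# each move consumes fuel and some nodes allow refueling. The paper solves the
# full pickup-and-delivery problem as a MILP; this toy search just shows how
# move and fuel constraints can be expressed. Graph and parameters are made up.
import heapq

edges = {("depot", "a"): 2, ("a", "b"): 3, ("b", "customer"): 2,
         ("depot", "c"): 4, ("c", "customer"): 5}
refuel_nodes = {"a"}          # nodes where the agent can refuel to capacity
CAPACITY = 5                  # maximum fuel an agent can carry

graph = {}
for (u, v), cost in edges.items():      # build an undirected adjacency list
    graph.setdefault(u, []).append((v, cost))
    graph.setdefault(v, []).append((u, cost))

def shortest_feasible_path(start, goal):
    """Dijkstra over (node, remaining fuel) states."""
    best = {}
    heap = [(0, start, CAPACITY, (start,))]
    while heap:
        dist, node, fuel, path = heapq.heappop(heap)
        if node == goal:
            return dist, path
        if best.get((node, fuel), float("inf")) <= dist:
            continue
        best[(node, fuel)] = dist
        for nxt, cost in graph.get(node, []):
            if cost > fuel:
                continue                  # fuel constraint: move not allowed
            nxt_fuel = CAPACITY if nxt in refuel_nodes else fuel - cost
            heapq.heappush(heap, (dist + cost, nxt, nxt_fuel, path + (nxt,)))
    return None

print(shortest_feasible_path("depot", "customer"))
# (7, ('depot', 'a', 'b', 'customer')): the depot->c->customer route is both
# longer (cost 9) and infeasible, since only 1 unit of fuel remains at c.
```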

20 pages, 19399 KiB  
Article
Speech Inpainting Based on Multi-Layer Long Short-Term Memory Networks
by Haohan Shi, Xiyu Shi and Safak Dogan
Future Internet 2024, 16(2), 63; https://doi.org/10.3390/fi16020063 - 17 Feb 2024
Cited by 2 | Viewed by 1465
Abstract
Audio inpainting plays an important role in addressing incomplete, damaged, or missing audio signals, contributing to improved quality of service and overall user experience in multimedia communications over the Internet and mobile networks. This paper presents an innovative solution for speech inpainting using Long Short-Term Memory (LSTM) networks, i.e., a restoring task where the missing parts of speech signals are recovered from the previous information in the time domain. The lost or corrupted speech signals are also referred to as gaps. We regard the speech inpainting task as a time-series prediction problem in this research work. To address this problem, we designed multi-layer LSTM networks and trained them on different speech datasets. Our study aims to investigate the inpainting performance of the proposed models on different datasets and with varying LSTM layers and explore the effect of multi-layer LSTM networks on the prediction of speech samples in terms of perceived audio quality. The inpainted speech quality is evaluated through the Mean Opinion Score (MOS) and a frequency analysis of the spectrogram. Our proposed multi-layer LSTM models are able to restore up to 1 s of gaps with high perceptual audio quality using the features captured from the time domain only. Specifically, for gap lengths under 500 ms, the MOS can reach up to 3~4, and for gap lengths ranging between 500 ms and 1 s, the MOS can reach up to 2~3. In the time domain, the proposed models can proficiently restore the envelope and trend of lost speech signals. In the frequency domain, the proposed models can restore spectrogram blocks with higher similarity to the original signals at frequencies less than 2.0 kHz and comparatively lower similarity at frequencies in the range of 2.0 kHz~8.0 kHz. Full article
(This article belongs to the Special Issue Deep Learning and Natural Language Processing II)
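The sketch below illustrates, in PyTorch, the general framing described in the abstract: a stacked LSTM trained for next-sample prediction that fills a gap autoregressively from the preceding context. Layer count, hidden size, and the fill-in loop are assumptions for illustration, not the paper’s configuration.

```python
# Minimal PyTorch sketch of treating speech inpainting as time-series prediction
# with a stacked (multi-layer) LSTM. Sizes and the autoregressive fill-in loop
# are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class StackedLSTMPredictor(nn.Module):
    def __init__(self, hidden_size=128, num_layers=3):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)   # predict the next sample

    def forward(self, x, state=None):
        out, state = self.lstm(x, state)
        return self.head(out), state

def inpaint(model, context, gap_len):
    """Autoregressively predict gap_len samples from the preceding context."""
    model.eval()
    with torch.no_grad():
        _, state = model(context.view(1, -1, 1))   # warm up on known samples
        sample = context[-1].view(1, 1, 1)
        filled = []
        for _ in range(gap_len):
            pred, state = model(sample, state)
            sample = pred[:, -1:, :]               # feed the prediction back in
            filled.append(sample.item())
    return torch.tensor(filled)

model = StackedLSTMPredictor()
context = torch.sin(torch.linspace(0, 20, 400))    # stand-in for a speech frame
print(inpaint(model, context, gap_len=80).shape)   # torch.Size([80])
```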

16 pages, 1186 KiB  
Article
Merging Ontologies and Data from Electronic Health Records
by Salvatore Calcagno, Andrea Calvagna, Emiliano Tramontana and Gabriella Verga
Future Internet 2024, 16(2), 62; https://doi.org/10.3390/fi16020062 - 17 Feb 2024
Cited by 1 | Viewed by 1542
Abstract
The Electronic Health Record (EHR) is a system for collecting and storing patient medical records as data that can be mechanically accessed, hence facilitating and assisting the medical decision-making process. EHRs exist in several formats, and each format lists thousands of keywords to classify patients’ data. The keywords are specific medical jargon; hence, data classification is very accurate. As the keywords constituting the formats of medical records express concepts by means of specific jargon without definitions or references, their proper use is left to clinicians and could be affected by their background; hence, the interpretation of data could become slow or less accurate than desired. This article presents an approach that accurately relates data in EHRs to ontologies in the medical realm. Thanks to ontologies, clinicians can be assisted when writing or analysing health records; e.g., our solution promptly suggests rigorous definitions for scientific terms and automatically connects data spread over several parts of EHRs. The first step of our approach consists of converting selected data and keywords from several EHR formats into a format that is easier to parse; the second step merges the extracted data with specialised medical ontologies. Finally, enriched versions of the medical data are made available to professionals. The proposed approach was validated on real-world samples of medical records and ontologies. The results have shown versatility in handling data, precision of query results, and appropriate suggestions for relations among medical records. Full article
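A toy sketch of the two-step idea in this abstract is shown below: flatten keywords from differently formatted EHR entries, then merge them with an ontology to attach definitions. The mini-ontology, field names, and records are invented; real medical ontologies are vastly larger.

```python
# Toy sketch of the two steps described above: (1) pull keywords out of
# differently formatted EHR entries into a common form, (2) merge them with an
# ontology to attach rigorous definitions. All data here is invented.
ontology = {
    "hypertension": "Persistently elevated arterial blood pressure.",
    "tachycardia": "A heart rate exceeding the normal resting rate.",
}

ehr_records = [
    {"format": "A", "fields": {"dx_primary": "Hypertension", "hr": "102"}},
    {"format": "B", "fields": {"diagnosis": "tachycardia; follow-up in 2 weeks"}},
]

def extract_keywords(record):
    """Step 1: flatten each record's fields into lowercase candidate terms."""
    terms = []
    for value in record["fields"].values():
        terms.extend(t.strip().lower() for t in value.replace(";", " ").split())
    return terms

def enrich(record):
    """Step 2: attach ontology definitions to every recognized term."""
    matched = {t: ontology[t] for t in extract_keywords(record) if t in ontology}
    return {**record, "ontology_annotations": matched}

for rec in ehr_records:
    print(enrich(rec)["ontology_annotations"])
# {'hypertension': 'Persistently elevated arterial blood pressure.'}
# {'tachycardia': 'A heart rate exceeding the normal resting rate.'}
```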

24 pages, 1502 KiB  
Article
Enhancing Energy Efficiency in IoT-NDN via Parameter Optimization
by Dennis Papenfuß, Bennet Gerlach, Stefan Fischer and Mohamed Ahmed Hail
Future Internet 2024, 16(2), 61; https://doi.org/10.3390/fi16020061 - 16 Feb 2024
Viewed by 1448
Abstract
The IoT encompasses objects, sensors, and everyday items not typically considered computers. IoT devices are subject to severe energy, memory, and computation power constraints. Employing NDN for the IoT is a recent approach to accommodate these issues. To gain a deeper insight into how different network parameters affect energy consumption, analyzing a range of parameters using hyperparameter optimization seems reasonable. The experiments from this work’s ndnSIM-based hyperparameter setup indicate that the data packet size has the most significant impact on energy consumption, followed by the caching scheme, caching strategy, and finally, the forwarding strategy. The energy footprint of these parameters is orders of magnitude apart. Surprisingly, the packet request sequence influences the caching parameters’ energy footprint more than the graph size and topology. Regarding energy consumption, the results indicate that data compression may be more relevant than expected, and caching may be more significant than the forwarding strategy. The framework for ndnSIM developed in this work can be used to simulate NDN networks more efficiently. Furthermore, the work presents a valuable basis for further research on the effect of specific parameter combinations not examined before. Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)
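A generic sketch of this kind of parameter sweep is shown below, enumerating the four parameter axes named in the abstract and ranking configurations by a measured energy value. The energy_of() stub and all option names are placeholders, not the paper’s ndnSIM framework.

```python
# Generic sketch of a parameter sweep over the four axes named above; the
# energy_of() stub stands in for an actual ndnSIM run, and every option name
# here is a placeholder rather than the paper's framework.
import itertools, random

space = {
    "packet_size": [256, 1024, 4096],
    "caching_scheme": ["LRU", "LFU"],
    "caching_strategy": ["cache-all", "probabilistic"],
    "forwarding_strategy": ["best-route", "multicast"],
}

def energy_of(config) -> float:
    """Placeholder for running one simulation and reading back its energy."""
    random.seed(str(sorted(config.items())))       # deterministic fake result
    return config["packet_size"] * random.uniform(0.8, 1.2)

results = []
for combo in itertools.product(*space.values()):
    config = dict(zip(space.keys(), combo))
    results.append((energy_of(config), config))

for energy, config in sorted(results)[:3]:         # three lowest-energy configs
    print(round(energy, 1), config)
```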

18 pages, 6477 KiB  
Article
The Microverse: A Task-Oriented Edge-Scale Metaverse
by Qian Qu, Mohsen Hatami, Ronghua Xu, Deeraj Nagothu, Yu Chen, Xiaohua Li, Erik Blasch, Erika Ardiles-Cruz and Genshe Chen
Future Internet 2024, 16(2), 60; https://doi.org/10.3390/fi16020060 - 13 Feb 2024
Cited by 6 | Viewed by 2270
Abstract
Over the past decade, there has been a remarkable acceleration in the evolution of smart cities and intelligent spaces, driven by breakthroughs in technologies such as the Internet of Things (IoT), edge–fog–cloud computing, and machine learning (ML)/artificial intelligence (AI). As society begins to harness the full potential of these smart environments, the horizon brightens with the promise of an immersive, interconnected 3D world. The forthcoming paradigm shift in how we live, work, and interact owes much to groundbreaking innovations in augmented reality (AR), virtual reality (VR), extended reality (XR), blockchain, and digital twins (DTs). However, realizing the expansive digital vista in our daily lives is challenging. Current limitations include an incomplete integration of pivotal techniques, daunting bandwidth requirements, and the critical need for near-instantaneous data transmission, all impeding the digital VR metaverse from fully manifesting as envisioned by its proponents. This paper seeks to delve deeply into the intricacies of the immersive, interconnected 3D realm, particularly in applications demanding high levels of intelligence. Specifically, this paper introduces the microverse, a task-oriented, edge-scale, pragmatic solution for smart cities. Unlike all-encompassing metaverses, each microverse instance serves a specific task as a manageable digital twin of an individual network slice. Each microverse enables on-site/near-site data processing, information fusion, and real-time decision-making within the edge–fog–cloud computing framework. The microverse concept is verified using smart public safety surveillance (SPSS) for smart communities as a case study, demonstrating its feasibility in practical smart city applications. The aim is to stimulate discussions and inspire fresh ideas in our community, guiding us as we navigate the evolving digital landscape of smart cities to embrace the potential of the metaverse. Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in USA 2022–2023)

16 pages, 3576 KiB  
Article
Digital-Twin-Based Monitoring System for Slab Production Process
by Tianjie Fu, Peiyu Li, Chenke Shi and Youzhu Liu
Future Internet 2024, 16(2), 59; https://doi.org/10.3390/fi16020059 - 13 Feb 2024
Viewed by 1597
Abstract
The demand for high-quality steel across various industries has led to an increasing need for superior-grade steel. The quality of slab ingots is a pivotal factor influencing the final quality of steel production. However, the current level of intelligence in the steelmaking industry’s processes is relatively insufficient. Consequently, slab ingot quality inspection is characterized by high-temperature risks and imprecision. The positional accuracy of quality detection is inadequate, and the precise quantification of slab ingot production and quality remains challenging. This paper proposes a digital twin (DT)-based monitoring system for the slab ingot production process that integrates DT technology with slab ingot process detection. A neural network is introduced for defect identification to ensure precise defect localization and efficient recognition. Concurrently, environmental production factors are considered, leading to the introduction of a defect prediction module. The effectiveness of this system is validated through experimental verification. Full article

16 pages, 463 KiB  
Article
CROWDMATCH: Optimizing Crowdsourcing Matching through the Integration of Matching Theory and Coalition Games
by Adedamola Adesokan, Rowan Kinney and Eirini Eleni Tsiropoulou
Future Internet 2024, 16(2), 58; https://doi.org/10.3390/fi16020058 - 11 Feb 2024
Cited by 1 | Viewed by 1466
Abstract
This paper tackles the challenges inherent in crowdsourcing dynamics by introducing the CROWDMATCH mechanism. Aimed at enabling crowdworkers to strategically select suitable crowdsourcers while contributing information to crowdsourcing tasks, CROWDMATCH considers incentives, information availability and cost, and the decisions of fellow crowdworkers to model the utility functions for both the crowdworkers and the crowdsourcers. Specifically, the paper presents an initial Approximate CROWDMATCH mechanism grounded in matching theory principles, eliminating externalities from crowdworkers’ decisions and enabling each entity to maximize its utility. Subsequently, the Accurate CROWDMATCH mechanism is introduced, which is initiated by the outcome of the Approximate CROWDMATCH mechanism, and coalition game-theoretic principles are employed to refine the matching process by accounting for externalities. The paper’s contributions include the introduction of the CROWDMATCH system model, the development of both Approximate and Accurate CROWDMATCH mechanisms, and a demonstration of their superior performance through comprehensive simulation results. The mechanisms’ scalability in large-scale crowdsourcing systems and operational advantages are highlighted, distinguishing them from existing methods and highlighting their efficacy in empowering crowdworkers in crowdsourcer selection. Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in USA 2022–2023)

26 pages, 17847 KiB  
Article
Distributed Mobility Management Support for Low-Latency Data Delivery in Named Data Networking for UAVs
by Mohammed Bellaj, Najib Naja and Abdellah Jamali
Future Internet 2024, 16(2), 57; https://doi.org/10.3390/fi16020057 - 10 Feb 2024
Cited by 1 | Viewed by 1742
Abstract
Named Data Networking (NDN) has emerged as a promising architecture to overcome the limitations of the conventional Internet Protocol (IP) architecture, particularly in terms of mobility, security, and data availability. However, despite the advantages it offers, producer mobility management remains a significant challenge for NDN, especially for moving vehicles and emerging technologies such as Unmanned Aerial Vehicles (UAVs), known for their high-speed and unpredictable movements, which make it difficult for NDN to maintain seamless communication. To solve this mobility problem, we propose a Distributed Mobility Management Scheme (DMMS) to support UAV mobility and ensure low-latency content delivery in the NDN architecture. DMMS utilizes decentralized Anchors to proactively forward the consumer’s Interest packets toward the producer’s predicted location when a handoff occurs. Moreover, it introduces a new forwarding approach that combines the standard and location-based forwarding strategies to improve forwarding efficiency under producer mobility without changing the network structure. Using a realistic scenario, DMMS is evaluated and compared against two well-known solutions, namely MAP-ME and Kite, using ndnSIM simulations. We demonstrate that DMMS achieves better results compared to the Kite and MAP-ME solutions in terms of network cost and consumer quality-of-service metrics. Full article

24 pages, 8449 KiB  
Article
A Secure Opportunistic Network with Efficient Routing for Enhanced Efficiency and Sustainability
by Ayman Khalil and Besma Zeddini
Future Internet 2024, 16(2), 56; https://doi.org/10.3390/fi16020056 - 8 Feb 2024
Cited by 4 | Viewed by 1544
Abstract
The intersection of cybersecurity and opportunistic networks has ushered in a new era of innovation in the realm of wireless communications. In an increasingly interconnected world, where seamless data exchange is pivotal for both individual users and organizations, the need for efficient, reliable, and sustainable networking solutions has never been more pressing. Opportunistic networks, characterized by intermittent connectivity and dynamic network conditions, present unique challenges that necessitate innovative approaches for optimal performance and sustainability. This paper introduces a groundbreaking paradigm that integrates the principles of cybersecurity with opportunistic networks. At its core, this study presents a novel routing protocol meticulously designed to significantly outperform existing solutions concerning key metrics such as delivery probability, overhead ratio, and communication delay. Leveraging cybersecurity’s inherent strengths, our protocol not only fortifies the network’s security posture but also provides a foundation for enhancing efficiency and sustainability in opportunistic networks. The overarching goal of this paper is to address the inherent limitations of conventional opportunistic network protocols. By proposing an innovative routing protocol, we aim to optimize data delivery, minimize overhead, and reduce communication latency. These objectives are crucial for ensuring seamless and timely information exchange, especially in scenarios where traditional networking infrastructures fall short. Through large-scale simulations, the new model proves its effectiveness in different scenarios, especially in terms of message delivery probability, while ensuring reasonable overhead and latency. Full article

17 pages, 855 KiB  
Article
Automated Identification of Sensitive Financial Data Based on the Topic Analysis
by Meng Li, Jiqiang Liu and Yeping Yang
Future Internet 2024, 16(2), 55; https://doi.org/10.3390/fi16020055 - 8 Feb 2024
Viewed by 1313
Abstract
Data governance is an extremely important protection and management measure throughout the entire life cycle of data. However, there are still data governance issues, such as data security risks, data privacy breaches, and difficulties in data management and access control. These problems lead to a risk of data breaches and abuse. Therefore, the security classification and grading of data has become an important task to accurately identify sensitive data and adopt appropriate maintenance and management measures with different sensitivity levels. This work started from the problems existing in the current data security classification and grading work, such as inconsistent classification and grading standards, difficult data acquisition and sorting, and weak semantic information of data fields, to find the limitations of the current methods and the direction for improvement. The automatic identification method of sensitive financial data proposed in this paper is based on topic analysis and was constructed by incorporating Jieba word segmentation, word frequency statistics, the skip-gram model, K-means clustering, and other technologies. Expert assistance was sought to select appropriate keywords for enhanced accuracy. This work used the descriptive text library and real business data of a Chinese financial institution for training and testing to further demonstrate its effectiveness and usefulness. The evaluation indicators illustrated the effectiveness of this method in the classification of data security. The proposed method addressed the challenge of sensitivity level division in texts with limited semantic information, which overcame the limitations on model expansion across different domains and provided an optimized application model. All of the above pointed out the direction for the real-time updating of the method. Full article
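The pipeline outlined in the abstract (segmentation, word statistics, skip-gram embeddings, K-means clustering) can be sketched roughly as follows, using toy English field descriptions, a plain split() in place of Jieba, gensim’s skip-gram Word2Vec, and scikit-learn’s KMeans. The corpus, field names, and cluster count are invented and the result is only indicative.

```python
# Sketch of the described pipeline: tokenize field descriptions, learn
# skip-gram embeddings, and cluster fields by topic. Toy data only; this is
# not the paper's trained model.
import numpy as np
from gensim.models import Word2Vec          # skip-gram when sg=1
from sklearn.cluster import KMeans

field_descriptions = {
    "acct_no": "customer bank account number identifier",
    "card_pan": "payment card primary account number",
    "branch_city": "city name of the servicing branch",
    "office_region": "geographic region of the branch office",
}

tokenized = [desc.split() for desc in field_descriptions.values()]
w2v = Word2Vec(tokenized, vector_size=32, window=3, min_count=1, sg=1, seed=1)

def field_vector(desc: str) -> np.ndarray:
    """Average the skip-gram vectors of a field's description words."""
    return np.mean([w2v.wv[tok] for tok in desc.split()], axis=0)

X = np.stack([field_vector(d) for d in field_descriptions.values()])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for (name, _), label in zip(field_descriptions.items(), labels):
    print(name, "-> topic cluster", label)   # e.g. account fields vs. location fields
```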

3 pages, 155 KiB  
Editorial
Modern Trends in Multi-Agent Systems
by Martin Kenyeres, Ivana Budinská, Ladislav Hluchý and Agostino Poggi
Future Internet 2024, 16(2), 54; https://doi.org/10.3390/fi16020054 - 8 Feb 2024
Viewed by 1864
Abstract
The term “multi-agent system” is generally understood as an interconnected set of independent entities that can effectively solve complex and time-consuming problems exceeding the individual abilities of common problem solvers [...] Full article
(This article belongs to the Special Issue Modern Trends in Multi-Agent Systems)
4 pages, 141 KiB  
Editorial
State-of-the-Art Future Internet Technology in Italy 2022–2023
by Massimo Cafaro, Italo Epicoco and Marco Pulimeno
Future Internet 2024, 16(2), 53; https://doi.org/10.3390/fi16020053 - 6 Feb 2024
Viewed by 1347
Abstract
This Special Issue aims to provide a comprehensive overview of the current state of the art in Future Internet Technology in Italy [...] Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in Italy 2022–2023)
22 pages, 1501 KiB  
Article
A QoS-Aware IoT Edge Network for Mobile Telemedicine Enabling In-Transit Monitoring of Emergency Patients
by Adwitiya Mukhopadhyay, Aryadevi Remanidevi Devidas, Venkat P. Rangan and Maneesha Vinodini Ramesh
Future Internet 2024, 16(2), 52; https://doi.org/10.3390/fi16020052 - 6 Feb 2024
Cited by 1 | Viewed by 1890
Abstract
Addressing the inadequacy of medical facilities in rural communities and the high number of patients affected by ailments that need to be treated immediately is of prime importance for all countries. The various recent healthcare emergency situations bring out the importance of telemedicine and demand rapid transportation of patients to nearby hospitals with available resources to provide the required medical care. Many current healthcare facilities and ambulances are not equipped to provide real-time risk assessment for each patient and dynamically provide the required medical interventions. This work proposes an IoT-based mobile medical edge (IM2E) node to be integrated with wearable and portable devices for the continuous monitoring of emergency patients transported via ambulances, and it delves deeper into the existing challenges, such as (a) a lack of a simplified patient risk scoring system, (b) the need for an architecture that enables seamless communication for dynamically varying QoS requirements, and (c) the need for context-aware knowledge regarding the effect of end-to-end delay and the packet loss ratio (PLR) on the real-time monitoring of health risks in emergency patients. The proposed work builds a data path selection model to identify the most effective path through which to route the data packets. The signal-to-noise interference ratio and the fading in the path are chosen to analyze the suitable path for data transmission. Full article
(This article belongs to the Special Issue Novel 5G Deployment Experience and Performance Results)

15 pages, 2026 KiB  
Article
Optimizing Session-Aware Recommenders: A Deep Dive into GRU-Based Latent Interaction Integration
by Ming-Yen Lin, Ping-Chun Wu and Sue-Chen Hsueh
Future Internet 2024, 16(2), 51; https://doi.org/10.3390/fi16020051 - 1 Feb 2024
Viewed by 1615
Abstract
This study introduces session-aware recommendation models, leveraging GRU (Gated Recurrent Unit) and attention mechanisms for advanced latent interaction data integration. A primary advancement is enhancing latent context, a critical factor for boosting recommendation accuracy. We address the existing models’ rigidity by dynamically blending short-term (most recent) and long-term (historical) preferences, moving beyond static period definitions. Our approaches, pre-combination (LCII-Pre) and post-combination (LCII-Post), with fixed (Fix) and flexible learning (LP) weight configurations, are thoroughly evaluated. We conducted extensive experiments to assess our models’ performance on public datasets such as Amazon and MovieLens 1M. Notably, on the MovieLens 1M dataset, LCII-PreFix achieved a 1.85% and 2.54% higher Recall@20 than II-RNN and BERT4Rec+st+TSA, respectively. On the Steam dataset, LCII-PostLP outperformed these models by 18.66% and 5.5%. Furthermore, on the Amazon dataset, LCII showed a 2.59% and 1.89% improvement in Recall@20 over II-RNN and CAII. These results affirm the significant enhancement our models bring to session-aware recommendation systems, showcasing their potential for both academic and practical applications in the field. Full article
(This article belongs to the Special Issue Deep Learning in Recommender Systems)
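A minimal PyTorch sketch of the blending idea described in the abstract follows: a GRU encodes the session, and a learnable weight mixes the most recent hidden state (short-term preference) with an attention-pooled summary of the full history (long-term preference). Dimensions and the blending form are illustrative assumptions, not the LCII architecture.

```python
# Minimal sketch: GRU session encoding with a learnable blend of short-term
# (last hidden state) and long-term (attention-pooled) preferences.
import torch
import torch.nn as nn

class SessionEncoder(nn.Module):
    def __init__(self, n_items=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(n_items, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.attn = nn.Linear(dim, 1)
        self.blend = nn.Parameter(torch.tensor(0.5))   # learnable mixing weight
        self.out = nn.Linear(dim, n_items)

    def forward(self, item_ids):
        h, _ = self.gru(self.emb(item_ids))            # (B, T, dim)
        short_term = h[:, -1, :]                       # most recent state
        weights = torch.softmax(self.attn(h), dim=1)   # attention over history
        long_term = (weights * h).sum(dim=1)
        session = self.blend * short_term + (1 - self.blend) * long_term
        return self.out(session)                       # next-item scores

model = SessionEncoder()
scores = model(torch.randint(0, 1000, (2, 7)))         # two sessions of 7 clicks
print(scores.shape)                                    # torch.Size([2, 1000])
```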

14 pages, 3418 KiB  
Article
Enhancing Smart City Safety and Utilizing AI Expert Systems for Violence Detection
by Pradeep Kumar, Guo-Liang Shih, Bo-Lin Guo, Siva Kumar Nagi, Yibeltal Chanie Manie, Cheng-Kai Yao, Michael Augustine Arockiyadoss and Peng-Chun Peng
Future Internet 2024, 16(2), 50; https://doi.org/10.3390/fi16020050 - 31 Jan 2024
Cited by 1 | Viewed by 2574
Abstract
Violent attacks have been one of the hot issues in recent years. In the presence of closed-circuit televisions (CCTVs) in smart cities, there is an emerging challenge in apprehending criminals, leading to a need for innovative solutions. In this paper, we propose a model aimed at enhancing real-time emergency response capabilities and swiftly identifying criminals. This initiative aims to foster a safer environment and better manage criminal activity within smart cities. The proposed architecture combines an image-to-image stable diffusion model with violence detection and pose estimation approaches. The diffusion model generates synthetic data, while the object detection approach uses YOLO v7 to identify violent objects like baseball bats, knives, and pistols, complemented by MediaPipe for action detection. Further, a long short-term memory (LSTM) network classifies the action attacks involving violent objects. Subsequently, the entire proposed model is deployed onto an edge device for real-time data testing using a dash camera. Thus, this study can handle violent attacks and send alerts in emergencies. As a result, our proposed YOLO model achieves a mean average precision (MAP) of 89.5% for violent attack detection, and the LSTM classifier model achieves an accuracy of 88.33% for violent action classification. The results highlight the model’s enhanced capability to accurately detect violent objects, particularly in effectively identifying violence through the implemented artificial intelligence system. Full article
(This article belongs to the Special Issue Challenges in Real-Time Intelligent Systems)

19 pages, 3392 KiB  
Article
A New Dynamic Game-Based Pricing Model for Cloud Environment
by Hamid Saadatfar, Hamid Gholampour Ahangar and Javad Hassannataj Joloudari
Future Internet 2024, 16(2), 49; https://doi.org/10.3390/fi16020049 - 31 Jan 2024
Viewed by 1520
Abstract
Resource pricing in cloud computing has become one of the main challenges for cloud providers. The challenge is determining a fair and appropriate price to satisfy users and resource providers. To establish a justifiable price, it is imperative to take into account the circumstances and requirements of both the provider and the user. This research tries to provide a pricing mechanism for cloud computing based on game theory. The suggested approach considers three aspects: the likelihood of faults, the interplay among virtual machines, and the amount of energy used, in order to determine a justifiable price. In the game that is being proposed, the provider is responsible for determining the price of the virtual machine that can be made available to the user on each physical machine. The user, on the other hand, has the authority to choose between the virtual machines that are offered in order to run their application. The whole game is implemented as a function of the resource broker component. The proposed mechanism is simulated and evaluated using the CloudSim simulator. Its performance is compared with several previous recent mechanisms. The results indicate that the suggested mechanism has successfully identified a more rational price for both the user and the provider, consequently enhancing the overall profitability of the cloud system. Full article

28 pages, 8697 KiB  
Article
Efficient Privacy-Aware Forwarding for Enhanced Communication Privacy in Opportunistic Mobile Social Networks
by Azizah Assiri and Hassen Sallay
Future Internet 2024, 16(2), 48; https://doi.org/10.3390/fi16020048 - 31 Jan 2024
Viewed by 1519
Abstract
Opportunistic mobile social networks (OMSNs) have become increasingly popular in recent years due to the rise of social media and smartphones. However, message forwarding and sharing social information through intermediary nodes on OMSNs raises privacy concerns as personal data and activities become more exposed. Therefore, maintaining privacy without limiting efficient social interaction is a challenging task. This paper addresses this specific problem of safeguarding user privacy during message forwarding by integrating a privacy layer on the state-of-the-art OMSN routing decision models that empowers users to control their message dissemination. Mainly, we present three user-centric privacy-aware forwarding modes guiding the selection of the next hop in the forwarding path based on social metrics such as common friends and exchanged messages between OMSN nodes. More specifically, we define different social relationship strengths approximating real-world scenarios (familiar, weak tie, stranger) and trust thresholds to give users choices on trust levels for different social contexts and guide the routing decisions. We evaluate the privacy enhancement and network performance through extensive simulations using ONE simulator for several routing schemes (Epidemic, Prophet, and Spray and Wait) and different movement models (random way, bus, and working day). We demonstrate that our modes can enhance privacy by up to 45% in various network scenarios, as measured by the reduction in the likelihood of unintended message propagation, while keeping the message-delivery process effective and efficient. Full article
(This article belongs to the Special Issue Information and Future Internet Security, Trust and Privacy II)
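The mode-based next-hop selection described in the abstract can be sketched as below: a node only hands a message to encountered peers whose social tie (common friends, exchanged messages) clears the trust threshold of the chosen mode. Thresholds, the tie-strength formula, and the candidate data are invented for illustration.

```python
# Toy sketch of user-centric privacy-aware forwarding: only sufficiently
# trusted encounters are eligible as next hops. All numbers are illustrative.
from dataclasses import dataclass

MODE_THRESHOLDS = {"familiar": 0.6, "weak_tie": 0.3, "stranger": 0.0}

@dataclass
class Encounter:
    node_id: str
    common_friends: int
    exchanged_messages: int
    delivery_utility: float       # e.g., a PRoPHET-style delivery predictability

def tie_strength(e: Encounter) -> float:
    """Combine the two social metrics into a [0, 1] score (illustrative formula)."""
    return min(1.0, 0.1 * e.common_friends + 0.02 * e.exchanged_messages)

def pick_next_hop(encounters, mode="weak_tie"):
    """Among sufficiently trusted peers, pick the one most likely to deliver."""
    threshold = MODE_THRESHOLDS[mode]
    trusted = [e for e in encounters if tie_strength(e) >= threshold]
    return max(trusted, key=lambda e: e.delivery_utility, default=None)

peers = [Encounter("n1", 1, 3, 0.9), Encounter("n2", 5, 20, 0.6),
         Encounter("n3", 0, 0, 0.95)]
for mode in MODE_THRESHOLDS:
    chosen = pick_next_hop(peers, mode)
    print(mode, "->", chosen.node_id if chosen else "keep message")
# familiar -> n2, weak_tie -> n2, stranger -> n3
```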

44 pages, 38595 KiB  
Article
Enhancing Urban Resilience: Smart City Data Analyses, Forecasts, and Digital Twin Techniques at the Neighborhood Level
by Andreas F. Gkontzis, Sotiris Kotsiantis, Georgios Feretzakis and Vassilios S. Verykios
Future Internet 2024, 16(2), 47; https://doi.org/10.3390/fi16020047 - 30 Jan 2024
Cited by 8 | Viewed by 3583
Abstract
Smart cities, leveraging advanced data analytics, predictive models, and digital twin techniques, offer a transformative model for sustainable urban development. Predictive analytics is critical to proactive planning, enabling cities to adapt to evolving challenges. Concurrently, digital twin techniques provide a virtual replica of the urban environment, fostering real-time monitoring, simulation, and analysis of urban systems. This study underscores the significance of real-time monitoring, simulation, and analysis of urban systems to support test scenarios that identify bottlenecks and enhance smart city efficiency. This paper delves into the crucial roles of citizen report analytics, prediction, and digital twin technologies at the neighborhood level. The study integrates extract, transform, load (ETL) processes, artificial intelligence (AI) techniques, and a digital twin methodology to process and interpret urban data streams derived from citizen interactions with the city’s coordinate-based problem mapping platform. Using an interactive GeoDataFrame within the digital twin methodology, dynamic entities facilitate simulations based on various scenarios, allowing users to visualize, analyze, and predict the response of the urban system at the neighborhood level. This approach reveals antecedent and predictive patterns, trends, and correlations at the physical level of each city area, leading to improvements in urban functionality, resilience, and resident quality of life. Full article

18 pages, 2156 KiB  
Article
Context-Aware Behavioral Tips to Improve Sleep Quality via Machine Learning and Large Language Models
by Erica Corda, Silvia M. Massa and Daniele Riboni
Future Internet 2024, 16(2), 46; https://doi.org/10.3390/fi16020046 - 30 Jan 2024
Cited by 2 | Viewed by 1967
Abstract
As several studies demonstrate, good sleep quality is essential for individuals’ well-being, as a lack of restoring sleep may disrupt different physical, mental, and social dimensions of health. For this reason, there is increasing interest in tools for the monitoring of sleep based on personal sensors. However, there are currently few context-aware methods to help individuals to improve their sleep quality through behavior change tips. In order to tackle this challenge, in this paper, we propose a system that couples machine learning algorithms and large language models to forecast the next night’s sleep quality, and to provide context-aware behavior change tips to improve sleep. In order to encourage adherence and to increase trust, our system includes the use of large language models to describe the conditions that the machine learning algorithm finds harmful to sleep health, and to explain why the behavior change tips are generated as a consequence. We develop a prototype of our system, including a smartphone application, and perform experiments with a set of users. Results show that our system’s forecast is correlated to the actual sleep quality. Moreover, a preliminary user study suggests that the use of large language models in our system is useful in increasing trust and engagement. Full article

23 pages, 2549 KiB  
Article
Non-Profiled Unsupervised Horizontal Iterative Attack against Hardware Elliptic Curve Scalar Multiplication Using Machine Learning
by Marcin Aftowicz, Ievgen Kabin, Zoya Dyka and Peter Langendörfer
Future Internet 2024, 16(2), 45; https://doi.org/10.3390/fi16020045 - 29 Jan 2024
Cited by 2 | Viewed by 1440
Abstract
While IoT technology makes industries, cities, and homes smarter, it also opens the door to security risks. With the right equipment and physical access to the devices, the attacker can leverage side-channel information, like timing, power consumption, or electromagnetic emanation, to compromise cryptographic operations and extract the secret key. This work presents a side channel analysis of a cryptographic hardware accelerator for the Elliptic Curve Scalar Multiplication operation, implemented in a Field-Programmable Gate Array and as an Application-Specific Integrated Circuit. The presented framework consists of initial key extraction using a state-of-the-art statistical horizontal attack and is followed by regularized Artificial Neural Networks, which take, as input, the partially incorrect key guesses from the horizontal attack and correct them iteratively. The initial correctness of the horizontal attack, measured as the fraction of correctly extracted bits of the secret key, was improved from 75% to 98% by applying the iterative learning. Full article
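A toy version of the iterative correction loop described in the abstract is sketched below: starting from partially correct key-bit guesses (standing in for the horizontal attack’s output), a regularized classifier is repeatedly trained on those guesses as noisy labels and its predictions replace them. The synthetic leakage data and network size are invented; scikit-learn’s MLPClassifier stands in for the paper’s regularized ANNs.

```python
# Illustrative sketch of iterative key-bit correction from noisy initial guesses.
# Synthetic "leakage" features only; not the paper's attack framework.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
true_key = rng.integers(0, 2, size=256)                      # 256 secret key bits
leakage = true_key[:, None] + rng.normal(0, 0.8, (256, 16))  # noisy per-bit features

guesses = true_key.copy()
flip = rng.random(256) < 0.25                                # ~75% initially correct
guesses[flip] ^= 1

for it in range(5):
    clf = MLPClassifier(hidden_layer_sizes=(32,), alpha=1.0,  # L2 regularization
                        max_iter=500, random_state=0)
    clf.fit(leakage, guesses)                                # noisy labels as targets
    guesses = clf.predict(leakage)
    acc = (guesses == true_key).mean()
    print(f"iteration {it}: {acc:.1%} of key bits correct")
```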

14 pages, 2162 KiB  
Article
Computer Vision and Machine Learning-Based Predictive Analysis for Urban Agricultural Systems
by Arturs Kempelis, Inese Polaka, Andrejs Romanovs and Antons Patlins
Future Internet 2024, 16(2), 44; https://doi.org/10.3390/fi16020044 - 28 Jan 2024
Cited by 3 | Viewed by 2111
Abstract
Urban agriculture presents unique challenges, particularly in the context of microclimate monitoring, which is increasingly important in food production. This paper explores the application of convolutional neural networks (CNNs) to forecast key sensor measurements from thermal images within this context. This research focuses on using thermal images to forecast sensor measurements of relative air humidity, soil moisture, and light intensity, which are integral to plant health and productivity in urban farming environments. The results indicate a higher accuracy in forecasting relative air humidity and soil moisture levels, with Mean Absolute Percentage Errors (MAPEs) within the range of 10–12%. These findings correlate with the strong dependency of these parameters on thermal patterns, which are effectively extracted by the CNNs. In contrast, the forecasting of light intensity proved to be more challenging, yielding lower accuracy. The reduced performance is likely due to the more complex and variable factors that affect light in urban environments. The insights gained from the higher predictive accuracy for relative air humidity and soil moisture may inform targeted interventions for urban farming practices, while the lower accuracy in light intensity forecasting highlights the need for further research into the integration of additional data sources or hybrid modeling approaches. The conclusion suggests that the integration of these technologies can significantly enhance the predictive maintenance of plant health, leading to more sustainable and efficient urban farming practices. However, the study also acknowledges the challenges in implementing these technologies in urban agricultural models. Full article

17 pages, 851 KiB  
Article
A Spectral Gap-Based Topology Control Algorithm for Wireless Backhaul Networks
by Sergio Jesús González-Ambriz, Rolando Menchaca-Méndez, Sergio Alejandro Pinacho-Castellanos and Mario Eduardo Rivero-Ángeles 
Future Internet 2024, 16(2), 43; https://doi.org/10.3390/fi16020043 - 26 Jan 2024
Viewed by 2063
Abstract
This paper presents the spectral gap-based topology control algorithm (SGTC) for wireless backhaul networks, a novel approach that employs the Laplacian Spectral Gap (LSG) to find expander-like graphs that optimize the topology of the network in terms of robustness, diameter, energy cost, and network entropy. The latter measures the network’s ability to promote seamless traffic offloading from the Macro Base Stations to smaller cells by providing a high diversity of shortest paths connecting all the stations. Given the practical constraints imposed by cellular technologies, the proposed algorithm uses simulated annealing to search for feasible network topologies with a large LSG. Then, it computes the Pareto front of the set of feasible solutions found during the annealing process when considering robustness, diameter, and entropy as objective functions. The algorithm’s result is the Pareto efficient solution that minimizes energy cost. A set of experimental results shows that by optimizing the LSG, the proposed algorithm simultaneously optimizes the set of desirable topological properties mentioned above. The results also revealed that generating networks with good spectral expansion is possible even under the restrictions imposed by current wireless technologies. This is a desirable feature because these networks have strong connectivity properties even if they do not have a large number of links. Full article
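The quantity the algorithm optimizes, the Laplacian Spectral Gap (the second-smallest eigenvalue of L = D − A, i.e., the algebraic connectivity), can be computed directly; the small sketch below contrasts a ring topology with a chorded one. The annealing search and Pareto filtering from the paper are not reproduced.

```python
# Compute the Laplacian spectral gap (algebraic connectivity) of a topology.
import numpy as np

def laplacian_spectral_gap(n_nodes, edges):
    """Return the second-smallest eigenvalue of the graph Laplacian L = D - A."""
    A = np.zeros((n_nodes, n_nodes))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    eigenvalues = np.sort(np.linalg.eigvalsh(L))
    return eigenvalues[1]            # eigenvalues[0] is ~0 for a connected graph

ring = [(i, (i + 1) % 6) for i in range(6)]                 # 6-node ring
ring_plus_chords = ring + [(0, 3), (1, 4), (2, 5)]          # add three chords

print("ring LSG:         ", round(laplacian_spectral_gap(6, ring), 3))
print("ring + chords LSG:", round(laplacian_spectral_gap(6, ring_plus_chords), 3))
# The chorded topology has a larger gap, i.e., better expander-like connectivity.
```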

29 pages, 743 KiB  
Article
TinyML Algorithms for Big Data Management in Large-Scale IoT Systems
by Aristeidis Karras, Anastasios Giannaros, Christos Karras, Leonidas Theodorakopoulos, Constantinos S. Mammassis, George A. Krimpas and Spyros Sioutas
Future Internet 2024, 16(2), 42; https://doi.org/10.3390/fi16020042 - 25 Jan 2024
Cited by 5 | Viewed by 2869
Abstract
In the context of the Internet of Things (IoT), Tiny Machine Learning (TinyML) and Big Data, enhanced by Edge Artificial Intelligence, are essential for effectively managing the extensive data produced by numerous connected devices. Our study introduces a set of TinyML algorithms designed and developed to improve Big Data management in large-scale IoT systems. These algorithms, named TinyCleanEDF, EdgeClusterML, CompressEdgeML, CacheEdgeML, and TinyHybridSenseQ, operate together to enhance data processing, storage, and quality control in IoT networks, utilizing the capabilities of Edge AI. In particular, TinyCleanEDF applies federated learning for Edge-based data cleaning and anomaly detection. EdgeClusterML combines reinforcement learning with self-organizing maps for effective data clustering. CompressEdgeML uses neural networks for adaptive data compression. CacheEdgeML employs predictive analytics for smart data caching, and TinyHybridSenseQ concentrates on data quality evaluation and hybrid storage strategies. Our experimental evaluation of the proposed techniques includes executing all the algorithms on varying numbers of Raspberry Pi devices, ranging from one to ten. The experimental results are promising, as we outperform similar methods across various evaluation metrics. Ultimately, we anticipate that the proposed algorithms offer a comprehensive and efficient approach to managing the complexities of IoT, Big Data, and Edge AI. Full article
(This article belongs to the Special Issue Internet of Things and Cyber-Physical Systems II)

31 pages, 5283 KiB  
Article
Beyond Lexical Boundaries: LLM-Generated Text Detection for Romanian Digital Libraries
by Melania Nitu and Mihai Dascalu
Future Internet 2024, 16(2), 41; https://doi.org/10.3390/fi16020041 - 25 Jan 2024
Cited by 2 | Viewed by 2278
Abstract
Machine-generated content reshapes the landscape of digital information; hence, ensuring the authenticity of texts within digital libraries has become a paramount concern. This work introduces a corpus of approximately 60 k Romanian documents, including human-written samples as well as generated texts using six distinct Large Language Models (LLMs) and three different generation methods. Our robust experimental dataset covers five domains, namely books, news, legal, medical, and scientific publications. The exploratory text analysis revealed differences between human-authored and artificially generated texts, exposing the intricacies of lexical diversity and textual complexity. Since Romanian is a less-resourced language requiring dedicated detectors on which out-of-the-box solutions do not work, this paper introduces two techniques for discerning machine-generated texts. The first method leverages a Transformer-based model to categorize texts as human- or machine-generated, while the second method extracts and examines linguistic features, such as the top textual complexity indices identified via Kruskal–Wallis mean rank and burstiness, which are then fed into a machine-learning model leveraging an extreme gradient-boosting decision tree. The methods show competitive performance, with the first technique’s results outperforming the second one in two out of five domains, reaching an F1 score of 0.96. Our study also includes a text similarity analysis between human-authored and artificially generated texts, coupled with a SHAP analysis to understand which linguistic features contribute more to the classifier’s decision. Full article
(This article belongs to the Special Issue Digital Analysis in Digital Humanities)
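The second, feature-based route can be sketched roughly as follows: compute a few linguistic features per text, including a burstiness measure over sentence lengths, and train a gradient-boosted tree on them. The two-text corpus, the feature set, and the use of scikit-learn’s GradientBoostingClassifier as a stand-in for extreme gradient boosting are all illustrative assumptions.

```python
# Sketch of feature-based detection: simple linguistic features fed into a
# gradient-boosted tree (stand-in for extreme gradient boosting). Toy data.
import re
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def features(text: str) -> list[float]:
    words = text.lower().split()
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = np.array([len(s.split()) for s in sentences], dtype=float)
    type_token_ratio = len(set(words)) / len(words)
    burstiness = lengths.std() / lengths.mean()     # variation in sentence length
    return [type_token_ratio, lengths.mean(), burstiness]

texts = [
    ("Short one. Then a much longer, winding sentence that drifts. Odd ending!", 0),   # human
    ("This sentence is balanced. This sentence is also balanced. So is this one.", 1), # generated
]
X = np.array([features(t) for t, _ in texts] * 10)   # duplicate the toy rows
y = np.array([label for _, label in texts] * 10)

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
print(clf.predict([features("Uniform phrasing again. Uniform phrasing again. And again.")]))
# likely [1]: the uniform, low-burstiness text looks machine-generated to the model
```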

57 pages, 2070 KiB  
Review
A Holistic Analysis of Internet of Things (IoT) Security: Principles, Practices, and New Perspectives
by Mahmud Hossain, Golam Kayas, Ragib Hasan, Anthony Skjellum, Shahid Noor and S. M. Riazul Islam
Future Internet 2024, 16(2), 40; https://doi.org/10.3390/fi16020040 - 24 Jan 2024
Cited by 6 | Viewed by 7307
Abstract
Driven by the rapid escalation of its utilization, as well as ramping commercialization, Internet of Things (IoT) devices increasingly face security threats. Apart from denial of service, privacy, and safety concerns, compromised devices can be used as enablers for committing a variety of crimes and e-crimes. Despite ongoing research and study, there remains a significant gap in the thorough analysis of security challenges, feasible solutions, and open security problems for IoT. To bridge this gap, we provide a comprehensive overview of the state of the art in IoT security with a critical investigation-based approach. This includes a detailed analysis of vulnerabilities in IoT-based systems and potential attacks. We present a holistic review of the security properties required to be adopted by IoT devices, applications, and services to mitigate IoT vulnerabilities and, thus, successful attacks. Moreover, we identify challenges to the design of security protocols for IoT systems in which constituent devices vary markedly in capability (such as storage, computation speed, hardware architecture, and communication interfaces). Next, we review existing research and feasible solutions for IoT security. We highlight a set of open problems not yet addressed among existing security solutions. We provide a set of new perspectives for future research on such issues, including secure service discovery, on-device credential security, and network anomaly detection. We also provide directions for designing a forensic investigation framework for IoT infrastructures to inspect relevant criminal cases, execute a cyber forensic process, and determine the facts about a given incident. This framework offers a means to better capture information on successful attacks as part of a feedback mechanism to thwart future vulnerabilities and threats. This systematic holistic review will both inform on current challenges in IoT security and ideally motivate their future resolution. Full article
(This article belongs to the Special Issue Cyber Security in the New "Edge Computing + IoT" World)

23 pages, 958 KiB  
Systematic Review
Volumetric Techniques for Product Routing and Loading Optimisation in Industry 4.0: A Review
by Ricardo Lopes, Marcello Trovati and Ella Pereira
Future Internet 2024, 16(2), 39; https://doi.org/10.3390/fi16020039 - 24 Jan 2024
Viewed by 1656
Abstract
Industry 4.0 has become a crucial part of the majority of processes, components, and related modelling, as well as predictive tools that allow a more efficient, automated and sustainable approach to industry. The availability of large quantities of data, and the advances in IoT, AI, and data-driven frameworks, have led to enhanced data gathering, assessment, and extraction of actionable information, resulting in a better decision-making process. Product picking and its subsequent packing is an important area, and has drawn increasing attention from the research community. However, depending on the context, some of the related approaches tend to be either highly mathematical or applied to a specific context. This article aims to provide a survey on the main methods, techniques, and frameworks relevant to product packing and to highlight the main properties and features that should be further investigated to ensure a more efficient and optimised approach. Full article

14 pages, 1090 KiB  
Article
Refined Semi-Supervised Modulation Classification: Integrating Consistency Regularization and Pseudo-Labeling Techniques
by Min Ma, Shanrong Liu, Shufei Wang and Shengnan Shi
Future Internet 2024, 16(2), 38; https://doi.org/10.3390/fi16020038 - 23 Jan 2024
Viewed by 1948
Abstract
Automatic modulation classification (AMC) plays a crucial role in wireless communication by identifying the modulation scheme of received signals, bridging signal reception and demodulation. Its main challenge lies in performing accurate signal processing without prior information. While deep learning has been applied to AMC, its effectiveness largely depends on the availability of labeled samples. To address the scarcity of labeled data, we introduce a novel semi-supervised AMC approach combining consistency regularization and pseudo-labeling. This method capitalizes on the inherent data distribution of unlabeled data to supplement the limited labeled data. Our approach involves a dual-component objective function for model training: one part focuses on the loss from labeled data, while the other addresses the regularized loss for unlabeled data, enhanced through two distinct levels of data augmentation. These combined losses concurrently refine the model parameters. Our method demonstrates superior performance over established benchmark algorithms, such as decision trees (DTs), support vector machines (SVMs), pi-models, and virtual adversarial training (VAT). It exhibits a marked improvement in the recognition accuracy, particularly when the proportion of labeled samples is as low as 1–4%. Full article
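A minimal PyTorch sketch of the dual-component objective described in the abstract follows: a supervised cross-entropy term on the labeled batch plus a pseudo-label/consistency term on the unlabeled batch seen under weak and strong augmentation. The augmentations, confidence threshold, and loss weight are illustrative assumptions rather than the paper’s exact training recipe.

```python
# Sketch of a dual-component semi-supervised loss: supervised cross-entropy on
# labeled signals plus confidence-masked pseudo-label consistency on unlabeled
# signals under two augmentation strengths. All settings are illustrative.
import torch
import torch.nn.functional as F

def weak_augment(x):   return x + 0.01 * torch.randn_like(x)       # light noise
def strong_augment(x): return torch.roll(x, shifts=5, dims=-1) + 0.05 * torch.randn_like(x)

def semi_supervised_loss(model, x_lab, y_lab, x_unlab, threshold=0.95, lam=1.0):
    # Part 1: standard supervised loss on the labeled batch.
    sup = F.cross_entropy(model(x_lab), y_lab)

    # Part 2: pseudo-labels from weak augmentation supervise strong augmentation,
    # kept only for confident predictions (consistency regularization).
    with torch.no_grad():
        probs = torch.softmax(model(weak_augment(x_unlab)), dim=-1)
        conf, pseudo = probs.max(dim=-1)
        mask = (conf >= threshold).float()
    unsup = (F.cross_entropy(model(strong_augment(x_unlab)), pseudo,
                             reduction="none") * mask).mean()
    return sup + lam * unsup

# Usage with a toy classifier over 128-sample IQ segments and 4 modulations:
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(2 * 128, 4))
x_lab, y_lab = torch.randn(8, 2, 128), torch.randint(0, 4, (8,))
x_unlab = torch.randn(32, 2, 128)
print(semi_supervised_loss(model, x_lab, y_lab, x_unlab))
```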

15 pages, 481 KiB  
Article
DDPG-MPCC: An Experience Driven Multipath Performance Oriented Congestion Control
by Shiva Raj Pokhrel, Jonathan Kua, Deol Satish, Sebnem Ozer, Jeff Howe and Anwar Walid
Future Internet 2024, 16(2), 37; https://doi.org/10.3390/fi16020037 - 23 Jan 2024
Cited by 3 | Viewed by 1927
Abstract
We introduce a novel multipath data transport approach at the transport layer referred to as ‘Deep Deterministic Policy Gradient for Multipath Performance-oriented Congestion Control’ (DDPG-MPCC), which leverages deep reinforcement learning to enhance congestion management in multipath networks. Our method combines DDPG with online convex optimization to optimize fairness and performance in simultaneously challenging multipath internet congestion control scenarios. Through experiments with a kernel implementation we developed, we show how DDPG-MPCC performs compared to state-of-the-art solutions. Full article
(This article belongs to the Section Internet of Things)
