Future Internet, Volume 16, Issue 6 (June 2024) – 30 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
16 pages, 1641 KiB  
Article
Enabling End-User Development in Smart Homes: A Machine Learning-Powered Digital Twin for Energy Efficient Management
by Luca Cotti, Davide Guizzardi, Barbara Rita Barricelli and Daniela Fogli
Future Internet 2024, 16(6), 208; https://doi.org/10.3390/fi16060208 - 14 Jun 2024
Abstract
End-User Development has been proposed over the years to allow end users to control and manage their Internet of Things-based environments, such as smart homes. With End-User Development, end users are able to create trigger-action rules or routines to tailor the behavior of their smart homes. However, the scientific research proposed to date does not encompass methods that evaluate the suitability of user-created routines in terms of energy consumption. This paper proposes using Machine Learning to build a Digital Twin of a smart home that can predict the energy consumption of smart appliances. The Digital Twin will allow end users to simulate possible scenarios related to the creation of routines. Simulations will be used to assess the effects of activating the appliances involved in the routines under creation and, following the Digital Twin's suggestions, possibly modify them to reduce energy consumption. Full article
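As a rough sketch of the kind of appliance-level predictor such a Digital Twin could wrap, the snippet below trains a toy regressor and queries it before "committing" a routine. The features, data, and model choice are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a Digital Twin energy predictor for routine simulation.
# Features, data, and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Toy training data: [hour_of_day, outdoor_temp_C, appliance_on (0/1)]
X = rng.uniform([0, -5, 0], [24, 35, 1], size=(500, 3))
X[:, 2] = (X[:, 2] > 0.5).astype(float)
# Toy target: watt-hours drawn by the appliance in the next interval.
y = 120 * X[:, 2] + 3 * np.abs(X[:, 1] - 21) + rng.normal(0, 5, 500)

twin = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def simulate_routine(hour, temp, appliance_on):
    """Estimate consumption of a candidate trigger-action routine."""
    return float(twin.predict([[hour, temp, appliance_on]])[0])

# If the predicted draw is high, the twin can suggest editing the routine.
print(f"{simulate_routine(18, 30, 1):.1f} Wh expected if the routine fires")
```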
18 pages, 1182 KiB  
Article
Towards a New Business Model for Streaming Platforms Using Blockchain Technology
by Rendrikson Soares and André Araújo
Future Internet 2024, 16(6), 207; https://doi.org/10.3390/fi16060207 - 13 Jun 2024
Viewed by 182
Abstract
Streaming platforms have revolutionized the digital entertainment industry, but challenges and research opportunities remain to be addressed. One current concern is the lack of transparency in the business model of video streaming platforms, which makes it difficult for content creators to access viewing metrics and receive payments without the intermediary of third parties. Additionally, there is no way to trace payment transactions. This article presents a computational architecture based on blockchain technology to enable transparency in audience management and payments in video streaming platforms. Smart contracts will define the business rules of the streaming services, while middleware will integrate the metadata of the streaming platforms with the proposed computational solution. The proposed solution has been validated through data transactions on different blockchain networks and interviews with content creators from video streaming platforms. The results confirm the viability of the proposed solution in enhancing transparency and auditability in the realm of audience control services and payments on video streaming platforms. Full article
26 pages, 953 KiB  
Article
Visual Data and Pattern Analysis for Smart Education: A Robust DRL-Based Early Warning System for Student Performance Prediction
by Wala Bagunaid, Naveen Chilamkurti, Ahmad Salehi Shahraki and Saeed Bamashmos
Future Internet 2024, 16(6), 206; https://doi.org/10.3390/fi16060206 - 11 Jun 2024
Viewed by 224
Abstract
Artificial Intelligence (AI) and Deep Reinforcement Learning (DRL) have revolutionised e-learning by creating personalised, adaptive, and secure environments. However, challenges such as privacy, bias, and data limitations persist. E-FedCloud aims to address these issues by providing more agile, personalised, and secure e-learning experiences. This study introduces E-FedCloud, an AI-assisted, adaptive e-learning system that automates personalised recommendations and tracking, thereby enhancing student performance. It employs federated learning-based authentication to ensure secure and private access for both course instructors and students. Intelligent Software Agents (ISAs) evaluate weekly student engagement using the Shannon Entropy method, classifying students into either engaged or not-engaged clusters. E-FedCloud utilises weekly engagement status, demographic information, and an innovative DRL-based early warning system, specifically ID2QN, to predict the performance of not-engaged students. Based on these predictions, the system categorises students into three groups: risk of dropping out, risk of scoring lower in the final exam, and risk of failing the end exam. It employs a multi-disciplinary ontology graph and an attention-based capsule network for automated, personalised recommendations. The system also integrates performance tracking to enhance student engagement. Data are securely stored on a blockchain using the LWEA encryption method. Full article
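For readers unfamiliar with the entropy step, here is a minimal sketch of classifying weekly engagement by the Shannon entropy of a student's activity distribution; the activity categories and the threshold are assumptions, not the paper's exact procedure.

```python
# Sketch: weekly engagement via Shannon entropy over activity types.
# Categories and threshold are illustrative assumptions.
import numpy as np

def shannon_entropy(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()              # drop empty categories, normalize
    return float(-(p * np.log2(p)).sum())

week = [12, 5, 3, 8]                    # videos, quizzes, forum, readings
H = shannon_entropy(week)               # high entropy = broad engagement
print(f"H = {H:.2f} bits ->", "engaged" if H >= 1.5 else "not engaged")
```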
30 pages, 2109 KiB  
Article
Enhancing Efficiency and Security in Unbalanced PSI-CA Protocols through Cloud Computing and Homomorphic Encryption in Mobile Networks
by Wuzheng Tan, Shenglong Du and Jian Weng
Future Internet 2024, 16(6), 205; https://doi.org/10.3390/fi16060205 - 7 Jun 2024
Viewed by 281
Abstract
Private Set Intersection Cardinality (PSI-CA) is a cryptographic method in secure multi-party computation that allows entities to identify the cardinality of the intersection without revealing their private data. Traditional approaches assume similar-sized datasets and equal computational power, overlooking practical imbalances. In real-world applications, dataset sizes and computational capacities often vary, particularly in Internet of Things and mobile scenarios where device limitations restrict computational types. Traditional PSI-CA protocols are inefficient here, as computational and communication complexities correlate with the size of the larger dataset. Thus, adapting PSI-CA protocols to these imbalances is crucial. This paper explores unbalanced scenarios where one party (the receiver) has a relatively small dataset and limited computational power, while the other party (the sender) has a large amount of data and strong computational capabilities. Based on the concept of commutative encryption, this paper introduces Cuckoo filters, cloud computing, and homomorphic encryption, among other technologies, to construct three novel solutions for unbalanced Private Set Intersection Cardinality (PSI-CA): an unbalanced PSI-CA protocol based on Cuckoo filters, an unbalanced PSI-CA protocol based on single-cloud assistance, and an unbalanced PSI-CA protocol based on dual-cloud assistance. Depending on performance and security requirements, different protocols can be employed for various applications. Full article
(This article belongs to the Section Cybersecurity)
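As background for the commutative-encryption building block the abstract starts from, here is a toy Python sketch of DH-style PSI-CA: because (v^a)^b = (v^b)^a mod p, double-encrypted hashes match exactly when the underlying items match, revealing only the cardinality. The prime, keys, and single-process setup are toy assumptions; a real protocol exchanges only encrypted sets between the parties.

```python
# Toy PSI-CA from commutative encryption (DH-style exponentiation).
# The modulus and keys are illustrative; not a production protocol.
import hashlib

P = 2**127 - 1                            # small Mersenne prime (toy)

def h(item):                              # hash an item into the group
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

def enc(vals, key):                       # commutative layer: v^key mod P
    return {pow(v, key, P) for v in vals}

a_key, b_key = 0x1234567, 0x7654321       # parties' secret exponents (toy)
A = {"alice@x.com", "bob@x.com", "eve@x.com"}                   # receiver
B = {"bob@x.com", "eve@x.com", "mallory@x.com", "zoe@x.com"}    # sender

A_ab = enc(enc({h(x) for x in A}, a_key), b_key)   # A's set, both layers
B_ba = enc(enc({h(x) for x in B}, b_key), a_key)   # B's set, both layers

# v^(ab) == v^(ba), so matches reveal the cardinality but not the items.
print("intersection cardinality:", len(A_ab & B_ba))   # -> 2
```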
23 pages, 575 KiB  
Article
Usability Evaluation of Wearable Smartwatches Using Customized Heuristics and System Usability Scale Score
by Majed A. Alshamari and Maha M. Althobaiti
Future Internet 2024, 16(6), 204; https://doi.org/10.3390/fi16060204 - 6 Jun 2024
Viewed by 229
Abstract
The mobile and wearable nature of smartwatches poses challenges in evaluating their usability. This paper presents a study employing customized heuristic evaluation and the system usability scale (SUS) on four smartwatches, along with their mobile applications. A total of 11 heuristics were developed and validated by experts by combining Nielsen's heuristics with Motti and Caine's heuristics. In this study, 20 participants used the watches and participated in the SUS survey. A total of 307 usability issues were reported by the evaluators. The results of this study show that the Galaxy Watch 5 scored highest in terms of efficiency, ease of use, features, and battery life compared to the other three smartwatches and has fewer usability issues. The results indicate that ease of use, features, and flexibility are important usability attributes for future smartwatches. The Galaxy Watch 5 received the highest SUS score of 87.375. Both evaluation methods showed no significant differences in results, and customized heuristics were found to be useful for smartwatch evaluation. Full article
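The SUS score reported (87.375) follows Brooke's standard scoring, which is easy to reproduce: odd items contribute (response - 1), even items contribute (5 - response), and the sum is scaled by 2.5 onto a 0-100 range. A minimal sketch with made-up responses:

```python
# Standard Brooke SUS scoring; the example responses are made up.
def sus_score(responses):
    """responses: the ten 1-5 Likert answers, item 1 first."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5                   # scale the 0-40 raw sum onto 0-100

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))   # -> 90.0
```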
18 pages, 2716 KiB  
Article
Evaluation of Radio Access Protocols for V2X in 6G Scenario-Based Models
by Héctor Orrillo, André Sabino and Mário Marques da Silva
Future Internet 2024, 16(6), 203; https://doi.org/10.3390/fi16060203 - 6 Jun 2024
Viewed by 431
Abstract
The expansion of mobile connectivity with the arrival of 6G paves the way for the new Internet of Verticals (6G-IoV), benefiting autonomous driving. This article highlights the importance of vehicle-to-everything (V2X) and vehicle-to-vehicle (V2V) communication in improving road safety. Current technologies such as IEEE 802.11p and LTE-V2X are being improved, while new radio access technologies promise more reliable, lower-latency communications. Moreover, 3GPP is developing NR-V2X to improve the performance of communications between vehicles, while IEEE proposes the 802.11bd protocol, aiming for greater interoperability and improved detection of transmissions between vehicles. Both new protocols are being developed and improved to make autonomous driving more efficient. This study analyzes and compares the performance of the protocols mentioned, namely 802.11p, 802.11bd, LTE-V2X, and NR-V2X. The contribution of this study is to identify the most suitable protocol that meets the requirements of V2V communications in autonomous driving. The relevance of V2V communication has driven intense research in the scientific community. Among the various applications of V2V communication are Cooperative Awareness, V2V Unicast Exchange, and V2V Decentralized Environmental Notification, among others. To this end, the performance of the Link Layer of these protocols is evaluated and compared. Based on the analysis of the results, it can be concluded that NR-V2X outperforms IEEE 802.11bd in terms of transmission latency (L) and data rate (DR). In terms of the packet error rate (PER), it is shown that both LTE-V2X and NR-V2X exhibit a lower PER compared to the IEEE protocols, especially as the distance between the vehicles increases. This advantage becomes even more significant in scenarios with greater congestion and network interference. Full article
29 pages, 2761 KiB  
Article
Metric Space Indices for Dynamic Optimization in a Peer to Peer-Based Image Classification Crowdsourcing Platform
by Fernando Loor, Veronica Gil-Costa and Mauricio Marin
Future Internet 2024, 16(6), 202; https://doi.org/10.3390/fi16060202 - 6 Jun 2024
Viewed by 230
Abstract
Large-scale computer platforms that process users’ online requests must be capable of handling unexpected spikes in arrival rates. These platforms, which are composed of distributed components, can be configured with parameters to ensure both the quality of the results obtained for each request and low response times. In this work, we propose a dynamic optimization engine based on metric space indexing to address this problem. The engine is integrated into the platform and periodically monitors performance metrics to determine whether new configuration parameter values need to be computed. Our case study focuses on a P2P platform designed for classifying crowdsourced images related to natural disasters. We evaluate our approach under scenarios with high and low workloads, comparing it against alternative methods based on deep reinforcement learning. The results show that our approach reduces processing time by an average of 40%. Full article
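To make the mechanism concrete: the engine can be thought of as indexing previously seen performance-metric vectors and reusing the configuration attached to the nearest one. The brute-force lookup below stands in for a real metric-space index, and all values are illustrative assumptions.

```python
# Nearest-state configuration lookup; a brute-force stand-in for a
# metric-space index. States, metrics, and configs are illustrative.
import numpy as np

# Known states: [request arrival rate, mean latency in ms] -> best config.
states = np.array([[100.0, 40.0], [400.0, 85.0], [900.0, 160.0]])
configs = [{"replicas": 2}, {"replicas": 4}, {"replicas": 8}]

def tune(current_metrics):
    d = np.linalg.norm(states - np.asarray(current_metrics), axis=1)
    return configs[int(d.argmin())]      # reuse config of the nearest state

print(tune([820.0, 150.0]))              # -> {'replicas': 8}
```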
32 pages, 1109 KiB  
Article
Impact, Compliance, and Countermeasures in Relation to Data Breaches in Publicly Traded U.S. Companies
by Gabriel Arquelau Pimenta Rodrigues, André Luiz Marques Serrano, Guilherme Fay Vergara, Robson de Oliveira Albuquerque and Georges Daniel Amvame Nze
Future Internet 2024, 16(6), 201; https://doi.org/10.3390/fi16060201 - 5 Jun 2024
Viewed by 394
Abstract
A data breach is the unauthorized disclosure of sensitive personal data, and it impacts millions of individuals annually in the United States, as reported by Privacy Rights Clearinghouse. These breaches jeopardize the physical safety of the individuals whose data are exposed and result in substantial economic losses for the affected companies. To diminish the frequency and severity of data breaches in the future, it is imperative to research their causes and explore preventive measures. In pursuit of this goal, this study considers a dataset of data breach incidents affecting companies listed on the New York Stock Exchange and NASDAQ. This dataset has been augmented with additional information regarding the targeted company. This paper employs statistical visualizations of the data to clarify these incidents and assess their consequences on the affected companies and individuals whose data were compromised. We then propose mitigation controls based on established frameworks such as the NIST Cybersecurity Framework. Additionally, this paper reviews the compliance scenario by examining the relevant laws and regulations applicable to each case, including SOX, HIPAA, GLBA, and PCI-DSS, and evaluates the impacts of data breaches on stock market prices. We also review guidelines for appropriately responding to data leaks in the U.S., in order to achieve compliance and reduce costs. By conducting this analysis, this work aims to contribute to a comprehensive understanding of data breaches and empower organizations to safeguard against them proactively, improving the technical quality of their basic services. To our knowledge, this is the first paper to address compliance with data protection regulations, security controls as countermeasures, financial impacts on stock prices, and incident response strategies. Although the discussion is focused on publicly traded companies in the United States, it may also apply to public and private companies worldwide. Full article
(This article belongs to the Collection Information Systems Security)
22 pages, 2903 KiB  
Article
Implementation of Lightweight Machine Learning-Based Intrusion Detection System on IoT Devices of Smart Homes
by Abbas Javed, Amna Ehtsham, Muhammad Jawad, Muhammad Naeem Awais, Ayyaz-ul-Haq Qureshi and Hadi Larijani
Future Internet 2024, 16(6), 200; https://doi.org/10.3390/fi16060200 - 5 Jun 2024
Viewed by 470
Abstract
Smart home devices, also known as IoT devices, provide significant convenience; however, they also present opportunities for attackers to jeopardize homeowners’ security and privacy. Securing these IoT devices is a formidable challenge because of their limited computational resources. Machine learning-based intrusion detection systems (IDSs) have been implemented on the edge and the cloud; however, IDSs have not been embedded in IoT devices. To address this, we propose a novel machine learning-based two-layered IDS for smart home IoT devices, enhancing accuracy and computational efficiency. The first layer of the proposed IDS is deployed on a microcontroller-based smart thermostat, which uploads the data to a website hosted on a cloud server. The second layer of the IDS is deployed on the cloud side for classification of attacks. The proposed IDS can detect the threats with an accuracy of 99.50% at cloud level (multiclassification). For real-time testing, we implemented the Raspberry Pi 4-based adversary to generate a dataset for man-in-the-middle (MITM) and denial of service (DoS) attacks on smart thermostats. The results show that the XGBoost-based IDS detects MITM and DoS attacks in 3.51 ms on a smart thermostat with an accuracy of 97.59%. Full article
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)
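A minimal sketch of the cloud-side classification stage, assuming an XGBoost classifier over simple flow features; the features, data, and hyperparameters are illustrative, not the paper's configuration.

```python
# Toy XGBoost-based intrusion classifier; features and data are made up.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
# Toy flow features: [packet_rate, mean_packet_size, syn_ratio]
X = rng.random((600, 3))
y = (X[:, 0] + X[:, 2] > 1.1).astype(int)      # toy label: 1 = attack

ids = XGBClassifier(n_estimators=50, max_depth=4, eval_metric="logloss")
ids.fit(X, y)
print("sample predictions:", ids.predict(X[:5]))
```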
15 pages, 1885 KiB  
Article
Metaverse and Fashion: An Analysis of Consumer Online Interest
by Carmen Ruiz Viñals, Marta Gil Ibáñez and José Luis Del Olmo Arriaga
Future Internet 2024, 16(6), 199; https://doi.org/10.3390/fi16060199 - 4 Jun 2024
Viewed by 262
Abstract
Recent studies have demonstrated the value that the Internet and web applications bring to businesses. Among other tools are those that enable the analysis and monitoring of searches, such as Google Trends, which is currently used by the fashion industry to guide experiential practices in a context of augmented reality and/or virtual reality, and even to predict purchasing behaviours through the metaverse. Data from this tool provide insight into fashion consumer search patterns. Understanding and managing this digital tool is an essential factor in rethinking businesses’ marketing strategies. The aim of this study is to analyse online user search behaviour by analysing and monitoring the terms “metaverse” and “fashion” on Google Trends. A quantitative descriptive cross-sectional method was employed. The results show that there is growing consumer interest in both concepts on the Internet, despite the lack of homogeneity in the behaviour of the five Google search tools. Full article
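A query of the kind behind this study can be reproduced with the community pytrends library (an unofficial Google Trends client); using it here is an assumption on our part, not the authors' stated tooling.

```python
# Fetch weekly 0-100 interest scores for "metaverse" and "fashion".
# pytrends is an unofficial client and may break if Google changes its API.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US")
pytrends.build_payload(kw_list=["metaverse", "fashion"],
                       timeframe="today 5-y")
df = pytrends.interest_over_time()
print(df[["metaverse", "fashion"]].tail())
```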
17 pages, 825 KiB  
Article
The Use of Artificial Intelligence in eParticipation: Mapping Current Research
by Zisis Vasilakopoulos, Theocharis Tavantzis, Rafail Promikyridis and Efthimios Tambouris
Future Internet 2024, 16(6), 198; https://doi.org/10.3390/fi16060198 - 3 Jun 2024
Viewed by 191
Abstract
Electronic Participation (eParticipation) enables citizens to engage in political and decision-making processes using information and communication technologies. As in many other fields, Artificial Intelligence (AI) has recently started to dictate some of the realities of eParticipation. As a result, an increasing number of studies are investigating the use of AI in eParticipation. The aim of this paper is to map current research on the use of AI in eParticipation. Following PRISMA methodology, the authors identified 235 relevant papers in Web of Science and Scopus and selected 46 studies for review. For analysis purposes, an analysis framework was constructed that combined eParticipation elements (namely actors, activities, effects, contextual factors, and evaluation) with AI elements (namely areas, algorithms, and algorithm evaluation). The results suggest that certain eParticipation actors and activities, as well as AI areas and algorithms, have attracted significant attention from researchers. However, many more remain largely unexplored. The findings can be of value to both academics looking for unexplored research fields and practitioners looking for empirical evidence on what works and what does not. Full article
18 pages, 1017 KiB  
Article
In-Home Evaluation of the NeoCare Artificial Intelligence Sound-Based Fall Detection System
by Carol Maher, Kylie A. Dankiw, Ben Singh, Svetlana Bogomolova and Rachel G. Curtis
Future Internet 2024, 16(6), 197; https://doi.org/10.3390/fi16060197 - 2 Jun 2024
Viewed by 285
Abstract
The NeoCare home monitoring system aims to detect falls and other events using artificial intelligence. This study evaluated NeoCare's accuracy and explored user perceptions through a 12-week in-home trial with 18 households of adults aged 65+ years at risk of falls (mean age: 75.3 years; 67% female). Participants logged events that were cross-referenced with NeoCare logs to calculate sensitivity and specificity for fall detection and response. Qualitative interviews gathered in-depth user feedback. During the trial, 28 falls/events were documented, with 12 eligible for analysis, as the others occurred outside the home or when devices were offline. NeoCare was activated 4939 times—4930 by everyday household sounds and 9 by actual falls. Fall detection sensitivity was 75.00% and specificity 6.80%. For responding to falls, sensitivity was 62.50% and specificity 17.28%. Users felt more secure with NeoCare but identified needs for further calibration to improve accuracy. Advantages included avoiding wearables, while key challenges were misinterpreted noises and occasional technical issues like going offline. Suggested improvements were visual indicators, trigger words, and outdoor capability. The study demonstrated NeoCare's potential with modifications. Users found it beneficial but highlighted areas for improvement. Real-world evaluations and user-centered design are crucial for healthcare technology development. Full article
(This article belongs to the Special Issue eHealth and mHealth)
22 pages, 890 KiB  
Article
Efficiency of Federated Learning and Blockchain in Preserving Privacy and Enhancing the Performance of Credit Card Fraud Detection (CCFD) Systems
by Tahani Baabdullah, Amani Alzahrani, Danda B. Rawat and Chunmei Liu
Future Internet 2024, 16(6), 196; https://doi.org/10.3390/fi16060196 - 2 Jun 2024
Viewed by 183
Abstract
Increasing global credit card usage has elevated it to a preferred payment method for daily transactions, underscoring its significance in global financial cybersecurity. This paper introduces a credit card fraud detection (CCFD) system that integrates federated learning (FL) with blockchain technology. The experiment employs FL to establish a global learning model on the cloud server, which transmits initial parameters to individual local learning models on fog nodes. With three banks (fog nodes) involved, each bank trains its learning model locally, ensuring data privacy, and subsequently sends back updated parameters to the global learning model. Through the integration of FL and blockchain, our system ensures privacy preservation and data protection. We utilize three machine learning and deep learning algorithms, RF, CNN, and LSTM, alongside optimization techniques such as ADAM, SGD, and MSGD. The SMOTE oversampling technique is also employed to balance the dataset before model training. Our proposed framework has demonstrated efficiency and effectiveness in enhancing classification performance and prediction accuracy. Full article
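The federated step described (local training at each bank, parameter averaging at the server) is FedAvg-like; a minimal numpy sketch follows. The toy linear model and the sample-count weighting are assumptions, not the paper's exact scheme.

```python
# FedAvg-style round: banks train locally, the server averages weights.
# The linear model and weighting scheme are illustrative assumptions.
import numpy as np

def local_train(w, X, y, lr=0.1):            # one toy local update
    return w - lr * X.T @ (X @ w - y) / len(y)

def federated_round(global_w, banks):
    updates, sizes = [], []
    for X, y in banks:                       # raw data never leaves a bank
        updates.append(local_train(global_w, X, y))
        sizes.append(len(y))
    weights = np.asarray(sizes, float) / sum(sizes)
    return np.average(np.stack(updates), axis=0, weights=weights)

rng = np.random.default_rng(2)
banks = [(rng.random((50, 3)), rng.random(50)) for _ in range(3)]
w = np.zeros(3)
for _ in range(20):                          # global training rounds
    w = federated_round(w, banks)
print("global model weights:", w)
```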
19 pages, 1936 KiB  
Article
GreenLab, an IoT-Based Small-Scale Smart Greenhouse
by Cristian Volosciuc, Răzvan Bogdan, Bianca Blajovan, Cristina Stângaciu and Marius Marcu
Future Internet 2024, 16(6), 195; https://doi.org/10.3390/fi16060195 - 31 May 2024
Viewed by 264
Abstract
In an era of connectivity, the Internet of Things introduces smart solutions for smart and sustainable agriculture, bringing alternatives to overcome the food crisis. Among these solutions, smart greenhouses support crop and vegetable agriculture regardless of season and cultivated area by carefully controlling and managing parameters like temperature, air and soil humidity, and light. Smart technologies have proven to be successful tools for increasing agricultural production at both the macro and micro levels, which is an important step in streamlining small-scale agriculture. This paper presents an experimental Internet of Things-based small-scale greenhouse prototype as a proof of concept for the benefits of merging smart sensing, connectivity, IoT, and mobile-based applications, for growing cultures. Our proposed solution is cost-friendly and includes a photovoltaic panel and a buffer battery for reducing energy consumption costs, while also assuring functionality during night and cloudy weather and a mobile application for easy data visualization and monitoring of the greenhouse. Full article
(This article belongs to the Special Issue Industrial Internet of Things (IIoT): Trends and Technologies)
14 pages, 3949 KiB  
Article
Research on Multi-Modal Pedestrian Detection and Tracking Algorithm Based on Deep Learning
by Rui Zhao, Jutao Hao and Huan Huo
Future Internet 2024, 16(6), 194; https://doi.org/10.3390/fi16060194 - 31 May 2024
Viewed by 219
Abstract
In the realm of intelligent transportation, pedestrian detection has witnessed significant advancements. However, it continues to grapple with challenging issues, notably the detection of pedestrians in complex lighting scenarios. Conventional visible light mode imaging is profoundly affected by varying lighting conditions. Under optimal daytime lighting, visibility is enhanced, leading to superior pedestrian detection outcomes. Conversely, under low-light conditions, visible light mode imaging falters due to the inadequate provision of pedestrian target information, resulting in a marked decline in detection efficacy. In this context, infrared light mode imaging emerges as a valuable supplement, bolstering pedestrian information provision. This paper delves into pedestrian detection and tracking algorithms within a multi-modal image framework grounded in deep learning methodologies. Leveraging the YOLOv4 algorithm as a foundation, augmented by a channel stack fusion module, a novel multi-modal pedestrian detection algorithm tailored for intelligent transportation is proposed. This algorithm capitalizes on the fusion of visible and infrared light mode image features to enhance pedestrian detection performance amidst complex road environments. Experimental findings demonstrate that, compared to the Visible-YOLOv4 algorithm, renowned for its high performance, the proposed Double-YOLOv4-CSE algorithm exhibits a notable improvement, boasting a 5.0% accuracy rate enhancement and a 6.9% reduction in logarithmic average missing rate. This research's goal is to ensure that the algorithm can run smoothly even on a low-configuration 1080 Ti GPU and to improve the algorithm's coverage at the application layer, making it affordable and practical for both urban and rural areas. This addresses the broader research problem within the scope of smart cities and remote settings with limited computational power. Full article
16 pages, 335 KiB  
Article
Enhancing Sensor Data Imputation: OWA-Based Model Aggregation for Missing Values
by Muthana Al-Amidie, Laith Alzubaidi, Muhammad Aminul Islam and Derek T. Anderson
Future Internet 2024, 16(6), 193; https://doi.org/10.3390/fi16060193 - 31 May 2024
Viewed by 181
Abstract
Due to some limitations in the data collection process caused either by human-related errors or by collection electronics, sensors, and network connectivity-related errors, the important values at some points could be lost. However, a complete dataset is required for the desired performance of the subsequent applications in various fields like engineering, data science, statistics, etc. An efficient data imputation technique is desired to fill in the missing data values to achieve completeness within the dataset. The fuzzy integral is considered one of the most powerful techniques for multi-source information fusion. It has a wide range of applications in many real-world decision-making problems that often require decisions to be made with partially observable/available information. To address this problem, algorithms impute missing data with a representative sample or by predicting the most likely value given the observed data. In this article, we take a completely different approach to the information fusion task in the ordered weighted averaging (OWA) context. In particular, we empirically explore for different distributions how the weights/importance of the missing sources are distributed across the observed inputs/sources. The experimental results on the synthetic and real-world datasets demonstrate the applicability of the proposed methods. Full article
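To illustrate the OWA mechanism the abstract refers to: values are sorted before the weights are applied, and when sources are missing their weight mass must be redistributed over the observed inputs. The proportional redistribution below is one simple choice, not necessarily the paper's rule.

```python
# OWA aggregation with missing sources; weights are renormalized over
# the observed inputs. The redistribution rule is one simple choice.
import numpy as np

def owa(values, weights):
    """Classic OWA: weights applied to values sorted descending."""
    return float(np.sort(values)[::-1] @ weights)

def owa_with_missing(values, weights):
    obs = np.array([v for v in values if v is not None], dtype=float)
    w = np.asarray(weights[:len(obs)], dtype=float)
    return owa(obs, w / w.sum())             # renormalize the weight mass

w = [0.4, 0.3, 0.2, 0.1]                     # optimism-leaning OWA weights
print(owa_with_missing([0.9, None, 0.6, 0.7], w))
```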
16 pages, 5464 KiB  
Article
Prophet–CEEMDAN–ARBiLSTM-Based Model for Short-Term Load Forecasting
by Jindong Yang, Xiran Zhang, Wenhao Chen and Fei Rong
Future Internet 2024, 16(6), 192; https://doi.org/10.3390/fi16060192 - 31 May 2024
Viewed by 179
Abstract
Accurate short-term load forecasting (STLF) plays an essential role in sustainable energy development. Specifically, energy companies can efficiently plan and manage their generation capacity, lessening resource wastage and promoting the overall efficiency of power resource utilization. However, existing models cannot accurately capture the nonlinear features of electricity data, leading to a decline in the forecasting performance. To relieve this issue, this paper designs an innovative load forecasting method, named Prophet–CEEMDAN–ARBiLSTM, which consists of Prophet, Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN), and the residual Bidirectional Long Short-Term Memory (BiLSTM) network. Specifically, this paper firstly employs the Prophet method to learn cyclic and trend features from input data, aiming to discern the influence of these features on the short-term electricity load. Then, the paper adopts CEEMDAN to decompose the residual series and yield components with distinct modalities. In the end, this paper designs the advanced residual BiLSTM (ARBiLSTM) block as the input of the above extracted features to obtain the forecasting results. By conducting multiple experiments on the New England public dataset, it demonstrates that the Prophet–CEEMDAN–ARBiLSTM method can achieve better performance compared with the existing Prophet-based ones. Full article
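A skeletal version of the first two stages (trend/seasonality extraction, then decomposition of the residual) could look as follows, using the open-source prophet and PyEMD packages; the library choices and toy data are assumptions, and the ARBiLSTM stage is omitted.

```python
# Stage 1: Prophet learns trend/seasonality; stage 2: CEEMDAN decomposes
# the residual into modal components (the paper's ARBiLSTM stage omitted).
import numpy as np
import pandas as pd
from prophet import Prophet
from PyEMD import CEEMDAN

# Toy hourly load series with a daily cycle.
ds = pd.date_range("2024-01-01", periods=500, freq="h")
y = (100 + 10 * np.sin(np.arange(500) / 24 * 2 * np.pi)
     + np.random.default_rng(4).normal(0, 2, 500))
df = pd.DataFrame({"ds": ds, "y": y})

m = Prophet().fit(df)                        # cyclic + trend features
residual = df["y"].values - m.predict(df)["yhat"].values

imfs = CEEMDAN()(residual)                   # intrinsic mode functions
print(f"{imfs.shape[0]} modal components extracted from the residual")
```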
22 pages, 2048 KiB  
Article
Harnessing the Cloud: A Novel Approach to Smart Solar Plant Monitoring
by Mohammad Imran Ali, Shahi Dost, Khurram Shehzad Khattak, Muhammad Imran Khan and Riaz Muhammad
Future Internet 2024, 16(6), 191; https://doi.org/10.3390/fi16060191 - 29 May 2024
Viewed by 384
Abstract
Renewable Energy Sources (RESs) such as hydro, wind, and solar are emerging as preferred alternatives to fossil fuels. Among these RESs, solar energy is the most promising option and is gaining extensive interest around the globe. However, due to solar energy's intermittent nature and sensitivity to environmental parameters (e.g., irradiance, dust, temperature, aging and humidity), real-time solar plant monitoring is imperative. This paper's contribution is to compare and analyze current IoT trends and propose future research directions. As a result, this will be instrumental in the development of low-cost, real-time, scalable, reliable, and power-optimized solar plant monitoring systems. In this work, a comparative analysis has been performed on proposed solutions using the existing literature. This comparative analysis has been conducted considering five aspects: compute boards, sensors, communication, servers, and architectural paradigms. The IoT architectural paradigms employed have been summarized and discussed with respect to communication, application layers, and storage capabilities. To facilitate enhanced IoT-based solar monitoring, an edge computing paradigm has been proposed. Suggestions are presented for the fabrication of edge devices and nodes using optimum compute boards, sensors, and communication modules. Different cloud platforms have been explored, and it was concluded that the public cloud platform Amazon Web Services is the ideal solution. Artificial intelligence-based techniques, methods, and outcomes are presented, which can help in the monitoring, analysis, and management of solar PV systems. As an outcome, this paper can be used to help researchers and academics develop low-cost, real-time, effective, scalable, and reliable solar monitoring systems. Full article
(This article belongs to the Section Internet of Things)
15 pages, 845 KiB  
Article
Tracing Student Activity Patterns in E-Learning Environments: Insights into Academic Performance
by Evgenia Paxinou, Georgios Feretzakis, Rozita Tsoni, Dimitrios Karapiperis, Dimitrios Kalles and Vassilios S. Verykios
Future Internet 2024, 16(6), 190; https://doi.org/10.3390/fi16060190 - 29 May 2024
Viewed by 641
Abstract
In distance learning educational environments like Moodle, students interact with their tutors, their peers, and the provided educational material through various means. Due to advancements in learning analytics, students’ transitions within Moodle generate digital trace data that outline learners’ self-directed learning paths and reveal information about their academic behavior within a course. These learning paths can be depicted as sequences of transitions between various states, such as completing quizzes, submitting assignments, downloading files, and participating in forum discussions, among others. Considering that a specific learning path summarizes the students’ trajectory in a course during an academic year, we analyzed data on students’ actions extracted from Moodle logs to investigate how the distribution of user actions within different Moodle resources can impact academic achievements. Our analysis was conducted using a Markov Chain Model, whereby transition matrices were constructed to identify steady states, and eigenvectors were calculated. Correlations were explored between specific states in users’ eigenvectors and their final grades, which were used as a proxy of academic performance. Our findings offer valuable insights into the relationship between student actions, link weight vectors, and academic performance, in an attempt to optimize students’ learning paths, tutors’ guidance, and course structures in the Moodle environment. Full article
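The eigenvector computation the abstract describes is standard: the steady state of a row-stochastic transition matrix is the left eigenvector for eigenvalue 1. A small sketch with three illustrative Moodle states (the matrix values are made up):

```python
# Steady state of a toy Moodle transition matrix via the left eigenvector.
import numpy as np

P = np.array([[0.5, 0.3, 0.2],    # quiz -> quiz / assignment / forum
              [0.2, 0.6, 0.2],    # assignment -> ...
              [0.3, 0.3, 0.4]])   # forum -> ...

vals, vecs = np.linalg.eig(P.T)   # left eigenvectors of P
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()                # normalize into a distribution
print("long-run share of actions per state:", pi.round(3))
```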
20 pages, 2832 KiB  
Article
Dynamic Spatial–Temporal Self-Attention Network for Traffic Flow Prediction
by Dong Wang, Hongji Yang and Hua Zhou
Future Internet 2024, 16(6), 189; https://doi.org/10.3390/fi16060189 - 25 May 2024
Viewed by 415
Abstract
Traffic flow prediction is considered to be one of the fundamental technologies in intelligent transportation systems (ITSs), with a tremendous application prospect. Unlike traditional time series analysis tasks, the key challenge in traffic flow prediction lies in effectively modelling the highly complex and dynamic spatiotemporal dependencies within the traffic data. In recent years, researchers have proposed various methods to enhance the accuracy of traffic flow prediction, but certain issues still persist. For instance, some methods rely on specific static assumptions, failing to adequately simulate the dynamic changes in the data, thus limiting their modelling capacity. On the other hand, some approaches inadequately capture the spatiotemporal dependencies, resulting in the omission of crucial information and leading to unsatisfactory prediction outcomes. To address these challenges, this paper proposes a model called the Dynamic Spatial–Temporal Self-Attention Network (DSTSAN). Firstly, this research enhances the interaction between different dimension features in the traffic data through a feature augmentation module, thereby improving the model's representational capacity. Subsequently, the current investigation introduces two masking matrices based on the spatial self-attention module: one captures local spatial dependencies and the other captures global spatial dependencies. Finally, the methodology employs a temporal self-attention module to capture and integrate the dynamic temporal dependencies of traffic data. We designed experiments that use historical data from the previous hour to predict traffic flow conditions in the hour ahead, and we extensively compared the DSTSAN model with 11 baseline methods on four real-world datasets. The results demonstrate the effectiveness and superiority of the proposed approach. Full article
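The masked spatial self-attention at the heart of such models reduces to scaled dot-product attention with a boolean mask deciding which nodes may attend to which (local neighbors versus all nodes). A minimal numpy sketch, with shapes and the mask as illustrative assumptions:

```python
# Scaled dot-product attention with a spatial mask (local neighbors only).
import numpy as np

def masked_attention(Q, K, V, mask):
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    scores = np.where(mask, scores, -1e9)       # block masked-out pairs
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)       # row-wise softmax
    return w @ V

rng = np.random.default_rng(3)
N, d = 4, 8                                     # 4 road sensors, dim 8
Q, K, V = (rng.random((N, d)) for _ in range(3))
local = (np.eye(N, dtype=bool) | np.eye(N, k=1, dtype=bool)
         | np.eye(N, k=-1, dtype=bool))         # attend to neighbors only
print(masked_attention(Q, K, V, local).shape)   # -> (4, 8)
```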
19 pages, 261 KiB  
Article
Studying the Quality of Source Code Generated by Different AI Generative Engines: An Empirical Evaluation
by Davide Tosi
Future Internet 2024, 16(6), 188; https://doi.org/10.3390/fi16060188 - 24 May 2024
Viewed by 521
Abstract
The advent of Generative Artificial Intelligence is opening essential questions about whether and when AI will replace human abilities in accomplishing everyday tasks. This issue is particularly true in the domain of software development, where generative AI seems to have strong skills in solving coding problems and generating software source code. In this paper, an empirical evaluation of AI-generated source code is performed: three complex coding problems (selected from the exams for the Java Programming course at the University of Insubria) are prompted to three different Large Language Model (LLM) Engines, and the generated code is evaluated in its correctness and quality by means of human-implemented test suites and quality metrics. The experimentation shows that the three evaluated LLM engines are able to solve the three exams but with the constant supervision of software experts in performing these tasks. Currently, LLM engines need human-expert support to produce running code that is of good quality. Full article
17 pages, 1140 KiB  
Article
Enhanced Beacons Dynamic Transmission over TSCH
by Erik Ortiz Guerra, Mario Martínez Morfa, Carlos Manuel García Algora, Hector Cruz-Enriquez, Kris Steenhaut and Samuel Montejo-Sánchez
Future Internet 2024, 16(6), 187; https://doi.org/10.3390/fi16060187 - 24 May 2024
Viewed by 415
Abstract
Time slotted channel hopping (TSCH) has become the standard multichannel MAC protocol for low-power lossy networks. The procedure for associating nodes in a TSCH-based network is not included in the standard and has been defined in the minimal 6TiSCH configuration. Faster network formation ensures that data packet transmission can start sooner. This paper proposes a dynamic beacon transmission schedule over the TSCH mechanism that achieves a shorter network formation time than the default minimum 6TiSCH static schedule. A theoretical model is derived for the proposed mechanism to estimate the expected time for a node to get associated with the network. Simulation results obtained with different network topologies and channel conditions show that the proposed mechanism reduces the average association time and average power consumption during network formation compared to the default minimal 6TiSCH configuration. Full article
(This article belongs to the Special Issue Industrial Internet of Things (IIoT): Trends and Technologies)
19 pages, 4134 KiB  
Article
Data Collection in Areas without Infrastructure Using LoRa Technology and a Quadrotor
by Josué I. Rojo-García, Sergio A. Vera-Chavarría, Yair Lozano-Hernández, Victor G. Sánchez-Meza, Jaime González-Sierra and Luz N. Oliva-Moreno
Future Internet 2024, 16(6), 186; https://doi.org/10.3390/fi16060186 - 24 May 2024
Viewed by 373
Abstract
The use of sensor networks in monitoring applications has increased; they are useful in security, environmental, and health applications, among others. These networks usually transmit data through short-range stations, which makes them attractive for incorporation into applications and devices for use in places without access to satellite or mobile signals, for example, forests, seas, and jungles. To this end, unmanned aerial vehicles (UAVs) have attractive characteristics for data collection and transmission in remote areas without infrastructure. Integrating systems based on wireless sensors and UAVs seems to be an economical and easy-to-use solution. However, the main difficulty is the amount of data sent, which affects the communication time and even the flight status of the UAV. Additionally, factors such as the UAV model and the hardware used for these tasks must be considered. Based on those difficulties mentioned, this paper proposes a system based on long-range (LoRa) technology. We present a low-cost wireless sensor network that is flexible, easy to deploy, and capable of collecting/sending data via LoRa transceivers. The readings obtained are packaged and sent to a UAV. The UAV performs predefined flights at a constant height of 30 m and with a direct line-of-sight (LoS) to the stations, during which it collects information from two data stations, concluding that it is possible to carry out a correct data transmission with a flight speed of 10 m/s and a transmission radius of 690 m for a group of three packages confirmed by 20 messages each. Thus, it is possible to collect data from routes of up to 8 km for each battery charge, considering the return of the UAV. Full article
21 pages, 1248 KiB  
Article
HP-LSTM: Hawkes Process–LSTM-Based Detection of DDoS Attack for In-Vehicle Network
by Xingyu Li, Ruifeng Li and Yanchen Liu
Future Internet 2024, 16(6), 185; https://doi.org/10.3390/fi16060185 - 23 May 2024
Viewed by 286
Abstract
Connected and autonomous vehicles (CAVs) are advancing at a fast speed with the improvement of the automotive industry, which opens up new possibilities for different attacks. A Distributed Denial-of-Service (DDoS) attacker floods the in-vehicle network with fake messages, resulting in the failure of driving assistance systems and impairment of vehicle control functionalities, seriously disrupting the normal operation of the vehicle. In this paper, we propose a novel DDoS attack detection method for in-vehicle Ethernet Scalable service-Oriented Middleware over IP (SOME/IP), which integrates the Hawkes process with Long Short-Term Memory networks (LSTMs) to capture the dynamic behavioral features of the attacker. Specifically, we employ the Hawkes process to capture features of the DDoS attack, with its parameters reflecting the dynamism and self-exciting properties of the attack events. Subsequently, we propose a novel deep learning network structure, an HP-LSTM block, inspired by the Hawkes process, while employing a residual attention block to enhance the model’s detection efficiency and accuracy. Additionally, due to the scarcity of publicly available datasets for SOME/IP, we employed a mature SOME/IP generator to create a dataset for evaluating the validity of the proposed detection model. Finally, extensive experiments were conducted to demonstrate the effectiveness of the proposed DDoS attack detection method. Full article
(This article belongs to the Special Issue Security for Vehicular Ad Hoc Networks)
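For orientation, the self-exciting intensity of a univariate Hawkes process with an exponential kernel is lambda(t) = mu + sum over past events t_i of alpha * exp(-beta * (t - t_i)); bursts of arrivals raise the intensity, which then decays. The parameter values below are illustrative, not fitted to SOME/IP traffic.

```python
# Hawkes process intensity with an exponential kernel (toy parameters).
import numpy as np

def hawkes_intensity(t, events, mu=0.2, alpha=0.8, beta=1.5):
    past = np.asarray([e for e in events if e < t])
    return mu + alpha * np.exp(-beta * (t - past)).sum()

bursts = [1.0, 1.1, 1.15, 1.2, 4.0]   # toy message timestamps
for t in (1.3, 2.5, 4.1):             # intensity spikes right after a burst
    print(t, round(hawkes_intensity(t, bursts), 3))
```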
19 pages, 1169 KiB  
Article
Exploiting Autoencoder-Based Anomaly Detection to Enhance Cybersecurity in Power Grids
by Fouzi Harrou, Benamar Bouyeddou, Abdelkader Dairi and Ying Sun
Future Internet 2024, 16(6), 184; https://doi.org/10.3390/fi16060184 - 22 May 2024
Viewed by 369
Abstract
The evolution of smart grids has led to technological advances and a demand for more efficient and sustainable energy systems. However, the deployment of communication systems in smart grids has increased the threat of cyberattacks, which can result in power outages and disruptions. This paper presents a semi-supervised hybrid deep learning model that combines a Gated Recurrent Unit (GRU)-based Stacked Autoencoder (AE-GRU) with anomaly detection algorithms, including Isolation Forest, Local Outlier Factor, One-Class SVM, and Elliptical Envelope. Using GRU units in both the encoder and decoder sides of the stacked autoencoder enables the effective capture of temporal patterns and dependencies, facilitating dimensionality reduction, feature extraction, and accurate reconstruction for enhanced anomaly detection in smart grids. The proposed approach utilizes unlabeled data to monitor network traffic and identify suspicious data flow. Specifically, the AE-GRU is performed for data reduction and extracting relevant features, and then the anomaly algorithms are applied to reveal potential cyberattacks. The proposed framework is evaluated using the widely adopted IEC 60870-5-104 traffic dataset. The experimental results demonstrate that the proposed approach outperforms standalone algorithms, with the AE-GRU-based LOF method achieving the highest detection rate. Thus, the proposed approach can potentially enhance the cybersecurity in smart grids by accurately detecting and preventing cyberattacks. Full article
(This article belongs to the Special Issue Cybersecurity in the IoT)
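A compact PyTorch sketch of the AE-GRU idea: a GRU encoder compresses each traffic window, a GRU decoder reconstructs it, and an outlier detector (here LOF, the pairing the paper reports as strongest) runs on the latent codes. Dimensions, data, and the training loop are illustrative assumptions.

```python
# GRU autoencoder + LOF anomaly detection on latent codes (toy setup).
import torch
import torch.nn as nn
from sklearn.neighbors import LocalOutlierFactor

class AEGRU(nn.Module):
    def __init__(self, n_feat=8, latent=4):
        super().__init__()
        self.enc = nn.GRU(n_feat, latent, batch_first=True)
        self.dec = nn.GRU(latent, n_feat, batch_first=True)

    def forward(self, x):                    # x: (batch, time, features)
        z, _ = self.enc(x)
        out, _ = self.dec(z)
        return out, z[:, -1, :]              # reconstruction, latent code

model = AEGRU()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(256, 20, 8)                   # toy traffic windows
for _ in range(50):                          # unsupervised training
    recon, _ = model(x)
    loss = nn.functional.mse_loss(recon, x)
    opt.zero_grad(); loss.backward(); opt.step()

_, codes = model(x)
flags = LocalOutlierFactor(n_neighbors=20).fit_predict(codes.detach().numpy())
print(int((flags == -1).sum()), "windows flagged anomalous")
```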
21 pages, 4943 KiB  
Article
Cross-Layer Optimization for Enhanced IoT Connectivity: A Novel Routing Protocol for Opportunistic Networks
by Ayman Khalil and Besma Zeddini
Future Internet 2024, 16(6), 183; https://doi.org/10.3390/fi16060183 - 22 May 2024
Viewed by 433
Abstract
Opportunistic networks, an evolution of mobile Ad Hoc networks (MANETs), offer decentralized communication without relying on preinstalled infrastructure, enabling nodes to route packets through different mobile nodes dynamically. However, due to the absence of complete paths and rapidly changing connectivity, routing in opportunistic networks presents unique challenges. This paper proposes a novel probabilistic routing model for opportunistic networks, leveraging nodes' meeting probabilities to route packets towards their destinations. This model dynamically builds routes based on the likelihood of encountering the destination node, considering factors such as the last meeting time and acknowledgment tables to manage network overload. Additionally, an efficient message detection scheme is introduced to alleviate high overhead by selectively deleting messages from buffers during congestion. Furthermore, the proposed model incorporates cross-layer optimization techniques, integrating optimization strategies across multiple protocol layers to maximize energy efficiency, adaptability, and message delivery reliability. Through extensive simulations, the effectiveness of the proposed model is demonstrated, showing improved message delivery probability while maintaining reasonable overhead and latency. This research contributes to the advancement of opportunistic networks, particularly in enhancing connectivity and efficiency for Internet of Things (IoT) applications deployed in challenging environments. Full article
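The meeting-probability bookkeeping described is close in spirit to PRoPHET's delivery predictability: reinforce on contact, age while apart, and propagate transitively. The constants below are PRoPHET's commonly cited defaults, not necessarily this paper's values.

```python
# PRoPHET-style delivery predictability updates (constants are the
# commonly cited defaults, used here purely for illustration).
P_INIT, GAMMA, BETA = 0.75, 0.98, 0.25

def on_contact(P, a, b):
    """Reinforce P(a,b) when nodes a and b meet."""
    old = P.get((a, b), 0.0)
    P[(a, b)] = old + (1 - old) * P_INIT

def age(P, elapsed_units):
    """Decay every entry while nodes stay apart."""
    for k in P:
        P[k] *= GAMMA ** elapsed_units

def transitive(P, a, b, c):
    """If a meets b often and b meets c often, raise P(a,c)."""
    old = P.get((a, c), 0.0)
    P[(a, c)] = old + (1 - old) * P.get((a, b), 0.0) * P.get((b, c), 0.0) * BETA

P = {}
on_contact(P, "n1", "n2"); on_contact(P, "n2", "dst")
transitive(P, "n1", "n2", "dst")
age(P, 3)
print(P)   # forward to the neighbor with the higher P(., dst)
```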
16 pages, 2410 KiB  
Systematic Review
Urban Green Spaces and Mental Well-Being: A Systematic Review of Studies Comparing Virtual Reality versus Real Nature
by Liyuan Liang, Like Gobeawan, Siu-Kit Lau, Ervine Shengwei Lin and Kai Keng Ang
Future Internet 2024, 16(6), 182; https://doi.org/10.3390/fi16060182 - 21 May 2024
Viewed by 780
Abstract
Increasingly, urban planners are adopting virtual reality (VR) in designing urban green spaces (UGS) to visualize landscape designs in immersive 3D. However, the psychological effect of green spaces experienced in VR may differ from the actual experience in the real world. In this paper, we systematically reviewed studies that conducted experiments to investigate the psychological benefits of nature in both VR and the real world, so that nature experienced in VR is anchored to nature in the real world. We separated these studies based on the type of VR setup used, specifically, 360-degree video or 3D virtual environment, and established a framework of commonly used standard questionnaires used to measure the perceived mental states. The most common questionnaires include the Positive and Negative Affect Schedule (PANAS), the Perceived Restorativeness Scale (PRS), and the Restoration Outcome Scale (ROS). Although the results from studies that used 360-degree video were less clear, results from studies that used 3D virtual environments provided evidence that virtual nature is comparable to real-world nature, and thus showed promise that UGS designs in VR can transfer into real-world designs to yield similar physiological effects. Full article
(This article belongs to the Special Issue Advances in Extended Reality for Smart Cities)
20 pages, 4156 KiB  
Article
MADDPG-Based Offloading Strategy for Timing-Dependent Tasks in Edge Computing
by Yuchen Wang, Zishan Huang, Zhongcheng Wei and Jijun Zhao
Future Internet 2024, 16(6), 181; https://doi.org/10.3390/fi16060181 - 21 May 2024
Viewed by 411
Abstract
With the increasing popularity of the Internet of Things (IoT), the proliferation of computation-intensive and timing-dependent applications has brought serious load pressure on terrestrial networks. In order to solve the problem of computing resource conflicts and long response delays caused by concurrent service requests from multiple users, this paper proposes an improved edge computing task-offloading scheme for timing-dependent tasks based on Multi-Agent Deep Deterministic Policy Gradient (MADDPG), which aims to shorten the offloading delay and improve resource utilization by means of resource prediction and collaboration among multiple agents. First, to coordinate the global computing resource, a gated recurrent unit is utilized, which predicts the next computing resource requirements of the timing-dependent tasks according to historical information. Second, the predicted information, the historical offloading decisions, and the current state are used as inputs, and the training process of the reinforcement learning algorithm is improved to propose a task-offloading algorithm based on MADDPG. The simulation results show that the algorithm reduces the response latency by 6.7% and improves the resource utilization by 30.6% compared with the suboptimal benchmark algorithm, and it saves nearly 500 training rounds during the learning process, which effectively improves the timeliness of the offloading strategy. Full article
21 pages, 718 KiB  
Review
Using ChatGPT in Software Requirements Engineering: A Comprehensive Review
by Nuno Marques, Rodrigo Rocha Silva and Jorge Bernardino
Future Internet 2024, 16(6), 180; https://doi.org/10.3390/fi16060180 - 21 May 2024
Viewed by 738
Abstract
Large language models (LLMs) have had a significant impact on several domains, including software engineering. However, a comprehensive understanding of LLMs’ use, impact, and potential limitations in software engineering is still emerging and remains in its early stages. This paper analyzes the role of large language models (LLMs), such as ChatGPT-3.5, in software requirements engineering, a critical area in software engineering experiencing rapid advances due to artificial intelligence (AI). By analyzing several studies, we systematically evaluate the integration of ChatGPT into software requirements engineering, focusing on its benefits, challenges, and ethical considerations. This evaluation is based on a comparative analysis that highlights ChatGPT’s efficiency in eliciting requirements, accuracy in capturing user needs, potential to improve communication among stakeholders, and impact on the responsibilities of requirements engineers. The selected studies were analyzed for their insights into the effectiveness of ChatGPT, the importance of human feedback, prompt engineering techniques, technological limitations, and future research directions in using LLMs in software requirements engineering. This comprehensive analysis aims to provide a differentiated perspective on how ChatGPT can reshape software requirements engineering practices and provides strategic recommendations for leveraging ChatGPT to effectively improve the software requirements engineering process. Full article
14 pages, 5787 KiB  
Article
Object and Event Detection Pipeline for Rink Hockey Games
by Jorge Miguel Lopes, Luis Paulo Mota, Samuel Marques Mota, José Manuel Torres, Rui Silva Moreira, Christophe Soares, Ivo Pereira, Feliz Ribeiro Gouveia and Pedro Sobral
Future Internet 2024, 16(6), 179; https://doi.org/10.3390/fi16060179 - 21 May 2024
Viewed by 449
Abstract
All types of sports are potential application scenarios for automatic and real-time visual object and event detection. In rink hockey, the popular roller skate variant of team hockey, it is of great interest to automatically track player movements, positions, and sticks, and also to make other judgments, such as being able to locate the ball. In this work, we present a real-time pipeline consisting of an object detection model specifically designed for rink hockey games, followed by a knowledge-based event detection module. Even in the presence of occlusions and fast movements, our deep learning object detection model effectively identifies and tracks important visual elements in real time, such as: ball, players, sticks, referees, crowd, goalkeeper, and goal. Using a curated dataset consisting of a collection of rink hockey videos containing 2525 annotated frames, we trained and evaluated the algorithm's performance and compared it to state-of-the-art object detection techniques. Our object detection model, based on YOLOv7, achieves a global accuracy of 80% and, according to our results, a good balance of accuracy and speed, making it a good choice for rink hockey applications. In our initial tests, the event detection module successfully detected an important event type in rink hockey games, namely, the occurrence of penalties. Full article
(This article belongs to the Special Issue Advances Techniques in Computer Vision and Multimedia II)