Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

24 pages, 13967 KiB  
Article
Transforming Digital Marketing with Generative AI
by Tasin Islam, Alina Miron, Monomita Nandy, Jyoti Choudrie, Xiaohui Liu and Yongmin Li
Computers 2024, 13(7), 168; https://doi.org/10.3390/computers13070168 - 8 Jul 2024
Cited by 12 | Viewed by 14737
Abstract
The current marketing landscape faces challenges in content creation and innovation, relying heavily on manually created content and traditional channels like social media and search engines. While effective, these methods often lack the creativity and uniqueness needed to stand out in a competitive market. To address this, we introduce MARK-GEN, a conceptual framework that utilises generative artificial intelligence (AI) models to transform marketing content creation. MARK-GEN provides a comprehensive, structured approach for businesses to employ generative AI in producing marketing materials, representing a new method in digital marketing strategies. We present two case studies within the fashion industry, demonstrating how MARK-GEN can generate compelling marketing content using generative AI technologies. This proposition paper builds on our previous technical developments in virtual try-on models, including image-based, multi-pose, and image-to-video techniques, and is intended for a broad audience, particularly those in business management.
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)

25 pages, 437 KiB  
Article
Enhancing the Security of Classical Communication with Post-Quantum Authenticated-Encryption Schemes for the Quantum Key Distribution
by Farshad Rahimi Ghashghaei, Yussuf Ahmed, Nebrase Elmrabit and Mehdi Yousefi
Computers 2024, 13(7), 163; https://doi.org/10.3390/computers13070163 - 1 Jul 2024
Cited by 6 | Viewed by 3516
Abstract
This research aims to establish a secure system for key exchange by using post-quantum cryptography (PQC) schemes in the classical channel of quantum key distribution (QKD). Modern cryptography faces significant threats from quantum computers, which can solve classical problems rapidly. PQC schemes address critical security challenges in QKD, particularly in authentication and encryption, to ensure reliable communication across quantum and classical channels. A further objective of this study is to balance security and communication speed among various PQC algorithms at different security levels, specifically CRYSTALS-Kyber, CRYSTALS-Dilithium, and Falcon, which are finalists in the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography Standardization project. The quantum channel of QKD is simulated with Qiskit, which is a comprehensive and well-supported tool in the field of quantum computing. By providing a detailed analysis of the performance of these three algorithms against Rivest–Shamir–Adleman (RSA), the results will guide companies and organizations in selecting an optimal combination for their QKD systems to achieve a reliable balance between efficiency and security. Our findings demonstrate that the implemented PQC schemes effectively address the security challenges posed by quantum computers while keeping performance similar to that of RSA.
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
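The authenticated key-exchange flow the abstract describes (PQC signatures protecting KEM material on the classical channel) can be sketched as follows. This is a toy illustration only: the hash-based functions below are hypothetical stand-ins that mimic the interfaces of a Kyber-style KEM and a Dilithium-style signature and provide no real security; a production system would call an actual PQC library implementation.

```python
import hashlib
import os

# Toy stand-ins for a post-quantum KEM and signature scheme. The names
# and interfaces are hypothetical; only the message flow is meaningful.

def kem_keygen():
    sk = os.urandom(32)
    pk = hashlib.sha256(b"pk" + sk).digest()
    return pk, sk

def kem_encapsulate(pk):
    # Peer derives a shared secret plus a ciphertext bound to pk.
    eph = os.urandom(32)
    secret = hashlib.sha256(pk + eph).digest()
    return eph, secret  # a real KEM would encrypt eph under pk

def kem_decapsulate(pk, sk, ciphertext):
    # Key owner recovers the same shared secret (sk unused in this toy).
    return hashlib.sha256(pk + ciphertext).digest()

def sign(key, msg):
    # Stand-in for a PQC signature (really a keyed hash here).
    return hashlib.sha256(key + msg).digest()

def verify(key, msg, sig):
    return sig == hashlib.sha256(key + msg).digest()

# Authenticated key exchange over the classical channel:
auth_key = os.urandom(32)            # pre-shared authentication key
alice_pk, alice_sk = kem_keygen()
pk_sig = sign(auth_key, alice_pk)    # Alice authenticates her public key

assert verify(auth_key, alice_pk, pk_sig)      # Bob checks authenticity
ct, bob_secret = kem_encapsulate(alice_pk)     # Bob encapsulates
alice_secret = kem_decapsulate(alice_pk, alice_sk, ct)
```

Both parties now hold the same secret, which can seed the symmetric encryption of subsequent classical traffic.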

44 pages, 4162 KiB  
Review
Object Tracking Using Computer Vision: A Review
by Pushkar Kadam, Gu Fang and Ju Jia Zou
Computers 2024, 13(6), 136; https://doi.org/10.3390/computers13060136 - 28 May 2024
Cited by 8 | Viewed by 11389
Abstract
Object tracking is one of the most important problems in computer vision applications such as robotics, autonomous driving, and pedestrian movement analysis. There has been significant development in camera hardware, where researchers are experimenting with the fusion of different sensors and developing image processing algorithms to track objects. Image processing and deep learning methods have progressed significantly in the last few decades. Different data association methods, accompanied by image processing and deep learning, are becoming crucial in object tracking tasks. The data requirements of deep learning methods have led to different public datasets that allow researchers to benchmark their methods. While there has been improvement in object tracking methods, technology, and the availability of annotated object tracking datasets, there is still scope for improvement. This review contributes by systematically identifying different sensor equipment, datasets, methods, and applications, providing a taxonomy of the literature and the strengths and limitations of different approaches, thereby offering guidelines for selecting equipment, methods, and applications. Research questions and future directions addressing the unresolved issues in the object tracking field are also presented.
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision)
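Data association, which the review identifies as a crucial component of tracking pipelines, can be illustrated with a minimal greedy IoU matcher. This is our simplified sketch, not code from the review; real trackers typically use the Hungarian algorithm or learned affinities instead of greedy matching.

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def greedy_associate(tracks, detections, threshold=0.3):
    # Match each track to its best-overlapping unclaimed detection.
    pairs, used = [], set()
    for t_idx, t_box in enumerate(tracks):
        scores = [(iou(t_box, d), d_idx)
                  for d_idx, d in enumerate(detections) if d_idx not in used]
        if not scores:
            continue
        best_iou, best_idx = max(scores)
        if best_iou >= threshold:
            pairs.append((t_idx, best_idx))
            used.add(best_idx)
    return pairs
```

For example, two existing tracks and two shuffled detections are matched by overlap, and unmatched detections would spawn new tracks in a full pipeline.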

19 pages, 1329 KiB  
Article
Blockchain Integration and Its Impact on Renewable Energy
by Hamed Taherdoost
Computers 2024, 13(4), 107; https://doi.org/10.3390/computers13040107 - 22 Apr 2024
Cited by 11 | Viewed by 11387
Abstract
This paper investigates the evolving landscape of blockchain technology in renewable energy. The study, based on a Scopus database search on 21 February 2024, reveals a growing trend in scholarly output, predominantly in engineering, energy, and computer science. The diverse range of source types and global contributions, led by China, reflects the interdisciplinary nature of this field. This comprehensive review delves into 33 research papers, examining the integration of blockchain in renewable energy systems, encompassing decentralized power dispatching, certificate trading, alternative energy selection, and management in applications like intelligent transportation systems and microgrids. The papers employ theoretical concepts such as decentralized power dispatching models and permissioned blockchains, utilizing methodologies involving advanced algorithms, consensus mechanisms, and smart contracts to enhance efficiency, security, and transparency. The findings suggest that blockchain integration can reduce costs, increase renewable source utilization, and optimize energy management. Despite these advantages, challenges including uncertainties, privacy concerns, scalability issues, and energy consumption are identified, alongside legal and regulatory compliance and market acceptance hurdles. Overcoming resistance to change and building trust in blockchain-based systems are crucial for successful adoption, emphasizing the need for collaborative efforts among industry stakeholders, regulators, and technology developers to unlock the full potential of blockchains in renewable energy integration.

34 pages, 7324 KiB  
Article
The Explainability of Transformers: Current Status and Directions
by Paolo Fantozzi and Maurizio Naldi
Computers 2024, 13(4), 92; https://doi.org/10.3390/computers13040092 - 4 Apr 2024
Cited by 9 | Viewed by 9026
Abstract
An increasing demand for model explainability has accompanied the widespread adoption of transformers in various fields of application. In this paper, we conduct a survey of the existing literature on the explainability of transformers. We provide a taxonomy of methods based on the combination of transformer components that are leveraged to arrive at the explanation. For each method, we describe its mechanism and survey its applications. We find that attention-based methods, both alone and in conjunction with activation-based and gradient-based methods, are the most widely employed. Growing attention is also being devoted to the deployment of visualization techniques to support the explanation process.
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)
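As a concrete illustration of the attention-based family of methods the survey covers, here is a minimal NumPy sketch of attention rollout, one specific technique in that family (our sketch, not the paper's code): heads are averaged, an identity term models the residual connection, and the per-layer maps are multiplied in order.

```python
import numpy as np

def attention_rollout(attentions):
    """Combine per-layer attention maps into a single relevance map.

    attentions: list of (heads, tokens, tokens) arrays, one per layer.
    """
    rollout = None
    for layer in attentions:
        a = layer.mean(axis=0)                   # average attention heads
        a = 0.5 * a + 0.5 * np.eye(a.shape[-1])  # account for residual path
        a = a / a.sum(axis=-1, keepdims=True)    # renormalize rows
        rollout = a if rollout is None else a @ rollout
    return rollout
```

Because each layer map stays row-stochastic, the rolled-out map is itself a probability distribution over input tokens for every output token.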

25 pages, 7135 KiB  
Article
A Seamless Deep Learning Approach for Apple Detection, Depth Estimation, and Tracking Using YOLO Models Enhanced by Multi-Head Attention Mechanism
by Praveen Kumar Sekharamantry, Farid Melgani, Jonni Malacarne, Riccardo Ricci, Rodrigo de Almeida Silva and Jose Marcato Junior
Computers 2024, 13(3), 83; https://doi.org/10.3390/computers13030083 - 21 Mar 2024
Cited by 15 | Viewed by 3504
Abstract
Considering precision agriculture, recent technological developments have sparked the emergence of several new tools that can help to automate the agricultural process. For instance, accurately detecting and counting apples in orchards is essential for maximizing harvests and ensuring effective resource management. However, there are several intrinsic difficulties with traditional techniques for identifying and counting apples in orchards. Apple target detection algorithms, such as YOLOv7, have demonstrated considerable accuracy in identifying and recognizing apples. However, occlusions from electrical wiring, branches, and overlapping fruit pose severe problems for precise apple detection. Thus, to overcome these issues and accurately recognize apples and estimate apple depth from drone-based videos against complicated backdrops, our proposed model combines a multi-head attention mechanism with the YOLOv7 object detection framework. Furthermore, we employ the ByteTrack method for real-time apple counting, which guarantees effective monitoring of apples. To verify the efficacy of our proposed model, a thorough comparative assessment is performed against several current apple detection and counting techniques. The outcomes demonstrate the effectiveness of our strategy, which consistently surpassed competing methods, achieving a precision of 0.92, a recall of 0.96, an F1 score of 0.95, and a low MAPE of 0.027.
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision)
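The multi-head attention mechanism the authors combine with YOLOv7 can be sketched in NumPy as follows. This is a structural illustration with hypothetical weight shapes, not the paper's implementation: queries, keys, and values are projected, split into heads, attended over, and recombined.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, w_q, w_k, w_v, w_o, n_heads):
    # x: (tokens, d_model); all weight matrices: (d_model, d_model).
    tokens, d_model = x.shape
    d_head = d_model // n_heads
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Split into heads: (n_heads, tokens, d_head).
    split = lambda m: m.reshape(tokens, n_heads, d_head).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)
    # Scaled dot-product attention per head.
    scores = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d_head))
    # Concatenate heads and apply the output projection.
    out = (scores @ v).transpose(1, 0, 2).reshape(tokens, d_model)
    return out @ w_o
```

In a detector, such a block would typically be inserted on feature-map positions (the "tokens" here) to let occluded regions attend to context elsewhere in the image.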

25 pages, 1648 KiB  
Article
The Integration of the Internet of Things, Artificial Intelligence, and Blockchain Technology for Advancing the Wine Supply Chain
by Nino Adamashvili, Nino Zhizhilashvili and Caterina Tricase
Computers 2024, 13(3), 72; https://doi.org/10.3390/computers13030072 - 8 Mar 2024
Cited by 19 | Viewed by 7373
Abstract
The study presents a comprehensive examination of the recent advancements in the field of wine production using the Internet of Things (IoT), Artificial Intelligence (AI), and Blockchain Technology (BCT). The paper aims to provide insights into the implementation of these technologies in the wine supply chain and to identify the potential benefits associated with their use. The study highlights the various applications of IoT, AI, and BCT in wine production, including vineyard management, wine quality control, and supply chain management. It also discusses the potential benefits of these technologies, such as improved efficiency, increased transparency, and reduced costs. The study concludes by presenting the framework proposed by the authors in order to overcome the challenges associated with the implementation of these technologies in the wine supply chain and suggests areas for future research. The proposed framework addresses the challenges of lack of transparency, lack of ecosystem management in the wine industry, and irresponsible spending associated with the lack of monitoring and prediction tools. Overall, the study provides valuable insights into the potential of IoT, AI, and BCT in optimizing the wine supply chain and offers a comprehensive review of the existing literature on the subject.

22 pages, 3218 KiB  
Article
Integrating the Internet of Things (IoT) in SPA Medicine: Innovations and Challenges in Digital Wellness
by Mario Casillo, Liliana Cecere, Francesco Colace, Angelo Lorusso and Domenico Santaniello
Computers 2024, 13(3), 67; https://doi.org/10.3390/computers13030067 - 6 Mar 2024
Cited by 14 | Viewed by 3294
Abstract
Integrating modern and innovative technologies such as the Internet of Things (IoT) and Machine Learning (ML) presents new opportunities in healthcare, especially in medical spa therapies. Once considered palliative, these therapies, conducted using mineral/thermal water, are now recognized as a targeted and specific therapeutic modality. The peculiarity of these treatments lies in their simplicity of administration, which allows for prolonged treatments, often lasting weeks, with progressive and controlled therapeutic effects. Thanks to new technologies, it will be possible to continuously monitor the patient, both on-site and remotely, increasing the effectiveness of the treatment. In this context, wearable devices, such as smartwatches, facilitate non-invasive monitoring of vital signs by collecting precise data on several key parameters, such as heart rate and blood oxygenation level, providing a detailed picture of treatment progress. The constant acquisition of data through the IoT, combined with the advanced analytics of ML technologies, enables precise analysis, real-time monitoring, and personalized treatment adaptation. This article introduces an IoT-based framework integrated with ML techniques to monitor spa treatments, providing tailored customer management and more effective results. A preliminary experimentation phase was designed and implemented to evaluate the system’s performance through evaluation questionnaires. Encouraging preliminary results have shown that this innovative approach can enhance and highlight the therapeutic value of spa therapies and their significant contribution to personalized healthcare.
(This article belongs to the Special Issue Sensors and Smart Cities 2023)

25 pages, 752 KiB  
Review
Security and Privacy of Technologies in Health Information Systems: A Systematic Literature Review
by Parisasadat Shojaei, Elena Vlahu-Gjorgievska and Yang-Wai Chow
Computers 2024, 13(2), 41; https://doi.org/10.3390/computers13020041 - 31 Jan 2024
Cited by 29 | Viewed by 40962
Abstract
Health information systems (HISs) have immense value for healthcare institutions, as they provide secure storage, efficient retrieval, insightful analysis, seamless exchange, and collaborative sharing of patient health information. HISs are implemented to meet patient needs, as well as to ensure the security and privacy of medical data, including confidentiality, integrity, and availability, which are necessary to achieve high-quality healthcare services. This systematic literature review identifies various technologies and methods currently employed to enhance the security and privacy of medical data within HISs. Various technologies have been utilized to enhance the security and privacy of healthcare information, such as the IoT, blockchain, mobile health applications, cloud computing, and combined technologies. This study also identifies three key security aspects, namely, secure access control, data sharing, and data storage, and discusses the challenges faced in each aspect that must be enhanced to ensure the security and privacy of patient information in HISs.
(This article belongs to the Special Issue When Blockchain Meets IoT: Challenges and Potentials)

23 pages, 1225 KiB  
Article
Error Pattern Discovery in Spellchecking Using Multi-Class Confusion Matrix Analysis for the Croatian Language
by Gordan Gledec, Mladen Sokele, Marko Horvat and Miljenko Mikuc
Computers 2024, 13(2), 39; https://doi.org/10.3390/computers13020039 - 29 Jan 2024
Cited by 5 | Viewed by 2342
Abstract
This paper introduces a novel approach to the creation and application of confusion matrices for error pattern discovery in spellchecking for the Croatian language. The experimental dataset has been derived from a corpus of mistyped words and user corrections collected since 2008 using the Croatian spellchecker available at ispravi.me. The important role of confusion matrices in enhancing the precision of spellcheckers, particularly within the diverse linguistic context of the Croatian language, is investigated. Common causes of spelling errors, emphasizing the challenges posed by diacritic usage, have been identified and analyzed. This research contributes to the advancement of spellchecking technologies and provides a more comprehensive understanding of linguistic details, particularly in languages with diacritic-rich orthographies, like Croatian. The presented user-data-driven approach demonstrates the potential for custom spellchecking solutions, especially considering the ever-changing dynamics of language use in digital communication.
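A minimal sketch of how a character-level confusion matrix can be derived from (typo, correction) pairs follows. This is our simplified illustration; the paper's multi-class analysis over the ispravi.me corpus is considerably richer, but even counting single-character substitutions surfaces diacritic-style errors such as c/č or z/ž.

```python
from collections import Counter

def char_confusions(pairs):
    """Count single-character substitutions in (typo, correction) pairs.

    Only equal-length pairs differing in exactly one position are
    counted, which already captures the diacritic-omission errors
    common in Croatian typing.
    """
    matrix = Counter()
    for typo, correct in pairs:
        if len(typo) != len(correct):
            continue  # insertions/deletions need an alignment step
        diffs = [(t, c) for t, c in zip(typo, correct) if t != c]
        if len(diffs) == 1:
            matrix[diffs[0]] += 1
    return matrix
```

The resulting counts can then weight candidate corrections: substitutions the user population makes often (e.g. typing "z" for "ž") get ranked higher.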

23 pages, 6251 KiB  
Article
Augmented Reality Escape Classroom Game for Deep and Meaningful English Language Learning
by Angeliki Voreopoulou, Stylianos Mystakidis and Avgoustos Tsinakos
Computers 2024, 13(1), 24; https://doi.org/10.3390/computers13010024 - 16 Jan 2024
Cited by 12 | Viewed by 4089
Abstract
A significant volume of literature has extensively reported on and presented the benefits of employing escape classroom games (ECGs), on one hand, and on augmented reality (AR) in English language learning, on the other. However, there is little evidence on how AR-powered ECGs can enhance deep and meaningful foreign language learning. Hence, this study presents the design, development and user evaluation of an innovative augmented reality escape classroom game created for teaching English as a foreign language (EFL). The game comprises an imaginative guided group tour around the Globe Theatre in London that is being disrupted by Shakespeare’s ghost. The game was evaluated by following a qualitative research method that depicts the in-depth perspectives of ten in-service English language teachers. The data collection instruments included a 33-item questionnaire and semi-structured interviews. The findings suggest that this escape game is a suitable pedagogical tool for deep and meaningful language learning and that it can raise cultural awareness, while enhancing vocabulary retention and the development of receptive and productive skills in English. Students’ motivation and satisfaction levels toward language learning are estimated to remain high due to the game’s playful nature, its interactive elements, as well as the joyful atmosphere created through active communication, collaboration, creativity, critical thinking and peer work. This study provides guidelines and support for the design and development of similar augmented reality escape classroom games (ARECGs) to improve teaching practices and foreign language education.
(This article belongs to the Special Issue Extended Reality (XR) Applications in Education 2023)

19 pages, 802 KiB  
Review
Securing Mobile Edge Computing Using Hybrid Deep Learning Method
by Olusola Adeniyi, Ali Safaa Sadiq, Prashant Pillai, Mohammad Aljaidi and Omprakash Kaiwartya
Computers 2024, 13(1), 25; https://doi.org/10.3390/computers13010025 - 16 Jan 2024
Cited by 18 | Viewed by 3611
Abstract
In recent years, Mobile Edge Computing (MEC) has revolutionized the landscape of the telecommunication industry by offering low-latency, high-bandwidth, and real-time processing. With this advancement comes a broad range of security challenges, the most prominent of which is Distributed Denial of Service (DDoS) attacks, which threaten the availability and performance of MEC’s services. In most cases, Intrusion Detection Systems (IDSs), security tools that monitor networks and systems for suspicious activity and notify administrators in real time of potential cyber threats, have relied on shallow Machine Learning (ML) models that are limited in their ability to identify and mitigate DDoS attacks. This article highlights the drawbacks of current IDS solutions, primarily their reliance on shallow ML techniques, and proposes a novel hybrid Autoencoder–Multi-Layer Perceptron (AE–MLP) model for intrusion detection as a solution against DDoS attacks in the MEC environment. The proposed hybrid AE–MLP model leverages autoencoders’ feature extraction capabilities to capture intricate patterns and anomalies within network traffic data. This extracted knowledge is then fed into a Multi-Layer Perceptron (MLP) network, enabling deep learning techniques to further analyze and classify potential threats. By integrating both AE and MLP, the hybrid model achieves higher accuracy and robustness in identifying DDoS attacks while minimizing false positives. In extensive experiments using the recently released NF-UQ-NIDS-V2 dataset, which contains a wide range of DDoS attacks, the proposed hybrid AE–MLP model achieved a high accuracy of 99.98%. Based on these results, the hybrid approach performs better than several similar techniques.
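The shape of the AE-MLP pipeline, an encoder compressing traffic features into a latent vector that an MLP head then classifies, can be sketched as follows. This is a structural illustration with random, untrained weights and hypothetical dimensions, not the trained model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def encode(x, w_enc, b_enc):
    # Autoencoder's encoder half: compress traffic features into a
    # lower-dimensional latent representation.
    return relu(x @ w_enc + b_enc)

def mlp_classify(z, w1, b1, w2, b2):
    # MLP head: map latent features to attack/benign class indices.
    h = relu(z @ w1 + b1)
    return (h @ w2 + b2).argmax(axis=-1)

# Hypothetical dimensions: 40 flow features -> 16 latent -> 2 classes.
n_features, n_latent, n_hidden, n_classes = 40, 16, 32, 2
w_enc = rng.normal(size=(n_features, n_latent)); b_enc = np.zeros(n_latent)
w1 = rng.normal(size=(n_latent, n_hidden));      b1 = np.zeros(n_hidden)
w2 = rng.normal(size=(n_hidden, n_classes));     b2 = np.zeros(n_classes)

batch = rng.normal(size=(8, n_features))   # 8 network flows
preds = mlp_classify(encode(batch, w_enc, b_enc), w1, b1, w2, b2)
```

In the actual approach, the encoder would first be trained to reconstruct benign traffic, and the MLP would then be trained on the resulting latent features.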

21 pages, 2681 KiB  
Article
A Comparative Study on Recent Automatic Data Fusion Methods
by Luis Manuel Pereira, Addisson Salazar and Luis Vergara
Computers 2024, 13(1), 13; https://doi.org/10.3390/computers13010013 - 30 Dec 2023
Cited by 11 | Viewed by 4564
Abstract
Automatic data fusion is an important field of machine learning that has been increasingly studied. The objective is to improve the classification performance of several individual classifiers in terms of accuracy and stability of the results. This paper presents a comparative study of recent data fusion methods. The fusion step can be applied at early and/or late stages of the classification procedure. Early fusion consists of combining features from different sources or domains to form the observation vector before the training of the individual classifiers. In contrast, late fusion consists of combining the results from the individual classifiers after the testing stage. Late fusion has two setups: combination of the posterior probabilities (scores), which is called soft fusion, and combination of the decisions, which is called hard fusion. A theoretical analysis of the conditions for applying the three kinds of fusion (early, late soft, and late hard) is introduced. We then provide a comparative analysis of different fusion schemes, including the weaknesses and strengths of the state-of-the-art methods, studied from the following perspectives: sensors, features, scores, and decisions.
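The three fusion setups described above can be sketched as follows (a minimal NumPy illustration under our own naming, not the paper's code): early fusion concatenates features before training, soft fusion averages posterior probabilities, and hard fusion takes a majority vote over decisions.

```python
import numpy as np

def early_fusion(feature_sets):
    # Concatenate per-source features into one observation vector.
    return np.concatenate(feature_sets, axis=-1)

def late_soft_fusion(score_sets):
    # Average posterior probabilities (scores) from the individual
    # classifiers, then decide; shape (n_classifiers, n_samples, n_classes).
    return np.mean(score_sets, axis=0).argmax(axis=-1)

def late_hard_fusion(decision_sets):
    # Majority vote over the individual classifiers' decisions;
    # shape (n_classifiers, n_samples).
    decisions = np.asarray(decision_sets)
    n_classes = decisions.max() + 1
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, decisions)
    return votes.argmax(axis=0)
```

Soft fusion keeps the classifiers' confidence information; hard fusion discards it but only requires each classifier's final label.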

27 pages, 1319 KiB  
Article
Sports Analytics and Text Mining NBA Data to Assess Recovery from Injuries and Their Economic Impact
by Vangelis Sarlis, George Papageorgiou and Christos Tjortjis
Computers 2023, 12(12), 261; https://doi.org/10.3390/computers12120261 - 16 Dec 2023
Cited by 8 | Viewed by 5879
Abstract
Injuries are an unfortunate part of professional sports. This study aims to explore the multi-dimensional impact of injuries in professional basketball, focusing on player performance, team dynamics, and economic outcomes. Employing advanced machine learning and text mining techniques on suitably preprocessed NBA data, we examined the intricate interplay between injury and performance metrics. Our findings reveal that specific anatomical sub-areas, notably knees, ankles, and thighs, are crucial for athletic performance and injury prevention. The analysis revealed the significant economic burden that certain injuries impose on teams, necessitating comprehensive long-term strategies for injury management. The results provide valuable insights into the distribution of injuries and their varied effects, which are essential for developing effective prevention and economic strategies in basketball. By illuminating how injuries influence performance and recovery dynamics, this research offers comprehensive insights that are beneficial for NBA teams, healthcare professionals, medical staff, and trainers, paving the way for enhanced player care and optimized performance strategies.

22 pages, 1700 KiB  
Article
Performance Comparison of Directed Acyclic Graph-Based Distributed Ledgers and Blockchain Platforms
by Felix Kahmann, Fabian Honecker, Julian Dreyer, Marten Fischer and Ralf Tönjes
Computers 2023, 12(12), 257; https://doi.org/10.3390/computers12120257 - 9 Dec 2023
Cited by 13 | Viewed by 4918
Abstract
Since the introduction of the first cryptocurrency, Bitcoin, in 2008, the gain in popularity of distributed ledger technologies (DLTs) has led to increasing demand and, consequently, a larger number of network participants in general. Scaling blockchain-based solutions to cope with several thousand transactions per second, or with a growing number of nodes, has always been a desirable goal for most developers. Enabling these performance metrics can lead to further acceptance of DLTs and even faster systems in general. With the introduction of directed acyclic graphs (DAGs) as the underlying data structure to store the transactions within the distributed ledger, major performance gains have been achieved. In this article, we review the most prominent directed acyclic graph platforms and evaluate their key performance indicators in terms of transaction throughput and network latency. The evaluation aims to show whether the theoretically improved scalability of DAGs also applies in practice. For this, we set up multiple test networks for each DAG and blockchain framework and conducted broad performance measurements to provide a mutual basis for comparison between the different solutions. Using the transactions-per-second numbers of each technology, we created a side-by-side evaluation that allows for a direct scalability estimation of the systems. Our findings support the fact that, due to their internal, more parallel data structure, DAG-based solutions offer significantly higher transaction throughput than blockchain-based platforms. However, given their relatively early maturity, fully DAG-based platforms need to further evolve in their feature set to reach the same level of programmability and adoption as modern blockchain platforms.
With our findings at hand, developers of modern digital storage systems are able to reasonably determine whether to use a DAG-based distributed ledger technology solution in their production environment, i.e., replacing a database system with a DAG platform. Furthermore, we provide two real-world application scenarios, one being smart grid communication and the other originating from trusted supply chain management, that benefit from the introduction of DAG-based technologies.
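The structural difference driving the throughput gap can be illustrated with a minimal DAG ledger sketch (a hypothetical class of our own, not any platform's API): each transaction may reference several parents, so independent transactions attach to the graph in parallel rather than queueing for inclusion in the next block.

```python
import itertools

class DagLedger:
    """Minimal DAG-structured ledger sketch: transactions reference
    one or more parent transactions instead of a single previous block."""

    def __init__(self):
        self.parents = {"genesis": ()}
        self._ids = itertools.count(1)

    def tips(self):
        # Tips are transactions not yet referenced by any other
        # transaction; new transactions typically approve these.
        referenced = {p for ps in self.parents.values() for p in ps}
        return sorted(set(self.parents) - referenced)

    def add(self, parent_ids):
        # Attach a new transaction approving the given parents.
        for p in parent_ids:
            if p not in self.parents:
                raise KeyError(p)
        tx_id = f"tx{next(self._ids)}"
        self.parents[tx_id] = tuple(parent_ids)
        return tx_id
```

Two transactions approving the same parent coexist as independent tips until a later transaction approves both, which is exactly the parallelism a linear chain cannot express.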
17 pages, 703 KiB  
Article
Enhancing Web Application Security through Automated Penetration Testing with Multiple Vulnerability Scanners
by Khaled Abdulghaffar, Nebrase Elmrabit and Mehdi Yousefi
Computers 2023, 12(11), 235; https://doi.org/10.3390/computers12110235 - 15 Nov 2023
Cited by 7 | Viewed by 9662
Abstract
Penetration testers have increasingly adopted multiple penetration testing scanners to ensure the robustness of web applications. However, a notable limitation of many scanning techniques is their susceptibility to producing false positives. This paper presents a novel framework designed to automate the operation of multiple Web Application Vulnerability Scanners (WAVS) within a single platform. The framework generates a combined vulnerabilities report using two algorithms: an automation algorithm and a novel combination algorithm that produces comprehensive lists of detected vulnerabilities. The framework leverages the capabilities of two web vulnerability scanners, Arachni and OWASP ZAP. The study begins with an extensive review of the existing scientific literature, focusing on open-source WAVS and exploring the OWASP 2021 guidelines. Following this, the framework development phase addresses the challenge of varying results obtained from different WAVS. This framework’s core objective is to combine the results of multiple WAVS into a consolidated vulnerability report, ultimately improving detection rates and overall security. The study demonstrates that the combined outcomes produced by the proposed framework exhibit greater accuracy compared to individual scanning results obtained from Arachni and OWASP ZAP. In summary, the study reveals that the Union List outperforms individual scanners, particularly regarding recall and F-measure. Consequently, adopting multiple vulnerability scanners is recommended as an effective strategy to bolster vulnerability detection in web applications. Full article
(This article belongs to the Special Issue Using New Technologies on Cyber Security Solutions)
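As a sketch of the combination step: if each scanner reports findings as (endpoint, vulnerability-type) pairs, the combined list is a set union, and recall and F-measure follow from their standard definitions. The function names and toy findings below are illustrative assumptions, not the framework's actual data model:

```python
# Illustrative sketch of a "Union List" combination step; all data hypothetical.

def union_list(*scanner_reports: set) -> set:
    """Merge findings from multiple scanners, deduplicating identical entries."""
    merged = set()
    for report in scanner_reports:
        merged |= report
    return merged

def recall_and_f1(found: set, ground_truth: set) -> tuple:
    """Standard recall and F-measure against a known vulnerability list."""
    true_pos = len(found & ground_truth)
    recall = true_pos / len(ground_truth) if ground_truth else 0.0
    precision = true_pos / len(found) if found else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return recall, f1

# Hypothetical per-scanner findings and ground truth
arachni_report = {("/login", "sqli"), ("/search", "xss")}
zap_report = {("/search", "xss"), ("/upload", "path-traversal")}
known_vulns = {("/login", "sqli"), ("/search", "xss"),
               ("/upload", "path-traversal"), ("/admin", "csrf")}

combined = union_list(arachni_report, zap_report)
print(recall_and_f1(combined, known_vulns))
```

The union covers three of the four known issues here, so its recall exceeds either scanner's alone, which mirrors the study's observation about the Union List.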
34 pages, 18856 KiB  
Article
Creating Location-Based Augmented Reality Games and Immersive Experiences for Touristic Destination Marketing and Education
by Alexandros Kleftodimos, Athanasios Evagelou, Stefanos Gkoutzios, Maria Matsiola, Michalis Vrigkas, Anastasia Yannacopoulou, Amalia Triantafillidou and Georgios Lappas
Computers 2023, 12(11), 227; https://doi.org/10.3390/computers12110227 - 7 Nov 2023
Cited by 8 | Viewed by 5603
Abstract
The aim of this paper is to present an approach that utilizes several mixed reality technologies for touristic promotion and education. More specifically, mixed reality applications and games were created to promote the mountainous areas of Western Macedonia, Greece, and to educate visitors on various aspects of these destinations, such as their history and cultural heritage. Location-based augmented reality (AR) games were designed to guide the users to visit and explore the destinations, get informed, gather points and prizes by accomplishing specific tasks, and meet virtual characters that tell stories. Furthermore, an immersive lab was established to inform visitors about the region of interest through mixed reality content designed for entertainment and education. The lab visitors can experience content and games through virtual reality (VR) and augmented reality (AR) wearable devices. Likewise, 3D content can be viewed through special stereoscopic monitors. An evaluation of the lab experience was performed with a sample of 82 visitors who positively evaluated features of the immersive experience such as the level of satisfaction, immersion, educational usefulness, the intention to visit the mountainous destinations of Western Macedonia, intention to revisit the lab, and intention to recommend the experience to others. Full article
(This article belongs to the Special Issue Game-Based Learning, Gamification in Education and Serious Games 2023)
34 pages, 11710 KiB  
Article
BigDaM: Efficient Big Data Management and Interoperability Middleware for Seaports as Critical Infrastructures
by Anastasios Nikolakopoulos, Matilde Julian Segui, Andreu Belsa Pellicer, Michalis Kefalogiannis, Christos-Antonios Gizelis, Achilleas Marinakis, Konstantinos Nestorakis and Theodora Varvarigou
Computers 2023, 12(11), 218; https://doi.org/10.3390/computers12110218 - 27 Oct 2023
Cited by 7 | Viewed by 3694
Abstract
Over the last few years, the European Union (EU) has placed significant emphasis on the interoperability of critical infrastructures (CIs). Ports are among the main CI transportation infrastructures. The control systems managing such infrastructures are constantly evolving and handle diverse sets of people, data, and processes. Additionally, interdependencies among different infrastructures can lead to discrepancies in data models that propagate and intensify across interconnected systems. This article introduces “BigDaM”, a Big Data Management framework for critical infrastructures. It is a cutting-edge data model that adheres to the latest technological standards and aims to consolidate APIs and services within highly complex CI infrastructures. Our approach takes a bottom-up perspective, treating each service interconnection as an autonomous entity that must align with the proposed common vocabulary and data model. By injecting strict guidelines into the service/component development lifecycle, we explicitly promote interoperability among the services within critical infrastructure ecosystems. This approach facilitates the exchange and reuse of data from a shared repository among developers, small and medium-sized enterprises (SMEs), and large vendors. Business challenges have also been taken into account, in order to link the generated data assets of CIs with the business world. The complete framework has been tested in major EU ports, part of the transportation sector of CIs. The performance evaluation and testing results are also analyzed, highlighting the capabilities of the proposed approach. Full article
18 pages, 2672 KiB  
Article
Enhancing Automated Scoring of Math Self-Explanation Quality Using LLM-Generated Datasets: A Semi-Supervised Approach
by Ryosuke Nakamoto, Brendan Flanagan, Taisei Yamauchi, Yiling Dai, Kyosuke Takami and Hiroaki Ogata
Computers 2023, 12(11), 217; https://doi.org/10.3390/computers12110217 - 24 Oct 2023
Cited by 7 | Viewed by 4293
Abstract
In the realm of mathematics education, self-explanation stands as a crucial learning mechanism, allowing learners to articulate their comprehension of intricate mathematical concepts and strategies. As digital learning platforms grow in prominence, there are mounting opportunities to collect and utilize mathematical self-explanations. However, these opportunities are met with challenges in automated evaluation. Automatic scoring of mathematical self-explanations is crucial for preprocessing tasks, including the categorization of learner responses, identification of common misconceptions, and the creation of tailored feedback and model solutions. Nevertheless, this task is hindered by the dearth of ample sample sets. Our research introduces a semi-supervised technique using the large language model (LLM), specifically its Japanese variant, to enrich datasets for the automated scoring of mathematical self-explanations. We rigorously evaluated the quality of self-explanations across five datasets, ranging from human-evaluated originals to ones devoid of original content. Our results show that combining LLM-based explanations with mathematical material significantly improves the model’s accuracy. Interestingly, there is an optimal limit to how much synthetic self-explanation data can benefit the system. Exceeding this limit does not further improve outcomes. This study thus highlights the need for careful consideration when integrating synthetic data into solutions, especially within the mathematics discipline. Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning)
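A toy self-training loop illustrates the general shape of this idea: pseudo-label LLM-generated explanations with a scorer fit on the human-scored set, and cap how much synthetic data is added back, echoing the optimal-limit finding. The nearest-centroid scorer, feature vectors, and cap are placeholders, not the authors' model:

```python
# Toy semi-supervised augmentation sketch; scorer and data are placeholders.

def centroid(vectors):
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def fit(labelled):
    """One centroid per score class, from human-scored (features, score) pairs."""
    classes = {}
    for features, score in labelled:
        classes.setdefault(score, []).append(features)
    return {score: centroid(vs) for score, vs in classes.items()}

def predict(model, features):
    """Assign the score class whose centroid is nearest."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda s: dist(model[s], features))

def augment(human, synthetic_feats, max_ratio=0.5):
    """Pseudo-label synthetic items, capping additions at max_ratio * |human|."""
    model = fit(human)
    cap = int(max_ratio * len(human))  # "optimal limit": stop adding past the cap
    pseudo = [(f, predict(model, f)) for f in synthetic_feats[:cap]]
    return human + pseudo

# Hypothetical feature vectors for human-scored and LLM-generated explanations
human = [((0.9, 0.8), "high"), ((0.2, 0.1), "low"),
         ((0.85, 0.7), "high"), ((0.1, 0.3), "low")]
llm_generated = [(0.8, 0.9), (0.15, 0.2), (0.7, 0.6)]

print(augment(human, llm_generated))
```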
25 pages, 433 KiB  
Review
On the Robustness of ML-Based Network Intrusion Detection Systems: An Adversarial and Distribution Shift Perspective
by Minxiao Wang, Ning Yang, Dulaj H. Gunasinghe and Ning Weng
Computers 2023, 12(10), 209; https://doi.org/10.3390/computers12100209 - 17 Oct 2023
Cited by 7 | Viewed by 4463
Abstract
Utilizing machine learning (ML)-based approaches for network intrusion detection systems (NIDSs) raises valid concerns due to the inherent susceptibility of current ML models to various threats. Of particular concern are two significant threats associated with ML: adversarial attacks and distribution shifts. Although there has been a growing emphasis on researching the robustness of ML, current studies primarily concentrate on addressing specific challenges individually. These studies tend to target a particular aspect of robustness and propose innovative techniques to enhance that specific aspect. However, as a capability to respond to unexpected situations, the robustness of ML should be comprehensively built and maintained in every stage. In this paper, we aim to link the varying efforts throughout the whole ML workflow to guide the design of ML-based NIDSs with systematic robustness. Toward this goal, we conduct a methodical evaluation of the progress made thus far in enhancing the robustness of the targeted NIDS application task. Specifically, we delve into the robustness aspects of ML-based NIDSs against adversarial attacks and distribution shift scenarios. For each perspective, we organize the literature into robustness-related challenges and technical solutions based on the ML workflow. For instance, we introduce some advanced potential solutions that can improve robustness, such as data augmentation, contrastive learning, and robustness certification. Based on our survey, we identify and discuss the ML robustness research gaps and future directions in the field of NIDS. Finally, we highlight that building and patching robustness throughout the life cycle of an ML-based NIDS is critical. Full article
(This article belongs to the Special Issue Big Data Analytic for Cyber Crime Investigation and Prevention 2023)
17 pages, 13529 KiB  
Article
Augmented Reality in Primary Education: An Active Learning Approach in Mathematics
by Christina Volioti, Christos Orovas, Theodosios Sapounidis, George Trachanas and Euclid Keramopoulos
Computers 2023, 12(10), 207; https://doi.org/10.3390/computers12100207 - 16 Oct 2023
Cited by 7 | Viewed by 3975
Abstract
Active learning, a student-centered approach, engages students in the learning process and requires them to solve problems using educational activities that enhance their learning outcomes. Augmented Reality (AR) has revolutionized the field of education by creating an intuitive environment where real and virtual objects interact, thereby facilitating the understanding of complex concepts. Consequently, this research proposes an application, called “Cooking Math”, that utilizes AR to promote active learning in sixth-grade elementary school mathematics. The application comprises various educational games, each presenting a real-life problem, particularly focused on cooking recipes. To evaluate the usability of the proposed AR application, a pilot study was conducted involving three groups: (a) 65 undergraduate philosophy and education students, (b) 74 undergraduate engineering students, and (c) 35 sixth-grade elementary school students. To achieve this, (a) the System Usability Scale (SUS) questionnaire was provided to all participants and (b) semi-structured interviews were organized to gather the participants’ perspectives. The SUS results were quite satisfactory. In addition, the interviews’ outcomes indicated that the elementary students displayed enthusiasm, the philosophy and education students emphasized the pedagogical value of such technology, while the engineering students suggested that further improvements were necessary to enhance the effectiveness of the learning experience. Full article
(This article belongs to the Special Issue Xtended or Mixed Reality (AR+VR) for Education 2024)
18 pages, 2324 KiB  
Article
The Potential of Machine Learning for Wind Speed and Direction Short-Term Forecasting: A Systematic Review
by Décio Alves, Fábio Mendonça, Sheikh Shanawaz Mostafa and Fernando Morgado-Dias
Computers 2023, 12(10), 206; https://doi.org/10.3390/computers12100206 - 13 Oct 2023
Cited by 16 | Viewed by 4349
Abstract
Wind forecasting, which is essential for numerous services and safety, has significantly improved in accuracy due to machine learning advancements. This study reviews 23 articles from 1983 to 2023 on machine learning for wind speed and direction nowcasting. The prediction horizons ranged from 1 min to 1 week, with more articles at lower temporal resolutions. Most works employed neural networks, focusing recently on deep learning models. Among the reported performance metrics, the most prevalent were mean absolute error, mean squared error, and mean absolute percentage error. Considering these metrics, the mean performance of the examined works was 0.56 m/s, 1.10 m/s, and 6.72%, respectively. The results underscore the effectiveness of machine learning in predicting wind conditions from high-temporal-resolution data and demonstrate that deep learning models surpassed traditional methods, improving the accuracy of wind speed and direction forecasts. Moreover, it was found that the inclusion of non-wind weather variables does not benefit the model’s overall performance. Further studies are recommended to predict both wind speed and direction using diverse spatial data points and high-resolution data, along with deep learning models. Full article
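The three metrics most reported in the surveyed works follow directly from their definitions; a minimal, dependency-free sketch with hypothetical wind-speed values (the numbers are illustrative, not taken from the review):

```python
# Dependency-free implementations of MAE, MSE, and MAPE, straight from their
# definitions. The wind-speed values are hypothetical.

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean squared error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean absolute percentage error (in %); assumes no zero observations."""
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

obs = [5.0, 6.2, 4.8, 7.1]    # observed wind speed, m/s (hypothetical)
pred = [5.4, 6.0, 5.1, 6.7]   # model predictions (hypothetical)
print(mae(obs, pred), mse(obs, pred), mape(obs, pred))
```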
36 pages, 702 KiB  
Article
Determining Resampling Ratios Using BSMOTE and SVM-SMOTE for Identifying Rare Attacks in Imbalanced Cybersecurity Data
by Sikha S. Bagui, Dustin Mink, Subhash C. Bagui and Sakthivel Subramaniam
Computers 2023, 12(10), 204; https://doi.org/10.3390/computers12100204 - 11 Oct 2023
Cited by 11 | Viewed by 2682
Abstract
Machine Learning is widely used in cybersecurity for detecting network intrusions. Though network attacks are increasing steadily, the percentage of such attacks relative to actual network traffic is significantly small. And herein lies the problem in training Machine Learning models to detect and classify malicious attacks from routine traffic: the ratio of actual attacks to benign data is significantly low, which results in highly imbalanced datasets. In this work, we address this issue using data resampling techniques. Although several oversampling and undersampling techniques are available, this paper addresses how they can be used most effectively. Two oversampling techniques, Borderline SMOTE and SVM-SMOTE, are used for oversampling minority data, and random undersampling is used for undersampling majority data. Both oversampling techniques apply k-nearest neighbors (KNN) after selecting a random minority sample point, hence the impact of varying KNN values on the performance of the oversampling techniques is also analyzed. Random Forest is used for classification of the rare attacks. This work is done on a widely used cybersecurity dataset, UNSW-NB15, and the results show that 10% oversampling gives better results for both BSMOTE and SVM-SMOTE. Full article
(This article belongs to the Special Issue Big Data Analytic for Cyber Crime Investigation and Prevention 2023)
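The resampling-ratio mechanics can be sketched with a stripped-down SMOTE-style oversampler: interpolate between a random minority point and one of its k nearest minority neighbours until the minority class reaches the target fraction of the majority class. This is plain SMOTE for illustration only; the paper's Borderline-SMOTE and SVM-SMOTE variants choose their seed points differently, and all data below are hypothetical:

```python
import random

# Stripped-down SMOTE-style oversampler (illustrative, not Borderline/SVM-SMOTE).

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def smote_like(minority, ratio, majority_size, k=5, seed=0):
    """Synthesize points until the minority reaches ratio * majority_size."""
    rng = random.Random(seed)
    target = int(ratio * majority_size)
    synthetic = []
    while len(minority) + len(synthetic) < target:
        p = rng.choice(minority)
        # k nearest minority neighbours of p (excluding p itself)
        neighbours = sorted(minority, key=lambda q: euclidean(p, q))[1:k + 1]
        q = rng.choice(neighbours)
        gap = rng.random()
        # New point interpolated between p and its chosen neighbour
        synthetic.append(tuple(x + gap * (y - x) for x, y in zip(p, q)))
    return minority + synthetic

# Hypothetical 2-D minority samples; e.g. 10% oversampling vs. 1000 majority rows
minority = [(0.1, 0.2), (0.15, 0.25), (0.2, 0.1), (0.05, 0.3)]
resampled = smote_like(minority, ratio=0.10, majority_size=1000)
print(len(resampled))
```

Varying `ratio` reproduces the experiment design above (e.g. trying 10%, 20%, ... and measuring downstream classifier performance); varying `k` corresponds to the KNN sensitivity analysis.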
29 pages, 2949 KiB  
Article
Exploring the Potential of Distributed Computing Continuum Systems
by Praveen Kumar Donta, Ilir Murturi, Victor Casamayor Pujol, Boris Sedlak and Schahram Dustdar
Computers 2023, 12(10), 198; https://doi.org/10.3390/computers12100198 - 2 Oct 2023
Cited by 26 | Viewed by 6797
Abstract
Computing paradigms have evolved significantly in recent decades, moving from large room-sized resources (processors and memory) to incredibly small computing nodes. Recently, this computing power has been embraced by almost all application fields. Currently, distributed computing continuum systems (DCCSs) are ushering in the era of a computing paradigm that unifies various computing resources, including cloud, fog/edge computing, the Internet of Things (IoT), and mobile devices, into a seamless and integrated continuum. Its seamless infrastructure efficiently manages diverse processing loads and ensures a consistent user experience. Furthermore, it provides a holistic solution to meet modern computing needs. In this context, this paper presents a deeper understanding of DCCSs’ potential in today’s computing environment. First, we discuss the evolution of computing paradigms up to DCCS. The general architectures, components, and various computing devices are discussed, and the benefits and limitations of each computing paradigm are analyzed. After that, our discussion continues into the various computing devices that constitute part of DCCS to achieve computational goals in current and futuristic applications. In addition, we delve into the key features and benefits of DCCS from the perspective of current computing needs. Furthermore, we provide a comprehensive overview of emerging applications (with a case study analysis) that particularly need DCCS architectures to perform their tasks. Finally, we describe the open challenges and possible developments that need to be made to DCCS to unleash its widespread potential for the majority of applications. Full article
(This article belongs to the Special Issue Artificial Intelligence in Industrial IoT Applications)
13 pages, 617 KiB  
Article
Comparison of Automated Machine Learning (AutoML) Tools for Epileptic Seizure Detection Using Electroencephalograms (EEG)
by Swetha Lenkala, Revathi Marry, Susmitha Reddy Gopovaram, Tahir Cetin Akinci and Oguzhan Topsakal
Computers 2023, 12(10), 197; https://doi.org/10.3390/computers12100197 - 29 Sep 2023
Cited by 8 | Viewed by 3211
Abstract
Epilepsy is a neurological disease characterized by recurrent seizures caused by abnormal electrical activity in the brain. One of the methods used to diagnose epilepsy is through electroencephalogram (EEG) analysis. EEG is a non-invasive medical test for quantifying electrical activity in the brain. Applying machine learning (ML) to EEG data for epilepsy diagnosis has the potential to be more accurate and efficient. However, expert knowledge is required to set up an ML model with the correct hyperparameters. Automated machine learning (AutoML) tools aim to make ML more accessible to non-experts and automate many ML processes to create a high-performing ML model. This article explores the use of AutoML tools for diagnosing epilepsy using EEG data. The study compares the performance of three different AutoML tools, AutoGluon, Auto-Sklearn, and Amazon SageMaker, on three different datasets from the UC Irvine ML Repository, the Bonn EEG time series dataset, and Zenodo. Performance measures used for evaluation include accuracy, F1 score, recall, and precision. The results show that all three AutoML tools were able to generate high-performing ML models for the diagnosis of epilepsy. The generated ML models perform better when the training dataset is larger. Amazon SageMaker and Auto-Sklearn performed better with smaller datasets. This is the first study to compare several AutoML tools, and it shows that AutoML tools can be utilized to create well-performing solutions for the diagnosis of epilepsy by processing hard-to-analyze EEG time-series data. Full article
(This article belongs to the Special Issue Artificial Intelligence in Control)
17 pages, 556 KiB  
Article
Predictive Modeling of Student Dropout in MOOCs and Self-Regulated Learning
by Georgios Psathas, Theano K. Chatzidaki and Stavros N. Demetriadis
Computers 2023, 12(10), 194; https://doi.org/10.3390/computers12100194 - 27 Sep 2023
Cited by 10 | Viewed by 4090
Abstract
The primary objective of this study is to examine the factors that contribute to the early prediction of Massive Open Online Courses (MOOCs) dropouts in order to identify and support at-risk students. We utilize MOOC data of specific duration, with a guided study pace. The dataset exhibits class imbalance, and we apply oversampling techniques to ensure data balancing and unbiased prediction. We examine the predictive performance of five classic classification machine learning (ML) algorithms under four different oversampling techniques and various evaluation metrics. Additionally, we explore the influence of self-reported self-regulated learning (SRL) data provided by students and various other prominent features of MOOCs as potential indicators of early stage dropout prediction. The research questions focus on (1) the performance of the classic classification ML models using various evaluation metrics before and after different methods of oversampling, (2) which self-reported data may constitute crucial predictors for dropout propensity, and (3) the effect of the SRL factor on the dropout prediction performance. The main conclusions are: (1) prominent predictors, including employment status, frequency of chat tool usage, prior subject-related experiences, gender, education, and willingness to participate, exhibit remarkable efficacy in achieving high to excellent recall performance, particularly when specific combinations of algorithms and oversampling methods are applied, (2) the self-reported SRL factor, combined with easily provided/self-reported features, performed well as a predictor in terms of recall when logistic regression (LR) and support vector machine (SVM) algorithms were employed, (3) it is crucial to test diverse machine learning algorithms and oversampling methods in predictive modeling. Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning)
14 pages, 959 KiB  
Article
Video Summarization Based on Feature Fusion and Data Augmentation
by Theodoros Psallidas and Evaggelos Spyrou
Computers 2023, 12(9), 186; https://doi.org/10.3390/computers12090186 - 15 Sep 2023
Cited by 6 | Viewed by 2662
Abstract
During the last few years, several technological advances have led to an increase in the creation and consumption of audiovisual multimedia content. Users are overexposed to videos via several social media or video sharing websites and mobile phone applications. For efficient browsing, searching, and navigation across several multimedia collections and repositories, e.g., for finding videos that are relevant to a particular topic or interest, this ever-increasing content should be efficiently described by informative yet concise content representations. A common solution to this problem is the construction of a brief summary of a video, which could be presented to the user instead of the full video, so that she/he could then decide whether to watch or ignore the whole video. Such summaries are ideally more expressive than other alternatives, such as brief textual descriptions or keywords. In this work, the video summarization problem is approached as a supervised classification task, which relies on feature fusion of audio and visual data. Specifically, the goal of this work is to generate dynamic video summaries, i.e., compositions of parts of the original video, which include its most essential video segments, while preserving the original temporal sequence. This work relies on annotated datasets on a per-frame basis, wherein parts of videos are annotated as being “informative” or “noninformative”, with the latter being excluded from the produced summary. The novelties of the proposed approach are: (a) prior to classification, a transfer learning strategy is employed to extract deep features from pretrained models; these features are used as input to the classifiers, making them more intuitive and objective; and (b) the training dataset is augmented using other publicly available datasets.
The proposed approach is evaluated using three datasets of user-generated videos, and it is demonstrated that deep features and data augmentation are able to improve the accuracy of video summaries based on human annotations. Moreover, it is domain independent, could be used on any video, and could be extended to rely on richer feature representations or include other data modalities. Full article
22 pages, 1783 KiB  
Article
FGPE+: The Mobile FGPE Environment and the Pareto-Optimized Gamified Programming Exercise Selection Model—An Empirical Evaluation
by Rytis Maskeliūnas, Robertas Damaševičius, Tomas Blažauskas, Jakub Swacha, Ricardo Queirós and José Carlos Paiva
Computers 2023, 12(7), 144; https://doi.org/10.3390/computers12070144 - 21 Jul 2023
Cited by 7 | Viewed by 4281
Abstract
This paper is poised to inform educators, policy makers, and software developers about the untapped potential of Progressive Web Applications (PWAs) in creating engaging, effective, and personalized learning experiences in the field of programming education. We aim to address a significant gap in the current understanding of the potential advantages and underutilisation of PWAs within the education sector, specifically for programming education. Despite the evident lack of recognition of PWAs in this arena, we present an innovative approach through the Framework for Gamification in Programming Education (FGPE). This framework takes advantage of the ubiquity and ease of use of PWAs, integrating them with a Pareto-optimized gamified programming exercise selection model that ensures personalized adaptive learning experiences by dynamically adjusting the complexity, content, and feedback of gamified exercises in response to the learners’ ongoing progress and performance. This study examines the mobile user experience of the FGPE PLE in different countries, namely Poland and Lithuania, providing novel insights into its applicability and efficiency. Our results demonstrate that combining advanced adaptive algorithms with the convenience of mobile technology has the potential to revolutionize programming education. The FGPE+ course group outperformed the Moodle group in terms of average perceived knowledge (M = 4.11, SD = 0.51). Full article
(This article belongs to the Special Issue Game-Based Learning, Gamification in Education and Serious Games 2023)
20 pages, 985 KiB  
Article
Adaptive Gamification in Science Education: An Analysis of the Impact of Implementation and Adapted Game Elements on Students’ Motivation
by Alkinoos-Ioannis Zourmpakis, Michail Kalogiannakis and Stamatios Papadakis
Computers 2023, 12(7), 143; https://doi.org/10.3390/computers12070143 - 18 Jul 2023
Cited by 36 | Viewed by 14206
Abstract
In recent years, gamification has captured the attention of researchers and educators, particularly in science education, where students often express negative emotions. Gamification methods aim to motivate learners to participate in learning by incorporating intrinsic and extrinsic motivational factors. However, gamification has yielded varying outcomes, prompting researchers to explore adaptive gamification as an alternative approach. Nevertheless, more research is needed on adaptive gamification approaches, particularly concerning motivation, which is the primary objective of gamification. In this study, we developed and tested an adaptive gamification environment based on specific motivational and psychological frameworks. This environment incorporated adaptive criteria, learning strategies, gaming elements, and all crucial aspects of science education for six classes of third-grade students in primary school. We employed a quantitative approach to gain insights into the motivational impact on students and their perception of the adaptive gamification application. We aimed to understand how each game element experienced by students influenced their motivation. Based on our findings, students were more motivated to learn science when using an adaptive gamification environment. Additionally, the adaptation process was largely successful, as students generally liked the game elements integrated into their lessons, indicating the effectiveness of the multidimensional framework employed in enhancing students’ experiences and engagement. Full article
(This article belongs to the Special Issue Game-Based Learning, Gamification in Education and Serious Games 2023)

25 pages, 4434 KiB  
Article
A Novel Dynamic Software-Defined Networking Approach to Neutralize Traffic Burst
by Aakanksha Sharma, Venki Balasubramanian and Joarder Kamruzzaman
Computers 2023, 12(7), 131; https://doi.org/10.3390/computers12070131 - 27 Jun 2023
Cited by 6 | Viewed by 2855
Abstract
Software-defined networking (SDN) provides a holistic view of the network. It is highly suitable for handling dynamic loads in a traditional network with minimal updates to the network infrastructure. However, the control plane of the standard SDN architecture, whether based on a single controller or [...] Read more.
Software-defined networking (SDN) provides a holistic view of the network. It is highly suitable for handling dynamic loads in a traditional network with minimal updates to the network infrastructure. However, the control plane of the standard SDN architecture, whether based on a single controller or multiple distributed controllers, faces severe bottleneck issues. Our initial research created a reference model for the traditional network using the standard SDN (referred to as SDN hereafter) in a network simulator called NetSim. Based on the network traffic, the reference models consisted of light, modest and heavy networks depending on the number of connected IoT devices. Furthermore, a priority scheduling and congestion control algorithm was proposed for the standard SDN, named extended SDN (eSDN), which minimises congestion and performs better than the standard SDN. However, the enhancement was suitable only for small-scale networks because, in a large-scale network, the eSDN does not support dynamic SDN controller mapping. Often, the same SDN controller gets overloaded, leading to a single point of failure. Our literature review shows that most proposed solutions are based on static SDN controller deployment, without considering the flow fluctuations and traffic bursts that cause a lack of real-time load balancing among SDN controllers and eventually increase network latency. Therefore, to maintain Quality of Service (QoS) in the network, it becomes imperative to neutralise on-the-fly traffic bursts that static controller deployments cannot handle. Thus, we propose a novel dynamic SDN (dSDN) controller mapping algorithm with multiple-controller placement to solve the identified issues. In dSDN, the SDN controllers are mapped dynamically as the load fluctuates. If any SDN controller reaches its maximum threshold, the remaining traffic is diverted to another controller, significantly reducing delay and enhancing overall performance. Our technique considers latency and load fluctuation in the network and manages situations where static mapping is ineffective in dealing with dynamic flow variation. Full article
(This article belongs to the Special Issue Software-Defined Internet of Everything)
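The threshold-based dynamic mapping idea in the abstract above can be made concrete with a toy greedy assignment: each switch's flow load goes to the least-loaded controller that still has headroom, and load is diverted once a controller reaches its threshold. This is an illustrative sketch under assumed names and numbers, not the paper's dSDN algorithm.

```python
def map_flows(flow_loads, num_controllers, threshold):
    """Greedily assign per-switch flow loads to controllers without
    exceeding a per-controller threshold; returns loads and the mapping."""
    loads = [0.0] * num_controllers
    mapping = {}
    for switch, load in flow_loads.items():
        # controllers that can absorb this load without crossing the threshold
        candidates = [c for c in range(num_controllers) if loads[c] + load <= threshold]
        # divert to the least-loaded candidate; if all are saturated,
        # fall back to the overall least-loaded controller
        target = min(candidates or range(num_controllers), key=lambda c: loads[c])
        loads[target] += load
        mapping[switch] = target
    return loads, mapping
```

For example, four switches with loads 40, 35, 30 and 20 and two controllers capped at 70 end up balanced at 60 and 65, with no controller exceeding its threshold.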

22 pages, 1104 KiB  
Article
Exploring Clustering Techniques for Analyzing User Engagement Patterns in Twitter Data
by Andreas Kanavos, Ioannis Karamitsos and Alaa Mohasseb
Computers 2023, 12(6), 124; https://doi.org/10.3390/computers12060124 - 19 Jun 2023
Cited by 10 | Viewed by 4234
Abstract
Social media platforms have revolutionized information exchange and socialization in today’s world. Twitter, as one of the prominent platforms, enables users to connect with others and express their opinions. This study focuses on analyzing user engagement levels on Twitter using graph mining and [...] Read more.
Social media platforms have revolutionized information exchange and socialization in today’s world. Twitter, as one of the prominent platforms, enables users to connect with others and express their opinions. This study focuses on analyzing user engagement levels on Twitter using graph mining and clustering techniques. We measure user engagement based on various tweet attributes, including retweets, replies, and more. Specifically, we explore the strength of user connections in Twitter networks by examining the diversity of edges. Our approach incorporates graph mining models that assign different weights to evaluate the significance of each connection. Additionally, clustering techniques are employed to group users based on their engagement patterns and behaviors. Statistical analysis was conducted to assess the similarity between user profiles, as well as attributes, such as friendship, followings, and interactions within the Twitter social network. The findings highlight the discovery of closely linked user groups and the identification of distinct clusters based on engagement levels. This research emphasizes the importance of understanding both individual and group behaviors in comprehending user engagement dynamics on Twitter. Full article
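The engagement-weighting idea described above can be illustrated with a toy scorer that assigns different weights to interaction types and then buckets users into engagement-level groups. The weights and thresholds below are assumptions for illustration, not the paper's values.

```python
# Assumed per-interaction weights; the paper assigns weights via graph mining.
WEIGHTS = {"retweet": 3.0, "reply": 2.0, "mention": 1.5, "favorite": 1.0}

def engagement_score(interactions):
    """interactions: dict mapping interaction type -> count for one user."""
    return sum(WEIGHTS.get(kind, 0.0) * n for kind, n in interactions.items())

def group_by_engagement(users, low=10.0, high=50.0):
    """Partition users into low/medium/high engagement groups by score."""
    groups = {"low": [], "medium": [], "high": []}
    for name, interactions in users.items():
        s = engagement_score(interactions)
        level = "high" if s >= high else "medium" if s >= low else "low"
        groups[level].append(name)
    return groups
```

A proper clustering method (e.g., k-means over multi-dimensional engagement features) would replace the fixed thresholds, but the weighting step is the same.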

30 pages, 667 KiB  
Article
Unbalanced Web Phishing Classification through Deep Reinforcement Learning
by Antonio Maci, Alessandro Santorsola, Antonio Coscia and Andrea Iannacone
Computers 2023, 12(6), 118; https://doi.org/10.3390/computers12060118 - 9 Jun 2023
Cited by 18 | Viewed by 3841
Abstract
Web phishing is a form of cybercrime aimed at tricking people into visiting malicious URLs to exfiltrate sensitive data. Since the structure of a malicious URL evolves over time, phishing detection mechanisms that can adapt to such variations are paramount. Furthermore, web phishing [...] Read more.
Web phishing is a form of cybercrime aimed at tricking people into visiting malicious URLs to exfiltrate sensitive data. Since the structure of a malicious URL evolves over time, phishing detection mechanisms that can adapt to such variations are paramount. Furthermore, web phishing detection is an unbalanced classification task, as legitimate URLs outnumber malicious ones in real-life cases. Deep learning (DL) has emerged as a promising technique to minimize concept drift and enhance web phishing detection. Deep reinforcement learning (DRL) combines DL with reinforcement learning (RL), a sequential decision-making paradigm in which the problem to be addressed is expressed as a Markov decision process (MDP). Recent studies have proposed an ad hoc MDP formulation to tackle unbalanced classification tasks, called the imbalanced classification Markov decision process (ICMDP). In this paper, we exploit the ICMDP to present a double deep Q-network (DDQN)-based classifier that addresses the unbalanced web phishing classification problem. The proposed algorithm is evaluated on a Mendeley web phishing dataset, from which three different data imbalance scenarios are generated. Despite a significant training time, it achieves a better geometric mean, index of balanced accuracy, F1 score, and area under the ROC curve than other DL-based classifiers combined with data-level sampling techniques in all test cases. Full article
(This article belongs to the Special Issue Using New Technologies on Cyber Security Solutions)
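The ICMDP formulation mentioned above typically shapes the reward by the class-imbalance ratio, so mistakes on the rare (phishing) class cost more than mistakes on the common class. A minimal sketch, assuming the commonly used +1/−1 minority and +λ/−λ majority scheme rather than the paper's exact formulation:

```python
def imbalance_ratio(n_minority, n_majority):
    """lambda: rewards on the majority class are scaled down by this ratio."""
    return n_minority / n_majority

def icmdp_reward(prediction, label, minority_label, lam):
    """Reward for classifying one sample in the ICMDP episode."""
    correct = prediction == label
    if label == minority_label:
        return 1.0 if correct else -1.0   # full reward/penalty on rare class
    return lam if correct else -lam       # scaled reward/penalty on common class
```

A DDQN agent trained under this reward implicitly pays more attention to the minority class without any data-level resampling.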

11 pages, 567 KiB  
Review
To Wallet or Not to Wallet: The Debate over Digital Health Information Storage
by Jasna Karacic Zanetti and Rui Nunes
Computers 2023, 12(6), 114; https://doi.org/10.3390/computers12060114 - 28 May 2023
Cited by 7 | Viewed by 3179
Abstract
The concept of the health wallet, a digital platform that consolidates health-related information, has garnered significant attention in the past year. Electronic health data storage and transmission have become increasingly prevalent in the healthcare industry, with the potential to revolutionize healthcare delivery. This [...] Read more.
The concept of the health wallet, a digital platform that consolidates health-related information, has garnered significant attention in the past year. Electronic health data storage and transmission have become increasingly prevalent in the healthcare industry, with the potential to revolutionize healthcare delivery. This paper emphasizes the significance of recognizing and addressing the ethical implications of digital health technologies and prioritizes ethical considerations in their development. The adoption of health wallets offers theoretical contributions, including the development of personalized medicine through comprehensive data collection, reducing medical errors through consolidated information, and enabling research for the improvement of existing treatments and interventions. Health wallets also empower individuals to manage their own health by providing access to their health data, allowing them to make informed decisions. The findings herein emphasize the importance of informing patients about their rights to control their health data and have access to it while protecting their privacy and confidentiality. This paper stands out by presenting practical recommendations for healthcare organizations and policymakers to ensure the safe and effective implementation of health wallets. Full article
(This article belongs to the Special Issue e-health Pervasive Wireless Applications and Services (e-HPWAS'22))

13 pages, 1736 KiB  
Article
Harnessing the Power of User-Centric Artificial Intelligence: Customized Recommendations and Personalization in Hybrid Recommender Systems
by Christos Troussas, Akrivi Krouska, Antonios Koliarakis and Cleo Sgouropoulou
Computers 2023, 12(5), 109; https://doi.org/10.3390/computers12050109 - 22 May 2023
Cited by 13 | Viewed by 3921
Abstract
Recommender systems are widely used in various fields, such as e-commerce, entertainment, and education, to provide personalized recommendations to users based on their preferences and/or behavior. This paper presents a novel approach to providing customized recommendations with the use of user-centric artificial intelligence. [...] Read more.
Recommender systems are widely used in various fields, such as e-commerce, entertainment, and education, to provide personalized recommendations to users based on their preferences and/or behavior. This paper presents a novel approach to providing customized recommendations with the use of user-centric artificial intelligence. In greater detail, we introduce an enhanced collaborative filtering (CF) approach in order to develop hybrid recommender systems that personalize search results for users. The proposed CF enhancement incorporates user actions beyond explicit ratings to collect data and alleviate the issue of sparse data, resulting in high-quality recommendations. As a testbed for our research, a web-based digital library, incorporating the proposed algorithm, has been developed. Examples of the system’s operation are presented using cognitive walkthrough inspection, demonstrating the effectiveness of the approach in producing personalized recommendations and improving user experience. Thus, the hybrid recommender system, which is incorporated in the digital library, has been evaluated, yielding promising results. Full article
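The idea of folding implicit actions into CF can be sketched as follows: build a pseudo-rating from an explicit rating plus weighted implicit actions, then predict a score by a similarity-weighted average over other users. The action names, weights, and prediction rule are illustrative assumptions, not the paper's exact algorithm.

```python
import math

# Assumed weights for implicit actions recorded by the digital library.
ACTION_WEIGHTS = {"view": 0.5, "bookmark": 1.5, "download": 2.0}

def pseudo_rating(explicit, actions):
    """Combine an explicit rating (or None) with implicit action counts."""
    implicit = sum(ACTION_WEIGHTS.get(a, 0.0) * n for a, n in actions.items())
    return (explicit if explicit is not None else 0.0) + implicit

def cosine(u, v):
    """Cosine similarity between two sparse user profiles (item -> score)."""
    common = set(u) & set(v)
    num = sum(u[i] * v[i] for i in common)
    du = math.sqrt(sum(x * x for x in u.values()))
    dv = math.sqrt(sum(x * x for x in v.values()))
    return num / (du * dv) if du and dv else 0.0

def predict(target, others, item):
    """Similarity-weighted average of other users' scores for `item`."""
    num = den = 0.0
    for profile in others:
        if item in profile:
            s = cosine(target, profile)
            num += s * profile[item]
            den += abs(s)
    return num / den if den else 0.0
```

The implicit signals matter precisely when explicit ratings are sparse: even users who never rate anything still accumulate a profile that similarity can work with.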

15 pages, 3539 KiB  
Article
Peer-to-Peer Federated Learning for COVID-19 Detection Using Transformers
by Mohamed Chetoui and Moulay A. Akhloufi
Computers 2023, 12(5), 106; https://doi.org/10.3390/computers12050106 - 17 May 2023
Cited by 11 | Viewed by 2835
Abstract
The simultaneous advances in deep learning and the Internet of Things (IoT) have benefited distributed deep learning paradigms. Federated learning is one of the most promising frameworks, where a server works with local learners to train a global model. The intrinsic heterogeneity of [...] Read more.
The simultaneous advances in deep learning and the Internet of Things (IoT) have benefited distributed deep learning paradigms. Federated learning is one of the most promising frameworks, where a server works with local learners to train a global model. The intrinsic heterogeneity of IoT devices, or non-independent and identically distributed (Non-I.I.D.) data, combined with the unstable communication network environment, causes a bottleneck that slows convergence and degrades learning efficiency. Additionally, the majority of weight averaging-based model aggregation approaches raise questions about learning fairness. In this paper, we propose a peer-to-peer federated learning (P2PFL) framework based on Vision Transformer (ViT) models to help solve some of the above issues and classify COVID-19 vs. normal cases on chest X-ray (CXR) images. In particular, clients jointly iterate and aggregate the models in order to build a robust model. The experimental results demonstrate that the proposed approach is capable of significantly improving the performance of the model, with an Area Under Curve (AUC) of 0.92 and 0.99 for hospital-1 and hospital-2, respectively. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain)
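The serverless aggregation step can be illustrated with a gossip-style averaging round in which every client mixes its parameter vector with its neighbors', so all peers drift toward a common model without a central server. This is an assumed stand-in for illustration, not the paper's ViT-based P2PFL protocol.

```python
def p2p_round(weights, neighbors):
    """One synchronous round: every client averages its parameter vector
    with those of its neighbors (all reads use the previous round's state)."""
    new = []
    for i, w in enumerate(weights):
        peers = [weights[j] for j in neighbors[i]] + [w]
        new.append([sum(vals) / len(peers) for vals in zip(*peers)])
    return new

def train_p2p(weights, neighbors, rounds):
    """Repeat gossip averaging; with a connected topology, weights converge
    toward the network-wide mean."""
    for _ in range(rounds):
        weights = p2p_round(weights, neighbors)
    return weights
```

With two fully connected clients starting at parameters 1.0 and 3.0, a single round already lands both on the mean, 2.0; sparser topologies converge over more rounds.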

12 pages, 1109 KiB  
Article
Theoretical Models Explaining the Level of Digital Competence in Students
by Marcos Cabezas-González, Sonia Casillas-Martín and Ana García-Valcárcel Muñoz-Repiso
Computers 2023, 12(5), 100; https://doi.org/10.3390/computers12050100 - 4 May 2023
Cited by 5 | Viewed by 5910
Abstract
In the new global scene, digital skills are essential for students to seize new learning opportunities, train to meet the demands of the labor market, and compete in the global market, while also communicating effectively in their everyday and academic lives. [...] Read more.
In the new global scene, digital skills are essential for students to seize new learning opportunities, train to meet the demands of the labor market, and compete in the global market, while also communicating effectively in their everyday and academic lives. This article presents research examining the impact of personal variables on the digital competence of technical problem solving in Spanish students aged 12 to 14. A quantitative methodology with a cross-sectional design was employed, using a sample of 772 students from 18 Spanish educational institutions. For data collection, an assessment test (ECODIES®) was designed based on a validated indicator model for evaluating learners’ digital competence (INCODIES®), taking the European framework for the development of digital competence as a reference. Mediation models were used and theoretical reference models were created. The results allowed us to verify the influence of personal, technology-use, and attitudinal variables on the improvement of digital skills in technical problem solving. The findings lead to the conclusion that gender, acquisition of digital devices, and regular use do not determine a better level of competence. Full article

26 pages, 3522 KiB  
Review
Developing Resilient Cyber-Physical Systems: A Review of State-of-the-Art Malware Detection Approaches, Gaps, and Future Directions
by M. Imran Malik, Ahmed Ibrahim, Peter Hannay and Leslie F. Sikos
Computers 2023, 12(4), 79; https://doi.org/10.3390/computers12040079 - 14 Apr 2023
Cited by 18 | Viewed by 5677
Abstract
Cyber-physical systems (CPSes) are rapidly evolving in critical infrastructure (CI) domains such as smart grid, healthcare, the military, and telecommunication. These systems are continually threatened by malicious software (malware) attacks from adversaries whose tactics and attack methods constantly evolve. A minor configuration [...] Read more.
Cyber-physical systems (CPSes) are rapidly evolving in critical infrastructure (CI) domains such as smart grid, healthcare, the military, and telecommunication. These systems are continually threatened by malicious software (malware) attacks from adversaries whose tactics and attack methods constantly evolve. A minor configuration change in a CPS through malware can have devastating effects, as the world has seen in Stuxnet, BlackEnergy, Industroyer, and Triton. This paper is a comprehensive review of malware analysis practices currently being used, and of their limitations and efficacy in securing CPSes. Using well-known real-world incidents, we have covered the significant impacts when a CPS is compromised. In particular, we have prepared exhaustive hypothetical scenarios to discuss the implications of false positives on CPSes. To improve the security of critical systems, we believe that nature-inspired metaheuristic algorithms can effectively counter the overwhelming malware threats geared toward CPSes. However, our detailed review shows that these algorithms have not been adapted to their full potential to counter malicious software. Finally, the gaps identified through this research lead us to propose future research directions using nature-inspired algorithms that would help reduce false positives, thereby increasing the security of such systems. Full article

13 pages, 2736 KiB  
Article
Safety in the Laboratory—An Exit Game Lab Rally in Chemistry Education
by Manuel Krug and Johannes Huwer
Computers 2023, 12(3), 67; https://doi.org/10.3390/computers12030067 - 20 Mar 2023
Cited by 10 | Viewed by 4547
Abstract
The topic of safety in chemistry laboratories in schools is crucial, as severe accidents in labs occur worldwide, primarily due to poorly trained individuals and improper behavior. One reason for this could be that the topic is often dry and boring for students. [...] Read more.
The topic of safety in chemistry laboratories in schools is crucial, as severe accidents in labs occur worldwide, primarily due to poorly trained individuals and improper behavior. One reason for this could be that the topic is often dry and boring for students. One solution to this problem is engaging students more actively in the lesson using a game format. In this publication, we present an augmented-reality-supported exit game in the form of a laboratory rally and the results of a pilot study that examined the use of the rally in terms of technology acceptance and intrinsic motivation. The study involved 22 students from a general high school. The study results show a high level of technology acceptance for the augmented reality used, as well as good results in terms of the intrinsic motivation triggered by the lesson. Full article

16 pages, 6436 KiB  
Article
Pedestrian Detection with LiDAR Technology in Smart-City Deployments–Challenges and Recommendations
by Pedro Torres, Hugo Marques and Paulo Marques
Computers 2023, 12(3), 65; https://doi.org/10.3390/computers12030065 - 17 Mar 2023
Cited by 7 | Viewed by 4542
Abstract
This paper describes a real case implementation of an automatic pedestrian-detection solution, implemented in the city of Aveiro, Portugal, using affordable LiDAR technology and open, publicly available, pedestrian-detection frameworks based on machine-learning algorithms. The presented solution makes it possible to anonymously identify pedestrians, [...] Read more.
This paper describes a real case implementation of an automatic pedestrian-detection solution, implemented in the city of Aveiro, Portugal, using affordable LiDAR technology and open, publicly available, pedestrian-detection frameworks based on machine-learning algorithms. The presented solution makes it possible to anonymously identify pedestrians, and extract associated information such as position, walking velocity and direction in certain areas of interest such as pedestrian crossings or other points of interest in a smart-city context. All data computation (3D point-cloud processing) is performed at edge nodes, consisting of NVIDIA Jetson Nano and Xavier platforms, which ingest 3D point clouds from Velodyne VLP-16 LiDARs. High-performance real-time computation is possible at these edge nodes through CUDA-enabled GPU-accelerated computations. The MQTT protocol is used to interconnect publishers (edge nodes) with consumers (the smart-city platform). The results show that using currently affordable LiDAR sensors in a smart-city context, despite advertised ranges of up to 100 m, presents great challenges for the automatic detection of objects at these distances. The authors were able to efficiently detect pedestrians up to 15 m away, depending on the sensor height and tilt. Based on the implementation challenges, the authors present usage recommendations to get the most out of the used technologies. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
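A quick geometric check helps explain why sensor height and tilt bound the useful range: a beam aimed d degrees below horizontal from height h first meets flat ground at h / tan(d). For a sensor with a roughly ±15° vertical field of view (as commonly specified for the VLP-16) mounted at 3 m and tilted 5° down, the lowest beam grazes the ground at about 8.2 m. The numbers here are illustrative, not the paper's measurements.

```python
import math

def beam_ground_distance(height_m, depression_deg):
    """Distance at which a beam aimed depression_deg below horizontal,
    emitted from height_m above flat ground, intersects the ground."""
    if depression_deg <= 0:
        return float("inf")  # horizontal or upward beams never hit the ground
    return height_m / math.tan(math.radians(depression_deg))

# Lowest beam of a sensor tilted 5 degrees down with a 15-degree half-FOV:
lowest_beam_range = beam_ground_distance(3.0, 5.0 + 15.0)
```

Beams closer to horizontal reach further but sample the ground ever more sparsely, which is consistent with pedestrians only being detected reliably at short range.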

40 pages, 1804 KiB  
Review
Proactive Self-Healing Approaches in Mobile Edge Computing: A Systematic Literature Review
by Olusola Adeniyi, Ali Safaa Sadiq, Prashant Pillai, Mohammed Adam Taheir and Omprakash Kaiwartya
Computers 2023, 12(3), 63; https://doi.org/10.3390/computers12030063 - 13 Mar 2023
Cited by 6 | Viewed by 4756
Abstract
The widespread use of technology has made communication technology an indispensable part of daily life. However, the present cloud infrastructure is insufficient to meet the industry’s growing demands, and multi-access edge computing (MEC) has emerged as a solution by providing real-time computation closer [...] Read more.
The widespread use of technology has made communication technology an indispensable part of daily life. However, the present cloud infrastructure is insufficient to meet the industry’s growing demands, and multi-access edge computing (MEC) has emerged as a solution by providing real-time computation closer to the data source. Effective management of MEC is essential for providing high-quality services, and proactive self-healing is a promising approach that anticipates and executes remedial operations before faults occur. This paper aims to identify, evaluate, and synthesize studies related to proactive self-healing approaches in MEC environments. The authors conducted a systematic literature review (SLR) using four well-known digital libraries (IEEE Xplore, Web of Science, ProQuest, and Scopus) and one academic search engine (Google Scholar). The review retrieved 920 papers, and 116 primary studies were selected for in-depth analysis. The SLR results are categorized into edge resource management methods and self-healing methods and approaches in MEC. The paper highlights the challenges and open issues in MEC, such as task-offloading decisions, resource allocation, and security issues such as infrastructure and cyber attacks. Finally, the paper suggests future work based on the SLR findings. Full article

22 pages, 1278 KiB  
Review
Model Compression for Deep Neural Networks: A Survey
by Zhuo Li, Hengyi Li and Lin Meng
Computers 2023, 12(3), 60; https://doi.org/10.3390/computers12030060 - 12 Mar 2023
Cited by 118 | Viewed by 25600
Abstract
Currently, with the rapid development of deep learning, deep neural networks (DNNs) have been widely applied in various computer vision tasks. However, in the pursuit of performance, advanced DNN models have become more complex, which has led to a large memory footprint and [...] Read more.
Currently, with the rapid development of deep learning, deep neural networks (DNNs) have been widely applied in various computer vision tasks. However, in the pursuit of performance, advanced DNN models have become more complex, which has led to a large memory footprint and high computation demands. As a result, the models are difficult to apply in real time. To address these issues, model compression has become a focus of research. Furthermore, model compression techniques play an important role in deploying models on edge devices. This study analyzes various model compression methods to assist researchers in reducing device storage space, speeding up model inference, reducing model complexity and training costs, and improving model deployment. Hence, this paper summarizes the state-of-the-art techniques for model compression, including model pruning, parameter quantization, low-rank decomposition, knowledge distillation, and lightweight model design. In addition, this paper discusses research challenges and directions for future work. Full article
(This article belongs to the Special Issue Feature Papers in Computers 2023)
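As a concrete taste of one family the survey covers, magnitude-based pruning zeroes out the fraction of weights with the smallest absolute values; a minimal sketch over a flat weight list:

```python
def prune_by_magnitude(weights, sparsity):
    """Return a copy of `weights` with the smallest-|w| fraction set to 0."""
    k = int(len(weights) * sparsity)  # number of weights to remove
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    pruned, zeroed = [], 0
    for w in weights:
        # zero at most k weights, so ties at the threshold are kept
        if abs(w) <= threshold and zeroed < k:
            pruned.append(0.0)
            zeroed += 1
        else:
            pruned.append(w)
    return pruned
```

Real frameworks prune per layer or per channel and usually fine-tune afterwards to recover accuracy, but the core criterion is this simple.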

25 pages, 3925 KiB  
Article
Feasibility and Acceptance of Augmented and Virtual Reality Exergames to Train Motor and Cognitive Skills of Elderly
by Christos Goumopoulos, Emmanouil Drakakis and Dimitris Gklavakis
Computers 2023, 12(3), 52; https://doi.org/10.3390/computers12030052 - 27 Feb 2023
Cited by 12 | Viewed by 4573
Abstract
The GAME2AWE platform aims to provide a versatile tool for elderly fall prevention through exergames that integrate exercises, and simulate real-world environments and situations to train balance and reaction time using augmented and virtual reality technologies. In order to lay out the research [...] Read more.
The GAME2AWE platform aims to provide a versatile tool for elderly fall prevention through exergames that integrate exercises and simulate real-world environments and situations to train balance and reaction time using augmented and virtual reality technologies. In order to lay out the research area of interest, a review of the literature on systems that provide exergames for the elderly utilizing such technologies was conducted. The proposed use of augmented reality exergames on mobile devices as a complement to the traditional Kinect-based approach is a method that has been examined in the past with younger individuals in the context of physical activity interventions, but has not been studied adequately as an exergame tool for the elderly. An evaluation study was conducted with seniors, using multiple measuring scales to assess aspects such as usability, tolerability, applicability, and technology acceptance. In particular, the Unified Theory of Acceptance and Use of Technology (UTAUT) model was used to assess acceptance and identify factors that influence the seniors’ intentions to use the game platform in the long term, while the correlation between UTAUT factors was also investigated. The results, based on both qualitative and quantitative data, indicate a positive assessment of the above user-experience aspects. Full article
(This article belongs to the Special Issue e-health Pervasive Wireless Applications and Services (e-HPWAS'22))

15 pages, 2582 KiB  
Article
A Performance Study of CNN Architectures for the Autonomous Detection of COVID-19 Symptoms Using Cough and Breathing
by Meysam Effati and Goldie Nejat
Computers 2023, 12(2), 44; https://doi.org/10.3390/computers12020044 - 17 Feb 2023
Cited by 8 | Viewed by 2540
Abstract
Deep learning (DL) methods have the potential to be used for detecting COVID-19 symptoms. However, the rationale for which DL method to use and which symptoms to detect has not yet been explored. In this paper, we present the first performance study which [...] Read more.
Deep learning (DL) methods have the potential to be used for detecting COVID-19 symptoms. However, the rationale for which DL method to use and which symptoms to detect has not yet been explored. In this paper, we present the first performance study which compares various convolutional neural network (CNN) architectures for the autonomous preliminary COVID-19 detection of cough and/or breathing symptoms. We compare and analyze residual networks (ResNets), Visual Geometry Group networks (VGGs), Alex neural networks (AlexNet), densely connected networks (DenseNet), squeeze neural networks (SqueezeNet), and COVID-19 identification ResNet (CIdeR) architectures to investigate their classification performance. We uniquely train and validate both unimodal and multimodal CNN architectures using the EPFL and Cambridge datasets. Performance comparison across all modes and datasets showed that VGG19 and DenseNet-201 achieved the highest unimodal and multimodal classification performance. VGG19 and DenseNet-201 had high F1 scores (0.94 and 0.92) for unimodal cough classification on the Cambridge dataset, compared to the next highest F1 score for ResNet (0.79), with comparable F1 scores to ResNet for the larger EPFL cough dataset. They also had consistently high accuracy, recall, and precision. For multimodal detection, VGG19 and DenseNet-201 had the highest F1 scores (0.91) compared to the other CNN structures (≤0.90), with VGG19 also having the highest accuracy and recall. Our investigation provides the foundation needed to select the appropriate deep CNN method to utilize for non-contact early COVID-19 detection. Full article
(This article belongs to the Special Issue e-health Pervasive Wireless Applications and Services (e-HPWAS'22))
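The F1 scores compared above follow the standard definition: the harmonic mean of precision and recall, computed from confusion-matrix counts. For reference:

```python
def f1_score(tp, fp, fn):
    """F1 from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because it balances precision against recall, F1 is a more informative summary than raw accuracy when one class (symptomatic recordings) is of primary interest.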

38 pages, 9944 KiB  
Article
Modeling Collaborative Behaviors in Energy Ecosystems
by Kankam O. Adu-Kankam and Luis M. Camarinha-Matos
Computers 2023, 12(2), 39; https://doi.org/10.3390/computers12020039 - 13 Feb 2023
Cited by 10 | Viewed by 2269
Abstract
The notions of a collaborative virtual power plant ecosystem (CVPP-E) and a cognitive household digital twin (CHDT) have been proposed as contributions to the efficient organization and management of households within renewable energy communities (RECs). CHDTs can be modeled as software agents that [...] Read more.
The notions of a collaborative virtual power plant ecosystem (CVPP-E) and a cognitive household digital twin (CHDT) have been proposed as contributions to the efficient organization and management of households within renewable energy communities (RECs). CHDTs can be modeled as software agents that are designed to possess some cognitive capabilities, enabling them to make autonomous decisions on behalf of their human owners based on the value system of their physical twin. Due to their cognitive and decision-making capabilities, these agents can exhibit some behavioral attributes, such as engaging in diverse collaborative actions aimed at achieving some common goals. These behavioral attributes can be directed to the promotion of sustainable energy consumption in the ecosystem. Along this line, this work demonstrates various collaborative practices that include: (1) collaborative roles played by the CVPP manager such as (a) opportunity seeking and goal formulation, (b) goal proposition/invitation to form a coalition or virtual organization, and (c) formation and dissolution of coalitions; and (2) collaborative roles played by CHDTs which include (a) acceptance or decline of an invitation based on (i) delegation/non-delegation and (ii) value system compatibility/non-compatibility, and (b) the sharing of common resources. This study adopts a simulation technique that involves the integration of multiple simulation methods such as system dynamics, agent-based, and discrete event simulation techniques in a single simulation environment. The outcome of this study confirms the potential of adding cognitive capabilities to CHDTs and further shows that these agents could exhibit certain collaborative attributes, enabling them to become suitable as rational decision-making agents in households. Full article
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2022)
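The invitation behavior described above can be reduced to a toy decision rule: a CHDT accepts a coalition invitation only if its owner has delegated the decision and the proposed goal is compatible with the household's value system. All attribute names below are invented for illustration, not taken from the paper's models.

```python
def accept_invitation(agent, invitation):
    """Decline on non-delegation, then check value-system compatibility."""
    if not agent["delegated"]:
        return False  # the human owner, not the agent, must decide
    return invitation["goal"] in agent["values"]

def form_coalition(agents, invitation):
    """The CVPP manager's view: the coalition is the set of accepting CHDTs."""
    return [name for name, agent in agents.items()
            if accept_invitation(agent, invitation)]
```

In the paper's simulation these decisions drive coalition formation and dissolution over time; the sketch shows only the accept/decline kernel.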

15 pages, 3255 KiB  
Review
Artificial Intelligence and Sentiment Analysis: A Review in Competitive Research
by Hamed Taherdoost and Mitra Madanchian
Computers 2023, 12(2), 37; https://doi.org/10.3390/computers12020037 - 7 Feb 2023
Cited by 95 | Viewed by 42226
Abstract
As part of a business strategy, effective competitive research helps businesses outperform their competitors and attract loyal consumers. To perform competitive research, sentiment analysis may be used to assess interest in certain themes, uncover market conditions, and study competitors. Artificial intelligence (AI) has [...] Read more.
As part of a business strategy, effective competitive research helps businesses outperform their competitors and attract loyal consumers. To perform competitive research, sentiment analysis may be used to assess interest in certain themes, uncover market conditions, and study competitors. Artificial intelligence (AI) has improved performance in multiple areas, particularly sentiment analysis. AI-based sentiment analysis is the process of recognizing emotions expressed in text. AI comprehends the tone of a statement, as opposed to merely recognizing whether particular words within a group of text have a negative or positive connotation. This article reviews papers (2012–2022) that discuss how competitive market research identifies and compares major market measurements that help distinguish the services and goods of competitors. AI-powered sentiment analysis can be used to learn what the competitors’ customers think of them across all aspects of the businesses. Full article
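The word-connotation baseline that the abstract contrasts with tone-aware AI can be made concrete with a tiny lexicon scorer; the lexicon entries and the naive negation rule are illustrative only.

```python
# Minimal polarity lexicon and a one-word negation rule (both illustrative).
LEXICON = {"good": 1, "great": 2, "bad": -1, "terrible": -2, "love": 2, "hate": -2}
NEGATORS = {"not", "never", "no"}

def polarity(text):
    """Sum per-word polarities, flipping the sign after a negator."""
    score, negate = 0, False
    for word in text.lower().split():
        w = word.strip(".,!?")
        if w in NEGATORS:
            negate = True
            continue
        if w in LEXICON:
            score += -LEXICON[w] if negate else LEXICON[w]
        negate = False
    return score
```

Such a scorer misses sarcasm, context, and tone entirely, which is exactly the gap the reviewed AI approaches aim to close.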
21 pages, 6354 KiB  
Article
Monkeypox Outbreak Analysis: An Extensive Study Using Machine Learning Models and Time Series Analysis
by Ishaani Priyadarshini, Pinaki Mohanty, Raghvendra Kumar and David Taniar
Computers 2023, 12(2), 36; https://doi.org/10.3390/computers12020036 - 7 Feb 2023
Cited by 14 | Viewed by 5408
Abstract
The sudden, unexpected rise in monkeypox cases worldwide has become an increasing concern. The zoonotic disease, characterized by smallpox-like symptoms, has already spread to nearly twenty countries across several continents and has been labeled a potential pandemic by experts. Monkeypox infections have no specific treatment; however, because monkeypox viruses are similar to smallpox viruses, antiviral drugs and vaccines developed against smallpox could be used to prevent and treat monkeypox. Since the disease is becoming a global concern, it is necessary to analyze its impact on population health. Analyzing key outcomes, such as the number of people infected, deaths, medical visits, and hospitalizations, could play a significant role in preventing the spread. In this study, we analyze the spread of the monkeypox virus across different countries using machine learning techniques such as linear regression (LR), decision trees (DT), random forests (RF), elastic net regression (EN), artificial neural networks (ANN), and convolutional neural networks (CNN). Our study shows that CNNs perform best; the performance of the models is evaluated using statistical measures such as mean absolute error (MAE), mean squared error (MSE), mean absolute percentage error (MAPE), and R-squared (R2). The study also presents a time-series analysis using autoregressive integrated moving average (ARIMA) and seasonal autoregressive integrated moving average (SARIMA) models to track events over time. Comprehending the spread can lead to understanding the risk, which may be used to prevent further spread and enable timely and effective treatment. Full article
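For readers unfamiliar with the evaluation metrics named above, here are plain-Python definitions of MAE, MSE, MAPE, and R2. The sample numbers are made up for illustration, not the paper's case counts:

```python
def regression_metrics(y_true, y_pred):
    """Standard regression error measures over paired observations."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n                        # mean absolute error
    mse = sum(e * e for e in errors) / n                         # mean squared error
    mape = 100 * sum(abs(e / t) for e, t in zip(errors, y_true)) / n  # percent error
    mean = sum(y_true) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot                                     # coefficient of determination
    return {"MAE": mae, "MSE": mse, "MAPE": mape, "R2": r2}

# Hypothetical daily case counts (true) vs. a model's forecasts (predicted)
metrics = regression_metrics([100, 120, 150, 200], [110, 115, 160, 190])
```

Lower MAE, MSE, and MAPE indicate a better fit; R2 closer to 1 means the model explains more of the variance.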
(This article belongs to the Special Issue Computational Science and Its Applications 2022)
16 pages, 637 KiB  
Article
Explainable AI-Based DDOS Attack Identification Method for IoT Networks
by Chathuranga Sampath Kalutharage, Xiaodong Liu, Christos Chrysoulas, Nikolaos Pitropakis and Pavlos Papadopoulos
Computers 2023, 12(2), 32; https://doi.org/10.3390/computers12020032 - 3 Feb 2023
Cited by 44 | Viewed by 6925
Abstract
The modern digitized world depends heavily on online services, whose availability continues to be seriously challenged by distributed denial of service (DDoS) attacks. The challenge in mitigating attacks is not limited to identifying DDoS attacks when they happen, but also extends to identifying the streams of attacks. However, existing attack detection methods cannot accurately and efficiently detect DDoS attacks. To this end, we propose a novel explainable artificial intelligence (XAI)-based method to identify DDoS attacks. The method detects abnormal behaviour of network traffic flows by analysing the traffic at the network layer. Moreover, it selects the most influential features for each anomalous instance according to their influence weights and then sets a threshold value for each feature. Hence, this DDoS attack detection method defines security policies based on each feature's threshold value for application-layer-based, volumetric-based, and transport control protocol (TCP) state-exhaustion-based features. Since the proposed method operates on layer-three traffic, it can identify DDoS attacks on both Internet of Things (IoT) and traditional networks. Extensive experiments were performed on the University of Sannio, Benevento Intrusion Detection System (USB-IDS) dataset, which comprises different types of DDoS attacks, to test the performance of the proposed solution. The comparison results show that the proposed method provides greater detection accuracy and attack certainty than state-of-the-art methods. Full article
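The per-feature threshold idea can be sketched as: learn a cut-off for each influential feature from benign traffic, then flag any flow that exceeds a threshold. The feature name and the mean-plus-k-standard-deviations rule below are assumptions for illustration, not the paper's exact policy-derivation procedure:

```python
def learn_thresholds(benign_flows, features, k=3.0):
    """Set each feature's threshold at mean + k*stddev over benign traffic."""
    thresholds = {}
    for f in features:
        vals = [flow[f] for flow in benign_flows]
        mean = sum(vals) / len(vals)
        std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
        thresholds[f] = mean + k * std
    return thresholds

def is_anomalous(flow, thresholds):
    """Flag a flow when any monitored feature exceeds its threshold."""
    return any(flow[f] > t for f, t in thresholds.items())

# Hypothetical benign packet rates observed at the network layer
benign = [{"packets_per_s": 10}, {"packets_per_s": 12},
          {"packets_per_s": 11}, {"packets_per_s": 9}]
thresholds = learn_thresholds(benign, ["packets_per_s"])

flood = is_anomalous({"packets_per_s": 500}, thresholds)   # volumetric spike
normal = is_anomalous({"packets_per_s": 12}, thresholds)   # within benign range
```

The explainability benefit is that each alert names the specific feature and threshold that triggered it, rather than an opaque model score.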
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
15 pages, 874 KiB  
Article
Supervised Machine Learning Models for Liver Disease Risk Prediction
by Elias Dritsas and Maria Trigka
Computers 2023, 12(1), 19; https://doi.org/10.3390/computers12010019 - 13 Jan 2023
Cited by 46 | Viewed by 9879
Abstract
The liver is the largest gland in the human body and performs many different functions. It processes what a person eats and drinks and converts food into nutrients that the body can absorb. In addition, it filters harmful substances out of the blood and helps fight infections. Exposure to viruses or dangerous chemicals can damage the liver; when this organ is damaged, liver disease can develop. Liver disease refers to any condition that damages the liver and may affect its function. It is a serious condition that threatens human life and requires urgent medical attention. Early prediction of the disease using machine learning (ML) techniques is the focus of this study. Specifically, in the context of this research work, various ML models and ensemble methods were evaluated and compared in terms of accuracy, precision, recall, F-measure, and area under the curve (AUC) in order to predict the occurrence of liver disease. The experimental results showed that the Voting classifier outperforms the other models, with an accuracy, recall, and F-measure of 80.1%, a precision of 80.4%, and an AUC of 88.4% after SMOTE with 10-fold cross-validation. Full article
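The core idea of a Voting classifier is that each base model predicts a label per patient and the ensemble takes the majority. This sketch shows hard voting only; the paper's SMOTE resampling and 10-fold cross-validation are not reproduced, and the predictions below are invented:

```python
from collections import Counter

def hard_vote(per_model_preds):
    """Majority label per sample across base classifiers (hard voting;
    soft voting would instead average predicted probabilities)."""
    return [Counter(sample).most_common(1)[0][0]
            for sample in zip(*per_model_preds)]

# Hypothetical per-patient predictions from three base models (1 = disease)
preds_model_1 = [1, 0, 1, 0]
preds_model_2 = [1, 1, 1, 0]
preds_model_3 = [0, 0, 1, 0]
ensemble = hard_vote([preds_model_1, preds_model_2, preds_model_3])
```

Majority voting tends to cancel out the uncorrelated mistakes of individual models, which is why ensembles often beat their best single member.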
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain)
26 pages, 2681 KiB  
Article
Capacitated Waste Collection Problem Solution Using an Open-Source Tool
by Adriano Santos Silva, Filipe Alves, José Luis Diaz de Tuesta, Ana Maria A. C. Rocha, Ana I. Pereira, Adrián M. T. Silva and Helder T. Gomes
Computers 2023, 12(1), 15; https://doi.org/10.3390/computers12010015 - 7 Jan 2023
Cited by 10 | Viewed by 3238
Abstract
Populations in cities are growing worldwide, putting the systems that offer basic services to citizens under pressure. Among these systems, the Municipal Solid Waste Management System (MSWMS) is also affected. Waste collection and transportation is the first task in an MSWMS and is still carried out traditionally in most cases. This approach wastes resources and time, since routes are prescheduled or defined by drivers' choices. Waste collection is recognized as an NP-hard problem that can be modeled as a Capacitated Waste Collection Problem (CWCP). Despite the good quality of the work currently available in the literature, the execution time of algorithms is often overlooked, and faster algorithms are required to increase the feasibility of the solutions found. In this paper, we show the performance of the open-source Google OR-Tools in solving the CWCP in Bragança, Portugal (an inland city). The three metaheuristics available in this tool significantly reduced the cost associated with waste collection in less than 2 s of execution time. The results obtained in this work prove the applicability of OR-Tools to waste collection problems in larger systems. Furthermore, the fast response can be useful for developing new platforms for dynamic vehicle routing problems that represent scenarios closer to reality. We anticipate the proven efficacy of OR-Tools for solving the CWCP to be the starting point of developments toward applying optimization algorithms to real, dynamic problems. Full article
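To illustrate the problem structure, here is a greedy sketch of a CWCP: a fixed-capacity truck visits the nearest feasible bin and returns to the depot to empty whenever the next bin would exceed capacity. The paper solves this with Google OR-Tools metaheuristics; this nearest-neighbour heuristic, with invented coordinates and loads, is only a baseline illustration (it assumes each bin's load fits within one truck capacity):

```python
def greedy_collection(bins, capacity):
    """bins: {name: (x, y, load)}. Returns a route with depot returns
    inserted whenever the next bin would exceed truck capacity."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    pos, load, route = (0.0, 0.0), 0, ["depot"]
    remaining = dict(bins)
    while remaining:
        feasible = {n: v for n, v in remaining.items() if load + v[2] <= capacity}
        if not feasible:                       # truck full: empty at the depot
            route.append("depot")
            pos, load = (0.0, 0.0), 0
            continue
        name = min(feasible, key=lambda n: dist(pos, feasible[n][:2]))
        x, y, l = remaining.pop(name)
        route.append(name)
        pos, load = (x, y), load + l
    return route + ["depot"]

# Three hypothetical bins; capacity forces one mid-route depot return
bins = {"A": (1, 0, 4), "B": (2, 0, 4), "C": (0, 3, 4)}
route = greedy_collection(bins, capacity=8)
```

Metaheuristics such as those in OR-Tools explore far beyond this greedy choice, trading a little computation for substantially cheaper routes.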
(This article belongs to the Special Issue Computational Science and Its Applications 2022)
12 pages, 1489 KiB  
Article
CLCD-I: Cross-Language Clone Detection by Using Deep Learning with InferCode
by Mohammad A. Yahya and Dae-Kyoo Kim
Computers 2023, 12(1), 12; https://doi.org/10.3390/computers12010012 - 4 Jan 2023
Cited by 30 | Viewed by 3326
Abstract
Source code clones are common in software development as part of reuse practice. However, they are also often a source of errors that compromise software maintainability. Existing work on code clone detection focuses mainly on clones within a single programming language. Nowadays, however, software is increasingly developed on multilanguage platforms on which code is reused across different programming languages. Detecting code clones on such platforms is challenging and has not been studied much. In this paper, we present CLCD-I, a deep neural network-based approach for detecting cross-language code clones using InferCode, an embedding technique for source code. The design of our model is twofold: (a) taking as input the InferCode embeddings of source code in two different programming languages and (b) forwarding them to a Siamese architecture for comparative processing. We compare the performance of CLCD-I with LSTM autoencoders and with existing approaches to cross-language code clone detection. The evaluation shows that CLCD-I outperforms LSTM autoencoders by 30% on average and the existing approaches by 15% on average. Full article
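The comparative step reduces clone detection to a similarity test on two embedding vectors: fragments in different languages are clones if their embeddings are close enough. In the paper, InferCode produces the embeddings and a trained Siamese network learns the comparison; the vectors and the fixed cosine threshold below are stand-in assumptions for illustration:

```python
def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return dot / norm

def is_clone(emb_a, emb_b, threshold=0.8):
    """Declare a cross-language clone when the embeddings are similar enough."""
    return cosine_similarity(emb_a, emb_b) >= threshold

java_emb   = [0.90, 0.10, 0.40]   # hypothetical embedding of a Java method
python_emb = [0.85, 0.15, 0.38]   # hypothetical embedding of its Python port
other_emb  = [0.10, 0.90, 0.20]   # hypothetical embedding of unrelated code
```

Because the embedding space is language-agnostic, the same comparison works whether the two fragments are Java/Python, Java/C#, or any other pair the embedder supports.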