Computers, Volume 13, Issue 2 (February 2024) – 25 articles

Cover Story: Collected software analysis data are often stored separately from the measured software repository and analyzed by third-party tools and services, keeping the data external and the repository lean. This limits reuse and further analyses considerably. We propose to use available functionality in continuous integration (CI) and the git API to store software analysis data within a software repository and integrate this process into GitHub. Specifically, we use the git object database and associate measured commits with their static source code metrics data and provide an integration into the CI using GitHub Actions. We demonstrate and evaluate the approach on open source projects and provide per-project software map visualizations for direct integration into project websites.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
19 pages, 44295 KiB  
Article
A U-Net Architecture for Inpainting Lightstage Normal Maps
by Hancheng Zuo and Bernard Tiddeman
Computers 2024, 13(2), 56; https://doi.org/10.3390/computers13020056 - 19 Feb 2024
Viewed by 1759
Abstract
In this paper, we investigate the inpainting of normal maps that were captured from a lightstage. Occlusion of parts of the face during performance capture can be caused by the movement of, e.g., arms, hair, or props. Inpainting is the process of interpolating missing areas of an image with plausible data. We build on previous work on general image inpainting that uses generative adversarial networks (GANs), and we extend our previous work on normal map inpainting to use a U-Net structured generator network. Our method takes into account the nature of the normal map data and so requires modification of the loss function: we use a cosine loss rather than the more common mean squared error loss when training the generator. Due to the small amount of training data available, even when using synthetic datasets, we require significant augmentation, which also needs to take account of the particular nature of the input data, since image flipping and in-plane rotations must properly flip and rotate the normal vectors. During training, we monitor key performance metrics, including the average loss, structural similarity index measure (SSIM), and peak signal-to-noise ratio (PSNR) of the generator, alongside the average loss and accuracy of the discriminator. Our analysis reveals that the proposed model generates high-quality, realistic inpainted normal maps, demonstrating its potential for application to performance capture. The results of this investigation provide a baseline on which future researchers can build with more advanced networks and comparisons with inpainting of the source images used to generate the normal maps. Full article
(This article belongs to the Special Issue Selected Papers from Computer Graphics & Visual Computing (CGVC 2023))
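The abstract above replaces the usual mean-squared-error term with a cosine loss so that the objective respects the directional nature of normal vectors. Below is a minimal sketch of such a loss, masked to the inpainted region; the array names `pred`, `target`, and `mask` are illustrative and not taken from the paper.

```python
import numpy as np

def masked_cosine_loss(pred, target, mask, eps=1e-8):
    """Cosine loss between predicted and ground-truth normal maps.

    pred, target: arrays of shape (H, W, 3) holding surface normals.
    mask: array of shape (H, W); nonzero inside the inpainted (occluded) region.
    Returns 1 - mean cosine similarity over masked pixels, so 0 is perfect.
    """
    # Normalize defensively in case the network output is not exactly unit length.
    pred_n = pred / (np.linalg.norm(pred, axis=-1, keepdims=True) + eps)
    target_n = target / (np.linalg.norm(target, axis=-1, keepdims=True) + eps)
    cos_sim = np.sum(pred_n * target_n, axis=-1)      # (H, W) per-pixel similarity
    masked = cos_sim[mask.astype(bool)]
    return 1.0 - masked.mean()
```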

13 pages, 3968 KiB  
Article
Electrocardiogram Signals Classification Using Deep-Learning-Based Incorporated Convolutional Neural Network and Long Short-Term Memory Framework
by Alaa Eleyan and Ebrahim Alboghbaish
Computers 2024, 13(2), 55; https://doi.org/10.3390/computers13020055 - 18 Feb 2024
Cited by 5 | Viewed by 3332
Abstract
Cardiovascular diseases (CVDs) like arrhythmia and heart failure remain the world’s leading cause of death. These conditions can be triggered by high blood pressure, diabetes, and simply the passage of time. The early detection of these heart issues, despite substantial advancements in artificial intelligence (AI) and technology, is still a significant challenge. This research addresses this hurdle by developing a deep-learning-based system that is capable of predicting arrhythmias and heart failure from abnormalities in electrocardiogram (ECG) signals. The system leverages a model that combines long short-term memory (LSTM) networks with convolutional neural networks (CNNs). Extensive experiments were conducted using ECG data from both the MIT-BIH and BIDMC databases under two scenarios. The first scenario employed data from five distinct ECG classes, while the second focused on classifying data from three classes. The results from both scenarios demonstrated that the proposed deep-learning-based classification approach outperformed existing methods. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
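The combined CNN and LSTM arrangement described above can be sketched as a 1D convolutional front end feeding a recurrent layer. The Keras outline below is a generic illustration only; layer sizes, the five-class output, and the 360-sample segment length are assumptions, not the configuration reported in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(segment_len=360, n_classes=5):
    """Generic 1D CNN + LSTM classifier for single-lead ECG segments."""
    model = models.Sequential([
        layers.Input(shape=(segment_len, 1)),
        layers.Conv1D(32, kernel_size=5, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=5, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.LSTM(64),                      # temporal summary of the CNN features
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```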

27 pages, 859 KiB  
Article
Implementing Virtualization on Single-Board Computers: A Case Study on Edge Computing
by Georgios Lambropoulos, Sarandis Mitropoulos, Christos Douligeris and Leandros Maglaras
Computers 2024, 13(2), 54; https://doi.org/10.3390/computers13020054 - 18 Feb 2024
Cited by 3 | Viewed by 3050
Abstract
The widespread adoption of cloud computing has resulted in centralized datacenter structures; however, there is a requirement for smaller-scale distributed infrastructures to meet the demands for speed, responsiveness, and security for critical applications. Single-Board Computers (SBCs) present numerous advantages such as low power consumption, low cost, minimal heat emission, and high processing power, making them suitable for applications such as the Internet of Things (IoT), experimentation, and other advanced projects. This paper investigates the possibility of adopting virtualization technology on SBCs for the implementation of reliable and cost-efficient edge-computing environments. The results of this study are based on experimental implementations and testing conducted in the course of a case study performed on the edge infrastructure of a financial organization, where workload migration was achieved from a traditional to an SBC-based edge infrastructure. The performance of the two infrastructures was studied and compared during this process, providing important insights into the power efficiency gains, resource utilization, and overall suitability for the organization’s operational needs. Full article
(This article belongs to the Topic Innovation, Communication and Engineering)

16 pages, 1254 KiB  
Article
Investigating Color-Blind User-Interface Accessibility via Simulated Interfaces
by Amaan Jamil and Gyorgy Denes
Computers 2024, 13(2), 53; https://doi.org/10.3390/computers13020053 - 17 Feb 2024
Viewed by 2807
Abstract
Over 300 million people who live with color vision deficiency (CVD) have a decreased ability to distinguish between colors, limiting their ability to interact with websites and software packages. User-interface designers have taken various approaches to tackle the issue, with most offering a high-contrast mode. The Web Content Accessibility Guidelines (WCAG) outline some best practices for maintaining accessibility that have been adopted and recommended by several governments; however, it is currently uncertain how this impacts perceived user functionality and if this could result in a reduced aesthetic look. In the absence of subjective data, we aim to investigate how a CVD observer might rate the functionality and aesthetics of existing UIs. However, the design of a comparative study of CVD vs. non-CVD populations is inherently hard; therefore, we build on the successful field of physiologically based CVD models and propose a novel simulation-based experimental protocol, where non-CVD observers rate the relative aesthetics and functionality of screenshots of 20 popular websites as seen in full color vs. with simulated CVD. Our results show that relative aesthetics and functionality correlate positively and that an operating-system-wide high-contrast mode can reduce both aesthetics and functionality. While our results are only valid in the context of simulated CVD screenshots, the approach has the benefit of being easily deployable, and can help to spot a number of common pitfalls in production. Finally, we propose a AAA–A classification of the interfaces we analyzed. Full article
(This article belongs to the Special Issue Selected Papers from Computer Graphics & Visual Computing (CGVC 2023))

23 pages, 1230 KiB  
Article
Interpretable Software Defect Prediction from Project Effort and Static Code Metrics
by Susmita Haldar and Luiz Fernando Capretz
Computers 2024, 13(2), 52; https://doi.org/10.3390/computers13020052 - 16 Feb 2024
Cited by 2 | Viewed by 2058
Abstract
Software defect prediction models enable test managers to predict defect-prone modules and assist with delivering quality products. A test manager would be willing to identify the attributes that can influence defect prediction and should be able to trust the model outcomes. The objective of this research is to create software defect prediction models with a focus on interpretability. Additionally, it aims to investigate the impact of size, complexity, and other source code metrics on the prediction of software defects. This research also assesses the reliability of cross-project defect prediction. Well-known machine learning techniques, such as support vector machines, k-nearest neighbors, random forest classifiers, and artificial neural networks, were applied to publicly available PROMISE datasets. The interpretability of this approach was demonstrated by SHapley Additive exPlanations (SHAP) and local interpretable model-agnostic explanations (LIME) techniques. The developed interpretable software defect prediction models showed reliability on independent and cross-project data. Finally, the results demonstrate that static code metrics can contribute to the defect prediction models, and the inclusion of explainability assists in establishing trust in the developed models. Full article
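Interpretability for a defect-prediction model of the kind described above is commonly obtained by pairing a tree ensemble with SHAP. The snippet below is a generic illustration on placeholder data; it is not the authors' pipeline, and the synthetic "metrics" are hypothetical stand-ins for static code features.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data standing in for static code metrics (LOC, complexity, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# SHAP attributes each prediction to the individual input metrics.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
# A summary/beeswarm plot is the usual next step, e.g.:
# shap.summary_plot(shap_values, X_te)
```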

15 pages, 14826 KiB  
Article
Achieving Better Energy Efficiency in Volume Analysis and Direct Volume Rendering Descriptor Computation
by Jacob D. Hauenstein and Timothy S. Newman
Computers 2024, 13(2), 51; https://doi.org/10.3390/computers13020051 - 13 Feb 2024
Viewed by 1462
Abstract
Approaches aimed at achieving improved energy efficiency for the determination of descriptors (used in volumetric data analysis and in one common mode of scientific visualisation) in one x86-class setting are described and evaluated. These approaches are evaluated against standard approaches for the computational setting. In all, six approaches for improved efficiency are considered. Four of them are computation-based; the other two are memory-based. The descriptors are classic gradient and curvature descriptors. In addition to their use in volume analyses, they are used in the classic ray-casting-based direct volume rendering (DVR), which is a particular application area of interest here. An ideal combination of the described approaches applied to gradient descriptor determination allowed them to be computed with only 80% of the energy of a standard approach in the computational setting; energy efficiency was improved by a factor of 1.2. For curvature descriptor determination, the ideal combination of described approaches achieved a factor-of-two improvement in energy efficiency. Full article
(This article belongs to the Special Issue Selected Papers from Computer Graphics & Visual Computing (CGVC 2023))
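Gradient descriptors of the kind discussed above are typically estimated with central differences over the volume grid; the sketch below shows that standard formulation, which is not necessarily the exact operator benchmarked in the paper.

```python
import numpy as np

def central_difference_gradient(vol, spacing=(1.0, 1.0, 1.0)):
    """Per-voxel gradient of a 3D scalar volume using central differences.

    vol: array of shape (Z, Y, X). Returns an array of shape (Z, Y, X, 3).
    Boundary voxels fall back to one-sided differences via np.gradient.
    """
    gz, gy, gx = np.gradient(vol.astype(np.float64), *spacing)
    return np.stack([gx, gy, gz], axis=-1)

# Gradient magnitude is a common volume-analysis descriptor and also drives
# shading in ray-casting-based direct volume rendering.
vol = np.random.rand(32, 32, 32)
grad = central_difference_gradient(vol)
magnitude = np.linalg.norm(grad, axis=-1)
```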

15 pages, 905 KiB  
Article
Extended Reality as an Educational Resource in the Primary School Classroom: An Interview of Drawbacks and Opportunities
by José María Fernández-Batanero, Marta Montenegro-Rueda, José Fernández-Cerero and Eloy López-Meneses
Computers 2024, 13(2), 50; https://doi.org/10.3390/computers13020050 - 8 Feb 2024
Cited by 2 | Viewed by 1872
Abstract
The use of Extended Reality in Primary Education classrooms has emerged as a transformative element that enhances the teaching and learning process of students. In this context, examining the various effects that this tool can generate is essential to identify both the opportunities and limitations that teachers face when incorporating this technology into their practices. The aim of this research is to analyse the impact of the use of Extended Reality as an educational resource in Primary Education, focusing on teachers’ perceptions. The information was collected through semi-structured interviews with 36 active teachers in Primary Education. The analysis of the data obtained identifies the benefits and functionalities offered by the implementation of Extended Reality in Primary Education classrooms, as well as the uncertainties and concerns that teachers have with the implementation of Extended Reality. The results highlight the significant opportunities that Extended Reality offers in the teaching–learning process, provided that teachers are adequately trained. Furthermore, this study offers valuable recommendations to guide future teachers and researchers in the successful integration of this technology into the educational process. Full article
(This article belongs to the Special Issue Extended Reality (XR) Applications in Education 2023)

19 pages, 1275 KiB  
Article
Leveraging Positive-Unlabeled Learning for Enhanced Black Spot Accident Identification on Greek Road Networks
by Vasileios Sevetlidis, George Pavlidis, Spyridon G. Mouroutsos and Antonios Gasteratos
Computers 2024, 13(2), 49; https://doi.org/10.3390/computers13020049 - 8 Feb 2024
Cited by 2 | Viewed by 2120
Abstract
Identifying accidents in road black spots is crucial for improving road safety. Traditional methodologies, although insightful, often struggle with the complexities of imbalanced datasets. While machine learning (ML) techniques have shown promise, our previous work revealed that supervised learning (SL) methods face challenges in effectively distinguishing accidents that occur in black spots from those that do not. This paper introduces a novel approach that leverages positive-unlabeled (PU) learning, a technique we previously applied successfully in the domain of defect detection. The results of this work demonstrate a statistically significant improvement in key performance metrics, including accuracy, precision, recall, F1-score, and AUC, compared to SL methods. This study thus establishes PU learning as a more effective and robust approach for accident classification in black spots, particularly in scenarios with highly imbalanced datasets. Full article
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)
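Positive-unlabeled learning, as referenced above, can be illustrated with the classic Elkan-Noto adjustment: train a classifier to separate labeled positives from unlabeled samples, estimate the labeling frequency c on held-out positives, and rescale the scores. This is a generic sketch of that well-known technique, not the specific method used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def fit_pu_elkan_noto(X_pos, X_unlabeled):
    """Elkan-Noto style PU learning: returns (classifier, c)."""
    X = np.vstack([X_pos, X_unlabeled])
    s = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_unlabeled))])
    X_tr, X_hold, s_tr, s_hold = train_test_split(
        X, s, test_size=0.2, stratify=s, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, s_tr)
    # c = P(labeled | positive), estimated on held-out labeled positives.
    c = clf.predict_proba(X_hold[s_hold == 1])[:, 1].mean()
    return clf, c

def predict_positive_proba(clf, c, X):
    """Adjusted probability of being a true positive: P(y=1|x) = P(s=1|x) / c."""
    return np.clip(clf.predict_proba(X)[:, 1] / c, 0.0, 1.0)
```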

13 pages, 4846 KiB  
Article
Linear Actuators in a Haptic Feedback Joystick System for Electric Vehicles
by Kamil Andrzej Daniel, Paweł Kowol and Grazia Lo Sciuto
Computers 2024, 13(2), 48; https://doi.org/10.3390/computers13020048 - 6 Feb 2024
Cited by 1 | Viewed by 1988
Abstract
Several strategies for navigation in unfamiliar environments have been explored, notably leveraging advanced sensors and control algorithms for obstacle recognition in autonomous vehicles. This study introduces a novel approach featuring a redesigned joystick equipped with stepper motors and linear drives, facilitating WiFi communication with a four-wheel omnidirectional electric vehicle. The system’s drive units integrated into the joystick and the encompassing control algorithms are thoroughly examined, including analysis of stick deflection measurement and inter-component communication within the joystick assembly. Unlike conventional setups in which the joystick is tilted by the operator, two independent linear drives are employed to generate ample tensile force, effectively “overpowering” the operator’s input. Running on a Raspberry Pi, the software utilizes Python programming to enable joystick tilt control and to transmit orientation and axis deflection data to an Arduino unit. A fundamental haptic effect is achieved by elevating the minimum pressure required to deflect the joystick rod. Test measurements encompass detection of obstacles along the primary directions perpendicular to the electric vehicle’s trajectory, determination of the maximum achievable speed, and evaluation of the joystick’s maximum operational range within an illuminated environment. Full article
(This article belongs to the Special Issue Vehicular Networking and Intelligent Transportation Systems 2023)
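The control loop described above, Python on a Raspberry Pi streaming joystick orientation and deflection data onward, can be illustrated as below. The serial port, packet format, sampling rate, and the `read_axis()` helper are hypothetical placeholders; the abstract does not specify the actual protocol used between the Raspberry Pi and the Arduino.

```python
import json
import time
import serial  # pyserial

PORT = "/dev/ttyACM0"      # hypothetical serial port for the Arduino
BAUD = 115200

def read_axis():
    """Placeholder for sampling the joystick's X/Y deflection (e.g., via an ADC)."""
    return {"x": 0.0, "y": 0.0}

def main(rate_hz=50):
    with serial.Serial(PORT, BAUD, timeout=0.1) as link:
        period = 1.0 / rate_hz
        while True:
            sample = read_axis()
            # One JSON line per sample keeps parsing on the microcontroller simple.
            link.write((json.dumps(sample) + "\n").encode("ascii"))
            time.sleep(period)

if __name__ == "__main__":
    main()
```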

14 pages, 751 KiB  
Article
Passenger Routing Algorithm for COVID-19 Spread Prevention by Minimising Overcrowding
by Dimitrios Tolikas, Evangelos D. Spyrou and Vassilios Kappatos
Computers 2024, 13(2), 47; https://doi.org/10.3390/computers13020047 - 5 Feb 2024
Viewed by 1544
Abstract
COVID-19 has become a pandemic which has resulted in measures being taken for the health and safety of people. The spreading of this disease is particularly evident in indoor spaces, which tend to get overcrowded with people. One such place is the airport where a plethora of passengers gather in common places, such as coffee shops and duty-free shops as well as toilets and gates. Guiding the passengers to less overcrowded places within the airport may be a solution to reduce disease spread. In this paper, we suggest a passenger routing algorithm whereby the passengers are guided to less crowded places by using a weighting factor, which is minimised to accomplish the desired goal. We modeled a number of shops in an airport using the AnyLogic software and we tested the algorithm showing that the exposure time is less with routing and that people are appropriately spread out across the common spaces, thus preventing overcrowding. Finally, we added a real airport in Kavala, Greece to show the efficiency of our approach. Full article
(This article belongs to the Special Issue Future Systems Based on Healthcare 5.0 for Pandemic Preparedness 2024)
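The routing idea summarized above, steering each passenger toward the destination that minimizes a crowding-related weighting factor, can be sketched as a simple greedy assignment. The weighting below (occupancy relative to capacity) and the example numbers are illustrative stand-ins for the factor defined in the paper.

```python
def route_passenger(occupancy, capacity):
    """Pick the destination with the smallest crowding weight.

    occupancy: dict mapping place name -> current number of people.
    capacity:  dict mapping place name -> maximum comfortable number of people.
    """
    def weight(place):
        return occupancy[place] / capacity[place]   # illustrative weighting factor
    return min(occupancy, key=weight)

# Example: shops and gates in a terminal (hypothetical numbers).
occupancy = {"coffee_shop": 18, "duty_free": 40, "gate_a": 25}
capacity = {"coffee_shop": 30, "duty_free": 60, "gate_a": 80}
target = route_passenger(occupancy, capacity)
occupancy[target] += 1      # the guided passenger now counts toward that place
print("Route passenger to:", target)
```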

24 pages, 3460 KiB  
Article
ADHD Dog: A Virtual Reality Intervention Incorporating Behavioral and Sociocultural Theories with Gamification for Enhanced Regulation in Individuals with Attention Deficit Hyperactivity Disorder
by Nikolaos Sergis, Christos Troussas, Akrivi Krouska, Christina Tzortzi, Georgios Bardis and Cleo Sgouropoulou
Computers 2024, 13(2), 46; https://doi.org/10.3390/computers13020046 - 4 Feb 2024
Viewed by 3188
Abstract
The need for effective cognitive training methodologies has increased, particularly for individuals dealing with Attention Deficit Hyperactivity Disorder (ADHD). In response to this demand, Virtual Reality (VR) technology emerges as a promising tool to support cognitive functions. Addressing this imperative, our paper introduces ADHD Dog, a VR game designed to aid individuals with ADHD by harnessing the advancements in VR technology and cognitive science. Our approach integrates behavioral and sociocultural theories, alongside gamification, to foster player engagement and reinforce cognitive functions. The theories employed, including operant conditioning and social constructivism, are specifically chosen for their relevance to ADHD’s cognitive aspects and their potential to promote active and context-based engagement. ADHD Dog, grounded in the principles of neuroplasticity and behaviorist methods, distinguishes itself by utilizing technology to amplify cognitive functions, like impulse control, attention, and short-term memory. An evaluation by individuals with ADHD, psychologists and computer scientists yielded promising results, underscoring the significant contribution of blending narrative-driven gameplay with behavioral and sociocultural theories, along with gamification, to ADHD cognitive training. Full article
(This article belongs to the Special Issue Extended Reality (XR) Applications in Education 2023)

33 pages, 3530 KiB  
Review
Generic IoT for Smart Buildings and Field-Level Automation—Challenges, Threats, Approaches, and Solutions
by Andrzej Ożadowicz
Computers 2024, 13(2), 45; https://doi.org/10.3390/computers13020045 - 3 Feb 2024
Cited by 4 | Viewed by 3690
Abstract
Smart home and building systems are popular solutions that support maintaining comfort and safety and improve energy efficiency in buildings. However, dynamically developing distributed network technologies, in particular the Internet of Things (IoT), are increasingly entering the above-mentioned application areas of building automation, offering new functional possibilities. The result of these processes is the emergence of many different solutions that combine field-level and information and communications technology (ICT) networks in various configurations and architectures. New paradigms are also emerging, such as edge and fog computing, providing support for local monitoring and control networks in the implementation of advanced functions and algorithms, including machine learning and artificial intelligence mechanisms. This paper collects state-of-the-art information in these areas, providing a systematic review of the literature and case studies with an analysis of selected development trends. The author systematized this information in the context of the potential development of building automation systems. Based on the conclusions of this analysis and discussion, a framework for the development of the Generic IoT paradigm in smart home and building applications has been proposed, along with a strengths, weaknesses, opportunities, and threats (SWOT) analysis of its usability. Future works are proposed as well. Full article

21 pages, 1649 KiB  
Article
Interference Management Based on Meta-Heuristic Algorithms in 5G Device-to-Device Communications
by Mohamed Kamel Benbraika, Okba Kraa, Yassine Himeur, Khaled Telli, Shadi Atalla and Wathiq Mansoor
Computers 2024, 13(2), 44; https://doi.org/10.3390/computers13020044 - 1 Feb 2024
Cited by 3 | Viewed by 1924
Abstract
Device-to-Device (D2D) communication is an emerging technology that is vital for the future of cellular networks, including 5G and beyond. Its potential lies in enhancing system throughput, offloading the network core, and improving spectral efficiency. Therefore, optimizing resource and power allocation to reduce co-channel interference is crucial for harnessing these benefits. In this paper, we conduct a comparative study of meta-heuristic algorithms, employing Genetic Algorithms (GAs), Particle Swarm Optimization (PSO), Bee Life Algorithm (BLA), and a novel combination of matching techniques with BLA for joint channel and power allocation optimization. The simulation results highlight the effectiveness of bio-inspired algorithms in addressing these challenges. Moreover, the proposed amalgamation of the matching algorithm with BLA outperforms other meta-heuristic algorithms, namely, PSO, BLA, and GA, in terms of throughput, convergence speed, and achieving practical solutions. Full article
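For a meta-heuristic of the kind compared above, the joint channel-and-power allocation problem is typically encoded as a candidate vector of (channel, power) pairs scored by a throughput-style fitness. The genetic-algorithm skeleton below is a generic illustration with a toy fitness function; it does not reproduce the BLA or matching formulation evaluated in the paper.

```python
import random

N_D2D, N_CHANNELS, POWER_LEVELS = 8, 4, [0.25, 0.5, 1.0]

def random_solution():
    # One (channel, power) gene per D2D pair.
    return [(random.randrange(N_CHANNELS), random.choice(POWER_LEVELS))
            for _ in range(N_D2D)]

def fitness(sol):
    """Toy objective: reward power, penalize pairs sharing a channel (interference)."""
    score = sum(p for _, p in sol)
    for ch in range(N_CHANNELS):
        users = sum(1 for c, _ in sol if c == ch)
        score -= 0.5 * max(0, users - 1)        # crude co-channel penalty
    return score

def genetic_algorithm(pop_size=30, generations=100, mutation_rate=0.1):
    pop = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_D2D)
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < mutation_rate:  # mutate a single gene
                i = random.randrange(N_D2D)
                child[i] = (random.randrange(N_CHANNELS), random.choice(POWER_LEVELS))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_algorithm()
print("Best allocation:", best, "fitness:", round(fitness(best), 2))
```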

21 pages, 3058 KiB  
Article
A User-Centered Privacy Policy Management System for Automatic Consent on Cookie Banners
by Lorenzo Porcelli, Michele Mastroianni, Massimo Ficco and Francesco Palmieri
Computers 2024, 13(2), 43; https://doi.org/10.3390/computers13020043 - 1 Feb 2024
Viewed by 2253
Abstract
Despite growing concerns about privacy and an evolution in laws protecting users’ rights, there remains a gap between how industries manage data and how users can express their preferences. This imbalance often favors industries, forcing users to repeatedly define their privacy preferences each time they access a new website. This process contributes to the privacy paradox. We propose a user support tool named the User Privacy Preference Management System (UPPMS) that eliminates the need for users to handle intricate banners or deceptive patterns. We have set up a process to guide even a non-expert user in creating a standardized personal privacy policy, which is automatically applied to every visited website by interacting with cookie banners. The process of generating actions to apply the user’s policy leverages customized Large Language Models. Experiments demonstrate the feasibility of analyzing HTML code to understand and automatically interact with cookie banners, even implementing complex policies. Our proposal aims to address the privacy paradox related to cookie banners by reducing information overload and decision fatigue for users. It also simplifies user navigation by eliminating the need to repeatedly declare preferences in intricate cookie banners on every visited website, while protecting users from deceptive patterns. Full article
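One small part of the pipeline described above, locating actionable controls in a cookie banner's HTML before deciding which one matches the stored user policy, can be illustrated with plain HTML parsing. The sketch uses BeautifulSoup on a hypothetical banner snippet and a simple keyword rule in place of the LLM-driven decision step described in the abstract.

```python
from bs4 import BeautifulSoup

BANNER_HTML = """
<div id="cookie-banner">
  <button id="accept-all">Accept all</button>
  <button id="reject-all">Reject non-essential cookies</button>
  <a href="/cookie-settings">Manage preferences</a>
</div>
"""

def find_banner_actions(html):
    """Return clickable banner controls as (visible text, id or href) pairs."""
    soup = BeautifulSoup(html, "html.parser")
    return [(el.get_text(strip=True), el.get("id") or el.get("href"))
            for el in soup.find_all(["button", "a"])]

def choose_action(actions, policy="reject_non_essential"):
    """Keyword stand-in for the policy-to-action decision."""
    keyword = "reject" if policy == "reject_non_essential" else "accept"
    for text, target in actions:
        if keyword in text.lower():
            return target
    return None

print(choose_action(find_banner_actions(BANNER_HTML)))  # -> 'reject-all'
```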

18 pages, 1773 KiB  
Article
Application of Immersive VR Serious Games in the Treatment of Schizophrenia Negative Symptoms
by Beatriz Miranda, Paula Alexandra Rego, Luís Romero and Pedro Miguel Moreira
Computers 2024, 13(2), 42; https://doi.org/10.3390/computers13020042 - 31 Jan 2024
Cited by 1 | Viewed by 1882
Abstract
Schizophrenia is a mental illness that requires the use of cognitive treatments to decrease symptoms for which medication is less effective. Innovative strategies such as the use of Virtual Reality (VR) are being tested, but there is still a long way to go in developing solutions as effective as the current conventional forms of treatment. To study more effective ways of developing these systems, an immersive VR game with a tutorial and two levels of difficulty was developed. Tests were performed on twenty-one healthy subjects, showing promising results and indicating VR’s potential as a complementary approach to conventional treatments for schizophrenia. When properly applied, the use of VR could lead to more efficient and accessible treatments, potentially reducing costs and reaching a broader population. Full article
(This article belongs to the Special Issue Serious Games and Applications for Health 2023)

25 pages, 752 KiB  
Review
Security and Privacy of Technologies in Health Information Systems: A Systematic Literature Review
by Parisasadat Shojaei, Elena Vlahu-Gjorgievska and Yang-Wai Chow
Computers 2024, 13(2), 41; https://doi.org/10.3390/computers13020041 - 31 Jan 2024
Cited by 8 | Viewed by 18363
Abstract
Health information systems (HISs) have immense value for healthcare institutions, as they provide secure storage, efficient retrieval, insightful analysis, seamless exchange, and collaborative sharing of patient health information. HISs are implemented to meet patient needs, as well as to ensure the security and privacy of medical data, including confidentiality, integrity, and availability, which are necessary to achieve high-quality healthcare services. This systematic literature review identifies various technologies and methods currently employed to enhance the security and privacy of medical data within HISs. Various technologies have been utilized to enhance the security and privacy of healthcare information, such as the IoT, blockchain, mobile health applications, cloud computing, and combined technologies. This study also identifies three key security aspects, namely, secure access control, data sharing, and data storage, and discusses the challenges faced in each aspect that must be enhanced to ensure the security and privacy of patient information in HISs. Full article
(This article belongs to the Special Issue When Blockchain Meets IoT: Challenges and Potentials)

15 pages, 6919 KiB  
Article
CollabVR: VR Testing for Increasing Social Interaction between College Students
by Diego Johnson, Brayan Mamani and Cesar Salas
Computers 2024, 13(2), 40; https://doi.org/10.3390/computers13020040 - 29 Jan 2024
Viewed by 2105
Abstract
The impact of the COVID-19 pandemic on education has accelerated the shift in learning paradigms toward synchronous and asynchronous online approaches, significantly reducing students’ social interactions. This study introduces CollabVR, a social virtual reality (SVR) platform designed to improve social interaction among remote university students through extracurricular activities (ECAs). Leveraging technologies such as Unity3D for the development of the SVR environment, Photon Unity Networking for real-time participant connection, Oculus Quest 2 for an immersive virtual reality experience, and AWS for efficient and scalable system performance, the platform aims to mitigate this social interaction deficit. CollabVR was tested using the sociability scale of Kreijns et al. and compared with traditional online platforms. Results from a focus group in Lima, Peru, with students participating in online ECAs, demonstrated that CollabVR significantly improved participants’ perceived social interaction, with a mean of 4.65 ± 0.49 compared to 2.35 ± 0.75 for traditional platforms, fostering a sense of community and improving communication. The study highlights the potential of CollabVR as a powerful tool to overcome socialization challenges in virtual learning environments, suggesting a more immersive and engaging approach to distance education. Full article

23 pages, 1225 KiB  
Article
Error Pattern Discovery in Spellchecking Using Multi-Class Confusion Matrix Analysis for the Croatian Language
by Gordan Gledec, Mladen Sokele, Marko Horvat and Miljenko Mikuc
Computers 2024, 13(2), 39; https://doi.org/10.3390/computers13020039 - 29 Jan 2024
Cited by 2 | Viewed by 1779
Abstract
This paper introduces a novel approach to the creation and application of confusion matrices for error pattern discovery in spellchecking for the Croatian language. The experimental dataset has been derived from a corpus of mistyped words and user corrections collected since 2008 using the Croatian spellchecker available at ispravi.me. The important role of confusion matrices in enhancing the precision of spellcheckers, particularly within the diverse linguistic context of the Croatian language, is investigated. Common causes of spelling errors, emphasizing the challenges posed by diacritic usage, have been identified and analyzed. This research contributes to the advancement of spellchecking technologies and provides a more comprehensive understanding of linguistic details, particularly in languages with diacritic-rich orthographies, like Croatian. The presented user-data-driven approach demonstrates the potential for custom spellchecking solutions, especially considering the ever-changing dynamics of language use in digital communication. Full article
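A character-level confusion matrix of the kind analyzed above can be accumulated directly from (mistyped, corrected) word pairs. The minimal version below counts substitutions in equal-length pairs only (a full aligner would also handle insertions and deletions) and uses invented Croatian examples centred on diacritics; it is an illustration, not the paper's construction.

```python
from collections import Counter

def confusion_counts(pairs):
    """Count character substitutions from (typo, correction) pairs of equal length."""
    counts = Counter()
    for typo, correct in pairs:
        if len(typo) != len(correct):
            continue                      # a full aligner would handle these too
        for t, c in zip(typo, correct):
            if t != c:
                counts[(t, c)] += 1       # key: (what was typed, what was meant)
    return counts

# Invented examples of diacritic-related errors (c vs. č/ć, s vs. š, z vs. ž).
pairs = [("covjek", "čovjek"), ("kuca", "kuća"), ("skola", "škola"), ("zivot", "život")]
for (typed, meant), n in confusion_counts(pairs).most_common():
    print(f"'{typed}' typed instead of '{meant}': {n}x")
```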

17 pages, 2043 KiB  
Article
EfficientNet Ensemble Learning: Identifying Ethiopian Medicinal Plant Species and Traditional Uses by Integrating Modern Technology with Ethnobotanical Wisdom
by Mulugeta Adibaru Kiflie, Durga Prasad Sharma, Mesfin Abebe Haile and Ramasamy Srinivasagan
Computers 2024, 13(2), 38; https://doi.org/10.3390/computers13020038 - 29 Jan 2024
Viewed by 2187
Abstract
Ethiopia is renowned for its rich biodiversity, supporting a diverse variety of medicinal plants with significant potential for therapeutic applications. In regions where modern healthcare facilities are scarce, traditional medicine emerges as a cost-effective and culturally aligned primary healthcare solution in developing countries. In Ethiopia, the majority of the population (around 80%) and a significant proportion of livestock (approximately 90%) continue to rely on traditional medicine as the primary healthcare option. Nevertheless, the precise identification of specific plant parts and their associated uses has posed a formidable challenge due to the intricate nature of traditional healing practices. To address this challenge, we employed a majority-vote-based ensemble deep learning approach to identify the medicinal plant parts and uses of Ethiopian indigenous medicinal plant species. The primary objective of this research is to achieve the precise identification of the parts and uses of Ethiopian medicinal plant species. To design our proposed model, EfficientNetB0, EfficientNetB2, and EfficientNetB4 were used as benchmark models and combined in a majority vote-based ensemble technique. This research underscores the potential of ensemble deep learning and transfer learning methodologies to accurately identify the parts and uses of Ethiopian indigenous medicinal plant species. Notably, our proposed EfficientNet-based ensemble deep learning approach demonstrated remarkable accuracy, achieving a test and validation accuracy of 99.96%. Future endeavors will prioritize expanding the dataset, refining feature-extraction techniques, and creating user-friendly interfaces to overcome current dataset limitations. Full article
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)
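At inference time, the majority-vote combination of the three EfficientNet variants described above reduces to taking the most frequent class prediction per sample. The sketch below shows that voting step on placeholder prediction arrays; it assumes three already-trained classifiers and is not the full pipeline from the paper.

```python
import numpy as np

def majority_vote(per_model_preds, n_classes=None):
    """per_model_preds: (n_models, n_samples) array of predicted class indices."""
    votes = np.asarray(per_model_preds)
    n_classes = n_classes or int(votes.max()) + 1
    # Per-sample vote counts of shape (n_classes, n_samples), then pick the winner.
    counts = np.apply_along_axis(np.bincount, 0, votes, minlength=n_classes)
    return counts.argmax(axis=0)

# Placeholder class predictions from EfficientNetB0/B2/B4-style models on 6 images.
preds = [
    [0, 2, 1, 3, 3, 1],
    [0, 2, 2, 3, 1, 1],
    [1, 2, 1, 3, 3, 2],
]
print(majority_vote(preds))   # -> [0 2 1 3 3 1]
```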

17 pages, 3301 KiB  
Article
Improved Recognition of Kurdish Sign Language Using Modified CNN
by Karwan Mahdi Hama Rawf, Ayub Othman Abdulrahman and Aree Ali Mohammed
Computers 2024, 13(2), 37; https://doi.org/10.3390/computers13020037 - 28 Jan 2024
Cited by 2 | Viewed by 2650
Abstract
The deaf society supports Sign Language Recognition (SLR) since it is used to educate individuals in communication, education, and socialization. In this study, the results of using the modified Convolutional Neural Network (CNN) technique to develop a model for real-time Kurdish sign recognition are presented. Recognizing the Kurdish alphabet is the primary focus of this investigation. Using a variety of activation functions over several iterations, the model was trained and then used to make predictions on the KuSL2023 dataset. There are a total of 71,400 pictures in the dataset, drawn from two separate sources, representing the 34 sign languages and alphabets used by the Kurds. A large collection of real user images is used to evaluate the accuracy of the suggested strategy. A novel Kurdish Sign Language (KuSL) model for classification is presented in this research. Furthermore, the hand region must be identified in a picture with a complex backdrop, including lighting, ambience, and image color changes of varying intensities. Using a genuine public dataset, real-time classification, and personal independence while maintaining high classification accuracy, the proposed technique is an improvement over previous research on KuSL detection. The collected findings demonstrate that the performance of the proposed system offers improvements, with an average training accuracy of 99.05% for both classification and prediction models. Compared to earlier research on KuSL, these outcomes indicate very strong performance. Full article

15 pages, 3425 KiB  
Article
Forest Defender Fusion System for Early Detection of Forest Fires
by Manar Khalid Ibraheem Ibraheem, Mbarka Belhaj Mohamed and Ahmed Fakhfakh
Computers 2024, 13(2), 36; https://doi.org/10.3390/computers13020036 - 28 Jan 2024
Cited by 4 | Viewed by 1888
Abstract
In the past ten years, rates of forest fires around the world have increased significantly. Forest fires greatly affect the ecosystem by damaging vegetation. They have several causes, both human and natural: human causes include intentional and irregular burning operations, while global warming is a major natural cause. Early detection reduces the rate at which fires spread to larger areas by allowing them to be extinguished sooner with the appropriate equipment and materials. In this research, an early detection system for forest fires, called Forest Defender Fusion, is proposed. This system achieved high accuracy and long-term monitoring of the site by using the Intermediate Fusion VGG16 model and the Enhanced Consumed Energy-Leach protocol (ECP-LEACH). The Intermediate Fusion VGG16 model receives RGB (red, green, blue) and IR (infrared) images from drones to detect forest fires. The Forest Defender Fusion system regulates energy consumption in the drones and achieves high detection accuracy, so that forest fires are detected early. The detection model was trained on the FLAME 2 dataset and obtained an accuracy of 99.86%, outperforming the other models that take RGB and IR images as joint input. A simulation in Python was performed to demonstrate the system in real time. Full article
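Intermediate fusion, as used above, means each modality (RGB and IR) passes through its own convolutional branch and the resulting features are concatenated before the classification head. The Keras sketch below uses small stand-in branches rather than full VGG16 stacks (two stock VGG16 instances would need layer renaming to coexist in one model); all shapes and layer sizes are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_branch(inp, name):
    """Small stand-in for a per-modality VGG16-style feature extractor."""
    x = layers.Conv2D(32, 3, activation="relu", padding="same", name=f"{name}_c1")(inp)
    x = layers.MaxPooling2D(2, name=f"{name}_p1")(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same", name=f"{name}_c2")(x)
    x = layers.MaxPooling2D(2, name=f"{name}_p2")(x)
    return layers.GlobalAveragePooling2D(name=f"{name}_gap")(x)

rgb_in = layers.Input(shape=(224, 224, 3), name="rgb")
ir_in = layers.Input(shape=(224, 224, 1), name="ir")

# Intermediate fusion: concatenate mid-level features from both branches.
fused = layers.Concatenate(name="intermediate_fusion")(
    [conv_branch(rgb_in, "rgb"), conv_branch(ir_in, "ir")])
hidden = layers.Dense(64, activation="relu")(fused)
out = layers.Dense(1, activation="sigmoid", name="fire_probability")(hidden)

model = models.Model(inputs=[rgb_in, ir_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```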

22 pages, 23271 KiB  
Article
The Role of Situatedness in Immersive Dam Visualization: Comparing Proxied with Immediate Approaches
by Nuno Verdelho Trindade, Pedro Leitão, Daniel Gonçalves, Sérgio Oliveira and Alfredo Ferreira
Computers 2024, 13(2), 35; https://doi.org/10.3390/computers13020035 - 27 Jan 2024
Viewed by 1657
Abstract
Dam safety control is a multifaceted activity that requires analysis, monitoring, and structural behavior prediction. It entails interpreting vast amounts of data from sensor networks integrated into dam structures. The application of extended reality technologies for situated immersive analysis allows data to be contextualized directly over the physical referent. Such types of visual contextualization have been known to improve analytical reasoning and decision making. This study presents DamVR, a virtual reality tool for off-site, proxied situated structural sensor data visualization. In addition to describing the tool’s features, it evaluates usability and usefulness with a group of 22 domain experts. It also compares its performance with an existing augmented reality tool for the on-site, immediate situated visualization of structural data. Participant responses to a survey reflect a positive assessment of the proxied situated approach’s usability and usefulness. This approach shows a decrease in performance (task completion time and errors) for more complex tasks but no significant differences in user experience scores when compared to the immediate situated approach. The findings indicate that while results may depend strongly on factors such as the realism of the virtual environment, the immediate physical referent offered some advantages over the proxied one in the contextualization of data. Full article
(This article belongs to the Special Issue Extended or Mixed Reality (AR + VR): Technology and Applications)

13 pages, 392 KiB  
Article
Least Squares Minimum Class Variance Support Vector Machines
by Michalis Panayides and Andreas Artemiou
Computers 2024, 13(2), 34; https://doi.org/10.3390/computers13020034 - 26 Jan 2024
Viewed by 1663
Abstract
In this paper, we propose a Support Vector Machine (SVM)-type algorithm that is computationally faster than other common algorithms in the SVM family. The new algorithm uses the distributional information of each class and, therefore, combines the benefits of using the class variance in the optimization with the least squares approach, which gives an analytic solution to the minimization problem and is therefore computationally efficient. We demonstrate an important property of the algorithm that allows us to address the inversion of a singular matrix in the solution. We also demonstrate through real-data experiments that we improve on the computational time without losing any accuracy when compared to previously proposed algorithms. Full article
(This article belongs to the Special Issue Uncertainty-Aware Artificial Intelligence)
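Combining the two known ingredients named in the title, the least-squares SVM's equality-constrained squared-error objective and the minimum-class-variance SVM's within-class scatter regularizer, suggests a formulation of the following shape. This reconstruction is offered only as orientation and is not necessarily the exact objective used in the paper; in particular, the handling of a singular scatter matrix is addressed there.

```latex
\min_{w,\,b,\,e}\;\; \frac{1}{2}\, w^{\top} S_w\, w \;+\; \frac{\gamma}{2} \sum_{i=1}^{n} e_i^{2}
\quad \text{s.t.} \quad y_i\big(w^{\top} x_i + b\big) = 1 - e_i,\qquad i = 1,\dots,n,
```

where the within-class scatter matrix is S_w = sum over classes c of sum over i with y_i = c of (x_i - mu_c)(x_i - mu_c)^T, and gamma > 0 trades the variance-aware margin term against the squared errors. Because the constraints are equalities, the optimality conditions form a linear system with a closed-form solution, which is what makes the least squares variant computationally attractive.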

23 pages, 13090 KiB  
Article
Integrated Visual Software Analytics on the GitHub Platform
by Willy Scheibel, Jasper Blum, Franziska Lauterbach, Daniel Atzberger and Jürgen Döllner
Computers 2024, 13(2), 33; https://doi.org/10.3390/computers13020033 - 25 Jan 2024
Cited by 1 | Viewed by 2376
Abstract
Readily available software analysis and analytics tools are often operated within external services, where the measured software analysis data are kept internally and no external access to the data is available. We propose an approach to integrate visual software analysis on the GitHub platform by leveraging GitHub Actions and the GitHub API, covering both analysis and visualization. The process is to perform software analysis for each commit, e.g., computing static source code complexity metrics, and to augment the commit with the resulting data, stored as git objects within the same repository. We show that this approach is feasible by integrating it into 64 open source TypeScript projects. Furthermore, we analyze the impact on Continuous Integration (CI) run time and repository storage. The stored software analysis data are externally accessible to allow for visualization tools, such as software maps. The effort to integrate our approach is limited to enabling the analysis component within a project’s CI on GitHub and embedding an HTML snippet into the project’s website for visualization. This enables a large number of projects to have access to software analysis and provides a means to communicate the current status of a project. Full article
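The storage idea described above, keeping per-commit analysis results inside the repository's own git object database, can be approximated with standard git plumbing. The sketch below attaches a JSON metrics blob to a commit via `git notes` under a dedicated ref; this is one plausible realization of the idea, not necessarily the exact mechanism used in the paper, and the metric values shown are placeholders.

```python
import json
import subprocess

def attach_metrics(commit, metrics, notes_ref="refs/notes/metrics"):
    """Store analysis data for `commit` inside the repository via git notes."""
    payload = json.dumps(metrics, indent=2)
    subprocess.run(
        ["git", "notes", f"--ref={notes_ref}", "add", "-f", "-m", payload, commit],
        check=True,
    )

def read_metrics(commit, notes_ref="refs/notes/metrics"):
    out = subprocess.run(
        ["git", "notes", f"--ref={notes_ref}", "show", commit],
        check=True, capture_output=True, text=True,
    )
    return json.loads(out.stdout)

# Example: record static complexity metrics for the current HEAD commit.
attach_metrics("HEAD", {"files": 128, "loc": 45210, "avg_cyclomatic": 3.7})
print(read_metrics("HEAD"))
```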

36 pages, 11306 KiB  
Article
Damage Location Determination with Data Augmentation of Guided Ultrasonic Wave Features and Explainable Neural Network Approach for Integrated Sensor Systems
by Christoph Polle, Stefan Bosse and Axel S. Herrmann
Computers 2024, 13(2), 32; https://doi.org/10.3390/computers13020032 - 24 Jan 2024
Cited by 2 | Viewed by 1755
Abstract
Machine learning techniques such as deep learning have already been successfully applied in Structural Health Monitoring (SHM) for damage localization using Ultrasonic Guided Waves (UGW) at various temperatures. However, a common issue arises due to the time-consuming nature of collecting guided wave measurements at different temperatures, resulting in an insufficient amount of training data. Since SHM systems are predominantly employed in sensitive structures, there is a significant interest in utilizing methods and algorithms that are transparent and comprehensible. In this study, a method is presented to augment feature data by generating a large number of training features from a relatively limited set of measurements. In addition, robustness to environmental changes, e.g., temperature fluctuations, is improved. This is achieved by utilizing a known temperature compensation method called temperature scaling to determine how each signal feature varies as a function of temperature. These functions can then be used for data generation. To gain a better understanding of how the damage localization predictions are made, a known explainable neural network (XANN) architecture is employed and trained with the generated data. The trained XANN model was then used to examine and validate the artificially generated signal features and to improve the augmentation process. The presented method demonstrates a significant increase in the number of training data points. Furthermore, the use of the XANN architecture as a predictor model enables a deeper interpretation of the prediction methods employed by the network. Full article
(This article belongs to the Special Issue System-Integrated Intelligence and Intelligent Systems 2023)
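The augmentation idea above, fitting how each guided-wave signal feature varies with temperature and then sampling that fit at new temperatures to generate synthetic training features, can be sketched with a simple per-feature polynomial fit. The polynomial degree, noise level, and placeholder feature values are assumptions for illustration only.

```python
import numpy as np

def fit_feature_temperature_models(temps, features, degree=2):
    """Fit one polynomial per feature describing its dependence on temperature.

    temps: shape (n_measurements,); features: shape (n_measurements, n_features).
    Returns a list of np.poly1d models, one per feature column.
    """
    return [np.poly1d(np.polyfit(temps, features[:, j], degree))
            for j in range(features.shape[1])]

def generate_synthetic_features(models, new_temps, noise_std=0.01, seed=0):
    """Evaluate the fitted trends at new temperatures and add small jitter."""
    rng = np.random.default_rng(seed)
    synth = np.column_stack([m(new_temps) for m in models])
    return synth + rng.normal(scale=noise_std, size=synth.shape)

# A handful of measured features at a few temperatures (placeholder values) ...
temps = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
features = np.column_stack([0.5 + 0.002 * temps, 1.0 - 0.001 * temps ** 1.1])
# ... expanded into many training samples across a dense temperature grid.
models = fit_feature_temperature_models(temps, features)
augmented = generate_synthetic_features(models, np.linspace(20, 60, 200))
print(augmented.shape)   # (200, 2)
```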
