Computers, Volume 13, Issue 7 (July 2024) – 18 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
18 pages, 3199 KiB  
Article
Optimizing Convolutional Neural Networks for Image Classification on Resource-Constrained Microcontroller Units
by Susanne Brockmann and Tim Schlippe
Computers 2024, 13(7), 173; https://doi.org/10.3390/computers13070173 - 15 Jul 2024
Abstract
Running machine learning algorithms for image classification locally on small, cheap, and low-power microcontroller units (MCUs) has advantages in terms of bandwidth, inference time, energy, reliability, and privacy for different applications. Therefore, TinyML focuses on deploying neural networks on MCUs with random access memory sizes between 2 KB and 512 KB and read-only memory storage capacities between 32 KB and 2 MB. Models designed for high-end devices are usually ported to MCUs using model scaling factors provided by the model architecture’s designers. However, our analysis shows that this naive approach of substantially scaling down convolutional neural networks (CNNs) for image classification using such default scaling factors results in suboptimal performance. Consequently, in this paper we present a systematic strategy for efficiently scaling down CNN model architectures to run on MCUs. Moreover, we present our CNN Analyzer, a dashboard-based tool for determining optimal CNN model architecture scaling factors for the downscaling strategy by gaining layer-wise insights into the model architecture scaling factors that drive model size, peak memory, and inference time. Using our strategy, we were able to introduce additional new model architecture scaling factors for MobileNet v1, MobileNet v2, MobileNet v3, and ShuffleNet v2 and to optimize these model architectures. Our best model variation outperforms the MobileNet v1 version provided in the MLPerf Tiny Benchmark on the Visual Wake Words image classification task, reducing the model size by 20.5% while increasing the accuracy by 4.0%.
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
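The scaling-factor effect the abstract describes can be illustrated with a back-of-the-envelope sketch: applying a width multiplier alpha to a MobileNet-style depthwise-separable block shrinks its parameter count roughly as alpha squared, which is why naive default factors can land far from an MCU's memory budget. The layer sizes below are hypothetical, and this is not the paper's CNN Analyzer:

```python
# Illustrative sketch: how a width multiplier alpha scales the parameter
# count of a MobileNet-style depthwise-separable convolution layer.
# (Hypothetical layer sizes; not the paper's CNN Analyzer tool.)

def dw_separable_params(c_in: int, c_out: int, k: int = 3) -> int:
    """Parameters of one depthwise-separable conv block (biases/BN ignored)."""
    depthwise = k * k * c_in          # one k x k filter per input channel
    pointwise = c_in * c_out          # 1x1 conv mixing channels
    return depthwise + pointwise

def scaled_params(c_in: int, c_out: int, alpha: float, k: int = 3) -> int:
    """Apply width multiplier alpha to both channel counts."""
    return dw_separable_params(max(1, int(alpha * c_in)),
                               max(1, int(alpha * c_out)), k)

base = scaled_params(128, 256, alpha=1.0)
quarter = scaled_params(128, 256, alpha=0.25)
# The pointwise term dominates, so parameters shrink roughly as alpha**2:
print(base, quarter, base / quarter)
```

Because the depthwise term scales only linearly in alpha, the true ratio falls a little short of alpha squared, one reason per-layer inspection (as in the paper's CNN Analyzer) beats applying a single default factor blindly.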

14 pages, 5149 KiB  
Article
Implementation of Integrated Development Environment for Machine Vision-Based IEC 61131-3
by Sun Lim, Un-Hyeong Ham and Seong-Min Han
Computers 2024, 13(7), 172; https://doi.org/10.3390/computers13070172 - 15 Jul 2024
Abstract
IEC 61131-3 is an international standard for developing standardized software for automation and control systems. Machine vision systems are a prominent technology in the field of computer vision, widely used in industries such as manufacturing, robotics, healthcare, and automotive, and often combined with AI technologies. In industrial automation systems, software developed for defect detection or product classification typically involves separate systems for automation and machine vision programs, leading to increased system complexity and unnecessary resource wastage. To address these limitations, this study proposes an IEC 61131-3-based integrated development environment for programmable machine vision. We selected 11 APIs commonly used in machine vision systems, evaluated their functions in an IEC 61131-3 compliant development environment, and measured the performance of representative machine vision applications. This approach demonstrates the feasibility of developing PLC and machine vision programs within a single-controller system. We also investigated the impact of controller performance on function execution.

37 pages, 18036 KiB  
Article
Node Classification of Network Threats Leveraging Graph-Based Characterizations Using Memgraph
by Sadaf Charkhabi, Peyman Samimi, Sikha S. Bagui, Dustin Mink and Subhash C. Bagui
Computers 2024, 13(7), 171; https://doi.org/10.3390/computers13070171 - 15 Jul 2024
Abstract
This research leverages Memgraph, an open-source graph database, to analyze graph-based network data and apply Graph Neural Networks (GNNs) for a detailed classification of cyberattack tactics categorized by the MITRE ATT&CK framework. As part of graph characterization, PageRank, degree centrality, betweenness centrality, and Katz centrality are presented. Node classification is used to categorize network entities based on their role in the traffic. Graph-theoretic features such as in-degree, out-degree, PageRank, and Katz centrality were used in node classification to ensure that the model captures the structure of the graph. The study utilizes the UWF-ZeekDataFall22 dataset, a newly created dataset consisting of labeled network logs from the University of West Florida’s Cyber Range. The uniqueness of this study lies in combining graph-based characterization with machine learning to enhance the understanding and visualization of cyber threats, thereby improving network security measures.
(This article belongs to the Special Issue Human Understandable Artificial Intelligence 2024)
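The graph characterizations named in the abstract can be sketched by hand on a toy directed graph. The node names below are hypothetical, and the paper computes these measures inside Memgraph on Zeek network logs rather than in plain Python:

```python
# Minimal sketch of two graph characterizations mentioned in the abstract,
# computed on a toy directed graph (hypothetical node names; the paper runs
# these inside Memgraph on Zeek network logs).

def pagerank(edges, nodes, d=0.85, iters=100):
    """Power-iteration PageRank over a directed edge list."""
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    out = {v: [w for (u, w) in edges if u == v] for v in nodes}
    for _ in range(iters):
        nxt = {v: (1 - d) / n for v in nodes}
        for v in nodes:
            if out[v]:
                share = d * rank[v] / len(out[v])
                for w in out[v]:
                    nxt[w] += share
            else:  # dangling node: spread its rank uniformly
                for w in nodes:
                    nxt[w] += d * rank[v] / n
        rank = nxt
    return rank

nodes = ["attacker", "web", "db", "logger"]
edges = [("attacker", "web"), ("web", "db"), ("logger", "web"), ("db", "web")]
pr = pagerank(edges, nodes)
in_degree = {v: sum(1 for (_, w) in edges if w == v) for v in nodes}
print(pr, in_degree)
```

On this toy graph the heavily contacted "web" node dominates both measures, the kind of structural signal the node classifier consumes as features.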

16 pages, 9862 KiB  
Article
Interactive Application as a Teaching Aid in Mechanical Engineering
by Peter Weis, Lukáš Smetanka, Slavomír Hrček and Matúš Vereš
Computers 2024, 13(7), 170; https://doi.org/10.3390/computers13070170 - 10 Jul 2024
Abstract
This paper examines the integration of interactive 3D applications into the teaching process in mechanical engineering education. An innovative interactive 3D application has been developed as a teaching aid for engineering students. The main advantage is its easy availability through a web browser on mobile devices or desktop computers. It includes four explorable 3D gearbox models with assembly animations, linked technical information, and immersive virtual and augmented reality (AR) experiences. The benefits of using this application in the teaching process were monitored on a group of students at the end of the semester. Assessments conducted before and after the use of the interactive 3D application measured learning outcomes. Qualitative feedback from students was also collected. The results demonstrated significant improvements in engagement, spatial awareness, and understanding of gearbox principles compared to traditional methods. The versatility and accessibility of the application also facilitated self-directed learning, reducing the need for external resources. These findings indicate that interactive 3D tools have the potential to enhance student learning and engagement and to promote sustainable practices in engineering education. Future research could explore the scalability and applicability of these tools across different engineering disciplines and educational contexts.
(This article belongs to the Special Issue Xtended or Mixed Reality (AR+VR) for Education 2024)

26 pages, 17391 KiB  
Article
Internet of Things-Based Robust Green Smart Grid
by Rania A. Ahmed, M. Abdelraouf, Shaimaa Ahmed Elsaid, Mohammed ElAffendi, Ahmed A. Abd El-Latif, A. A. Shaalan and Abdelhamied A. Ateya
Computers 2024, 13(7), 169; https://doi.org/10.3390/computers13070169 - 8 Jul 2024
Abstract
Renewable energy sources play a critical role in all governments’ and organizations’ energy management and sustainability plans. The solar cell represents one such renewable energy resource, generating power even in unpopulated areas. Integrating these renewable sources with smart grids leads to green smart grids. Smart grids are critical for modernizing electricity distribution by using new communication technologies that improve power system efficiency, reliability, and sustainability. Smart grids assist in balancing supply and demand by allowing for real-time monitoring and administration, as well as accommodating renewable energy sources and reducing outages. However, their implementation presents considerable challenges, including high upfront expenditures and the need for substantial and reliable infrastructure changes. Despite these challenges, shifting to green smart grids is critical for a resilient and adaptable energy future that can fulfill changing consumer demands and environmental aims. To this end, this work develops a reliable Internet of Things (IoT)-based green smart grid. The proposed green grid integrates traditional grids with solar energy and provides a control unit between the generation and consumption parts of the grid. The work deploys intelligent IoT units to control energy demands and manage energy consumption effectively. The proposed framework deploys the paradigm of distributed edge computing at four levels to provide efficient data offloading and power management. The developed green grid outperformed traditional grids in terms of reliability and energy efficiency, reducing energy consumption over the distribution area by an average of 24.3% compared to traditional grids.
(This article belongs to the Special Issue Feature Papers in Computers 2024)

24 pages, 13967 KiB  
Article
Transforming Digital Marketing with Generative AI
by Tasin Islam, Alina Miron, Monomita Nandy, Jyoti Choudrie, Xiaohui Liu and Yongmin Li
Computers 2024, 13(7), 168; https://doi.org/10.3390/computers13070168 - 8 Jul 2024
Abstract
The current marketing landscape faces challenges in content creation and innovation, relying heavily on manually created content and traditional channels like social media and search engines. While effective, these methods often lack the creativity and uniqueness needed to stand out in a competitive market. To address this, we introduce MARK-GEN, a conceptual framework that utilises generative artificial intelligence (AI) models to transform marketing content creation. MARK-GEN provides a comprehensive, structured approach for businesses to employ generative AI in producing marketing materials, representing a new method in digital marketing strategies. We present two case studies within the fashion industry, demonstrating how MARK-GEN can generate compelling marketing content using generative AI technologies. This proposition paper builds on our previous technical developments in virtual try-on models, including image-based, multi-pose, and image-to-video techniques, and is intended for a broad audience, particularly those in business management.
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)

14 pages, 1849 KiB  
Article
Using Artificial Intelligence to Predict the Aerodynamic Properties of Wind Turbine Profiles
by Ziemowit Malecha and Adam Sobczyk
Computers 2024, 13(7), 167; https://doi.org/10.3390/computers13070167 - 8 Jul 2024
Abstract
This study describes the use of artificial intelligence to predict the aerodynamic properties of wind turbine profiles. The goal was to determine the lift coefficient for an airfoil using its geometry as input. Calculations based on XFoil were taken as a target for the predictions. The lift coefficient for a single case scenario was set as the value to be found by training an algorithm. Airfoil geometry data were collected from the UIUC Airfoil Data Site. Geometries in the coordinate format were converted to PARSEC parameters, which became a direct feature for the random forest regression algorithm. The training dataset included 60% of the base dataset records; the rest of the dataset was used to test the model. Five different datasets were tested. The results calculated for the test part of the base dataset were compared with the actual values of the lift coefficients. The developed prediction model obtained a coefficient of determination ranging from 0.83 to 0.87, which is promising for further research.
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)
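The reported coefficient of determination can be illustrated with its textbook definition, R² = 1 − SS_res/SS_tot, applied to made-up lift-coefficient values. Nothing below comes from the paper's data; its model is a random forest over PARSEC parameters:

```python
# How a coefficient of determination (R^2) such as the reported 0.83-0.87 is
# computed, shown on toy lift-coefficient values (numbers made up for
# illustration; the paper predicts C_L from PARSEC parameters).

def r_squared(actual, predicted):
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))  # residual sum of squares
    ss_tot = sum((a - mean) ** 2 for a in actual)                  # total sum of squares
    return 1.0 - ss_res / ss_tot

cl_actual = [0.45, 0.82, 1.10, 0.30, 0.95]
cl_pred = [0.50, 0.78, 1.05, 0.35, 1.00]
print(round(r_squared(cl_actual, cl_pred), 3))
```

A perfect predictor gives exactly 1.0; the 0.83 to 0.87 range in the abstract means the model explains most, but not all, of the variance in the XFoil targets.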

22 pages, 4727 KiB  
Article
Hardware-Based Implementation of Algorithms for Data Replacement in Cache Memory of Processor Cores
by Larysa Titarenko, Vyacheslav Kharchenko, Vadym Puidenko, Artem Perepelitsyn and Alexander Barkalov
Computers 2024, 13(7), 166; https://doi.org/10.3390/computers13070166 - 5 Jul 2024
Abstract
Replacement policies play an important role in the functioning of the cache memory of processor cores. Implementing a successful policy increases the performance of the processor core and of the computer system as a whole. Replacement policies are most often evaluated by the percentage of cache hits during processor bus cycles when accessing the cache memory. Policies that replace the Least Recently Used (LRU) or Least Frequently Used (LFU) element, whether an instruction or data, remain relevant; in the paging cache buffer, these policies can also be used to replace address information. The pseudo-LRU (PLRU) policy replaces elements based on approximate information about their age in the cache memory. The hardware implementation of any replacement policy algorithm is a circuit. This hardware part of the processor core has certain characteristics: the latency of the search for a replacement candidate, the gate complexity, and the reliability. The characteristics of the PLRUt and PLRUm replacement policies are synthesized and investigated. Both are varieties of the PLRU replacement policy, which is close to the LRU policy in terms of the percentage of cache hits. In the current study, the hardware implementation of these policies is evaluated, and the possibility of adapting the processor core to either policy according to a selected priority characteristic is analyzed. The dependence of delay and gate complexity on cache associativity is shown, as is the advantage of the hardware implementation of the PLRUt algorithm over the PLRUm algorithm at higher associativity values.
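Tree-based pseudo-LRU (PLRUt) is usually modeled as a binary tree of W − 1 bits for a W-way set. A minimal software sketch of victim selection and update follows, using one common bit convention; it is a behavioral model, not the paper's synthesized circuit:

```python
# Behavioral sketch of tree-based pseudo-LRU (PLRUt) for one set of a W-way
# cache, W a power of two. The W-1 tree bits are stored heap-style; bit == 0
# means "the approximate-LRU element is in the left subtree". One common
# convention, not the paper's exact hardware implementation.

WAYS = 4
tree = [0] * (WAYS - 1)  # internal nodes of the binary decision tree

def victim():
    """Follow the tree bits down to the pseudo-LRU way."""
    node = 0
    while node < WAYS - 1:
        node = 2 * node + 1 + tree[node]
    return node - (WAYS - 1)

def touch(way):
    """On an access, flip the bits on the path to point away from `way`."""
    node = way + (WAYS - 1)            # leaf index in heap numbering
    while node > 0:
        parent = (node - 1) // 2
        tree[parent] = 0 if node == 2 * parent + 2 else 1
        node = parent

for w in (0, 1, 2, 3):  # access every way once, oldest first
    touch(w)
print(victim())  # prints 0: way 0 is the pseudo-LRU candidate
```

The model also makes the hardware trade-off visible: victim selection walks log2(W) tree levels, so both the selection latency and the number of stored bits grow with associativity, matching the delay and gate-complexity trends the paper reports.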

22 pages, 1911 KiB  
Article
Automation Bias and Complacency in Security Operation Centers
by Jack Tilbury and Stephen Flowerday
Computers 2024, 13(7), 165; https://doi.org/10.3390/computers13070165 - 3 Jul 2024
Abstract
The volume and complexity of alerts that security operation center (SOC) analysts must manage necessitate automation. Increased automation in SOCs amplifies the risk of automation bias and complacency whereby security analysts become over-reliant on automation, failing to seek confirmatory or contradictory information. To identify automation characteristics that assist in the mitigation of automation bias and complacency, we investigated the current and proposed application areas of automation in SOCs and discussed its implications for security analysts. A scoping review of 599 articles from four databases was conducted. The final 48 articles were reviewed by two researchers for quality control and were imported into NVivo14. Thematic analysis was performed, and the use of automation throughout the incident response lifecycle was recognized, predominantly in the detection and response phases. Artificial intelligence and machine learning solutions are increasingly prominent in SOCs, yet support for the human-in-the-loop component is evident. The research culminates by contributing the SOC Automation Implementation Guidelines (SAIG), comprising functional and non-functional requirements for SOC automation tools that, if implemented, permit a mutually beneficial relationship between security analysts and intelligent machines. This is of practical value to human automation researchers and SOCs striving to optimize processes. Theoretically, a continued understanding of automation bias and its components is achieved.

27 pages, 6430 KiB  
Article
Integrity and Privacy Assurance Framework for Remote Healthcare Monitoring Based on IoT
by Salah Hamza Alharbi, Ali Musa Alzahrani, Toqeer Ali Syed and Saad Said Alqahtany
Computers 2024, 13(7), 164; https://doi.org/10.3390/computers13070164 - 3 Jul 2024
Abstract
Remote healthcare monitoring (RHM) has become a pivotal component of modern healthcare, offering a crucial lifeline to numerous patients. Ensuring the integrity and privacy of the data generated and transmitted by IoT devices is of paramount importance. The integration of blockchain technology and smart contracts has emerged as a pioneering solution to fortify the security of internet of things (IoT) data transmissions within the realm of healthcare monitoring. In today’s healthcare landscape, the IoT plays a pivotal role in remotely monitoring and managing patients’ well-being. Furthermore, blockchain’s decentralized and immutable ledger ensures that all IoT data transactions are securely recorded, timestamped, and resistant to unauthorized modifications. This heightened level of data security is critical in healthcare, where the integrity and privacy of patient information are nonnegotiable. This research endeavors to harness the power of blockchain and smart contracts to establish a robust and tamper-proof framework for healthcare IoT data. Employing smart contracts, which are self-executing agreements programmed with predefined rules, enables us to automate and validate data transactions within the IoT ecosystem. These contracts execute automatically when specific conditions are met, eliminating the need for manual intervention and oversight. This automation not only streamlines the process of data processing but also enhances its accuracy and reliability by reducing the risk of human error. Additionally, smart contracts provide a transparent and tamper-proof mechanism for verifying the validity of transactions, thereby mitigating the risk of fraudulent activities. By leveraging smart contracts, organizations can ensure the integrity and efficiency of data transactions within the IoT ecosystem, leading to improved trust, transparency, and security. Our experiments demonstrate the application of a blockchain approach to secure transmissions in IoT for RHM, as will be illustrated in the paper. This showcases the practical applicability of blockchain technology in real-world scenarios.
(This article belongs to the Section Blockchain Infrastructures and Enabled Applications)
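The tamper-evidence property the abstract relies on can be shown with a toy hash chain: each record embeds the hash of its predecessor, so altering an earlier IoT reading breaks every later link. This stand-in uses only the standard library; a real deployment would use a blockchain platform with smart contracts:

```python
# Toy hash chain illustrating the tamper-evidence property the framework
# relies on. Each record embeds the hash of its predecessor, so altering an
# earlier IoT reading invalidates the rest of the chain. A stand-in, not an
# actual blockchain or smart-contract platform.
import hashlib
import json

def record_hash(payload: dict) -> str:
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append(chain: list, reading: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"reading": reading, "prev": prev}
    record["hash"] = record_hash({"reading": reading, "prev": prev})
    chain.append(record)

def verify(chain: list) -> bool:
    prev = "0" * 64
    for rec in chain:
        if rec["prev"] != prev or rec["hash"] != record_hash(
                {"reading": rec["reading"], "prev": rec["prev"]}):
            return False
        prev = rec["hash"]
    return True

ledger = []
append(ledger, {"patient": "p01", "pulse": 72})
append(ledger, {"patient": "p01", "pulse": 75})
print(verify(ledger))               # True: chain intact
ledger[0]["reading"]["pulse"] = 40  # tamper with an earlier reading
print(verify(ledger))               # False: tampering detected
```

A smart contract adds the missing piece this sketch omits: the validation rules run automatically on-chain when a record is submitted, rather than by a trusted verifier.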

25 pages, 437 KiB  
Article
Enhancing the Security of Classical Communication with Post-Quantum Authenticated-Encryption Schemes for the Quantum Key Distribution
by Farshad Rahimi Ghashghaei, Yussuf Ahmed, Nebrase Elmrabit and Mehdi Yousefi
Computers 2024, 13(7), 163; https://doi.org/10.3390/computers13070163 - 1 Jul 2024
Abstract
This research aims to establish a secure system for key exchange by using post-quantum cryptography (PQC) schemes in the classical channel of quantum key distribution (QKD). Modern cryptography faces significant threats from quantum computers, which can solve classical problems rapidly. PQC schemes address critical security challenges in QKD, particularly in authentication and encryption, to ensure reliable communication across quantum and classical channels. Another objective of this study is to balance security and communication speed among various PQC algorithms at different security levels, specifically CRYSTALS-Kyber, CRYSTALS-Dilithium, and Falcon, which are finalists in the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography Standardization project. The quantum channel of QKD is simulated with Qiskit, a comprehensive and well-supported tool in the field of quantum computing. By providing a detailed analysis of the performance of these three algorithms alongside Rivest–Shamir–Adleman (RSA), the results will guide companies and organizations in selecting an optimal combination for their QKD systems to achieve a reliable balance between efficiency and security. Our findings demonstrate that the implemented PQC schemes effectively address the security challenges posed by quantum computers while keeping performance similar to that of RSA.
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
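The classical channel in QKD must be authenticated so that basis-reconciliation messages cannot be forged. The flow can be sketched with a keyed MAC as a runnable stand-in primitive; the paper's point is to replace such classical mechanisms with post-quantum schemes such as Dilithium or Falcon signatures, which this sketch does not implement:

```python
# Sketch of authenticating messages on the classical QKD channel. HMAC-SHA256
# is used purely as a stand-in so the flow runs with the standard library;
# the paper substitutes post-quantum schemes (e.g., Dilithium or Falcon
# signatures) for such classical primitives.
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)  # pre-shared authentication key

def send(message):
    """Sender attaches an authentication tag to the classical message."""
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return message, tag

def receive(message, tag):
    """Receiver accepts the message only if the tag verifies."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = send(b"basis choices: +x+xx++x")
print(receive(msg, tag))                        # True: accepted
print(receive(b"basis choices: tampered", tag))  # False: rejected
```

The trade-off the paper measures sits exactly here: each PQC scheme changes the size of the tag or signature and the time to produce and verify it, which sets the classical channel's overhead.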

13 pages, 803 KiB  
Article
Bridging the Gap between Project-Oriented and Exercise-Oriented Automatic Assessment Tools
by Bruno Pereira Cipriano, Bernardo Baltazar, Nuno Fachada, Athanasios Vourvopoulos and Pedro Alves
Computers 2024, 13(7), 162; https://doi.org/10.3390/computers13070162 - 30 Jun 2024
Abstract
In this study, we present the DP Plugin for IntelliJ IDEA, designed to extend the Drop Project (DP) Automatic Assessment Tool (AAT) by making it more suitable for handling small exercises in exercise-based learning environments. Our aim was to address the limitations of DP in supporting small assignments while retaining its strengths in project-based learning. The plugin leverages DP’s REST API to streamline the submission process, integrating assignment instructions and feedback directly within the IDE. A student survey conducted during the 2022/23 academic year revealed a positive reception, highlighting benefits such as time efficiency and ease of use. Students also provided valuable feedback, leading to various improvements that have since been integrated into the plugin. Despite these promising results, the study is limited by the relatively small percentage of survey respondents. Our findings suggest that an IDE plugin can significantly improve the usability of project-oriented AATs for small exercises, informing the development of future educational tools suitable for mixed project-based and exercise-based learning environments.
(This article belongs to the Special Issue Future Trends in Computer Programming Education)

17 pages, 1621 KiB  
Article
Modeling Autonomous Vehicle Responses to Novel Observations Using Hierarchical Cognitive Representations Inspired Active Inference
by Sheida Nozari, Ali Krayani, Pablo Marin, Lucio Marcenaro, David Martin Gomez and Carlo Regazzoni
Computers 2024, 13(7), 161; https://doi.org/10.3390/computers13070161 - 28 Jun 2024
Abstract
Equipping autonomous agents for dynamic interaction and navigation is a significant challenge in intelligent transportation systems. This study aims to address this by implementing a brain-inspired model for decision making in autonomous vehicles. We employ active inference, a Bayesian approach that models decision-making processes similar to the human brain, focusing on the agent’s preferences and the principle of free energy. This approach is combined with imitation learning to enhance the vehicle’s ability to adapt to new observations and make human-like decisions. The research involved developing a multi-modal self-awareness architecture for autonomous driving systems and testing this model in driving scenarios, including abnormal observations. The results demonstrated the model’s effectiveness in enabling the vehicle to make safe decisions, particularly in unobserved or dynamic environments. The study concludes that the integration of active inference with imitation learning significantly improves the performance of autonomous vehicles, offering a promising direction for future developments in intelligent transportation systems.
(This article belongs to the Special Issue System-Integrated Intelligence and Intelligent Systems 2023)
24 pages, 501 KiB  
Article
An NLP-Based Exploration of Variance in Student Writing and Syntax: Implications for Automated Writing Evaluation
by Maria Goldshtein, Amin G. Alhashim and Rod D. Roscoe
Computers 2024, 13(7), 160; https://doi.org/10.3390/computers13070160 - 25 Jun 2024
Abstract
In writing assessment, expert human evaluators ideally judge individual essays with attention to variance among writers’ syntactic patterns. There are many ways to compose text successfully or less successfully. For automated writing evaluation (AWE) systems to provide accurate assessment and relevant feedback, they must be able to consider similar kinds of variance. The current study employed natural language processing (NLP) to explore variance in syntactic complexity and sophistication across clusters characterized in a large corpus (n = 36,207) of middle school and high school argumentative essays. Using NLP tools, k-means clustering, and discriminant function analysis (DFA), we observed that student writers employed four distinct syntactic patterns: (1) familiar and descriptive language, (2) consistently simple noun phrases, (3) variably complex noun phrases, and (4) moderate complexity with less familiar language. Importantly, each pattern spanned the full range of writing quality; there were no syntactic patterns consistently evaluated as “good” or “bad”. These findings support the need for nuanced approaches in automated writing assessment while informing ways that AWE can participate in that process. Future AWE research can and should explore similar variability across other detectable elements of writing (e.g., vocabulary, cohesion, discursive cues, and sentiment) via diverse modeling methods.
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
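The clustering step can be sketched with a minimal k-means on toy 2-D "syntactic feature" vectors (hypothetical features such as noun-phrase complexity and word familiarity). The study clusters a 36,207-essay corpus on many NLP indices with k = 4; the toy run below uses k = 2 so the result is easy to inspect:

```python
# Minimal k-means sketch on toy 2-D "syntactic feature" vectors (hypothetical
# noun-phrase-complexity vs. word-familiarity scores). The study uses many
# NLP indices and k = 4 on a 36k-essay corpus; here k = 2 on six points.
import math

def farthest_first(points, k):
    """Deterministic seeding: start at the first point, then repeatedly
    add the point farthest from all chosen centers."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points,
                           key=lambda p: min(math.dist(p, c) for c in centers)))
    return centers

def kmeans(points, k, iters=20):
    centers = farthest_first(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest center
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            groups[nearest].append(p)
        centers = [tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]  # recompute centroids
    return centers, groups

essays = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),   # simple, familiar
          (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]   # complex, less familiar
centers, groups = kmeans(essays, k=2)
print(sorted(len(g) for g in groups))  # prints [3, 3]: two clean clusters
```

The deterministic farthest-first seeding avoids the unlucky initializations that plain random seeding can produce on such small data; production pipelines typically use k-means++ for the same reason.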
23 pages, 8049 KiB  
Article
Enhanced Security Access Control Using Statistical-Based Legitimate or Counterfeit Identification System
by Aisha Edrah and Abdelkader Ouda
Computers 2024, 13(7), 159; https://doi.org/10.3390/computers13070159 - 22 Jun 2024
Abstract
With our increasing reliance on technology, there is a growing demand for efficient and seamless access control systems. Smartphone-centric biometric methods offer a diverse range of potential solutions capable of verifying users and providing an additional layer of security to prevent unauthorized access. To ensure the security and accuracy of smartphone-centric biometric identification, it is crucial that the phone reliably identifies its legitimate owner. Once the legitimate holder has been successfully determined, the phone can effortlessly provide real-time identity verification for various applications. To achieve this, we introduce a novel smartphone-integrated detection and control system called Identification: Legitimate or Counterfeit (ILC), which utilizes gait cycle analysis. The ILC system employs the smartphone’s accelerometer sensor, along with advanced statistical methods, to detect the user’s gait pattern, enabling real-time identification of the smartphone owner. This approach relies on statistical analysis of measurements obtained from the accelerometer sensor, specifically, peaks extracted from the X-axis data. Subsequently, the derived feature’s probability distribution function (PDF) is computed and compared to the known user’s PDF. The calculated probability verifies the similarity between the distributions, and a decision is made with 92.18% accuracy based on a predetermined verification threshold.
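The front end of such a pipeline can be sketched in two steps: find local peaks in the accelerometer X-axis samples, then compare the peak-amplitude distribution to the enrolled user's profile. Histogram overlap stands in for the paper's PDF-similarity test, and the signals and threshold below are synthetic:

```python
# Sketch of a gait-verification front end on synthetic accelerometer X-axis
# samples: extract local peaks, then compare the peak-amplitude distribution
# to the enrolled user's profile. Histogram overlap stands in for the paper's
# PDF-similarity test; data and scales below are made up.
import math

def find_peaks(samples):
    """Indices of strict local maxima."""
    return [i for i in range(1, len(samples) - 1)
            if samples[i - 1] < samples[i] > samples[i + 1]]

def histogram(values, bins, lo, hi):
    """Normalized histogram over [lo, hi)."""
    h = [0] * bins
    for v in values:
        h[min(bins - 1, int((v - lo) / (hi - lo) * bins))] += 1
    total = sum(h)
    return [c / total for c in h]

def overlap(p, q):
    """Similarity in [0, 1]; 1 means identical histograms."""
    return sum(min(a, b) for a, b in zip(p, q))

# Two synthetic gait traces: the enrolled user's stride vs. a stronger one.
walk_a = [math.sin(0.4 * t) for t in range(100)]
walk_b = [1.5 * math.sin(0.4 * t) for t in range(100)]

peaks_a = [walk_a[i] for i in find_peaks(walk_a)]
peaks_b = [walk_b[i] for i in find_peaks(walk_b)]
ha = histogram(peaks_a, bins=5, lo=0.0, hi=2.0)
hb = histogram(peaks_b, bins=5, lo=0.0, hi=2.0)
print(overlap(ha, ha), overlap(ha, hb))  # high for the same user, low otherwise
```

A verification threshold on the similarity score then yields the accept/reject decision, analogous to the predetermined threshold behind the reported 92.18% accuracy.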
15 pages, 606 KiB  
Article
Personalized Classifier Selection for EEG-Based BCIs
by Javad Rahimipour Anaraki, Antonina Kolokolova and Tom Chau
Computers 2024, 13(7), 158; https://doi.org/10.3390/computers13070158 - 21 Jun 2024
Viewed by 501
Abstract
The most important component of an Electroencephalogram (EEG) Brain–Computer Interface (BCI) is its classifier, which translates EEG signals in real time into meaningful commands. The accuracy and speed of the classifier determine the utility of the BCI. However, there is significant intra- and inter-subject variability in EEG data, complicating the choice of the best classifier for different individuals over time. There is a keen need for an automatic approach to selecting a personalized classifier suited to an individual’s current needs. To this end, we have developed a systematic methodology for individual classifier selection, wherein the structural characteristics of an EEG dataset are used to predict a classifier that will perform with high accuracy. The method was evaluated using motor imagery EEG data from Physionet. We confirmed that our approach could consistently predict a classifier whose performance was no worse than the single-best-performing classifier across the participants. Furthermore, Kullback–Leibler divergences between reference distributions and signal amplitude and class label distributions emerged as the most important characteristics for classifier prediction, suggesting that classifier choice depends heavily on the morphology of signal amplitude densities and the degree of class imbalance in an EEG dataset. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
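The dataset characteristics driving classifier selection in the abstract above — Kullback–Leibler divergences between reference distributions and the signal amplitude and class label distributions — can be sketched as meta-features. The specific reference distributions below (a moment-matched Gaussian for amplitudes, a uniform distribution for labels) are assumptions for illustration; the paper's exact references and its downstream classifier-prediction model are not specified here.

```python
import numpy as np
from scipy.stats import entropy, norm

def kl_metafeatures(amplitudes, labels, bins=64):
    """Two KL-divergence dataset characteristics: amplitude-density
    shape and class imbalance (illustrative sketch)."""
    # D_KL(empirical amplitude histogram || moment-matched Gaussian):
    # grows as the amplitude density departs from Gaussian morphology.
    hist, edges = np.histogram(amplitudes, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    p = hist * width + 1e-12
    ref = norm.pdf(centers, amplitudes.mean(), amplitudes.std()) * width + 1e-12
    kl_amplitude = entropy(p, ref)
    # D_KL(class-label distribution || uniform): grows with imbalance.
    q = np.bincount(labels) / len(labels)
    kl_labels = entropy(q + 1e-12, np.full_like(q, 1.0 / len(q)))
    return kl_amplitude, kl_labels
```

A meta-learner (for example, a nearest-neighbour lookup over previously characterized datasets) could then map such features to the classifier predicted to perform best for a given individual.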
24 pages, 949 KiB  
Article
Advancing Skin Cancer Prediction Using Ensemble Models
by Priya Natha and Pothuraju RajaRajeswari
Computers 2024, 13(7), 157; https://doi.org/10.3390/computers13070157 - 21 Jun 2024
Viewed by 509
Abstract
There are many different kinds of skin cancer, and early, precise diagnosis is crucial because skin cancer is both frequent and deadly. The key to effective treatment is accurately classifying the various skin cancers, which have unique traits. Dermoscopy and other advanced imaging techniques have enhanced early detection by providing detailed images of lesions. However, accurately interpreting these images to distinguish between benign and malignant tumors remains a difficult task. Improved predictive modeling techniques are necessary because current diagnostic processes frequently yield erroneous and inconsistent outcomes. Machine learning (ML) models have become essential in dermatology for the automated identification and categorization of skin cancer lesions from image data. The aim of this work is to improve skin cancer prediction by using ensemble models, which combine multiple machine learning approaches to maximize their collective strengths and reduce their individual shortcomings. This paper proposes a novel approach to ensemble model optimization for skin cancer classification: the Max Voting method. We trained and assessed five different ensemble models on the ISIC 2018 and HAM10000 datasets: AdaBoost, CatBoost, Random Forest, Gradient Boosting, and Extra Trees. The Max Voting method combines their predictions to enhance overall performance. Moreover, the ensemble models were fed feature vectors optimally generated from the image data by a genetic algorithm (GA). We show that, with an accuracy of 95.80%, the Max Voting approach significantly improves predictive performance compared to each of the five ensemble models individually. Achieving the best F1-measure, recall, and precision, the Max Voting method proved the most dependable and robust. The novel contribution of this work is the more robust and reliable classification of skin cancer lesions with the Max Voting technique, which combines the benefits of several pre-trained machine learning models. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
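Hard majority ("max") voting over heterogeneous ensembles, as described in the abstract above, can be sketched with scikit-learn's VotingClassifier. CatBoost is omitted here because it is a third-party package, and the synthetic dataset and default hyperparameters stand in for the paper's dermoscopy images and GA-selected feature vectors — this only illustrates the voting mechanism.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split

# Stand-in for GA-selected feature vectors extracted from lesion images.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

voter = VotingClassifier(
    estimators=[
        ("ada", AdaBoostClassifier(random_state=0)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("et", ExtraTreesClassifier(random_state=0)),
    ],
    voting="hard",  # each model casts one vote; the majority class wins
)
voter.fit(X_tr, y_tr)
accuracy = voter.score(X_te, y_te)
```

Because hard voting only needs each base model's predicted label, it tolerates base learners with poorly calibrated probabilities; `voting="soft"` would instead average predicted class probabilities.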
21 pages, 4836 KiB  
Article
Chef Dalle: Transforming Cooking with Multi-Model Multimodal AI
by Brendan Hannon, Yulia Kumar, J. Jenny Li and Patricia Morreale
Computers 2024, 13(7), 156; https://doi.org/10.3390/computers13070156 - 21 Jun 2024
Viewed by 906
Abstract
In an era where dietary habits significantly impact health, technological interventions can offer personalized and accessible food choices. This paper introduces Chef Dalle, a recipe recommendation system that leverages multi-model and multimodal human–computer interaction (HCI) techniques to provide personalized cooking guidance. The application integrates voice-to-text conversion via Whisper and ingredient image recognition through GPT-Vision. It employs an advanced recipe filtering system that uses user-provided ingredients to fetch recipes, which are then evaluated by multiple AI models through integrations with the OpenAI, Google Gemini, and Anthropic (Claude) APIs to deliver highly personalized recommendations. These methods enable users to interact with the system using voice, text, or images, accommodating various dietary restrictions and preferences. Furthermore, the use of DALL-E 3 to generate recipe images enhances user engagement. User feedback mechanisms allow for the refinement of future recommendations, demonstrating the system’s adaptability. Chef Dalle showcases potential applications ranging from home kitchens to grocery stores and restaurant menu customization, addressing accessibility and promoting healthier eating habits. This paper underscores the significance of multimodal HCI in enhancing culinary experiences, setting a precedent for future developments in the field. Full article