Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

18 pages, 934 KiB  
Article
Optimization of PFMEA Team Composition in the Automotive Industry Using the IPF-RADAR Approach
by Nikola Komatina and Dragan Marinković
Algorithms 2025, 18(6), 342; https://doi.org/10.3390/a18060342 - 4 Jun 2025
Cited by 3 | Viewed by 489
Abstract
In the automotive industry, the implementation of Process Failure Mode and Effect Analysis (PFMEA) is conducted by a PFMEA team comprising employees who are connected to the production process or a specific product. Core PFMEA team members are actively engaged in PFMEA execution through meetings, analysis, and the implementation of corrective actions. Although the current handbook provides guidelines on the potential composition of the PFMEA team, it does not strictly define its members, allowing companies the flexibility to determine the team structure independently. This study aims to identify the core PFMEA team members by adhering to criteria based on the recommended knowledge and competencies outlined in the current handbook. By applying the RAnking based on the Distances and Range (RADAR) approach, extended with Interval-Valued Pythagorean Fuzzy Numbers (IVPFNs), a ranking of potential candidates was conducted. A case study was performed in a Tier-1 supplier company within the automotive supply chain. Full article
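For readers unfamiliar with interval-valued Pythagorean fuzzy evaluations, the sketch below shows how candidate assessments of this kind can be scored and ranked; the score function (the classical Pythagorean fuzzy score mu^2 - nu^2, averaged over the interval endpoints), the candidate roles, and the numbers are assumptions for illustration only and do not reproduce the RADAR procedure used in the article.
```python
# Illustrative only: scoring and ranking candidates from interval-valued
# Pythagorean fuzzy (IVPF) assessments. The score function and candidate data
# below are assumptions for this sketch, not the RADAR computation from the paper.

def ivpf_score(mu_l, mu_u, nu_l, nu_u):
    """Average the score mu^2 - nu^2 over the lower and upper interval endpoints."""
    return 0.5 * ((mu_l**2 - nu_l**2) + (mu_u**2 - nu_u**2))

# Hypothetical aggregated assessments: candidate -> (mu_lower, mu_upper, nu_lower, nu_upper)
candidates = {
    "process engineer":   (0.70, 0.85, 0.20, 0.35),
    "quality engineer":   (0.60, 0.75, 0.30, 0.45),
    "maintenance leader": (0.55, 0.70, 0.35, 0.50),
}

ranking = sorted(candidates.items(), key=lambda kv: ivpf_score(*kv[1]), reverse=True)
for rank, (name, values) in enumerate(ranking, start=1):
    print(rank, name, round(ivpf_score(*values), 3))
```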

22 pages, 3570 KiB  
Article
High-Performance Computing and Parallel Algorithms for Urban Water Demand Forecasting
by Georgios Myllis, Alkiviadis Tsimpiris, Stamatios Aggelopoulos and Vasiliki G. Vrana
Algorithms 2025, 18(4), 182; https://doi.org/10.3390/a18040182 - 22 Mar 2025
Cited by 2 | Viewed by 878
Abstract
This paper explores the application of parallel algorithms and high-performance computing (HPC) in the processing and forecasting of large-scale water demand data. Building upon prior work, which identified the need for more robust and scalable forecasting models, this study integrates parallel computing frameworks such as Apache Spark for distributed data processing, Message Passing Interface (MPI) for fine-grained parallel execution, and CUDA-enabled GPUs for deep learning acceleration. These advancements significantly improve model training and deployment speed, enabling near-real-time data processing. Apache Spark’s in-memory computing and distributed data handling optimize data preprocessing and model execution, while MPI provides enhanced control over custom parallel algorithms, ensuring high performance in complex simulations. By leveraging these techniques, urban water utilities can implement scalable, efficient, and reliable forecasting solutions critical for sustainable water resource management in increasingly complex environments. Additionally, expanding these models to larger datasets and diverse regional contexts will be essential for validating their robustness and applicability in different urban settings. Addressing these challenges will help bridge the gap between theoretical advancements and practical implementation, ensuring that HPC-driven forecasting models provide actionable insights for real-world water management decision-making. Full article
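As a concrete illustration of the fine-grained parallelism mentioned above, the sketch below distributes per-district demand series across MPI ranks with mpi4py; the district names, random data, and naive moving-average forecast are placeholders, not the models or datasets from the study.
```python
# A minimal sketch (not the paper's pipeline): scatter per-district demand series
# across MPI ranks, compute a naive forecast on each rank, gather the results.
# Run with e.g.: mpiexec -n 4 python forecast_mpi.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    # Hypothetical daily demand series for several districts (placeholder data).
    districts = {f"district_{i}": np.random.rand(365) * 1000 for i in range(8)}
    items = list(districts.items())
    chunks = [items[i::size] for i in range(size)]  # simple round-robin split
else:
    chunks = None

local = comm.scatter(chunks, root=0)

# Naive forecast: mean of the last 30 days (a stand-in for the actual models).
local_forecasts = {name: float(series[-30:].mean()) for name, series in local}

all_forecasts = comm.gather(local_forecasts, root=0)
if rank == 0:
    merged = {k: v for d in all_forecasts for k, v in d.items()}
    print(merged)
```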

19 pages, 2026 KiB  
Review
Quantum Computing and Machine Learning in Medical Decision-Making: A Comprehensive Review
by James C. L. Chow
Algorithms 2025, 18(3), 156; https://doi.org/10.3390/a18030156 - 9 Mar 2025
Cited by 14 | Viewed by 4773
Abstract
Medical decision-making is increasingly integrating quantum computing (QC) and machine learning (ML) to analyze complex datasets, improve diagnostics, and enable personalized treatments. While QC holds the potential to accelerate optimization, drug discovery, and genomic analysis as hardware capabilities advance, current implementations remain limited compared to classical computing in many practical applications. Meanwhile, ML has already demonstrated significant success in medical imaging, predictive modeling, and decision support. Their convergence, particularly through quantum machine learning (QML), presents opportunities for future advancements in processing high-dimensional healthcare data and improving clinical outcomes. This review examines the foundational concepts, key applications, and challenges of these technologies in healthcare, explores their potential synergy in solving clinical problems, and outlines future directions for quantum-enhanced ML in medical decision-making. Full article

11 pages, 2233 KiB  
Article
Knowledge Discovery in Predicting Martensite Start Temperature of Medium-Carbon Steels by Artificial Neural Networks
by Xiao-Song Wang, Anoop Kumar Maurya, Muhammad Ishtiaq, Sung-Gyu Kang and Nagireddy Gari Subba Reddy
Algorithms 2025, 18(2), 116; https://doi.org/10.3390/a18020116 - 19 Feb 2025
Cited by 2 | Viewed by 1000
Abstract
Martensite start (Ms) temperature is a critical parameter in the production of parts and structural steels and plays a vital role in heat treatment processes to achieve desired properties. However, it is often challenging to estimate accurately through experience alone. This study introduces a model that predicts the Ms temperature of medium-carbon steels based on their chemical compositions using the artificial neural network (ANN) method and compares the results with those from previous empirical formulae. The results indicate that the ANN model surpasses conventional methods in predicting the Ms temperature of medium-carbon steel, achieving an average absolute error of −0.93 degrees and −0.097% in mean percentage error. Furthermore, this research provides an accurate method or tool with which to present the quantitative effect of alloying elements on the Ms temperature of medium-carbon steels. This approach is straightforward, visually interpretable, and highly accurate, making it valuable for materials design and prediction of material properties. Full article
(This article belongs to the Special Issue AI and Computational Methods in Engineering and Science)
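A minimal sketch of the modeling idea is given below: fit a small feed-forward network that maps steel composition to Ms temperature and then sweep one element to read off its quantitative effect. The composition features, the synthetic target formula, and the network size are assumptions for illustration, not the study's dataset or architecture.
```python
# Minimal sketch (assumed features and synthetic data, not the study's dataset):
# fit a small feed-forward network mapping composition (wt.%) to Ms temperature.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical composition ranges for a medium-carbon steel: C, Mn, Si, Cr, Ni, Mo
X = rng.uniform([0.3, 0.3, 0.1, 0.0, 0.0, 0.0],
                [0.6, 1.5, 0.5, 1.5, 1.0, 0.5], size=(n, 6))
# Placeholder target: a rough composition effect plus noise (illustrative only).
y = 550 - 350*X[:, 0] - 30*X[:, 1] - 10*X[:, 3] - 17*X[:, 4] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))

# Quantitative effect of one element: sweep carbon while holding the others fixed.
base = X_tr.mean(axis=0)
for c in np.linspace(0.3, 0.6, 4):
    x = base.copy(); x[0] = c
    print(f"C={c:.2f} wt.%  ->  predicted Ms = {model.predict([x])[0]:.0f}")
```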

19 pages, 7491 KiB  
Article
Performance Investigation of Active, Semi-Active and Passive Suspension Using Quarter Car Model
by Kyle Samaroo, Abdul Waheed Awan, Siva Marimuthu, Muhammad Naveed Iqbal, Kamran Daniel and Noman Shabbir
Algorithms 2025, 18(2), 100; https://doi.org/10.3390/a18020100 - 10 Feb 2025
Cited by 2 | Viewed by 1726
Abstract
In this paper, semi-active and fully active suspension systems using a PID controller were designed and tuned in MATLAB/Simulink to achieve simultaneous optimisation of comfort and road holding ability. This was performed in order to quantify and observe the trends of both the semi-active and active suspension, which can then influence the choice of controlled suspension systems used for different applications. The response of the controlled suspensions was compared to a traditional passive setup in terms of the sprung mass displacement and acceleration, tyre deflection, and suspension working space for three different road profile inputs. It was found that across all road profiles, the usage of a semi-active or fully active suspension system offered notable improvements over a passive suspension in terms of comfort and road-holding ability. Specifically, the rms sprung mass displacement was reduced by a maximum of 44% and 56% over the passive suspension when using the semi-active and fully active suspension, respectively. Notably, in terms of sprung mass acceleration, the semi-active suspension offered better performance with a 65% reduction in the passive rms sprung mass acceleration compared to a 40% reduction for the fully active suspension. The tyre deflection of the passive suspension was also reduced by a maximum of 6% when using either the semi-active or fully active suspension. Furthermore, both the semi-active and fully active suspensions increased the suspension working space by 17% and 9%, respectively, over the passive suspension system, which represents a decreased level of performance. In summary, the choice between a semi-active and a fully active suspension should be carefully considered based on the level of ride comfort and handling performance that is needed and the suspension working space that is available in the particular application. However, the results of this paper show that the performance gap between the semi-active and fully active suspension is quite small, and the semi-active suspension is mostly able to match and sometimes outperform the fully active suspension in certain metrics. When considering other factors, such as weight, power requirements, and complexity, the semi-active suspension represents a better choice over the fully active suspension, in the authors' opinion. As such, future work will look at utilising more robust control methods and tuning procedures that may further improve the performance of the semi-active suspension. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
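The sketch below gives a minimal quarter-car simulation with a PID-computed active force on the sprung mass, of the general kind tuned in the paper; the masses, stiffnesses, gains, and the step road input are assumed values for illustration, not the tuned MATLAB/Simulink setup.
```python
# Minimal quarter-car sketch with a PID-controlled active force (assumed parameter
# values and gains, not the paper's tuned design). Semi-implicit Euler integration.
ms, mu = 300.0, 40.0            # sprung / unsprung mass [kg]
ks, cs = 15000.0, 1000.0        # suspension stiffness [N/m] and damping [N s/m]
kt = 200000.0                   # tyre stiffness [N/m]
Kp, Ki, Kd = 8000.0, 2000.0, 1500.0   # illustrative PID gains

dt, T = 1e-3, 3.0
zs = zs_d = zu = zu_d = 0.0     # sprung/unsprung displacement and velocity
integ = prev_err = 0.0

for k in range(int(T / dt)):
    t = k * dt
    zr = 0.05 if t > 0.5 else 0.0           # 5 cm step road input
    err = 0.0 - zs                          # regulate sprung-mass displacement to zero
    integ += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    u = Kp * err + Ki * integ + Kd * deriv  # active suspension force

    zs_dd = (-ks * (zs - zu) - cs * (zs_d - zu_d) + u) / ms
    zu_dd = ( ks * (zs - zu) + cs * (zs_d - zu_d) - kt * (zu - zr) - u) / mu
    zs_d += zs_dd * dt; zs += zs_d * dt
    zu_d += zu_dd * dt; zu += zu_d * dt

print("final sprung-mass displacement [m]:", round(zs, 4))
```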

22 pages, 20326 KiB  
Article
GATransformer: A Graph Attention Network-Based Transformer Model to Generate Explainable Attentions for Brain Tumor Detection
by Sara Tehsin, Inzamam Mashood Nasir and Robertas Damaševičius
Algorithms 2025, 18(2), 89; https://doi.org/10.3390/a18020089 - 6 Feb 2025
Cited by 8 | Viewed by 2566
Abstract
Brain tumors profoundly affect human health owing to their intricacy and the difficulties associated with early identification and treatment. Precise diagnosis is essential for effective intervention; nevertheless, the resemblance among tumor forms often complicates the identification of brain tumor types, particularly in the early stages. The latest deep learning systems offer very high classification accuracy but lack explainability to help patients understand the prediction process. GATransformer, a graph attention network (GAT)-based Transformer, uses the attention mechanism, GAT, and Transformer to identify and preserve key neural network channels. The channel attention module extracts deeper properties from weight-channel connections to improve model representation. Integrating these elements results in a reduction in model size and enhancement in computing efficiency, while preserving adequate model performance. The proposed model is assessed using two publicly accessible datasets, FigShare and Kaggle, and is cross-validated using the BraTS2019 and BraTS2020 datasets, demonstrating high accuracy and explainability. Notably, GATransformer generates interpretable attention maps, visually highlighting tumor regions to aid clinical understanding in medical imaging. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (3rd Edition))

14 pages, 3305 KiB  
Article
Pneumonia Disease Detection Using Chest X-Rays and Machine Learning
by Cathryn Usman, Saeed Ur Rehman, Anwar Ali, Adil Mehmood Khan and Baseer Ahmad
Algorithms 2025, 18(2), 82; https://doi.org/10.3390/a18020082 - 3 Feb 2025
Cited by 3 | Viewed by 3756
Abstract
Pneumonia is a deadly disease affecting millions worldwide, caused by microorganisms and environmental factors. It leads to lung fluid build-up, making breathing difficult, and is a leading cause of death. Early detection and treatment are crucial for preventing severe outcomes. Chest X-rays are commonly used for diagnoses due to their accessibility and low costs; however, detecting pneumonia through X-rays is challenging. Automated methods are needed, and machine learning can solve complex computer vision problems in medical imaging. This research develops a robust machine learning model for the early detection of pneumonia using chest X-rays, leveraging advanced image processing techniques and deep learning algorithms that accurately identify pneumonia patterns, enabling prompt diagnosis and treatment. The research develops a CNN model from the ground up and a ResNet-50 pretrained model. This study uses the RSNA pneumonia detection challenge original dataset comprising 26,684 chest X-ray images collected from unique patients (56% male, 44% female) to build a machine learning model for the early detection of pneumonia. The data are made up of pneumonia (31.6%) and non-pneumonia (68.8%) cases, providing an effective foundation for model training and evaluation. A reduced version of the dataset was also used to examine the impact of data size, and both versions were tested with and without augmentation. The models were compared with existing works and with one another in terms of their effectiveness in detecting pneumonia, and the impact of augmentation and dataset size on model performance was examined. The overall best accuracy achieved was that of the CNN model from scratch, with no augmentation: an accuracy of 0.79, a precision of 0.76, a recall of 0.73, and an F1 score of 0.74. However, the pretrained model, with lower overall accuracy, was found to be more generalizable. Full article
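A minimal Keras sketch of a from-scratch CNN for binary pneumonia classification is shown below; the input size, layer widths, and training call are assumptions for illustration, not the authors' exact architecture or the RSNA data pipeline.
```python
# Minimal sketch of a small CNN for binary pneumonia / non-pneumonia classification
# (an assumed architecture, not the authors' exact model or the RSNA data pipeline).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(224, 224, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # pneumonia probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
    return model

model = build_cnn()
model.summary()
# Training would use the chest X-ray image datasets, e.g.:
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```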

27 pages, 1293 KiB  
Article
Optimizing Apache Spark MLlib: Predictive Performance of Large-Scale Models for Big Data Analytics
by Leonidas Theodorakopoulos, Aristeidis Karras and George A. Krimpas
Algorithms 2025, 18(2), 74; https://doi.org/10.3390/a18020074 - 1 Feb 2025
Cited by 17 | Viewed by 1936
Abstract
In this study, we analyze the performance of the machine learning operators in Apache Spark MLlib for K-Means, Random Forest Regression, and Word2Vec. We used a multi-node Spark cluster along with collected detailed execution metrics computed from the data of diverse datasets and [...] Read more.
In this study, we analyze the performance of the machine learning operators in Apache Spark MLlib for K-Means, Random Forest Regression, and Word2Vec. We used a multi-node Spark cluster and collected detailed execution metrics across diverse datasets and parameter settings. These data were used to train predictive models that achieved up to 98% accuracy in forecasting performance. By building actionable predictive models, our research provides a unique treatment of key hyperparameter tuning, scalability, and real-time resource allocation challenges. Specifically, the practical value of traditional models in optimizing Apache Spark MLlib workflows was shown, achieving up to 30% resource savings and a 25% reduction in processing time. These models enable system optimization, reduce computational overhead, and boost the overall performance of big data applications. Ultimately, this work not only closes significant gaps in predictive performance modeling, but also paves the way for real-time analytics over a distributed environment. Full article
(This article belongs to the Special Issue Algorithms in Data Classification (2nd Edition))
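The snippet below is a minimal example of one of the benchmarked MLlib operators (K-Means) together with the kind of wall-time measurement a performance model would be trained on; the input path, column handling, and parameter values are assumptions, not the study's cluster configuration.
```python
# Minimal sketch of one of the benchmarked MLlib operators (K-Means); the input
# path, column names, and parameter values are assumptions for illustration.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans
import time

spark = SparkSession.builder.appName("mllib-kmeans-benchmark").getOrCreate()

df = spark.read.csv("features.csv", header=True, inferSchema=True)  # hypothetical input
assembler = VectorAssembler(inputCols=df.columns, outputCol="features")
data = assembler.transform(df).cache()

start = time.time()
model = KMeans(k=8, seed=42, featuresCol="features").fit(data)
elapsed = time.time() - start

# Execution metrics of this kind (wall time vs. k, data size, cluster resources)
# are what a predictive performance model would be trained on.
print(f"k=8 training time: {elapsed:.1f}s, cost: {model.summary.trainingCost:.2f}")
spark.stop()
```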

25 pages, 6410 KiB  
Article
Intelligent Multi-Fault Diagnosis for a Simplified Aircraft Fuel System
by Jiajin Li, Steve King and Ian Jennions
Algorithms 2025, 18(2), 73; https://doi.org/10.3390/a18020073 - 1 Feb 2025
Cited by 2 | Viewed by 1253
Abstract
Machine learning (ML) techniques are increasingly used to diagnose faults in aerospace applications, but diagnosing multiple faults in aircraft fuel systems (AFSs) remains challenging due to complex component interactions. This paper evaluates the accuracy and introduces an innovative approach to quantify and compare the interpretability of four ML classification methods—artificial neural networks (ANNs), support vector machines (SVMs), decision trees (DTs), and logistic regressions (LRs)—for diagnosing fault combinations present in AFSs. While the ANN achieved the highest diagnostic accuracy at 90%, surpassing other methods, its interpretability was limited. By contrast, the decision tree model showed an 82% consistency between global explanations and engineering insights, highlighting its advantage in interpretability despite the lower accuracy. Interpretability was assessed using two widely accepted tools, LIME and SHAP, alongside engineering understanding. These findings underscore a trade-off between prediction accuracy and interpretability, which is critical for trust in ML applications in aerospace. Although an ANN can deliver high diagnostic accuracy, a decision tree offers more transparent results, facilitating better alignment with engineering expectations even at a slight cost to accuracy. Full article
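On synthetic data, the sketch below reproduces the shape of the accuracy-versus-interpretability comparison: a neural network and a decision tree are trained on the same multi-class fault labels, and permutation importance is used as a simple, library-stable stand-in for the SHAP and LIME global explanations applied in the paper; the dataset and model sizes are assumptions.
```python
# Minimal sketch on synthetic data (not the aircraft-fuel-system simulations):
# train both ends of the accuracy/interpretability trade-off, then use permutation
# importance as a simple stand-in for the SHAP/LIME global explanations.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X_tr, y_tr)
tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_tr, y_tr)
print("ANN accuracy: ", ann.score(X_te, y_te))
print("tree accuracy:", tree.score(X_te, y_te))

# Global importance of each input for the (transparent) tree model.
imp = permutation_importance(tree, X_te, y_te, n_repeats=10, random_state=0)
for i, v in enumerate(imp.importances_mean):
    print(f"feature_{i}: permutation importance = {v:.3f}")
```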

28 pages, 9307 KiB  
Article
Application Framework and Optimal Features for UAV-Based Earthquake-Induced Structural Displacement Monitoring
by Ruipu Ji, Shokrullah Sorosh, Eric Lo, Tanner J. Norton, John W. Driscoll, Falko Kuester, Andre R. Barbosa, Barbara G. Simpson and Tara C. Hutchinson
Algorithms 2025, 18(2), 66; https://doi.org/10.3390/a18020066 - 26 Jan 2025
Cited by 4 | Viewed by 3468
Abstract
Unmanned aerial vehicle (UAV) vision-based sensing has become an emerging technology for structural health monitoring (SHM) and post-disaster damage assessment of civil infrastructure. This article proposes a framework for monitoring structural displacement under earthquakes by reprojecting image points obtained courtesy of UAV-captured videos to the 3-D world space based on the world-to-image point correspondences. To identify optimal features in the UAV imagery, geo-reference targets with various patterns were installed on a test building specimen, which was then subjected to earthquake shaking. A feature point tracking-based algorithm for square checkerboard patterns and a Hough Transform-based algorithm for concentric circular patterns are developed to ensure reliable detection and tracking of image features. Photogrammetry techniques are applied to reconstruct the 3-D world points and extract structural displacements. The proposed methodology is validated by monitoring the displacements of a full-scale 6-story mass timber building during a series of shake table tests. Reasonable accuracy is achieved in that the overall root-mean-square errors of the tracking results are at the millimeter level compared to ground truth measurements from analog sensors. Insights on optimal features for monitoring structural dynamic response are discussed based on statistical analysis of the error characteristics for the various reference target patterns used to track the structural displacements. Full article
(This article belongs to the Special Issue Algorithms for Image Processing and Machine Vision)
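A heavily simplified, planar sketch of the image-to-world mapping idea is shown below: a homography is estimated once from known target positions on an assumed planar wall with a stationary camera, and tracked pixels in a later frame are mapped to world coordinates to read off displacements; all coordinates are made up, and the paper's actual pipeline handles the moving UAV camera and full 3-D reconstruction.
```python
# Simplified planar sketch (made-up coordinates; not the paper's 3-D photogrammetry
# pipeline, which handles a moving UAV camera): map tracked pixels to world
# coordinates through a homography estimated from reference targets, then take
# differences between frames as displacements.
import numpy as np
import cv2

# Known target positions on an assumed planar wall, in metres.
world_pts = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 3.0], [0.0, 3.0]], dtype=np.float32)

# Hypothetical pixel detections of the same targets in a reference frame and a later frame.
frame0_px = np.array([[105, 820], [610, 815], [612, 160], [108, 165]], dtype=np.float32)
frame1_px = np.array([[107, 818], [613, 813], [615, 158], [110, 163]], dtype=np.float32)

# Image-to-world homography calibrated on the reference frame (camera assumed stationary).
H, _ = cv2.findHomography(frame0_px, world_pts)

def to_world(px):
    return cv2.perspectiveTransform(px.reshape(-1, 1, 2), H).reshape(-1, 2)

displacement_mm = (to_world(frame1_px) - to_world(frame0_px)) * 1000.0
print(displacement_mm)   # per-target displacement in the wall plane [mm]
```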

16 pages, 239 KiB  
Article
SMOTE vs. SMOTEENN: A Study on the Performance of Resampling Algorithms for Addressing Class Imbalance in Regression Models
by Gazi Husain, Daniel Nasef, Rejath Jose, Jonathan Mayer, Molly Bekbolatova, Timothy Devine and Milan Toma
Algorithms 2025, 18(1), 37; https://doi.org/10.3390/a18010037 - 10 Jan 2025
Cited by 13 | Viewed by 4989
Abstract
Class imbalance is a prevalent challenge in machine learning that arises from skewed data distributions in one class over another, causing models to prioritize the majority class and underperform on the minority classes. This bias can significantly undermine accurate predictions in real-world scenarios, highlighting the importance of the robust handling of imbalanced data for dependable results. This study examines one such scenario of real-time monitoring systems for fall risk assessment in bedridden patients where class imbalance may compromise the effectiveness of machine learning. It compares the effectiveness of two resampling techniques, the Synthetic Minority Oversampling Technique (SMOTE) and SMOTE combined with Edited Nearest Neighbors (SMOTEENN), in mitigating class imbalance and improving predictive performance. Using a controlled sampling strategy across various instance levels, the performance of both methods in conjunction with decision tree regression, gradient boosting regression, and Bayesian regression models was evaluated. The results indicate that SMOTEENN consistently outperforms SMOTE in terms of accuracy and mean squared error across all sample sizes and models. SMOTEENN also demonstrates healthier learning curves, suggesting improved generalization capabilities, particularly for a sampling strategy with a given number of instances. Furthermore, cross-validation analysis reveals that SMOTEENN achieves higher mean accuracy and lower standard deviation compared to SMOTE, indicating more stable and reliable performance. These findings suggest that SMOTEENN is a more effective technique for handling class imbalance, potentially contributing to the development of more accurate and generalizable predictive models in various applications. Full article
(This article belongs to the Special Issue Algorithms in Data Classification (2nd Edition))
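The sketch below shows the core comparison on synthetic imbalanced data, with a classifier standing in for the regression models used in the study (SMOTE and SMOTEENN resample class labels); the dataset, class weights, and downstream model are assumptions.
```python
# Minimal sketch of the SMOTE vs. SMOTEENN comparison on synthetic imbalanced data
# (not the fall-risk dataset): resample the training split with each technique and
# score the same downstream model on an untouched test set.
from imblearn.over_sampling import SMOTE
from imblearn.combine import SMOTEENN
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, sampler in [("SMOTE", SMOTE(random_state=0)),
                      ("SMOTEENN", SMOTEENN(random_state=0))]:
    X_rs, y_rs = sampler.fit_resample(X_tr, y_tr)   # resample the training split only
    clf = GradientBoostingClassifier(random_state=0).fit(X_rs, y_rs)
    score = balanced_accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: balanced accuracy = {score:.3f}")
```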

27 pages, 553 KiB  
Systematic Review
Integrating Artificial Intelligence, Internet of Things, and Sensor-Based Technologies: A Systematic Review of Methodologies in Autism Spectrum Disorder Detection
by Georgios Bouchouras and Konstantinos Kotis
Algorithms 2025, 18(1), 34; https://doi.org/10.3390/a18010034 - 9 Jan 2025
Cited by 7 | Viewed by 3182
Abstract
This paper presents a systematic review of the emerging applications of artificial intelligence (AI), Internet of Things (IoT), and sensor-based technologies in the diagnosis of autism spectrum disorder (ASD). The integration of these technologies has led to promising advances in identifying unique behavioral, physiological, and neuroanatomical markers associated with ASD. Through an examination of recent studies, we explore how technologies such as wearable sensors, eye-tracking systems, virtual reality environments, neuroimaging, and microbiome analysis contribute to a holistic approach to ASD diagnostics. The analysis reveals how these technologies facilitate non-invasive, real-time assessments across diverse settings, enhancing both diagnostic accuracy and accessibility. The findings underscore the transformative potential of AI, IoT, and sensor-based tools in providing personalized and continuous ASD detection, advocating for data-driven approaches that extend beyond traditional methodologies. Ultimately, this review emphasizes the role of technology in improving ASD diagnostic processes, paving the way for targeted and individualized assessments. Full article

14 pages, 252 KiB  
Article
Impossibility Results for Byzantine-Tolerant State Observation, Synchronization, and Graph Computation Problems
by Ajay D. Kshemkalyani and Anshuman Misra
Algorithms 2025, 18(1), 26; https://doi.org/10.3390/a18010026 - 5 Jan 2025
Cited by 1 | Viewed by 847
Abstract
This paper considers the solvability of several fundamental problems in asynchronous message-passing distributed systems in the presence of Byzantine processes using distributed algorithms. These problems are the following: mutual exclusion, global snapshot recording, termination detection, deadlock detection, predicate detection, causal ordering, spanning tree construction, minimum spanning tree construction, all–all shortest paths computation, and maximal independent set computation. In a distributed algorithm, each process has access only to its local variables and incident edge parameters. We show the impossibility of solving these fundamental problems by proving that they require a solution to the causality determination problem which has been shown to be unsolvable in asynchronous message-passing distributed systems. Full article
(This article belongs to the Special Issue Graph Theory and Algorithmic Applications: Theoretical Developments)
23 pages, 205579 KiB  
Article
DDL R-CNN: Dynamic Direction Learning R-CNN for Rotated Object Detection
by Weixian Su and Donglin Jing
Algorithms 2025, 18(1), 21; https://doi.org/10.3390/a18010021 - 4 Jan 2025
Cited by 3 | Viewed by 1954
Abstract
Current remote sensing (RS) detectors often rely on predefined anchor boxes with fixed angles to handle the multi-directional variations of targets. This approach makes it challenging to accurately select regions of interest and extract features that align with the direction of the targets. Most existing regression methods also adopt angle regression to match the attributes of remote sensing detectors. Due to the inconsistent regression direction and massive anchor boxes with a high aspect ratio, the extracted target features change greatly, the loss function changes drastically, and the training is unstable. However, existing RS detectors and regression techniques have not been able to effectively balance the precision of directional feature extraction with the complexity of the models. To address these challenges, this paper introduces a novel approach known as Dynamic Direction Learning R-CNN (DDL R-CNN), which comprises a dynamic direction learning (DDL) module and a boundary center region offset generation network (BC-ROPN). The DDL module pre-extracts the directional features of targets to provide a coarse estimation of their angles and the corresponding weights. This information is used to generate rotationally aligned anchor boxes that better model the directional features of the targets. BC-ROPN represents an innovative method for anchor box regression. It utilizes the central features of the maximum bounding rectangle’s width and height, along with the coarse angle estimation and weights derived from DDL module, to refine the orientation of the anchor box. Our method has been proven to surpass existing rotating detection networks in extensive testing across two widely used remote sensing detection datasets, namely UCAS-AOD and HRSC2016. Full article
(This article belongs to the Special Issue Algorithms for Image Processing and Machine Vision)

25 pages, 1936 KiB  
Article
A Scalable Framework for Sensor Data Ingestion and Real-Time Processing in Cloud Manufacturing
by Massimo Pacella, Antonio Papa, Gabriele Papadia and Emiliano Fedeli
Algorithms 2025, 18(1), 22; https://doi.org/10.3390/a18010022 - 4 Jan 2025
Cited by 6 | Viewed by 2364
Abstract
Cloud Manufacturing enables the integration of geographically distributed manufacturing resources through advanced Cloud Computing and IoT technologies. This paradigm promotes the development of scalable and adaptable production systems. However, existing frameworks face challenges related to scalability, resource orchestration, and data security, particularly in rapidly evolving decentralized manufacturing settings. This study presents a novel nine-layer architecture designed specifically to address these issues. Central to this framework is the use of Apache Kafka for robust, high-throughput data ingestion, and Apache Spark Streaming to enhance real-time data processing. This framework is underpinned by a microservice-based architecture that ensures a high scalability and reduced latency. Experimental validation using sensor data from the UCI Machine Learning Repository demonstrated substantial improvements in processing efficiency and throughput compared with conventional frameworks. Key components, such as RabbitMQ, contribute to low-latency performance, whereas Kafka ensures data durability and supports real-time application. Additionally, the in-memory data processing of Spark Streaming enables rapid and dynamic data analysis, yielding actionable insights. The experimental results highlight the potential of the framework to enhance operational efficiency, resource utilization, and data security, offering a resilient solution suited to the demands of modern industrial applications. This study underscores the contribution of the framework to advancing Cloud Manufacturing by providing detailed insights into its performance, scalability, and applicability to contemporary manufacturing ecosystems. Full article
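The ingestion side of such a pipeline can be sketched as below with a Kafka producer publishing JSON sensor readings; the broker address, topic name, machine identifiers, and payload fields are assumptions for illustration, not the framework's actual configuration.
```python
# Minimal sketch of the ingestion layer (assumed broker, topic, and payload format,
# not the framework's actual configuration): publish sensor readings to Kafka as
# JSON, to be consumed downstream by Spark Streaming.
import json
import random
import time
from kafka import KafkaProducer  # kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for _ in range(1000):
    reading = {
        "machine_id": "milling_07",          # hypothetical identifiers and fields
        "timestamp": time.time(),
        "spindle_temp_c": round(random.gauss(55.0, 2.0), 2),
        "vibration_rms": round(random.gauss(0.8, 0.1), 3),
    }
    producer.send("sensor-readings", reading)  # topic name is an assumption
    time.sleep(0.1)                            # ~10 readings per second

producer.flush()
```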

19 pages, 521 KiB  
Review
A Review on Inverse Kinematics, Control and Planning for Robotic Manipulators With and Without Obstacles via Deep Neural Networks
by Ana Calzada-Garcia, Juan G. Victores, Francisco J. Naranjo-Campos and Carlos Balaguer
Algorithms 2025, 18(1), 23; https://doi.org/10.3390/a18010023 - 4 Jan 2025
Cited by 5 | Viewed by 5027
Abstract
Robotic manipulators are highly valuable tools that have become widespread in the industry, as they can achieve great precision and velocity in pick and place as well as processing tasks. However, to unlock their complete potential, some problems such as inverse kinematics (IK) need to be solved: given a Cartesian target, a method is needed to find the right configuration for the robot to reach that point. Another issue that needs to be addressed when dealing with robotic manipulators is the obstacle avoidance problem. Workspaces are usually cluttered and the manipulator should be able to avoid colliding with objects that could damage it, as well as with itself. Two alternatives exist to do this: a controller can be designed that computes the best action for each moment given the manipulator’s state, or a sequence of movements can be planned to be executed by the robot. Classical approaches to all these problems, such as numeric or analytical methods, can produce precise results but take a high computation time and do not always converge. Learning-based methods have gained considerable attention in tackling the IK problem, as well as motion planning and control. These methods can reduce the computational cost and provide results for every situation avoiding singularities. This article presents a literature review of the advances made in the past five years in the use of Deep Neural Networks (DNN) for IK with regard to control and planning with and without obstacles for rigid robotic manipulators. The literature has been organized in several categories depending on the type of DNN used to solve the problem. The main contributions of each reference are reviewed and the best results are presented in summary tables. Full article
(This article belongs to the Special Issue Optimization Methods for Advanced Manufacturing)
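As a toy counterpart to the learning-based IK methods surveyed, the sketch below trains a small network to invert the forward kinematics of a planar two-link arm; the link lengths, joint ranges (restricted so the mapping is single-valued), and network size are assumptions.
```python
# Minimal sketch of learning-based IK for a planar two-link arm (a toy stand-in for
# the manipulators surveyed): sample joint angles, compute forward kinematics, and
# train a network to map end-effector positions back to joint angles.
import numpy as np
from sklearn.neural_network import MLPRegressor

L1, L2 = 1.0, 0.8  # assumed link lengths [m]

def forward_kinematics(q):
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.column_stack([x, y])

rng = np.random.default_rng(0)
# Restricted joint ranges so the inverse mapping is single-valued and smooth.
q = np.column_stack([rng.uniform(-np.pi / 2, np.pi / 2, 20000),
                     rng.uniform(0.1, np.pi - 0.1, 20000)])
p = forward_kinematics(q)

ik_net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300, random_state=0)
ik_net.fit(p, q)   # learn position -> joint angles

target = np.array([[1.2, 0.6]])
q_pred = ik_net.predict(target)
print("predicted joints:", q_pred)
print("reached position:", forward_kinematics(q_pred))
```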

20 pages, 3157 KiB  
Article
Verifying Mutual Exclusion Algorithms with Non-Atomic Registers
by Libero Nigro
Algorithms 2024, 17(12), 536; https://doi.org/10.3390/a17120536 - 22 Nov 2024
Cited by 1 | Viewed by 2313
Abstract
The work described in this paper develops a formal method for modeling and exhaustive verification of mutual exclusion algorithms. The process is based on timed automata and the Uppaal model checker. The technique was successfully applied to several mutual exclusion algorithms, mainly under the atomic memory model, where the read and write operations on memory cells (registers) are atomic or indivisible. The original contribution of this paper consists of a generalization of the approach to support modeling mutual exclusion algorithms with non-atomic registers, where multiple read operations can occur on a register simultaneously with a write operation on the same register, thus giving rise to the flickering phenomenon, or multiple write operations can occur at the same time on the same register, hence determining the scrambling phenomenon. The paper first clarifies some consistency rules of non-atomic registers. Then, the developed Uppaal-based method for specifying and verifying mutual exclusion algorithms is presented. The method is applied to the correctness assessment of a sample mutual exclusion solution. After that, non-atomic register consistency rules are rendered in Uppaal to be embedded in the specification methodology. The paper goes on to present different mutual exclusion algorithms that are studied using non-atomic registers. Algorithms are also investigated in the context of a tournament tree organization that can provide standard and efficient mutual exclusion solutions for N>2 processes. The paper compares the proposed techniques for handling non-atomic registers and reports on their application to many other mutual exclusion solutions. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)

75 pages, 1896 KiB  
Article
Complete Subhedge Projection for Stepwise Hedge Automata
by Antonio Al Serhali and Joachim Niehren
Algorithms 2024, 17(8), 339; https://doi.org/10.3390/a17080339 - 2 Aug 2024
Viewed by 2127
Abstract
We demonstrate how to evaluate stepwise hedge automata (Shas) with subhedge projection while completely projecting irrelevant subhedges. Since this requires passing finite state information top-down, we introduce the notion of downward stepwise hedge automata. We use them to define in-memory and streaming evaluators with complete subhedge projection for Shas. We then tune the evaluators so that they can decide on membership at the earliest time point. We apply our algorithms to the problem of answering regular XPath queries on Xml streams. Our experiments show that complete subhedge projection of Shas can indeed speed up earliest query answering on Xml streams so that it becomes competitive with the best existing streaming tools for XPath queries. Full article
(This article belongs to the Special Issue Selected Algorithmic Papers From FCT 2023)

11 pages, 250 KiB  
Article
Hardness and Approximability of Dimension Reduction on the Probability Simplex
by Roberto Bruno
Algorithms 2024, 17(7), 296; https://doi.org/10.3390/a17070296 - 6 Jul 2024
Cited by 1 | Viewed by 2727
Abstract
Dimension reduction is a technique used to transform data from a high-dimensional space into a lower-dimensional space, aiming to retain as much of the original information as possible. This approach is crucial in many disciplines like engineering, biology, astronomy, and economics. In this paper, we consider the following dimensionality reduction instance: Given an n-dimensional probability distribution p and an integer m<n, we aim to find the m-dimensional probability distribution q that is the closest to p, using the Kullback–Leibler divergence as the measure of closeness. We prove that the problem is strongly NP-hard, and we present an approximation algorithm for it. Full article
(This article belongs to the Special Issue Selected Algorithmic Papers from IWOCA 2024)
14 pages, 312 KiB  
Article
Parsing Unranked Tree Languages, Folded Once
by Martin Berglund, Henrik Björklund and Johanna Björklund
Algorithms 2024, 17(6), 268; https://doi.org/10.3390/a17060268 - 19 Jun 2024
Viewed by 1215
Abstract
A regular unranked tree folding consists of a regular unranked tree language and a folding operation that merges (i.e., folds) selected nodes of a tree to form a graph; the combination is a formal device for representing graph languages. If, in the process of folding, the order among edges is discarded so that the result is an unordered graph, then two applications of a fold operation are enough to make the associated parsing problem NP-complete. However, if the order is kept, then the problem is solvable in non-uniform polynomial time. In this paper, we address the remaining case, where only one fold operation is applied, but the order among the edges is discarded. We show that, under these conditions, the problem is solvable in non-uniform polynomial time. Full article
(This article belongs to the Special Issue Selected Algorithmic Papers From FCT 2023)

13 pages, 346 KiB  
Article
Minimizing Query Frequency to Bound Congestion Potential for Moving Entities at a Fixed Target Time
by William Evans and David Kirkpatrick
Algorithms 2024, 17(6), 246; https://doi.org/10.3390/a17060246 - 6 Jun 2024
Viewed by 1457
Abstract
Consider a collection of entities moving continuously with bounded speed, but otherwise unpredictably, in some low-dimensional space. Two such entities encroach upon one another at a fixed time if their separation is less than some specified threshold. Encroachment, of concern in many settings such as collision avoidance, may be unavoidable. However, the associated difficulties are compounded if there is uncertainty about the precise location of entities, giving rise to potential encroachment and, more generally, potential congestion within the full collection. We adopt a model in which entities can be queried for their current location (at some cost) and the uncertainty region associated with an entity grows in proportion to the time since that entity was last queried. The goal is to maintain low potential congestion, measured in terms of the (dynamic) intersection graph of uncertainty regions, at specified (possibly all) times, using the lowest possible query cost. Previous work in the same uncertainty model addressed the problem of minimizing the congestion potential of point entities using location queries of some bounded frequency. It was shown that it is possible to design query schemes that are O(1)-competitive, in terms of worst-case congestion potential, with other, even clairvoyant query schemes (that exploit knowledge of the trajectories of all entities), subject to the same bound on query frequency. In this paper, we initiate the treatment of a more general problem with the complementary optimization objective: minimizing the query frequency, measured as the reciprocal of the minimum time between queries (granularity), while guaranteeing a fixed bound on congestion potential of entities with positive extent at one specified target time. This complementary objective necessitates quite different schemes and analyses. Nevertheless, our results parallel those of the earlier papers, specifically tight competitive bounds on required query frequency. Full article
(This article belongs to the Special Issue Selected Algorithmic Papers From FCT 2023)

20 pages, 526 KiB  
Systematic Review
Prime Number Sieving—A Systematic Review with Performance Analysis
by Mircea Ghidarcea and Decebal Popescu
Algorithms 2024, 17(4), 157; https://doi.org/10.3390/a17040157 - 14 Apr 2024
Cited by 5 | Viewed by 4705
Abstract
The systematic generation of prime numbers has been almost ignored since the 1990s, when most of the IT research resources related to prime numbers migrated to studies on the use of very large primes for cryptography, and little effort was made to further the knowledge regarding techniques like sieving. At present, sieving techniques are mostly used for didactic purposes, and no real advances seem to be made in this domain. This systematic review analyzes the theoretical advances in sieving that have occurred up to the present. The research followed the PRISMA 2020 guidelines and was conducted using three established databases: Web of Science, IEEE Xplore and Scopus. Our methodical review aims to provide an extensive overview of the progress in prime sieving—unfortunately, no significant advancements in this field were identified in the last 20 years. Full article
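For reference, the classical sieve of Eratosthenes that the surveyed literature builds on is sketched below as a baseline; it is not one of the segmented or wheel-optimized variants discussed in the review.
```python
# The classic sieve of Eratosthenes, included as a baseline illustration of the
# technique the review surveys (not one of the optimized variants it discusses).
def sieve(limit: int) -> list[int]:
    """Return all primes <= limit."""
    if limit < 2:
        return []
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Start striking at p*p; smaller multiples were already removed.
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    return [i for i, flag in enumerate(is_prime) if flag]

print(sieve(50))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```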

22 pages, 1016 KiB  
Article
Multi-Objective BiLevel Optimization by Bayesian Optimization
by Vedat Dogan and Steven Prestwich
Algorithms 2024, 17(4), 146; https://doi.org/10.3390/a17040146 - 30 Mar 2024
Cited by 2 | Viewed by 4265
Abstract
In a multi-objective optimization problem, a decision maker has more than one objective to optimize. In a bilevel optimization problem, there are the following two decision-makers in a hierarchy: a leader who makes the first decision and a follower who reacts, each aiming to optimize their own objective. Many real-world decision-making processes have various objectives to optimize at the same time while considering how the decision-makers affect each other. When both features are combined, we have a multi-objective bilevel optimization problem, which arises in manufacturing, logistics, environmental economics, defence applications and many other areas. Many exact and approximation-based techniques have been proposed, but because of the intrinsic nonconvexity and conflicting multiple objectives, their computational cost is high. We propose a hybrid algorithm based on batch Bayesian optimization to approximate the upper-level Pareto-optimal solution set. We also extend our approach to handle uncertainty in the leader’s objectives via a hypervolume improvement-based acquisition function. Experiments show that our algorithm is more efficient than other current methods while successfully approximating Pareto-fronts. Full article

16 pages, 314 KiB  
Article
Closest Farthest Widest
by Kenneth Lange
Algorithms 2024, 17(3), 95; https://doi.org/10.3390/a17030095 - 22 Feb 2024
Cited by 1 | Viewed by 2588
Abstract
The current paper proposes and tests algorithms for finding the diameter of a compact convex set and the farthest point in the set to another point. For these two nonconvex problems, I construct Frank–Wolfe and projected gradient ascent algorithms. Although these algorithms are guaranteed to go uphill, they can become trapped by local maxima. To avoid this defect, I investigate a homotopy method that gradually deforms a ball into the target set. Motivated by the Frank–Wolfe algorithm, I also find the support function of the intersection of a convex cone and a ball centered at the origin and elaborate a known bisection algorithm for calculating the support function of a convex sublevel set. The Frank–Wolfe and projected gradient algorithms are tested on five compact convex sets: (a) the box whose coordinates range between −1 and 1, (b) the intersection of the unit ball and the non-negative orthant, (c) the probability simplex, (d) the Manhattan-norm unit ball, and (e) a sublevel set of the elastic net penalty. Frank–Wolfe and projected gradient ascent are about equally fast on these test problems. Ignoring homotopy, the Frank–Wolfe algorithm is more reliable. However, homotopy allows projected gradient ascent to recover from its failures. Full article
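A minimal version of the projected gradient ascent idea on one of the listed test sets (the box with coordinates between -1 and 1) is sketched below; the step size, iteration count, and starting point are assumptions.
```python
# Minimal sketch of projected gradient ascent for the farthest-point problem on the
# box [-1, 1]^n: maximize f(x) = 0.5*||x - y||^2 subject to x in the box.
import numpy as np

def farthest_point_in_box(y, n_iter=200, step=0.5):
    x = np.zeros_like(y)                          # start at the box centre
    for _ in range(n_iter):
        grad = x - y                              # gradient of 0.5*||x - y||^2
        x = np.clip(x + step * grad, -1.0, 1.0)   # ascent step, then project onto the box
    return x

y = np.array([0.3, -0.7, 0.1])
x_star = farthest_point_in_box(y)
print("farthest point:", x_star)                  # expected: [-1, 1, -1]
print("distance:", np.linalg.norm(x_star - y))
```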
14 pages, 975 KiB  
Article
What Is a Causal Graph?
by Philip Dawid
Algorithms 2024, 17(3), 93; https://doi.org/10.3390/a17030093 - 21 Feb 2024
Cited by 1 | Viewed by 4267
Abstract
This article surveys the variety of ways in which a directed acyclic graph (DAG) can be used to represent a problem of probabilistic causality. For each of these ways, we describe the relevant formal or informal semantics governing that representation. It is suggested that the cleanest such representation is that embodied in an augmented DAG, which contains nodes for non-stochastic intervention indicators in addition to the usual nodes for domain variables. Full article
(This article belongs to the Special Issue Bayesian Networks and Causal Reasoning)

20 pages, 351 KiB  
Article
A Novel Higher-Order Numerical Scheme for System of Nonlinear Load Flow Equations
by Fiza Zafar, Alicia Cordero, Husna Maryam and Juan R. Torregrosa
Algorithms 2024, 17(2), 86; https://doi.org/10.3390/a17020086 - 18 Feb 2024
Viewed by 2469
Abstract
Power flow problems can be solved in a variety of ways by using the Newton–Raphson approach. The nonlinear power flow equations depend upon the voltages Vi and phase angles δ. The Jacobian of an electrical power system is obtained by taking the partial derivatives of the load flow equations, which contain the active and reactive powers. In this paper, we present an efficient seventh-order iterative scheme to obtain the solutions of a nonlinear system of equations, with only three steps in its formulation. Then, we illustrate the computational cost of different operations such as matrix–matrix multiplication, matrix–vector multiplication, and LU decomposition, which is then used to calculate the cost of our proposed method and compare it with the cost of existing seventh-order methods. Furthermore, we elucidate the applicability of our newly developed scheme to an electrical power system. The two-bus, three-bus, and four-bus power flow problems are then solved by using the load flow equations, demonstrating the applicability of the new scheme. Full article
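For context, the baseline Newton-Raphson iteration that such higher-order schemes are compared against can be sketched on a two-bus problem as below; the line impedance, load, and flat start are assumed values, and a forward-difference Jacobian stands in for the analytical one.
```python
# Baseline Newton-Raphson load flow on an assumed two-bus system (not the paper's
# seventh-order scheme or its test data): bus 1 is the slack bus (1.0 p.u., 0 rad),
# bus 2 carries a 0.8 + j0.4 p.u. load over a line of impedance 0.05 + j0.20 p.u.
import numpy as np

y = 1.0 / (0.05 + 0.20j)
Ybus = np.array([[y, -y], [-y, y]])
G, B = Ybus.real, Ybus.imag

P_spec, Q_spec = -0.8, -0.4   # injected power at bus 2 (load => negative injection)
V1, d1 = 1.0, 0.0

def mismatch(x):
    d2, V2 = x
    V, d = np.array([V1, V2]), np.array([d1, d2])
    P2 = V2 * sum(V[k] * (G[1, k] * np.cos(d2 - d[k]) + B[1, k] * np.sin(d2 - d[k])) for k in range(2))
    Q2 = V2 * sum(V[k] * (G[1, k] * np.sin(d2 - d[k]) - B[1, k] * np.cos(d2 - d[k])) for k in range(2))
    return np.array([P2 - P_spec, Q2 - Q_spec])

def newton_raphson(f, x0, tol=1e-10, max_iter=20, h=1e-7):
    x = np.asarray(x0, dtype=float)
    for it in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx, np.inf) < tol:
            return x, it
        J = np.zeros((len(fx), len(x)))       # forward-difference Jacobian
        for j in range(len(x)):
            xp = x.copy(); xp[j] += h
            J[:, j] = (f(xp) - fx) / h
        x = x - np.linalg.solve(J, fx)
    return x, max_iter

solution, iters = newton_raphson(mismatch, x0=[0.0, 1.0])   # flat start
print(f"bus 2: angle = {np.degrees(solution[0]):.3f} deg, |V| = {solution[1]:.4f} p.u. ({iters} iterations)")
```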

63 pages, 3409 KiB  
Review
Survey of Recent Applications of the Chaotic Lozi Map
by René Lozi
Algorithms 2023, 16(10), 491; https://doi.org/10.3390/a16100491 - 22 Oct 2023
Cited by 11 | Viewed by 6683
Abstract
Since its original publication in 1978, Lozi’s chaotic map has been thoroughly explored and continues to be. Hundreds of publications have analyzed its particular structure and applied its properties in many fields (e.g., improvement of physical devices, electrical components such as memristors, cryptography, optimization, evolutionary algorithms, synchronization, control, secure communications, AI with swarm intelligence, chimeras, solitary states, etc.) through algorithms such as the COLM algorithm (Chaotic Optimization algorithm based on Lozi Map), Particle Swarm Optimization (PSO), and Differential Evolution (DE). In this article, we present a survey based on dozens of articles on the use of this map in algorithms aimed at real applications or applications exploring new directions of dynamical systems such as chimeras and solitary states. Full article
(This article belongs to the Special Issue Surveys in Algorithm Analysis and Complexity Theory, Part II)
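The map itself is two lines; the sketch below iterates it at the classical parameter values a = 1.7 and b = 0.5, with an arbitrary initial condition and transient length chosen for illustration.
```python
# The Lozi map iterated at the classical parameter values a = 1.7, b = 0.5
# (the piecewise-linear analogue of the Henon map); the initial condition and
# transient length are arbitrary choices for illustration.
def lozi_orbit(a=1.7, b=0.5, x0=0.1, y0=0.1, n=10000, transient=100):
    x, y = x0, y0
    points = []
    for i in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        if i >= transient:          # discard the initial transient
            points.append((x, y))
    return points

orbit = lozi_orbit()
xs = [p[0] for p in orbit]
print(f"{len(orbit)} points on the attractor, x in [{min(xs):.3f}, {max(xs):.3f}]")
```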

20 pages, 1849 KiB  
Review
Artificial Intelligence for Management Information Systems: Opportunities, Challenges, and Future Directions
by Stela Stoykova and Nikola Shakev
Algorithms 2023, 16(8), 357; https://doi.org/10.3390/a16080357 - 26 Jul 2023
Cited by 14 | Viewed by 25823
Abstract
The aim of this paper is to present a systematic literature review of the existing research, published between 2006 and 2023, in the field of artificial intelligence for management information systems. Of the 3946 studies that were considered by the authors, 60 primary studies were selected for analysis. The analysis shows that most research is focused on the application of AI for intelligent process automation, with an increasing number of studies focusing on predictive analytics and natural language processing. With respect to the platforms used by AI researchers, the study finds that cloud-based solutions are preferred over on-premises ones. A new research trend of deploying AI applications at the edge of industrial networks and utilizing federated learning is also identified. The need to focus research efforts on developing guidelines and frameworks in terms of ethics, data privacy, and security for AI adoption in MIS is highlighted. Developing a unified digital business strategy and overcoming barriers to user–AI engagement are some of the identified challenges to obtaining business value from AI integration. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)

36 pages, 6469 KiB  
Article
Physics-Informed Deep Learning for Traffic State Estimation: A Survey and the Outlook
by Xuan Di, Rongye Shi, Zhaobin Mo and Yongjie Fu
Algorithms 2023, 16(6), 305; https://doi.org/10.3390/a16060305 - 17 Jun 2023
Cited by 29 | Viewed by 7683
Abstract
For its robust predictive power (compared to pure physics-based models) and sample-efficient training (compared to pure deep learning models), physics-informed deep learning (PIDL), a paradigm hybridizing physics-based models and deep neural networks (DNNs), has been booming in science and engineering fields. One key challenge of applying PIDL to various domains and problems lies in the design of a computational graph that integrates physics and DNNs. In other words, how the physics is encoded into DNNs and how the physics and data components are represented. In this paper, we offer an overview of a variety of architecture designs of PIDL computational graphs and how these structures are customized to traffic state estimation (TSE), a central problem in transportation engineering. When observation data, problem type, and goal vary, we demonstrate potential architectures of PIDL computational graphs and compare these variants using the same real-world dataset. Full article

17 pages, 2718 KiB  
Review
Enhancing Social Media Platforms with Machine Learning Algorithms and Neural Networks
by Hamed Taherdoost
Algorithms 2023, 16(6), 271; https://doi.org/10.3390/a16060271 - 29 May 2023
Cited by 15 | Viewed by 13165
Abstract
Network analysis aids management in reducing overall expenditures and maintenance workload. Social media platforms frequently use neural networks to suggest material that corresponds with user preferences. Machine learning is one of many methods for social network analysis. Machine learning algorithms operate on a [...] Read more.
Network analysis aids management in reducing overall expenditures and maintenance workload. Social media platforms frequently use neural networks to suggest material that corresponds with user preferences. Machine learning is one of many methods for social network analysis. Machine learning algorithms operate on a collection of observable features that are taken from user data. Machine learning and neural network-based systems represent a topic of study that spans several fields. Computers can now recognize the emotions behind particular content uploaded by users to social media networks thanks to machine learning. This study examines research on machine learning and neural networks, with an emphasis on social analysis in the context of the current literature. Full article
(This article belongs to the Special Issue Machine Learning in Social Network Analytics)

24 pages, 8508 KiB  
Article
From Activity Recognition to Simulation: The Impact of Granularity on Production Models in Heavy Civil Engineering
by Anne Fischer, Alexandre Beiderwellen Bedrikow, Iris D. Tommelein, Konrad Nübel and Johannes Fottner
Algorithms 2023, 16(4), 212; https://doi.org/10.3390/a16040212 - 18 Apr 2023
Cited by 12 | Viewed by 5094
Abstract
As in manufacturing with its Industry 4.0 transformation, the enormous potential of artificial intelligence (AI) is also being recognized in the construction industry. Specifically, the equipment-intensive construction industry can benefit from using AI. AI applications can leverage the data recorded by the numerous [...] Read more.
As in manufacturing with its Industry 4.0 transformation, the enormous potential of artificial intelligence (AI) is also being recognized in the construction industry. Specifically, the equipment-intensive construction industry can benefit from using AI. AI applications can leverage the data recorded by the numerous sensors on machines and mirror them in a digital twin. Analyzing the digital twin can help optimize processes on the construction site and increase productivity. We present a case from special foundation engineering: the machine production of bored piles. We introduce a hierarchical classification for activity recognition and apply a hybrid deep learning model based on convolutional and recurrent neural networks. Then, based on the results from the activity detection, we use discrete-event simulation to predict construction progress. We highlight the difficulty of defining the appropriate modeling granularity. While activity detection requires equipment movement, simulation requires knowledge of the production flow. Therefore, we present a flow-based production model that can be captured in a modularized process catalog. Overall, this paper aims to illustrate modeling using digital-twin technologies to increase construction process improvement in practice. Full article
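
The hybrid recognition model is described only at a high level in the abstract; the sketch below (an assumption-laden illustration, not the paper's architecture) shows the generic convolutional-plus-recurrent pattern: 1-D convolutions extract local features from multichannel equipment sensor signals and an LSTM aggregates them over time. Channel count, window length, and class count are placeholders.

```python
# Illustrative hybrid CNN-RNN classifier for equipment activity recognition from sensor windows.
import torch
import torch.nn as nn

class ConvLSTMClassifier(nn.Module):
    def __init__(self, n_channels=6, n_classes=5, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(                          # local feature extraction
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)   # temporal context
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        z = self.conv(x)                  # (batch, 32, time / 2)
        z = z.transpose(1, 2)             # (batch, time / 2, 32)
        _, (h_n, _) = self.lstm(z)
        return self.head(h_n[-1])         # class logits per window

model = ConvLSTMClassifier()
logits = model(torch.randn(8, 6, 128))    # 8 windows of 128 samples from 6 sensor channels
```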

21 pages, 558 KiB  
Article
Model-Robust Estimation of Multiple-Group Structural Equation Models
by Alexander Robitzsch
Algorithms 2023, 16(4), 210; https://doi.org/10.3390/a16040210 - 17 Apr 2023
Cited by 8 | Viewed by 3286
Abstract
Structural equation models (SEM) are widely used in the social sciences. They model the relationships between latent variables in structural models, while defining the latent variables by observed variables in measurement models. Frequently, it is of interest to compare particular parameters in an [...] Read more.
Structural equation models (SEM) are widely used in the social sciences. They model the relationships between latent variables in structural models, while defining the latent variables by observed variables in measurement models. Frequently, it is of interest to compare particular parameters in an SEM as a function of a discrete grouping variable. Multiple-group SEM is employed to compare structural relationships between groups. In this article, estimation approaches for multiple-group SEMs are reviewed. We focus on comparing different estimation strategies in the presence of local model misspecifications (i.e., model errors). In detail, maximum likelihood and weighted least-squares estimation approaches are compared with a newly proposed robust Lp loss function and regularized maximum likelihood estimation. The latter methods are referred to as model-robust estimators because they show some resistance to model errors. In particular, we focus on the performance of the different estimators in the presence of unmodelled residual error correlations and measurement noninvariance (i.e., group-specific item intercepts). The performance of the different estimators is compared in two simulation studies and an empirical example. It turned out that the robust loss function approach is computationally much less demanding than regularized maximum likelihood estimation but resulted in similar statistical performance. Full article
(This article belongs to the Special Issue Statistical learning and Its Applications)
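
The robust Lp loss idea can be illustrated outside the full SEM machinery. The toy sketch below (not the article's estimator) contrasts a least-squares estimate with an Lp (p < 2) estimate when one item carries a group-specific intercept; the data, the power p, and the smoothing constant are assumptions.

```python
# Toy comparison of least-squares vs. a robust Lp loss in the presence of one misspecified item.
import numpy as np
from scipy.optimize import minimize

item_means = np.array([0.10, 0.12, 0.08, 0.11, 0.60])   # last item has a group-specific intercept
p = 0.5                                                  # assumed Lp power; p < 2 downweights model errors
eps = 1e-4                                               # smoothing constant keeping the loss differentiable

def lp_loss(mu):
    return np.sum((np.abs(item_means - mu) ** 2 + eps) ** (p / 2))

ls_estimate = item_means.mean()                          # ordinary least-squares estimate
robust = minimize(lp_loss, x0=np.array([ls_estimate]), method="Nelder-Mead")
print(ls_estimate, robust.x[0])   # the Lp estimate stays near 0.10; the LS estimate is pulled toward the outlier
```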

15 pages, 3252 KiB  
Article
An Adversarial DBN-LSTM Method for Detecting and Defending against DDoS Attacks in SDN Environments
by Lei Chen, Zhihao Wang, Ru Huo and Tao Huang
Algorithms 2023, 16(4), 197; https://doi.org/10.3390/a16040197 - 5 Apr 2023
Cited by 20 | Viewed by 3759
Abstract
As an essential piece of infrastructure supporting cyberspace security technology verification, network weapons and equipment testing, attack defense confrontation drills, and network risk assessment, Cyber Range is exceptionally vulnerable to distributed denial of service (DDoS) attacks from three malicious parties. Moreover, some attackers [...] Read more.
As an essential piece of infrastructure supporting cyberspace security technology verification, network weapons and equipment testing, attack defense confrontation drills, and network risk assessment, Cyber Range is exceptionally vulnerable to distributed denial of service (DDoS) attacks from three malicious parties. Moreover, some attackers try to fool the classification/prediction mechanism by crafting the input data to create adversarial attacks, which are hard to defend against for ML-based Network Intrusion Detection Systems (NIDSs). This paper proposes an adversarial DBN-LSTM method for detecting and defending against DDoS attacks in SDN environments, which applies generative adversarial networks (GAN) as well as deep belief networks and long short-term memory (DBN-LSTM) to make the system less sensitive to adversarial attacks and to speed up feature extraction. We conducted the experiments using the public dataset CICDDoS 2019. The experimental results demonstrated that our method efficiently detected up-to-date common types of DDoS attacks compared to other approaches. Full article
(This article belongs to the Special Issue Artificial Intelligence in Intrusion Detection Systems)
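
A much-simplified illustration of the adversarial-training ingredient is given below; it replaces the paper's GAN and DBN components with a plain LSTM classifier and FGSM-style perturbations (a deliberate substitution, not the authors' pipeline), showing how a flow classifier can be trained on both clean and crafted inputs.

```python
# Simplified sketch: an LSTM flow classifier hardened with adversarially perturbed feature sequences.
import torch
import torch.nn as nn

class FlowLSTM(nn.Module):
    def __init__(self, n_features=20, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 2)          # benign vs. DDoS
    def forward(self, x):
        _, (h, _) = self.lstm(x)
        return self.fc(h[-1])

model, loss_fn = FlowLSTM(), nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x, y, eps=0.05):
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    x_adv = (x + eps * grad.sign()).detach()    # crafted variant of the same batch
    opt.zero_grad()
    total = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    total.backward()
    opt.step()
    return total.item()

print(train_step(torch.randn(16, 10, 20), torch.randint(0, 2, (16,))))
```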

19 pages, 1087 KiB  
Article
A Deep Analysis of Brain Tumor Detection from MR Images Using Deep Learning Networks
by Md Ishtyaq Mahmud, Muntasir Mamun and Ahmed Abdelgawad
Algorithms 2023, 16(4), 176; https://doi.org/10.3390/a16040176 - 23 Mar 2023
Cited by 208 | Viewed by 28898
Abstract
Creating machines that behave and work in a way similar to humans is the objective of artificial intelligence (AI). In addition to pattern recognition, planning, and problem-solving, computer activities with artificial intelligence include other activities. A group of algorithms called “deep learning” is [...] Read more.
Creating machines that behave and work in a way similar to humans is the objective of artificial intelligence (AI). In addition to pattern recognition, planning, and problem-solving, computer activities with artificial intelligence include other activities. A group of algorithms called “deep learning” is used in machine learning. With the aid of magnetic resonance imaging (MRI), deep learning is utilized to create models for the detection and categorization of brain tumors. This allows for the quick and simple identification of brain tumors. Brain disorders are mostly the result of aberrant brain cell proliferation, which can harm the structure of the brain and ultimately result in malignant brain cancer. The early identification of brain tumors and the subsequent appropriate treatment may lower the death rate. In this study, we suggest a convolutional neural network (CNN) architecture for the efficient identification of brain tumors using MR images. This paper also discusses various models such as ResNet-50, VGG16, and Inception V3 and conducts a comparison between the proposed architecture and these models. To analyze the performance of the models, we considered different metrics such as the accuracy, recall, loss, and area under the curve (AUC). As a result of analyzing different models with our proposed model using these metrics, we concluded that the proposed model performed better than the others. Using a dataset of 3264 MR images, we found that the CNN model had an accuracy of 93.3%, an AUC of 98.43%, a recall of 91.19%, and a loss of 0.25. We may infer that the proposed model is reliable for the early detection of a variety of brain tumors after comparing it to the other models. Full article
(This article belongs to the Special Issue Machine Learning in Healthcare and Biomedical Application II)
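
As a rough illustration of the kind of CNN the paper proposes (the exact layer configuration is not reproduced here), a minimal binary tumor classifier for resized grayscale MR slices might look as follows; metrics such as AUC and recall can then be computed with standard libraries.

```python
# Illustrative small CNN for binary tumor / no-tumor classification of grayscale MR slices.
import torch
import torch.nn as nn

class TumorCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)        # single logit; sigmoid gives tumor probability

    def forward(self, x):                         # x: (batch, 1, 128, 128) normalized slices
        return self.classifier(self.features(x).flatten(1))

model = TumorCNN()
prob = torch.sigmoid(model(torch.randn(4, 1, 128, 128)))   # probabilities for 4 slices
```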

13 pages, 1355 KiB  
Article
Model Parallelism Optimization for CNN FPGA Accelerator
by Jinnan Wang, Weiqin Tong and Xiaoli Zhi
Algorithms 2023, 16(2), 110; https://doi.org/10.3390/a16020110 - 14 Feb 2023
Cited by 12 | Viewed by 4709
Abstract
Convolutional neural networks (CNNs) have made impressive achievements in image classification and object detection. For hardware with limited resources, it is not easy to achieve CNN inference with a large number of parameters without external storage. Model parallelism is an effective way to [...] Read more.
Convolutional neural networks (CNNs) have made impressive achievements in image classification and object detection. For hardware with limited resources, it is not easy to achieve CNN inference with a large number of parameters without external storage. Model parallelism is an effective way to reduce resource usage by distributing CNN inference among several devices. However, parallelizing a CNN model is not easy, because CNN models have an essentially tightly-coupled structure. In this work, we propose a novel model parallelism method to decouple the CNN structure with group convolution and a new channel shuffle procedure. Our method could eliminate inter-device synchronization while reducing the memory footprint of each device. Using the proposed model parallelism method, we designed a parallel FPGA accelerator for the classic CNN model ShuffleNet. This accelerator was further optimized with features such as aggregate read and kernel vectorization to fully exploit the hardware-level parallelism of the FPGA. We conducted experiments with ShuffleNet on two FPGA boards, each of which had an Intel Arria 10 GX1150 and 16GB DDR3 memory. The experimental results showed that when using two devices, ShuffleNet achieved a 1.42× speed increase and reduced its memory footprint by 34%, as compared to its non-parallel counterpart, while maintaining accuracy. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
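
The decoupling mechanism named in the abstract (group convolution followed by channel shuffle) can be sketched independently of the FPGA design. The snippet below is an illustrative software rendering, with the group count and tensor sizes chosen arbitrarily.

```python
# Group convolution plus channel shuffle: the channel groups can be computed on separate
# devices between synchronization points, which is the decoupling the accelerator exploits.
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    b, c, h, w = x.shape
    return (x.view(b, groups, c // groups, h, w)   # split channels into groups
             .transpose(1, 2)                      # interleave the groups
             .reshape(b, c, h, w))

block = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=2),  # groups=2: two independent halves
    nn.ReLU(),
)
x = torch.randn(1, 64, 32, 32)
y = channel_shuffle(block(x), groups=2)   # shuffle restores cross-group information flow
```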

28 pages, 953 KiB  
Article
Enhancing Logistic Regression Using Neural Networks for Classification in Actuarial Learning
by George Tzougas and Konstantin Kutzkov
Algorithms 2023, 16(2), 99; https://doi.org/10.3390/a16020099 - 9 Feb 2023
Cited by 12 | Viewed by 7854
Abstract
We developed a methodology for the neural network boosting of logistic regression aimed at learning an additional model structure from the data. In particular, we constructed two classes of neural network-based models: shallow–dense neural networks with one hidden layer and deep neural networks [...] Read more.
We developed a methodology for the neural network boosting of logistic regression aimed at learning an additional model structure from the data. In particular, we constructed two classes of neural network-based models: shallow–dense neural networks with one hidden layer and deep neural networks with multiple hidden layers. Furthermore, several advanced approaches were explored, including the combined actuarial neural network approach, embeddings and transfer learning. The model training was achieved by minimizing either the deviance or the cross-entropy loss functions, leading to fourteen neural network-based models in total. For illustrative purposes, logistic regression and the alternative neural network-based models we propose are employed for a binary classification exercise concerning the occurrence of at least one claim in a French motor third-party insurance portfolio. Finally, the model interpretability issue was addressed via the local interpretable model-agnostic explanations approach. Full article
(This article belongs to the Special Issue Deep Neural Networks and Optimization Algorithms)
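
The boosting idea, adding a neural network correction to a fitted logistic regression on the logit scale, can be sketched as follows; this is an illustrative reduction in the spirit of combined actuarial neural network architectures, with toy data and an arbitrary network size, not the authors' fourteen models.

```python
# A fitted logistic regression acts as a skip connection; a small network learns the residual structure.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

X = np.random.rand(1000, 5).astype("float32")          # toy policyholder features
y = (np.random.rand(1000) < 0.1).astype("float32")     # claim / no-claim indicator

glm = LogisticRegression(max_iter=1000).fit(X, y)
glm_logit = torch.tensor(glm.decision_function(X), dtype=torch.float32).unsqueeze(1)

net = nn.Sequential(nn.Linear(5, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
X_t, y_t = torch.tensor(X), torch.tensor(y).unsqueeze(1)

for _ in range(200):
    opt.zero_grad()
    logit = glm_logit + net(X_t)                       # GLM prediction plus NN correction
    loss = nn.functional.binary_cross_entropy_with_logits(logit, y_t)
    loss.backward()
    opt.step()
```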

30 pages, 3724 KiB  
Review
Defect Detection Methods for Industrial Products Using Deep Learning Techniques: A Review
by Alireza Saberironaghi, Jing Ren and Moustafa El-Gindy
Algorithms 2023, 16(2), 95; https://doi.org/10.3390/a16020095 - 8 Feb 2023
Cited by 145 | Viewed by 35912
Abstract
Over the last few decades, detecting surface defects has attracted significant attention as a challenging task. There are specific classes of problems that can be solved using traditional image processing techniques. However, these techniques struggle with complex textures in backgrounds, noise, and differences [...] Read more.
Over the last few decades, detecting surface defects has attracted significant attention as a challenging task. There are specific classes of problems that can be solved using traditional image processing techniques. However, these techniques struggle with complex textures in backgrounds, noise, and differences in lighting conditions. As a solution to this problem, deep learning has recently emerged, motivated by two main factors: accessibility to computing power and the rapid digitization of society, which enables the creation of large databases of labeled samples. This review paper aims to briefly summarize and analyze the current state of research on detecting defects using machine learning methods. First, deep learning-based detection of surface defects on industrial products is discussed from three perspectives: supervised, semi-supervised, and unsupervised. Secondly, the current research status of deep learning defect detection methods for X-ray images is discussed. Finally, we summarize the most common challenges and their potential solutions in surface defect detection, such as unbalanced sample identification, limited sample size, and real-time processing. Full article
(This article belongs to the Special Issue Deep Learning Architecture and Applications)
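
Of the three perspectives the review covers, the unsupervised one is easiest to sketch: a convolutional autoencoder trained only on defect-free images flags defects through large reconstruction error. The code below is a generic illustration, with the image size and the threshold left as assumptions.

```python
# Unsupervised defect detection sketch: high reconstruction error marks a candidate surface defect.
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),      # 64x64 -> 32x32
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),     # 32x32 -> 16x16
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
)

def anomaly_score(batch):
    """Per-image mean reconstruction error; threshold it to flag defects."""
    recon = autoencoder(batch)
    return ((batch - recon) ** 2).mean(dim=(1, 2, 3))

scores = anomaly_score(torch.rand(8, 1, 64, 64))   # higher score -> more likely defective
```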

14 pages, 2116 KiB  
Article
Effective Heart Disease Prediction Using Machine Learning Techniques
by Chintan M. Bhatt, Parth Patel, Tarang Ghetia and Pier Luigi Mazzeo
Algorithms 2023, 16(2), 88; https://doi.org/10.3390/a16020088 - 6 Feb 2023
Cited by 317 | Viewed by 84463
Abstract
The diagnosis and prognosis of cardiovascular disease are crucial medical tasks to ensure correct classification, which helps cardiologists provide proper treatment to the patient. Machine learning applications in the medical niche have increased as they can recognize patterns from data. Using machine learning [...] Read more.
The diagnosis and prognosis of cardiovascular disease are crucial medical tasks to ensure correct classification, which helps cardiologists provide proper treatment to the patient. Machine learning applications in the medical niche have increased as they can recognize patterns from data. Using machine learning to classify cardiovascular disease occurrence can help diagnosticians reduce misdiagnosis. This research develops a model that can correctly predict cardiovascular diseases to reduce the fatalities caused by cardiovascular diseases. This paper proposes a method of k-modes clustering with Huang initialization that can improve classification accuracy. Models such as random forest (RF), decision tree classifier (DT), multilayer perceptron (MP), and XGBoost (XGB) are used. GridSearchCV was used to tune the hyperparameters of the applied models to optimize the results. The proposed model is applied to a real-world dataset of 70,000 instances from Kaggle. Models were trained on data split 80:20 and achieved the following accuracy: decision tree: 86.37% (with cross-validation) and 86.53% (without cross-validation), XGBoost: 86.87% (with cross-validation) and 87.02% (without cross-validation), random forest: 87.05% (with cross-validation) and 86.92% (without cross-validation), multilayer perceptron: 87.28% (with cross-validation) and 86.94% (without cross-validation). The proposed models have AUC (area under the curve) values: decision tree: 0.94, XGBoost: 0.95, random forest: 0.95, multilayer perceptron: 0.95. The conclusion drawn from this research is that the multilayer perceptron with cross-validation outperformed all other algorithms in terms of accuracy, achieving the highest accuracy of 87.28%. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms for Healthcare)
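
A compressed sketch of that pipeline is shown below for one of the models; it assumes the third-party kmodes package for Huang-initialized k-modes and uses toy data in place of the 70,000-record Kaggle set, so the numbers it prints are not the paper's results.

```python
# Cluster labels from k-modes (Huang init) are appended as a feature; a random forest is tuned with GridSearchCV.
import numpy as np
import pandas as pd
from kmodes.kmodes import KModes
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

df = pd.DataFrame({                           # toy stand-in for the cardiovascular dataset
    "smoke": np.random.randint(0, 2, 500),
    "cholesterol": np.random.randint(1, 4, 500),
    "active": np.random.randint(0, 2, 500),
})
y = np.random.randint(0, 2, 500)

km = KModes(n_clusters=2, init="Huang", n_init=5, random_state=0)
df["cluster"] = km.fit_predict(df)

X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.2, random_state=0)
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    {"n_estimators": [100, 300], "max_depth": [None, 10]},
                    cv=5, scoring="accuracy")
grid.fit(X_train, y_train)
print(grid.best_params_, grid.score(X_test, y_test))
```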

42 pages, 655 KiB  
Review
Inverse Reinforcement Learning as the Algorithmic Basis for Theory of Mind: Current Methods and Open Problems
by Jaime Ruiz-Serra and Michael S. Harré
Algorithms 2023, 16(2), 68; https://doi.org/10.3390/a16020068 - 19 Jan 2023
Cited by 9 | Viewed by 8270
Abstract
Theory of mind (ToM) is the psychological construct by which we model another’s internal mental states. Through ToM, we adjust our own behaviour to best suit a social context, and therefore it is essential to our everyday interactions with others. In adopting an [...] Read more.
Theory of mind (ToM) is the psychological construct by which we model another’s internal mental states. Through ToM, we adjust our own behaviour to best suit a social context, and therefore it is essential to our everyday interactions with others. In adopting an algorithmic (rather than a psychological or neurological) approach to ToM, we gain insights into cognition that will aid us in building more accurate models for the cognitive and behavioural sciences, as well as enable artificial agents to be more proficient in social interactions as they become more embedded in our everyday lives. Inverse reinforcement learning (IRL) is a class of machine learning methods by which to infer the preferences (rewards as a function of state) of a decision maker from its behaviour (trajectories in a Markov decision process). IRL can provide a computational approach for ToM, as recently outlined by Jara-Ettinger, but this will require a better understanding of the relationship between ToM concepts and existing IRL methods at the algorithmic level. Here, we provide a review of prominent IRL algorithms and their formal descriptions, and discuss the applicability of IRL concepts as the algorithmic basis of a ToM in AI. Full article
(This article belongs to the Special Issue Advancements in Reinforcement Learning Algorithms)
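
The core inversion, recovering rewards from observed behaviour, can be shown in a deliberately tiny setting. The sketch below reduces the MDP to a single state with a Boltzmann-rational decision maker; it is far simpler than the surveyed algorithms and is meant only to make the "preferences from behaviour" direction concrete.

```python
# Toy IRL: recover action rewards by maximum likelihood from observed choice frequencies
# under a softmax (Boltzmann) policy with inverse temperature 1.
import numpy as np
from scipy.optimize import minimize

counts = np.array([70, 20, 10])               # observed choices among three actions

def neg_log_likelihood(r):
    log_policy = r - np.log(np.sum(np.exp(r)))
    return -np.sum(counts * log_policy)

r_hat = minimize(neg_log_likelihood, x0=np.zeros(3)).x
print(r_hat - r_hat.max())                    # rewards are recovered only up to an additive constant
```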

15 pages, 1356 KiB  
Article
A Discrete Partially Observable Markov Decision Process Model for the Maintenance Optimization of Oil and Gas Pipelines
by Ezra Wari, Weihang Zhu and Gino Lim
Algorithms 2023, 16(1), 54; https://doi.org/10.3390/a16010054 - 12 Jan 2023
Cited by 8 | Viewed by 3587
Abstract
Corrosion is one of the major causes of failure in pipelines for transporting oil and gas products. To mitigate the impact of this problem, organizations perform different maintenance operations, including detecting corrosion, determining corrosion growth, and implementing optimal maintenance policies. This paper proposes [...] Read more.
Corrosion is one of the major causes of failure in pipelines for transporting oil and gas products. To mitigate the impact of this problem, organizations perform different maintenance operations, including detecting corrosion, determining corrosion growth, and implementing optimal maintenance policies. This paper proposes a partially observable Markov decision process (POMDP) model for optimizing maintenance based on the corrosion progress, which is monitored by an inline inspection to assess the extent of pipeline corrosion. The states are defined by dividing the deterioration range equally, whereas the actions are determined based on the specific states and pipeline attributes. Monte Carlo simulation and a pure birth Markov process method are used for computing the transition matrix. The cost of maintenance and failure are considered when calculating the rewards. The inline inspection methods and tool measurement errors may cause reading distortion, which is used to formulate the observations and the observation function. The model is demonstrated with two numerical examples constructed based on problems and parameters in the literature. The result shows that the proposed model performs well with the added advantage of integrating measurement errors and recommending actions for multiple-state situations. Overall, this discrete model can serve the maintenance decision-making process by better representing the stochastic features. Full article
(This article belongs to the Special Issue Algorithms in Monte Carlo Methods)
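
A minimal numerical illustration of the model's ingredients (states from discretized corrosion depth, noisy inline-inspection readings, and maintenance costs) is given below; all matrices and costs are invented for the example, and a crude one-step lookahead stands in for a full POMDP solver.

```python
# Three-state corrosion POMDP sketch: Bayesian belief update from a noisy reading, then a
# one-step lookahead choice between "do nothing" and "repair". Numbers are made up.
import numpy as np

# states: 0 = minor, 1 = moderate, 2 = severe corrosion
T = {                                                 # P(next state | current state, action)
    "nothing": np.array([[0.8, 0.2, 0.0],
                         [0.0, 0.7, 0.3],
                         [0.0, 0.0, 1.0]]),
    "repair":  np.array([[1.0, 0.0, 0.0],
                         [0.9, 0.1, 0.0],
                         [0.8, 0.2, 0.0]]),
}
O = np.array([[0.85, 0.15, 0.00],                     # P(reading | true state): tool measurement error
              [0.10, 0.80, 0.10],
              [0.00, 0.15, 0.85]])
R = {"nothing": np.array([0.0, -1.0, -20.0]),         # failure-risk cost per state
     "repair":  np.array([-5.0, -5.0, -5.0])}         # fixed repair cost

def belief_update(belief, action, reading):
    predicted = belief @ T[action]                    # prediction step
    posterior = O[:, reading] * predicted             # correction with the noisy reading
    return posterior / posterior.sum()

belief = np.array([0.6, 0.3, 0.1])
belief = belief_update(belief, "nothing", reading=1)
best = max(R, key=lambda a: float(belief @ (R[a] + T[a] @ R["nothing"])))   # crude 1-step lookahead
print(belief, best)
```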

13 pages, 378 KiB  
Article
Solving the Parallel Drone Scheduling Traveling Salesman Problem via Constraint Programming
by Roberto Montemanni and Mauro Dell’Amico
Algorithms 2023, 16(1), 40; https://doi.org/10.3390/a16010040 - 8 Jan 2023
Cited by 22 | Viewed by 4165
Abstract
Drones are currently seen as a viable way of improving the distribution of parcels in urban and rural environments, while working in coordination with traditional vehicles, such as trucks. In this paper, we consider the parallel drone scheduling traveling salesman problem, where a [...] Read more.
Drones are currently seen as a viable way of improving the distribution of parcels in urban and rural environments, while working in coordination with traditional vehicles, such as trucks. In this paper, we consider the parallel drone scheduling traveling salesman problem, where a set of customers requiring a delivery is split between a truck and a fleet of drones, with the aim of minimizing the total time required to serve all the customers. We propose a constraint programming model for the problem, discuss its implementation and present the results of an experimental program on the instances previously cited in the literature to validate exact and heuristic algorithms. We were able to decrease the cost (the time required to serve customers) for some of the instances and, for the first time, to provide a demonstrated optimal solution for all the instances considered. These results show that constraint programming can be a very effective tool for attacking optimization problems with traveling salesman components, such as the one discussed. Full article
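
A heavily simplified constraint-programming sketch is shown below using Google OR-Tools CP-SAT; unlike the paper's model it ignores the ordering of the truck tour and treats the truck time as a plain sum, so it only illustrates the assign-customers-and-minimize-makespan structure. The service and flight times are invented.

```python
# Simplified parallel drone scheduling: assign each customer to the truck or to one drone,
# then minimize the completion time of the slowest vehicle. Requires Google OR-Tools.
from ortools.sat.python import cp_model

truck_time = [10, 12, 8, 15, 9]     # assumed per-customer times on the truck tour
drone_time = [25, 30, 0, 40, 22]    # assumed drone round-trip times (0 = not drone-eligible)
n_drones = 2
n = len(truck_time)

m = cp_model.CpModel()
on_truck = [m.NewBoolVar(f"truck_{i}") for i in range(n)]
on_drone = [[m.NewBoolVar(f"d{k}_{i}") for i in range(n)] for k in range(n_drones)]

for i in range(n):
    m.AddExactlyOne([on_truck[i]] + [on_drone[k][i] for k in range(n_drones)])
    if drone_time[i] == 0:                         # ineligible customers must ride the truck
        m.Add(on_truck[i] == 1)

makespan = m.NewIntVar(0, sum(truck_time) + sum(drone_time), "makespan")
m.Add(sum(truck_time[i] * on_truck[i] for i in range(n)) <= makespan)
for k in range(n_drones):
    m.Add(sum(drone_time[i] * on_drone[k][i] for i in range(n)) <= makespan)
m.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(m) == cp_model.OPTIMAL:
    print("makespan:", solver.Value(makespan))
```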

122 pages, 1505 KiB  
Systematic Review
Sybil in the Haystack: A Comprehensive Review of Blockchain Consensus Mechanisms in Search of Strong Sybil Attack Resistance
by Moritz Platt and Peter McBurney
Algorithms 2023, 16(1), 34; https://doi.org/10.3390/a16010034 - 6 Jan 2023
Cited by 51 | Viewed by 19798
Abstract
Consensus algorithms are applied in the context of distributed computer systems to improve their fault tolerance. The explosive development of distributed ledger technology following the proposal of ‘Bitcoin’ led to a sharp increase in research activity in this area. Specifically, public and permissionless [...] Read more.
Consensus algorithms are applied in the context of distributed computer systems to improve their fault tolerance. The explosive development of distributed ledger technology following the proposal of ‘Bitcoin’ led to a sharp increase in research activity in this area. Specifically, public and permissionless networks require robust leader selection strategies resistant to Sybil attacks in which malicious attackers present bogus identities to induce byzantine faults. Our goal is to analyse the entire breadth of works in this area systematically, thereby uncovering trends and research directions regarding Sybil attack resistance in today’s blockchain systems to benefit the designs of the future. Through a systematic literature review, we condense an immense set of research records (N = 21,799) to a relevant subset (N = 483). We categorise these mechanisms by their Sybil attack resistance characteristics, leader selection methodology, and incentive scheme. Mechanisms with strong Sybil attack resistance commonly adopt the principles underlying ‘Proof-of-Work’ or ‘Proof-of-Stake’ while mechanisms with limited resistance often use reputation systems or physical world linking. We find that only a few fundamental paradigms exist that can resist Sybil attacks in a permissionless setting but discover numerous innovative mechanisms that can deliver weaker protection in system scenarios with smaller attack surfaces. Full article
(This article belongs to the Special Issue Blockchain Consensus Algorithms)

18 pages, 2875 KiB  
Article
Image-to-Image Translation-Based Data Augmentation for Improving Crop/Weed Classification Models for Precision Agriculture Applications
by L. G. Divyanth, D. S. Guru, Peeyush Soni, Rajendra Machavaram, Mohammad Nadimi and Jitendra Paliwal
Algorithms 2022, 15(11), 401; https://doi.org/10.3390/a15110401 - 30 Oct 2022
Cited by 39 | Viewed by 9345
Abstract
Applications of deep-learning models in machine visions for crop/weed identification have remarkably upgraded the authenticity of precise weed management. However, compelling data are required to obtain the desired result from this highly data-driven operation. This study aims to curtail the effort needed to [...] Read more.
Applications of deep-learning models in machine visions for crop/weed identification have remarkably upgraded the authenticity of precise weed management. However, compelling data are required to obtain the desired result from this highly data-driven operation. This study aims to curtail the effort needed to prepare very large image datasets by creating artificial images of maize (Zea mays) and four common weeds (i.e., Charlock, Fat Hen, Shepherd’s Purse, and small-flowered Cranesbill) through conditional Generative Adversarial Networks (cGANs). The fidelity of these synthetic images was tested through t-distributed stochastic neighbor embedding (t-SNE) visualization plots of real and artificial images of each class. The reliability of this method as a data augmentation technique was validated through classification results based on the transfer learning of a pre-defined convolutional neural network (CNN) architecture—the AlexNet; the feature extraction method came from the deepest pooling layer of the same network. Machine learning models based on a support vector machine (SVM) and linear discriminant analysis (LDA) were trained using these feature vectors. The F1 scores of the transfer learning model increased from 0.97 to 0.99, when additionally supported by an artificial dataset. Similarly, in the case of the feature extraction technique, the classification F1-scores increased from 0.93 to 0.96 for SVM and from 0.94 to 0.96 for the LDA model. The results show that image augmentation using generative adversarial networks (GANs) can improve the performance of crop/weed classification models with the added advantage of reduced time and manpower. Furthermore, it has demonstrated that generative networks could be a great tool for deep-learning applications in agriculture. Full article
(This article belongs to the Special Issue Deep Learning Architecture and Applications)
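
The downstream classification step can be sketched as follows (the cGAN that generates the synthetic images is omitted): deep features from a pretrained AlexNet pooling stage feed an SVM. The sketch assumes torchvision 0.13 or newer and uses random tensors as stand-ins for the real and artificial field images.

```python
# Deep-feature extraction from a pretrained AlexNet followed by an SVM crop/weed classifier.
import torch
from torchvision import models
from sklearn.svm import SVC

alexnet = models.alexnet(weights="DEFAULT").eval()
extractor = torch.nn.Sequential(alexnet.features, alexnet.avgpool, torch.nn.Flatten())

def deep_features(batch):                       # batch: (N, 3, 224, 224), real plus synthetic images
    with torch.no_grad():
        return extractor(batch).numpy()         # 256 * 6 * 6 = 9216-dimensional feature vectors

X = deep_features(torch.rand(20, 3, 224, 224))
y = [0] * 10 + [1] * 10                         # toy labels: crop vs. weed
clf = SVC(kernel="linear").fit(X, y)
```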

24 pages, 2504 KiB  
Review
A Survey on Fault Diagnosis of Rolling Bearings
by Bo Peng, Ying Bi, Bing Xue, Mengjie Zhang and Shuting Wan
Algorithms 2022, 15(10), 347; https://doi.org/10.3390/a15100347 - 26 Sep 2022
Cited by 73 | Viewed by 8125
Abstract
The failure of a rolling bearing may cause the shutdown of mechanical equipment and even induce catastrophic accidents, resulting in tremendous economic losses and a severely negative impact on society. Fault diagnosis of rolling bearings becomes an important topic with much attention from [...] Read more.
The failure of a rolling bearing may cause the shutdown of mechanical equipment and even induce catastrophic accidents, resulting in tremendous economic losses and a severely negative impact on society. Fault diagnosis of rolling bearings becomes an important topic with much attention from researchers and industrial pioneers. There are an increasing number of publications on this topic. However, there is a lack of a comprehensive survey of existing works from the perspectives of fault detection and fault type recognition in rolling bearings using vibration signals. Therefore, this paper reviews recent fault detection and fault type recognition methods using vibration signals. First, it provides an overview of fault diagnosis of rolling bearings and typical fault types. Then, existing fault diagnosis methods are categorized into fault detection methods and fault type recognition methods, which are separately revised and discussed. Finally, a summary of existing datasets, limitations/challenges of existing methods, and future directions are presented to provide more guidance for researchers who are interested in this field. Overall, this survey paper conducts a review and analysis of the methods used to diagnose rolling bearing faults and provide comprehensive guidance for researchers in this field. Full article
(This article belongs to the Special Issue Artificial Intelligence for Fault Detection and Diagnosis)

19 pages, 2574 KiB  
Article
GA-Reinforced Deep Neural Network for Net Electric Load Forecasting in Microgrids with Renewable Energy Resources for Scheduling Battery Energy Storage Systems
by Chaoran Zheng, Mohsen Eskandari, Ming Li and Zeyue Sun
Algorithms 2022, 15(10), 338; https://doi.org/10.3390/a15100338 - 21 Sep 2022
Cited by 24 | Viewed by 4551
Abstract
The large-scale integration of wind power and PV cells into electric grids alleviates the problem of an energy crisis. However, this is also responsible for technical and management problems in the power grid, such as power fluctuation, scheduling difficulties, and reliability reduction. The [...] Read more.
The large-scale integration of wind power and PV cells into electric grids alleviates the problem of an energy crisis. However, this is also responsible for technical and management problems in the power grid, such as power fluctuation, scheduling difficulties, and reliability reduction. The microgrid concept has been proposed to locally control and manage a cluster of local distributed energy resources (DERs) and loads. If the net load power can be accurately predicted, it is possible to schedule/optimize the operation of battery energy storage systems (BESSs) through economic dispatch to cover intermittent renewables. However, the load curve of the microgrid is highly affected by various external factors, resulting in large fluctuations, which makes the prediction problematic. This paper predicts the net electric load of the microgrid using a deep neural network to realize a reliable power supply as well as reduce the cost of power generation. Considering that the backpropagation (BP) neural network has a good approximation effect as well as a strong adaptation ability, the load prediction model of the BP deep neural network is established. However, the BP neural network has some defects: its predictions are not precise enough, and it easily falls into locally optimal solutions. Hence, a genetic algorithm (GA)-reinforced deep neural network is introduced. By optimizing the weights and thresholds of the BP network, the deficiencies of the BP neural network are mitigated and its prediction accuracy is improved. The results reveal that the mean square error (MSE) of the GA–BP neural network prediction is 2.0221, which is significantly smaller than the 30.3493 of the BP neural network prediction, corresponding to an error reduction of 93.3%. The error reductions of the root mean square error (RMSE) and mean absolute error (MAE) are 74.18% and 51.2%, respectively. Full article
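
The GA-plus-network combination can be illustrated with a toy example. In the sketch below a small genetic algorithm searches the weights of a one-hidden-layer network directly; in the paper the GA instead optimizes the initial weights and thresholds that backpropagation then refines, so this is a simplification with invented data.

```python
# Toy GA over the weights of a one-hidden-layer network for net-load forecasting.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 4))                        # toy lagged-load / weather features
y = X @ np.array([0.5, -0.2, 0.3, 0.1]) + 0.05 * rng.standard_normal(200)

N_HIDDEN = 8
N_W = 4 * N_HIDDEN + N_HIDDEN                   # input-to-hidden plus hidden-to-output weights

def predict(w, X):
    W1 = w[:4 * N_HIDDEN].reshape(4, N_HIDDEN)
    w2 = w[4 * N_HIDDEN:]
    return np.tanh(X @ W1) @ w2

def fitness(w):
    return -np.mean((predict(w, X) - y) ** 2)   # negative MSE

pop = rng.standard_normal((40, N_W))
for _ in range(100):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-20:]]                      # truncation selection
    mates = parents[rng.integers(0, 20, size=(40, 2))]
    mask = rng.random((40, N_W)) < 0.5                           # uniform crossover
    pop = np.where(mask, mates[:, 0], mates[:, 1])
    pop += 0.1 * rng.standard_normal(pop.shape) * (rng.random(pop.shape) < 0.1)  # sparse mutation

best = pop[np.argmax([fitness(w) for w in pop])]
print("MSE:", -fitness(best))
```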

23 pages, 3231 KiB  
Article
Social Media Hate Speech Detection Using Explainable Artificial Intelligence (XAI)
by Harshkumar Mehta and Kalpdrum Passi
Algorithms 2022, 15(8), 291; https://doi.org/10.3390/a15080291 - 17 Aug 2022
Cited by 64 | Viewed by 12100
Abstract
Explainable artificial intelligence (XAI) characteristics have flexible and multifaceted potential in hate speech detection by deep learning models. Interpreting and explaining decisions made by complex artificial intelligence (AI) models to understand the decision-making process of these models were the aims of this research. [...] Read more.
Explainable artificial intelligence (XAI) characteristics have flexible and multifaceted potential in hate speech detection by deep learning models. Interpreting and explaining decisions made by complex artificial intelligence (AI) models to understand the decision-making process of these models were the aims of this research. As a part of this research study, two datasets were taken to demonstrate hate speech detection using XAI. Data preprocessing was performed to clean data of any inconsistencies, clean the text of the tweets, tokenize and lemmatize the text, etc. Categorical variables were also simplified in order to generate a clean dataset for training purposes. Exploratory data analysis was performed on the datasets to uncover various patterns and insights. Various pre-existing models were applied to the Google Jigsaw dataset such as decision trees, k-nearest neighbors, multinomial naïve Bayes, random forest, logistic regression, and long short-term memory (LSTM), among which LSTM achieved an accuracy of 97.6%. Explainable methods such as LIME (local interpretable model-agnostic explanations) were applied to the HateXplain dataset. Variants of the BERT (bidirectional encoder representations from transformers) model such as BERT + ANN (artificial neural network) with an accuracy of 93.55% and BERT + MLP (multilayer perceptron) with an accuracy of 93.67% were created to achieve a good performance in terms of explainability using the ERASER (evaluating rationales and simple English reasoning) benchmark. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
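
The LIME step can be reproduced in miniature with much simpler models than the paper's LSTM and BERT variants. The sketch below explains a TF-IDF plus logistic-regression classifier; it requires the third-party lime package, and the two training sentences are toy placeholders rather than the Google Jigsaw or HateXplain data.

```python
# LIME explains which tokens drive a simple text classifier's "hateful" prediction.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from lime.lime_text import LimeTextExplainer

texts = ["you are wonderful", "i hate you and your group"] * 20     # toy corpus
labels = [0, 1] * 20                                                # 0 = normal, 1 = hateful

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)).fit(texts, labels)

explainer = LimeTextExplainer(class_names=["normal", "hateful"])
explanation = explainer.explain_instance("i hate this group",
                                         clf.predict_proba, num_features=4)
print(explanation.as_list())      # tokens with signed contributions to the prediction
```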

23 pages, 14110 KiB  
Article
Design of Multi-Objective-Based Artificial Intelligence Controller for Wind/Battery-Connected Shunt Active Power Filter
by Srilakshmi Koganti, Krishna Jyothi Koganti and Surender Reddy Salkuti
Algorithms 2022, 15(8), 256; https://doi.org/10.3390/a15080256 - 25 Jul 2022
Cited by 36 | Viewed by 6126
Abstract
Nowadays, the integration of renewable energy sources such as solar, wind, etc. into the grid is recommended to reduce losses and meet demands. The application of power electronics devices (PED) to control non-linear, unbalanced loads leads to power quality (PQ) issues. This work [...] Read more.
Nowadays, the integration of renewable energy sources such as solar, wind, etc. into the grid is recommended to reduce losses and meet demands. The application of power electronics devices (PED) to control non-linear, unbalanced loads leads to power quality (PQ) issues. This work presents a hybrid controller for the self-tuning filter (STF)-based shunt active power filter (SHAPF), integrated with a wind power generation system (WPGS) and a battery storage system (BS). The SHAPF comprises a three-phase voltage source inverter, coupled via a DC-Link. The proposed neuro-fuzzy inference hybrid controller (NFIHC) utilizes both the properties of Fuzzy Logic (FL) and artificial neural network (ANN) controllers and maintains a constant DC-Link voltage. The phase synchronization was generated by a self-tuning filter (STF) for the effective working of the SHAPF during unbalanced and distorted supply voltages. In addition, the STF also performs the roles of low-pass filters (LPFs) and high-pass filters (HPFs) for splitting the fundamental component (FC) and harmonic component (HC) of the current. The control of the SHAPF works on d-q theory with the advantage of eliminating LPFs and the phase-locked loop (PLL). The prime objective of the proposed work is to regulate the DC-Link voltage during wind uncertainties and load variations, and minimize the total harmonic distortion (THD) in the current waveforms, thereby improving the power factor (PF). Test studies with various combinations of balanced/unbalanced loads, wind velocity variations, and supply voltage were used to evaluate the suggested method’s superior performance. In addition, a comparative analysis was carried out with existing controllers such as the conventional proportional-integral (PI), ANN, and FL controllers. Full article
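
The d-q idea that underpins the control scheme can be illustrated in isolation: in the rotating frame the fundamental of a balanced three-phase current becomes a DC quantity, while harmonics appear as ripple that filtering can separate. The sketch below uses an amplitude-invariant Park transform with invented waveforms; it is not the controller itself.

```python
# Park (abc -> d-q) transform: the fundamental maps to DC, harmonics become ripple.
import numpy as np

def abc_to_dq(ia, ib, ic, theta):
    """Amplitude-invariant Park transform at electrical angle theta."""
    d = (2 / 3) * (ia * np.cos(theta) + ib * np.cos(theta - 2 * np.pi / 3)
                   + ic * np.cos(theta + 2 * np.pi / 3))
    q = -(2 / 3) * (ia * np.sin(theta) + ib * np.sin(theta - 2 * np.pi / 3)
                    + ic * np.sin(theta + 2 * np.pi / 3))
    return d, q

t = np.linspace(0, 0.04, 1000)                  # two cycles at 50 Hz
theta = 2 * np.pi * 50 * t
ia = np.sin(theta) + 0.2 * np.sin(5 * theta)    # fundamental plus a 5th-harmonic component
ib = np.sin(theta - 2 * np.pi / 3) + 0.2 * np.sin(5 * (theta - 2 * np.pi / 3))
ic = np.sin(theta + 2 * np.pi / 3) + 0.2 * np.sin(5 * (theta + 2 * np.pi / 3))
d, q = abc_to_dq(ia, ib, ic, theta)
print(np.mean(d), np.mean(q))                   # the DC part is the fundamental in the d-q frame
```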

28 pages, 1344 KiB  
Review
Overview of Distributed Machine Learning Techniques for 6G Networks
by Eugenio Muscinelli, Swapnil Sadashiv Shinde and Daniele Tarchi
Algorithms 2022, 15(6), 210; https://doi.org/10.3390/a15060210 - 15 Jun 2022
Cited by 32 | Viewed by 6558
Abstract
The main goal of this paper is to survey the influential research of distributed learning technologies playing a key role in the 6G world. Upcoming 6G technology is expected to create an intelligent, highly scalable, dynamic, and programmable wireless communication network able to [...] Read more.
The main goal of this paper is to survey the influential research of distributed learning technologies playing a key role in the 6G world. Upcoming 6G technology is expected to create an intelligent, highly scalable, dynamic, and programmable wireless communication network able to serve many heterogeneous wireless devices. Various machine learning (ML) techniques are expected to be deployed over the intelligent 6G wireless network to provide solutions to highly complex networking problems. In order to do this, various 6G nodes and devices are expected to generate vast amounts of data through external sensors, and data analysis will be needed. With such massive and distributed data, and various innovations in computing hardware, distributed ML techniques are expected to play an important role in 6G. Though they have several advantages over the centralized ML techniques, implementing the distributed ML algorithms over resource-constrained wireless environments can be challenging. Therefore, it is important to select a proper ML algorithm based upon the characteristics of the wireless environment and the resource requirements of the learning process. In this work, we survey the recently introduced distributed ML techniques with their characteristics and possible benefits by focusing our attention on the most influential papers in the area. We finally give our perspective on the main challenges and advantages for telecommunication networks, along with the main scenarios that could eventuate. Full article
(This article belongs to the Special Issue Algorithms for Communication Networks)
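
One of the surveyed techniques, federated averaging, is easy to sketch. The snippet below uses plain numpy vectors as stand-ins for models trained locally on edge devices; the local update is a placeholder, and the data-size weighting is the only part that is specific to FedAvg.

```python
# Minimal federated averaging: the server aggregates local models weighted by local data size.
import numpy as np

rng = np.random.default_rng(1)

def local_update(global_w, node_data_size):
    """Placeholder for one local training round on an edge device."""
    return global_w - 0.1 * rng.standard_normal(global_w.shape), node_data_size

global_w = np.zeros(10)
for round_ in range(5):
    updates = [local_update(global_w, n) for n in (100, 300, 50)]   # three devices
    total = sum(n for _, n in updates)
    # FedAvg: weight each local model by its share of the total training data
    global_w = sum(w * (n / total) for w, n in updates)
```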

22 pages, 22712 KiB  
Article
Improved JPS Path Optimization for Mobile Robots Based on Angle-Propagation Theta* Algorithm
by Yuan Luo, Jiakai Lu, Qiong Qin and Yanyu Liu
Algorithms 2022, 15(6), 198; https://doi.org/10.3390/a15060198 - 8 Jun 2022
Cited by 16 | Viewed by 4647
Abstract
The Jump Point Search (JPS) algorithm ignores the possibility of any-angle walking, so the paths found by the JPS algorithm under the discrete grid map still deviate from the true shortest paths. To address the above problems, this paper improves the path [...] Read more.
The Jump Point Search (JPS) algorithm ignores the possibility of any-angle walking, so the paths found by the JPS algorithm under the discrete grid map still deviate from the true shortest paths. To address the above problems, this paper improves the path optimization strategy of the JPS algorithm by combining the viewable angle of the Angle-Propagation Theta* (AP Theta*) algorithm, and it proposes the AP-JPS algorithm based on an any-angle pathfinding strategy. First, based on the JPS algorithm, this paper proposes a vision triangle judgment method to optimize the generated path by selecting the successor search point. Secondly, the idea of the node viewable angle in the AP Theta* algorithm is introduced to modify the line of sight (LOS) reachability detection between two nodes. Finally, the paths are optimized using a seventh-order polynomial based on minimum snap, so that the AP-JPS algorithm generates paths that better match the actual robot motion. The feasibility and effectiveness of this method are proved by simulation experiments and comparison with other algorithms. The results show that the path planning algorithm in this paper obtains paths with good smoothness in environments with different obstacle densities and different map sizes. In the algorithm comparison experiments, it can be seen that the AP-JPS algorithm reduces the path length by 1.61–4.68% and the total turning angle of the path by 58.71–84.67% compared with the JPS algorithm. The AP-JPS algorithm reduces the computing time by 98.59–99.22% compared with the AP Theta* algorithm. Full article
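
The line-of-sight (LOS) test at the heart of any-angle post-smoothing can be sketched on its own. The function below walks the Bresenham line between two grid cells and reports whether the segment is obstacle-free; it is a generic illustration rather than the authors' implementation, and the map is a toy example.

```python
# Grid line-of-sight test: two waypoints can be connected directly if no cell on the
# Bresenham line between them is an obstacle.
import numpy as np

def line_of_sight(grid, a, b):
    """grid[y, x] == 1 marks an obstacle; a and b are (x, y) integer cells."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    while True:
        if grid[y0, x0]:
            return False
        if (x0, y0) == (x1, y1):
            return True
        e2 = 2 * err
        if e2 > -dy:
            err -= dy; x0 += sx
        if e2 < dx:
            err += dx; y0 += sy

grid = np.zeros((5, 5), dtype=int)
grid[2, 2] = 1
print(line_of_sight(grid, (0, 0), (4, 4)))   # False: the diagonal passes through the obstacle
print(line_of_sight(grid, (0, 4), (4, 4)))   # True: row y = 4 contains no obstacles
```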

22 pages, 848 KiB  
Review
A Survey on Network Optimization Techniques for Blockchain Systems
by Robert Antwi, James Dzisi Gadze, Eric Tutu Tchao, Axel Sikora, Henry Nunoo-Mensah, Andrew Selasi Agbemenu, Kwame Opunie-Boachie Obour Agyekum, Justice Owusu Agyemang, Dominik Welte and Eliel Keelson
Algorithms 2022, 15(6), 193; https://doi.org/10.3390/a15060193 - 4 Jun 2022
Cited by 24 | Viewed by 7883
Abstract
The growth of the Internet of Things (IoT) calls for secure solutions for industrial applications. The security of IoT can be potentially improved by blockchain. However, blockchain technology suffers from scalability issues, which hinder its integration with IoT. Solutions to blockchain’s scalability issues, such as [...] Read more.
The growth of the Internet of Things (IoT) calls for secure solutions for industrial applications. The security of IoT can be potentially improved by blockchain. However, blockchain technology suffers from scalability issues, which hinder its integration with IoT. Solutions to blockchain’s scalability issues, such as minimizing the computational complexity of consensus algorithms or blockchain storage requirements, have received attention. However, to realize the full potential of blockchain in IoT, the inefficiencies of its inter-peer communication must also be addressed. For example, blockchain uses a flooding technique to share blocks, resulting in duplicates and inefficient bandwidth usage. Moreover, blockchain peers use a random neighbor selection (RNS) technique to decide on other peers with whom to exchange blockchain data. As a result, the peer-to-peer (P2P) topology formation limits the effective achievable throughput. This paper provides a survey on the state-of-the-art network structures and communication mechanisms used in blockchain and establishes the need for network-based optimization. Additionally, it discusses the blockchain architecture and its layers, categorizes the existing literature into these layers, and provides a survey of state-of-the-art optimization frameworks, analyzing their effectiveness and ability to scale. Finally, this paper presents recommendations for future work. Full article
(This article belongs to the Special Issue Advances in Blockchain Architecture and Consensus)
