Algorithms, Volume 16, Issue 8 (August 2023) – 40 articles

Cover Story: Vision-based human activity recognition is crucial in video analytics. Recent strides in deep learning have improved action detection but at times sacrifice robustness for computational efficiency. This trade-off between efficiency and robustness hampers real-time recognition of complex actions on edge devices. This paper introduces an approach that addresses both concerns and is suitable for edge devices. The proposed DA-R3DCNN leverages saliency-sensitive spatial-temporal features for action identification, efficiently extracting vital human-centric features via unified dual attention layers bolstered by 3D convolutions. The results indicate that the proposed method achieves up to a 74× improvement in frames per second (FPS) over contemporary methods, confirming its real-time applicability for activity recognition.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats, with PDF as the official version. To view a paper in PDF format, click its "PDF Full-text" link and open the file with the free Adobe Reader.
38 pages, 28537 KiB  
Article
Balancing Project Schedule, Cost, and Value under Uncertainty: A Reinforcement Learning Approach
by Claudio Szwarcfiter, Yale T. Herer and Avraham Shtub
Algorithms 2023, 16(8), 395; https://doi.org/10.3390/a16080395 - 21 Aug 2023
Viewed by 1584
Abstract
Industrial projects are plagued by uncertainties, often resulting in both time and cost overruns. This research introduces an innovative approach, employing Reinforcement Learning (RL), to address three distinct project management challenges within a setting of uncertain activity durations. The primary objective is to identify stable baseline schedules. The first challenge encompasses the multimode lean project management problem, wherein the goal is to maximize a project’s value function while adhering to both due date and budget chance constraints. The second challenge involves the chance-constrained critical chain buffer management problem in a multimode context. Here, the aim is to minimize the project delivery date while considering resource constraints and duration-chance constraints. The third challenge revolves around striking a balance between the project value and its net present value (NPV) within a resource-constrained multimode environment. To tackle these three challenges, we devised mathematical programming models, some of which were solved optimally. Additionally, we developed competitive RL-based algorithms and verified their performance against established benchmarks. Our RL algorithms consistently generated schedules that compared favorably with the benchmarks, leading to higher project values and NPVs and shorter schedules while staying within the stakeholders’ risk thresholds. The potential beneficiaries of this research are project managers and decision-makers who can use this approach to generate an efficient frontier of optimal project plans. Full article
(This article belongs to the Special Issue Self-Learning and Self-Adapting Algorithms in Machine Learning)
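As a toy illustration of the reinforcement-learning idea described above (not the authors' model), the sketch below learns mode choices for a three-activity project with stochastic durations; all modes, values, and penalties are invented for the example.

```python
import random

# Toy setting (invented, not the paper's model): three activities, each run in
# one of two modes; mode 0 is fast but costly, mode 1 is slow but cheap.
MODES = {0: (2, 30), 1: (4, 10)}          # mode -> (mean duration, cost)
DUE_DATE, BUDGET, VALUE = 10, 70, 100     # penalties stand in for chance constraints

def run_episode(policy, rng):
    t = c = 0
    for act in range(3):
        mean_d, cost = MODES[policy[act]]
        t += max(1, mean_d + rng.choice([-1, 0, 1]))   # stochastic duration
        c += cost
    # project value minus penalties for missing the due date or budget
    return VALUE - 50 * (t > DUE_DATE) - 50 * (c > BUDGET)

def q_learning(episodes=5000, eps=0.2, alpha=0.1, seed=0):
    rng = random.Random(seed)
    q = {(a, m): 0.0 for a in range(3) for m in (0, 1)}
    for _ in range(episodes):
        # epsilon-greedy mode choice per activity
        policy = {a: (rng.choice([0, 1]) if rng.random() < eps
                      else max((0, 1), key=lambda m: q[(a, m)]))
                  for a in range(3)}
        r = run_episode(policy, rng)
        for a in range(3):   # the terminal reward credits every chosen mode
            q[(a, policy[a])] += alpha * (r - q[(a, policy[a])])
    return {a: max((0, 1), key=lambda m: q[(a, m)]) for a in range(3)}

best = q_learning()
print(best)   # greedy mode per activity
```

The learned policy trades fast, expensive modes against slow, cheap ones so that both toy penalty thresholds are usually respected.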
21 pages, 565 KiB  
Article
Bundle Enrichment Method for Nonsmooth Difference of Convex Programming Problems
by Manlio Gaudioso, Sona Taheri, Adil M. Bagirov and Napsu Karmitsa
Algorithms 2023, 16(8), 394; https://doi.org/10.3390/a16080394 - 21 Aug 2023
Viewed by 934
Abstract
The Bundle Enrichment Method (BEM-DC) is introduced for solving nonsmooth difference of convex (DC) programming problems. The novelty of the method consists of the dynamic management of the bundle. More specifically, a DC model, being the difference of two convex piecewise affine functions, is formulated. The (global) minimization of the model is tackled by solving a set of convex problems whose cardinality depends on the number of linearizations adopted to approximate the second DC component function. The new bundle management policy distributes the information coming from previous iterations to separately model the DC components of the objective function. Such a distribution is driven by the sign of linearization errors. If the displacement suggested by the model minimization provides no sufficient decrease of the objective function, then the temporary enrichment of the cutting plane approximation of just the first DC component function takes place until either the termination of the algorithm is certified or a sufficient decrease is achieved. The convergence of the BEM-DC method is studied, and computational results on a set of academic test problems with nonsmooth DC objective functions are provided. Full article
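The model minimization above rests on a simple identity: subtracting a max of affine pieces equals taking a min over those pieces, so minimizing the DC model reduces to one convex subproblem per linearization of the second component. A hedged one-dimensional sketch, with a grid search standing in for a convex solver and made-up affine pieces:

```python
# Toy 1-D instance (made-up pieces): f(x) = g(x) - h(x) with
#   g(x) = max_i (a_i x + b_i),   h(x) = max_j (c_j x + d_j).
# Because -max_j h_j(x) = min_j (-h_j(x)), the model minimum satisfies
#   min_x f(x) = min_j min_x [ g(x) - (c_j x + d_j) ],
# i.e. one convex subproblem per linearization of the second component.
G = [(1.0, 0.0), (-1.0, 0.0)]      # g(x) = |x|
H = [(0.5, -0.2), (-0.5, -0.2)]    # h(x) = 0.5|x| - 0.2

def g(x):
    return max(a * x + b for a, b in G)

xs = [i / 1000 - 3 for i in range(6001)]   # grid search stands in for a solver

# one "convex subproblem" per affine piece of h, then take the best
per_piece = min(min(g(x) - (c * x + d) for x in xs) for c, d in H)
# direct minimization of the full DC model, for comparison
direct = min(g(x) - max(c * x + d for c, d in H) for x in xs)

print(round(per_piece, 3), round(direct, 3))   # both give 0.2
```

Here f(x) = 0.5|x| + 0.2, whose minimum 0.2 is recovered by both routes, confirming the decomposition.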
23 pages, 3701 KiB  
Article
A Comparative Study of Swarm Intelligence Metaheuristics in UKF-Based Neural Training Applied to the Identification and Control of Robotic Manipulator
by Juan F. Guerra, Ramon Garcia-Hernandez, Miguel A. Llama and Victor Santibañez
Algorithms 2023, 16(8), 393; https://doi.org/10.3390/a16080393 - 21 Aug 2023
Cited by 2 | Viewed by 1284
Abstract
This work presents a comprehensive comparative analysis of four prominent swarm intelligence (SI) optimization algorithms: Ant Lion Optimizer (ALO), Bat Algorithm (BA), Grey Wolf Optimizer (GWO), and Moth Flame Optimization (MFO). When compared under the same conditions with these SI algorithms, Particle Swarm Optimization (PSO) stands out. First, the Unscented Kalman Filter (UKF) parameters to be optimized are selected; then, each SI optimization algorithm is executed in an off-line simulation. Once the UKF initialization parameters P0, Q0, and R0 are obtained, they are applied in real time in the decentralized neural block control (DNBC) scheme for the trajectory tracking task of a 2-DOF robot manipulator. Finally, the results for each algorithm are compared according to the performance evaluation criteria, along with CPU cost. Full article
(This article belongs to the Special Issue Bio-Inspired Algorithms)
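Since PSO serves as the reference algorithm in the comparison above, a minimal PSO sketch may be useful. The three-dimensional particle stands in for the UKF parameters (P0, Q0, R0), and the cost function is a made-up surrogate, not the authors' off-line tracking-error simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(theta):
    # Hypothetical stand-in for the off-line tracking-error cost;
    # the "true" parameter vector here is invented for illustration.
    return float(np.sum((theta - np.array([0.1, 0.01, 0.5])) ** 2))

n, dim, iters = 20, 3, 100
x = rng.uniform(0, 1, (n, dim))            # particle positions in [0, 1]^3
v = np.zeros((n, dim))                     # particle velocities
pbest, pcost = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[pcost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # standard inertia + cognitive + social velocity update
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 1)
    c = np.array([cost(p) for p in x])
    improved = c < pcost
    pbest[improved], pcost[improved] = x[improved], c[improved]
    gbest = pbest[pcost.argmin()].copy()

print(gbest.round(3))   # converges near the invented optimum [0.1, 0.01, 0.5]
```

Any of the other SI algorithms in the comparison could be dropped into the same loop structure: only the position/velocity update rule changes.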
19 pages, 2769 KiB  
Article
Testing a New “Decrypted” Algorithm for Plantower Sensors Measuring PM2.5: Comparison with an Alternative Algorithm
by Lance Wallace
Algorithms 2023, 16(8), 392; https://doi.org/10.3390/a16080392 - 17 Aug 2023
Cited by 1 | Viewed by 953
Abstract
Recently, a hypothesis providing a detailed equation for the Plantower CF_1 algorithm for PM2.5 was published. The hypothesis was originally validated using eight independent Plantower sensors in four PurpleAir PA-II monitors providing PM2.5 estimates from a single site in 2020. If true, the hypothesis makes important predictions regarding PM2.5 measurements using CF_1. Therefore, we test the hypothesis using 18 Plantower sensors from four datasets from two sites in later years (2021–2023). The four general models from these datasets agreed with the original model to within 10%. A competing algorithm known as "pm2.5 alt" has been published and is freely available on the PurpleAir API site. The accuracy, precision, and limit of detection of the two algorithms are compared. The CF_1 algorithm overestimates PM2.5 by about 60–70% compared to two calibrated PurpleAir monitors using the pm2.5 alt algorithm. A requirement that the two sensors in a single monitor agree to within 20% was met by 85–99% of the data using the pm2.5 alt algorithm, but by only 22–74% of the data using the CF_1 algorithm. The limit of detection (LOD) of the CF_1 algorithm was about 10 times that of the pm2.5 alt algorithm, with the result that 71% of the CF_1 data fell below the LOD, compared to 1% for the pm2.5 alt algorithm. Full article
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)
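The within-monitor precision requirement mentioned above (the two sensors of one monitor agreeing to within 20%) can be checked in a few lines; the readings below are invented for illustration, not from either dataset.

```python
# Sketch of the paper's within-monitor precision check: each PurpleAir monitor
# carries two Plantower sensors (channels A and B), and a data pair is kept
# only if the channels agree to within 20% of their mean.
def agrees(a, b, tol=0.20):
    """True if paired readings differ by at most tol relative to their mean."""
    mean = (a + b) / 2
    return mean > 0 and abs(a - b) / mean <= tol

channel_a = [5.1, 8.0, 12.4, 3.0, 40.2]   # made-up PM2.5 readings (ug/m3)
channel_b = [5.0, 7.6, 12.9, 4.1, 39.5]

ok = sum(agrees(a, b) for a, b in zip(channel_a, channel_b))
print(f"{100 * ok / len(channel_a):.0f}% of pairs agree within 20%")
```

In the paper this fraction is one of the headline differences between the two algorithms (85–99% versus 22–74%).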
15 pages, 5415 KiB  
Article
Using Epidemiological Models to Predict the Spread of Information on Twitter
by Matteo Castiello, Dajana Conte and Samira Iscaro
Algorithms 2023, 16(8), 391; https://doi.org/10.3390/a16080391 - 17 Aug 2023
Cited by 1 | Viewed by 1250
Abstract
In this article, we analyze the spread of information on social media (Twitter) and propose a strategy based on epidemiological models. It is well known that social media are a powerful tool for spreading news, and in particular fake news, because they are free and easy to use. First, we propose an algorithm to build a suitable dataset for the ignorants–spreaders–recovered epidemiological model. Then, we show that using this model to study the diffusion of real news requires parameter estimation. Through a process of data reduction, i.e., by using only part of the built dataset to optimize parameters, it is also possible to accurately predict the evolution of news spread and its peak, in terms of the maximum number of people who share the news and the time at which that maximum occurs. Numerical results based on the analysis of real news confirm the applicability of the proposed model and strategy. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications IV)
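A minimal sketch of the ignorants–spreaders–recovered dynamics referenced above, integrated with explicit Euler; the rates are illustrative guesses, not parameters fitted to Twitter data.

```python
# Ignorants-spreaders-recovered (ISR) model, structurally analogous to SIR:
# ignorants who meet spreaders start sharing; spreaders eventually lose
# interest. lam (contact/spreading) and mu (forgetting) are made-up rates.
def simulate_isr(i0=0.999, s0=0.001, lam=0.8, mu=0.2, dt=0.1, steps=1000):
    i, s, r = i0, s0, 0.0
    peak_s, peak_t = s, 0.0
    for k in range(steps):
        di = -lam * i * s        # ignorants converted by contact with spreaders
        dr = mu * s              # spreaders lose interest and stop sharing
        ds = -di - dr            # net change in spreaders
        i, s, r = i + dt * di, s + dt * ds, r + dt * dr
        if s > peak_s:
            peak_s, peak_t = s, (k + 1) * dt
    return peak_s, peak_t

peak_s, peak_t = simulate_isr()
print(f"peak spreader fraction {peak_s:.2f} at t = {peak_t:.1f}")
```

The peak location and height are exactly the quantities the paper predicts for real news items, there with parameters estimated from part of the collected dataset rather than chosen by hand.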
18 pages, 375 KiB  
Article
A Generalized Framework for the Estimation of Edge Infection Probabilities
by András Bóta and Lauren Gardner
Algorithms 2023, 16(8), 390; https://doi.org/10.3390/a16080390 - 16 Aug 2023
Viewed by 824
Abstract
Modeling the spread of infections in networks is a well-studied and important field of research. Most infection and diffusion models require a real value or probability at the edges of the network as an input, but this is rarely available in real-life applications. The Generalized Inverse Infection Model (GIIM) has previously been used in real-world applications to solve this problem. However, these applications were limited to the specifics of the corresponding case studies, and the theoretical properties, as well as the wider applicability of the model, are yet to be investigated. Here, we show that the general model works with the most widely used infection models and is able to handle an arbitrary number of observations on such processes. We evaluate the accuracy and speed of the GIIM on a large variety of realistic infection scenarios. Full article
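The forward process that such inverse methods repeatedly evaluate can be sketched as an independent-cascade simulation; the graph and edge probabilities below are illustrative, and this is the forward model, not the GIIM itself.

```python
import random

# Independent-cascade forward model: given per-edge infection probabilities,
# estimate the expected outbreak size by Monte Carlo. The tiny graph and its
# probabilities are invented for illustration.
EDGES = {0: [(1, 0.9), (2, 0.4)], 1: [(3, 0.5)], 2: [(3, 0.5)], 3: []}

def cascade(seed_node, rng):
    infected, frontier = {seed_node}, [seed_node]
    while frontier:
        nxt = []
        for u in frontier:
            for v, p in EDGES[u]:
                # each newly infected node gets one chance per out-edge
                if v not in infected and rng.random() < p:
                    infected.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(infected)

rng = random.Random(1)
runs = 20000
avg = sum(cascade(0, rng) for _ in range(runs)) / runs
print(f"expected outbreak size from node 0: {avg:.2f}")
```

An inverse method like the GIIM searches over the edge probabilities so that simulated outcomes such as this match the observed infection data.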
20 pages, 1283 KiB  
Article
Systematic Analysis and Design of Control Systems Based on Lyapunov’s Direct Method
by Rick Voßwinkel and Klaus Röbenack
Algorithms 2023, 16(8), 389; https://doi.org/10.3390/a16080389 - 14 Aug 2023
Viewed by 969
Abstract
This paper deals with systematic approaches to the analysis of stability properties and controller design for nonlinear dynamical systems, using numerical methods based on sum-of-squares decomposition and algebraic methods based on quantifier elimination. Starting from Lyapunov's direct method, these techniques yield conditions for the automatic verification of (control) Lyapunov functions as well as for the constructive, structural determination of control laws. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms)
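A worked toy instance of Lyapunov's direct method (not drawn from the paper) shows the kind of certificate these automatic methods search for:

```latex
% Toy certificate, not from the paper: for the scalar system
%   \dot{x} = -x^3,
% the candidate V(x) = \tfrac{1}{2}x^2 satisfies
\begin{align*}
  V(x) &= \tfrac{1}{2}\,x^{2} > 0 \quad \text{for } x \neq 0,\\
  \dot{V}(x) &= x\,\dot{x} = -x^{4} < 0 \quad \text{for } x \neq 0,
\end{align*}
% so the origin is asymptotically stable. Both V and -\dot{V} are sums of
% squares, which is exactly the structure that sum-of-squares solvers can
% verify automatically; quantifier elimination instead discharges the
% "for all x != 0" conditions symbolically.
```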
21 pages, 1706 KiB  
Article
Decision Making under Conditions of Uncertainty and Risk in the Formation of Warehouse Stock of an Automotive Service Enterprise
by Irina Makarova, Polina Buyvol, Larisa Gabsalikhova, Eduard Belyaev and Eduard Mukhametdinov
Algorithms 2023, 16(8), 388; https://doi.org/10.3390/a16080388 - 13 Aug 2023
Viewed by 1089
Abstract
This article addresses the problem of determining a rational stock of spare parts, used for maintenance and current repairs, in the warehouse of a service center in an automobile manufacturer's branded network. The problem was solved on the basis of accumulated statistics of failures that occurred during the warranty period of vehicle operation. Game-theoretic methods were used in the calculation, taking into account the stochastic need for spare parts and the consequences of their presence or absence in stock, expressed as a profit and a possible additional fine when the current demand for spare parts does not match the available stock. Two decision-making settings are considered, under risk and under uncertainty, depending on how much information is available about the flow of vehicles entering the service center. If such statistics have been accumulated, the decision is made taking into account the risk associated with the uncertainty of the specific need for spare parts; otherwise, the choice is made on the basis of special decision criteria. To optimize the collection of information about the state of warehouse stocks, the transfer of information, and the assessment and forecasting of stocks, well-organized feedback is needed, which is presented in the form of an algorithm. Full article
(This article belongs to the Special Issue Optimization Algorithms in Logistics, Transportation, and SCM)
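For the no-statistics case mentioned above, decisions fall back on classical criteria. A sketch with an invented payoff table (rows: stocking levels, columns: demand scenarios) applying the Wald, Laplace, and Hurwicz rules:

```python
import statistics

# Decision under uncertainty on a made-up payoff matrix: entries are profits,
# with fines for unmet demand already folded into the numbers.
payoff = {
    "stock 10": [120,  90,  40],
    "stock 20": [100, 150,  80],
    "stock 30": [ 60, 130, 180],
}

# Wald (maximin): best worst-case profit -- the pessimistic choice.
wald = max(payoff, key=lambda s: min(payoff[s]))
# Laplace: all demand scenarios treated as equally likely.
laplace = max(payoff, key=lambda s: statistics.mean(payoff[s]))
# Hurwicz: blend best and worst case with an optimism coefficient alpha.
alpha = 0.6
hurwicz = max(payoff,
              key=lambda s: alpha * max(payoff[s]) + (1 - alpha) * min(payoff[s]))

print(wald, laplace, hurwicz)
```

Different criteria can recommend different stocking levels, which is why the choice of criterion is itself part of the decision-making framework.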
26 pages, 18314 KiB  
Article
Model Predictive Evolutionary Temperature Control via Neural-Network-Based Digital Twins
by Cihan Ates, Dogan Bicat, Radoslav Yankov, Joel Arweiler, Rainer Koch and Hans-Jörg Bauer
Algorithms 2023, 16(8), 387; https://doi.org/10.3390/a16080387 - 12 Aug 2023
Viewed by 1635
Abstract
In this study, we propose a population-based, data-driven intelligent controller that leverages neural-network-based digital twins for hypothesis testing. Initially, a diverse set of control laws is generated using genetic programming with the digital twin of the system, facilitating a robust response to unknown disturbances. During inference, the trained digital twin is utilized to virtually test alternative control actions for a multi-objective optimization task associated with each control action. Subsequently, the best policy is applied to the system. To evaluate the proposed model predictive control pipeline, experiments are conducted on a multi-mode heat transfer test rig. The objective is to achieve homogeneous cooling over the surface, minimizing the occurrence of hot spots and energy consumption. The measured variable vector comprises high dimensional infrared camera measurements arranged as a sequence (655,360 inputs), while the control variable includes power settings for fans responsible for convective cooling (3 outputs). Disturbances are induced by randomly altering the local heat loads. The findings reveal that by utilizing an evolutionary algorithm on measured data, a population of control laws can be effectively learned in the virtual space. This empowers the system to deliver robust performance. Significantly, the digital twin-assisted, population-based model predictive control (MPC) pipeline emerges as a superior approach compared to individual control models, especially when facing sudden and random changes in local heat loads. Leveraging the digital twin to virtually test alternative control policies leads to substantial improvements in the controller’s performance, even with limited training data. Full article
(This article belongs to the Special Issue Bio-Inspired Algorithms)
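The inference step described above (virtually testing candidate control actions on the twin and applying the best) can be sketched as follows; the linear "twin", the objective weights, and all dimensions are stand-ins, not the authors' neural-network model.

```python
import numpy as np

# Population-based MPC sketch: every candidate action from the evolved
# population is rolled out on a digital twin (here a crude linear cooling
# model) and scored on a two-term objective (hot-spot spread + fan energy).
rng = np.random.default_rng(3)
temps = rng.uniform(40, 60, 16)              # "measured" surface temperatures

def twin_step(temps, fans):
    # stand-in twin: each fan cools its quarter of the surface proportionally
    cooling = np.repeat(fans, len(temps) // len(fans))
    return temps - 0.5 * cooling

def objective(temps, fans):
    # temperature inhomogeneity (peak-to-peak spread) plus energy penalty
    return np.ptp(temps) + 0.1 * fans.sum()

# population of candidate fan-power settings, standing in for evolved laws
population = [rng.uniform(0, 10, 4) for _ in range(50)]
best = min(population, key=lambda f: objective(twin_step(temps, f), f))
print("chosen fan powers:", best.round(1))
```

In the paper, the candidates are control laws produced by genetic programming and the twin is a trained neural network, but the select-by-virtual-rollout loop has this shape.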
16 pages, 3314 KiB  
Article
Reducing Nervousness in Master Production Planning: A Systematic Approach Incorporating Product-Driven Strategies
by Patricio Sáez, Carlos Herrera and Victor Parada
Algorithms 2023, 16(8), 386; https://doi.org/10.3390/a16080386 - 11 Aug 2023
Viewed by 1036
Abstract
Manufacturing companies face a significant challenge when developing their master production schedule, navigating unforeseen disruptions during daily operations. Moreover, fluctuations in demand pose a substantial risk to scheduling and are the main cause of instability and uncertainty in the system. To address these challenges, employing flexible systems to mitigate uncertainty without incurring additional costs and generate sustainable responses in industrial applications is crucial. This paper proposes a product-driven system to complement the master production plan generated by a mathematical model. This system incorporates intelligent agents that make production decisions with a function capable of reducing uncertainty without significantly increasing production costs. The agents modify or determine the forecasted production quantities for each cycle or period. In the case study conducted, a master production plan was established for 12 products over a one-year time horizon. The proposed solution achieved an 11.42% reduction in uncertainty, albeit with a 2.39% cost increase. Full article
(This article belongs to the Special Issue Scheduling Theory and Algorithms for Sustainable Manufacturing)
14 pages, 559 KiB  
Article
Constant-Beamwidth LCMV Beamformer with Rectangular Arrays
by Vitor Probst Curtarelli and Israel Cohen
Algorithms 2023, 16(8), 385; https://doi.org/10.3390/a16080385 - 10 Aug 2023
Cited by 1 | Viewed by 1013
Abstract
This paper presents a novel approach utilizing uniform rectangular arrays to design a constant-beamwidth (CB) linearly constrained minimum variance (LCMV) beamformer that also improves white noise gain and directivity. By employing a generalization of the convolutional Kronecker product beamforming technique, we decompose a physical array into virtual subarrays, each tailored to achieve a specific desired feature, and subsequently synthesize the original array's beamformer. Through simulations, we demonstrate that the proposed approach achieves the desired beamforming characteristics while maintaining favorable levels of white noise gain and directivity. A comparative analysis shows that the proposed method outperforms existing methods from the literature. Full article
(This article belongs to the Special Issue Digital Signal Processing Algorithms and Applications)
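The Kronecker decomposition at the heart of the approach can be illustrated with toy weights (uniform and Hamming-tapered factors here, not the LCMV design itself):

```python
import numpy as np

# Kronecker-product beamforming sketch: the weight vector of a rectangular
# array is synthesized as the Kronecker product of two subarray factors, so
# each factor can be designed for one feature (e.g. one for beamwidth, one
# for noise gain). The factors below are toy choices, not the LCMV solution.
w_rows = np.ones(4) / 4                          # uniform factor, one axis
w_cols = np.hamming(6)
w_cols /= w_cols.sum()                           # tapered factor, other axis

w = np.kron(w_rows, w_cols)                      # full 4 x 6 = 24-weight vector
W = w.reshape(4, 6)                              # same weights on the grid

# The factorization means the grid of weights is a rank-1 outer product:
print(W.shape, np.allclose(W, np.outer(w_rows, w_cols)))
```

Because the full weight grid factorizes, each subarray's beampattern can be shaped independently and the physical array's response is their product.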
20 pages, 1198 KiB  
Article
Reinforcement Learning Derived High-Alpha Aerobatic Manoeuvres for Fixed Wing Operation in Confined Spaces
by Robert Clarke, Liam Fletcher, Sebastian East and Thomas Richardson
Algorithms 2023, 16(8), 384; https://doi.org/10.3390/a16080384 - 10 Aug 2023
Cited by 1 | Viewed by 1049
Abstract
Reinforcement learning has been used on a variety of control tasks for drones, including, in previous work at the University of Bristol, on perching manoeuvres with sweep-wing aircraft. In this paper, a new aircraft model is presented representing flight up to very high angles of attack where the aerodynamic models are highly nonlinear. The model is employed to develop high-alpha manoeuvres, using reinforcement learning to exploit the nonlinearities at the edge of the flight envelope, enabling fixed-wing operations in tightly confined spaces. Training networks for multiple manoeuvres is also demonstrated. The approach is shown to generate controllers that take full advantage of the aircraft capability. It is suggested that a combination of these neural network-based controllers, together with classical model predictive control, could be used to operate efficiently within the low alpha flight regime and, yet, respond rapidly in confined spaces where high alpha, agile manoeuvres are required. Full article
(This article belongs to the Special Issue Advancements in Reinforcement Learning Algorithms)
11 pages, 775 KiB  
Article
A Neural-Network-Based Competition between Short-Lived Particle Candidates in the CBM Experiment at FAIR
by Artemiy Belousov, Ivan Kisel and Robin Lakos
Algorithms 2023, 16(8), 383; https://doi.org/10.3390/a16080383 - 9 Aug 2023
Viewed by 1036
Abstract
Fast and efficient algorithms optimized for high-performance computers are crucial for the real-time analysis of data in heavy-ion physics experiments. Furthermore, the application of neural networks and other machine learning techniques has become more popular in physics experiments in recent years. For that reason, a fast neural network package called ANN4FLES has been developed in C++; it will be optimized for use on a high-performance computer farm for the future Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR, Darmstadt, Germany). This paper describes the first application of ANN4FLES in the reconstruction chain of the CBM experiment, replacing the existing particle competition between Ks-mesons and Λ-hyperons in the KF Particle Finder with a neural-network-based approach. The raw classification performance of the neural network exceeds 98% on the test set. Furthermore, the neural-network-based competition reduced the background noise and thereby improved the quality of the physics analysis. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
19 pages, 10277 KiB  
Article
Learning to Extrapolate Using Continued Fractions: Predicting the Critical Temperature of Superconductor Materials
by Pablo Moscato, Mohammad Nazmul Haque, Kevin Huang, Julia Sloan and Jonathon Corrales de Oliveira
Algorithms 2023, 16(8), 382; https://doi.org/10.3390/a16080382 - 8 Aug 2023
Viewed by 1515
Abstract
In the field of Artificial Intelligence (AI) and Machine Learning (ML), a common objective is the approximation of unknown target functions y = f(x) using limited instances S = {(x(i), y(i))}, where x(i) ∈ D and D represents the domain of interest. We refer to S as the training set and aim to identify a low-complexity mathematical model that can effectively approximate this target function for new instances x. Consequently, the model's generalization ability is evaluated on a separate set T = {x(j)} ⊂ D, where T ≠ S, frequently with T ∩ S = ∅, to assess its performance beyond the training set. However, certain applications require accurate approximation not only within the original domain D but also in an extended domain D′ that encompasses D. This becomes particularly relevant in scenarios involving the design of new structures, where minimizing errors in approximations is crucial. For example, when developing new materials through data-driven approaches, the AI/ML system can provide valuable insights to guide the design process by serving as a surrogate function. Consequently, the learned model can be employed to facilitate the design of new laboratory experiments. In this paper, we propose a method for multivariate regression based on iterative fitting of a continued fraction, incorporating additive spline models. We compare the performance of our method with established techniques, including AdaBoost, Kernel Ridge, Linear Regression, Lasso Lars, Linear Support Vector Regression, Multi-Layer Perceptrons, Random Forest, Stochastic Gradient Descent, and XGBoost. To evaluate these methods, we focus on an important problem in the field, namely, predicting the critical temperature of superconductors based on their physical–chemical characteristics. Full article
(This article belongs to the Special Issue Machine Learning Algorithms and Methods for Predictive Analytics)
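Evaluating a truncated continued-fraction model of the kind described above is a short bottom-up recurrence; the affine coefficients below are hand-picked for illustration, not fitted.

```python
# Evaluate a truncated continued-fraction model
#   f(x) = a0(x) + b1(x) / (a1(x) + b2(x) / (a2(x) + ...))
# bottom-up. Each term is affine in x here; the paper uses richer
# (spline-based) terms fitted iteratively, which this sketch omits.
def cf_eval(x, a, b):
    """a, b: lists of (slope, intercept) pairs, len(b) == len(a) - 1."""
    val = a[-1][0] * x + a[-1][1]                    # deepest denominator
    for (sa, ia), (sb, ib) in zip(reversed(a[:-1]), reversed(b)):
        val = sa * x + ia + (sb * x + ib) / val      # fold one level outward
    return val

# Three-level toy model with hand-picked affine coefficients
a = [(0.5, 1.0), (1.0, 2.0), (0.0, 4.0)]
b = [(0.2, 0.0), (1.0, 1.0)]
print(round(cf_eval(2.0, a, b), 4))   # -> 2.0842
```

Because the rational form extrapolates differently from polynomials or splines, fitting such fractions is one route to the out-of-domain accuracy the paper targets.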
23 pages, 4398 KiB  
Review
Investigating Routing in the VANET Network: Review and Classification of Approaches
by Arun Kumar Sangaiah, Amir Javadpour, Chung-Chian Hsu, Anandakumar Haldorai and Ahmad Zeynivand
Algorithms 2023, 16(8), 381; https://doi.org/10.3390/a16080381 - 7 Aug 2023
Cited by 5 | Viewed by 1847
Abstract
Vehicular Ad Hoc Networks (VANETs) need methods to manage the traffic caused by high vehicle volumes during day and night, the interaction of vehicles and pedestrians, vehicle collisions, increasing travel delays, and energy issues. Routing is one of the most critical problems in VANETs. One category of machine learning is reinforcement learning (RL), whose algorithms can find a more optimal path: based on the feedback they receive from the environment, these methods affect the system by learning from previous actions and reactions. This paper provides a comprehensive review of methods such as reinforcement learning, deep reinforcement learning, and fuzzy learning in traffic networks, in order to identify the best method for finding optimal routes in a VANET. The paper discusses the advantages, disadvantages, and performance of the methods introduced. Finally, we categorize the investigated methods and indicate where each performs best. Full article
(This article belongs to the Collection Featured Reviews of Algorithms)
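As a concrete instance of the RL routing schemes the review surveys, here is a tabular Q-learning sketch on a tiny static graph; real VANET routing must additionally handle mobility and link breakage, which this omits.

```python
import random

# Q-routing sketch: learn per-node next-hop choices that minimize cumulative
# link delay to a destination. Graph, delays, and hyperparameters are toy
# values; edge weights are link delays.
GRAPH = {0: {1: 1.0, 2: 5.0}, 1: {2: 1.0, 3: 4.0}, 2: {3: 1.0}, 3: {}}
DEST = 3
rng = random.Random(0)
q = {(u, v): 0.0 for u in GRAPH for v in GRAPH[u]}   # estimated delay-to-go

for _ in range(3000):
    u = rng.choice([0, 1, 2])
    while u != DEST:
        nbrs = list(GRAPH[u])
        # epsilon-greedy: usually pick the neighbor with lowest estimated delay
        v = rng.choice(nbrs) if rng.random() < 0.2 else min(nbrs, key=lambda w: q[(u, w)])
        future = 0.0 if v == DEST else min(q[(v, w)] for w in GRAPH[v])
        q[(u, v)] += 0.1 * (GRAPH[u][v] + future - q[(u, v)])
        u = v

route, u = [0], 0
while u != DEST:
    u = min(GRAPH[u], key=lambda w: q[(u, w)])
    route.append(u)
print(route)   # learns the lowest-delay route 0 -> 1 -> 2 -> 3
```

The deep-RL and fuzzy variants covered in the review replace the Q-table with function approximators and richer state (position, speed, link quality), but the learn-from-feedback loop is the same.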
25 pages, 16908 KiB  
Article
Design Optimization of Truss Structures Using a Graph Neural Network-Based Surrogate Model
by Navid Nourian, Mamdouh El-Badry and Maziar Jamshidi
Algorithms 2023, 16(8), 380; https://doi.org/10.3390/a16080380 - 7 Aug 2023
Cited by 1 | Viewed by 2637
Abstract
One of the primary objectives of truss structure design optimization is to minimize the total weight by determining the optimal sizes of the truss members while ensuring structural stability and integrity against external loads. Trusses consist of pin joints connected by straight members, analogous to vertices and edges in a mathematical graph. This characteristic motivates the idea of representing truss joints and members as graph vertices and edges. In this study, a Graph Neural Network (GNN) is employed to exploit the benefits of graph representation and develop a GNN-based surrogate model integrated with a Particle Swarm Optimization (PSO) algorithm to approximate nodal displacements of trusses during the design optimization process. This approach enables the determination of the optimal cross-sectional areas of the truss members with fewer finite element model (FEM) analyses. The validity and effectiveness of the GNN-based optimization technique are assessed by comparing its results with those of a conventional FEM-based design optimization of three truss structures: a 10-bar planar truss, a 72-bar space truss, and a 200-bar planar truss. The results demonstrate the superiority of the GNN-based optimization, which can achieve the optimal solutions without violating constraints and at a faster rate, particularly for complex truss structures like the 200-bar planar truss problem. Full article
(This article belongs to the Special Issue Deep Neural Networks and Optimization Algorithms)
27 pages, 10487 KiB  
Article
A Multi-Objective Tri-Level Algorithm for Hub-and-Spoke Network in Short Sea Shipping Transportation
by Panagiotis Farmakis, Athanasios Chassiakos and Stylianos Karatzas
Algorithms 2023, 16(8), 379; https://doi.org/10.3390/a16080379 - 7 Aug 2023
Viewed by 1507
Abstract
Hub-and-Spoke (H&S) network modeling is a form of transport topology optimization in which network joins are connected through intermediate hub nodes. The Short Sea Shipping (SSS) problem aims to efficiently disperse passenger flows involving multiple vessel routes and intermediary hubs through which passengers [...] Read more.
Hub-and-Spoke (H&S) network modeling is a form of transport topology optimization in which network joins are connected through intermediate hub nodes. The Short Sea Shipping (SSS) problem aims to efficiently disperse passenger flows involving multiple vessel routes and intermediary hubs through which passengers are transferred to their final destination. The problem contains elements of the Hub-and-Spoke and Travelling Salesman, with different levels of passenger flows among islands, making it more demanding than the typical H&S one, as the hub selection within nodes and the shortest routes among islands are internal optimization goals. This work introduces a multi-objective tri-level optimization algorithm for the General Network of Short Sea Shipping (GNSSS) problem to reduce travel distances and transportation costs while improving travel quality and user satisfaction, mainly by minimizing passenger hours spent on board. The analysis is performed at three levels of decisions: (a) the hub node assignment, (b) the island-to-line assignment, and (c) the island service sequence within each line. Due to the magnitude and complexity of the problem, a genetic algorithm is employed for the implementation. The algorithm performance has been tested and evaluated through several real and simulated case studies of different sizes and operational scenarios. The results indicate that the algorithm provides rational solutions in accordance with the desired sub-objectives. The multi-objective consideration leads to solutions that are quite scattered in the solution space, indicating the necessity of employing formal optimization methods. Typical Pareto diagrams present non-dominated solutions varying at a range of 30 percent in terms of the total distance traveled and more than 50 percent in relation to the cumulative passenger hours. 
Evaluation results further indicate satisfactory algorithm performance in terms of result stability (repeatability) and computational time requirements. In conclusion, the work provides a tool for assisting network operation and transport planning decisions by shipping companies in the directions of cost reduction and traveler service upgrade. In addition, the model can be adapted to other applications in transportation and in the supply chain. Full article
(This article belongs to the Special Issue Optimization Algorithms for Decision Support Systems)
Show Figures

Graphical abstract

32 pages, 2679 KiB  
Review
An Overview of Privacy Dimensions on the Industrial Internet of Things (IIoT)
by Vasiliki Demertzi, Stavros Demertzis and Konstantinos Demertzis
Algorithms 2023, 16(8), 378; https://doi.org/10.3390/a16080378 - 6 Aug 2023
Cited by 7 | Viewed by 1606
Abstract
The rapid advancements in technology have given rise to groundbreaking solutions and practical applications in the field of the Industrial Internet of Things (IIoT). These advancements have had a profound impact on the structures of numerous industrial organizations. The IIoT, a seamless integration [...] Read more.
The rapid advancements in technology have given rise to groundbreaking solutions and practical applications in the field of the Industrial Internet of Things (IIoT). These advancements have had a profound impact on the structures of numerous industrial organizations. The IIoT, a seamless integration of the physical and digital realms with minimal human intervention, has ushered in radical changes in the economy and modern business practices. At the heart of the IIoT lies its ability to gather and analyze vast volumes of data, which is then harnessed by artificial intelligence systems to perform intelligent tasks such as optimizing networked units’ performance, identifying and correcting errors, and implementing proactive maintenance measures. However, implementing IIoT systems is fraught with difficulties, notably in terms of security and privacy. IIoT implementations are susceptible to sophisticated security attacks at various levels of networking and communication architecture. The complex and often heterogeneous nature of these systems makes it difficult to ensure availability, confidentiality, and integrity, raising concerns about mistrust in network operations, privacy breaches, and potential loss of critical, personal, and sensitive information of the network's end-users. To address these issues, this study aims to investigate the privacy requirements of an IIoT ecosystem as outlined by industry standards. It provides a comprehensive overview of the IIoT, its advantages, disadvantages, challenges, and the imperative need for industrial privacy. The research methodology encompasses a thorough literature review to gather existing knowledge and insights on the subject. Additionally, it explores how the IIoT is transforming the manufacturing industry and enhancing industrial processes, incorporating case studies and real-world examples to illustrate its practical applications and impact. 
Also, the research endeavors to offer actionable recommendations on implementing privacy-enhancing measures and establishing a secure IIoT ecosystem. Full article
(This article belongs to the Special Issue Computational Intelligence in Wireless Sensor Networks and IoT)
Show Figures

Figure 1

13 pages, 2214 KiB  
Article
Ensemble Transfer Learning for Distinguishing Cognitively Normal and Mild Cognitive Impairment Patients Using MRI
by Pratham Grover, Kunal Chaturvedi, Xing Zi, Amit Saxena, Shiv Prakash, Tony Jan and Mukesh Prasad
Algorithms 2023, 16(8), 377; https://doi.org/10.3390/a16080377 - 6 Aug 2023
Cited by 2 | Viewed by 1856
Abstract
Alzheimer’s disease is a chronic neurodegenerative disease that causes brain cells to degenerate, resulting in decreased physical and mental abilities and, in severe cases, permanent memory loss. It is considered as the most common and fatal form of dementia. Although mild cognitive impairment [...] Read more.
Alzheimer’s disease is a chronic neurodegenerative disease that causes brain cells to degenerate, resulting in decreased physical and mental abilities and, in severe cases, permanent memory loss. It is considered as the most common and fatal form of dementia. Although mild cognitive impairment (MCI) precedes Alzheimer’s disease (AD), it does not necessarily show the obvious symptoms of AD. As a result, it becomes challenging to distinguish between mild cognitive impairment and cognitively normal. In this paper, we propose an ensemble of deep learners based on convolutional neural networks for the early diagnosis of Alzheimer’s disease. The proposed approach utilises simple averaging ensemble and weighted averaging ensemble methods. The ensemble-based transfer learning model demonstrates enhanced generalization and performance for AD diagnosis compared to traditional transfer learning methods. Extensive experiments on the OASIS-3 dataset validate the effectiveness of the proposed model, showcasing its superiority over state-of-the-art transfer learning approaches in terms of accuracy, robustness, and efficiency. Full article
Show Figures

Figure 1

18 pages, 5509 KiB  
Article
Comparison of Meta-Heuristic Optimization Algorithms for Global Maximum Power Point Tracking of Partially Shaded Solar Photovoltaic Systems
by Timmidi Nagadurga, Ramesh Devarapalli and Łukasz Knypiński
Algorithms 2023, 16(8), 376; https://doi.org/10.3390/a16080376 - 5 Aug 2023
Cited by 5 | Viewed by 1467
Abstract
Partial shading conditions lead to power mismatches among photovoltaic (PV) panels, resulting in the generation of multiple peak power points on the P-V curve. At this point, conventional MPPT algorithms fail to operate effectively. This research work mainly focuses on the exploration of [...] Read more.
Partial shading conditions lead to power mismatches among photovoltaic (PV) panels, resulting in the generation of multiple peak power points on the P-V curve. At this point, conventional MPPT algorithms fail to operate effectively. This research work mainly focuses on the exploration of performance optimization and harnessing more power during the partial shading environment of solar PV systems with a single-objective non-linear optimization problem subjected to different operations formulated and solved using recent metaheuristic algorithms such as Cat Swarm Optimization (CSO), Grey Wolf Optimization (GWO) and the proposed Chimp Optimization algorithm (ChOA). This research work is implemented on a test system with the help of MATLAB/SIMULINK, and the obtained results are discussed. From the overall results, the metaheuristic methods used by the trackers based on their analysis showed convergence towards the global Maximum Power Point (MPP). Additionally, the proposed ChOA technique shows improved performance over other existing algorithms. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimal Design of Engineering Problems)
Show Figures

Figure 1

15 pages, 11652 KiB  
Article
Ascertaining the Ideality of Photometric Stereo Datasets under Unknown Lighting
by Elisa Crabu, Federica Pes, Giuseppe Rodriguez and Giuseppa Tanda
Algorithms 2023, 16(8), 375; https://doi.org/10.3390/a16080375 - 5 Aug 2023
Cited by 1 | Viewed by 995
Abstract
The standard photometric stereo model makes several assumptions that are rarely verified in experimental datasets. In particular, the observed object should behave as a Lambertian reflector, and the light sources should be positioned at an infinite distance from it, along a known direction. [...] Read more.
The standard photometric stereo model makes several assumptions that are rarely verified in experimental datasets. In particular, the observed object should behave as a Lambertian reflector, and the light sources should be positioned at an infinite distance from it, along a known direction. Even when Lambert’s law is approximately fulfilled, an accurate assessment of the relative position between the light source and the target is often unavailable in real situations. The Hayakawa procedure is a computational method for estimating such information directly from data images. It occasionally breaks down when some of the available images excessively deviate from ideality. This is generally due to observing a non-Lambertian surface, or illuminating it from a close distance, or both. Indeed, in narrow shooting scenarios, typical, e.g., of archaeological excavation sites, it is impossible to position a flashlight at a sufficient distance from the observed surface. It is then necessary to understand if a given dataset is reliable and which images should be selected to better reconstruct the target. In this paper, we propose some algorithms to perform this task and explore their effectiveness. Full article
(This article belongs to the Special Issue Recent Advances in Algorithms for Computer Vision Applications)
Show Figures

Figure 1

17 pages, 523 KiB  
Article
A Greedy Pursuit Hierarchical Iteration Algorithm for Multi-Input Systems with Colored Noise and Unknown Time-Delays
by Ruijuan Du and Taiyang Tao
Algorithms 2023, 16(8), 374; https://doi.org/10.3390/a16080374 - 4 Aug 2023
Viewed by 1097
Abstract
This paper focuses on the joint estimation of parameters and time delays for multi-input systems that contain unknown input delays and colored noise. A greedy pursuit hierarchical iteration algorithm is proposed, which can reduce the estimation cost. Firstly, an over-parameterized approach is employed [...] Read more.
This paper focuses on the joint estimation of parameters and time delays for multi-input systems that contain unknown input delays and colored noise. A greedy pursuit hierarchical iteration algorithm is proposed, which can reduce the estimation cost. Firstly, an over-parameterized approach is employed to construct a sparse system model of multi-input systems even in the absence of prior knowledge of time delays. Secondly, the hierarchical principle is applied to replace the unknown true noise items with their estimation values, and a greedy pursuit search based on compressed sensing is employed to find key parameters using limited sampled data. The greedy pursuit search can effectively reduce the scale of the system model and improve the identification efficiency. Then, the parameters and time delays can be estimated simultaneously while considering the known orders and found locations of key parameters by utilizing iterative methods with limited sampled data. Finally, some simulations are provided to illustrate the effectiveness of the presented algorithm in this paper. Full article
Show Figures

Figure 1

16 pages, 3396 KiB  
Article
Data-Driven Deployment of Cargo Drones: A U.S. Case Study Identifying Key Markets and Routes
by Raj Bridgelall
Algorithms 2023, 16(8), 373; https://doi.org/10.3390/a16080373 - 3 Aug 2023
Viewed by 1215
Abstract
Electric and autonomous aircraft (EAA) are set to disrupt current cargo-shipping models. To maximize the benefits of this technology, investors and logistics managers need information on target commodities, service location establishment, and the distribution of origin–destination pairs within EAA’s range limitations. This research [...] Read more.
Electric and autonomous aircraft (EAA) are set to disrupt current cargo-shipping models. To maximize the benefits of this technology, investors and logistics managers need information on target commodities, service location establishment, and the distribution of origin–destination pairs within EAA’s range limitations. This research introduces a three-phase data-mining and geographic information system (GIS) algorithm to support data-driven decision-making under uncertainty. Analysts can modify and expand this workflow to scrutinize origin–destination commodity flow datasets representing various locations. The algorithm identifies four commodity categories contributing to more than one-third of the value transported by aircraft across the contiguous United States, yet only 5% of the weight. The workflow highlights 8 out of 129 regional locations that moved more than 20% of the weight of those four commodity categories. A distance band of 400 miles among these eight locations accounts for more than 80% of the transported weight. This study addresses a literature gap, identifying opportunities for supply chain redesign using EAA. The presented methodology can guide planners and investors in identifying prime target markets for emerging EAA technologies using regional datasets. Full article
Show Figures

Figure 1

30 pages, 2668 KiB  
Article
Applying Particle Swarm Optimization Variations to Solve the Transportation Problem Effectively
by Chrysanthi Aroniadi and Grigorios N. Beligiannis
Algorithms 2023, 16(8), 372; https://doi.org/10.3390/a16080372 - 3 Aug 2023
Viewed by 911
Abstract
The Transportation Problem (TP) is a special type of linear programming problem, where the objective is to minimize the cost of distributing a product from a number of sources to a number of destinations. Many methods for solving the TP have been studied [...] Read more.
The Transportation Problem (TP) is a special type of linear programming problem, where the objective is to minimize the cost of distributing a product from a number of sources to a number of destinations. Many methods for solving the TP have been studied over time. However, exact methods do not always succeed in finding the optimal solution or a solution that effectively approximates the optimal one. This paper introduces two new variations of the well-established Particle Swarm Optimization (PSO) algorithm named the Trigonometric Acceleration Coefficients-PSO (TrigAc-PSO) and the Four Sectors Varying Acceleration Coefficients PSO (FSVAC-PSO) and applies them to solve the TP. The performances of the proposed variations are examined and validated by carrying out extensive experimental tests. In order to demonstrate the efficiency of the proposed PSO variations, thirty two problems with different sizes have been solved to evaluate and demonstrate their performance. Moreover, the proposed PSO variations were compared with exact methods such as Vogel’s Approximation Method (VAM), the Total Differences Method 1 (TDM1), the Total Opportunity Cost Matrix-Minimal Total (TOCM-MT), the Juman and Hoque Method (JHM) and the Bilqis Chastine Erma method (BCE). Last but not least, the proposed variations were also compared with other PSO variations that are well known for their completeness and efficiency, such as Decreasing Weight Particle Swarm Optimization (DWPSO) and Time Varying Acceleration Coefficients (TVAC). Experimental results show that the proposed variations achieve very satisfactory results in terms of their efficiency and effectiveness compared to existing either exact or heuristic methods. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimal Design of Engineering Problems)
Show Figures

Figure 1

23 pages, 2236 KiB  
Article
Identification of Mechanical Parameters in Flexible Drive Systems Using Hybrid Particle Swarm Optimization Based on the Quasi-Newton Method
by Ishaq Hafez and Rached Dhaouadi
Algorithms 2023, 16(8), 371; https://doi.org/10.3390/a16080371 - 31 Jul 2023
Cited by 1 | Viewed by 1290
Abstract
This study presents hybrid particle swarm optimization with quasi-Newton (HPSO-QN), a hybrid optimization method for accurately identifying mechanical parameters in two-mass model (2MM) systems. These systems are commonly used to model and control high-performance electric drive systems with elastic joints, which are prevalent [...] Read more.
This study presents hybrid particle swarm optimization with quasi-Newton (HPSO-QN), a hybrid optimization method for accurately identifying mechanical parameters in two-mass model (2MM) systems. These systems are commonly used to model and control high-performance electric drive systems with elastic joints, which are prevalent in modern industrial production. The proposed method combines the global exploration capabilities of particle swarm optimization (PSO) with the local exploitation abilities of the quasi-Newton (QN) method to precisely estimate the motor and load inertias, shaft stiffness, and friction coefficients of the 2MM system. By integrating these two optimization techniques, the HPSO-QN method exhibits superior accuracy and performance compared to standard PSO algorithms. Experimental validation using a 2MM system demonstrates the effectiveness of the proposed method in accurately identifying and improving the mechanical parameters of these complex systems. The HPSO-QN method offers significant implications for enhancing the modeling, performance, and stability of 2MM systems and can be extended to other systems with flexible shafts and couplings. This study contributes to the development of accurate and effective parameter identification methods for complex systems, emphasizing the crucial role of precise parameter estimation in achieving optimal control performance and stability. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms for Optimization)
Show Figures

Figure 1

17 pages, 1330 KiB  
Article
Hardware Suitability of Complex Natural Resonances Extraction Algorithms in Backscattered Radar Signals
by Andres Gallego and Francisco Roman
Algorithms 2023, 16(8), 370; https://doi.org/10.3390/a16080370 - 31 Jul 2023
Viewed by 1047
Abstract
Complex natural resonances (CNRs) extraction methods such as matrix pencil method (MPM), Cauchy, vector-fitting Cauchy method (VCM), or Prony’s method decompose a signal in terms of frequency components and damping factors based on Baum’s singularity expansion method (SEM) either in the time or [...] Read more.
Complex natural resonances (CNRs) extraction methods such as matrix pencil method (MPM), Cauchy, vector-fitting Cauchy method (VCM), or Prony’s method decompose a signal in terms of frequency components and damping factors based on Baum’s singularity expansion method (SEM) either in the time or frequency domain. The validation of these CNRs is accomplished through a reconstruction of the signal based on these complex poles and residues and a comparison with the input signal. Here, we perform quantitative performance metrics in order to have an evaluation of each method’s hardware suitability factor before selecting a hardware platform using benchmark signals, simulations of backscattering scenarios, and experiments. Full article
(This article belongs to the Special Issue Digital Signal Processing Algorithms and Applications)
Show Figures

Figure 1

22 pages, 2661 KiB  
Article
Human Action Representation Learning Using an Attention-Driven Residual 3DCNN Network
by Hayat Ullah and Arslan Munir
Algorithms 2023, 16(8), 369; https://doi.org/10.3390/a16080369 - 31 Jul 2023
Cited by 1 | Viewed by 1029
Abstract
The recognition of human activities using vision-based techniques has become a crucial research field in video analytics. Over the last decade, there have been numerous advancements in deep learning algorithms aimed at accurately detecting complex human actions in video streams. While these algorithms [...] Read more.
The recognition of human activities using vision-based techniques has become a crucial research field in video analytics. Over the last decade, there have been numerous advancements in deep learning algorithms aimed at accurately detecting complex human actions in video streams. While these algorithms have demonstrated impressive performance in activity recognition, they often exhibit a bias towards either model performance or computational efficiency. This biased trade-off between robustness and efficiency poses challenges when addressing complex human activity recognition problems. To address this issue, this paper presents a computationally efficient yet robust approach, exploiting saliency-aware spatial and temporal features for human action recognition in videos. To achieve effective representation of human actions, we propose an efficient approach called the dual-attentional Residual 3D Convolutional Neural Network (DA-R3DCNN). Our proposed method utilizes a unified channel-spatial attention mechanism, allowing it to efficiently extract significant human-centric features from video frames. By combining dual channel-spatial attention layers with residual 3D convolution layers, the network becomes more discerning in capturing spatial receptive fields containing objects within the feature maps. To assess the effectiveness and robustness of our proposed method, we have conducted extensive experiments on four well-established benchmark datasets for human action recognition. The quantitative results obtained validate the efficiency of our method, showcasing significant improvements in accuracy of up to 11% as compared to state-of-the-art human action recognition methods. Additionally, our evaluation of inference time reveals that the proposed method achieves up to a 74× improvement in frames per second (FPS) compared to existing approaches, thus showing the suitability and effectiveness of the proposed DA-R3DCNN for real-time human activity recognition. Full article
(This article belongs to the Special Issue Algorithms for Image Processing and Machine Vision)
Show Figures

Figure 1

3 pages, 158 KiB  
Editorial
Special Issue “Algorithms for Feature Selection”
by Muhammad Adnan Khan
Algorithms 2023, 16(8), 368; https://doi.org/10.3390/a16080368 - 31 Jul 2023
Cited by 1 | Viewed by 804
Abstract
This Special Issue of the open access journal Algorithms is dedicated to showcasing cutting-edge research in algorithms for feature selection [...] Full article
(This article belongs to the Special Issue Algorithms for Feature Selection)
19 pages, 5844 KiB  
Article
Design and Development of Energy Efficient Algorithm for Smart Beekeeping Device to Device Communication Based on Data Aggregation Techniques
by Elias Ntawuzumunsi, Santhi Kumaran, Louis Sibomana and Kambombo Mtonga
Algorithms 2023, 16(8), 367; https://doi.org/10.3390/a16080367 - 30 Jul 2023
Cited by 3 | Viewed by 1207
Abstract
Bees, like other insects, indirectly contribute to job creation, food security, and poverty reduction. However, across many parts of the world, bee populations are in decline, affecting crop yields due to reduced pollination and ultimately impacting human nutrition. Technology holds promise for countering [...] Read more.
Bees, like other insects, indirectly contribute to job creation, food security, and poverty reduction. However, across many parts of the world, bee populations are in decline, affecting crop yields due to reduced pollination and ultimately impacting human nutrition. Technology holds promise for countering the impacts of human activities and climatic change on bees’ survival and honey production. However, considering that smart beekeeping activities mostly operate in remote areas where the use of grid power is inaccessible and the use of batteries to power is not feasible, there is thus a need for such systems to be energy efficient. This work explores the integration of device-to-device communication with 5G technology as a solution to overcome the energy and throughput concerns in smart beekeeping technology. Mobile-based device-to-device communication facilitates devices to communicate directly without the need of immediate infrastructure. This type of communication offers advantages in terms of delay reduction, increased throughput, and reduced energy consumption. The faster data transmission capabilities and low-power modes of 5G networks would significantly enhance the energy efficiency during the system’s idle or standby states. Additionally, the paper analyzes the application of both the discovery and communication services offered by 5G in device-to-device-based smart bee farming. A novel, energy-efficient algorithm for smart beekeeping was developed using data integration and data scheduling and its performance was compared to existing algorithms. The simulation results demonstrated that the proposed smart beekeeping device-to-device communication with data integration guarantees a good quality of service while enhancing energy efficiency. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
Show Figures

Figure 1

12 pages, 1932 KiB  
Article
Machine-Learning Techniques for Predicting Phishing Attacks in Blockchain Networks: A Comparative Study
by Kunj Joshi, Chintan Bhatt, Kaushal Shah, Dwireph Parmar, Juan M. Corchado, Alessandro Bruno and Pier Luigi Mazzeo
Algorithms 2023, 16(8), 366; https://doi.org/10.3390/a16080366 - 29 Jul 2023
Cited by 6 | Viewed by 3214
Abstract
Security in the blockchain has become a topic of concern because of the recent developments in the field. One of the most common cyberattacks is the so-called phishing attack, wherein the attacker tricks the miner into adding a malicious block to the chain [...] Read more.
Security in the blockchain has become a topic of concern because of the recent developments in the field. One of the most common cyberattacks is the so-called phishing attack, wherein the attacker tricks the miner into adding a malicious block to the chain under genuine conditions to avoid detection and potentially destroy the entire blockchain. The current attempts at detection include the consensus protocol; however, it fails when a genuine miner tries to add a new block to the blockchain. Zero-trust policies have started making the rounds in the field as they ensure the complete detection of phishing attempts; however, they are still in the process of deployment, which may take a significant amount of time. A more accurate measure of phishing detection involves machine-learning models that use specific features to automate the entire process of classifying an attempt as either a phishing attempt or a safe attempt. This paper highlights several models that may give safe results and help eradicate blockchain phishing attempts. Full article
(This article belongs to the Special Issue Deep Learning Techniques for Computer Security Problems)
Show Figures

Figure 1

Previous Issue
Back to TopTop