Special Issue "Machine Learning and Artificial Intelligence in Engineering Applications"

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Algorithms for Multidisciplinary Applications".

Deadline for manuscript submissions: 15 March 2024 | Viewed by 4393

Special Issue Editors

MicroComputer Systems Laboratory Team, Department of Mathematics, University of Ioannina, 45110 Ioannina, Greece
Interests: distributed systems; sensor networks; IoT; embedded systems; IoT protocols and algorithms; big data; cluster-based systems; databases; load balancing algorithms; middleware protocol design; network mod
Systems Reliability and Industrial Safety Laboratory, Institute for Nuclear and Radiological Sciences, Energy, Technology and Safety, National Center for Scientific Research “DEMOKRITOS”, 15310 Athens, Greece
Interests: human reliability; quantitative risk assessment; risk management; accident analysis

Special Issue Information

Dear Colleagues,

Artificial intelligence is at our doorstep, with machine learning services and applications incorporated into industrial, agricultural, energy, financial, healthcare, manufacturing, transportation, and logistics systems. These technological capabilities are bringing about tremendous changes worldwide, boosting the economy, increasing productivity, and providing new opportunities. Moreover, all applications now rely heavily on data; as a result, information has become an essential commodity. Furthermore, the development of artificial intelligence and deep learning models, together with ever-increasing human–machine interaction in everyday applications, is a crucial aspect of the next Industrial Revolution.

The rapid deployment of the Internet of Things (IoT) and the concentration of big data in the cloud lead to an ever-increasing volume of information, which calls for new intelligent algorithms, protocols, and processes. The growth of AI, with the incorporation of machine learning and deep learning in engineering applications, has enabled developers to create machines that can carry out complex manufacturing tasks. The ultimate goal is to develop systems that can learn and improve without human intervention.

Many engineering systems and applications will benefit from such unsupervised intelligent processes. In addition, natural language processing capabilities and the extensive exploitation of neural networks will provide new human–machine interactions for robotics, agriculture, process and manufacturing, and the transportation industry, while further promoting the extensive use of augmented, virtual, and mixed reality applications.

This Special Issue aims to showcase new distributed or cloud-based engineering applications that involve smart algorithms and services targeting holistic, innovative, and sustainable systems. We encourage contributors to publish work related to intelligent information systems, decision support systems, incident response systems, distributed data collection processes, and deep learning/machine learning architectures and algorithms provided as a service, associated with (but not limited to):

  • Machine learning and deep learning algorithms, services, and processes for logistics, manufacturing, industrial, and safety applications;
  • Smart cities and smart home automation services and applications;
  • Smart transportation systems and services;
  • Smart medical systems and services;
  • Smart agricultural decision support systems and services;
  • Human–machine interactive and cognitive services;
  • Augmented reality, virtual reality, and mixed reality systems, services, and applications;
  • Internet of Things, smart algorithms, provided as services over distributed and cloud-based decision support systems;
  • Design, evaluation, and implementation of novel Internet of Things solutions incorporating machine learning, deep learning, and data mining logic.

We look forward to receiving your contributions.

Dr. Sotirios Kontogiannis
Dr. Myrto Konstantinidou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent information systems
  • intelligent engineering applications
  • machine learning and deep learning algorithms and applications
  • distributed information systems
  • Industry 5.0
  • cloud-based decision support systems and services
  • IoT
  • machine learning and deep learning services and applications

Published Papers (5 papers)


Research

Article
Comparison of Different Radial Basis Function Networks for the Electrical Impedance Tomography (EIT) Inverse Problem
Algorithms 2023, 16(10), 461; https://doi.org/10.3390/a16100461 - 28 Sep 2023
Viewed by 223
Abstract
This paper aims to determine whether regularization improves image reconstruction in electrical impedance tomography (EIT) using a radial basis function network. The primary purpose is to investigate the effect of regularization on the estimation of the network parameters of the radial basis function network used to solve the inverse problem in EIT. Our approach to studying the efficacy of the regularized radial basis network is to compare the performance of several different regularizations, namely Tikhonov, Lasso, and Elastic Net regularization. We vary the network parameters, including fixed and variable widths for the Gaussians used in the network. We also perform a robustness study comparing the different regularizations. Our results include (1) determining the optimal number of radial basis functions in the network to avoid overfitting; (2) a comparison of fixed versus variable Gaussian width, with or without regularization; (3) a comparison of image reconstruction with or without regularization, in particular, no regularization, Tikhonov, Lasso, and Elastic Net; (4) a comparison of both mean square and mean absolute error and the corresponding variance; and (5) a comparison of robustness, in particular, the performance of the different methods with respect to noise level. We conclude that the R² score can be used to determine the optimal number of radial basis functions. The fixed-width radial basis function network with regularization yields improved performance, and the fixed-width Gaussian with Tikhonov regularization performs particularly well. Regularization helps reconstruct images outside of the training data set; it may cause the quality of the reconstruction to deteriorate, but stability is much improved. In terms of robustness, the RBF networks with Lasso and Elastic Net appear very robust compared to Tikhonov. Full article
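The core fitting step the abstract describes — estimating an RBF network's output weights with Tikhonov (ridge) regularization — can be sketched as follows. This is not the paper's EIT implementation; the Gaussian width, regularization strength, and the closed-form ridge solution shown here are illustrative assumptions.

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    # Gaussian RBF features with a single fixed width
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf_tikhonov(X, y, centers, width, lam):
    # Tikhonov (ridge) solution: w = (Phi^T Phi + lam * I)^-1 Phi^T y
    Phi = rbf_design_matrix(X, centers, width)
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)
```

Lasso and Elastic Net variants replace the closed-form solve with an iterative solver, since the L1 penalty has no closed-form solution.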

Article
Using an Opportunity Matrix to Select Centers for RBF Neural Networks
Algorithms 2023, 16(10), 455; https://doi.org/10.3390/a16100455 - 23 Sep 2023
Viewed by 248
Abstract
When designed correctly, radial basis function (RBF) neural networks can approximate mathematical functions to any arbitrary degree of precision. Multilayer perceptron (MLP) neural networks are also universal function approximators, but RBF neural networks can often be trained several orders of magnitude more quickly than an MLP network with an equivalent level of function approximation capability. The primary challenge with designing a high-quality RBF neural network is selecting the best values for the network’s “centers”, which can be thought of as geometric locations within the input space. Traditionally, the locations for the RBF nodes’ centers are chosen either through random sampling of the training data or by using k-means clustering. The current paper proposes a new algorithm for selecting the locations of the centers by relying on a structure known as an “opportunity matrix”. The performance of the proposed algorithm is compared against that of the random sampling and k-means clustering methods using a large set of experiments involving both a real-world dataset from the steel industry and a variety of mathematical and statistical functions. The results indicate that the proposed opportunity matrix algorithm is almost always much better at selecting locations for an RBF network’s centers than either of the two traditional techniques, yielding RBF neural networks with superior function approximation capabilities. Full article
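For context, the two traditional center-selection baselines the paper compares against — random sampling of training points and k-means clustering — might look like this minimal NumPy sketch. The opportunity matrix algorithm itself is the paper's contribution and is not reproduced here; the function names and the plain Lloyd-style k-means loop are assumptions.

```python
import numpy as np

def centers_random(X, k, rng):
    # Baseline 1: sample k training points as the RBF centers
    idx = rng.choice(len(X), size=k, replace=False)
    return X[idx]

def centers_kmeans(X, k, rng, iters=20):
    # Baseline 2: run plain k-means and use the centroids as centers
    C = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                  # nearest centroid per point
        for j in range(k):
            pts = X[labels == j]
            if len(pts):                      # skip empty clusters
                C[j] = pts.mean(0)
    return C
```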

Article
An Aspect-Oriented Approach to Time-Constrained Strategies in Smart City IoT Applications
Algorithms 2023, 16(10), 454; https://doi.org/10.3390/a16100454 - 23 Sep 2023
Viewed by 261
Abstract
The Internet of Things (IoT) is growing rapidly in various domains, including smart city applications. In many cases, IoT data in smart city applications have time constraints in which they are relevant and acceptable to the task at hand—a window of validity (WoV). Existing algorithms, such as ex post facto adjustment, data offloading, fog computing, and blockchain applications, generally focus on managing the time-validity of data. In this paper, we consider that the functional components of the IoT devices’ decision-making strategies themselves may also be defined in terms of a WoV. We propose an aspect-oriented mechanism to supervise the execution of the IoT device’s strategy, manage the WoV constraints, and resolve invalidated functional components through communication in the multi-agent system. The applicability of our proposed approach is considered with respect to the improved cost, service life, and environmental outcomes for IoT devices in a smart cities context. Full article
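The idea of supervising a strategy's functional components with a window of validity (WoV) can be illustrated with a small aspect-like wrapper that rejects stale readings before the wrapped logic runs. This is a hypothetical sketch, not the authors' mechanism; the `StaleDataError` name and the reading format are assumptions.

```python
import time
from functools import wraps

class StaleDataError(Exception):
    """Raised when a reading falls outside its window of validity."""

def within_window(max_age_s):
    # Aspect-like decorator: enforce the WoV without touching the core logic
    def decorator(fn):
        @wraps(fn)
        def wrapper(reading, *args, **kwargs):
            if time.time() - reading["timestamp"] > max_age_s:
                raise StaleDataError("reading outside its window of validity")
            return fn(reading, *args, **kwargs)
        return wrapper
    return decorator
```

Because the check lives in the decorator, the same WoV policy can be applied to any functional component of the device's strategy without modifying it.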

Article
Deep Learning Stranded Neural Network Model for the Detection of Sensory Triggered Events
Algorithms 2023, 16(4), 202; https://doi.org/10.3390/a16040202 - 10 Apr 2023
Viewed by 1148
Abstract
Maintenance processes are of high importance for industrial plants. They have to be performed regularly and without interruption. To assist maintenance personnel, industrial sensors monitored by distributed control systems observe and collect several machinery parameters in the cloud, and machine learning algorithms then try to match patterns and classify abnormal behaviors. This paper presents a new deep learning model called stranded-NN. The model uses a set of NN models of variable layer depths depending on the input. This way, the proposed model can classify different types of emergencies occurring over different time intervals: real-time, close-to-real-time, or periodic. The proposed stranded-NN model has been compared against existing fixed-depth MLPs and LSTM networks used by the industry. Experimentation has shown that the stranded-NN model can outperform fixed-depth MLPs by 15–21% in terms of accuracy for real-time events and by at least 10–14% for close-to-real-time events. Regarding LSTMs of the same memory depth as the NN strand input, the stranded-NN presents similar accuracy for a specific number of strands. Nevertheless, the stranded-NN model's ability to maintain multiple trained strands makes it a more flexible classification and prediction solution than its LSTM counterpart, as well as faster at training and classification. Full article
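One way to picture the stranded-NN dispatch idea — maintaining several trained strands and selecting one based on the input's time window — is the following hypothetical sketch. The strand-selection rule (keying strands by input length) and the model interface are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

class StrandedModel:
    """Sketch: one trained 'strand' per input window size, chosen at inference."""

    def __init__(self, strands):
        # strands: {input_length: trained model callable}
        self.strands = strands

    def predict(self, x):
        x = np.asarray(x)
        # Dispatch to the strand whose depth matches this input's window size,
        # e.g. a short window for real-time events, a long one for periodic checks.
        strand = self.strands[len(x)]
        return strand(x)
```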

Article
Hyperparameter Optimization Using Successive Halving with Greedy Cross Validation
Algorithms 2023, 16(1), 17; https://doi.org/10.3390/a16010017 - 27 Dec 2022
Cited by 2 | Viewed by 1580
Abstract
Training and evaluating the performance of many competing Artificial Intelligence (AI)/Machine Learning (ML) models can be very time-consuming and expensive. Furthermore, the costs associated with this hyperparameter optimization task grow exponentially when cross validation is used during the model selection process. Finding ways of quickly identifying high-performing models when conducting hyperparameter optimization with cross validation is hence an important problem in AI/ML research. Among the proposed methods of accelerating hyperparameter optimization, successive halving has emerged as a popular, state-of-the-art early stopping algorithm. Concurrently, recent work on cross validation has yielded a greedy cross validation algorithm that prioritizes the most promising candidate AI/ML models during the early stages of the model selection process. The current paper proposes a greedy successive halving algorithm in which greedy cross validation is integrated into successive halving. An extensive series of experiments is then conducted to evaluate the comparative performance of the proposed greedy successive halving algorithm. The results show that the quality of the AI/ML models selected by the greedy successive halving algorithm is statistically identical to those selected by standard successive halving, but that greedy successive halving is typically more than 3.5 times faster than standard successive halving. Full article
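Standard successive halving, the baseline the paper builds on, can be sketched in a few lines: evaluate every candidate on a small budget, keep the best fraction, and repeat with a larger budget. The greedy cross-validation integration is the paper's contribution and is not shown; the `score` interface and the halving factor `eta` are assumptions.

```python
def successive_halving(candidates, score, budget0=1, eta=2):
    # score(candidate, budget) -> float, higher is better
    pool, budget = list(candidates), budget0
    while len(pool) > 1:
        # Evaluate the surviving candidates at the current budget
        ranked = sorted(pool, key=lambda c: score(c, budget), reverse=True)
        # Keep the top 1/eta fraction, then grow the budget by eta
        pool = ranked[: max(1, len(ranked) // eta)]
        budget *= eta
    return pool[0]
```

With eta = 2, a pool of n candidates needs about log2(n) rounds, so most of the evaluation budget is concentrated on the few candidates that survive the early rounds.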
