Advances in Artificial Intelligence Engineering

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 March 2025 | Viewed by 15,950

Special Issue Editors


Guest Editor
1. Entwicklungsprofessur (equivalent to a junior professorship), Faculty of Engineering and Computer Science, University of Applied Sciences Osnabrueck, Osnabrueck, Germany
2. Functional Safety Engineer, Innotec GmbH, Erlenweg 12, 49324 Melle, Germany
Interests: software engineering and quality assurance (with focus on AI); functional safety software engineering; model-based software development; embedded software engineering

Guest Editor
Software Engineering Research Group, Institute of Computer Science, Osnabrück University, Wachsbleiche 27, 49090 Osnabrück, Germany
Interests: quality assurance; automation in software development; embedded software engineering

Special Issue Information

Dear Colleagues,

Over the last decades, Software Engineering (SE) has had a profound impact on a wide range of fields: the classic software industry, the transformation towards digitization and Industry 4.0, IT-heavy service sectors such as banks, insurance companies, and telecom providers, specialized fields such as embedded software, and industrial and scientific research departments and institutes. Artificial Intelligence (AI) and its sub-area Machine Learning (ML), in turn, are beginning to have a transformative impact on almost every major industry. In this context, with recent advances in ML, there is widespread interest in integrating AI capabilities with software engineering. The convergence of AI and SE can give rise to collaboration in two main ways: (a) AI-guided SE and (b) SE for AI.

SE can benefit from the integration of AI-related technologies (for reasoning, problem solving, planning, and learning, among others) to increase its power, flexibility, user experience, and quality. For instance, even simple AI/ML methods can remove many inefficiencies from the day-to-day work of software developers. It is therefore intuitive that AI-powered SE should significantly increase the benefits and reduce the costs of adopting SE artefacts. In the case of SE for AI, AI development can benefit primarily from the integration of established SE concepts and practices.

This Special Issue is aimed at the opportunities and challenges arising from the integration of AI and SE in both directions, (a) and (b). Topics include (but are not limited to):

  • AI planning applied to the SE development process;
  • Self-adapting code generators;
  • AI-based code analyzers for detecting code smells and anti-patterns;
  • Machine learning of models, meta-models, and model transformations through search-based approaches in model-based software engineering (MBSE);
  • AI-based assistants such as bots for SE tools;
  • AI assistants for human-in-the-loop modeling, such as conversational virtual assistants for dialog-based optimization of SE tasks;
  • AI support for various stages of the SE development process;
  • AI-based and automated natural language processing (NLP) (e.g., applied to various stages of any SE development process and model-based development);
  • Application of AI in semantic reasoning platforms;
  • Code recommendation engines;
  • ML-based automated code review and assessing the risk of a code change (a minimal sketch follows this list);
  • AI techniques for data, process and model mining and categorization;
  • Challenges in the choice, evaluation, and adaptation of AI techniques to SE, such that they provide a compelling improvement to current systems throughout the entire software development process;
  • Automated frameworks and supporting environments for ML workflows and ML processes;
  • Model-driven processes for AI systems development and testing;
  • Automatic code generators for AI libraries;
  • Domain-specific modeling for ML;
  • Case studies of applications of AI/ML in SE and vice versa.
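
As an illustration of one topic above, ML-based assessment of the risk of a code change, the following minimal Python sketch trains a logistic-regression classifier over simple change metrics. The features, data, and labels are illustrative assumptions, not a reference implementation.

```python
# Minimal sketch: score the defect risk of a code change from simple metrics.
# All features and labels are synthetic placeholders for real change history.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical features per change: lines added, lines deleted,
# files touched, and the author's prior commits to the touched files.
X = rng.integers(0, 500, size=(200, 4)).astype(float)
# Toy labels standing in for defect history: large changes are riskier.
y = (X[:, 0] + X[:, 1] > 400).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
new_change = np.array([[120.0, 30.0, 5.0, 2.0]])
print(f"estimated defect risk: {model.predict_proba(new_change)[0, 1]:.2f}")
```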

Dr. Padma Iyenghar
Prof. Dr. Elke Pulvermüller
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • software engineering
  • application of AI in software engineering
  • SE for AI
  • AI-based assistants
  • chatbots
  • AI assistant

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (12 papers)


Research


14 pages, 1810 KiB  
Article
Efficient Speech Signal Dimensionality Reduction Using Complex-Valued Techniques
by Sungkyun Ko and Minho Park
Electronics 2024, 13(15), 3046; https://doi.org/10.3390/electronics13153046 - 1 Aug 2024
Viewed by 449
Abstract
In this study, we propose the CVMFCC-DR (Complex-Valued Mel-Frequency Cepstral Coefficients Dimensionality Reduction) algorithm as an efficient method for reducing the dimensionality of speech signals. By utilizing the complex-valued MFCC technique, which considers both real and imaginary components, our algorithm enables dimensionality reduction without information loss while decreasing computational costs. The efficacy of the proposed algorithm is validated through experiments which demonstrate its effectiveness in building a speech recognition model using a complex-valued neural network. Additionally, a complex-valued softmax interpretation method for complex numbers is introduced. The experimental results indicate that the approach yields enhanced performance compared to traditional MFCC-based techniques, thereby highlighting its potential in the field of speech recognition.
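
As a toy illustration of one idea mentioned in this abstract, the sketch below implements a magnitude-based softmax over complex-valued logits, one common way to map complex activations to class probabilities; the paper's own complex-valued softmax interpretation may differ.

```python
# Magnitude-based softmax over complex logits -- an illustrative stand-in,
# not the authors' formulation.
import numpy as np

def complex_softmax(z: np.ndarray) -> np.ndarray:
    """Softmax over the magnitudes of complex logits."""
    m = np.abs(z)
    e = np.exp(m - m.max())  # subtract max for numerical stability
    return e / e.sum()

logits = np.array([1.0 + 2.0j, 0.5 - 0.5j, -1.0 + 0.0j])
print(complex_softmax(logits))  # real-valued probabilities summing to 1
```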

25 pages, 1652 KiB  
Article
Toward Safer Roads: Predicting the Severity of Traffic Accidents in Montreal Using Machine Learning
by Bappa Muktar and Vincent Fono
Electronics 2024, 13(15), 3036; https://doi.org/10.3390/electronics13153036 - 1 Aug 2024
Viewed by 974
Abstract
Traffic accidents are among the most common causes of death worldwide. According to statistics from the World Health Organization (WHO), 50 million people are involved in traffic accidents every year. Canada, particularly Montreal, is not immune to this problem. Data from the Société de l’Assurance Automobile du Québec (SAAQ) show that there were 392 deaths on Québec roads in 2022, 38 of them related to the city of Montreal. This value represents an increase of 29.3% for the city of Montreal compared with the average for the years 2017 to 2021. In this context, it is important to take concrete measures to improve traffic safety in the city of Montreal. In this article, we present a web-based solution based on machine learning that predicts the severity of traffic accidents in Montreal. This solution uses a dataset of traffic accidents that occurred in Montreal between 2012 and 2021. By predicting the severity of accidents, our approach aims to identify key factors that influence whether an accident is serious or not. Understanding these factors can help authorities implement targeted interventions to prevent severe accidents and allocate resources more effectively during emergency responses. Classification algorithms such as eXtreme Gradient Boosting (XGBoost), Categorical Boosting (CatBoost), Random Forest (RF), and Gradient Boosting (GB) were used to develop the prediction model. Performance metrics such as precision, recall, F1 score, and accuracy were used to evaluate the prediction model. The performance analysis shows an excellent accuracy of 96% for the prediction model based on the XGBoost classifier. The other models (CatBoost, RF, GB) achieved 95%, 93%, and 89% accuracy, respectively. The prediction model based on the XGBoost classifier was deployed using a client–server web application managed by Swagger-UI, Angular, and the Flask Python framework. This study makes significant contributions to the field by employing an ensemble of supervised machine learning algorithms, achieving a high prediction accuracy, and developing a real-time prediction web application. This application enables quicker and more effective responses from emergency services, potentially reducing the impact of severe accidents and improving overall traffic safety.
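
The modeling step described here follows a standard supervised-classification recipe; the hedged sketch below shows the XGBoost variant on synthetic placeholder data (the real study uses engineered features from Montreal accident records).

```python
# Minimal severity-classification sketch with XGBoost on placeholder data.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 8))        # placeholder accident features
y = rng.integers(0, 2, size=1000)     # placeholder severity labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```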

17 pages, 8571 KiB  
Article
Robotic Manipulator in Dynamic Environment with SAC Combing Attention Mechanism and LSTM
by Xinghong Kuang and Sucheng Zhou
Electronics 2024, 13(10), 1969; https://doi.org/10.3390/electronics13101969 - 17 May 2024
Viewed by 884
Abstract
The motion planning task of the manipulator in a dynamic environment is relatively complex. This paper uses the improved Soft Actor Critic Algorithm (SAC) with the maximum entropy advantage as the benchmark algorithm to implement the motion planning of the manipulator. In order to solve the problem of insufficient robustness in dynamic environments and difficulty in adapting to environmental changes, it is proposed to combine Euclidean distance and distance difference to improve the accuracy of approaching the target. In addition, in order to solve the problem of non-stability and uncertainty of the input state in the dynamic environment, which leads to the inability to fully express the state information, we propose an attention network fused with Long Short-Term Memory (LSTM) to improve the SAC algorithm. We conducted simulation experiments and present the experimental results. The results prove that the use of fused neural network functions improved the success rate of approaching the target and improved the SAC algorithm at the same time, which improved the convergence speed, success rate, and avoidance capabilities of the algorithm.
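
The combination of Euclidean distance and distance difference described above amounts to a shaped reward; a minimal sketch follows, with illustrative weights rather than the paper's exact formulation.

```python
# Shaped reward sketch: reward closeness to the target plus progress per step.
# Weights w_dist and w_delta are illustrative assumptions.
import numpy as np

def shaped_reward(ee_pos, target_pos, prev_dist, w_dist=1.0, w_delta=10.0):
    dist = np.linalg.norm(ee_pos - target_pos)  # Euclidean distance term
    delta = prev_dist - dist                    # distance difference (progress)
    return -w_dist * dist + w_delta * delta, dist

r, d = shaped_reward(np.array([0.2, 0.1, 0.4]), np.array([0.5, 0.0, 0.5]),
                     prev_dist=0.45)
print(r, d)
```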

23 pages, 608 KiB  
Article
AI-Driven Refactoring: A Pipeline for Identifying and Correcting Data Clumps in Git Repositories
by Nils Baumgartner, Padma Iyenghar, Timo Schoemaker and Elke Pulvermüller
Electronics 2024, 13(9), 1644; https://doi.org/10.3390/electronics13091644 - 25 Apr 2024
Viewed by 1238
Abstract
Data clumps, groups of variables that repeatedly appear together across different parts of a software system, are indicative of poor code structure and can lead to potential issues such as maintenance challenges, testing complexity, and scalability concerns, among others. Addressing this, our study introduces an innovative AI-driven pipeline specifically designed for the refactoring of data clumps in software repositories. This pipeline leverages the capabilities of Large Language Models (LLM), such as ChatGPT, to automate the detection and resolution of data clumps, thereby enhancing code quality and maintainability. In developing this pipeline, we have taken into consideration the new European Union (EU)-Artificial Intelligence (AI) Act, ensuring that our pipeline complies with the latest regulatory requirements and ethical standards for use of AI in software development by outsourcing decisions to a human in the loop. Preliminary experiments utilizing ChatGPT were conducted to validate the effectiveness and efficiency of our approach. These tests demonstrate promising results in identifying and refactoring data clumps, but also the challenges using LLMs.
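
For readers unfamiliar with the smell being targeted, the sketch below is a naive, rule-based stand-in for the detection stage: it flags groups of three or more parameter names recurring across function signatures. The actual pipeline delegates detection and refactoring to an LLM with a human in the loop.

```python
# Naive data-clump heuristic: parameter groups shared by several functions.
import ast
from itertools import combinations

source = """
def draw(x, y, color, width): ...
def move(x, y, color, speed): ...
def erase(x, y, color): ...
"""

params = {
    node.name: [a.arg for a in node.args.args]
    for node in ast.walk(ast.parse(source))
    if isinstance(node, ast.FunctionDef)
}

for (f1, p1), (f2, p2) in combinations(params.items(), 2):
    shared = set(p1) & set(p2)
    if len(shared) >= 3:  # 3+ recurring parameters suggest a data clump
        print(f"possible data clump {sorted(shared)} in {f1} and {f2}")
```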

18 pages, 1359 KiB  
Article
Utilizing Latent Diffusion Model to Accelerate Sampling Speed and Enhance Text Generation Quality
by Chenyang Li, Long Zhang and Qiusheng Zheng
Electronics 2024, 13(6), 1093; https://doi.org/10.3390/electronics13061093 - 15 Mar 2024
Viewed by 1220
Abstract
Diffusion models have achieved tremendous success in modeling continuous data modalities, such as images, audio, and video, yet their application in discrete data domains (e.g., natural language) has been limited. Existing methods primarily represent discrete text in a continuous diffusion space, incurring significant computational overhead during training and resulting in slow sampling speeds. This paper introduces LaDiffuSeq, a latent diffusion-based text generation model incorporating an encoder–decoder structure. Specifically, it first employs a pretrained encoder to map sequences composed of attributes and corresponding text into a low-dimensional latent vector space. Then, without the guidance of a classifier, it performs the diffusion process for the sequence’s corresponding latent space. Finally, a pretrained decoder is used to decode the newly generated latent vectors, producing target texts that are relevant to themes and possess multiple emotional granularities. Compared to the benchmark model, DiffuSeq, this model achieves BERTScore improvements of 0.105 and 0.009 on two public real-world datasets (ChnSentiCorp and a debate dataset), respectively; perplexity falls by 3.333 and 4.562; and it effectively quadruples the text generation sampling speed.
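
The latent diffusion process referred to above builds on the standard forward (noising) step; below is a minimal numpy sketch with an illustrative linear schedule, not the paper's implementation.

```python
# Standard DDPM forward step in latent space:
#   z_t = sqrt(alpha_bar_t) * z_0 + sqrt(1 - alpha_bar_t) * eps
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # illustrative linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)       # cumulative signal fraction

z0 = rng.normal(size=(1, 64))             # placeholder clean latent (encoder output)
t = 500
eps = rng.normal(size=z0.shape)
zt = np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps
print(zt.shape, alpha_bar[t])             # the model learns to predict eps
```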

19 pages, 2039 KiB  
Article
PatchRLNet: A Framework Combining a Vision Transformer and Reinforcement Learning for The Separation of a PTFE Emulsion and Paraffin
by Xinxin Wang, Lei Wu, Bingyu Hu, Xinduoji Yang, Xianghui Fan, Meng Liu, Kai Cheng, Song Wang, Jianqiang Miao and Haigang Gong
Electronics 2024, 13(2), 339; https://doi.org/10.3390/electronics13020339 - 12 Jan 2024
Viewed by 1021
Abstract
During the production of a PolyTetraFluoroEthylene (PTFE) emulsion, it is crucial to detect the separation between the PTFE emulsion and liquid paraffin in order to purify the PTFE emulsion and facilitate subsequent polymerization. However, the current practice heavily relies on visual inspections conducted by on-site personnel, resulting in not only low efficiency and accuracy, but also posing potential threats to personnel safety. The incorporation of artificial intelligence for the automated detection of paraffin separation holds the promise of significantly improving detection accuracy and mitigating potential risks to personnel. Thus, we propose an automated detection framework named PatchRLNet, which leverages a combination of a vision transformer and reinforcement learning. Reinforcement learning is integrated into the embedding layer of the vision transformer in PatchRLNet, providing attention scores for each patch. This strategic integration compels the model to allocate greater attention to the essential features of the target, effectively filtering out ambient environmental factors and background noise. Building upon this foundation, we introduce a multimodal integration mechanism to further enhance the prediction accuracy of the model. To validate the efficacy of our proposed framework, we conducted performance testing using authentic data from China’s largest PTFE material production base. The results are compelling, demonstrating that the framework achieved an impressive accuracy rate of over 99% on the test set. This underscores its significant practical application value. To the best of our knowledge, this represents the first instance of automated detection applied to the separation of the PTFE emulsion and paraffin.
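
A minimal sketch of the patch re-weighting idea described above: a per-patch score re-weights ViT patch embeddings before the encoder. The random scores below are only a placeholder for the reinforcement-learning policy.

```python
# Sketch: emphasize informative ViT patches via softmax-normalized scores.
import numpy as np

rng = np.random.default_rng(1)
patches = rng.normal(size=(196, 768))   # 14x14 patch embeddings, dim 768

scores = rng.normal(size=196)           # placeholder for RL policy output
weights = np.exp(scores) / np.exp(scores).sum()  # softmax attention scores

weighted = patches * weights[:, None]   # high-scoring patches dominate
print(weighted.shape)                   # would be fed to the transformer encoder
```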

15 pages, 2160 KiB  
Article
Safe and Trustful AI for Closed-Loop Control Systems
by Julius Schöning and Hans-Jürgen Pfisterer
Electronics 2023, 12(16), 3489; https://doi.org/10.3390/electronics12163489 - 17 Aug 2023
Cited by 4 | Viewed by 2764
Abstract
In modern times, closed-loop control systems (CLCSs) play a prominent role in a wide application range, from production machinery via automated vehicles to robots. CLCSs actively manipulate the actual values of a process to match predetermined setpoints, typically in real time and with remarkable precision. However, the development, modeling, tuning, and optimization of CLCSs barely exploit the potential of artificial intelligence (AI). This paper explores novel opportunities and research directions in CLCS engineering, presenting potential designs and methodologies incorporating AI. Combining these opportunities and directions makes it evident that employing AI in developing and implementing CLCSs is indeed feasible. Integrating AI into CLCS development or AI directly within CLCSs can lead to a significant improvement in stakeholder confidence. Integrating AI in CLCSs raises the question: How can AI in CLCSs be trusted so that its promising capabilities can be used safely? One does not trust AI in CLCSs due to its unknowable nature caused by its extensive set of parameters that defy complete testing. Consequently, developers working on AI-based CLCSs must be able to rate the impact of the trainable parameters on the system accurately. By following this path, this paper highlights two key aspects as essential research directions towards safe AI-based CLCSs: (I) the identification and elimination of unproductive layers in artificial neural networks (ANNs) for reducing the number of trainable parameters without influencing the overall outcome, and (II) the utilization of the solution space of an ANN to define the safety-critical scenarios of an AI-based CLCS.
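
Research direction (I) can be made concrete with a simple screening heuristic; the sketch below flags layers whose unit activations barely vary across inputs as candidates for elimination. The variance criterion and threshold are assumptions for illustration, not the paper's method.

```python
# Screen for "unproductive" layers: near-constant activations across inputs.
import numpy as np

rng = np.random.default_rng(0)
# Placeholder per-layer activations: (num_inputs, num_units) for each layer.
activations = {f"layer_{i}": rng.normal(scale=s, size=(256, 64))
               for i, s in enumerate([1.0, 0.9, 1e-4, 1.1])}

THRESHOLD = 1e-3  # illustrative cutoff
for name, act in activations.items():
    mean_var = act.var(axis=0).mean()  # average unit variance over inputs
    if mean_var < THRESHOLD:
        print(f"{name}: candidate unproductive layer (var={mean_var:.2e})")
```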

28 pages, 1230 KiB  
Article
Robust Optimization with Interval Uncertainties Using Hybrid State Transition Algorithm
by Haochuan Zhang, Jie Han, Xiaojun Zhou and Yuxuan Zheng
Electronics 2023, 12(14), 3035; https://doi.org/10.3390/electronics12143035 - 11 Jul 2023
Cited by 1 | Viewed by 1215
Abstract
Robust optimization is concerned with finding an optimal solution that is insensitive to uncertainties and has been widely used in solving real-world optimization problems. However, most robust optimization methods suffer from high computational costs and poor convergence. To alleviate the above problems, an improved robust optimization algorithm is proposed. First, to reduce the computational cost, the second-order Taylor series surrogate model is used to approximate the robustness indices. Second, to strengthen the convergence, the state transition algorithm is studied to explore the whole search space for candidate solutions, while sequential quadratic programming is adopted to exploit the local area. Third, to balance the robustness and optimality of candidate solutions, a preference-based selection mechanism is investigated which effectively determines the promising solution. The proposed robust optimization method is applied to obtain the optimal solutions of seven examples that are subject to decision variables and parameter uncertainties. Comparative studies with other robust optimization algorithms (robust genetic algorithm, Kriging metamodel-assisted robust optimization method, etc.) show that the proposed method can obtain accurate and robust solutions with less computational cost.
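
The second-order Taylor surrogate mentioned above admits a compact worst-case bound over an interval box |Δx_i| ≤ r_i: f(x) + |∇f|·r + ½ rᵀ|H|r. A sketch using finite differences (the paper's exact robustness indices may differ):

```python
# Second-order Taylor upper bound on the worst case over an interval box.
import numpy as np

def robust_upper_bound(f, x, r, h=1e-5):
    x = np.asarray(x, float); n = x.size
    g = np.array([(f(x + h*e) - f(x - h*e)) / (2*h) for e in np.eye(n)])
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i]*h, np.eye(n)[j]*h
            H[i, j] = (f(x+ei+ej) - f(x+ei-ej) - f(x-ei+ej) + f(x-ei-ej)) / (4*h*h)
    # Bound linear term by |grad|.r and quadratic term by the absolute Hessian.
    return f(x) + np.abs(g) @ r + 0.5 * r @ np.abs(H) @ r

f = lambda x: (x[0] - 1)**2 + 0.5 * x[0] * x[1] + x[1]**2  # toy objective
print(robust_upper_bound(f, x=[1.0, 0.0], r=np.array([0.1, 0.1])))
```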

17 pages, 2280 KiB  
Article
An Enhanced Method on Transformer-Based Model for ONE2SEQ Keyphrase Generation
by Lingyun Shen and Xiaoqiu Le
Electronics 2023, 12(13), 2968; https://doi.org/10.3390/electronics12132968 - 5 Jul 2023
Viewed by 1213
Abstract
Keyphrase generation is a long-standing task in scientific literature retrieval. The Transformer-based model outperforms other baseline models in this challenge dramatically. In cross-domain keyphrase generation research, topic information plays a guiding role during generation, while in keyphrase generation for individual texts, titles can take over the role of topics and convey more semantic information. As a result, we proposed an enhanced model architecture named TAtrans. In this research, we investigate the advantages of title attention and a sequence code representing phrase order in the keyphrase sequence for improving Transformer-based keyphrase generation. We conduct experiments on five widely used English datasets specifically designed for keyphrase generation. Our method achieves a top-five F1 score that surpasses the Transformer-based model by 3.2% on KP20k. The results demonstrate that the proposed method outperforms all the previous models in predicting present keyphrases. To evaluate the performance of the proposed model on Chinese data, we construct a new Chinese abstract dataset called CNKIL, which contains a total of 54,546 records. The top-five F1 score for predicting present keyphrases on the CNKIL dataset exceeds that of the Transformer-based model by 2.2%. However, there is no significant improvement in the model’s performance in predicting absent keyphrases.
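
Title attention, as motivated above, can be read as scaled dot-product attention from decoder states over encoded title tokens; below is a minimal sketch with illustrative dimensions, not the TAtrans architecture itself.

```python
# Scaled dot-product attention over title tokens.
import numpy as np

def title_attention(queries, title_keys, title_values):
    d = queries.shape[-1]
    scores = queries @ title_keys.T / np.sqrt(d)           # (Lq, Lt)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # row-wise softmax
    return weights @ title_values                          # (Lq, d)

rng = np.random.default_rng(0)
q = rng.normal(size=(5, 64))   # decoder positions
k = rng.normal(size=(8, 64))   # encoded title tokens
print(title_attention(q, k, k).shape)
```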

18 pages, 7072 KiB  
Article
An Intelligent Detection Method for Obstacles in Agricultural Soil with FDTD Modeling and MSVMs
by Yuanhong Li, Congyue Wang, Chaofeng Wang, Yangfan Luo and Yubin Lan
Electronics 2023, 12(11), 2447; https://doi.org/10.3390/electronics12112447 - 29 May 2023
Viewed by 1107
Abstract
Unknown objects in agricultural soil can be important because they may impact the health and productivity of the soil and the crops that grow in it. Challenges in collecting soil samples present opportunities to utilize Ground Penetrating Radar (GPR) image processing and artificial intelligence techniques to identify and locate unidentified objects in agricultural soil, which are important for agriculture. In this study, we used finite-difference time-domain (FDTD) simulated models to gather training data and predict actual soil conditions. Additionally, we propose a multi-class support vector machine (MSVM) that employs a semi-supervised algorithm to classify buried object materials and locate their position in soil. Then, we extract echo signals from the electromagnetic features of the FDTD simulation model, including soil type, parabolic shape, location, and energy magnitude changes. Lastly, we compare the performance of various MSVM models with different kernel functions (linear, polynomial, and radial basis function). The results indicate that the FDTD-Yee method enhances the accuracy of simulating real agricultural soils. The average recognition rate of the hyperbola position formed by the GPR echo signal is 91.13%, which can be utilized to detect the position and material of unknown and underground objects. For material identification, the directed acyclic graph support vector machine (DAG-SVM) model attains the highest classification accuracy among all soil layers when using an RBF kernel. Overall, our study demonstrates that an artificial intelligence model trained with the FDTD forward simulation model can effectively detect objects in farmland soil.
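
A hedged sketch of the material-classification stage: an RBF-kernel SVM on echo-signal features. scikit-learn's SVC (one-vs-one decomposition) stands in here for the DAG-SVM variant the paper reports as most accurate, and the data are synthetic placeholders.

```python
# Multi-class RBF-kernel SVM on placeholder GPR echo features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 6))      # placeholder echo-signal features
y = rng.integers(0, 3, size=300)   # placeholder material classes

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random data
```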

18 pages, 2725 KiB  
Article
Multi-Strategy Fusion of Sine Cosine and Arithmetic Hybrid Optimization Algorithm
by Lisang Liu, Hui Xu, Bin Wang and Chengyang Ke
Electronics 2023, 12(9), 1961; https://doi.org/10.3390/electronics12091961 - 23 Apr 2023
Cited by 3 | Viewed by 1310
Abstract
To address the problems of slow convergence speed, low solution accuracy, and insufficient performance in solving complex functions in the search process of an arithmetic optimization algorithm (AOA), a multi-strategy improved arithmetic optimization algorithm (SSCAAOA) is suggested in this study. By enhancing the population’s initial distribution, optimizing the control parameters, integrating the sine cosine algorithm with improved parameters, and adding inertia weight coefficients and a population history information sharing mechanism to the PSO algorithm, the optimization accuracy and convergence speed of the AOA algorithm are improved. This increases the algorithm’s ability to perform a global search and prevents it from hitting a local optimum. Simulations comparing SSCAAOA with other optimization algorithms are used to examine its efficacy on benchmark test functions and engineering challenges. The analysis of the experimental data reveals that, when compared to other comparative algorithms, the improved algorithm presented in this paper has a convergence speed and accuracy that are tens of orders of magnitude faster for the unimodal function and significantly better for the multimodal function. Practical engineering tests also demonstrate that the revised approach performs better.
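
One ingredient named above, the sine-cosine position update with an inertia weight, looks roughly as follows on a toy sphere function; the parameters are illustrative, and the full SSCAAOA combines several further strategies.

```python
# Sine-cosine style update with a decaying inertia weight on the sphere function.
import numpy as np

rng = np.random.default_rng(0)
sphere = lambda x: np.sum(x**2, axis=-1)

pop = rng.uniform(-5, 5, size=(30, 2))
best = pop[np.argmin(sphere(pop))]

for t in range(200):
    w = 0.9 - 0.5 * t / 200                 # decaying inertia weight
    r1 = 2.0 * (1 - t / 200)                # shrinking step amplitude
    r2 = rng.uniform(0, 2*np.pi, pop.shape)
    r3, r4 = rng.uniform(0, 2, pop.shape), rng.uniform(size=(30, 1))
    step = np.where(r4 < 0.5, np.sin(r2), np.cos(r2))
    pop = w * pop + r1 * step * np.abs(r3 * best - pop)  # pull toward best
    best = min(best, pop[np.argmin(sphere(pop))], key=sphere)

print(sphere(best))  # should approach 0 on the sphere function
```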

Review


28 pages, 462 KiB  
Review
Explainable AI in Manufacturing and Industrial Cyber–Physical Systems: A Survey
by Sajad Moosavi, Maryam Farajzadeh-Zanjani, Roozbeh Razavi-Far, Vasile Palade and Mehrdad Saif
Electronics 2024, 13(17), 3497; https://doi.org/10.3390/electronics13173497 - 3 Sep 2024
Viewed by 993
Abstract
This survey explores applications of explainable artificial intelligence in manufacturing and industrial cyber–physical systems. As technological advancements continue to integrate artificial intelligence into critical infrastructure and industrial processes, the necessity for clear and understandable intelligent models becomes crucial. Explainable artificial intelligence techniques play a pivotal role in enhancing the trustworthiness and reliability of intelligent systems applied to industrial systems, ensuring human operators can comprehend and validate the decisions made by these intelligent systems. This review paper begins by highlighting the imperative need for explainable artificial intelligence, and, subsequently, classifies explainable artificial intelligence techniques systematically. The paper then investigates diverse explainable artificial-intelligence-related works within a wide range of industrial applications, such as predictive maintenance, cyber-security, fault detection and diagnosis, process control, product development, inventory management, and product quality. The study contributes to a comprehensive understanding of the diverse strategies and methodologies employed in integrating explainable artificial intelligence within industrial contexts.
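
As a concrete taste of the model-agnostic techniques such surveys cover, the sketch below computes permutation feature importance, which scores a feature by how much shuffling it degrades model performance; the data and model are placeholders for an industrial task such as fault detection.

```python
# Permutation feature importance: a simple model-agnostic explanation method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # only features 0 and 2 matter

model = RandomForestClassifier(n_estimators=100).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```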
