Advancements in Artificial Intelligence (AI) for Engineering Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 October 2025

Special Issue Editor


Dr. Hubert Zarzycki
Guest Editor
Faculty of Computer Science and Technology, Wroclaw University of Science and Technology, 50-370 Wroclaw, Poland
Interests: artificial intelligence; optimization techniques; fuzzy logic; natural language processing; reinforcement learning

Special Issue Information

Dear Colleagues,

We are excited to issue a call for papers for a Special Issue of Electronics focusing on the intersection of artificial intelligence (AI) with engineering applications. This Special Issue aims to explore the latest advancements and practical implementations within the realms of swarm intelligence, optimization, fuzzy logic, natural language processing (NLP), computer vision, and reinforcement learning.

The overall focus, scope, and purpose of this Special Issue are delineated as follows:

(a) Focus: The Special Issue will concentrate on exploring innovative methodologies and algorithms within the fields of swarm intelligence, optimization, fuzzy logic, NLP, computer vision, and reinforcement learning, with a specific emphasis on their application in engineering domains.

(b) Scope: We welcome contributions that present novel research findings, methodologies, case studies, and applications related to the aforementioned AI fields in engineering contexts. Topics of interest include, but are not limited to, the following:

  • Development of advanced AI-based optimization techniques for engineering problems;
  • Integration of fuzzy logic principles into engineering systems to enhance adaptability and decision-making;
  • Utilization of NLP for improving human–computer interaction in engineering applications;
  • Application of computer vision techniques for object recognition, image analysis, and visual perception in engineering tasks;
  • Implementation of reinforcement learning algorithms for autonomous decision-making and control in engineering systems.

(c) Purpose: The purpose of this Special Issue is to provide a platform for researchers to disseminate their latest findings, exchange insights, and foster collaborations for advancing the state of the art in AI-driven engineering applications. By showcasing practical implementations and case studies, we aim to bridge the gap between theoretical advancements in AI and their real-world applications in engineering.

This Special Issue will complement the existing literature by taking the following steps:

  • Offering a comprehensive overview of the latest trends and advancements in AI techniques as applied to engineering problems;
  • Providing in-depth discussions and analyses of practical case studies and applications, thereby offering valuable insights for both academia and industry practitioners;
  • Stimulating further research and innovation in this field by identifying emerging challenges and potential areas for future exploration.

We encourage researchers from academia, industry, and other relevant sectors to contribute their original research articles for publication in this Special Issue.

We eagerly anticipate your contributions to this Special Issue, which will advance our collective understanding of AI-driven engineering applications.

Dr. Hubert Zarzycki
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • swarm intelligence
  • optimization
  • fuzzy logic
  • natural language processing (NLP)
  • computer vision
  • reinforcement learning
  • engineering applications

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (6 papers)

Research

20 pages, 3646 KiB  
Article
A Meta-Learner Based on the Combination of Stacking Ensembles and a Mixture of Experts for Balancing Action Unit Recognition
by Andrew Sumsion and Dah-Jye Lee
Electronics 2025, 14(13), 2665; https://doi.org/10.3390/electronics14132665 - 30 Jun 2025
Abstract
Facial action units (AUs) are used throughout animation, clinical settings, and robotics. AU recognition usually works better for these downstream tasks when it achieves high performance across all AUs. Current facial AU recognition approaches tend to perform unevenly across all AUs. Among other potential reasons, one cause is their focus on improving the overall average F1 score, where good performance on a small number of AUs increases the overall average F1 score even with poor performance on other AUs. Building on our previous success, which achieved the highest average F1 score, this work focuses on improving its performance across all AUs to address this challenge. We propose a mixture of experts as the meta-learner to combine the outputs of an explicit stacking ensemble. For our ensemble, we use a heterogeneous, negative correlation, explicit stacking ensemble. We introduce an additional measurement called Borda ranking to better evaluate the overall performance across all AUs. As indicated by this additional metric, our method not only maintains the best overall average F1 score but also achieves the highest performance across all AUs on the BP4D and DISFA datasets. We also release a synthetic dataset as additional training data, the first with balanced AU labels.
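
The abstract introduces Borda ranking as an additional metric but does not spell out the point scheme. The sketch below shows one standard Borda-count aggregation of per-AU F1 scores; the number of methods and all score values are hypothetical.

```python
import numpy as np

def borda_ranking(f1_scores):
    """Borda-count aggregation of per-AU F1 scores.

    f1_scores: array of shape (n_methods, n_aus); each column holds
    every method's F1 score on one action unit.
    Returns total Borda points per method (higher is better overall).
    """
    n_methods, n_aus = f1_scores.shape
    points = np.zeros(n_methods)
    for au in range(n_aus):
        # Rank methods on this AU: the best F1 earns n_methods - 1
        # points and the worst earns 0 (a standard Borda assignment).
        order = np.argsort(f1_scores[:, au])  # ascending by F1
        for rank, method in enumerate(order):
            points[method] += rank
    return points

# Hypothetical scores: 3 methods evaluated on 5 AUs.
scores = np.array([
    [0.62, 0.55, 0.71, 0.48, 0.66],
    [0.60, 0.58, 0.69, 0.52, 0.64],
    [0.65, 0.50, 0.70, 0.45, 0.68],
])
print(borda_ranking(scores))  # the method with the most points is most consistent
```

Unlike the plain average F1, this aggregation rewards a method for being competitive on every AU rather than excellent on a few.
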
20 pages, 3502 KiB  
Article
Explainable AI Models for IoT-Based Shaft Power Prediction and Comprehensive Performance Monitoring
by Sotiris Zikas, Katerina Gkirtzou, Ioannis Filippopoulos, Dimitris Kalatzis, Theodor Panagiotakopoulos, Zoran Lajic, Dimitris Papathanasiou and Yiannis Kiouvrekis
Electronics 2025, 14(13), 2561; https://doi.org/10.3390/electronics14132561 - 24 Jun 2025
Abstract
This paper presents a comparative analysis of machine learning-based methods for predicting shaft power in ships, a key factor in optimizing ship performance. Accurate shaft power prediction facilitates efficient operations, reducing fuel consumption, emissions, and maintenance costs, aligning with environmental regulations and promoting sustainable maritime practices. The proposed approach evaluates three machine learning methods, analyzing 431 models to determine the most accurate and reliable option for VLCC tankers. XGBoost emerged as the top-performing model, delivering a 13% improvement in accuracy over traditional methods. Using the SHAP framework, key factors influencing shaft power predictions—such as GPS speed, draft, days from dry dock, and wave height—were identified, enhancing model transparency and decision-making clarity. This explainability fosters trust in the use of AI within marine engineering. The results demonstrate that machine learning can optimize maintenance scheduling by reducing unnecessary cleaning procedures, mitigating propulsion system wear, and improving reliability. By using predictive insights, ship operators can achieve better fuel efficiency, lower emissions, and cost savings. The study underscores the potential of explainable machine learning models as transformative tools for ship performance monitoring, supporting greener and more efficient maritime operations.
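
As a rough illustration of the pipeline described above, the following sketch fits an XGBoost regressor on synthetic data whose feature names mirror those in the abstract (GPS speed, draft, days from dry dock, wave height) and attributes its predictions with SHAP. The data generator, target formula, and hyperparameters are placeholders and do not reproduce the paper's 431-model study.

```python
import numpy as np
import pandas as pd
import xgboost as xgb
import shap

# Hypothetical voyage data: column names mirror the features named in
# the abstract; the real dataset and preprocessing are not public.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "gps_speed": rng.uniform(8, 16, 500),        # knots
    "draft": rng.uniform(8, 22, 500),            # metres
    "days_from_dry_dock": rng.uniform(0, 900, 500),
    "wave_height": rng.uniform(0, 5, 500),       # metres
})
# Toy target loosely following a cubic speed-power law plus fouling drift.
y = (0.8 * X["gps_speed"] ** 3 + 15 * X["draft"]
     + 0.5 * X["days_from_dry_dock"] + 80 * X["wave_height"])

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X, y)

# SHAP attributes each prediction to the input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Global importance: mean absolute SHAP value per feature.
print(pd.DataFrame(shap_values, columns=X.columns).abs().mean())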

30 pages, 1455 KiB  
Article
Automated Formative Feedback for Algorithm and Data Structure Self-Assessment
by Lourdes Araujo, Fernando Lopez-Ostenero, Laura Plaza and Juan Martinez-Romo
Electronics 2025, 14(5), 1034; https://doi.org/10.3390/electronics14051034 - 5 Mar 2025
Abstract
Self-evaluation empowers students to progress independently and adapt their pace according to their unique circumstances. A critical facet of self-assessment and personalized learning lies in furnishing learners with formative feedback. This feedback, dispensed following their responses to self-assessment questions, constitutes a pivotal component of formative assessment systems. We hypothesize that it is possible to generate explanations that are useful as formative feedback using different techniques depending on the type of self-assessment question under consideration. This study focuses on a subject taught in a computer science program at a Spanish distance learning university. Specifically, it delves into advanced data structures and algorithmic frameworks, which serve as overarching principles for addressing complex problems. The generation of these explanatory resources hinges on the specific nature of the question at hand, whether theoretical, practical, related to computational cost, or focused on selecting optimal algorithmic approaches. Our work encompasses a thorough analysis of each question type, coupled with tailored solutions for each scenario. To automate this process as much as possible, we leverage natural language processing techniques, incorporating advanced methods of semantic similarity. The results of the assessment of the feedback generated for a subset of theoretical questions validate the effectiveness of the proposed methods, allowing us to seamlessly integrate this feedback into the self-assessment system. According to a survey, students found the resulting tool highly useful.
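
The abstract does not name the semantic-similarity machinery used. A minimal sketch of one common approach is shown below: embedding the student answer and the stored explanations with a sentence transformer and returning the closest match by cosine similarity. The model name and all example texts are assumptions.

```python
from sentence_transformers import SentenceTransformer, util

# Reference explanations prepared by an instructor (hypothetical content).
explanations = [
    "A heap supports insertion and extract-min in O(log n) time.",
    "Dynamic programming stores subproblem results to avoid recomputation.",
    "Quicksort's worst case is O(n^2) when pivots are consistently poor.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # one common embedding model
corpus_emb = model.encode(explanations, convert_to_tensor=True)

def best_feedback(student_answer: str) -> str:
    """Return the stored explanation semantically closest to the answer."""
    query_emb = model.encode(student_answer, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, corpus_emb)[0]  # cosine similarities
    return explanations[int(scores.argmax())]

print(best_feedback("Memoization keeps earlier results so we do not repeat work."))
```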

20 pages, 5610 KiB  
Article
Graph Neural Network (GNN) for Joint Detection–Decoder MAP–LDPC in Bit-Patterned Media Recording Systems
by Thien An Nguyen and Jaejin Lee
Electronics 2024, 13(23), 4811; https://doi.org/10.3390/electronics13234811 - 5 Dec 2024
Cited by 3
Abstract
With its high area density, bit-patterned media recording (BPMR) is emerging as a leading technology for next-generation storage systems. However, as area density increases, magnetic islands are positioned closer together, causing significant two-dimensional (2D) interference. To address this, detection methods are used to interpret the received signal and mitigate 2D interference. Recently, the maximum a posteriori (MAP) detection algorithm has shown promise in improving BPMR performance, though it requires extrinsic information to effectively reduce interference. In this paper, to solve the 2D interference and improve the performance of BPMR systems, a model using low-density parity-check (LDPC) coding was introduced to supply the MAP detector with the needed extrinsic information, enhancing detection in a joint decoding model we call MAP–LDPC. Additionally, leveraging similarities between LDPC codes and graph neural networks (GNNs), we replace the traditional sum–product algorithm in LDPC decoding with a GNN, creating a new model, MAP–GNN. The simulation results demonstrate that MAP–GNN achieves superior performance, particularly when using the deep learning-based GNN approach over conventional techniques.
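
To make the 2D interference problem concrete, the sketch below simulates a toy BPMR readback channel: a bipolar island pattern convolved with a hypothetical 3x3 interference mask plus Gaussian noise. The mask and noise values are illustrative only; the paper's MAP–LDPC and MAP–GNN models operate on a channel of this general form.

```python
import numpy as np
from scipy.signal import convolve2d

# Hypothetical 3x3 interference mask: the centre island dominates each
# readback sample while its neighbours leak in as 2D ISI. Real BPMR
# channel responses depend on head and media geometry.
H = np.array([
    [0.05, 0.10, 0.05],
    [0.10, 1.00, 0.10],
    [0.05, 0.10, 0.05],
])

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=(8, 8))
x = 2 * bits - 1                        # bipolar recorded pattern {-1, +1}
y = convolve2d(x, H, mode="same")       # 2D intersymbol interference
y += rng.normal(0, 0.3, size=y.shape)   # additive channel noise

# A MAP detector estimates x from y using these channel statistics, with
# the LDPC (or GNN) decoder feeding back extrinsic information per bit.
print(np.round(y, 2))
```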

23 pages, 770 KiB  
Article
Computationally Efficient Inference via Time-Aware Modular Control Systems
by Dmytro Shchyrba and Hubert Zarzycki
Electronics 2024, 13(22), 4416; https://doi.org/10.3390/electronics13224416 - 11 Nov 2024
Abstract
Control in multi-agent decision-making systems is an important issue with a wide variety of existing approaches. In this work, we offer a new comprehensive framework for distributed control. The main contributions of this paper are summarized as follows. First, we propose PHIMEC (physics-informed meta control)—an architecture for learning optimal control by employing a physics-informed neural network when the state space is too large for reward-based learning. Second, we offer a way to leverage impulse response as a tool for system modeling and control. We propose IMPULSTM, a novel approach for incorporating time awareness into recurrent neural networks designed to accommodate irregular sampling rates in the signal. Third, we propose DIMAS, a modular approach to increasing computational efficiency in distributed control systems via domain-knowledge integration. We analyze the performance of the first two contributions on a set of corresponding benchmarks and then showcase their combined performance as a domain-informed distributed control system. The proposed approaches show satisfactory performance both individually in their respective applications and as a connected system.
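
IMPULSTM's exact formulation is given in the paper; the sketch below illustrates only the general time-aware recurrent idea it relates to, decaying an LSTM cell state by a learned function of the elapsed time between irregularly spaced samples. The layer sizes and the decay form are assumptions.

```python
import torch
import torch.nn as nn

class TimeAwareLSTMCell(nn.Module):
    """Minimal time-aware LSTM cell: the cell state is decayed according
    to the time gap since the previous sample before the standard LSTM
    update runs. A sketch of the general idea, not IMPULSTM itself."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.decay = nn.Linear(1, hidden_size)  # learned per-unit decay rate

    def forward(self, x, dt, state):
        h, c = state
        # Exponential decay of the memory as a function of the gap dt:
        # larger gaps shrink the carried-over cell state more strongly.
        gamma = torch.exp(-torch.relu(self.decay(dt)))
        c = c * gamma
        return self.cell(x, (h, c))

# Irregularly sampled toy sequence: five samples with varying gaps.
cell = TimeAwareLSTMCell(input_size=3, hidden_size=8)
h = torch.zeros(1, 8)
c = torch.zeros(1, 8)
for gap in [0.1, 0.5, 2.0, 0.2, 1.3]:
    x = torch.randn(1, 3)             # feature vector at this sample
    dt = torch.tensor([[gap]])        # elapsed time since previous sample
    h, c = cell(x, dt, (h, c))
print(h)
```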

Review

54 pages, 2065 KiB  
Review
Edge Intelligence: A Review of Deep Neural Network Inference in Resource-Limited Environments
by Dat Ngo, Hyun-Cheol Park and Bongsoon Kang
Electronics 2025, 14(12), 2495; https://doi.org/10.3390/electronics14122495 - 19 Jun 2025
Abstract
Deploying deep neural networks (DNNs) in resource-limited environments—such as smartwatches, IoT nodes, and intelligent sensors—poses significant challenges due to constraints in memory, computing power, and energy budgets. This paper presents a comprehensive review of recent advances in accelerating DNN inference on edge platforms, with a focus on model compression, compiler optimizations, and hardware–software co-design. We analyze the trade-offs between latency, energy, and accuracy across various techniques, highlighting practical deployment strategies on real-world devices. In particular, we categorize existing frameworks based on their architectural targets and adaptation mechanisms and discuss open challenges such as runtime adaptability and hardware-aware scheduling. This review aims to guide the development of efficient and scalable edge intelligence solutions.
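
As one concrete instance of the model-compression techniques such a review covers, the sketch below applies PyTorch post-training dynamic quantization to a small stand-in network; the architecture and sizes are hypothetical.

```python
import torch
import torch.nn as nn

# A small MLP standing in for a model targeted at an edge device.
model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Post-training dynamic quantization: weights of the Linear layers are
# stored in int8 and dequantized on the fly, shrinking the model and
# speeding up CPU inference at a small accuracy cost.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(model(x).shape, quantized(x).shape)  # same interface, smaller weights
```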
