Adaptive Decision Making Across Industries with AI and Machine Learning: Frameworks, Challenges, and Innovations

A special issue of Computers (ISSN 2073-431X). This special issue belongs to the section "AI-Driven Innovations".

Deadline for manuscript submissions: closed (31 October 2025) | Viewed by 16928

Special Issue Editor

Dr. Samiul Islam
Guest Editor
Shannon School of Business, Cape Breton University, Sydney, NS B1M 1A2, Canada
Interests: supply chain management; operations management; data science and predictive analytics; machine learning; text classification; sentiment analysis

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) and machine learning (ML) are transforming decision-making processes across various industries, from manufacturing and supply chain management to healthcare and finance. By enabling data-driven insights, predictive analytics, and automation, AI-driven adaptive decision making enhances efficiency, reduces risks, and improves overall operational performance. However, challenges such as model reliability, ethical considerations, interpretability, and real-time adaptability remain critical in industrial applications.

This Special Issue aims to explore cutting-edge AI and ML methodologies that drive adaptive decision making across industries. We invite original research articles, review papers, and case studies that address the advancements, challenges, and practical implementations of AI-driven decision systems. Submissions focused on optimization techniques, forecasting, decision making under uncertainty, real-time analytics, risk management, and ethical AI applications in industry are particularly encouraged.

Topics of interest include, but are not limited to, the following:

  • AI-driven predictive analytics for industrial decision making;
  • Optimization and reinforcement learning in industrial applications;
  • Ethical considerations and fairness in AI-driven decisions;
  • Real-time adaptive decision models in manufacturing and logistics;
  • AI in supply chain resilience and risk management;
  • Human-AI collaboration in industrial automation;
  • Scalable AI solutions for dynamic industrial environments;
  • Trust, explainability, and transparency in AI-based decision systems.

Dr. Samiul Islam
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • deep learning in Industry 5.0
  • optimization
  • reinforcement learning
  • automation
  • predictive analytics
  • decision science

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (6 papers)


Research


25 pages, 7143 KB  
Article
MoviGestion: Automating Fleet Management for Personnel Transport Companies Using a Conversational System and IoT Powered by AI
by Elias Torres-Espinoza, Luiggi Raúl Juarez-Vasquez and Vicky Huillca-Ayza
Computers 2026, 15(2), 71; https://doi.org/10.3390/computers15020071 - 23 Jan 2026
Viewed by 576
Abstract
The increasing complexity of fleet operations often forces drivers and administrators to alternate between fragmented tools for geolocation, messaging, and spreadsheet-based reporting, which slows response times and increases cognitive load. This study evaluates a comprehensive architectural framework designed to automate fleet management in personnel transport companies. The research proposes a unified methodology integrating Internet-of-Things (IoT) telemetry, cloud analytics, and Conversational AI to mitigate information fragmentation. Through a Lean UX iterative process, the proposed system was modeled and validated, with 30 participants (10 administrators and 20 drivers) who performed representative operational tasks in a simulated environment. Usability was assessed through the System Usability Scale (SUS), obtaining a score of 71.5 out of 100, classified as “Good Usability”. The results demonstrate that combining conversational interfaces with centralized operational data reduces friction, accelerates decision-making, and improves the overall user experience in fleet management contexts.

17 pages, 466 KB  
Article
Breaking the Speed–Accuracy Trade-Off: A Novel Embedding-Based Framework with Coarse Screening-Refined Verification for Zero-Shot Named Entity Recognition
by Meng Yang, Shuo Wang, Hexin Yang and Ning Chen
Computers 2026, 15(1), 36; https://doi.org/10.3390/computers15010036 - 7 Jan 2026
Viewed by 515
Abstract
Although fine-tuning pretrained language models has brought remarkable progress to zero-shot named entity recognition (NER), current generative approaches still suffer from inherent limitations. Their autoregressive decoding mechanism requires token-by-token generation, resulting in low inference efficiency, while the massive parameter scale leads to high computational and deployment costs. In contrast, span-based methods avoid autoregressive decoding but often face large candidate spaces and severe noise redundancy, which hinder efficient entity localization in long-text scenarios. To overcome these challenges, we propose an efficient Embedding-based NER framework that achieves an optimal balance between performance and efficiency. Specifically, the framework first introduces a lightweight dynamic feature matching module for coarse-grained entity localization, enabling rapid filtering of potential entity regions. Then, a hierarchical progressive entity filtering mechanism is applied for fine-grained recognition and noise suppression. Experimental results demonstrate that the proposed model, which is trained on a single RTX 5090 GPU for only 24 h, attains approximately 90% of the performance of the SOTA GNER-T5 11B model while using only one-seventh of its parameters. Moreover, by eliminating the redundancy of autoregressive decoding, the proposed framework achieves a 17× faster inference speed compared to GNER-T5 11B and significantly surpasses traditional span-based approaches in efficiency.

15 pages, 1844 KB  
Article
Artificial Intelligence Agent-Enabled Predictive Maintenance: Conceptual Proposal and Basic Framework
by Wenyu Jiang and Fuwen Hu
Computers 2025, 14(8), 329; https://doi.org/10.3390/computers14080329 - 15 Aug 2025
Cited by 6 | Viewed by 8070
Abstract
Predictive maintenance (PdM) represents a significant evolution in maintenance strategies. However, challenges such as system integration complexity, data quality, and data availability are intricately intertwined, collectively impacting the successful deployment of PdM systems. Recently, large model-based agents, or agentic artificial intelligence (AI), have evolved from simple task automation to active problem-solving and strategic decision-making. As such, we propose an AI agent-enabled PdM method that leverages an agentic AI development platform to streamline the development of a multimodal data-based fault detection agent, a RAG (retrieval-augmented generation)-based fault classification agent, a large model-based fault diagnosis agent, and a digital twin-based fault handling simulation agent. This approach breaks through the limitations of traditional PdM, which relies heavily on single models. This combination of “AI workflow + large reasoning models + operational knowledge base + digital twin” integrates the concepts of BaaS (backend as a service) and LLMOps (large language model operations), constructing an end-to-end intelligent closed loop from data perception to decision execution. Furthermore, a tentative prototype is demonstrated to show the technology stack and the system integration methods of the agentic AI-based PdM.

23 pages, 1590 KB  
Article
A Decision Support System for Classifying Suppliers Based on Machine Learning Techniques: A Case Study in the Aeronautics Industry
by Ana Claudia Andrade Ferreira, Alexandre Ferreira de Pinho, Matheus Brendon Francisco, Laercio Almeida de Siqueira, Jr. and Guilherme Augusto Vilas Boas Vasconcelos
Computers 2025, 14(7), 271; https://doi.org/10.3390/computers14070271 - 10 Jul 2025
Cited by 1 | Viewed by 2151
Abstract
This paper presents the application of four machine learning algorithms to segment suppliers in a real case: K-Means, Hierarchical K-Means, Agglomerative Nesting (AGNES), and Fuzzy Clustering. The suppliers of the analyzed company were clustered using attributes such as the number of non-conformities, location, and quantity supplied, among others. The CRISP-DM methodology guided the development of the work. The proposed methodology is relevant for both industry and academia, as it helps managers make decisions about the quality of their suppliers and compares four different algorithms for this purpose, offering a useful insight for new studies. The K-Means algorithm achieved the best performance, both in the metrics obtained and in its simplicity of use. It is important to highlight that no studies to date have applied the four algorithms proposed here to an industrial case, and this work demonstrates such an application. In this Industry 4.0 era, the use of artificial intelligence is essential for companies to make better decisions using data-driven concepts.

23 pages, 1222 KB  
Article
A Data Quality Pipeline for Industrial Environments: Architecture and Implementation
by Teresa Peixoto, Óscar Oliveira, Eliana Costa e Silva, Bruno Oliveira and Fillipe Ribeiro
Computers 2025, 14(7), 241; https://doi.org/10.3390/computers14070241 - 20 Jun 2025
Cited by 6 | Viewed by 2928
Abstract
In modern industrial environments, data-driven decision-making plays a crucial role in ensuring operational efficiency, predictive maintenance, and process optimization. However, the effectiveness of these decisions is highly dependent on the quality of the data. Industrial data is typically generated in real time by sensors integrated into IoT devices and smart manufacturing systems, resulting in high-volume, heterogeneous, and rapidly changing data streams. This paper presents the design and implementation of a data quality pipeline specifically adapted to such industrial contexts. The proposed pipeline includes modular components responsible for data ingestion, profiling, validation, and continuous monitoring, and is guided by a comprehensive set of data quality dimensions, including accuracy, completeness, consistency, and timeliness. For each dimension, appropriate metrics are applied, including accuracy measures based on dynamic intervals and validations based on consistency rules. To evaluate its effectiveness, we conducted a case study in a real manufacturing environment. By continuously monitoring data quality, problems can be proactively identified before they impact downstream processes, resulting in more reliable and timely decisions.

Other


30 pages, 3710 KB  
Systematic Review
Machine Learning and Ensemble Methods for Cardiovascular Disease Prediction: A Systematic Review of Approaches, Performance Trends, and Research Challenges
by Ghazala Gul, Imtiaz Ali Korejo, Dil Nawaz Hakro, Haitham Alqahtani, Abdullah Abbasi, Muhammad Babar, Osama Al Rahbi and Najma Imtiaz Ali
Computers 2026, 15(1), 25; https://doi.org/10.3390/computers15010025 - 5 Jan 2026
Viewed by 2081
Abstract
Knowledge discovery helps mitigate the shortcomings of classical machine learning, especially the challenges posed by imbalanced, high-dimensional, and noisy data. Ensemble learning is characterized by the adaptive combination of multiple models, voting and other data fusion strategies, and the incorporation of disparate information fusion methods; it improves a predictive model’s accuracy, stability, and generalization. This paper provides a summary of the important approaches to ensemble learning and their real-world uses, emphasizing challenges and opportunities for future work. It also discusses how ensemble learning integrates with emergent areas such as deep learning and reinforcement learning, and describes the most important machine learning methods for predicting heart disease, including decision trees, support vector machines, artificial neural networks, Naïve Bayes, random forest, and K-nearest neighbors.
