Intelligent Systems and Dynamic Scheduling: Optimization and Management

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E2: Control Theory and Mechanics".

Deadline for manuscript submissions: 30 September 2025 | Viewed by 3279

Special Issue Editors

Guest Editor
Dr. Yaqiong Lv
School of Transportation and Logistics Engineering, Wuhan University of Technology, Wuhan 430063, China
Interests: reliability optimization; intelligent fault diagnosis and prediction; intelligent maintenance decision making; intelligent dynamic scheduling optimization

Guest Editor
Dr. Shuzhu Zhang
School of Management, Zhejiang University of Finance and Economics, Hangzhou 310018, China
Interests: combinatorial optimization; operations research; intelligent computing

Special Issue Information

Dear Colleagues,

This Special Issue, "Intelligent Systems and Dynamic Scheduling: Optimization and Management", examines how mathematical theory and practical methodology combine to advance manufacturing, logistics, and transportation systems. It highlights the mathematical substance of the collected papers, including the specific models and algorithms they develop.

This Special Issue explores novel mathematical approaches that can significantly enhance the efficiency of dynamic scheduling and intelligent manufacturing/logistics processes. Various optimization algorithms are investigated, ranging from linear programming, which uses mathematical models to optimize resource allocation, to evolutionary computation, which employs iterative processes inspired by biological evolution to solve complex optimization problems.
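As a concrete illustration of the evolutionary-computation side, the sketch below runs a minimal (1+1) evolutionary algorithm on a toy resource-allocation problem. The capacity, profit values, and mutation scheme are illustrative assumptions, not taken from any paper in this issue.

```python
import random

def evolve_allocation(capacity=10, profits=(3, 5, 2, 7), iters=2000, seed=42):
    """(1+1) evolutionary search for a toy allocation problem: choose
    integer units per activity, total units <= capacity, maximizing profit."""
    rng = random.Random(seed)
    n = len(profits)

    def fitness(x):
        if sum(x) > capacity:          # infeasible allocations score worst
            return -1
        return sum(p * u for p, u in zip(profits, x))

    parent = [0] * n                   # start from the empty allocation
    for _ in range(iters):
        child = parent[:]
        i = rng.randrange(n)           # mutate one coordinate by +/- 1 unit
        child[i] = max(0, child[i] + rng.choice((-1, 1)))
        if fitness(child) >= fitness(parent):
            parent = child             # elitist acceptance
    return parent, fitness(parent)

best, value = evolve_allocation()
print(best, value)
```

Richer mutation operators (e.g., moving a unit between activities) and population-based variants improve solution quality; this single-parent form only shows the accept-if-no-worse loop that underlies evolutionary computation.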

The intricate interplay between predictive analytics, which uses statistical models and forecasting techniques to analyze current and historical facts, and real-time decision making is a recurring theme. This integration is crucial for dynamic scheduling, where decisions must be made rapidly and accurately based on current system conditions.
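The coupling of a forecast with an immediate scheduling decision can be sketched as follows; the smoothing constant, the capacity threshold, and the function names are hypothetical choices for illustration.

```python
def smooth(history, alpha=0.3):
    """Exponentially weighted forecast of the next observation."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def dispatch_decision(queue_history, capacity=8):
    """Trigger an extra machine when the forecast queue exceeds capacity."""
    forecast = smooth(queue_history)
    return "add_machine" if forecast > capacity else "hold"

print(dispatch_decision([5, 6, 7, 9, 12]))   # → add_machine
```

The forecast (predictive analytics) is recomputed as each observation arrives, and the threshold rule (real-time decision making) acts on it immediately, which is the interplay the paragraph describes.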

The importance of stochastic models is emphasized throughout the volume. These models, which incorporate randomness and probabilistic distributions, are essential for capturing the uncertainties inherent in production systems and supply chains. By modeling these uncertainties, decision makers can better anticipate and mitigate risks.
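A minimal stochastic-model sketch in this spirit: Monte Carlo estimation of the probability that a stocking level covers uncertain demand. The normal demand distribution and its parameters are illustrative assumptions.

```python
import random

def simulate_service_level(order_qty, mean_demand=100, sd=20,
                           trials=10_000, seed=1):
    """Monte Carlo estimate of the probability that a stocking level
    covers normally distributed demand (a simple stochastic model)."""
    rng = random.Random(seed)
    covered = sum(rng.gauss(mean_demand, sd) <= order_qty
                  for _ in range(trials))
    return covered / trials

print(simulate_service_level(120))   # roughly 0.84, i.e. P(Z <= 1)
```

Sweeping `order_qty` over a range turns this into a crude risk curve, which is how such models help decision makers anticipate and mitigate shortage risk.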

The collected papers also highlight the role of discrete-event simulation, which uses mathematical models to imitate the operation of complex systems over time. Such simulations are crucial for understanding the nuances of scheduling policies and for testing the effectiveness of different scheduling strategies before implementation.
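The event-list mechanism at the heart of discrete-event simulation can be sketched with a single machine processing jobs in arrival order; the job data are made up for illustration.

```python
import heapq

def simulate_single_machine(jobs):
    """Minimal discrete-event simulation of one machine processing jobs
    (arrival_time, service_time) in FIFO order; returns completion times."""
    events = [(arrival, i) for i, (arrival, _) in enumerate(jobs)]
    heapq.heapify(events)                 # event list ordered by time
    machine_free_at = 0.0
    completions = {}
    while events:
        t, i = heapq.heappop(events)      # next arrival event
        start = max(t, machine_free_at)   # wait if the machine is busy
        machine_free_at = start + jobs[i][1]
        completions[i] = machine_free_at
    return completions

print(simulate_single_machine([(0, 4), (1, 3), (10, 2)]))
# → {0: 4, 1: 7, 2: 12}
```

Swapping the FIFO rule for a priority rule (e.g., shortest service time first) and comparing the resulting completion times is exactly the kind of pre-implementation policy test the paragraph describes.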

In summary, this Special Issue showcases pioneering research that bridges the gap between theoretical mathematics and practical applications in the field of intelligent systems and dynamic scheduling. The aim is not only to highlight current innovations but also to inspire future research that continues to push the boundaries of what is achievable within this interdisciplinary domain.

Dr. Yaqiong Lv
Dr. Shuzhu Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Registered authors can use the online submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • mathematical modeling
  • intelligent systems
  • dynamic scheduling
  • evolutionary computation
  • intelligent fault diagnosis
  • combinatorial optimization
  • operations research
  • intelligent computing
  • real-time decision making
  • reinforcement learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (3 papers)


Research

18 pages, 4093 KiB  
Article
Large Language Model-Guided SARSA Algorithm for Dynamic Task Scheduling in Cloud Computing
by Bhargavi Krishnamurthy and Sajjan G. Shiva
Mathematics 2025, 13(6), 926; https://doi.org/10.3390/math13060926 - 11 Mar 2025
Viewed by 663
Abstract
Nowadays, more enterprises are rapidly transitioning to cloud computing as it has become an ideal platform to perform the development and deployment of software systems. Because of its growing popularity, around ninety percent of enterprise applications rely on cloud computing solutions. The inherent dynamic and uncertain nature of cloud computing makes it difficult to accurately measure the exact state of a system at any given point in time. Potential challenges arise with respect to task scheduling, load balancing, resource allocation, governance, compliance, migration, data loss, and lack of resources. Among all challenges, task scheduling is one of the main problems as it reduces system performance due to improper utilization of resources. State Action Reward Action (SARSA) learning, a policy variant of Q learning, which learns the value function based on the current policy action, has been utilized in task scheduling. But it lacks the ability to provide better heuristics for state and action pairs, resulting in biased solutions in a highly dynamic and uncertain computing environment like the cloud. In this paper, the SARSA learning ability is enriched by the guidance of the Large Language Model (LLM), which uses LLM heuristics to formulate the optimal Q function. This integration of the LLM and SARSA for task scheduling provides better sampling efficiency and also reduces the bias in task allocation. The heuristic value generated by the LLM is capable of mitigating the performance bias and also ensuring the model is not susceptible to hallucination. This paper provides the mathematical modeling of the proposed LLM_SARSA for performance in terms of the rate of convergence, reward shaping, heuristic values, under-/overestimation on non-optimal actions, sampling efficiency, and unbiased performance. The implementation of the LLM_SARSA is carried out using the CloudSim express open-source simulator by considering the Google cloud dataset composed of eight different types of clusters. The performance is compared with recent techniques like reinforcement learning, optimization strategy, and metaheuristic strategy. The LLM_SARSA outperforms the existing works with respect to the makespan time, degree of imbalance, cost, and resource utilization. The experimental results validate the inference of mathematical modeling in terms of the convergence rate and better estimation of the heuristic value to optimize the value function of the SARSA learning algorithm. Full article
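The on-policy SARSA update with an external heuristic term can be sketched as below. The fixed `heuristic` function is only a stand-in for the paper's LLM-generated heuristic values, and the toy states, actions, and rewards are invented for illustration; they are not the authors' model.

```python
import random
from collections import defaultdict

def heuristic(state, action):
    """Stand-in for an LLM-provided heuristic: a fixed prior that
    favors action 0 (e.g., sending tasks to a less loaded machine)."""
    return 1.0 if action == 0 else 0.0

def sarsa_with_heuristic(n_states=4, n_actions=2, episodes=200,
                         alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)

    def pick(s):
        if rng.random() < epsilon:
            return rng.randrange(n_actions)
        # heuristic added at action-selection time, as in heuristic RL
        return max(range(n_actions), key=lambda a: Q[(s, a)] + heuristic(s, a))

    for _ in range(episodes):
        s, a = 0, pick(0)
        for _ in range(10):                      # bounded episode length
            r = 1.0 if a == 0 else 0.0           # toy reward: action 0 is best
            s2 = (s + 1) % n_states
            a2 = pick(s2)
            # on-policy SARSA target uses the action actually taken next
            Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])
            s, a = s2, a2
    return Q

Q = sarsa_with_heuristic()
```

The heuristic biases exploration toward promising actions without altering the stored Q-values, which is one standard way to inject external guidance into SARSA.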

17 pages, 2128 KiB  
Article
Discrete Dynamic Berth Allocation Optimization in Container Terminal Based on Deep Q-Network
by Peng Wang, Jie Li and Xiaohua Cao
Mathematics 2024, 12(23), 3742; https://doi.org/10.3390/math12233742 - 28 Nov 2024
Cited by 1 | Viewed by 1278
Abstract
Effective berth allocation in container terminals is crucial for optimizing port operations, given the limited space and the increasing volume of container traffic. This study addresses the discrete dynamic berth allocation problem (DDBAP) under uncertain ship arrival times and varying load capacities. A novel deep Q-network (DQN)-based model is proposed, leveraging a custom state space, rule-based actions, and an optimized reward function to dynamically allocate berths and schedule vessel arrivals. Comparative experiments were conducted with traditional algorithms, including ant colony optimization (ACO), parallel ant colony optimization (PACO), and ant colony optimization combined with genetic algorithm (ACOGA). The results show that DQN outperforms these methods significantly, achieving superior efficiency and effectiveness, particularly under high variability in ship arrivals and load conditions. Specifically, the DQN model reduced the total waiting time of vessels by 58.3% compared to ACO (262.85 h), by 57.9% compared to PACO (259.5 h), and by 57.4% compared to ACOGA (257.4 h), with a total waiting time of 109.45 h. Despite its impressive performance, DQN requires substantial computational power during the training phase and is sensitive to data quality. These findings underscore the potential of reinforcement learning to optimize berth allocation under dynamic conditions. Future work will explore multi-agent reinforcement learning (MARL) and real-time adaptive mechanisms to further enhance the robustness and scalability of the model. Full article
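The learning loop behind such a model can be sketched with a tabular Q-learning stand-in for the paper's DQN (a neural network replaces the table in the actual work); the vessel data, reward definition, and hyperparameters below are invented for illustration.

```python
import random

def q_learning_berths(arrivals, n_berths=2, episodes=300,
                      alpha=0.3, gamma=0.95, epsilon=0.2, seed=7):
    """Tabular Q-learning stand-in for a DQN berth allocator: assign each
    arriving vessel (arrival_time, handling_time) to a berth.
    State = index of next vessel; action = berth; reward = -waiting time."""
    rng = random.Random(seed)
    n = len(arrivals)
    Q = [[0.0] * n_berths for _ in range(n + 1)]   # row n is terminal

    def run(greedy=False):
        free_at = [0.0] * n_berths
        plan, total_wait = [], 0.0
        for i, (arrive, handle) in enumerate(arrivals):
            if not greedy and rng.random() < epsilon:
                b = rng.randrange(n_berths)        # explore
            else:
                b = max(range(n_berths), key=lambda a: Q[i][a])
            wait = max(0.0, free_at[b] - arrive)
            free_at[b] = max(free_at[b], arrive) + handle
            # Q-learning update with the negative wait as reward
            Q[i][b] += alpha * (-wait + gamma * max(Q[i + 1]) - Q[i][b])
            plan.append(b)
            total_wait += wait
        return plan, total_wait

    for _ in range(episodes):
        run()
    return run(greedy=True)

plan, wait = q_learning_berths([(0, 5), (1, 5), (2, 2)])
```

Replacing the Q-table with a network over a richer state (queue lengths, load profiles) and adding experience replay yields the DQN formulation the paper studies.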

29 pages, 5045 KiB  
Article
Intelligent Fault Diagnosis Across Varying Working Conditions Using Triplex Transfer LSTM for Enhanced Generalization
by Misbah Iqbal, Carman K. M. Lee, Kin Lok Keung and Zhonghao Zhao
Mathematics 2024, 12(23), 3698; https://doi.org/10.3390/math12233698 - 26 Nov 2024
Viewed by 905
Abstract
Fault diagnosis plays a pivotal role in ensuring the reliability and efficiency of industrial machinery. While various machine/deep learning algorithms have been employed extensively for diagnosing faults in bearings and gears, the scarcity of data and the limited availability of labels have become a major bottleneck in developing data-driven diagnosis approaches, restricting the accuracy of deep networks. To overcome the limitations of insufficient labeled data and domain shift problems, an intelligent, data-driven approach based on the Triplex Transfer Long Short-Term Memory (TTLSTM) network is presented, which leverages transfer learning and fine-tuning strategies. Our proposed methodology uses empirical mode decomposition (EMD) to extract pertinent features from raw vibrational signals and utilizes Pearson correlation coefficients (PCC) for feature selection. L2 regularization transfer learning is utilized to mitigate the overfitting problem and to improve the model’s adaptability in diverse working conditions, especially in scenarios with limited labeled data. Compared with traditional transfer learning approaches, such as TCA, BDA, and JDA, which demonstrate accuracies in the range of 40–50%, our proposed model excels in identifying machinery faults with minimal labeled data by achieving 99.09% accuracy. Moreover, it performs significantly better than classical methods like SVM, RF, and CNN-based networks found in the literature, demonstrating the improved performance of our approach in fault diagnosis under varying working conditions and proving its applicability in real-world applications. Full article
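The PCC-based feature-selection step mentioned in the abstract can be sketched as follows; the correlation threshold and the toy feature data are illustrative assumptions, not the paper's settings.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_features(features, label, threshold=0.5):
    """Keep features whose |PCC| with the label exceeds the threshold."""
    return [name for name, values in features.items()
            if abs(pearson(values, label)) > threshold]

label = [0, 1, 2, 3, 4]
features = {"trend": [0, 1, 2, 3, 4],        # perfectly correlated
            "noise": [2, 0, 3, 1, 2]}        # weakly correlated
print(select_features(features, label))      # → ['trend']
```

In the paper's pipeline, the `features` would be EMD-derived statistics of the vibration signals, and only the retained ones feed the TTLSTM network.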
