Complex Process Modeling and Control Based on AI Technology

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: 20 December 2025 | Viewed by 9295

Special Issue Editors


Guest Editor
School of Automation, China University of Geosciences, Wuhan 430074, China
Interests: process control; intelligent control; computational intelligence

Guest Editor
School of Information Science and Technology, Beijing University of Technology, Beijing 100124, China
Interests: robot control; multi-agent cooperative control; high-precision control of electromechanical systems; active disturbance rejection control; advanced robust control; control theory and application

Special Issue Information

Dear Colleagues,

In recent years, the rapid development of big data and artificial intelligence (AI) technology has drawn the attention of many scholars to the question of how these new technologies can be applied to complex systems. The aim of this Special Issue is to explore complex process modeling, optimization, and control, and to provide important support for applying advanced machine learning and AI methods and techniques to complex processes.

This Special Issue focuses on complex process modeling and control based on AI technology and seeks contributions with considerable novelty in both theoretical background and practical design. Papers should present original ideas and new approaches and clearly indicate the progress made in problem formulation, methodology, or application. Research areas may include (but are not limited to) the following:

  • Hybrid intelligent modeling techniques;
  • Data-driven modeling techniques;
  • Modeling and optimization of complex industrial processes;
  • Online measurement, process control, and optimization for cyber–physical systems;
  • Data mining and management methods for massive volumes of data;
  • Machine learning applications to manufacturing automation.

Prof. Dr. Jie Hu
Prof. Dr. Sheng Du
Dr. Pan Yu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • data-driven modeling
  • artificial intelligence
  • process control
  • dynamical systems modeling
  • machine learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)

Research

27 pages, 5478 KiB  
Article
Hybrid LSTM–Transformer Architecture with Multi-Scale Feature Fusion for High-Accuracy Gold Futures Price Forecasting
by Yali Zhao, Yingying Guo and Xuecheng Wang
Mathematics 2025, 13(10), 1551; https://doi.org/10.3390/math13101551 - 8 May 2025
Viewed by 301
Abstract
Amidst global economic fluctuations and escalating geopolitical risks, gold futures, as a pivotal safe-haven asset, demonstrate price dynamics that directly impact investor decision-making and risk mitigation effectiveness. Traditional forecasting models face significant limitations in capturing long-term trends, addressing abrupt volatility, and mitigating multi-source noise within complex market environments characterized by nonlinear interactions and extreme events. Current research predominantly focuses on single-model approaches (e.g., ARIMA or standalone neural networks), inadequately addressing the synergistic effects of multimodal market signals (e.g., cross-market index linkages, exchange rate fluctuations, and policy shifts) and lacking systematic validation of model robustness under extreme events. Furthermore, feature selection often relies on empirical assumptions, failing to uncover non-explicit correlations between market factors and gold futures prices. A review of the global literature reveals three critical gaps: (1) the insufficient integration of temporal dependency and global attention mechanisms, leading to imbalanced predictions of long-term trends and short-term volatility; (2) the neglect of dynamic coupling effects among cross-market risk factors, such as energy ETF–metal market spillovers; and (3) the absence of hybrid architectures tailored for high-frequency noise environments, limiting predictive utility for decision support. This study proposes a three-stage LSTM–Transformer–XGBoost fusion framework. First, XGBoost-based feature importance ranking identifies six key drivers from thirty-six candidate indicators: the NASDAQ Index, S&P 500 closing price, silver futures, USD/CNY exchange rate, China’s 1-year Treasury yield, and Guotai Zhongzheng Coal ETF. Second, a dual-channel deep learning architecture integrates an LSTM for long-term temporal memory and a Transformer with multi-head self-attention to decode implicit relationships in unstructured signals (e.g., market sentiment and climate policies). Third, rolling-window forecasting is conducted using daily gold futures prices from the Shanghai Futures Exchange (2015–2025). Key innovations include the following: (1) a bidirectional LSTM–Transformer interaction architecture employing cross-attention mechanisms to dynamically couple global market context with local temporal features, surpassing traditional linear combinations; (2) a Dynamic Hierarchical Partition Framework (DHPF) that stratifies data into four dimensions (price trends, volatility, external correlations, and event shocks) to address multi-driver complexity; (3) a dual-loop adaptive mechanism enabling endogenous parameter updates and exogenous environmental perception to minimize prediction error volatility. This research provides financial institutions with robust quantitative tools to enhance asset allocation optimization and strengthen risk hedging strategies, and it offers an interpretable hybrid framework for derivative pricing intelligence. Future applications could leverage high-frequency data sharing and cross-market risk contagion models to enhance China’s influence in global gold pricing governance.
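
A minimal sketch of the dual-channel idea is given below. It is illustrative only: the layer sizes, the cross-attention wiring, and the use of six input drivers are assumptions, and the XGBoost feature-ranking, DHPF, and dual-loop components of the full framework are omitted.

```python
# Illustrative dual-channel forecaster: an LSTM branch for local temporal memory
# and a self-attention branch for global context, fused by cross-attention.
# Layer sizes and wiring are assumptions, not the authors' exact architecture.
import torch
import torch.nn as nn

class DualChannelForecaster(nn.Module):
    def __init__(self, n_features=6, d_model=64, n_heads=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, d_model, batch_first=True)             # channel 1
        self.embed = nn.Linear(n_features, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)  # channel 2
        self.cross = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 1)                                      # next-step price

    def forward(self, x):                   # x: (batch, window, n_features)
        h_seq, _ = self.lstm(x)             # local temporal features
        e = self.embed(x)
        g_seq, _ = self.attn(e, e, e)       # global market context
        q = h_seq[:, -1:, :]                # last LSTM state queries the context
        fused, _ = self.cross(q, g_seq, g_seq)
        return self.head(fused.squeeze(1))  # (batch, 1)

model = DualChannelForecaster()
window = torch.randn(8, 30, 6)              # 8 samples, 30-day window, 6 drivers
print(model(window).shape)                  # torch.Size([8, 1])
```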

16 pages, 468 KiB  
Article
Mode Selection for Device to Device Communication in Dynamic Network: A Statistical and Deep Learning Method
by Daqian Liu, Guiqi Kang, Yuntao Shi, Yingying Wang and Zhenwu Lei
Mathematics 2025, 13(3), 343; https://doi.org/10.3390/math13030343 - 22 Jan 2025
Viewed by 646
Abstract
A major challenge in device-to-device (D2D) communications is determining the appropriate communication mode for each potential D2D pair. In dynamic networks, the continuous movement of devices increases the complexity of channel state modeling, which makes it difficult to predict the quality of network service and to select appropriate switching thresholds, ultimately affecting the accuracy of D2D mode selection. This paper proposes a novel D2D mode selection method that integrates deep learning with statistical learning and comprises three modules: signal-to-interference-plus-noise ratio (SINR) prediction, error analysis, and threshold selection. Specifically, the SINR prediction module employs the gated recurrent unit (GRU) method to predict future SINR values, and the error analysis module applies a non-parametric method to construct a probability density function of the prediction error. The combination of these two modules significantly improves prediction accuracy. Additionally, in the threshold selection module, two constraints are introduced to mitigate the problem of frequent switching: average reliability (AR) and probably correct reliability (PCR). Simulation results demonstrate that the proposed method achieves higher system throughput, longer D2D mode residence time, and a lower mode switching frequency compared to other methods.
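
The predict-then-model-the-error pipeline can be sketched in a few lines. In the snippet below, a GRU predicts the next SINR value and a Gaussian kernel density estimate models the residuals; the network size, the choice of Gaussian KDE, and the threshold rule are illustrative assumptions, not the paper's exact design.

```python
# Sketch of GRU-based SINR prediction plus a non-parametric error density.
# Untrained toy model and random data; sizes and the KDE choice are assumptions.
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import gaussian_kde

class SINRPredictor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, steps, 1) past SINR values
        h, _ = self.gru(x)
        return self.out(h[:, -1])       # predicted next-step SINR

model = SINRPredictor()
history = torch.randn(64, 20, 1)                    # toy standardized SINR windows
pred = model(history).detach().numpy().ravel()
truth = np.random.randn(64)                         # stand-in for realized SINR

err_pdf = gaussian_kde(truth - pred)                # density of the prediction error
threshold = 0.5                                     # candidate switching threshold
for p in pred[:3]:
    # P(true SINR > threshold | prediction p) = P(error > threshold - p)
    prob_ok = err_pdf.integrate_box_1d(threshold - p, np.inf)
    print(f"pred={p:+.2f}  P(SINR > threshold) ~ {prob_ok:.2f}")
```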

19 pages, 2126 KiB  
Article
A Dual-Path Neural Network for High-Impedance Fault Detection
by Keqing Ning, Lin Ye, Wei Song, Wei Guo, Guanyuan Li, Xiang Yin and Mingze Zhang
Mathematics 2025, 13(2), 225; https://doi.org/10.3390/math13020225 - 10 Jan 2025
Cited by 1 | Viewed by 771
Abstract
High-impedance fault detection poses significant challenges for distribution network maintenance and operation. We propose a dual-path neural network for high-impedance fault detection. To enhance feature extraction, we use a Gramian Angular Field algorithm to transform 1D zero-sequence voltage signals into 2D images. Our dual-branch network processes both representations simultaneously: a convolutional neural network (CNN) extracts spatial features from the transformed images, while a gated recurrent unit (GRU) captures temporal features from the raw signals. To optimize model performance, we integrate the Crested Porcupine Optimizer (CPO) algorithm for the adaptive optimization of key network hyperparameters. The experimental results demonstrate that our method achieves 99.70% recognition accuracy on a dataset comprising high-impedance faults, capacitor switching, and load connections. Furthermore, it maintains robust performance under various test conditions, including different noise levels and network topology changes.
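
The image-encoding step that feeds the CNN branch is easy to reproduce. The snippet below implements the standard Gramian Angular Summation Field; the toy zero-sequence signal and image size are assumptions.

```python
# Standard Gramian Angular Summation Field (GASF): encode a 1D signal as a 2D image.
import numpy as np

def gasf(signal: np.ndarray) -> np.ndarray:
    s_min, s_max = signal.min(), signal.max()
    x = 2 * (signal - s_min) / (s_max - s_min) - 1     # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))             # angular encoding
    # cos(phi_i + phi_j), expanded via the angle-sum identity
    return np.outer(np.cos(phi), np.cos(phi)) - np.outer(np.sin(phi), np.sin(phi))

t = np.linspace(0, 1, 128)
zero_seq_voltage = np.sin(2 * np.pi * 5 * t) * np.exp(-2 * t)  # toy fault-like signal
image = gasf(zero_seq_voltage)                                 # 128x128 CNN input
print(image.shape)
```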

18 pages, 3370 KiB  
Article
Start Time Planning for Cyclic Queuing and Forwarding in Time-Sensitive Networks
by Daqian Liu, Zhewei Zhang, Yuntao Shi, Yingying Wang, Jingcheng Guo and Zhenwu Lei
Mathematics 2024, 12(21), 3382; https://doi.org/10.3390/math12213382 - 29 Oct 2024
Viewed by 1153
Abstract
Time-sensitive networking (TSN) is a network communication technology used in fields such as the industrial internet and intelligent transportation, capable of meeting application requirements for precise time synchronization and low-latency deterministic forwarding. In TSN, cyclic queuing and forwarding (CQF) is a traffic shaping mechanism, extensively discussed in the recent literature, that makes the delay of time-triggered (TT) flows deterministic and easy to calculate. In this paper, two algorithms are designed to tackle the start time planning issue under the CQF mechanism, namely the flow–path–offset joint scheduling (FPOJS) algorithm and a congestion-aware scheduling algorithm, to improve the scheduling success ratio of TT flows. The FPOJS algorithm, which adopts a novel scheduling object (a combination of flow, path, and offset), implements scheduling in descending order of a well-designed priority that considers the resource capacity and resource requirements of ports. The congestion-aware scheduling algorithm identifies and optimizes congested ports during scheduling and substantially improves the scheduling success ratio by dynamically configuring port resources. The experimental results demonstrate that the FPOJS algorithm achieves a 39% improvement in the scheduling success ratio over the naive algorithm, 13% over the Tabu-ITP algorithm, and 10% over the MSS algorithm. Moreover, the algorithm exhibits a higher scheduling success ratio in large-scale TSNs.
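
The feasibility check underlying start time planning can be illustrated with a toy greedy search: a flow injected at offset o with period p (in cycles) occupies, at hop h along its path, cycle slot (o + i·p + h) modulo the hyperperiod, and every (port, cycle) slot must stay within the port's per-cycle byte budget. The sketch below shows only this capacity check with a simple priority rule; it is not the FPOJS or congestion-aware algorithm, and the topology, budget, and flow set are made-up examples.

```python
# Toy greedy offset assignment for CQF: check per-port, per-cycle byte budgets.
# Not the paper's FPOJS algorithm; topology, budget, and flows are assumptions.
from collections import defaultdict

PORT_BYTES_PER_CYCLE = 1500        # transmit budget of one output port per cycle

def slots_for(path, offset, period, size, H, load):
    """All (port, cycle) slots a flow occupies in the hyperperiod, or None on overflow."""
    slots = []
    for start in range(offset, H, period):    # one frame per period
        for hop, port in enumerate(path):     # CQF forwards one cycle later per hop
            slots.append((port, (start + hop) % H))
    return slots if all(load[s] + size <= PORT_BYTES_PER_CYCLE for s in slots) else None

def try_schedule(flows, H):
    load, plan = defaultdict(int), {}
    # Tighter periods consume more slots, so schedule them first (simple priority).
    for fid, (path, period, size) in sorted(flows.items(), key=lambda kv: kv[1][1]):
        for offset in range(period):          # candidate injection offsets
            slots = slots_for(path, offset, period, size, H, load)
            if slots is not None:
                for s in slots:
                    load[s] += size
                plan[fid] = offset
                break
        else:
            return None                       # a port is congested: no offset fits
    return plan

flows = {  # flow id: (output ports along the path, period in cycles, frame bytes)
    "tt1": (["swA-p1", "swB-p2"], 2, 800),
    "tt2": (["swA-p1", "swB-p3"], 2, 800),
    "tt3": (["swA-p1", "swB-p2"], 4, 600),
}
print(try_schedule(flows, H=4))               # {'tt1': 0, 'tt2': 1, 'tt3': 0}
```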

14 pages, 2724 KiB  
Article
Improved Real-Time Detection Transformer-Based Rail Fastener Defect Detection Algorithm
by Wei Song, Bin Liao, Keqing Ning and Xiaoyu Yan
Mathematics 2024, 12(21), 3349; https://doi.org/10.3390/math12213349 - 25 Oct 2024
Cited by 3 | Viewed by 1179
Abstract
To address several issues of the Real-Time DEtection TRansformer (RT-DETR) object detection model in rail fastener defect detection, namely poor defect feature extraction, inefficient use of computational resources, and suboptimal channel attention in the self-attention mechanism, the following improvements were made. First, a Super-Resolution Convolutional Module (SRConv) was designed as a separate component and integrated into the backbone network; it enhances image details and clarity while preserving the original image structure and semantic content, improving the model’s ability to extract defect features. Second, a channel attention mechanism was integrated into the self-attention module of RT-DETR to strengthen the focus on feature-map channels, addressing the sparse attention maps caused by the lack of channel attention while saving computational resources. Finally, the experimental results show that, compared to the original model, the improved RT-DETR-based rail fastener defect detection algorithm, with only 0.4 MB of additional parameters, achieved higher accuracy: a 2.8 percentage point increase in the Mean Average Precision (mAP) across IoU thresholds from 0.5 to 0.9 and a 1.7 percentage point increase in the Average Recall (AR) across the same thresholds.
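
The channel attention idea can be shown with a standalone squeeze-and-excitation-style block. This generic module is a stand-in for illustration, not the authors' exact integration into RT-DETR's self-attention; the reduction ratio and feature-map sizes are assumptions.

```python
# Squeeze-and-excitation-style channel attention: reweight feature-map channels.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)       # squeeze: global spatial average
        self.fc = nn.Sequential(                  # excitation: per-channel gate
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                         # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c))      # channel weights in (0, 1)
        return x * w.view(b, c, 1, 1)

feat = torch.randn(2, 256, 20, 20)                # e.g. a backbone feature map
print(ChannelAttention(256)(feat).shape)          # torch.Size([2, 256, 20, 20])
```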

12 pages, 483 KiB  
Article
Anti-Disturbance Bumpless Transfer Control for Switched Systems via a Switched Equivalent-Input-Disturbance Approach
by Jiawen Wu, Qian Liu and Pan Yu
Mathematics 2024, 12(15), 2307; https://doi.org/10.3390/math12152307 - 23 Jul 2024
Viewed by 658
Abstract
This paper concentrates on anti-disturbance bumpless transfer (ADBT) control design for switched systems. The ADBT control design problem refers to designing a continuous controller and a switching rule that ensure the switched system satisfies the ADBT property. First, the concept of the ADBT property is introduced. Then, via a switched equivalent-input-disturbance (EID) methodology, a switched EID estimator is formulated to estimate the impact of external disturbances on the switched system. Next, a bumpless transfer controller is constructed via a compensator integrating the EID estimate. Finally, the effectiveness of the presented control scheme is verified by controlling a switching resistor–inductor–capacitor circuit in MATLAB. Overall, a new configuration for ADBT control of switched systems is established via a switched EID methodology.
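
The EID idea, estimating a lumped input-channel disturbance from the observer error and cancelling it at the plant input, can be shown on a toy non-switched system. In the sketch below, the plant, gains, and filter constant are arbitrary assumptions, and the paper's switched estimator and bumpless transfer logic are not reproduced.

```python
# Toy discrete-time illustration of equivalent-input-disturbance (EID) estimation:
# the observer is driven by the nominal control u_f, the EID estimate is built
# from the observer innovation, low-pass filtered, and subtracted at the input.
# Plant, gains, and filter constant are arbitrary assumptions.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.95]])   # discretized toy plant
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
K = np.array([[8.0, 4.0]])                # stabilizing state-feedback gain
L = np.array([[0.6], [1.2]])              # stabilizing observer gain
B_pinv = np.linalg.pinv(B)
alpha = 0.2                               # low-pass weight on the EID estimate

x = np.array([[1.0], [0.0]])              # plant state
x_hat = np.zeros((2, 1))                  # observer state
d_hat = 0.0                               # filtered EID estimate

for k in range(400):
    d = 0.5 if k >= 100 else 0.0          # step disturbance at the plant input
    u_f = float(-K @ x_hat)               # nominal feedback control
    u = u_f - d_hat                       # compensated input actually applied
    innov = C @ x - C @ x_hat             # output estimation error
    d_tilde = float(B_pinv @ (L @ innov)) + d_hat   # raw EID estimate
    d_hat = (1 - alpha) * d_hat + alpha * d_tilde   # low-pass filtering
    x_hat = A @ x_hat + B * u_f + L @ innov         # observer sees u_f only
    x = A @ x + B * (u + d)

print(f"EID estimate after settling: {d_hat:.3f} (true disturbance 0.5)")
```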

12 pages, 955 KiB  
Article
Time-Series Prediction of Electricity Load for Charging Piles in a Region of China Based on Broad Learning System
by Liansong Yu and Xiaohu Ge
Mathematics 2024, 12(13), 2147; https://doi.org/10.3390/math12132147 - 8 Jul 2024
Cited by 1 | Viewed by 1265
Abstract
This paper introduces a novel electricity load time-series prediction model based on a broad learning system to tackle the low prediction accuracy caused by the unpredictable nature of electricity load sequences in a specific region of China. First, a correlation analysis based on mutual information is used to identify the key factors affecting the electricity load. Second, variational mode decomposition is employed to decompose the load series into different modes, and a broad learning system is used to build a prediction model for each mode. Finally, particle swarm optimization is used to fuse the prediction models of the different modes. Simulation experiments using real data validate the efficiency of the proposed method, demonstrating that it offers higher accuracy than advanced modeling techniques and can assist in optimal electricity-load scheduling decisions. The proposed model achieves an R² of 0.9831, a PRMSE of 21.8502, a PMAE of 17.0097, and a PMAPE of 2.6468.
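
A broad learning system itself is compact: random feature nodes, nonlinear enhancement nodes, and an output layer solved in closed form by ridge regression. The sketch below shows this core; the node counts and synthetic load series are assumptions, and the paper's VMD and PSO fusion stages are omitted.

```python
# Minimal broad learning system (BLS) regressor: random feature nodes, nonlinear
# enhancement nodes, and a ridge-regression output layer solved in closed form.
import numpy as np

rng = np.random.default_rng(0)

def fit_bls(X, y, n_feature=40, n_enhance=80, lam=1e-3):
    Wf = rng.standard_normal((X.shape[1], n_feature))      # random feature mapping
    Z = X @ Wf                                             # feature nodes
    We = rng.standard_normal((n_feature, n_enhance))
    H = np.tanh(Z @ We)                                    # enhancement nodes
    A = np.hstack([Z, H])                                  # broad layer
    # Ridge solution for the output weights: (A^T A + lam I)^-1 A^T y
    W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    return Wf, We, W

def predict_bls(X, Wf, We, W):
    Z = X @ Wf
    return np.hstack([Z, np.tanh(Z @ We)]) @ W

# Toy "load" series: predict the next value from the previous 24 samples.
t = np.arange(2000)
load = np.sin(2 * np.pi * t / 96) + 0.1 * rng.standard_normal(t.size)
X = np.stack([load[i:i + 24] for i in range(len(load) - 24)])
y = load[24:]
params = fit_bls(X[:1500], y[:1500])
rmse = np.sqrt(np.mean((predict_bls(X[1500:], *params) - y[1500:]) ** 2))
print(f"test RMSE: {rmse:.3f}")
```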

13 pages, 1279 KiB  
Article
Fault Distance Measurement in Distribution Networks Based on Markov Transition Field and Darknet-19
by Haozhi Wang, Wei Guo and Yuntao Shi
Mathematics 2024, 12(11), 1665; https://doi.org/10.3390/math12111665 - 27 May 2024
Cited by 3 | Viewed by 1030
Abstract
The modern distribution network is becoming increasingly complex and diverse, and traditional fault location methods have difficulty quickly and accurately locating a single-phase ground fault after it occurs. This study therefore proposes a new solution based on the Markov transition field and deep learning, which can accurately predict the location of a single-phase ground fault in the distribution network. First, a new phase-mode transformation matrix is used to extract the mode-1 component of the distribution network’s fault current, avoiding complex calculations in the complex domain; then, the extracted mode-1 current component is transformed into a Markov transition field and converted into an image using pseudo-color coding, thereby fully exploiting the fault signal characteristics; finally, the Darknet-19 network is used to automatically extract fault features and predict the fault distance. Simulations on existing models, together with training and testing on a large amount of data, show that this method has good stability, high accuracy, and strong anti-interference ability, and that it can effectively predict the distance of ground faults in distribution networks.
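
The Markov transition field encoding that turns the extracted current component into an image follows a standard construction: quantize the signal into bins, estimate a first-order Markov transition matrix between bins, and spread the transition probabilities over all pairs of time indices. The bin count and toy signal below are assumptions.

```python
# Standard Markov transition field (MTF): encode a 1D signal as a 2D image.
import numpy as np

def mtf(signal: np.ndarray, n_bins: int = 8) -> np.ndarray:
    # Quantile binning assigns each sample a state 0..n_bins-1.
    edges = np.quantile(signal, np.linspace(0, 1, n_bins + 1)[1:-1])
    states = np.digitize(signal, edges)
    # First-order Markov transition matrix from consecutive samples.
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(states[:-1], states[1:]):
        W[a, b] += 1
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1)   # row-normalize
    # Spread transition probabilities over all time-index pairs.
    return W[np.ix_(states, states)]

t = np.linspace(0, 0.1, 256)
fault_current = np.sin(2 * np.pi * 50 * t) * np.exp(-20 * t)  # toy current signal
image = mtf(fault_current)                                    # 256x256 input image
print(image.shape)
```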

Review

22 pages, 841 KiB  
Review
Review on System Identification, Control, and Optimization Based on Artificial Intelligence
by Pan Yu, Hui Wan, Bozhi Zhang, Qiang Wu, Bohao Zhao, Chen Xu and Shangbin Yang
Mathematics 2025, 13(6), 952; https://doi.org/10.3390/math13060952 - 13 Mar 2025
Viewed by 1355
Abstract
Control engineering plays an indispensable role in enhancing safety, improving comfort, and reducing fuel consumption and emissions across various industries, with system identification, control, and optimization as its primary topics. Meanwhile, artificial intelligence (AI) is a leading, multi-disciplinary technology that seeks to incorporate human learning and reasoning into machines and systems. AI exploits data to improve accuracy, efficiency, and intelligence, which is beneficial especially in complex and challenging cases. The rapid progress of AI is driving major changes in control engineering and is helping advance the next generation of system identification, control, and optimization methods. In this study, we review the developments, key technologies, and recent advancements of AI-based system identification, control, and optimization methods, and we present potential future research directions.
